Double Feature: Prototyping the Future – Talk with Ken Perlin / Guest Talk with Paul Debevec

We are very happy to announce that we have also been able to get Paul Debevec for a short talk on Monday. After Ken's talk on Prototyping the Future, Paul will give a short introduction to:
Light Fields for Virtual Reality and Scanning the President of the United States
Everybody is welcome to attend. The talks start at 6 p.m. and will finish around 8 p.m. There will also be time for networking after the official event.
– – – – – – – – – – – – – – – – – – –
We are proud to announce that Ken Perlin is going to give a talk in Munich. The talk takes place on Monday, 11th of May, at 6 p.m. at the Ludwig Maximilian University – Main Building. Please register for the event here. Registration for the talk, as well as public and other exclusive events for our Munich ACM SIGGRAPH members, will be published in our event area – stay tuned…

About „Prototyping the Future“

The question our lab at NYU is asking is „How might people in the future communicate with each other in everyday life, as computation and display technologies continue to develop, to the point where computer-mediated interfaces are so ubiquitous and intuitive as to be imperceptible?“ To address this, we are combining features of Augmented and Virtual Reality. Participants walk freely around in physical space, interacting with other people and physical objects, just as they do in everyday life. Yet everything those participants see and hear is computer-mediated, thereby allowing them to share any reality they wish.

A combination of wireless VR, motion capture and 3D audio synthesis simulates the experience of future high-resolution contact lens and spatial audio displays. To interact with this environment, we have created Chalktalk, a casual interface for sketching ideas in VR, as on a chalkboard, which also provides many paths for hand-drawn sketches to seamlessly morph into software-mediated diagrams, 3D objects, and simulation components, which can all then be used together. Sketches can be introduced in any order, without the requirement of a visible GUI. Participants in our environment make animated drawings in the space between them. Our vision is that in the future, people will freely combine verbal and visual description. Natural language itself will evolve to incorporate gestures for creating animated visuals. In the course of everyday conversation, people will literally draw their ideas in the air.

Ken Perlin – Bio

Ken Perlin is a professor in the Department of Computer Science at New York University. He directs the NYU Games For Learning Institute and is a participating faculty member in the NYU Media and Games Network (MAGNET). He was also founding director of the Media Research Laboratory and director of the NYU Center for Advanced Technology.

His research interests include graphics, animation, augmented and mixed reality, user interfaces, science education and multimedia. He received an Academy Award for Technical Achievement from the Academy of Motion Picture Arts and Sciences for his noise and turbulence procedural texturing techniques, which are widely used in feature films and television. He also received the 2008 ACM SIGGRAPH Computer Graphics Achievement Award, the TrapCode award for achievement in computer graphics research, the NYC Mayor’s award for excellence in Science and Technology, the Sokol award for outstanding Science faculty at NYU, and a Presidential Young Investigator Award from the National Science Foundation. Dr. Perlin currently serves on the program committee of the AAAS. He was general chair of the UIST 2010 conference, and has been a featured artist at the Whitney Museum of American Art.
Dr. Perlin received his Ph.D. in Computer Science from New York University, and a B.A. in theoretical mathematics from Harvard University. Before working at NYU he was Head of Software Development at R/GREENBERG Associates in New York, NY. Prior to that he was the System Architect for computer generated animation at Mathematical Applications Group, Inc. He serves on the Advisory Board for the Centre for Digital Media at GNWC, and has served on the Board of Directors of both the New York chapter of ACM/SIGGRAPH and the New York Software Industry Association.

– – – – – – – – – – – – – – – – – – –
About „Light Fields for VR“

Today’s VR headsets track the user’s head to produce low-latency viewpoint shifts in the virtual environment, allowing you to believe that you are really present in the scene. Unfortunately, live-action recordings of concerts, news events, and sports are recorded from fixed eye positions and cannot provide this crucial cue for presence. Even recording left-eye and right-eye panoramas for 3D stereoscopic VR fails to capture motion parallax, missing the most compelling aspect of virtual reality, which is otherwise easily achieved with game environment scenes. Light field photography may provide a solution by recording arrays of images from different perspectives in a way that allows a continuous range of viewpoints to be generated after recording, including viewpoints never before photographed. This presentation will introduce the concept of light field capture and rendering, provide a history of its applications in computer graphics, and discuss the opportunities and challenges in using light field capture and rendering to record and play back live-action content with the ability to move your head around and see the appropriate shifts in perspective.

Paul Debevec – Bio

Paul Debevec is a Research Professor in the University of Southern California’s Viterbi School of Engineering and the Chief Visual Officer at USC’s Institute for Creative Technologies, where he leads the Graphics Laboratory. Since his 1996 UC Berkeley Ph.D. thesis, Paul has helped develop data-driven techniques for photorealistic computer graphics, including image-based modeling and rendering, high dynamic range imaging, image-based lighting, appearance capture, and 3D displays. His short films, including The Campanile Movie, Rendering with Natural Light and Fiat Lux, provided early examples of the virtual cinematography and HDR lighting techniques seen in The Matrix trilogy, and have become standard practice in visual effects.
Debevec’s Light Stage systems for photoreal facial scanning have contributed to groundbreaking digital character work in movies such as Spider‐Man 2, Superman Returns, The Curious Case of Benjamin Button, Avatar, The Avengers, Oblivion, Gravity, and Maleficent and earned him and his colleagues a 2010 Scientific and Engineering Award from the Academy of Motion Picture Arts and Sciences (AMPAS). In 2014, he was profiled in The New Yorker magazine’s „Pixel Perfect: the scientist behind the digital cloning of actors“ article by Margaret Talbot. He also recently worked with the Smithsonian Institution to scan a 3D model of President Barack Obama.

Debevec is an IEEE Senior Member and Co-Chair of the Academy of Motion Picture Arts and Sciences (AMPAS) Science and Technology Council. He is also a member of the Visual Effects Society and ACM SIGGRAPH. He served on the Executive Committee and as Vice-President of ACM SIGGRAPH, chaired the SIGGRAPH 2007 Computer Animation Festival and co-chaired Pacific Graphics 2006 and the 2002 Eurographics Workshop on Rendering.
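As a small technical aside for readers curious about the light field idea described in the talk abstract above: the core notion (generating new viewpoints by interpolating between recorded camera positions) can be sketched in a few lines. This is a toy illustration, not Debevec's actual pipeline; the function name, the array layout, and the plain bilinear blending (which ignores the ray-space resampling and depth correction a real light field renderer performs) are all assumptions for the sake of the sketch.

```python
import numpy as np

def blend_views(light_field, u, v):
    """Synthesize an intermediate viewpoint from a grid of captured images.

    light_field: array of shape (U, V, H, W), grayscale images captured
    from a regular U x V grid of camera positions (a simplified light field).
    (u, v): fractional camera-grid coordinates of the desired viewpoint.

    Bilinearly blends the four nearest captured views. The blending weights
    illustrate the core idea: new viewpoints are interpolated from nearby
    recorded ones, including viewpoints never directly photographed.
    """
    u0, v0 = int(np.floor(u)), int(np.floor(v))
    u1 = min(u0 + 1, light_field.shape[0] - 1)
    v1 = min(v0 + 1, light_field.shape[1] - 1)
    fu, fv = u - u0, v - v0
    return ((1 - fu) * (1 - fv) * light_field[u0, v0]
            + fu * (1 - fv) * light_field[u1, v0]
            + (1 - fu) * fv * light_field[u0, v1]
            + fu * fv * light_field[u1, v1])

# Tiny example: a 2 x 2 grid of constant 4 x 4 "images".
lf = np.zeros((2, 2, 4, 4))
lf[0, 0] = 0.0; lf[1, 0] = 1.0; lf[0, 1] = 2.0; lf[1, 1] = 3.0
mid = blend_views(lf, 0.5, 0.5)   # halfway between all four cameras
print(mid[0, 0])                  # 1.5: equal blend of 0, 1, 2 and 3
```

Real systems blend per ray rather than per image and account for scene depth, which is what makes the continuous range of viewpoints look correct rather than ghosted; the weights themselves, though, work just as in this sketch.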