Author archive: Jan Pflueger

SIGGRAPH (P)review – Impressions


Here are a few first impressions from our SIGGRAPH (P)review:


Once again, a brief thank-you to everyone involved – especially our speakers Nils Thuerey, Benjamin Keinert, and Michael Sänger; the many helpers who made sure everything ran smoothly; Andi Michl (three10) for the projection technology; Thomas „Huba“ Harbers (Geek’s home) for technical support… and, not least, our guests. We hope you enjoyed it and look forward to seeing you again! A follow-up with details on the talks and encore contributions will be published shortly as a separate post.

SIGGRAPH (P)review


After SIGGRAPH is before SIGGRAPH
Under this motto, the Munich ACM SIGGRAPH Chapter e.V. invites you to a joint SIGGRAPH (P)review on April 10, 2016, starting at 1:30 p.m.
The event takes place in the large hall of the FWU, Institut für Film und Bild in Wissenschaft und Unterricht, Bavariafilmplatz 3, 82031 Grünwald.

To get in the mood for SIGGRAPH 2016 in Anaheim, CA, we offer everyone interested the chance to catch another whiff of the past SIGGRAPH 2015. To kick things off, we will present selected contributions from all areas of SIGGRAPH 2015.

The event will be rounded out by a special talk that one of our members recently gave at SIGGRAPH Asia.
To wrap up, we will serve excerpts from the Computer Animation Festival (CAF).
In between, there will be plenty of opportunity to network with like-minded people.
Admission is free. Registration is required for entry – simply register on our website under Veranstaltungen > SIGGRAPH (P)review by April 7, 2016, or use the direct link.
You can find the event agenda here.
We look forward to an exciting day and to exchanging ideas with you!

Oliver Markowski & Jan Pflüger
Munich ACM SIGGRAPH Chapter e.V.

A Graphics Breakthrough Makes Perfect CGI Skin


Skin Microstructure acquisition
At this year’s SIGGRAPH in Los Angeles, among other highlights, a project by the USC Institute for Creative Technologies was presented that explores methods for realistically rendering human skin for use in CGI.

You can find the full article here.

Paul Debevec already gave a talk in Munich at our opening event.
Given his ties to the Munich ACM SIGGRAPH Chapter, we hope to welcome him back soon and, of course, to receive first-hand information about this project.

SIGGRAPH 2015 – Munich ACM SIGGRAPH Chapter in L.A.



The Munich ACM SIGGRAPH Chapter is not just attending the world’s most renowned conference on all things CG – we are especially proud that our founding member Michael Sänger is part of the organizing committee for the Demoscene Birds of a Feather sessions and, in addition to a panel discussion, will give a talk titled Limitations Push Creativity – Thinking and Working the Demoscene Way.
More information is available here.
For those who want to follow Michi directly on Twitter: twitter.com/abductee_org

We wish all attendees on site a great time and interesting impressions! Perhaps you will share your SIGGRAPH experiences at one of our next Stammtisch meetups?

If you didn’t see it, you missed something…


At the Geek’s home open doors event, visitors could for the first time get an overview of the treasures on hand – even if only a fraction of the collection could actually be put on display.
Amid the exhibits, people got to know each other while talking shop or swapping anecdotes about the equipment.
Geek’s home welcomes any kind of support toward not only presenting the collection to a wider audience but also putting it to practical use.
The Munich ACM SIGGRAPH Chapter wishes them every success!
Below are some impressions of the event:


Talk Ramesh Raskar – Extreme Computational Imaging: Photography, Health-tech and Displays


The Munich ACM SIGGRAPH Chapter is pleased to welcome Ramesh Raskar for a talk on June 23rd. Please register for the event here.

Talk Ramesh Raskar

The Camera Culture Group at the MIT Media Lab aims to create a new class of imaging platforms. This talk will discuss three tracks of research: femto photography, retinal imaging, and 3D displays.

Femto photography consists of femtosecond laser illumination, picosecond-accurate detectors, and mathematical reconstruction techniques that allow researchers to visualize the propagation of light. Directly recording reflected or scattered light at such a frame rate with sufficient brightness is nearly impossible. Using an indirect ‘stroboscopic’ method that records millions of repeated measurements by carefully scanning in time and viewpoints, we can rearrange the data to create a ‘movie’ of a nanosecond-long event. Femto photography and a new generation of nano-photography (using ToF cameras) allow powerful inference with computer vision in the presence of scattering.
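The stroboscopic idea can be illustrated with a toy simulation (a hypothetical sketch, not the actual Femto Photography pipeline): each laser pulse yields one noisy sample at a fixed time delay, and averaging many repeated pulses per delay recovers the full transient, which can then be played back as a movie.

```python
import numpy as np

rng = np.random.default_rng(0)

def transient(t):
    # Hypothetical "true" light transient: a short pulse arriving at t = 2.0 ns
    return np.exp(-((t - 2.0) ** 2) / (2 * 0.05 ** 2))

delays = np.linspace(0.0, 4.0, 200)   # scanned time delays (in ns)
pulses_per_delay = 1000               # repeated laser pulses at each delay

# Indirect "stroboscopic" acquisition: one noisy sample per pulse,
# repeated at each fixed delay, then averaged.
frames = np.empty_like(delays)
for i, d in enumerate(delays):
    samples = transient(d) + rng.normal(0.0, 0.5, pulses_per_delay)
    frames[i] = samples.mean()

# "frames" is now a movie of the nanosecond-long event; averaging
# suppresses the noise by a factor of sqrt(pulses_per_delay).
peak_delay = delays[np.argmax(frames)]
print(peak_delay)  # close to 2.0
```

The key point is that no single measurement is fast or bright enough on its own; the event is reconstructed from millions of carefully synchronized repetitions.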

EyeNetra is a mobile phone attachment that allows users to test their own eyesight. The device reveals corrective measures, thus bringing vision to billions of people who would not otherwise have had access. Another project, eyeMITRA, is a mobile retinal imaging solution that brings retinal exams into the realm of routine care by lowering the cost of the imaging device to a tenth of its current cost and integrating it with image analysis software and predictive analytics. This enables early detection of diabetic retinopathy, which can change the growth trajectory of the world’s leading cause of blindness.

Finally, the talk will describe novel light field cameras and light field displays that require a compressive optical architecture to deal with the high bandwidth requirements of 4D signals.
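To make the 4D light field idea concrete, here is a minimal sketch (illustrative only, not the group’s actual code) of the classic two-plane approach: views are recorded on a grid of camera positions, and a viewpoint that was never photographed is synthesized by bilinearly blending the four nearest recorded views.

```python
import numpy as np

# A toy light field: a 3x3 grid of camera images (two-plane parameterization).
# Each "image" is a tiny 4x4 grayscale array whose content varies with the
# camera position, standing in for real photographs.
grid = 3
h, w = 4, 4
cams = np.array([[np.full((h, w), 10.0 * u + v) for v in range(grid)]
                 for u in range(grid)])   # shape (3, 3, 4, 4)

def render(u, v):
    """Synthesize the view at continuous camera coordinates (u, v) by
    bilinear interpolation of the four nearest recorded viewpoints."""
    u0 = min(int(np.floor(u)), grid - 2)
    v0 = min(int(np.floor(v)), grid - 2)
    du, dv = u - u0, v - v0
    return ((1 - du) * (1 - dv) * cams[u0,     v0]
            +     du * (1 - dv) * cams[u0 + 1, v0]
            + (1 - du) *     dv * cams[u0,     v0 + 1]
            +     du *      dv * cams[u0 + 1, v0 + 1])

# A viewpoint halfway between four cameras, never photographed directly:
view = render(0.5, 0.5)
print(view[0, 0])  # 5.5: the blend of cameras (0,0), (1,0), (0,1), (1,1)
```

Real light field rendering adds depth-dependent reparameterization and much denser camera arrays, but the principle of interpolating a continuous range of viewpoints from discrete captures is the same.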

Biography

Ramesh Raskar is an Associate Professor at the MIT Media Lab, which he joined from Mitsubishi Electric Research Laboratories in 2008 as head of the Lab’s Camera Culture research group. His research interests span the fields of computational photography, inverse problems in imaging, and human-computer interaction. Recent projects and inventions include transient imaging to look around a corner, a next-generation CAT-scan machine, imperceptible markers for motion capture (Prakash), long-distance barcodes (Bokode), touch+hover 3D interaction displays (BiDi screen), low-cost eye care devices (Netra, Catra), new theoretical models to augment light fields (ALF) to represent wave phenomena, and algebraic rank constraints for 3D displays (HR3D).

In 2004, Raskar received the TR100 Award from Technology Review, which recognizes top young innovators under the age of 35, and in 2003, the Global Indus Technovator Award, instituted at MIT to recognize the top 20 Indian technology innovators worldwide. In 2009, he was awarded a Sloan Research Fellowship, and in 2010, the DARPA Young Faculty Award. Other awards include a Marr Prize honorable mention (2009), the LAUNCH Health Innovation Award presented by NASA, USAID, the US State Department, and Nike (2010), and first place in the Vodafone Wireless Innovation Project Award (2011). He holds over 50 US patents and has received four Mitsubishi Electric Invention Awards. He is currently co-authoring a book on computational photography. Links: [Personal webpage] – http://www.media.mit.edu/~raskar

Double Feature: Prototyping the Future – Talk with Ken Perlin / Guest Talk by Paul Debevec


Ken - Talk Prototyping the future

original image from http://worldbuilding.institute/images/made/u/people

Update

We are very happy to announce that we were also able to get Paul Debevec for a short talk on Monday. After Ken’s talk about Prototyping the Future, Paul will give a short introduction to:
Light Fields for Virtual Reality and Scanning the President of the United States
Everybody is welcome to attend. The talks start at 6 p.m. and will finish around 8 p.m. There will also be time for networking after the official event.
– – – – – – – – – – – – – – – – – – –
We are proud to announce that Ken Perlin is going to give a talk in Munich. The talk takes place on Monday, May 11th, at 6 p.m. at the Ludwig Maximilian University – Main Building. Please register for the event here. Registration for this talk, as well as for public and other exclusive events for our Munich ACM SIGGRAPH members, will be published in our event area – stay tuned…

About “Prototyping the Future”

The question our lab at NYU is asking is: “How might people in the future communicate with each other in everyday life, as computation and display technologies continue to develop, to the point where computer-mediated interfaces are so ubiquitous and intuitive as to be imperceptible?” To address this, we are combining features of augmented and virtual realities. Participants walk freely around in physical space, interacting with other people and physical objects, just as they do in everyday life. Yet everything those participants see and hear is computer-mediated, thereby allowing them to share any reality they wish.

A combination of wireless VR, motion capture, and 3D audio synthesis simulates the experience of future high-resolution contact lens and spatial audio displays. To interact with this environment, we have created Chalktalk, a casual interface for sketching ideas in VR, as on a chalkboard, while also containing many paths for hand-drawn sketches to seamlessly morph into software-mediated diagrams, 3D objects, and simulation components, which can all then be used together. Sketches can be introduced in any order, without the requirement of a visible GUI. Participants in our environment make animated drawings in the space between them. Our vision is that in the future, people will freely combine verbal and visual description. Natural language itself will evolve to incorporate gestures for creating animated visuals. In the course of everyday conversation, people will literally draw their ideas in the air.

Ken Perlin – Bio

Ken Perlin, a professor in the Department of Computer Science at New York University, directs the NYU Games for Learning Institute and is a participating faculty member in the NYU Media and Games Network (MAGNET). He was also the founding director of the Media Research Laboratory and director of the NYU Center for Advanced Technology.

His research interests include graphics, animation, augmented and mixed reality, user interfaces, science education, and multimedia. He received an Academy Award for Technical Achievement from the Academy of Motion Picture Arts and Sciences for his noise and turbulence procedural texturing techniques, which are widely used in feature films and television. His other honors include the 2008 ACM SIGGRAPH Computer Graphics Achievement Award, the TrapCode award for achievement in computer graphics research, the NYC Mayor’s Award for Excellence in Science and Technology, the Sokol Award for outstanding science faculty at NYU, and a Presidential Young Investigator Award from the National Science Foundation. Dr. Perlin currently serves on the program committee of the AAAS. He was general chair of the UIST 2010 conference and has been a featured artist at the Whitney Museum of American Art.
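As a side note, the gradient-noise idea at the heart of those texturing techniques can be sketched in a few lines. The 1D version below is a simplified illustration, not Perlin’s original 3D implementation:

```python
import numpy as np

rng = np.random.default_rng(42)
# A random gradient (slope) at each integer lattice point -- the core idea
# behind Perlin-style gradient noise, reduced here to one dimension.
gradients = rng.uniform(-1.0, 1.0, 256)

def fade(t):
    # Perlin's fade curve 6t^5 - 15t^4 + 10t^3: smooth blending with
    # zero first and second derivatives at the lattice points.
    return t * t * t * (t * (t * 6 - 15) + 10)

def noise1d(x):
    i = int(np.floor(x))
    t = x - i
    # Contribution of the two neighboring gradients, blended smoothly.
    g0 = gradients[i % 256] * t
    g1 = gradients[(i + 1) % 256] * (t - 1.0)
    return g0 + fade(t) * (g1 - g0)

# The noise is zero at every lattice point and varies smoothly in between:
samples = [noise1d(x / 10.0) for x in range(40)]
print(noise1d(3.0))  # 0.0 at integer coordinates
```

Summing several octaves of such noise at increasing frequencies yields the turbulence patterns used for procedural textures.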
Dr. Perlin received his Ph.D. in Computer Science from New York University, and a B.A. in theoretical mathematics from Harvard University. Before working at NYU he was Head of Software Development at R/GREENBERG Associates in New York, NY. Prior to that he was the System Architect for computer generated animation at Mathematical Applications Group, Inc. He serves on the Advisory Board for the Centre for Digital Media at GNWC, and has served on the Board of Directors of both the New York chapter of ACM/SIGGRAPH and the New York Software Industry Association.

– – – – – – – – – – – – – – – – – – –
About “Light Fields for VR”

Today’s VR headsets track the user’s head to produce low-latency viewpoint shifts in the virtual environment, allowing you to believe that you are really present in the scene. Unfortunately, live-action recordings of concerts, news events, and sports are recorded from fixed eye positions and cannot provide this crucial cue for presence. Even recording left-eye and right-eye panoramas for 3D stereoscopic VR fails to record motion parallax and misses the most compelling aspect of virtual reality, which is otherwise easily achieved with game environment scenes. Light field photography may provide a solution by recording arrays of images from different perspectives in a way that allows a continuous range of viewpoints to be generated after recording, including viewpoints never before photographed. This presentation will introduce the concept of light field capture and rendering, provide a history of its applications in computer graphics, and discuss the opportunities and challenges in using light field capture and rendering to record and play back live-action content with the ability to move your head around and see the appropriate shifts in perspective.

Paul Debevec – Bio

Paul Debevec is a Research Professor in the University of Southern California’s Viterbi School of Engineering and the Chief Visual Officer at USC’s Institute for Creative Technologies, where he leads the Graphics Laboratory. Since his 1996 UC Berkeley Ph.D. thesis, Paul has helped develop data-driven techniques for photorealistic computer graphics including image-based modeling and rendering, high dynamic range imaging, image-based lighting, appearance capture, and 3D displays. His short films, including The Campanile Movie, Rendering with Natural Light, and Fiat Lux, provided early examples of the virtual cinematography and HDR lighting techniques seen in The Matrix trilogy and have become standard practice in visual effects.
Debevec’s Light Stage systems for photoreal facial scanning have contributed to groundbreaking digital character work in movies such as Spider-Man 2, Superman Returns, The Curious Case of Benjamin Button, Avatar, The Avengers, Oblivion, Gravity, and Maleficent, and earned him and his colleagues a 2010 Scientific and Engineering Award from the Academy of Motion Picture Arts and Sciences (AMPAS). In 2014, he was profiled in The New Yorker magazine’s “Pixel Perfect: The Scientist Behind the Digital Cloning of Actors” article by Margaret Talbot. He also recently worked with the Smithsonian Institution to scan a 3D model of President Barack Obama.

Debevec is an IEEE Senior Member and Co-Chair of the Academy of Motion Picture Arts and Sciences (AMPAS) Science and Technology Council. He is also a member of the Visual Effects Society and ACM SIGGRAPH. He served on the Executive Committee and as Vice-President of ACM SIGGRAPH, chaired the SIGGRAPH 2007 Computer Animation Festival and co-chaired Pacific Graphics 2006 and the 2002 Eurographics Workshop on Rendering. http://www.pauldebevec.com/.