Paul Debevec on Illuminating Effects


Paul Debevec, executive producer of graphics research at the Institute for Creative Technologies in Marina del Rey, CA, came to the entertainment industry through his Ph.D. thesis in computer science at the University of California at Berkeley, on modeling and rendering architecture from photographs. “Campanile,” a short film that created virtual cinematography of real sets from still photos, was shown in the Electronic Theatre at SIGGRAPH 1997, and visual effects supervisor John Gaeta realized it would help solve a problem he was facing in creating the “bullet-time” shots for The Matrix. Since then, Debevec has continued to work on ways of combining computer graphics with computer vision.
How has your global illumination work been adopted and utilized by the entertainment industry?

So much of visual effects work involves trying to make the effects look “real.” An important requirement is that CG characters appear to be illuminated by the light of the environments they are actually in. The solution I developed involves capturing the real-world lighting as a panoramic image, which I was able to do accurately by leveraging high-dynamic-range photography techniques to capture the full range of light, from small, bright light sources all the way to large areas of bounce light from the ground and walls of the environment. We can then use one of several global illumination techniques to light objects with these images of captured illumination, and even simulate shadows correctly. Today, these techniques are often known as HDRI (High Dynamic Range Imaging) and IBL (Image-Based Lighting), and major 3D rendering packages that specifically support them now include LightWave 3D, Mental Ray, RenderMan, and Cinema 4D.
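Conceptually, image-based lighting treats every pixel of the captured panorama as a small light source. The sketch below is a minimal illustration of that idea, not Debevec's implementation: it assumes the environment is stored as a latitude-longitude HDR map of linear radiance values, and the function names, coordinate conventions, and uniform hemisphere sampling are all illustrative choices.

```python
import numpy as np

def sample_env(env, direction):
    """Look up radiance in a latitude-longitude (equirectangular) HDR map.
    env: H x W x 3 float array of linear radiance; direction: unit vector, y up."""
    x, y, z = direction
    u = (np.arctan2(x, -z) / (2 * np.pi) + 0.5) * env.shape[1]
    v = (np.arccos(np.clip(y, -1, 1)) / np.pi) * env.shape[0]
    return env[int(v) % env.shape[0], int(u) % env.shape[1]]

def diffuse_irradiance(env, normal, n_samples=256, rng=None):
    """Monte Carlo estimate of the diffuse irradiance at a surface point
    with the given normal, lit by the captured environment."""
    rng = rng or np.random.default_rng(0)
    total = np.zeros(3)
    for _ in range(n_samples):
        # Uniform sample on the sphere, flipped into the hemisphere above the normal.
        d = rng.normal(size=3)
        d /= np.linalg.norm(d)
        if np.dot(d, normal) < 0:
            d = -d
        total += sample_env(env, d) * np.dot(d, normal)
    # Uniform hemisphere sampling has pdf = 1/(2*pi), hence the 2*pi factor.
    return total * (2 * np.pi / n_samples)
```

A renderer would then scale this irradiance by the surface's diffuse albedo over pi to get outgoing radiance; production IBL systems add importance sampling and glossy terms on top of the same principle.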

What’s in development at ICT?
An interesting side project we just finished is Vistarama HD, a digital update of the Cinerama process of the 1950s and 1960s. ICT executive director Richard Lindheim asked me whether it would be possible to recreate Cinerama's three-camera, three-projector immersive film experience in our institute's virtual-reality theater, which uses three Christie digital projectors to display real-time computer graphics on a 150-degree curved screen. I realized that with modern HD video equipment and optics, we should be able to cover that field of view with a single camera rather than the unwieldy three-lens device used for Cinerama. Chris Tchou in my group and I then found that our lab's 3D-scanning and camera-calibration techniques could measure precisely how to split the HD image across the three projectors to recreate the scene's original field of view for the viewer. A real force behind making this project a reality has been Randal Kleiser, who wrote and directed the first Vistarama HD short film, about our institute, and is excited about other creative possibilities for the process.
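The measured calibration is exactly what makes the real split nontrivial, but under the simplifying assumption that pixel columns of a cylindrically projected frame map linearly to horizontal angle, the split reduces to cutting the frame into angular bands. The function below and its overlap parameter (for edge blending between adjacent projectors) are hypothetical, offered only to make the geometry concrete.

```python
def projector_columns(image_width, total_fov_deg=150.0, n_projectors=3, overlap_deg=2.0):
    """Split a cylindrically projected frame into per-projector column ranges.

    Assumes columns map linearly to horizontal angle, so each projector's
    slice is a contiguous band of pixels plus a small blending overlap.
    """
    px_per_deg = image_width / total_fov_deg
    span = total_fov_deg / n_projectors
    ranges = []
    for i in range(n_projectors):
        start = max(0.0, i * span - overlap_deg)
        end = min(total_fov_deg, (i + 1) * span + overlap_deg)
        ranges.append((int(start * px_per_deg), int(end * px_per_deg)))
    return ranges

# A 1920-wide HD frame across three projectors:
# [(0, 665), (614, 1305), (1254, 1920)]
print(projector_columns(1920))
```

In the actual theater, lens distortion and the curved screen make the column-to-angle mapping nonlinear, which is why it is measured with 3D scanning and camera calibration rather than computed this simply.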

What are you working on now that we can expect to have an impact on CG in the future?

In creating realistic computer graphics models, it is still very hard to digitize reflectance properties: how the different parts of costumes, props, and people's faces reflect light in diffuse, shiny, and translucent ways. I think we've made a useful inroad into this problem by shooting movies of objects as a neon tube light passes over them, and then analyzing the reflections of this light to determine which parts are matte, which are shiny, and how sharp the reflection from each point is. With this we've taken a number of objects, including a 19th-century daguerreotype and a 15th-century illuminated manuscript, and made computer models that glint and reflect just the way the originals do. My hope is that, as we extend these techniques to larger and more three-dimensional objects, we'll get closer to the digital-backlot concept, where filmmakers can have a library of entirely realistic furniture, vehicles, and period costumes readily available to build and dress any sort of virtual scene.
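A toy version of that per-pixel analysis: as the tube sweeps past, each pixel's brightness traces a curve over time, and the height and width of its peak hint at how shiny that point is; a matte surface produces a broad, low bump, while a mirror-like one produces a narrow, tall spike. The thresholds and classifier below are illustrative assumptions, not the published method, which fits actual reflectance-model parameters.

```python
import numpy as np

def classify_pixel(trace, shiny_ratio=4.0):
    """Analyze one pixel's brightness over time as a linear light sweeps by.

    trace: 1-D array of intensities, one per video frame.
    Returns (label, peak_frame, width), where width is the full width at
    half maximum of the reflection peak, in frames.
    """
    base = np.median(trace)                 # ambient brightness level
    peak = trace.max()
    peak_frame = int(trace.argmax())
    half = base + 0.5 * (peak - base)
    above = np.flatnonzero(trace >= half)
    width = int(above[-1] - above[0] + 1) if above.size else len(trace)
    # A tall, narrow peak suggests a sharp specular reflection.
    label = "shiny" if peak > shiny_ratio * base and width < len(trace) / 10 else "matte"
    return label, peak_frame, width
```

Run over every pixel of the video, this kind of analysis yields per-point maps of diffuse color, specular strength, and reflection sharpness that a renderer can use to reproduce the object's glints.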

The other problem we're hoping to help solve is compositing: making actors filmed on a green screen really blend into background plates or virtual sets. To do this we're using some of the same lighting-capture techniques we've used for synthetic objects, and we've built a prototype lighting stage out of a sphere of inward-pointing color LED light sources that can illuminate an actor with a very close approximation to lighting captured on location or derived from a virtual scene. We're now working on understanding the color-rendition issues in lighting actors with captured illumination, and on scaling up our prototype into a full-scale production stage. My hope is that this sort of system will give cinematographers more direct control over how scenes and actors are illuminated, and the choice of lighting anywhere between scientifically realistic and artistically interpretive, as their vision requires.
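One naive way to drive such a stage: give each LED the average radiance of the region of the captured environment it stands in for. The sketch below assumes a latitude-longitude HDR map and LED directions measured from the actor's position; the cone size, conventions, and function are illustrative, and it deliberately ignores the color-rendition problem mentioned above, which is the hard part.

```python
import numpy as np

def led_colors(env, led_dirs, cone_deg=12.0):
    """Set a sphere of RGB LEDs from a captured lighting environment.

    env: H x W x 3 latitude-longitude HDR map (linear radiance).
    led_dirs: N x 3 unit vectors from the actor's position toward each LED.
    Each LED gets the mean radiance of the map within its angular cone.
    """
    H, W, _ = env.shape
    # Direction corresponding to every pixel of the lat-long map (y up).
    v, u = np.meshgrid((np.arange(H) + 0.5) / H, (np.arange(W) + 0.5) / W, indexing="ij")
    theta, phi = v * np.pi, (u - 0.5) * 2 * np.pi
    dirs = np.stack([np.sin(theta) * np.sin(phi),
                     np.cos(theta),
                     -np.sin(theta) * np.cos(phi)], axis=-1)
    cos_cone = np.cos(np.radians(cone_deg))
    colors = np.zeros((len(led_dirs), 3))
    for i, d in enumerate(led_dirs):
        mask = dirs @ d >= cos_cone        # pixels within this LED's cone
        # Weight by sin(theta) so rows near the poles aren't overcounted.
        w = np.sin(theta)[mask]
        colors[i] = (env[mask] * w[:, None]).sum(0) / max(w.sum(), 1e-9)
    return colors
```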
— Interviewed by Debra Kaufman


Source: Film & Video


