What is Relighting?

What is Relighting for After Effects?

For After Effects, and for compositing in general, relighting is the process of changing the perceived shading of an already rendered 3D image: both its diffuse shading and its specularity. You could drastically change the perceived position of the sun, or whether an object looks shiny or not. In other compositing packages this may also include cast shadows.

Relighting is actually a commonplace process in many node-based compositing packages like Nuke. Until recently, it was not possible within After Effects: the 2GB memory limitation and the limited support for 32bit footage made relighting a shot impractical.

32bit images take up a lot of memory and processing power. Take your heaviest comp and switch it to 32bit: it may eat all of your memory, and most effects aren't 32bit ready, so their output is really just 8bit data with a decimal point on the end. Relighting 4K footage in CS4 simply couldn't be done, and while the process is still sluggish in CS5, the possibility is now there. That is what Pixel Cloud is meant to do: extend that potential and make that opportunity available.

The truth is, After Effects was never meant for this type of compositing; node-based solutions filled a market that AE simply could not. But CS5 and above are changing all that. A 64bit application with no memory limitation means that After Effects is just beginning to compete in the market of film-quality compositing and special effects. After Effects and Premiere can help open up that world to the growing ranks of digital filmmakers, and the technology only keeps getting more powerful and cheaper.

After Effects still has a way to go before it becomes a compositing standard. It could be faster. It should have stronger tools for customizing workflow, or at least more education on the scripting interface. And why not give the flowchart a more node-like interface for the many users who come from a 3D background? But for the price, with a regular release cycle, pages of tutorials, and legions of users, CS5 is already the de facto standard.

Passes

To relight a scene, you basically need three pieces of information from the 3D package: the position of each surface point, the direction that surface is pointing, and the camera that was used to render the scene in the first place, meaning its position, direction, and focal length. The light information is provided by the compositing program.
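As a rough sketch of those inputs, and purely for illustration (the names below are hypothetical, not any plugin's actual API), they could be described in Python like this:

```python
from dataclasses import dataclass

@dataclass
class RenderCamera:
    # The camera that rendered the scene, exported from the 3D package.
    position: tuple       # world-space (x, y, z)
    direction: tuple      # normalized viewing direction
    focal_length: float   # must match the 3D render camera

@dataclass
class CompLight:
    # The light is created in the compositing program, not the 3D package.
    position: tuple       # (x, y, z) in the same coordinate system as the passes
    color: tuple          # (r, g, b) intensity
```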

The position information is rendered into a position pass. Some people call it a "P" pass, but I prefer position pass. The surface direction is provided by the normal pass. The position pass takes the x, y, z coordinates of the points being rendered and saves them in a 32bit image as r, g, b color information. The normal pass takes the normal vectors of those points and saves them the same way. The images have to be 32bit because floating point images can hold values well beyond the usual 0–255 range, including negative values. The two passes also have to use the same coordinate system: if the position pass is in camera coordinates, then the normal pass must be in camera coordinates as well.
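To make that concrete, here is a small sketch, assuming the passes have already been loaded as 32bit float arrays (for example from EXR files), of what a position pass and a normal pass actually contain:

```python
import numpy as np

# Hypothetical stand-ins for loaded passes: HxWx3 float32 arrays whose
# r, g, b channels hold x, y, z values for each rendered pixel.
position_pass = np.random.uniform(-500.0, 500.0, (1080, 1920, 3)).astype(np.float32)
normal_pass = np.random.uniform(-1.0, 1.0, (1080, 1920, 3)).astype(np.float32)

# Unlike 8bit images, these values are unclamped floats: they can be far
# greater than 255 and can be negative, which is why the passes must be 32bit.
print(position_pass.min(), position_pass.max())

# Normals are typically re-normalized to unit length before any lighting math.
normal_pass /= np.linalg.norm(normal_pass, axis=-1, keepdims=True) + 1e-8
```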

Given these two passes and the camera, the compositing program can then use a bit of trigonometry and some fancy mathematics to simulate, for any given light, what the lighting would be at those corresponding screen coordinates.
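To show what that math can look like, here is a minimal sketch of my own (not Pixel Cloud's actual implementation): Lambertian diffuse plus Blinn-Phong specular shading computed per pixel from a position pass and a normal pass, with the passes, the light, and the camera all assumed to share the same coordinate system.

```python
import numpy as np

def relight(position, normal, light_pos, camera_pos, shininess=32.0):
    """Return per-pixel diffuse and specular terms for a single point light.

    position, normal: HxWx3 float32 passes in the same coordinate system.
    light_pos, camera_pos: (x, y, z) in that same coordinate system.
    """
    # Direction from each surface point toward the light.
    to_light = np.asarray(light_pos, dtype=np.float32) - position
    to_light /= np.linalg.norm(to_light, axis=-1, keepdims=True) + 1e-8

    # Direction from each surface point toward the camera.
    to_cam = np.asarray(camera_pos, dtype=np.float32) - position
    to_cam /= np.linalg.norm(to_cam, axis=-1, keepdims=True) + 1e-8

    # Lambertian diffuse: cosine of the angle between the normal and the light.
    diffuse = np.clip(np.sum(normal * to_light, axis=-1), 0.0, None)

    # Blinn-Phong specular: half-vector between light and view directions.
    half = to_light + to_cam
    half /= np.linalg.norm(half, axis=-1, keepdims=True) + 1e-8
    specular = np.clip(np.sum(normal * half, axis=-1), 0.0, None) ** shininess

    return diffuse, specular
```

The diffuse term would then be multiplied over the beauty render (or an albedo pass) and the specular term added on top, which is exactly the kind of adjustment you would otherwise have to re-render for.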

Why would you do all this when you could simply go back into your 3D software and re-render the scene? Because re-rendering could mean hours of waiting, and compositing is generally much faster. If you can fix it during compositing, why not try that first? By rendering a few extra passes during the rendering stage, you can save yourself a ton of time instead of wasting hours making sure the sun's highlights are on the right side or your character's nose is shiny enough.
