Maya Export Position and Normal Pass to Pixel Cloud Tutorial

Hey guys, this tutorial is a brief overview of how to use Maya to quickly create a position pass and a normal pass for use with the Pixel Cloud plugin for After Effects. The process is pretty much the same as in my previous tutorials, but this one is dedicated to exporting the passes from Maya and the adjustments necessary after importing them into After Effects.

On a side note, I’ve started setting up a forum for any questions regarding Pixel Cloud, scripts, and even just general graphics talk! If you have any technical questions, please feel free to post there, and customers can also email me at support@blurrypixel.com and I’ll get back to you! The forum is still in beta, but feel free to start using it!


Demo: Pixel Cloud Relight

The Pixel Cloud plugin effect for After Effects has several nifty features including the ability to relight a 3D rendered scene using separate passes. With this feature, you can drastically change the light source, direction of light and mood of a scene as well as the specularity and reflections in an image. Here is a demo/tutorial of how this can be achieved in After Effects with the Pixel Cloud plugin!


Recreality: The Future of Cinema?

I just received my Lytro and have been having a blast taking photos with it. It’s absolutely brilliant. If you’re unfamiliar with this camera, it lets you focus a picture after it’s already been taken. The move from in-camera to in-post is certainly being led by technology and economics. My Pixel Cloud plugin is also trying to bring more of those capabilities into After Effects. Just look at the Microsoft Kinect and how innovative pioneers are using its ranging features to recreate environments from recorded point clouds. And Samsung has developed a sensor that records not only RGB but depth pixels as well! These are amazing innovations for effects artists.

Imagine a future where we can record whole rooms as animated environments. Think 3D scanners that scan entire rooms in one pass, at 24fps. We could completely eliminate conventional camera motion control and create everything in-post. We could change lighting setups and build digital camera rigs after the video has already been shot. Not as CG, but as recorded pixels in 3D space: an accurate representation of reality that we can manipulate as we choose. This opens up possibilities for interactive storytelling, not to mention subjective 3D stereography. I could imagine a dozen more uses.

This isn’t just virtual reality but recreated reality, “Recreality.”


Pixel Cloud Demo: Animating a Photo

[CORRECTION] Although we use the term depth map, the correct term here is height map. The main difference is that a height map denotes distance from a flat surface, while a depth map denotes distance from the camera.
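To make the distinction concrete, here is a minimal sketch of the two measurements for a single point. The camera position, ground plane, and function names are my own hypothetical example, not anything from the plugin:

```python
import numpy as np

# Hypothetical setup: a camera floating at y = 2 and a flat ground
# plane at y = 0 (both assumptions for illustration only).
camera_pos = np.array([0.0, 2.0, 0.0])

def height_value(p, plane_y=0.0):
    """Height map value: distance of point p from a flat reference plane."""
    return p[1] - plane_y

def depth_value(p, camera=camera_pos):
    """Depth map value: distance of point p from the camera."""
    return np.linalg.norm(p - camera)

p = np.array([0.0, 1.0, 4.0])
print(height_value(p))  # 1.0 -- how far above the plane
print(depth_value(p))   # ~4.12 -- how far from the camera
```

The same point gives two different numbers, which is exactly why the two terms shouldn’t be used interchangeably.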

I’ve shown how we can use a CG render and a Position Pass in After Effects to animate and relight a 3D displaced Pixel Cloud. But not all 3D programs can produce a Position Pass and photographic sources obviously do not come with specialized passes. Pixel Cloud can still get around these limitations.

Although the plugin is not yet released, this demo/tutorial gives a quick look at how simple it is to use Pixel Cloud to create a realistic camera animation from a simple photograph. Pixel Cloud can use not only Position Passes but Depth Map passes as well, even within an 8-bpc project.

So keep an eye out for more updates on how close we are getting to release Pixel Cloud!


What is Relighting?

What is Relighting for After Effects?

For After Effects and compositing in general, relighting is the process of changing the perceived shading of an already rendered 3D image. This includes the diffuse shading and specularity of the 3D image. One could drastically change the perceived position of the sun or whether an object is shiny or not. In other compositing packages this may also include cast shadows.

Relighting is actually a commonplace process in many node-based compositing packages like Nuke. Until recently, it was not possible within After Effects. The fact that After Effects had a 2GB memory limitation, as well as limited support for 32-bit footage, certainly limited its capability to relight a shot.

32-bit images can take up a lot of memory and processing power. Take your heaviest comp and turn on 32-bit: it may eat all your memory, and most effects aren’t 32-bit ready, making your output essentially 8-bit images with a decimal point on the end. Imagine relighting a 4K video in CS4. You couldn’t, and the process is still very sluggish in CS5, but the possibility is there. And that’s what Pixel Cloud will do: extend that potential and make that opportunity available.

The truth is, After Effects was never meant for this type of compositing. Node-based solutions filled a market that AE simply could not. But CS5 and above are changing all that. 64-bit support and no memory limitation mean that After Effects is just beginning to compete in the market of film-quality compositing and special effects. After Effects and Premiere can help open up a world that is filling with digital filmmakers. Technology just continues to become more powerful and even cheaper.

After Effects still has a ways to go to become a compositing standard. It could be faster. It should have stronger tools for customizing workflow, or at least more education on its scripting interface. And why not give the flowchart a more node-like interface for the many users who come from a 3D background? But with its price, regular production cycle, pages of tutorials, and legions of users, CS5 already is the de facto standard.

Passes

To light a scene, you basically need three pieces of information from the 3D package: the position of each surface point, the direction that surface is facing, and the camera that was used to render the scene in the first place, meaning its position, direction, and focal length. The light information is provided by the compositing program.

The position information is rendered into a position pass. Some people call it a “P” pass, but I prefer position pass. The surface direction is provided by the normal pass. The position pass takes the x, y, z coordinates of the points being rendered and saves them in a 32-bit image as r, g, b color information. The normal pass takes the normal vectors of those points and saves them the same way. The images have to be 32-bit because these are floating-point values: they can be far greater than 255 and can even be negative. They also have to be in the same coordinate system. In other words, if the position pass is in camera coordinates, then the normal pass must also be in camera coordinates.
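As a rough illustration of what ends up in those files (the pixel values here are made up for the example, not Pixel Cloud’s internals), a position pass is just per-pixel x, y, z stored in float RGB channels:

```python
import numpy as np

# A tiny 2x2 "position pass": world-space x, y, z stored as r, g, b.
# The values sit well outside 0-255 and can be negative -- this is
# why the pass must be saved as a 32-bit float image, not 8-bit.
position_pass = np.array([
    [[-120.5, 300.0, 980.2], [45.1, 310.4, 975.8]],
    [[-118.9, 260.7, 990.0], [50.3, 255.2, 985.5]],
], dtype=np.float32)

# A matching 2x2 "normal pass": unit vectors stored the same way
# (R = nx, G = ny, B = nz), in the SAME coordinate system.
normal_pass = np.array([
    [[0.0, 1.0, 0.0], [0.0, 0.0, 1.0]],
    [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]],
], dtype=np.float32)

print(position_pass.shape)      # (2, 2, 3)
print(position_pass.min() < 0)  # True -- an 8-bit image could not hold this
```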

Given these two passes and the camera, the compositing program can then use a bit of trigonometry and some fancy mathematics to simulate what the lighting would be at each corresponding screen coordinate for any given light.
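At its simplest, that fancy mathematics is a dot product per pixel. Here is a sketch of just the diffuse (Lambert) part with a single point light, assuming both passes are in world space; the function and variable names are mine, not the plugin’s:

```python
import numpy as np

def relight_diffuse(position_pass, normal_pass, light_pos, light_color):
    """Per-pixel Lambert diffuse shading from position + normal passes.

    position_pass and normal_pass are (h, w, 3) float arrays and, as
    noted above, must share the same coordinate system.
    """
    # Direction from each surface point toward the light, normalized.
    to_light = light_pos - position_pass
    to_light /= np.linalg.norm(to_light, axis=-1, keepdims=True)

    # N dot L, clamped at zero: surfaces facing away receive no light.
    n_dot_l = np.clip(np.sum(normal_pass * to_light, axis=-1), 0.0, None)

    # Scale the light color by the diffuse term at every pixel.
    return n_dot_l[..., None] * np.asarray(light_color, dtype=np.float32)

# Tiny 1x1 "image": one point at the origin, normal pointing straight up.
P = np.zeros((1, 1, 3), dtype=np.float32)
N = np.array([[[0.0, 1.0, 0.0]]], dtype=np.float32)

lit = relight_diffuse(P, N, light_pos=np.array([0.0, 10.0, 0.0]),
                      light_color=[1.0, 1.0, 1.0])
print(lit)  # [[[1. 1. 1.]]] -- light directly overhead, full intensity
```

Move the light to the side and the same pixel goes dark, which is exactly the “drastically change the light direction” effect described above. A specular term works the same way, just folding the view direction from the camera into the dot products.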

Why do all this when you could simply go back into your 3D software and re-render the scene? Because that could mean hours of rendering, and compositing is generally much faster than re-rendering; if you can fix it during compositing, why not try that? By rendering a few extra passes during the rendering stage, you can save yourself tons of time instead of wasting hours making sure the sun’s highlights are on the right side or your character’s nose is shiny enough.
