The Duplicate with Connections script has been updated and can be downloaded at AEScripts + AEplugins. This update adds an option to use the property index instead of the property name. This can be useful in cases where multiple properties have the same name, which can confuse the expressions engine. I understand this could be helpful for those of you using expressions in Element.
In other news, I have to apologize for the lack of communication for a while. It’s been a busy few months at work and with a new baby at home! But I’ve got a video tutorial in the works, an update for the Pixel Cloud plugin, and a brand new plugin on the horizon!
More and more, I am using Pixel Cloud as a visualization tool. Although I may not need to relight a 3D render, I am often compositing it with another pass or a background. To make sure the composite works, it helps to visualize how it may look within a 3D space, and Pixel Cloud can help with that. Last night, I was experimenting with outputting a PPass and a Normal Pass as a UV texture from within Maya. Using this technique, you can create a point cloud not just from the view of the camera, but from all textured points on the model. Maya’s Batch Bake function allows you to do this, with its support for baking 32-bit texture maps. Although you cannot create image sequences this way, you can create a working reference of the CG model from all angles. This reference could be useful when visualizing a composite.
The application of this technique is quite simple. Create and texture your position pass as normal and use Batch Bake to create 32-bit floating point TIFFs for each pass. You need to make sure that the UVs are completely unfolded and not flipped, otherwise the command may not work. Also set your options to output 32-bit. Import into After Effects and use these passes with Pixel Cloud as you normally would.
Creating the PPass texture in Maya is the same as usual: connect the samplerInfo node’s pointWorld to the outColor of a surface shader. Creating the Normal Pass is a little different. Connect the samplerInfo’s normalCamera to a vector product node, then connect the rendering camera’s worldMatrix to the vector product node. Set the vector product node’s operation to Vector Matrix Product and connect its output to the outColor of the surface shader. You can then use Batch Bake to create the texture maps/passes. These were my settings.
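If you’d rather script these connections than wire them up by hand in the Hypershade, here is a minimal maya.cmds sketch of the same network. It’s just a sketch: node names are placeholders, and “renderCamShape” stands in for whichever camera shape you actually render from. Batch Bake itself is still run from its options window afterwards.

```python
import maya.cmds as cmds

# PPass: samplerInfo.pointWorld -> surfaceShader.outColor
info = cmds.shadingNode('samplerInfo', asUtility=True)
ppass_shader = cmds.shadingNode('surfaceShader', asShader=True, name='pPass_SS')
cmds.connectAttr(info + '.pointWorld', ppass_shader + '.outColor')

# Normal Pass: samplerInfo.normalCamera -> vectorProduct (Vector Matrix Product),
# with the render camera's worldMatrix driving the matrix input.
vp = cmds.shadingNode('vectorProduct', asUtility=True)
cmds.setAttr(vp + '.operation', 3)  # 3 = Vector Matrix Product
cmds.connectAttr(info + '.normalCamera', vp + '.input1')
cmds.connectAttr('renderCamShape.worldMatrix[0]', vp + '.matrix')  # placeholder camera name
npass_shader = cmds.shadingNode('surfaceShader', asShader=True, name='nPass_SS')
cmds.connectAttr(vp + '.output', npass_shader + '.outColor')

# Assign each surface shader to the model in turn, then run Batch Bake
# (with 32-bit output enabled in its options) to bake the UV textures.
```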
Hey guys, this tutorial is a brief overview of how to use Maya to quickly create a position pass and normal pass for use with the Pixel Cloud plugin for After Effects. The process is pretty much the same as in my previous tutorials, but this one is dedicated to exporting the passes from Maya and the adjustments necessary after importing them into After Effects.
On a side note, I’ve started setting up a forum for any questions regarding Pixel Cloud, scripts, and even just general graphics talk! If you have any technical questions, please feel free to post here, and customers can also email me at firstname.lastname@example.org; I’ll get back to you! The forum is still in the beta stages, but feel free to start using it!
The Pixel Cloud plugin for After Effects is a powerful compositing tool that allows you to relight a 3D-generated image, make 3D-aware selections, or displace pixels in 3D space. Combine a Position Pass and a Normal Pass with the power of After Effects’ 3D lights and cameras to change the lighting of your composited 3D graphics. This native plugin for After Effects can use the coordinate information from a Position Pass or depth map to generate a Pixel Cloud in 3D space, which can be viewed from all angles using AE’s own cameras. With a Normal Pass, the Pixel Cloud can be relit using After Effects’ own lights or using an image as an Image Based Light. There are a number of uses, from 3D compositing to motion graphics! Find it at AEScripts.com!
Relighting with 32-bit passes
Use AE Lights and Cameras
Image Based Lighting
Alpha Lights for matte generation
Support for falloff in CS5.5 and above
Pixel Cloud generation with 8bpc to 32bpc
Lo-res Preview modes
Generating the position pass can be done in various 3D software packages. In Cinema 4D you may use the PointPosition C4D from AEScripts.com. There is also a tutorial for doing this here: http://youtu.be/yfoT7bxbBwo
For Maya, you may use the Point World output of a samplerInfo node connected to a surface shader and render an EXR using mental ray and the 32-bit framebuffer (a minimal setup is sketched after these notes). There are also a variety of tutorials available.
For 3DS Max, you may add the XYZ Generator shader to the surface slot of a mental ray material. Set its Coordinate System to 3 and render to an EXR using the floating point framebuffer.
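As a rough illustration of the Maya route mentioned above, here is a maya.cmds sketch that builds the position-pass shader and assigns it to the current selection. Treat it as a starting point: the shader and set names are just examples, and the 32-bit float framebuffer still needs to be enabled in the mental ray Render Settings before rendering the EXR.

```python
import maya.cmds as cmds

# Build a world-position "PPass" shader: samplerInfo.pointWorld -> surfaceShader.outColor
info = cmds.shadingNode('samplerInfo', asUtility=True)
shader = cmds.shadingNode('surfaceShader', asShader=True, name='pPass_SS')  # example name
sg = cmds.sets(renderable=True, noSurfaceShader=True, empty=True, name='pPass_SG')
cmds.connectAttr(shader + '.outColor', sg + '.surfaceShader')
cmds.connectAttr(info + '.pointWorld', shader + '.outColor')

# Assign the shader to whatever geometry is currently selected, then render
# an EXR with mental ray's 32-bit float framebuffer enabled in Render Settings.
cmds.sets(cmds.ls(selection=True), edit=True, forceElement=sg)
```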
The Pixel Cloud plugin for After Effects has several nifty features, including the ability to relight a 3D rendered scene using separate passes. With this feature, you can drastically change the light source, direction of light, and mood of a scene, as well as the specularity and reflections in an image. Here is a demo/tutorial of how this can be achieved in After Effects with the Pixel Cloud plugin!
The upcoming Pixel Cloud plugin for After Effects can be used to relight a source image, affecting its diffuse and specular properties. Soon I’ll be posting a revealing demo of how easy it is to take a static image from a 3D package and use Pixel Cloud and After Effects’ own 3D lights to create a dynamic and believable composite with moving lights, shadows, and reflections. Here’s a preview of what the end product will look like!
One of the nifty features in the upcoming Pixel Cloud plugin for After Effects is the ability to use After Effects’ own 3D lights to “relight” the alpha of an image according to its depth/position information. This can be done using an additional position pass or height map pass. Here is a quick little demo of how this can be achieved!
A recent post on FXGuide regarding Pointcloud9, a European company that provides high quality 3D scanning services to the film industry, has me fascinated with how this technology is being used today. LIDAR is basically the process of using a laser to capture the 3D information of an object or environment, similar to the way desktop 3D scanners operate, and it ties in perfectly with my previous Recreality post about the future of cinema. These laser-based range-finding cameras are ultra accurate. No Kinect hack here. Perhaps this is the way it will be done in the future? Incidentally, this technology was used four years ago for Radiohead’s House of Cards music video.
Pixel Cloud has a robust set of relighting tools, covering diffuse, specularity, and reflections, as well as the ability to relight the “Alpha” of a 3D displaced image using After Effects’ own lights. In Pixel Cloud we can specify lights that only affect the alpha channel of an image. Since the image is a 3D displaced Pixel Cloud, this means we can easily separate parts of an image according to their depth information. This is a powerful tool, and a normal map isn’t a prerequisite, meaning you can relight non-CG images like photos and video. The video isn’t up yet, but as a taste, here is a preview of the upcoming demo/tutorial, which will show how we can use this tool to relight the alpha of Jack, our cute little dog from the previous demo!
I just received my Lytro and have been having a blast taking photos with it. It’s absolutely brilliant. If you’re unfamiliar with this camera, it allows you to focus the picture after it’s already been taken. The move from in-camera to in-post is certainly being led by technology and economics. My Pixel Cloud plugin is also trying to bring more of those capabilities into After Effects. Just look at the Microsoft Kinect and how innovative pioneers are using its ranging features to recreate environments from recorded point clouds. And Samsung has developed a sensor that records not only RGB but depth pixels as well! These are amazing innovations for effects artists.
Imagine a future where we can record whole rooms as animated environments. Think 3D scanners that scan entire rooms at once, at 24fps. We could completely eliminate conventional camera motion control and create everything in post. We could change lighting setups and create digital camera rigs after the video has already been shot; not as CG, but as recorded pixels in 3D space: an accurate representation of reality that we can manipulate to our choosing. This opens up possibilities for interactive storytelling, not to mention subjective 3D stereography. I could imagine a dozen more uses.
This isn’t just virtual reality but recreated reality, “Recreality.”