Archive for the ‘Inspiration’ Category

Visual Type FIRE

Added another theme to my list of visual type experiments: fire, created using Maya fluids. If you haven’t seen the previous post with my other experiments, check them out here.


Visual Typography

It’s been a while since my last post. Life happens. It’s been a busy year. Sorry for the extended hiatus, and I promise to come around more often!

Recently, in the few chances I get for experimentation, I’ve been challenging myself to create short, five-second visual typography clips. A colleague of mine mentioned a Reddit game of identifying movies from a single word uttered in them (without uttering the title, of course).

I tried to do something similar: creating short clips designed around a single word and completed within an hour or two. Here are four examples of what I’ve done so far.


LIDAR: 3D Scanning Technology

A recent post on FXGuide about Pointcloud9, a European company that provides high-quality 3D scanning services to the film industry, has me fascinated with how this technology is being used today. LIDAR is basically the process of using a laser to capture the 3D information of an object or environment, similar to the way desktop 3D scanners operate, and it ties in perfectly with my previous Recreality post about the future of cinema. These laser-based range-finding cameras are ultra accurate. No Kinect hack here. Perhaps this is the way it will be done in the future? Incidentally, this technology was used four years ago for Radiohead’s House of Cards music video.



Recreality: The Future of Cinema?

I just received my Lytro and have been having a blast taking photos with it. It’s absolutely brilliant. If you’re unfamiliar with this camera, it allows you to focus a picture after it has already been taken. The move from in-camera to in-post is certainly being led by technology and economics. My Pixel Cloud plugin is also trying to bring more of those capabilities into After Effects. Just look at the Microsoft Kinect and how innovative pioneers are using its ranging features to recreate environments from recorded point clouds. And Samsung has developed a sensor that records not only RGB but depth pixels as well! These are amazing innovations for effects artists.

Imagine a future where we can record whole rooms as animated environments. Think 3D scanners that scan entire rooms at once, at 24fps. We could completely eliminate conventional camera motion control and create everything in-post. We could change the lighting setups and create digital camera rigs after the video has already been shot; not as CG, but as recorded pixels in 3D space: an accurate representation of reality that we can manipulate as we choose. This opens up possibilities for interactive storytelling, not to mention subjective 3D stereography. I could imagine a dozen more uses.

This isn’t just virtual reality but recreated reality, “Recreality.”


On the YouTubes!

Yeah! After some soul searching, I’ve moved all the blog videos onto YouTube! Our YouTube channel is at TheBlurrypixel. Of course, all the videos are still embedded in the posts, so there’s no need to go directly to the channel, but I will be posting some bonus videos there that may not appear on the blog. So show us your support and subscribe!



Tutorial: Learning at the Playground

At school there were always teachers who, in all honesty, were experts in their field simply because they knew 10% more about it than the students they were teaching. But if I took the chance to look beyond my own “pretentiousness,” I would find there was always something invaluable to learn from them. It may not have been what I was expecting, and it may have been completely different from what I was studying, but it was always beneficial and always unexpected. And that’s what learning is about, right?

So here is my first tutorial on Blurrypixel, using an often-overlooked particle effect in After Effects: Particle Playground. It takes a bit of effort to learn compared to the plethora of turnkey particle generators out there, but I think it’s a good start for a tutorial since it encapsulates so much of digital compositing. So even though you might be saying, “I can make this effect happen in Particular in 5 minutes,” keep in mind there’s always another 90% out there.

It’s a video tutorial, and I hope you find it invaluable. Please let me know how I did, and if you have any tips or corrections, please leave a note. I want to get better! So watch my first tutorial and let’s Learn at the Playground!


What is Relighting?

What is Relighting for After Effects?

For After Effects and compositing in general, relighting is the process of changing the perceived shading of an already rendered 3D image. This includes the diffuse shading and specularity of the 3D image. One could drastically change the perceived position of the sun or whether an object is shiny or not. In other compositing packages this may also include cast shadows.

Relighting is actually a commonplace process in many node-based compositing packages like Nuke. Until recently, this was not possible within After Effects. The fact that After Effects had a 2GB memory limitation as well as limited support for 32bit footage certainly limited its capability to relight a shot.

32bit images can take up a lot of memory and processing power. Take your heaviest comp and turn on 32bit: it may eat all your memory, and most effects aren’t 32bit-ready, making your output simply 8bit images with a decimal point on the end. Imagine relighting a 4K video in CS4. You couldn’t, and the process would still be very sluggish in CS5, but the possibility is there. And that’s what Pixel Cloud will do: extend that potential and make that opportunity available.

The truth is, After Effects was never meant for this type of compositing. Node-based solutions filled a market that AE simply could not. But CS5 and above are changing all that. 64bit support and no memory limitation mean that After Effects is just beginning to compete in the market of film-quality compositing and special effects. After Effects and Premiere can help open up a world that is filling with digital filmmakers. Technology just continues to become more powerful and even cheaper.

After Effects still has a ways to go before it becomes a compositing standard. It could still be faster. It should have stronger tools for customizing workflow, or at least more education on its scripting interface. And why not create a more node-like interface for the flowchart for the many users who come from a 3D background? But with its price, regular production cycle, pages of tutorial information, and legions of users, CS5 is already the de facto standard.

Passes

To relight a scene, you basically need three pieces of information from the 3D package: the position of each surface point, the direction that surface is pointing, and the camera that was used to render the scene in the first place (its position, direction, and focal length). The light information is provided by the compositing program.

The position information is rendered into a position pass. Some people call it a “P” pass, but I prefer position pass. The surface direction is provided by the normal pass. The position pass takes the x,y,z coordinates of the points being rendered and saves them in a 32bit image as r,g,b color information. The normal pass takes the vector coordinates of the normals of those points and saves them the same way. The images have to be 32bit because these passes hold floating point values that can be greater than 255 and can even be negative. These values also have to be in the same coordinate system; in other words, if the position pass is in camera coordinates, then the normal pass must also be in camera coordinates.
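
To make this concrete, here’s a minimal sketch in Python with NumPy of what those two passes contain. This is purely illustrative: the 4×4 resolution and the toy flat “wall” are my own assumptions, not the output of any particular renderer.

```python
import numpy as np

# Hypothetical 4x4 "render". Each pixel of the position pass stores the
# camera-space XYZ of the surface point it sees; each pixel of the normal
# pass stores that surface's unit normal. XYZ maps directly onto RGB.
# The values can be negative or far outside 0-255, which is why the
# passes must be saved as 32bit floating point images.
h, w = 4, 4
position_pass = np.zeros((h, w, 3), dtype=np.float32)
normal_pass = np.zeros((h, w, 3), dtype=np.float32)

# A toy flat wall 5 units in front of the camera, facing it.
for y in range(h):
    for x in range(w):
        position_pass[y, x] = (x - w / 2, h / 2 - y, -5.0)  # XYZ -> RGB
        normal_pass[y, x] = (0.0, 0.0, 1.0)                 # normal -> RGB
```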

Given these 2 passes and the camera, the compositing program can then use a bit of trigonometry and some fancy mathematics to simulate what the lighting would be at those corresponding screen coordinates with any given light.
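
As a rough sketch of that math, the diffuse part boils down to Lambert’s cosine law, max(0, N·L), evaluated per pixel using the two passes. Again, this only illustrates the principle, not how any particular compositing package implements it; the light position and color stand in for whatever the artist would set in the compositor.

```python
import numpy as np

def relight_diffuse(position_pass, normal_pass, light_pos, light_color):
    """Per-pixel diffuse shading from a position pass and a normal pass.

    Both passes are (H, W, 3) float arrays in the SAME coordinate system;
    light_pos is an XYZ point and light_color an RGB triple.
    """
    to_light = light_pos - position_pass                    # surface -> light
    dist = np.linalg.norm(to_light, axis=-1, keepdims=True)
    to_light = to_light / np.maximum(dist, 1e-6)            # normalize
    # Lambert's cosine law: shading = max(0, N . L)
    n_dot_l = np.sum(normal_pass * to_light, axis=-1, keepdims=True)
    return np.clip(n_dot_l, 0.0, None) * light_color        # (H, W, 3) image

# For example, with the toy passes above:
# shaded = relight_diffuse(position_pass, normal_pass,
#                          np.float32([3, 2, 0]), np.float32([1.0, 0.9, 0.8]))
```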

Why would you do all this when you could simply go back into your 3D software and re-render the scene? Because that could mean hours of rendering. Compositing is generally much faster than re-rendering, so if you can fix it during compositing, why not try that? By rendering a few extra passes up front, you could save yourself tons of time instead of wasting hours making sure the sun’s highlights are on the right side or your character’s nose is shiny enough.


World in Miniature


This recent video by Darwinfish105 (who is well known for an exquisite Gundam video that went viral last year) captures the Tokyo Art Center and its surrounding landscape. I love authentic tilt-shift photography, and this video is a breathtaking view of the Tokyo landscape. As creatives, we have to constantly view the world through a different lens than the norm to inspire ourselves. We may find that the world is not as big as it might seem…
