Lollipopland Dad Rig Test

Here’s a preview of the rig for the Dad character. I considered using HumanIK (HIK), but Maya’s HIK solution creates an excessively complicated network. Instead, I stuck with a custom rig with IK-FK switching, character sets and definitions.

With this rig, I can apply mocap and switch between IK controls, FK keys and mocap using a weighted value. At the same time, the network is not overly complicated: IK controls are limited to just what I need, and adding another control in the middle of production won’t mean recharacterizing or recreating keys.
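If you’re curious how a weighted switch like this hangs together, below is a minimal sketch using a pairBlend node in maya.cmds. This is not the actual rig network; the attribute, function and object names are all illustrative.

```python
# Minimal sketch: blend a joint's rotation between two sources
# (e.g. an FK control and a mocap-driven skeleton) with a single
# 0-1 weight attribute. All names here are placeholders.
import maya.cmds as cmds

def make_rotation_blend(fk_source, mocap_source, target_joint, switch_ctrl):
    """Drive target_joint's rotation from two sources via one weight."""
    # A custom 0-1 attribute on the switch control drives the blend.
    if not cmds.attributeQuery('mocapWeight', node=switch_ctrl, exists=True):
        cmds.addAttr(switch_ctrl, longName='mocapWeight',
                     attributeType='double', minValue=0, maxValue=1,
                     defaultValue=0, keyable=True)

    blend = cmds.createNode('pairBlend', name=target_joint + '_blend')
    cmds.connectAttr(fk_source + '.rotate', blend + '.inRotate1')
    cmds.connectAttr(mocap_source + '.rotate', blend + '.inRotate2')
    cmds.connectAttr(switch_ctrl + '.mocapWeight', blend + '.weight')
    cmds.connectAttr(blend + '.outRotate', target_joint + '.rotate')
    return blend
```

Keying that one weight attribute is what lets a shot slide between mocap and hand-keyed animation without touching the characterization.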


Dad Lollipopland



I modeled out the daddy sketch. I want to create a couple of background characters and begin developing a style, a theme and a treatment outlining a first-pass narrative. After creating a couple more characters, I will try out some rigs that cater to the look and feel I’m going for.

Right now, my inspirations are some of the new 3D animated series on YouTube and Netflix. Num Noms, Tsum Tsum, True and the Rainbow Kingdom, Super Monsters and BabyBus are a handful of current 3D animations I admire.


Kitbashed Mech Robot


I took a break from making food-based character designs and kitbashed a model for a Gundam-inspired mech robot using Andrew Averkin’s Hard Surface Kitbash Pack.

It took a couple of hours of bashing the model together, but creating what I thought would be a rudimentary rigid-bind skeleton turned out to be a time-consuming project. Since Maya’s rigid binding tools are no longer available, I found I needed to create a smooth bind instead and then modify memberships and weights to mimic a rigid-bind setup.
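If you run into the same missing tools, the sketch below shows roughly what I mean in maya.cmds: a smooth bind capped at one influence per vertex behaves like the old rigid bind. The function and object names are placeholders, not production code.

```python
# Sketch: fake a rigid bind with a smooth bind by limiting every
# vertex to a single joint influence. Names are placeholders.
import maya.cmds as cmds

def rigid_style_bind(joints, mesh):
    # Smooth bind, but cap each vertex at one influence so the mesh
    # deforms in solid chunks, like the old rigid bind did.
    cluster = cmds.skinCluster(joints, mesh,
                               toSelectedBones=True,
                               maximumInfluences=1,
                               obeyMaxInfluences=True)[0]
    return cluster

def harden_to_joint(cluster, components, joint):
    # Force the given vertices to follow one joint at full weight,
    # the equivalent of editing rigid-bind set memberships.
    cmds.skinPercent(cluster, components, transformValue=[(joint, 1.0)])
```

From there, adjusting which vertices belong to which joint is just a matter of reassigning those hard weights.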


Ice Cream


I created another character for the animated series: the main character, an anthropomorphized soft-serve ice cream cone. Below are the sketch and the first pass on the 3D model. I want to keep textures as simple as possible, but in some cases you need more detail to convey the character. Here, the waffle pattern around the face needs to stand out just enough: too subtle and you lose the ice cream cone quality, too strong and you lose the human quality.

Since my son is a big fan of rainbows, as most five-year-olds are, I was originally thinking of using a rainbow-colored soft serve, but on second thought decided that vanilla ice cream with sprinkles made more sense.


Character Sketches


Some first-pass concepts and sketches for characters in my animated series. Just so you know, I am getting the bulk of my art direction from a 5-year-old, and he is a tough boss! The characters will be food-based, anthropomorphized and cartoonish, at least as currently envisioned. Some or all of this may change, of course. For now, I created some quick character sketches and modeled out some of the characters.


kento


daddy

Our main protagonist is a school kid envisioned as a humanized rainbow ice cream. Of course, all of this is preliminary, and I will be completing multiple iterations before I even get into production.

I also took the cake character and created a 3D model of her. Once I have a handful of characters to populate the world with, I’ll start some rigging and animation tests.


mama


mama model


Lollipopland Intro

Lollipopland Intro from Michael Gochoco on Vimeo.

This is a snippet of the intro shot for an animated serial currently called Lollipopland, a character-based children’s cartoon. It’s mostly inspirational, a way to get myself motivated.

Right now, the project is in the preproduction treatment phase, going through style and inspiration boards, concept, character sketches and narrative design. When production begins, I’ll keep this blog updated with my progress.


BMW Intro

BMW Non-work Intro from Michael Gochoco on Vimeo.

I used Redshift car shaders and global illumination to create this non-work demo of what a BMW intro splash could look like. The car is a BMW model from Arte-3D. I re-shaded the car with Redshift materials, then animated and rendered it. Using a LUT created in Photoshop, I graded the render and composited the final end card in After Effects. Total project time: 3 hours, including render time.


Visual Typography

It’s been a while since my last post. Life happens, and it’s been a busy year. Sorry for the extended hiatus; I promise to come around more often!

Recently, I’ve been challenging myself, in the few chances I have for experimentation, to create short five-second visual typography clips. A colleague of mine mentioned a Reddit game of identifying movies from a single word uttered in the film (without uttering the title, of course).

I tried something similar: short clips designed around a single word, each completed within an hour or two. Here are four examples of what I’ve done so far.





LIDAR: 3D Scanning Technology

A recent post on FXGuide about Pointcloud9, a European company that provides high-quality 3D scanning services to the film industry, has me fascinated with how this technology is being used today. LIDAR is basically the process of using a laser to capture the 3D information of an object or environment, similar to the way desktop 3D scanners operate, and it ties in perfectly with my previous Recreality post about the future of cinema. These laser-based range-finding cameras are ultra accurate. No Kinect hack here. Perhaps this is how it will be done in the future? Incidentally, this technology was used four years ago for Radiohead’s House of Cards music video.




Recreality: The Future of Cinema?

I just received my Lytro and have been having a blast taking photos with it. It’s absolutely brilliant. If you’re unfamiliar with this camera, it allows you to focus the picture after it’s already been taken. The move from in-camera to in-post is certainly being led by technology and economics. My Pixel Cloud plugin is also trying to bring more of those capabilities into After Effects. Just look at the Microsoft Kinect and how innovative pioneers are using its ranging features to recreate environments from recorded point clouds. And Samsung has developed a sensor that records not only RGB but depth pixels as well! These are amazing innovations for effects artists.

Imagine a future where we can record whole rooms as animated environments. Think 3D scanners that scan entire rooms at once and at 24fps. We could completely eliminate conventional camera motion control and create everything in-post. We could change the lighting setups and create digital camera rigs after the video has already been shot: not as CG, but as recorded pixels in 3D space, an accurate representation of reality that we can manipulate to our choosing. This opens up possibilities for interactive storytelling, not to mention subjective 3D stereography. I could imagine a dozen more uses.

This isn’t just virtual reality but recreated reality, “Recreality.”


On the YouTubes!

Yeah! After some soul searching, I’ve moved all the blog videos onto YouTube! Our YouTube channel is at TheBlurrypixel. Of course, all the videos are still embedded in the posts, so there’s no need to go directly to the channel, but I will be posting some bonus videos there that may not appear on the blog. So show us your support and subscribe!


Tutorial: Learning at the Playground

At school there were always teachers who, in all honesty, were experts in their field simply because they knew 10% more about it than the students they were teaching. But if I took the chance to look beyond my own “pretentiousness”, I would find there was always something invaluable to learn from them. It may not have been what I was expecting, and it may have been completely different from what I was studying, but it was always beneficial and always unexpected. And that’s what learning is about, right?

So here is my first tutorial on Blurrypixel, using an often overlooked particle effect in After Effects: Particle Playground. It takes a bit of effort to learn compared with the plethora of turnkey particle generators out there, but I think it’s a good start for a tutorial since it encapsulates so much of digital compositing. So even though you might be saying, “I can make this effect happen in Particular in 5 minutes,” keep in mind there’s always another 90% out there.

It’s a video tutorial. I hope you find it invaluable. Please let me know how I did and if you have any tips or corrections please leave a note. I want to get better! So watch my first tutorial and let’s Learn at the Playground!


What is Relighting?

What is Relighting for After Effects?

For After Effects and compositing in general, relighting is the process of changing the perceived shading of an already rendered 3D image. This includes the diffuse shading and specularity of the 3D image. One could drastically change the perceived position of the sun or whether an object is shiny or not. In other compositing packages this may also include cast shadows.

Relighting is actually a commonplace process in many node-based compositing packages like Nuke. Until recently, this was not possible within After Effects. The fact that After Effects had a 2GB memory limitation as well as limited support for 32bit footage certainly limited its capability to relight a shot.

32bit images can take up a lot of memory and processing power. Take your heaviest comp and turn on 32bit: a single 4K float frame runs to roughly 130MB (3840 × 2160 pixels × 4 channels × 4 bytes), so your comp may take up all your memory, and most effects aren’t 32bit ready, making your output simply 8bit images with a decimal point on the end. Imagine relighting a 4K video in CS4. You couldn’t, and the process would still be very sluggish in CS5, but the possibility is there. And that’s what Pixel Cloud will do: extend that potential and make that opportunity available.

The truth is, After Effects was never meant for this type of compositing. Node-based solutions filled a market that AE simply could not. But CS5 and above is changing all that. 64bit and no memory limitation means that After Effects is just beginning to compete in a market of film-quality compositing and special effects. After Effects and Premiere can help open up a world that is filling with digital filmmakers. Technology just continues to become more powerful and even cheaper.

After Effects has a ways to go in order to become a compositing standard. It could still be faster. It should have stronger tools for customizing workflow, or at least more education on the scripting interface. And why not give the flowchart a more node-like interface for the many users who come from a 3D background? But with its price, regular production cycle, pages of tutorial information and legions of users, CS5 already is the de facto standard.

Passes

To light a scene, you basically need three pieces of information from the 3D package: the position of each surface point, the direction that surface is pointing, and the camera information used to render the scene in the first place (the camera position, direction and focal length). The light information is provided by the compositing program.

The position information is rendered into a position pass. Some people call it a “P” pass, but I prefer position pass. The surface direction is provided by the normal pass. The position pass takes the x,y,z coordinates of the points being rendered and saves them in a 32bit image as r,g,b color information. The normal pass takes the vector coordinates of the normals at those points and saves them the same way. The images have to be 32bit because these values are floating point, can be greater than 255 and can even be negative. The values also have to be in the same coordinate system: if the position pass is in camera coordinates, then the normal pass must also be in camera coordinates.

Given these two passes and the camera, the compositing program can then use a bit of trigonometry and some fancy mathematics to simulate, for any given light, what the lighting would be at those corresponding screen coordinates.
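To make that math concrete, here is a rough numpy sketch of just the diffuse (Lambert) part, assuming the position and normal passes are already loaded as float arrays in the same coordinate system as a simple point light. The function and parameter names are my own illustration, not how Pixel Cloud is implemented.

```python
# Rough sketch: diffuse relighting from a position pass and a normal
# pass. Both passes are (H, W, 3) float32 arrays in the same
# coordinate system; loading the EXRs is out of scope here.
import numpy as np

def relight_diffuse(position_pass, normal_pass, light_pos, light_color):
    """Return an (H, W, 3) diffuse lighting image for one point light."""
    # Direction from each surface point toward the light.
    to_light = np.asarray(light_pos, dtype=np.float32) - position_pass
    to_light /= np.linalg.norm(to_light, axis=-1, keepdims=True) + 1e-8

    # Normals may drift off unit length after filtering; renormalize.
    n = normal_pass / (np.linalg.norm(normal_pass, axis=-1, keepdims=True) + 1e-8)

    # Lambert term: N dot L, clamped so back-facing points go black.
    ndotl = np.clip(np.sum(n * to_light, axis=-1, keepdims=True), 0.0, None)
    return ndotl * np.asarray(light_color, dtype=np.float32)
```

Multiply the result by your diffuse color or beauty pass to tint it. Specular would additionally need the camera position to build a view vector, which is exactly why the camera information matters.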

Why do all this when you could simply go back into your 3D software and re-render the scene? Because that could mean hours of rendering. Compositing is generally much faster than re-rendering, so if you can fix it during compositing, why not try? By rendering extra passes during the rendering stage, you can save yourself tons of time instead of wasting hours making sure the sun’s highlights are on the right side or your character’s nose is shiny enough.


World in Miniature


This recent video of the Tokyo Art Center and the surrounding landscape is by Darwinfish105 (who is well known for an exquisite Gundam video that went viral last year). I love authentic tilt-shift photography, and this video is a breathtaking view of the Tokyo landscape. As creatives, we have to constantly view the world through a different lens than the norm to inspire ourselves. We may find that the world is not as big as it might seem…
