
Status: May ’10

What a crazy month. The Green Lantern gig was interrupted by a family member injured in an accident; happily, recovery is going well and I should be back at it soon. Meanwhile Adobe released an interesting video about Avatar featuring PLF’s Stephen Lawes, who went straight from Avatar onto Iron Man 2, which had to have been exhausting. You see some glimpses of the studio and Stephen’s team, who have done some amazing work!

On location

Working on location on Green Lantern for a bit, and having fun with ICE as a happy by-product. I’ve been putting together some interesting deformation compounds which I’ll eventually put online, but I’m currently too busy to do much besides this quick note. Back to work!

I am really getting tired of the color green, BTW...

Diffusion limited aggregation redux

Over time people have been asking me to share the DLA compound I wrote waaay back. So here it is.

http://www.vimeo.com/11122031

The compound can be found in the downloads section.

Diffusion limited aggregation is a natural phenomenon in which particles in solution build up over time: as particles are deposited they limit the locations where subsequent particles can be deposited, resulting in pseudo-random growth of deposits with characteristic nodules and ‘fjords’. You can see examples of DLA systems all over nature, from veins of copper to urban growth patterns. In this simple ICE implementation, particles are emitted from geometry and then pulled towards a “seed” point, namely the point cloud the compound is applied to. If there are no existing points in the cloud, the first will be deposited at the origin. This causes DLA structures to build towards the provided surface and then to spread across it or inside its volume. The compound has a turbulence node which adds a random tropism, as well as a vector input for user-directed tropisms. Please be aware this was never intended as more than an experiment: it works fine but is a pretty simple implementation.

To use: apply the compound to a point cloud (typically with a single point) in a simulated ICE tree…
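For the curious, here’s a rough sketch of the underlying algorithm in Python, on a 2D grid rather than real geometry. To be clear, this is not the ICE compound itself: the grid, the edge emission and the four-neighbor sticking rule are all simplifications for illustration, and the comment about biasing the walk stands in for the compound’s tropism inputs.

    import random

    SIZE = 41          # grid is SIZE x SIZE
    N_PARTICLES = 300  # number of walkers to deposit
    grid = [[False] * SIZE for _ in range(SIZE)]
    grid[SIZE // 2][SIZE // 2] = True  # the "seed" point, like the first point in the cloud

    def touches_deposit(x, y):
        # A walker sticks when any 4-neighbor already belongs to the deposit;
        # deposited particles limit where later ones can land.
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = x + dx, y + dy
            if 0 <= nx < SIZE and 0 <= ny < SIZE and grid[ny][nx]:
                return True
        return False

    for _ in range(N_PARTICLES):
        # Emit each walker on the grid edge (the compound emits from geometry instead).
        x, y = random.choice([
            (random.randrange(SIZE), 0), (random.randrange(SIZE), SIZE - 1),
            (0, random.randrange(SIZE)), (SIZE - 1, random.randrange(SIZE))])
        while not touches_deposit(x, y):
            # Unbiased random walk; weighting these choices would act like
            # the turbulence/vector tropism inputs on the compound.
            dx, dy = random.choice(((1, 0), (-1, 0), (0, 1), (0, -1)))
            x = min(max(x + dx, 0), SIZE - 1)
            y = min(max(y + dy, 0), SIZE - 1)
        grid[y][x] = True  # deposit; nodules and 'fjords' emerge over time

    # Crude ASCII dump of the aggregate
    for row in grid:
        print("".join("#" if cell else "." for cell in row))

Run it a few times and you’ll see the same nodule-and-fjord structure the compound produces, just in miniature.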

MSpro from XSI

Models went from .obj format into XSI, then to MSpro for materials, lighting and a 5k render. The sculpture mesh is a full-res scan, about as dense as a ZBrush export, with several hundred thousand polys.

Screen capture

More realtime test renders

Since I’ve been putting MSpro through its paces today, I decided to see how it did with a larger data set. So I fed it a reasonably large model of Manhattan. Mach Studio did great: export/import was fast, the machine was responsive, and even hitting it with realtime ambient occlusion, shadows and depth of field didn’t get the scene to a point where it chugged. Rendering this scene with similar settings in Mental Ray would have meant significant time per frame, certainly enough to preclude any kind of immediate feedback. MSpro was fine onscreen, and rendering 5k resolution images with settings turned up relatively high still took under half a second a frame.


Not high art, but we’re talking test images, people. I rather enjoy the SimCity look of the orthographic one… Hey, it looks like I nudged the ground mesh in one of those images. That would be a serious drag with a traditional render process: you’d have to submit it to the farm again. Realtime, it’s no big deal. Of course, there’s a lot of room for improvement in these images… most of the issues are not with the renderer but with the artist (errr, me), who for some reason didn’t want to spend all night obsessing over test images. You can speed up render times, but there’s still tweaking galore. But at least you can tweak interactively.

Mach Studio Realtime Test

This is another Mach Studio realtime test: one of the Face Robot heads exported out of XSI, rendered at less than half a second per frame. The SSS shader in MSpro is clearly capable of much better output than this, but I’m still a beginner and didn’t want to take more than an hour fiddling around from start to finish. Same with the eyes, which are flat in this image: I just didn’t feel like spending the time necessary to bring them out when this is just a test.

Head Test

Testing Mach Studio

Here’s the result of half an hour (probably less) of fiddling around with Mach Studio Pro and Photoshop to see how fast I could produce an acceptable concept image.

I exported the buildings from Maya, which took a few minutes, slapped on some materials, and threw in an environment light and a spot light. I rendered a frame (0.13 seconds) and took it into Photoshop, where again I just threw some elements together: a sketch filter blended with the CG, some vignetting and color manipulation. It’s not going to win any awards, but that’s not the point… I was able to get this image out from start to finish incredibly fast, despite being a novice with MSpro. Very promising…

Gorillaz and Miku. Who can resist a CG character performing onstage? Not these people. I dig the leek-shaped lightsticks. (For the impatient, Miku makes her stage appearance at about 5:25 in this video.)

The mocap performance is nice, I guess, but what really interests me about Hatsune Miku is that her voice is synthesized, not recorded. It would have been cooler if she were puppeteered in real time à la Henson’s Creature Shop…

Update – the video is no longer available; it was removed from YouTube. Bah.