Various test images as I play around with Mantra and Houdini.
Pingo van der Brinkloev hit me up for a very rough discussion of the conceptual approach to building knitted patterns in CG and came up with a pretty cool workflow to get similar results in C4D. Cool, nice work!
Art is collaborative. It loops back and forth between artists, techniques grow, and we present constant challenges to ourselves and each other. Nothing makes me happier than to see this kind of mutual exchange of ideas across different toolsets. We are artists first, inspiring one another – the borders of studios, tools, location, language or genre should never keep us from exchanging ideas. Thriving ideas lead to a thriving market as a whole, employed artists, and most importantly better art across the board. I really believe lifting each other up is the best route to success for the individual artist as well as the industry as a whole, so when I saw how Pingo built this I was as excited as I was when I saw Jonah Friedman make Entwiner in ICE. How can you not get excited? This is cool stuff!
Two of three spots. “Evolution” was a small team, 3 weeks, lighting and effects with Softimage and Arnold.
I love small but intense projects like this.
“Run” was primarily Maya/Vray with a touch of ICE. The studio (Royale) is only 6 years old but advancing fast, and it’s been a real pleasure working with them. For their first exploration of ICE, Royale invited in some familiar SI friends – Ciaran Moloney, Steven Caron, Leonard Kotch, Billy Morrison and yours truly doing a first gig start to finish as a CG sup (which with guys like this mostly involved saying “go for it.”)
Like the Psyop “Telstra” spot, this commercial essentially required us to create a system for knitting cloth from massive numbers of strands. Leonard Kotch wrote a system which performs many of the same tasks as the Psyop “Entwiner” tool, but he took a slightly different direction, and it was fascinating to compare how the two diverged. The progressive animation required for these two shots resulted in a pretty flexible and broad system, which we are currently using for the last of the three spots, which will wrap in production soon.
Royale has been an enthusiastic and fun group to work with, and it’s been great getting to show a studio as strong as they are in design some of the possibilities ICE can bring to jobs like this. Expect to see some version of Leonard’s “LKFabric” system gifted to the community before long – very cool Royale, thanks! (They also throw good parties; their 6th birthday celebration was impressive and… unusual.)
Since at the moment I am heads-down in production and can’t share the art I’m working on, I thought I would take a second to share someone else’s art – in this case a favorite piece by the wonderful and whimsical dance troupe, Momix.
While I have been lucky enough to have seen Pilobolus, The Alvin Ailey Dance Company, Martha Graham, and even some rare offshoots like Iso and The Bobs, I have never seen Momix live. If you like good art and have an opportunity to do so yourself, don’t pass it up.
And while I’m mentioning performance artists of the kinesthetic sort, if you live on the West Coast and haven’t spent an evening with the Flying Karamazov Brothers, good lord – what’s keeping you?
My harddrive barfed up these scraps. What to do with all this? Put ’em on the internet of course.
I had nothing to do with the making of Tangled, but it’s a great Disney film which blew me away, and I always like “making of” breakdowns. So I took a look at this, and felt it did such a good job of showing how a successful final result comes from a steady progression of improvements. Whether you’re making a fully animated film or a live action sequence, the best results come from a back-and-forth iterative process of test images to build up the final product.
In today’s “faster/cheaper” production mentality there is often significant pressure to reduce iterations and the time spent making a shot. If you watch through this sequence and imagine cutting out many of the improvements made over time, you can get a feel for how the extra time and effort makes all the difference. The flip side, of course, is that without good artistry and good direction all the time in the world won’t make a difference.
Too many great ideas are killed by a rush to minimize costs and get stuff out fast. This may make profits for those who deal with high-volume-low-quality kinds of projects, but I note that the giants with some of the most incredible work (and full coffers) tend to follow a mentality where extra time, effort and resources are spent to make a final product which is clearly of superior quality.
In the end, quality pays, and it’s the result of investing in talent, tools, and organization.
In this sequence we see a lot of things being done right…
- Time taken for previsualization.
- Tools available to allow communication via painting on frames, showing thought and investment in pipeline.
- An area set aside for the artist to make reference, showing insight into artistry and the willingness to spend money to enable it.
- Many iterations, showing time was spent to get the shot right, and the production was organized with regular milestones in place (and time set aside to achieve them).
The final result: a great film which was not only a creative success but also did well financially ($590 million in box office revenue, and a new and lasting Disney heroine providing decades of continuing income).
It had been a while since I wrote anything technique-specific for Softimage, so I decided to come up with a rendertree setup for “lit sphere” rendering and share it on this blog. I would talk about normals and angles of incidence and it would be completely cool. Well, it’s still cool, but there’s no need for much in the way of discussion… it turned out to be ridiculously simple.
Have you ever used Mudbox or ZBrush and noticed how nice their realtime clay-like materials are? That’s what we’re talking about.
A “lit sphere”, or what ZBrush users might recognize as a “MatCap” material, is a technique first described (as far as I know) by Bruce and Amy Gooch, Peter Sloan and William Martin in their 2001 paper “The Lit Sphere: A Model for Capturing NPR Shading from Art.”
The basic idea is simple: a spherical image can act as a stand-in for the lighting of a more complex surface, by mapping the angle of the surface normal (as seen from the camera) to XY coordinates of an image, such that the center of the image relates to a surface facing the camera and every other angle of incidence maps to an X (horizontal angles) and Y (vertical) coordinate on the texture.
The result is “lighting” defined for every possible normal via a simple texture, and what’s really cool is that the result can easily approximate various painterly, sketchy or waxy surfaces. Since everything derives from a texture, it’s fast enough for realtime shaders and easy to change and edit.
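The mapping above is simple enough to sketch in a few lines. Here’s a minimal, hypothetical Python illustration (the function name and conventions are mine, not from any particular shader): given a unit surface normal in camera space, with +Z pointing toward the camera, the normal’s X and Y components are remapped from [-1, 1] into [0, 1] texture coordinates, so a surface facing the camera samples the center of the lit-sphere image.

```python
import math

def lit_sphere_uv(normal_cam):
    """Map a camera-space surface normal to lit-sphere texture UVs.

    normal_cam: (nx, ny, nz) unit normal in camera space,
    with +Z pointing toward the camera.
    Returns (u, v) in [0, 1], where (0.5, 0.5) is the image center.
    """
    nx, ny, nz = normal_cam
    # Normalize defensively in case the input isn't unit length.
    length = math.sqrt(nx * nx + ny * ny + nz * nz)
    nx, ny = nx / length, ny / length
    # The normal's x/y components lie in [-1, 1]; remap to [0, 1].
    # A normal pointing straight at the camera, (0, 0, 1),
    # lands exactly at the image center.
    return (nx * 0.5 + 0.5, ny * 0.5 + 0.5)
```

A realtime shader does the same thing per fragment with the view-space normal; here the texture lookup itself is left out, since that part is just an ordinary 2D sample at the computed UV.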
Ok, so how do we get this result in realtime, and how do we get it in Mental Ray? Well, realtime requires a realtime shader. It’s easy to make an HLSL shader in Mental Mill without any shader programming expertise at all. Here’s an admittedly junky one for use in Maya, which works in Softimage as well.
But if you don’t need realtime display in your viewport, it’s just as simple to render lit spheres in Mental Ray, regardless of what package you’re using. Just use the mip_grayball shader and feed it a “litsphere” texture. That’s it. Done.
So, while this topic didn’t prove to be a basis for an insightful tour through the rendertree, at least it’s cool in the sense that you just can’t get a more powerful shading tool any simpler than this. If you are looking for an interesting approach to NPR or “painterly” rendering styles, want to specifically mimic the sketchy or painterly style of a traditional artist, or want a good realtime material to model with that has the same feel of lighting you get in Mudbox, now you’re set. Enjoy.