My hard drive barfed up these scraps. What to do with all this? Put ’em on the internet, of course.
I had nothing to do with the making of Tangled, but it’s a great Disney film which blew me away, and I always like “making of” breakdowns. So I took a look at this, and felt it did such a good job of showing how a successful final result comes from a steady progression of improvements. Whether you’re making a fully animated film or a live action sequence, the best results come from a back-and-forth iterative process of test images to build up the final product.
In today’s “faster/cheaper” production mentality there is often significant pressure to reduce iterations and the time spent making a shot. If you watch through this sequence and imagine cutting out many of the improvements made over time, you can get a feel for how the extra time and effort makes all the difference. The flip side, of course, is that without good artistry and good direction all the time in the world won’t make a difference.
Too many great ideas are killed by a rush to minimize costs and get stuff out fast. This may make profits for those who deal with high-volume-low-quality kinds of projects, but I note that the giants with some of the most incredible work (and full coffers) tend to follow a mentality where extra time, effort and resources are spent to make a final product which is clearly of superior quality.
In the end, quality pays, and it’s the result of investing in talent, tools, and organization.
In this sequence we see a lot of things being done right…
- Time taken for previsualization.
- Tools available to allow communication via painting on frames, showing thought and investment in pipeline.
- An area set aside for the artist to make reference, showing insight into artistry and the willingness to spend money to enable it.
- Many iterations, showing time was spent to get the shot right, and the production was organized with regular milestones in place (and time set aside to achieve them).
The final result: a great film that was not only a creative success but also did well financially ($590 million in box office revenue, plus a new and lasting Disney heroine providing decades of continuing income).
It had been a while since I wrote anything technique-specific for Softimage, so I decided to come up with a rendertree setup for “lit sphere” rendering and share it on this blog. I would talk about normals and angles of incidence, and it would be completely cool. Well, it’s still cool, but there’s no need for much in the way of discussion… it turned out to be ridiculously simple.
Have you ever used Mudbox or ZBrush and noticed how nice their realtime clay-like materials are? That’s what we’re talking about.
A “lit sphere,” or what ZBrush users might recognize as a “MatCap” material, is a technique first described (as far as I know) by Bruce and Amy Gooch, Peter-Pike Sloan and William Martin in their 2001 paper “The Lit Sphere: A Model for Capturing NPR Shading from Art.”
The basic idea is simple: a spherical image can act as a stand-in for the lighting of a more complex surface. The surface normal’s angle (as seen from the camera) is mapped to the XY coordinates of an image, so that the center of the image corresponds to a surface facing the camera, and every other angle of incidence maps to an X (horizontal angle) and Y (vertical angle) coordinate on the texture.
The result is “lighting” defined for every possible normal via a simple texture, and what’s really cool is that the result can easily approximate various painterly, sketchy or waxy surfaces. Since everything derives from a texture, it’s fast enough for realtime shaders and easy to change and edit.
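The mapping described above fits in a few lines of code. Here’s a minimal Python sketch of the core idea (the function names are mine, and it isn’t tied to any particular renderer or shader language — a real implementation would live in a shader and use hardware texture filtering):

```python
import math

def matcap_uv(normal):
    """Map a camera-space surface normal to lit-sphere (MatCap) texture UVs.

    The normal's X and Y components are remapped from [-1, 1] to [0, 1]:
    a normal pointing straight at the camera, (0, 0, 1), lands at the
    texture center (0.5, 0.5), and grazing angles land near the edges.
    """
    nx, ny, nz = normal
    length = math.sqrt(nx * nx + ny * ny + nz * nz)
    nx, ny = nx / length, ny / length
    return (nx * 0.5 + 0.5, ny * 0.5 + 0.5)

def shade(normal, sphere_image, width, height):
    """Look up the 'lighting' for a normal from a lit-sphere image.

    sphere_image is any 2D array of pixel values (rows of columns);
    nearest-neighbor lookup keeps the sketch simple.
    """
    u, v = matcap_uv(normal)
    x = min(int(u * (width - 1)), width - 1)
    y = min(int(v * (height - 1)), height - 1)
    return sphere_image[y][x]
```

Every surface point with the same camera-space normal gets the same color, which is exactly why one painted sphere can stand in for an entire lighting setup.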
Ok, so how do we get this result in realtime, and how do we get it in Mental Ray? Well, realtime requires a realtime shader. It’s easy to make an HLSL shader in Mental Mill without any shader-programming expertise at all. Here’s an admittedly junky one for use in Maya, which works in Softimage as well.
But if you don’t want realtime display in your viewport, it’s just as simple to render litspheres in Mental Ray, regardless of what package you’re using. Just use the mipGrayBall shader, and feed it a “litsphere” texture. That’s it. Done.
So, while this topic didn’t prove to be the basis for an insightful tour through the rendertree, at least it’s cool in the sense that you just can’t get a shading tool this powerful any simpler than this. If you’re looking for an interesting approach to NPR or “painterly” rendering styles, want to specifically mimic the sketchy or painterly style of a traditional artist, or want a good realtime material to model with that has the same feel of lighting you get in Mudbox, now you’re set. Enjoy.
I’m going to have to check out Darktable, a GNU-licensed photo management app similar to Lightroom and Aperture. Its claim to fame (aside from being free) is its blazing-fast performance thanks to GPU acceleration. If you like photography as much as I do, and you’re using a *nix OS, it’s probably worth a look.
Well, another siggraph has come and gone. As I await my flight in LAX, I might as well post some initial thoughts. The journalists will cover the changes to products and the like, so I’ll focus on overall experience and impressions.
First, the trivial… The weather was cool, which was a bit startling. And Siggraph 2011 is going to be held in Canada. Wtf, we seem to have broken the long-standing tradition of baking Siggraph attendees in desert or tropical heat. Adding to the surreal feeling, the exhibits area was once again smaller, perhaps because Autodesk has consumed half the industry…
Speaking of Autodesk, this year their area was dominated by a pretty mediocre and endlessly repeating spiel about “virtual production,” which tied together a hypothetical production workflow nobody actually uses but gave them an overly polished way to show some basic new features of Maya and Mudbox. XSI and Max were, well, not really visible in the slightest. While I’m sure some lip service was given to them, you sure couldn’t tell from a casual survey of their booth. And since there was no Softimage user group event, Siggraph was pretty much XSI-free. Heck, LightWave was front and center by comparison. The only real mention of XSI was the announcement that it’s part of the basic bundle, phrased in such a way that ICE seemed like the only reason it was included at all. Thanks for the snub, Autodesk. I use Maya in production, sure, but frankly it’s showing its age, so why cram it down our throats when XSI is so robust?
(Edit – Some folks at Autodesk disagree with my assessment. Sorry, but I watched demo people tweak the weighting of a character rigged in Maya over and over on the big screen, but didn’t see XSI anywhere. I know there is great work being done by the Softimage folks – but marketing matters, and I’m reporting what I saw… which was that XSI lacked visibility. That may step on taboos or be unmentionable in Autodesk circles. Tough.)
The show was far more interesting at the smaller booths, where stereo, 3D printing, and GPU rendering were all engaged in healthy competition. AMD was home to MachStudio, which was showing off its exciting new shader construction tools, and nearby The Pixel Farm was showing off its rather bizarre companion to PFTrack, which adds a node-based workflow (cool) but lacks some of the core power of PFTrack (wha?). Desktop 3D printers were everywhere, including the MakerBot, which was cleverly if less visibly off the exhibit floor, hanging out near Emerging Technologies with some other delinquents like the GigaPan. How did that happen?
Emerging Technologies was its usual combination of really cool stuff that doesn’t quite work, stupid stuff with no practical application beyond provoking thought to a greater or lesser degree, and some eye-popping technology that had me muttering “we wants it, my preciouss.” Foremost in that latter category was the 3D volume display presented by Sony. Remember the 3D volume displays of a few years back that relied on spinning plates of LEDs? The ones that contained murky glimpses of CG objects you could walk around and wish were actually working? Well, they work now. Really well, in fact. As in, they feel ready for market. Nice work, Sony; when you guys aren’t awash in marketing droids and hype you can still make some amazing stuff.
The papers and course presentations, which as far as I’m concerned are the living, beating heart of Siggraph, were quite good, and I’m happy to say that heart is still strong. There were the usual crowd-pleasers, like an excellent panel on Tron which had the added benefit of showing us 8 minutes of unseen footage (it’s looking great). But more importantly, the more academic presentations were still there, sharing and pushing the state of the art. I particularly enjoyed a half-day course on volumetrics.
Disney had a highly visible and impressive showing on all fronts, from excellent presentations on procedural hair and trees in Rapun- oops, “Tangled” to the Tron panel, and more. They did a good job of re-establishing themselves in my mind as leaders in the industry, both artistically and technically. MPC, DD, Tippett and The Mill also brought their A-games.
The parties were parties, techies, geniuses and academics abounded, and students still want to get hired… all in all, I’d say that while Siggraph is still the incredible shrinking con, it was just as valuable as ever where it counts: in getting a feel for the state of the industry and the technology, in seeing colleagues and friends who are rarely in the same place at once, and in sharing and extending the techniques, tools and insights so vital to computer graphics. I plan to continue attending, and suggest that anyone serious about computer graphics should do the same. Remember, people: judging Siggraph by the size of the showroom misses the point entirely – we don’t really need sales pitches, but you won’t find any other venue where studio professionals and academics mingle and share like they do at Siggraph. See you there next year!
Ok, well no…. It’s just me fiddling around with photoshop to loosen up after a break from graphics.
I’ve been playing around with ToonFX, a little app on my iPhone which creates NPR sketch effects from photos (made, I gather, by Bruce Gooch, who has done a lot to advance the state of the art in ‘painterly’ and ‘sketchy’ image manipulation).
The app is simple but with a little effort and insight you can get some good results.