Since I’ve been putting MSpro through its paces today, I decided to see how it handled a larger data set, so I fed it a reasonably large model of Manhattan. Mach Studio did great: export/import was fast, the machine was responsive, and even hitting the scene with realtime ambient occlusion, shadows, and depth of field didn’t get it to a point where it chugged. Rendering this scene with similar settings in Mental Ray would have meant significant time per frame, certainly enough to preclude any kind of immediate feedback. MSpro was fine onscreen, and rendering 5K-resolution images with settings turned up relatively high still resulted in render times under half a second a frame.
Not high art, but we’re talking test images, people. I rather enjoy the SimCity look of the orthographic one… Hey, it looks like I nudged the ground mesh in one of those images. That would be a serious drag with a traditional render process: you’d have to submit it to the farm again. Realtime, it’s no big deal. Of course, there’s a lot of room for improvement in these images… most of the issues are not with the renderer but with the artist (errr, me), who for some reason didn’t want to spend all night obsessing over test images. You can speed up render times, but there’s still tweaking galore. At least now you can tweak interactively.
This is another Mach Studio realtime test: one of the Face Robot heads exported out of XSI, at less than half a second per frame to render. The SSS shader in MSpro is clearly capable of much better output than this, but I’m still a beginner and didn’t want to spend more than an hour fiddling around from start to finish. Same thing with the eyes, which are flat in this image: I just didn’t feel like spending the time necessary to bring them out when this is only a test.
Here’s the result of half an hour (probably less) of fiddling around with Mach Studio Pro and Photoshop to see how fast I could produce an acceptable concept image.
I exported the buildings from Maya, which took a few minutes, slapped on some materials, and threw in an environment light and a spot light. I rendered a frame (0.13 seconds) and took it into Photoshop, where again I just threw some elements together: a sketch filter blended with the CG, some vignetting, and color manipulation. It’s not going to win any awards, but that’s not the point… I was able to get this image out from start to finish incredibly fast, and that’s despite my being a novice with MSpro. Very promising…
Did some more work on a basic realtime toon shader for XSI (and Maya; it’s CGFX). Here’s a sample…
Functionality is pretty basic at the moment: an ink threshold, two levels of paint, a hard spec highlight, overall color control, and a single point light source. I still need to introduce diffuse and spec mapping, possibly some reflections and bump/normal maps. Remember, if you see an agent, run.
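For the curious, the shading logic above boils down to a few hard thresholds instead of a smooth ramp. Here’s a sketch of that math in plain Python rather than CGFX, just to show the idea; all the names and threshold values (ink_threshold, paint_break, etc.) are my own illustrative choices, not the shader’s actual parameters.

```python
def toon_shade(n_dot_l, n_dot_h, ink_threshold=0.2,
               paint_break=0.5, spec_threshold=0.98,
               dark=0.4, light=1.0, spec=1.5):
    """Return a flat toon intensity from two lighting terms.

    n_dot_l: dot(normal, light_dir), the facing/diffuse term
    n_dot_h: dot(normal, half_vector), the specular term
    """
    # Ink: surface nearly facing away from the light gets the ink color
    if n_dot_l < ink_threshold:
        return 0.0
    # Hard spec highlight: one sharp pop of brightness, no falloff
    if n_dot_h > spec_threshold:
        return spec
    # Two paint levels: a hard break between light and dark paint
    return light if n_dot_l > paint_break else dark

print(toon_shade(0.1, 0.0))   # 0.0 -> ink
print(toon_shade(0.3, 0.5))   # 0.4 -> dark paint
print(toon_shade(0.7, 0.5))   # 1.0 -> light paint
print(toon_shade(0.7, 0.99))  # 1.5 -> spec highlight
```

In the real shader this per-fragment result would be multiplied by the overall color control before output.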
The studio where I’m currently employed, Pixel Liberation Front (aka PLF) is situated in the heart of Venice, California. It’s in an interesting area with a lot of character(s).
Read more for a collection of images taken while walking around the area…
I’ve been playing around with ToonFX, a little app on my iPhone which creates NPR sketch effects from photos (made, I gather, by Bruce Gooch, who has done a lot to advance the state of the art in ‘painterly’ and ‘sketchy’ image manipulation).
The app is simple but with a little effort and insight you can get some good results.
A lot of people have been using an ICE compound I made a while back, “Emit instance matched to SRTs.” I made it during a game cinematic project to allow me to animate pre-scored geometry for explosions.
Since I’ve received so much feedback (and not a few questions), I’ve uploaded some basic example scenes illustrating a couple of ways the compound is meant to be used.
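If you’ve never opened the compound, the core idea is simple: emit one particle per pre-fractured piece and copy that piece’s SRT (scale/rotation/translation) onto the particle, so the instanced geometry lands exactly on top of the original chunks and can then be animated as particles. Here’s a rough sketch of that matching step in plain Python, not ICE; every name here is hypothetical and just for illustration.

```python
from dataclasses import dataclass

@dataclass
class Piece:                 # a pre-scored chunk of the source model
    name: str
    scale: tuple
    rotation: tuple          # e.g. Euler angles in degrees
    translation: tuple

@dataclass
class Particle:              # stand-in for an ICE point
    instance: str            # which shape to instance on this point
    scale: tuple
    rotation: tuple
    translation: tuple

def emit_matched_to_srts(pieces):
    """Emit one particle per piece, transform copied verbatim."""
    return [Particle(p.name, p.scale, p.rotation, p.translation)
            for p in pieces]

pieces = [Piece("chunk_A", (1, 1, 1), (0, 0, 0), (0, 0, 0)),
          Piece("chunk_B", (1, 1, 1), (0, 45, 0), (2, 0, 1))]
cloud = emit_matched_to_srts(pieces)
print(cloud[1].translation)  # (2, 0, 1): the instance sits on its chunk
```

Once the cloud matches the scored geometry one-to-one, any simulation or animation applied to the points drives the explosion pieces.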
I’ve been playing around with CGFX realtime shaders and mental mill. Here are some basic first shaders. Mental mill makes creating this kind of stuff pretty fast, and you can edit the generated code by hand once you’re in the ballpark.
I have been testing Mach Studio Pro for PLF for a while now, and while it’s a very new tool I am pretty pleased with its capabilities and potential. MSpro is an application which accepts scenes from most 3D packages (including Max, Maya, XSI, SketchUp, etc.) and enables the artist to shade, light, and render in realtime. Quality is high, with renders competitive with (and sometimes mistaken for) mental ray/V-Ray output in many cases. Being a realtime application there are caveats and limitations, of course: raytracing is not (yet) supported, nor are true radiosity/GI effects. Fair enough. And you’ll still need to render out VFX passes like particles and volumetrics in another app.
But most CG isn’t about all that. It’s about the basics, and that’s where MSpro shines: the 90% of the work you render can now be done in seconds rather than hours. It’s very liberating being able to light shots with immediate visual feedback, and MSpro was written with a fair eye towards being a production-friendly application, with Python scripting, a linear-lighting and HDRI workflow, output to OpenEXR as pass breakdowns, etc.
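A quick aside on what the linear-lighting workflow actually means in practice: sRGB textures get linearized before lighting math, and the result gets encoded back for display. The standard sRGB transfer functions (from IEC 61966-2-1, not MSpro’s own code) look like this:

```python
def srgb_to_linear(c):
    """Decode an sRGB channel value (0..1) to linear light."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    """Encode a linear-light channel value (0..1) back to sRGB."""
    return c * 12.92 if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

mid = srgb_to_linear(0.5)
print(round(mid, 4))                   # 0.214: sRGB mid-gray is much darker in linear light
print(round(linear_to_srgb(mid), 4))   # 0.5: round-trips back to the original value
```

Doing lighting and compositing on the linear values (and keeping them in float formats like OpenEXR) is what keeps light addition and pass breakdowns physically sensible.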
This is clearly the direction the industry is moving, and Mach Studio is not without competitors, but as a just-out-of-the-gate package they are off to a great start. And don’t get me wrong, MSpro isn’t just about the bare minimums… realtime micro-tessellated displacement maps aren’t basic, and realtime AO and SSS go a long way towards giving you the tools you need to create great imagery. In real time. No more waiting on farms. No more unpleasant surprises a day later.