Since I’ve been putting MSpro through its paces today, I decided to see how it handled a larger data set, so I fed it a reasonably large model of Manhattan. MachStudio did great: export/import was fast, the machine stayed responsive, and even hitting the scene with realtime ambient occlusion, shadows, and depth of field didn’t get it to a point where it chugged. Rendering this scene with similar settings in Mental Ray would have meant significant time per frame, certainly enough to preclude any kind of immediate feedback. MSpro was fine onscreen, and rendering 5K-resolution images with the settings turned up relatively high still kept render times under half a second a frame.
Not high art, but we’re talking test images, people. I rather enjoy the SimCity look of the orthographic one… Hey, it looks like I nudged the ground mesh in one of those images. That would be a serious drag with a traditional render process: you’d have to submit it to the farm again. Realtime, it’s no big deal. Of course, there’s a lot of room for improvement in these images… most of the issues are not with the renderer but with the artist (errr, me), who for some reason didn’t want to spend all night obsessing over test images. You can speed up render times, but there’s still tweaking galore. At least now you can tweak interactively.