Post-simulation and Softimage ICE (part 1)

“What happens in the post simulation tree stays in the post simulation tree”

A quick review: ICE operators are evaluated differently depending on where they reside in the construction history (also known as the operator stack or modifier stack). When an ICE tree is under the modeling region it is evaluated every frame unless a simulation region exists – if one does, it is evaluated only once. An ICE operator under the simulation region is evaluated every frame, and all data is updated every frame. In other words, changes persist and carry into the next frame – if you move a point, that change is reflected in the next frame. And when an ICE operator is in a post-simulation region, it is evaluated every frame but its changes are discarded afterward. The “lower” regions are evaluated first, then each region “above” them in the explorer, like so:

This is very, very useful. You can have an entire simulation going in the simulation region, and then do stuff to it prior to display. For instance, you could cull out all particles which aren’t visible to the camera to reduce cache size and speed evaluation. You can calculate per-particle lighting prior to rendering it, to control lighting entirely within ICE. Or, in the example below, you can move points around without altering the original simulation.

Example: Strand shapes

Here’s an interesting “look” done entirely with strands in ICE. A typical particle simulation has been used as input for an ICE operator in a post-simulation region, which uses the simulation as a basis to draw many circular strands.

First, I made a very basic particle simulation:

Then, in a post-simulation tree, I drew the strands.

… So you can see that for this effect the bulk of the work was done in the post-sim ICE tree. I use the point positions and orientations from the simulation as center points around which I add new particles with strands. In my next post I’ll show exactly what I did, but the point I’m getting at here is that things don’t have to end with merely a simulation. You can get into some very cool stuff by considering each frame of a simulation (or a cached result) as a starting point. You can deform your entire simulation, rig it to a character, light it, or perform housekeeping tasks like camera frustum culling. You can even treat the entire simulation as a single unit and scatter it – the sky’s the limit.

A quick note about motion blur:

When I wax rhapsodic about the post-simulation tree, the most frequent objection I run into is that moving particles around in a post-simulation operator invalidates motion blur calculations. This is true. Motion blur is based on the velocity of a particle, which can be thought of as a vector from the previous point position to the current one. In post-simulation operators the previous location is unavailable… it’s like a dog’s sense of time: only what exists “now” has any meaning. So you have to do some extra work if you need motion blur. Basically, you store the previous point position in a user variable which can be read by your post-simulation ICE tree, and use that information to calculate not only how you wish to move a particle, but how you moved it in the prior frame as well. From this you can calculate a valid velocity to pass to the renderer for motion blurring. That sounds awful, but it’s not really that bad. (Still, it would be handy if the devs gave us an easier workflow than this – for instance, could they store and give us access to post-sim point positions and velocities?)
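To make the bookkeeping concrete, here’s a minimal sketch of the idea in plain Python rather than ICE nodes – the function and variable names here are my own invention, not actual ICE attributes. The renderer-facing velocity must be computed from this frame’s final (post-sim) position and the previous frame’s final position, not from the simulated velocity.

```python
# Hedged sketch: offset() stands in for whatever motion the post-sim
# tree applies to a point each frame.

def post_sim_velocity(sim_pos_now, sim_pos_prev, offset, frame, dt=1.0 / 24.0):
    """Velocity valid for motion blur after a post-sim move.

    sim_pos_prev is the previous simulated position, stored into a user
    variable during the simulation so the post-sim tree can read it.
    """
    final_now = offset(sim_pos_now, frame)        # where the point ends up this frame
    final_prev = offset(sim_pos_prev, frame - 1)  # where it ended up last frame
    return tuple((a - b) / dt for a, b in zip(final_now, final_prev))
```

For example, if the post-sim tree slides every point along X by 0.1 units per frame, the corrected velocity picks up that extra motion, which the raw simulated velocity would miss.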


Next time we will look at creating the offset strand shapes – circles, stars, etc. Stay tuned!

Higgs Bosons and why you should care.

I’ll get back to some CGI related posts soon… But in the meantime, what’s all this about the Higgs Boson, and why should we care? I mean, the math seems impossibly deep, the experiments staggeringly expensive, and then when we ask physicists to explain how all this matters to us in our daily lives they mumble stuff about the beginning of the universe and dark matter.

I know, it’s annoying, right?

Instead of asking physicists, who understandably are up to their waders in some pretty deep stuff, we should ask some engineers why we should care. The answer becomes something like this:

“If you want to see a day where perhaps things like pod racers or gravity skateboards are a reality, this matters. This is an important step which will help us know if such seemingly fanciful devices are even possible.”

Suddenly, the staggering costs and daunting math make a lot more sense. All this is about advancing knowledge that ultimately lends humanity more capability to DO stuff, from feeding ourselves to making iPads.

This video does a pretty good job of putting the “discovery” into some context:

sphRand and even random distributions on a sphere

While I was taking an excellent math course over at fxPhd, someone asked about the math behind Maya’s sphRand command, and it kicked off a back-and-forth exploration of the difficulties behind getting an even distribution of random points on a sphere. This is useful for any number of applications where random vectors are needed.

[Note: This topic has been covered (better) elsewhere on the web, but I thought it was interesting enough to preserve for myself on this blog. -am]

So how do you generate a random vector in 3D space? The natural inclination is to generate each point from a simple random azimuth and elevation. But because the circumference of the latitude circles decreases from the equator toward the poles, the result is an uneven distribution – points cluster at the poles. Generate a few vectors and things will seem fine, but generate a large set of thousands of points and you get this:
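A quick Python sketch of that naive approach (Python here purely for illustration – the same logic applies in ICE or anywhere else):

```python
import math
import random

def naive_sphere_point():
    """Random azimuth and elevation, converted to a unit vector.
    Looks fine for a handful of points, but clusters toward the poles."""
    azimuth = random.uniform(0.0, 2.0 * math.pi)
    elevation = random.uniform(-math.pi / 2.0, math.pi / 2.0)
    return (math.cos(elevation) * math.cos(azimuth),
            math.sin(elevation),
            math.cos(elevation) * math.sin(azimuth))
```

With this scheme roughly a third of the points land above 60° latitude (north or south), where an even distribution would put only about 13% of them.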

But we need an even distribution. How about this: generate a random value between -1 and 1 for each of X, Y and Z, then normalize those vectors, and again you have a collection of random points on a sphere. But the distribution still isn’t even… you are basically projecting the points of a cubic volume onto the sphere’s surface, so the density bunches up toward the cube’s corner directions:
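Sketched in Python (again, purely illustrative), with an assumed guard against the rare near-zero vector that can’t be normalized:

```python
import math
import random

def cube_normalized_point():
    """Random point in the cube [-1, 1]^3, pushed out to the unit sphere.
    The result is biased toward the cube's corner directions."""
    while True:
        x = random.uniform(-1.0, 1.0)
        y = random.uniform(-1.0, 1.0)
        z = random.uniform(-1.0, 1.0)
        m = math.sqrt(x * x + y * y + z * z)
        if m > 1e-8:  # skip the (vanishingly rare) degenerate point at the origin
            return (x / m, y / m, z / m)
```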



You can solve this problem by rejecting points outside the radius of the sphere (magnitude > 1) as they are being generated, and the result is an even distribution on the sphere. This is known as the rejection method, and it works, but it’s inefficient: you compute a lot of rejected points. Additionally, as a practical matter, “N iterations” yields some number of vectors less than N, making it difficult for such a function to produce a specified total number of vectors.
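In Python, the rejection method looks something like this – note that only about 52% of candidates survive (the volume of the unit ball divided by the volume of the 2×2×2 cube):

```python
import math
import random

def rejection_sphere_point():
    """Sample the cube until the point falls inside the unit ball,
    then normalize; the survivors are evenly distributed on the sphere."""
    while True:
        x = random.uniform(-1.0, 1.0)
        y = random.uniform(-1.0, 1.0)
        z = random.uniform(-1.0, 1.0)
        m2 = x * x + y * y + z * z
        if 1e-12 < m2 <= 1.0:  # reject points outside the ball (roughly 48%)
            m = math.sqrt(m2)
            return (x / m, y / m, z / m)
```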


Enter the trigonometric method (something of a misnomer, seeing as the method actually falls out of the integral of a surface of revolution).

sin(θ) = υ(-1,1)

And Φ = υ(0,2π)

Where υ(a,b) is a uniform random scalar between a and b.

So, converting from spherical to Cartesian coordinates, you get:

x = cos(θ) cos(Φ)

y = sin(θ)

z = cos(θ) sin(Φ)


And it works.
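Here it is in Python. The key fact (Archimedes’ hat-box theorem) is that for an even distribution on a sphere, the y coordinate – i.e. sin(θ) – is itself uniformly distributed, which is exactly what this sampling exploits:

```python
import math
import random

def trig_sphere_point():
    """Even distribution on the unit sphere with no rejection:
    sin(theta) uniform in [-1, 1], phi uniform in [0, 2*pi)."""
    sin_theta = random.uniform(-1.0, 1.0)
    cos_theta = math.sqrt(1.0 - sin_theta * sin_theta)
    phi = random.uniform(0.0, 2.0 * math.pi)
    return (cos_theta * math.cos(phi),
            sin_theta,
            cos_theta * math.sin(phi))
```

Each call consumes exactly two random numbers and produces exactly one vector, so asking for N points yields N points – which fixes the rejection method’s practical drawback.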


Digression: if you want to get down to the heart of things, getting a random scalar value – or more specifically a repeatable pseudo-random value based on a specified seed – is its own challenge. Here’s a snippet of GLSL code, a function written by Stefan Gustavson, which generates a “random” scalar value from a scalar seed input… (cos and mod should be familiar; the fract function returns the fractional (decimal) portion of its input.)

float randomizer(const float x)
{
    // fold the seed into a manageable range, then into one period of cos
    float z = mod(x, 5612.0);
    z = mod(z, 3.1415927 * 2.0);
    // scale up and keep only the fractional part to scramble the result
    return fract(cos(z) * 56812.5453);
}