
sphRand and even random distributions on a sphere

While taking an excellent math course over at fxPhd, someone asked about the math behind Maya’s sphRand command, and it kicked off a back-and-forth exploration of the difficulty of getting an even distribution of random points on a sphere. This is useful for any number of applications where random vectors are needed.

[Note: This topic has been covered (better) elsewhere on the web, but I thought it was interesting enough to preserve for myself on this blog. -am]

So how do you generate a random vector in 3D space? The natural inclination is to generate each point from a simple random azimuth and elevation. But because the circumference of the latitude circles shrinks from the equator towards the poles, the result is an uneven distribution: equal steps in elevation cover less and less surface area as you approach the poles. Generate a few vectors and things will seem fine, but generate a large set of thousands of points and you get an obvious clustering of points at the poles.
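The clustering is easy to measure. Here is a quick Python sketch of the naive method (the function name and band test are mine, purely for illustration, not Maya’s actual implementation):

```python
import math
import random

def sphrand_naive(rng):
    """Naive spherical sampling: uniform azimuth and uniform elevation.
    This does NOT give a uniform distribution over the sphere's surface."""
    az = rng.uniform(0.0, 2.0 * math.pi)
    el = rng.uniform(-0.5 * math.pi, 0.5 * math.pi)
    return (math.cos(el) * math.cos(az),
            math.sin(el),
            math.cos(el) * math.sin(az))

# For a truly uniform distribution, the fraction of points in the polar
# bands |y| > 0.9 would be 0.1. Uniform elevation oversamples the poles,
# so the naive method lands closer to 0.29.
rng = random.Random(0)
n = 100000
frac = sum(abs(sphrand_naive(rng)[1]) > 0.9 for _ in range(n)) / n
```

Almost a third of the samples end up in the top and bottom 10% of the sphere's height range, which is the polar clumping visible in the plot.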

But we need an even distribution. How about this: generate random values between -1 and 1 for each of X, Y and Z, then normalize those vectors, and again you have a collection of random points on a sphere. But the distribution still isn’t even: you are basically projecting the points of a cubic volume onto the sphere’s surface, so directions toward the cube’s corners are oversampled.

You can solve this problem by culling points outside the radius of the sphere (magnitude > 1) as they are generated, and the result is an even distribution on the sphere. This is known as the rejection method, and it works, but it’s inefficient: you compute a lot of rejected points. Additionally, as a practical matter, N iterations yield some number less than N vectors, making it difficult to specify the total number of vectors to be created by such a function.
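A minimal sketch of the rejection method in Python (the function name and seed handling are mine):

```python
import math
import random

def sphrand_rejection(n, seed=0):
    """Uniform random unit vectors via rejection sampling: draw candidates
    in the cube [-1, 1]^3, discard any outside the unit sphere, and
    normalize the survivors onto the sphere's surface."""
    rng = random.Random(seed)
    points = []
    while len(points) < n:
        x, y, z = (rng.uniform(-1.0, 1.0) for _ in range(3))
        m2 = x * x + y * y + z * z
        # Keep only samples inside the sphere, and not degenerately close
        # to the origin (which would make the normalization unstable).
        if 1e-12 < m2 <= 1.0:
            m = math.sqrt(m2)
            points.append((x / m, y / m, z / m))
    return points
```

On average only π/6 (about 52%) of candidates survive the cull, which is the inefficiency described above: you do roughly twice the work per accepted vector.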

Enter the trigonometric method (something of a misnomer, since deriving it involves integrating the surface of revolution rather than just trigonometry).

sin(θ) = υ(-1, 1)

Φ = υ(0, 2π)

where υ(a, b) is a uniform random scalar between a and b. Note that it is sin(θ), not θ itself, that is drawn uniformly: that choice is exactly what compensates for the shrinking latitude circles.

Then convert from spherical to Cartesian coordinates, with cos(θ) = √(1 − sin²(θ)):

x = cos(θ) cos(Φ)

y = sin(θ)

z = cos(θ) sin(Φ)

And it works.

Digression: if you want to get down to the heart of things, getting a random scalar value, or more specifically a repeatable pseudo-random value based on a specified seed, is its own challenge. Here’s a snippet of GLSL code, a function written by Stefan Gustavson, which generates a “random” scalar value from a scalar seed input. (cos and mod should be familiar; the fract function returns the fractional (decimal) portion of its input.)

float randomizer(const float x)
{
    float z = mod(x, 5612.0);
    z = mod(z, 3.1415927 * 2.0);
    return fract(cos(z) * 56812.5453);
}

Terrain Generation Basis Functions

Having established a good start on hydraulic erosion, I moved to another area necessary for any good terrain toolkit: establishing a set of basis functions, from which you can achieve the natural complexity needed by mixing various basis functions based on criteria such as latitude, slope, height, other fractals, etc. Each basis function has its own character and “look,” so to get a good heterogeneous result it’s valuable to be able to draw from a number of different functions. For example, here is a set of simple spheres deformed by a compound in which I use ICE’s excellent Worley noise as a basis, iterated over a user-defined number of octaves, some of which are further modulated by a simplex fractal. You can see the character of the Worley noise clearly, but where the simplex modulation comes into play you get a much more interesting result:
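The ICE compounds themselves aren’t reproducible here, but the octave iteration described above is the classic fractal sum; a generic Python sketch (all names and default values are mine):

```python
def fractal_sum(basis, x, y, octaves=5, lacunarity=2.0, gain=0.5):
    """Sum a 2D basis function over several octaves: each octave scales
    the frequency up by `lacunarity` and the amplitude down by `gain`."""
    amplitude, frequency, total = 1.0, 1.0, 0.0
    for _ in range(octaves):
        total += amplitude * basis(x * frequency, y * frequency)
        amplitude *= gain
        frequency *= lacunarity
    return total
```

In the setup described above, `basis` would be a Worley (cellular) noise, and parameters like `gain` or `octaves` could themselves be modulated per point by a simplex fractal to break up the pure Worley character.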

Less “terrain-like” is this output from a reticulation compound, which uses as its basis the “computational” noise described by Stefan Gustavson et al.:

Interestingly, I approached this project having not read Ken Musgrave’s various writings on the subject of terrain generation outside of white papers. Once I did, I found that much of what I had discovered anecdotally he had thought through in detail long ago. This is awesome, both because it shows that my thinking has validity (after all, few have spent more time and energy on the subject than he has) and because it gives me more threads to pull. I now have a series of compounds at my disposal that do various “terrain stuff”; now I can take a step back and decide how to assemble those experimental compounds into a more user-friendly and extensible set of ICE building blocks…

ICE Terrain Project

My personal project of creating a series of nodes useful for terrain generation in Softimage ICE is going well. Here’s a terrain made with two of the compounds: it is based on a pyroclastic noise with slope suppression, followed by 24 iterations of a compound implementing a fast hydraulic erosion scheme I’m playing with. The erosion has some tendency to create bands where edges flow in an even grid, due to my use of Von Neumann sampling in the erosion routine; it’s fast, but I may have to add an option to take a speed hit and sample more thoroughly. After this I will implement a more thorough and more traditional hydraulic erosion scheme and compare the two.
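Not the compound itself, but a minimal illustration of the kind of Von Neumann (4-neighbour) relaxation step involved, with all function and parameter names my own:

```python
def erode_step(height, talus=0.01, rate=0.5):
    """One naive erosion-style pass over a 2D height grid: each cell sheds
    material toward its lowest Von Neumann neighbour when the height
    difference exceeds a talus threshold. Total mass is conserved."""
    rows, cols = len(height), len(height[0])
    out = [row[:] for row in height]
    for i in range(rows):
        for j in range(cols):
            # Von Neumann neighbourhood: the 4 orthogonal neighbours.
            nbrs = [(i + di, j + dj)
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))
                    if 0 <= i + di < rows and 0 <= j + dj < cols]
            li, lj = min(nbrs, key=lambda p: height[p[0]][p[1]])
            drop = height[i][j] - height[li][lj]
            if drop > talus:
                moved = rate * (drop - talus) / 2.0
                out[i][j] -= moved
                out[li][lj] += moved
    return out
```

Because every cell always samples the same fixed axis-aligned stencil, repeated passes tend to develop the grid-aligned banding mentioned above; sampling a larger or randomized neighbourhood trades speed for isotropy.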

Leadership

Disney Animation is promoting the use of Partio, an open source particle library for use across various toolsets. You can find out more here.

ICE – stretchy strands

Doing some stuff with strands. This is still pretty primitive; I have to get dynamics working with strands that are constrained at either end, come up with a scheme for breaking strands, tweak the texture approach, etc.

SIGGRAPH 2010 Redux

Well, another SIGGRAPH has come and gone. As I await my flight at LAX, I might as well post some initial thoughts. The journalists will cover the changes to products and the like, so I’ll focus on the overall experience and impressions.

First, the trivial… The weather was cool, which was a bit startling. And SIGGRAPH 2011 is going to be held in Canada. Wtf, we seem to have broken the long-standing tradition of baking SIGGRAPH attendees in desert or tropical heat. Adding to the surreal feeling, the exhibits area was once again smaller, perhaps because Autodesk has consumed half the industry…

Speaking of Autodesk, this year their area was dominated by a pretty mediocre and endlessly repeating spiel about “virtual production,” which basically tied together a hypothetical production workflow nobody actually uses but which gave them an overly polished way to show some basic new features of Maya and Mudbox. XSI and Max were, well, not really visible in the slightest. While I’m sure some lip service was given to them, you sure couldn’t tell from a casual survey of their booth. And since there was no Softimage user group event, SIGGRAPH was pretty much XSI-free. Heck, LightWave was front and center by comparison. The only real mention of XSI was the announcement that it’s part of the basic bundle, phrased in such a way that ICE seemed like the only reason it was included at all. Thanks for the snub, Autodesk. I use Maya in production, sure, but frankly it’s showing its age, so why cram it down our throats when XSI is so robust?

(Edit – Some folks at Autodesk disagree with my assessment. Sorry, but I watched demo people tweak the weighting of a character rigged in Maya over and over on the big screen and didn’t see XSI anywhere. I know there is great work being done by the Softimage folks – but marketing matters, and I’m reporting what I saw… which was that XSI was lacking visibility. That may step on taboos or be unmentionable in Autodesk circles. Tough.)

The show was far more interesting at the smaller booths, where stereo, 3D printing, and GPU rendering were all engaged in healthy competition. AMD’s booth was home to Mach Studio, which was showing off its exciting new shader construction tools, and nearby The Pixel Farm was showing off its rather bizarre companion to PFTrack, which adds a node-based workflow (cool) but lacks some of the core power of PFTrack (wha?). Desktop 3D printers were everywhere, including the MakerBot, which was cleverly if less visibly off the exhibit floor, hanging out near Emerging Technologies with some other delinquents like the GigaPan. How did that happen?

Emerging Technologies was its usual combination of really cool stuff that doesn’t quite work, stupid stuff with no practical application beyond provoking thought to a greater or lesser degree, and some eye-popping technology that had me muttering “we wants it, my preciouss.” Foremost in that latter category was the 3D volume display presented by Sony. Remember the 3D volume displays of a few years back that relied on spinning plates of LEDs? The ones that contained murky glimpses of CG objects you could walk around and wish were actually working? Well, they work now. Really well, in fact. As in, they feel ready for market. Nice work, Sony; when you guys aren’t awash in marketing droids and hype you can still make some amazing stuff.

The papers and course presentations, which as far as I’m concerned are the living, beating heart of SIGGRAPH, were quite good, and I’m happy to say that heart is still strong. There were the usual crowd-pleasers, like an excellent panel on Tron which had the added benefit of showing us 8 minutes of unseen footage (it’s looking great). But more importantly, the more academic presentations were still there, sharing and pushing the state of the art. I particularly enjoyed a half-day course on volumetrics.

Disney had a highly visible and impressive showing on all fronts, from excellent presentations on procedural hair and trees in Rapun… oops, “Tangled,” to the Tron presentation, etc. They did a good job of re-establishing themselves in my mind as leaders in the industry, both artistically and technically. MPC, DD, Tippett and the Mill also brought their A-games.

The parties were parties, techies, geniuses and academics abounded, and students still want to get hired… All in all, I’d say that while SIGGRAPH is still the incredible shrinking con, it was just as valuable as ever where it counts: in getting a feel for the state of the industry and the technology, in seeing colleagues and friends who are rarely in the same place at once, and in sharing and extending the techniques, tools and insights so vital to computer graphics. I plan to continue attending, and suggest that anyone serious about computer graphics should do the same. Remember, people: judging SIGGRAPH by the size of the showroom is missing the point entirely. We don’t really need sales pitches, but you won’t find any other venue where studio professionals and academics mingle and share like they do at SIGGRAPH. See you there next year!

Diffusion limited aggregation redux

Over time people have been asking me to share the DLA compound I wrote waaay back. So here it is.

The compound can be found in the downloads section.

Diffusion limited aggregation is a natural phenomenon in which particles in solution build up over time: as particles are deposited, they limit the locations where subsequent particles can be deposited, resulting in pseudo-random growth of deposits with characteristic nodules and ‘fjords’. You can see examples of DLA systems all over nature, from veins of copper to urban growth patterns. In this simple ICE implementation, particles are emitted from geometry and then pulled towards a “seed” point, namely the point cloud the compound is applied to. If there are no existing points in the cloud, the first will be deposited at the origin. This causes DLA structures to build towards the provided surface and then to spread across it or inside its volume. The compound has a turbulence node which adds a random tropism, as well as a vector input for user-directed tropisms. Please be aware this was never intended as more than an experiment: it works fine, but is a pretty simple implementation.
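Purely for illustration, here is a minimal 2D lattice version of the general DLA idea in Python. The structure and parameters are mine and are unrelated to the ICE compound’s implementation:

```python
import random

def dla(n_points, size=31, seed=1):
    """Minimal 2D lattice DLA: random walkers wander over the grid until
    they stand next to the aggregate, then deposit. The aggregate is
    seeded with a single particle at the grid centre."""
    rng = random.Random(seed)
    occupied = {(size // 2, size // 2)}            # the seed particle
    moves = ((1, 0), (-1, 0), (0, 1), (0, -1))
    while len(occupied) < n_points:
        # Launch a walker at a random free cell.
        x, y = rng.randrange(size), rng.randrange(size)
        if (x, y) in occupied:
            continue
        while True:
            if any((x + dx, y + dy) in occupied for dx, dy in moves):
                occupied.add((x, y))               # stick to the aggregate
                break
            dx, dy = rng.choice(moves)
            x, y = x + dx, y + dy
            if not (0 <= x < size and 0 <= y < size):
                break                              # lost off-grid; relaunch
    return occupied
```

Even this toy version produces the characteristic branching: each deposited particle shadows the cells behind it, so growth concentrates at the tips and the ‘fjords’ stay hollow.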

To use: apply the compound to a point cloud (typically with a single point) in a simulated ICE tree…