
Fibonacci, Phi, and Nature

There are a billion discussions of the Fibonacci sequence, phi, the golden section etc. So I’m going to let you browse the wonderful web and largely find out about it for yourself (try here), with only this brief summary…

The Fibonacci sequence is a series of numbers in which each number is the sum of the two before it: 0, 1, 1, 2, 3, 5, 8, 13… i.e. F(n) = F(n-1) + F(n-2).

If you take the ratio between any two consecutive numbers in the Fibonacci sequence, those ratios increasingly converge towards a single value, 1.6180339887… (memorize it!), which is called the “Golden Number” or Phi: Φ
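
To see the convergence for yourself, here’s a tiny Python sketch (just an illustration, not an ICE compound) printing the ratio of consecutive Fibonacci numbers against phi:

def fib(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

phi = (1 + 5 ** 0.5) / 2          # 1.6180339887...

for n in range(2, 12):
    ratio = fib(n + 1) / fib(n)   # ratio of consecutive Fibonacci numbers
    print(f"F{n+1}/F{n} = {ratio:.7f}   error = {abs(ratio - phi):.2e}")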

This ratio is found throughout nature, as well as classical art, mathematics, etc. It crops up in an amazing number of places. A logarithmic spiral that grows by a factor of Φ every quarter turn is called a “golden spiral”, for instance, and can be found in seashells, seed pods, flowers, pinecones and, as I said before, lots and lots of websites. If you’ve had a certain amount of coffee, this video might be illuminating:

am_DandilionWeb

Other artists using ICE have put out tutorials and compounds relating to these spirals; browse around (hint: I’m one of them).

Recently, as I fiddled around, I came across an interesting property of these kinds of distributions: they are a very efficient way to pack particles evenly on a surface. This matters to an effects artist, because not only is a large part of the job mimicking nature, but distributing points efficiently on surfaces lets us maximize the number of non-overlapping particles we emit.

So I built some compounds: one to calculate phi (or simply return it as a stored constant, depending on the accuracy needed), another to convert phi into angles in degrees and radians (the “golden angle”), and finally an emitter that uses them. Hooray, it did indeed emit a sphere of particles packed efficiently. Even better, since it doesn’t need a “Generate Sample Set”, it can emit millions of particles much faster than emitting from spherical geometry, with no dependence on the resolution of a polygonal sphere. And this “phi” distribution has a nice, natural look.
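
For the curious, here’s a rough Python sketch of the idea behind the emitter (the compounds themselves are ICE nodes, so treat this as an illustration of the math, not the compound): step the azimuth by the golden angle while spreading heights evenly from pole to pole.

import math

def phi_sphere(n, radius=1.0):
    golden_angle = math.pi * (3.0 - math.sqrt(5.0))  # ~2.39996 rad, ~137.5 deg
    points = []
    for i in range(n):
        z = 1.0 - 2.0 * (i + 0.5) / n               # even spread from pole to pole
        r = math.sqrt(max(0.0, 1.0 - z * z))        # radius of that latitude circle
        theta = golden_angle * i                     # rotate by the golden angle each step
        points.append((radius * r * math.cos(theta),
                       radius * r * math.sin(theta),
                       radius * z))
    return points

pts = phi_sphere(1000)   # 1000 evenly packed positions, no sample set required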

 

phiDistribution

Here are the compounds, enjoy: ICE_phiDistribution

 

Example – Camera Planes and Projections in ICE

Here’s a quick example of a number of handy tricks in ICE.

A compound I shared earlier on the Softimage mailing list is used to create a grid of particles on a camera plane at a definable resolution, as if each particle were a pixel. The basic camera attributes like FOV etc. are respected. This bit uses a little simple trig to identify the corners of the camera frustum at any distance from the camera, which can be incredibly useful. I’ll try to make some time to go over this in a future post.
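
For reference, here’s roughly what that trig looks like in Python (an illustration only – the function name and the assumption of a horizontal FOV are mine, not what’s in the compound):

import math

def frustum_corners(fov_deg, aspect, distance):
    # Corners of the view frustum at 'distance', in camera space
    # (camera at the origin looking down -Z, Y up). 'aspect' = width / height.
    half_w = distance * math.tan(math.radians(fov_deg) * 0.5)
    half_h = half_w / aspect
    z = -distance
    return [(-half_w,  half_h, z), ( half_w,  half_h, z),
            ( half_w, -half_h, z), (-half_w, -half_h, z)]

# A "pixel grid" of particles on the camera plane is then just a lerp
# across those corners at the desired resolution.
print(frustum_corners(60.0, 16 / 9, 10.0))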

A simple raycast is then used to project particles onto geometry and to color those particles based on the depth of the projection.

example_cameraDataInICE

A lot can be done with the (super) simple techniques in this scene, trust me. Perhaps the simplest and yet most useful is to cull particles outside the camera frustum before writing a cache – if you are dealing with a shot with a locked-down camera, this can reduce cache data by massive amounts. If you are on a stereo production and have access to depth maps, you can use that information to cull particles which are “behind” footage elements, or even have particles react to the “surface” of footage elements. Very cool! Knowing where a particle or object is in camera “screen space” is easy and has a lot of uses, too.
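
As a rough illustration of the culling idea (again a Python sketch rather than the ICE graph – the margin parameter and the horizontal-FOV assumption are mine):

import math

def in_view(p_cam, fov_deg, aspect, margin=0.05):
    # Keep a camera-space point only if it projects inside the image rectangle,
    # with a little margin for motion blur / filter width.
    x, y, z = p_cam
    if z >= 0.0:                      # behind the camera (camera looks down -Z)
        return False
    half_w = -z * math.tan(math.radians(fov_deg) * 0.5)
    half_h = half_w / aspect
    nx, ny = x / half_w, y / half_h   # normalized: [-1, 1] is on screen
    lim = 1.0 + margin
    return -lim <= nx <= lim and -lim <= ny <= lim

points_cam = [(0.0, 0.0, -5.0), (50.0, 0.0, -5.0)]
kept = [p for p in points_cam if in_view(p, 60.0, 16 / 9)]
print(kept)   # only the first point survives the cull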

File: Softimage 2013, ~2.5MB example_cameraDataInICE

Example – Particle Clumping

example_particleClumps

The simulation is “meh”, but it was a test. This scene wasn’t originally about particle clumping, either: that was just put in as a way to play with the idea. I was really using it as a testbed for a homebrew collision/bounce node, which functions, but in the end that part wasn’t anything particularly special or sophisticated. The clumping part is more interesting in the long run.

Someone recently asked online about clumping particles, and I recalled that I had this scene on hand. So here it is, built by yours truly with no intention of it ever being anything other than personal experimentation… with some comments added after the fact.
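
If you just want the gist without opening the scene, one simple way to clump particles – not necessarily exactly what the scene does – is to pick a few clump centers, assign each particle to its nearest center, and pull it toward that center by a weight. A hedged Python sketch:

import random

def clump(points, num_clumps=5, weight=0.6, seed=1):
    rng = random.Random(seed)
    centers = rng.sample(points, num_clumps)          # clump centers picked from the cloud

    def nearest(p):
        return min(centers, key=lambda c: sum((a - b) ** 2 for a, b in zip(p, c)))

    out = []
    for p in points:
        c = nearest(p)
        # pull the point toward its clump center by 'weight'
        out.append(tuple(a + (b - a) * weight for a, b in zip(p, c)))
    return out

pts = [(random.uniform(-1, 1), random.uniform(-1, 1), 0.0) for _ in range(200)]
print(clump(pts)[:3])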

The file (softimage2013 ~8mb): example_createParticleClusters

Update – MatCap (litsphere) shading in Softimage 2013

A discussion about Mudbox and ZBrush-style shading arose on the Softimage mailing list. Their signature look comes from “MatCap” shaders (originally known as lit-spheres). It’s a popular way to get a custom lighting solution from a texture, in realtime, which is particularly useful when modeling – you can get a nice clay or Sculpey “look” on geometry in realtime. It’s also useful for creating non-photorealistic (NPR) looks in realtime, toon shading, etc.

As mentioned in an earlier post, the grey-ball shader in mental ray can render litsphere textures, and a user suggested that in the high quality viewport you can get the desired result by plugging the metaSL node “Map_ball” into the environment channel. The problem is that the result (on my machine, at least) appears in world space. A proper litsphere should be in view space.
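
For anyone wanting the nuts and bolts: the core of a litsphere lookup is just remapping the view-space normal into texture UVs. A hedged, purely illustrative Python sketch of the mapping (a world-space normal fed into the same formula gives the “stuck to the world” look described above):

import math

def matcap_uv(normal_view):
    # Remap the view-space normal's x/y from [-1, 1] into [0, 1] texture UVs.
    x, y, z = normal_view
    length = math.sqrt(x * x + y * y + z * z) or 1.0
    x, y = x / length, y / length
    return (x * 0.5 + 0.5, y * 0.5 + 0.5)

print(matcap_uv((0.0, 0.0, 1.0)))   # a normal facing the camera samples the texture center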

But it called my attention to something important – almost all of the metaSL nodes used in Mental Mill are now accessible in the render tree and can be used similarly, meaning that for most intents and purposes all Softimage users now have Mental Mill. Which is awesome.

But we still needed a solution for matcap functionality in the high quality viewport. So I bit the bullet and wrote a metaSL shader which seems to do the trick. It can be used for both realtime performance in the high quality viewport as well as full renders in mental ray (and any other platform supporting metaSL.)

Update: Daniel Brassard kindly fixed some bugs, the new version is now available below. Thanks Daniel!

Here’s the shader (MetaSL ~2kb): litspherev11

More examples of the shader:

Example – Display Spotlight Data

This scene shows how ICE can be used to display custom information about aspects of a scene. In this case, the cone of a spotlight is displayed, with an option to display a projection of the spotlight on the surface of geometry. A second example shows how the same setup can be used for a near and far attenuation display.
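
The math behind the cone display is simple. As a hedged sketch (Python, purely illustrative, parameter names mine): the radius of the cone at any distance is that distance times the tangent of half the cone angle, and a circle of that radius gives the widget its points.

import math

def cone_circle(cone_angle_deg, distance, segments=32):
    # Points of the spot cone's cross-section at 'distance', in the light's
    # space (light looking down -Z).
    r = distance * math.tan(math.radians(cone_angle_deg) * 0.5)
    pts = []
    for i in range(segments):
        a = 2.0 * math.pi * i / segments
        pts.append((r * math.cos(a), r * math.sin(a), -distance))
    return pts

print(len(cone_circle(40.0, 10.0)))   # 32 points to draw as a display widget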

One great thing about ICE is how easy it is to re-use and re-task your “code.” In this example, it is simple to save the null containing the ICE operator as a model. Any time you need this display, import the model and give it your spotlight’s name, and you’re done. Or better, write a script which does this for you so all you have to do is pick the light. – AM

Here’s the file (softimage 2013, 276kb): ICEspotlightDisplay

Update

Petr Zloty sent me a useful tip: if you place the ICE operator on an empty pointcloud rather than a null, the results display properly in any viewport. This is really good to know – the problem of ICE display attributes on nulls only appearing in wireframe views is an annoyance I’ve bumped against more than once. Thanks Petr!

Thanks Petr for the tip and the screenshot!

Example – Ridge Turbulence and Whirlpool Fun

Rob Chapman posted a cool whirlpool deformer to the “Resource Dump” on SI-Community here. Since I had been doing a lot with logarithmic spirals recently I decided to make one from scratch and compare the two. Here’s the result.

Instead of Rob’s wave deformer, and since it’s fun to share things out, there’s a “ridged turbulence” compound in there. Here’s what it looks like when used as a deformer:
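
If you’re curious what “ridged” turbulence means: take the absolute value of a signed noise, flip it so the creases become sharp ridges, and sum octaves. A small Python stand-in (the value noise below is just a placeholder for ICE’s turbulence node):

import math

def _hash(ix, iy):
    # Cheap hash -> signed pseudo-random value in [-1, 1]
    n = ix * 374761393 + iy * 668265263
    n = (n ^ (n >> 13)) * 1274126177
    return ((n & 0x7fffffff) / 0x7fffffff) * 2.0 - 1.0

def _value_noise(x, y):
    ix, iy = math.floor(x), math.floor(y)
    fx, fy = x - ix, y - iy
    sx, sy = fx * fx * (3 - 2 * fx), fy * fy * (3 - 2 * fy)   # smoothstep weights
    n00, n10 = _hash(ix, iy), _hash(ix + 1, iy)
    n01, n11 = _hash(ix, iy + 1), _hash(ix + 1, iy + 1)
    return (n00 + (n10 - n00) * sx) * (1 - sy) + (n01 + (n11 - n01) * sx) * sy

def ridged_turbulence(x, y, octaves=4, lacunarity=2.0, gain=0.5):
    amp, freq, total = 1.0, 1.0, 0.0
    for _ in range(octaves):
        total += amp * (1.0 - abs(_value_noise(x * freq, y * freq)))  # the "ridge"
        freq *= lacunarity
        amp *= gain
    return total

print(ridged_turbulence(1.3, 2.7))   # displace geometry by this value for the ridged look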

 And here’s the file (softimage 2013 ~160kb): alt_whirlpool

Example – Taper Deform

Unlike most of the examples, I built this specifically to show how a deformer can be constructed, with simple visual guides. So the good news is that it’s all nicely commented. This example shows the structure of a deformation, some tricks for making viewport guides, and the taper function itself (originally published by Alan Barr; it’s quite simple).
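
For reference, the taper boils down to scaling X and Z by a factor driven by the point’s position along local Y. A hedged Python sketch (parameter names are mine, not the compound’s):

def taper(point, y_min, y_max, amount):
    # Scale a point's X/Z by a factor blended from 1 (at y_min) to 'amount' (at y_max).
    x, y, z = point
    t = (y - y_min) / (y_max - y_min)    # 0 at the base, 1 at the top
    t = min(1.0, max(0.0, t))            # clamp outside the deformed region
    s = 1.0 + (amount - 1.0) * t
    return (x * s, y, z * s)

print(taper((1.0, 2.0, 1.0), 0.0, 2.0, 0.25))   # the top ring shrinks to 25%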

The downside is that I didn’t want to get into matrix transforms for the example, so this implementation only works on the object’s local Y axis. That limits its production value (not that there’s a great need for another taper deformer). It’s an example, not a tool – adding the transforms to make the taper axis arbitrary isn’t too difficult, but as I was making this I realized it would make the whole thing unreadable, and I was out of time – I have paying work I need to get to. ^_^ So here it is, as-is. Cheers – AM

File (softimage 2013 ~270kb): Example_TaperDeform

Update

I had a little time and hated leaving this in a state that wasn’t very useful, so I added the additional kinds of control I usually have in production compounds (mainly the optional ability to deform on an arbitrary axis supplied by a control object like a null, plus minor cosmetics).

A taper deformer isn’t high-tech or exciting, but it’s useful in everyday practice, and more to the point this structure can be reused for many kinds of deformations. For instance, once I had the taper structure it was pretty simple to replace the internal “taper” formula with one that does a twist:
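
To show what I mean, here’s the same structure with the taper formula swapped for a twist (again just an illustrative Python sketch, not the compound itself):

import math

def twist(point, y_min, y_max, max_angle_deg):
    # Rotate X/Z around the local Y axis by an angle proportional to height.
    x, y, z = point
    t = (y - y_min) / (y_max - y_min)
    t = min(1.0, max(0.0, t))
    a = math.radians(max_angle_deg) * t
    ca, sa = math.cos(a), math.sin(a)
    return (x * ca - z * sa, y, x * sa + z * ca)

print(twist((1.0, 2.0, 0.0), 0.0, 2.0, 90.0))   # the top rotates a quarter turn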

I’ll share the twist version out eventually; in the meantime here’s the full taper compound. (If you look inside you’ll see what I meant about the arbitrary axis adding a lot of seeming complexity – it’s really not complex, ICE just tends to look that way when a lot of different things are all connected.)

File (softimage 2013, 379kb): example_TaperDeform2

Vray strand testing

Been playing with Vray. I like it. This is an “out of the box” render of some strands, with no effort on my part to light it properly, which really is a requirement for fine strands/hair etc. Still, not bad – it has a nice hair shader. I’m less interested in character hair than in fine-detail fibers as VFX elements, so I’ll have to see what other looks I can coax out.

This number of strands doesn’t even come close to taxing Softimage – it’s still interactive on my box – but rendering it at 2K took about 20 minutes without any optimization. I’d guess that with some effort adjusting settings I could get a better look at about twice the speed, which isn’t bad considering this is a GI render.

It’s a bit splotchy and the exposure is meh – chalk that up to me being a Vray noob. Looks like I was a bit too aggressive with the jpeg compression, too… The full render is much cleaner, and of course this is just a beauty pass, with no extra passes or color correction. A decent lighter could no doubt take this to another level. Hey, I’m a VFX guy; I hardly ever get to render stuff.

I’d like to get a feel for how much impact more strands will have on render time. I may have to try a “stress test” over the weekend to see what the reasonable limit for a single machine is in terms of numbers of strands I can get into a 2k half float render while still maintaining this look as a minimum quality…

Example – Shapes in ICE and the Debugging Nodes

In the “postSim” tutorial we made shapes out of strands by constructing arrays of point positions. Many ICE users, myself included, forget that a bunch of shapes are already provided which can be used in this fashion. They live in the “Debugging” section, because arrays like these can also be used with ICE attribute display properties to create onscreen widgets (hint: …or to add points to a pointcloud…).

I’ve found the factory nodes quite handy for a number of things, but I wanted some other shapes of my own (such as a star and a spiral), so I made them, and with some digging adjusted what was already available to suit my personal preferences. Here is a scene with a number of them, each packaged as a compound.
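
As a taste of what building these point arrays involves, here’s a hedged Python sketch of a star and a logarithmic spiral as lists of positions (names and parameters are mine, not the compounds’):

import math

def star(points=5, outer=1.0, inner=0.4):
    # Alternate outer/inner radii around a circle; repeat the first point to close the loop.
    pts = []
    for i in range(points * 2 + 1):
        r = outer if i % 2 == 0 else inner
        a = math.pi * i / points - math.pi / 2
        pts.append((r * math.cos(a), r * math.sin(a), 0.0))
    return pts

def log_spiral(turns=3.0, growth=0.2, samples=100):
    # Radius grows exponentially with the angle.
    pts = []
    for i in range(samples):
        a = 2.0 * math.pi * turns * i / (samples - 1)
        r = math.exp(growth * a)
        pts.append((r * math.cos(a), r * math.sin(a), 0.0))
    return pts

print(len(star()), len(log_spiral()))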

Freebies!

And here’s the file (softimage 2013 322kb): ICE Debug Elements

While experimenting around with these elements as display objects, I made a “protractor” compound.

It’s of limited production use because of a current problem with how Softimage displays these elements in a shaded or hidden-line view, and because it wasn’t particularly designed as a tool – it just evolved while I messed around. But it serves as an example of how you can use the “Debug” nodes to create visual feedback (another good example of these debug elements is the factory “bend” deformer in ICE).

Judicious use of these display attributes can make a packaged compound much more useful, as well as act as a handy jumping off point to make interesting shapes with strands etc. Enjoy! – AM

File (softimage 2013 270kb): exampleICEprotractor

The spiral compound plus the postSim tutorial make a nice basis for a “cloud chamber”-like wallpaper… :)

Example – PostSim Needles

During the making of the film “Barnyard”, which happened to be around the time ICE was first being conceptualized and built, a number of effects were identified that were a challenge at the time. Some of those later became some of the first demonstration compounds – presumably Helge and others still had those challenges in mind: making footprints in mud, falling leaves, rain and snow interacting with characters, etc.

One effect involved a bunch of hay getting dumped on a character. At the time, Hans Payer completed the effect with some nice scripting and Syflex tricks. In ICE it’s now pretty easy, so when a thread about a similar effect came up on the Softimage list – so soon after I had done the post-sim tutorial – I gave it a quick go. They basically asked for a way to use strands with the “falling leaves” compound. Here’s the result… while the collisions are not accurate the way RBDs would be, I hope it’s food for thought at least. :D

http://www.vimeo.com/56985570

Caveat: as with similar tricks using the post-simulation tree, motion blur is a concern… this is the kind of “quickie” effect that can often get you by, but hero shots would take a lot more effort – for instance if you needed to clearly see the fibers bend and react to collisions, collide with each other, render with accurate motion blur, etc. That kind of difference in the details is why feature film effects can get so hairy – often you can’t get away with simple cheats like this one.

File: softimage 2013 .scn, 237 kb

example_postSimNeedles