softimage

Example: LK Fabric early test scene

Since LK Fabric is out, I dug up one of the early test scenes from Nike "Evolution" at Royale, in which we developed some of the techniques behind the various looks. This setup is very basic, but it covers some of the key tricks for getting a natural result, and it was used as a starting point for a number of shots.

LKF_demo1b_am

I have replaced the original many-versions-old compounds with ones from Leonard's public release, and have also left in a few "helper" compounds I built which weren't part of his official release. I then removed everything superfluous to the basic effect and commented the resulting ICE tree throughout. Any typos or misspellings are entirely my fault. :D

LKF_demo1d_am


LKF_demo1_am

The actual setup is very simple.

This scene demonstrates:

  • A basic setup of a single evolving swatch of fabric, with the most basic pattern and the modifiers we used most often.
  • Using the “slide profile over U/V” compound and other techniques to shape the leading edge of fabric growth.
  • Using the “offset” core parameter to make the leading strands animate and form a shape for the thread tips.
  • Using a second ICE “post” effect to add per-strand variance and “frizz” effects.


LKF_demo1c_am

Early tests like this were pretty chaotic: we knew the system did a good job of creating a "perfect" weave, so we were pushing in the other direction, adding ways to create chaos and randomness.

Ironically, despite the first briefs being focused on "organic" and "evolving" concepts, the client spent much of the latter half of the job dialing in a more out-of-the-box mechanical look… that's the way it goes sometimes. But it means there is a lot of capability which hasn't been seen yet. I'd love to see people use some of the per-strand and per-thread modifiers, and the ability to create patterns beyond the basic "canvas," to create a more organic, aggressive look.

LKF_demo1e_am


Here's the file (Softimage 2013 scene file, ~0.6MB): LKFabric_AMexample1

LK Fabric released

LK Fabric has been released!

Congrats to Leonard Kotch, who put his heart and soul into this and who put up with my endless demands on the system during production.

Pretty much everything you see in the Nike commercials below was built with this system, with Leonard doing daily updates as we worked. We barely touched some of the possibilities of this system; I hope people will try it out, because it is capable of some truly spectacular effects. Kudos to Royale for being so cool and sharing it out like this!

CG Supervisor for 3 Nike Spots

Two of the three spots. "Evolution" was a small team, three weeks, lighting and effects with Softimage and Arnold.

I love small but intense projects like this.

"Run" was primarily Maya/V-Ray with a touch of ICE. The studio (Royale) is only six years old but advancing fast, and it's been a real pleasure working with them. For their first exploration of ICE, Royale invited in some familiar SI friends – Ciaran Moloney, Steven Caron, Leonard Kotch, Billy Morrison, and yours truly, doing my first gig start to finish as a CG sup (which, with guys like this, mostly involved saying "go for it").

Like the Psyop "Telstra" spot, this commercial essentially required us to create a system for knitting cloth from massive numbers of strands. Leonard Kotch wrote a system which performs many of the same tasks as the Psyop "Entwiner" tool, but he took a slightly different direction; it was fascinating to compare how the two diverged. The progressive animation required for these spots resulted in a pretty flexible and broad system, which we are currently using on the last of the three, due to wrap production soon.

strandlayoutb_s40_062613 NikeEvo_shoe_comp_v001 NIKE_evolution_shot40

Royale has been an enthusiastic and fun group to work with, and it's been great getting to show a studio as strong in design as they are some of the possibilities ICE can bring to jobs like this. Expect to see some version of Leonard's "LKFabric" system gifted to the community before long – very cool, Royale, thanks! (They also throw good parties; their 6th birthday celebration was impressive and… unusual.)


Strands are our friends

And so is Psyop!

This little commercial project from a while back was a LOT of fun. A very small team of us (5 or 6 total, I think) made this (plus a few more shots) in just a few short weeks (two or three, I forget) at the LA studio, using a hybrid Maya/Softimage approach which Psyop does really well. All models and animation were done in Maya and brought into Softimage for lighting with Arnold and additional ICE effects – in this case, the characters and stuffing are entirely strands. No geometry aside from the little plastic eyes and their teeth.

Psyop's lighting supervisor Jonah Friedman wowed me with the system for knitting via strands he built, and with how fast he built it. He also quickly made my "favorite people" list in general; it was a blast working with him and the others there. The system, which we called "Entwiner," builds layers upon layers of strands… these knit characters are built down to every individual fiber. And Arnold powered right through those millions of strands without a blip. Pretty cool. It was so efficient, in fact, that it made sense to make the "stuffing" out of tiny fibers too, which gave it a nice volumetric feel when lit.

The liquid is a simple Lagoa setup, with wet maps generated in ICE. Simple as the commercial was, I was really pleased with how the studio took pains to take their client's ideas and give them the very best. A couple of knit characters could have been faked with geometry and textures, but going that extra mile even when time was so short is what really impressed me, and the combination of Maya/Softimage, ICE and Arnold is a powerful one, as Psyop shows even on small jobs like this. My kind of studio. Thanks for having me, you guys.

Update: Whirlpool and Ridged Turbulence Deformers, Revisited.

am_whirlpoolDeform2


A user on si-community asked how to "move" the deformation in the earlier whirlpool example. Doing so involves a couple of matrix transformations – you basically force the points of the geometry to the global origin, perform your deformation there, and then move the points back to the local space they were in.

It's a simple operation that I haven't really figured out how to illustrate in an intuitive manner yet… about the best I could do was to revise the scene so people can compare a "before" and an "after." The first scene is a working scene – it's where I was assembling the basic deformation, all relative to the global origin. The new scene then goes through the steps of making the deformation "production ready": I clean things up, make the deformation operate in the object's space, and package it all up as compounds.
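If it helps to see the trick outside of ICE, here's a minimal numpy sketch of the same idea; the matrix convention and the toy whirlpool falloff are my own assumptions, not the compound's actual internals:

```python
import numpy as np

def deform_in_object_space(points_world, obj_world, deform):
    """Apply a deformation defined around the global origin, but make it
    follow the object: pull points to the origin, deform, push them back.

    points_world : (N, 3) point positions in global space
    obj_world    : 4x4 global transform matrix of the object
    deform       : function mapping (N, 3) positions to (N, 3) positions
    """
    inv = np.linalg.inv(obj_world)
    ones = np.ones((len(points_world), 1))
    # Step 1: pull the points back to the origin (into the object's space).
    local = (np.hstack([points_world, ones]) @ inv.T)[:, :3]
    # Step 2: deform around the origin, where the math is simple.
    local = deform(local)
    # Step 3: push the results back out to where the object actually is.
    return (np.hstack([local, ones]) @ obj_world.T)[:, :3]

def whirlpool(points, strength=2.0):
    """A toy whirlpool: twist around Y, decaying away from the axis."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    angle = strength / (1.0 + np.sqrt(x * x + z * z))
    ca, sa = np.cos(angle), np.sin(angle)
    return np.stack([x * ca - z * sa, y, x * sa + z * ca], axis=1)
```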

Here are the compounds and the revised scene (2013): example_whirlpoolDeformer2

Fibonacci, Phi, and Nature

There are a billion discussions of the Fibonacci sequence, phi, the golden section, etc., so I'm going to let you browse the wonderful web and largely find out about it for yourself (try here), with only this brief summary…

The Fibonacci sequence is a series of numbers in which each number is the sum of the two before it: 0, 1, 1, 2, 3, 5, 8, 13… i.e. F(n) = F(n-1) + F(n-2).

If you take the ratios between consecutive numbers in the Fibonacci sequence, they increasingly converge towards a single value, 1.61803… (memorize it!), which is called the "Golden Number" or phi: Φ
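You can watch the convergence happen with a couple of lines of Python:

```python
# Ratios of consecutive Fibonacci numbers closing in on phi.
a, b = 1, 1
for _ in range(12):
    a, b = b, a + b
    print(b / a)             # 2.0, 1.5, 1.666..., 1.6, ... -> 1.618...
print((1 + 5 ** 0.5) / 2)    # phi exactly: 1.618033988749895
```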

This ratio is found throughout nature, as well as classical art, mathematics, etc. It crops up in an amazing number of places. A logarithmic spiral which grows by a factor of Φ every quarter turn (in polar form, r(θ) = a·Φ^(2θ/π)) is called a "golden spiral," for instance, and can be found in seashells, seed pods, flowers, pinecones and, as I said before, lots and lots of websites. If you've had a certain amount of coffee, this video might be illuminating:

am_DandilionWeb

Other artists using ICE have put out tutorials and compounds relating to these spirals; browse around (hint: I'm one of them).

Recently, as I fiddled around, I came across something about these distributions that caught my attention: they are a very efficient way to pack particles evenly on a surface. This matters to an effects artist, because not only is a large part of the job mimicking nature, but distributing points efficiently on surfaces lets us maximize the number of non-overlapping particles we emit.

So I built some compounds: one to calculate phi (or simply return it as a stored constant, depending on the accuracy needed), another to convert phi into angles in degrees and radians (the "golden angle"), and finally an emitter built on both. Hooray – it did indeed let me emit a sphere of efficiently packed particles. Even better, since I didn't have to use a "generate sample set," millions of particles can be emitted much faster than by emitting from spherical geometry, with no dependence on the resolution of a polygonal sphere. And this "phi" distribution has a nice, natural look.
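For anyone who wants the construction outside of ICE, the underlying recipe is the standard golden-angle (or "Fibonacci sphere") packing. A minimal Python sketch, equivalent in spirit to the compounds (the names here are mine, not the compounds'):

```python
import math

GOLDEN_ANGLE = math.pi * (3.0 - math.sqrt(5.0))  # ~2.39996 rad, ~137.5 degrees

def phi_sphere(count, radius=1.0):
    """Evenly pack `count` points on a sphere using the golden angle.
    No geometry sampling needed, so it scales happily to millions."""
    points = []
    for i in range(count):
        # Step evenly down the sphere's height from pole to pole...
        y = 1.0 - 2.0 * (i + 0.5) / count
        ring = math.sqrt(1.0 - y * y)
        # ...while rotating around the axis by the golden angle per point.
        theta = GOLDEN_ANGLE * i
        points.append((radius * ring * math.cos(theta),
                       radius * y,
                       radius * ring * math.sin(theta)))
    return points
```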


phiDistribution

Here are the compounds, enjoy: ICE_phiDistribution


Example – Camera Planes and Projections in ICE

Here’s a quick example of a number of handy tricks in ICE.

A compound I shared earlier on the Softimage mailing list is used to create a grid of particles on a camera plane at a definable resolution, as if each particle were a pixel. The basic camera attributes like FOV, etc., are respected. This bit uses a little simple trig to identify the corners of the camera frustum at any distance from the camera, which can be incredibly useful. I'll try to make some time to go over this in a future post.
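In the meantime, here's the gist of that trig as a rough Python sketch. It assumes a vertical FOV and the usual camera-space convention (camera at the origin, looking down -Z), which may not match the compound's exact parameters:

```python
import math

def half_extents(fov_deg, aspect, distance):
    """Half-width and half-height of the view frustum at `distance`
    units in front of the camera (vertical FOV assumed)."""
    half_h = distance * math.tan(math.radians(fov_deg) * 0.5)
    return half_h * aspect, half_h

def frustum_corners(fov_deg, aspect, distance):
    """The four corners of the camera plane at the given distance."""
    w, h = half_extents(fov_deg, aspect, distance)
    return [(-w, h, -distance), (w, h, -distance),
            (w, -h, -distance), (-w, -h, -distance)]

def camera_plane_grid(fov_deg, aspect, distance, res_x, res_y):
    """One particle per 'pixel' on the camera plane, in camera space."""
    w, h = half_extents(fov_deg, aspect, distance)
    for j in range(res_y):
        for i in range(res_x):
            u = (i + 0.5) / res_x * 2.0 - 1.0   # -1..1 across the frame
            v = (j + 0.5) / res_y * 2.0 - 1.0
            yield (u * w, v * h, -distance)
```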

A simple raycast is then used to project particles onto geometry and to color those particles based on the depth of the projection.

example_cameraDataInICE

A lot can be done with the (super) simple techniques in this scene, trust me. Perhaps the simplest and yet most useful is to cull particles outside the camera frustum prior to writing a cache – if you are dealing with a shot that has a locked-down camera, this can reduce cache data by massive amounts. If you are dealing with a stereo production and have access to depth maps, you can use that information to cull particles which are "behind" footage elements, or even have particles react to the "surface" of footage elements. Very cool! Knowing where a particle or object is in camera "screen-space" is easy and has a lot of uses, too.
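To make the culling and screen-space ideas concrete, here's a minimal sketch under the same assumptions as above; points are assumed to already be in camera space (multiply world positions by the inverse of the camera's global transform first):

```python
import math

def screen_space(point_cam, fov_deg, aspect):
    """Project a camera-space point to normalized screen coordinates,
    where -1..1 in u and v covers the frame. Also returns depth."""
    x, y, z = point_cam
    depth = -z                            # positive in front of the camera
    if depth <= 0.0:
        return math.inf, math.inf, depth  # behind the camera plane
    half_h = depth * math.tan(math.radians(fov_deg) * 0.5)
    return x / (half_h * aspect), y / half_h, depth

def in_frustum(point_cam, fov_deg, aspect, near=0.1, far=1.0e6):
    """Keep-test for culling particles before writing a cache."""
    u, v, depth = screen_space(point_cam, fov_deg, aspect)
    return near <= depth <= far and abs(u) <= 1.0 and abs(v) <= 1.0
```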

File: Softimage 2013, ~2.5MB example_cameraDataInICE

Example – Particle Clumping

example_particleClumps

The simulation is "meh," but it was a test. The scene wasn't originally about particle clumping, either: that was just put in so I could play with the idea, not my original focus. I was really using it as a testbed for a homebrew collision/bounce node, which works, but in the end that part wasn't anything particularly special or even sophisticated. The clumping is the more interesting part in the long run.

Someone recently asked online about clumping particles, and I recalled that I had this scene on hand. So here it is, built by yours truly with no intention of it ever being anything other than personal experimentation… with some comments added after the fact.
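The scene's ICE tree is its own beast, but for the idea in the abstract: one generic way to clump a point cloud is to pick a handful of cluster seeds and blend every point toward its nearest seed. A rough sketch of that notion (not what the scene literally does):

```python
import random

def clump(points, num_clumps=20, amount=0.6, seed=1):
    """Pull each point toward the nearest of `num_clumps` cluster centers.
    amount=0 leaves positions alone; amount=1 collapses clumps to points."""
    rng = random.Random(seed)
    # Seed the clump centers from the cloud itself.
    centers = rng.sample(points, min(num_clumps, len(points)))
    out = []
    for p in points:
        c = min(centers,
                key=lambda q: sum((a - b) ** 2 for a, b in zip(p, q)))
        out.append(tuple(a + (b - a) * amount for a, b in zip(p, c)))
    return out
```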

The file (Softimage 2013, ~8MB): example_createParticleClusters

Update – MatCap (litsphere) shading in Softimage 2013

A discussion about Mudbox and ZBrush-style shading arose on the Softimage mailing list. Their signature look comes from "MatCap" shaders (originally known as "lit spheres"). It's a popular way to achieve a custom lighting solution from a texture, in realtime, which is particularly useful when modeling – you can get a nice clay or Sculpey "look" on geometry in realtime. It's also useful for creating non-photorealistic (NPR) looks in realtime: toon shading, etc.

As mentioned in an earlier post, the grey-ball shader in mental ray can render litsphere textures, and a user suggested that in the high quality viewport you can get the desired result by plugging the metaSL node "Map_ball" into the environment channel. The problem is that the result (on my machine, at least) appears in world space; a proper litsphere should be in view space.

But it drew my attention to something important – almost all of the metaSL nodes used in Mental Mill are now accessible in the render tree and can be used similarly, meaning for most intents and purposes all Softimage users now have Mental Mill. Which is awesome.

But we still needed a solution for matcap functionality in the high quality viewport. So I bit the bullet and wrote a metaSL shader which seems to do the trick. It can be used for both realtime performance in the high quality viewport as well as full renders in mental ray (and any other platform supporting metaSL.)
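The metaSL source is in the download below; stripped of the shader plumbing, the core of any litsphere lookup boils down to something like this (a numpy illustration of the idea, not the shader's actual code):

```python
import numpy as np

def matcap_uv(normal_world, view_matrix):
    """UV lookup into a litsphere/MatCap texture for a surface normal.
    The crucial detail from above: the normal goes into *view* space,
    so the shading sticks to the camera rather than the world."""
    # Rotate the normal by the 3x3 part of the camera's view matrix.
    n = view_matrix[:3, :3] @ np.asarray(normal_world, dtype=float)
    n /= np.linalg.norm(n)
    # The view-space normal's XY maps straight onto the sphere texture.
    return n[0] * 0.5 + 0.5, n[1] * 0.5 + 0.5
```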

Update: Daniel Brassard kindly fixed some bugs, the new version is now available below. Thanks Daniel!

Here's the shader (metaSL, ~2KB): litspherev11

More examples of the shader:

Example – Ridge Turbulence and Whirlpool Fun

Rob Chapman posted a cool whirlpool deformer to the "Resource Dump" on SI-Community here. Since I had been doing a lot with logarithmic spirals recently, I decided to make one from scratch and compare the two. Here's the result.

Instead of Rob's wave deformer, and since it's fun to share things out, there's a "ridged turbulence" compound in the scene. Here's what it looks like when used as a deformer:
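I won't unpack the compound's internals here, but "ridged" turbulence generally follows the classic ridged-multifractal recipe: fold signed noise into sharp creases and sum weighted octaves. A sketch, assuming you have some signed 3D noise function handy (Perlin, simplex, etc.):

```python
def ridged_turbulence(x, y, z, noise, octaves=5, lacunarity=2.0, gain=0.5):
    """Ridged multifractal: fold signed noise with 1 - |n|, square it to
    sharpen the creases, and sum octaves of increasing frequency.
    `noise` is any signed 3D noise function returning roughly -1..1."""
    freq, amp, total = 1.0, 0.5, 0.0
    for _ in range(octaves):
        n = noise(x * freq, y * freq, z * freq)
        total += amp * (1.0 - abs(n)) ** 2   # the fold that makes the ridges
        freq *= lacunarity
        amp *= gain
    return total
```

Used as a deformer, the value would typically displace each point along its normal.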

And here's the file (Softimage 2013, ~160KB): alt_whirlpool