Uncategorized

Getting a camera position in VEX, using optransform and cracktransform

In an earlier post I used a very unwieldy approach to get the position of a null in VEX. Since then I have used a method where an object_merge SOP is set to transform "into this object", and then a point function can easily read the first point of the null to arrive at a center position. The problem with this is that you are reading the object's icon geometry; for a null that's OK, but the camera icon has each point offset from its origin.

Enter optransform and cracktransform. The optransform function lets you give the path to the camera (or whatever else you may want to use) and returns a matrix with the target's transforms. You can then extract the position as a vector using the cracktransform function. This function is pretty powerful and at first confusing, but essentially it has a switch which lets you specify the portion of the matrix you need.

In the example above I used these functions to get the camera position, and then used that position to compute a direction in "camera space" to allow a Ray SOP to project scattered points onto an object as a camera projection. Sounds complex, but it's not. Here's the wrangle which uses optransform and cracktransform to get the camera position:

vector @cdir;
vector @raydir;

matrix camMatrix = optransform(chs("camera")); 
//get a matrix with the camera's transforms.

@cdir = cracktransform(0, 0, 0, {0,0,0}, camMatrix);
//extract out the camera position as a vector

@raydir = normalize(@P - @cdir);
//get a direction to project points away from the camera
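To see what the wrangle is doing without opening Houdini, here is a minimal Python sketch of the same two steps. It assumes Houdini's row-vector convention, where the translation of a 4x4 transform sits in the last row (which is essentially what cracktransform with a mode of 0 pulls out); the function names here are illustrative, not part of any Houdini API.

```python
import math

def translation_from_matrix(m):
    """Extract the translate portion of a 4x4 row-vector transform
    (row-vector convention: translation lives in the last row)."""
    return m[3][:3]

def ray_dir(p, cam_pos):
    """Normalized direction from the camera position toward point p,
    mirroring normalize(@P - @cdir) in the wrangle."""
    d = [a - b for a, b in zip(p, cam_pos)]
    length = math.sqrt(sum(c * c for c in d))
    return [c / length for c in d]

# A camera sitting at (0, 2, 5) with identity rotation and scale:
cam = [[1, 0, 0, 0],
       [0, 1, 0, 0],
       [0, 0, 1, 0],
       [0, 2, 5, 1]]

pos = translation_from_matrix(cam)  # [0, 2, 5]
d = ray_dir([0, 2, 0], pos)         # direction from the camera to the point
```

Feeding each scattered point through `ray_dir` gives the per-point projection direction the Ray SOP uses.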

Hiatus


On a gig with the very cool people at Psyop, one of my all time favorite studios. So no posts for a bit. Stand by for more useful stuff in a few weeks. – AM

2500 of you might need a life

I finally got around to enabling metrics on this site, and have a first month of data. ~2500 visits, wow. I'm glad people are visiting and finding the site useful. Visitors to the site hail from all over the world, with only about 50% from the States. The second most common nation of origin is Japan. Thank you for visiting!

The closest visitors to my home in Orlando (yes, I spend a lot of time flying to LA and NYC, lol) hail from Davenport, FL, with 88 visits this month. That's a lot for such a small town; is there a school or studio there?

Thanks for dropping in, all of you.

Post-simulation and Softimage ICE (part 3 – vectors and modulo)

[vimeo]http://www.vimeo.com/56339933[/vimeo]

So, we’ve made a strand circle which rings our original point positions from the simulation, now let’s make those circles align with the rotation of the simulated particle. In practice this is very, very simple. Just insert a “rotate vector” node after you calculate the coordinates on a circle for the strand points, and then use the particle orientation as the input rotation.

Ok, that was nice, but why did it work?

Point positions are vectors. Vectors are displacements:

This is one of the simple but critical concepts that is the real purpose for me writing this series of posts, because it’s a foundational way of thinking which lets you come up with solutions to problems every day.

What is a point? It's a place within a space. A particle can have all kinds of attributes, like color, size, and orientation. But a point is just a position; it has only a single value, a vector (x, y, z). In a manner of speaking, a point is a vector. The vector which describes a point is a direction and distance from the origin; it's a displacement from that point of reference. When you talk about "global" and "local" space you are talking about different frames of reference, different points of origin from which to draw a vector describing points.

So, what we really did when we calculated an array of strand positions on a circle was make an array of vectors. Hence, rotating those vectors is really the same thing as rotating the strand point positions to match the orientation of our original particle. Points are vectors; they are offsets (or displacements) from an origin. And ICE is very, very good at doing stuff with point vectors. You can call these manipulations vector math if you want, but that in itself doesn't put them beyond your average artist, who, like ICE, is also very, very good at manipulating points. If you are an artist you already have an intuitive grasp of vectors! You just need to define some terms, so that saying stuff like "rotate a vector" translates to the visual adjustments you do in your head day in and day out.
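The "rotating a point is rotating a vector" idea can be sketched in a few lines of plain Python rather than ICE nodes (a 2D rotation keeps the sketch short; the 3D case in ICE works the same way, just with an orientation instead of a single angle):

```python
import math

def rotate_2d(v, angle):
    """Rotate a 2D vector (an offset from the origin) by angle radians."""
    c, s = math.cos(angle), math.sin(angle)
    return (v[0] * c - v[1] * s, v[0] * s + v[1] * c)

# A point at (1, 0) is just the vector (1, 0) drawn from the origin.
# Rotating that vector 90 degrees moves the point to (0, 1):
x, y = rotate_2d((1.0, 0.0), math.pi / 2)
```

Swap the angle for a particle's orientation and the vector for a strand position, and you have exactly what the "rotate vector" node does for each point of the circle.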

Seeing stars… and why modulo is so handy:

Ok, cool. We have a bunch of strand rings centered and oriented where our simulated particles were. Let’s make the rings star shapes, as a way of talking about another useful technique in ICE (it’s useful all over in fact), making patterns via the modulo function.

Ok, a digression first, some housekeeping. By now you've realized this isn't one of those step-by-step, makes-a-scene tutorials; I'm discussing more and glossing over a lot of the details you'd need to actually plug all this together. I really should have provided a sample scene earlier. I don't want you focusing on plugging nodes together; the whole point of this is the underlying ideas. So here you go: a sample scene with nifty comments and stuff. If all you want is a scene that will make circles and stars, there you go. And it was made with an educational license, even. But if you want the ideas, so you can make all kinds of other stuff, well then, dear reader, read on.

The modulo function is just an instruction to divide two numbers and pick out the remainder of that division. If you feed a linear sequence of numbers (like the index of an array: 0, 1, 2, 3... call any of these numbers "n") into the modulo function, you get a value that repeatedly counts up from 0 to one less than the divisor. You can use this to identify every n'th item in your list; in fact, if you crack open ICE's "every nth particle" compound you will basically see exactly this. If you can do things every n'th time, you can make patterns. Think about it. Braiding hair, knitting, drawing a dotted line: making almost any pattern involves counting and, on every n'th count, doing something differently. Modulo is how you do that kind of thing in ICE (and elsewhere; RealFlow has an ICE-like system now, and it works in scripting too — this math stuff pops up everywhere). The big secret is this: it's just a way of looking at things you probably already do really well.
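The "every n'th item" trick is two lines in any scripting language; here it is in Python, filtering an index list the same way the ICE compound does:

```python
# Indices 0, 1, 2, 3, ... fed through (i % n) cycle 0 .. n-1,
# so (i % n) == 0 picks out every n'th item.
items = list(range(12))
every_third = [i for i in items if i % 3 == 0]
# → [0, 3, 6, 9]
```

Change the divisor and you change the rhythm of the pattern; compare against values other than 0 and you shift where in the cycle the "different" item lands.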

I’m a visual thinker so when I was first learning about modulo I had to scribble on a napkin, with results something like this:

All a star shape is, is a pattern where we take every other point on our circle and change its radius. Now we know how to find every other point in our list (the array) of strand positions, so we just change the radius in the formula we used to make the circle for those points. And you get a star.

Cooooool.

Ok, so just one last part to this tutorial, and a brief one – how to take the single circles we made and, using the earth-shaking power of ICE and the post-simulation region, turn that result into a lot of circles: all different and making little atom things like we see in the example video. And in fact, the example scene here already shows you how, so we’re not even going to do much besides discuss it and crack bad jokes. Cheers. – AM

The file (softimage 2013 educational, 1.2 mb)

strandPostSimOffset_tutorialPart4

Higgs Bosons and why you should care.

I’ll get back to some CGI related posts soon… But in the meantime, what’s all this about the Higgs Boson, and why should we care? I mean, the math seems impossibly deep, the experiments staggeringly expensive, and then when we ask physicists to explain how all this matters to us in our daily lives they mumble stuff about the beginning of the universe and dark matter.

I know, it’s annoying, right?

Instead of asking physicists, who understandably are up to their waders in some pretty deep stuff, we should ask some engineers why we should care. The answer becomes something like this:

“If you want to see a day where perhaps things like pod racers or gravity skateboards are a reality, this matters. This is an important step which will help us know if such seemingly fanciful devices are even possible.”

Suddenly, the staggering costs and daunting math make a lot more sense. All this is about advancing knowledge that ultimately lends humanity more capability to DO stuff, from feeding ourselves to making iPads.

This video does a pretty good job of putting the “discovery” into some context:

[youtube]http://www.youtube.com/watch?v=9Uh5mTxRQcg[/youtube]

ALVH

[youtube]http://www.youtube.com/watch?v=6Eo766iZZ0c&feature=youtube_gdata_player[/youtube]

 

Soon.

Vimeo iOS app now available

This is an FYI for you iPhone and iPad 2 users: Vimeo now has a free app supporting their site, which allows very basic video editing, submission, and account management.

(If you aren't a Vimeo user, what is it, you ask? Vimeo is a bit like YouTube, a free video sharing site, but it's more targeted towards creatives and professionals. Most of the videos on my site are hosted at Vimeo.)

The downside: if your device has no camera, you can't use the app, meaning original iPad owners are left out. This is pretty dumb. Just because the device doesn't have a camera doesn't mean you don't have footage on it.

Case in point: a Vimeo member takes their iPad, a camera, and the camera connection kit (in a day pack, perhaps) to some photogenic place. They shoot some gorgeous footage (with something a lot nicer than a craptastic phone camera) which they want to upload, maybe to be reviewed while they are still on location, or for a family member to see, whatever. So they use the camera connection kit to load it onto their iPad, select the shots they want to share... and then can't edit or upload the results to Vimeo.

Fail!

But still, it’s nice that there’s an app for the iPhone. Cheers.