One of my tasks for Pinball was to create a process that automatically animated physically accurate rotation of the pinball, allowing animators to focus solely on keyframing its position. After weighing some trade-offs between storytelling and physics, we decided that the ball should move in a self-directed manner, meaning that it would generally "face" the direction of its motion as much as possible. To achieve this look, I wrote some code to expose Maya's quaternions, a mathematical construct that is handy for all sorts of things, particularly for keeping track of cumulative rotations. Then I wrote a MEL script that analyzed the ball's path of motion and generated a proper rotation sequence from it.
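The core idea can be sketched roughly as follows. This is an illustrative Python version, not the production MEL script; the function names are my own, and it assumes simple rolling without slipping over a ground plane (roll angle = distance traveled / radius, axis perpendicular to both the up vector and the direction of motion), accumulating the per-frame steps as quaternions:

```python
import math

def quat_from_axis_angle(axis, angle):
    # Unit quaternion (w, x, y, z) for a rotation of `angle` radians about `axis`.
    ax, ay, az = axis
    n = math.sqrt(ax*ax + ay*ay + az*az)
    if n == 0.0:
        return (1.0, 0.0, 0.0, 0.0)  # no motion this frame: identity
    s = math.sin(angle / 2.0) / n
    return (math.cos(angle / 2.0), ax*s, ay*s, az*s)

def quat_mul(a, b):
    # Hamilton product a*b.
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def rolling_rotations(positions, radius, up=(0.0, 1.0, 0.0)):
    """For a ball of `radius` rolling along keyframed `positions`,
    accumulate one orientation quaternion per frame."""
    q = (1.0, 0.0, 0.0, 0.0)  # identity orientation at frame 0
    out = [q]
    for (x0, y0, z0), (x1, y1, z1) in zip(positions, positions[1:]):
        dx, dy, dz = x1 - x0, y1 - y0, z1 - z0
        dist = math.sqrt(dx*dx + dy*dy + dz*dz)
        # Roll axis = up x motion direction (perpendicular to both).
        ux, uy, uz = up
        axis = (uy*dz - uz*dy, uz*dx - ux*dz, ux*dy - uy*dx)
        step = quat_from_axis_angle(axis, dist / radius)
        q = quat_mul(step, q)  # pre-multiply: step is in world space
        out.append(q)
    return out
```

Because each frame's increment is composed onto the running quaternion, the rotation accumulates correctly past 360 degrees, which is exactly what quaternions buy you over raw Euler keyframes.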
One of the primary goals for Pinball was to capture as much of the motion as possible in-camera. The actor was filmed on a greenscreen stage atop a to-scale pinball which could move on its own via a track and pivoting hydraulics. This motion was supplemented by a motion control camera. A team of previz artists from Third Floor expressed as much of the movement of each shot as possible through a combination of rig and camera motion, but inevitably physics and safety prevented us from creating the full range of motion necessary. Prompted by a suggestion from lead artist Alex Frisch, I wrote a tool called Projections of Hope which attempted to make up the difference in 2D. This was a standalone application which took as input two different Maya scenes. The first scene was a track of the greenscreen ball from the stage shoot. The second was the final Maya scene featuring the actual motion of the CG pinball. From there, Projections of Hope would analyze both scenes and create the ideal animated translation and scaling of the greenscreen plate in order to position the actor in exactly the right spot on top of the pinball. The tool exported both a Flame Action setup and a Shake script containing these transformations. Inevitably the 2D artists would tweak the results to varying degrees, but in general the whole process worked quite well and saved many hours of tedium for the compositors.
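The per-frame math underneath a tool like this is simple once both balls have been reduced to a screen-space center and radius. Here is a minimal Python sketch under that assumption (the function names are mine, and the real tool of course pulled these values out of the two Maya scenes rather than taking them as literals): a uniform scale matches the ball sizes, and a translation then drops the scaled plate onto the CG ball.

```python
def plate_transform(gs_center, gs_radius, cg_center, cg_radius):
    """Per-frame 2D transform mapping the tracked greenscreen ball
    (screen-space center and radius) onto the rendered CG ball.
    The plate is scaled uniformly about the image origin, then translated."""
    s = cg_radius / gs_radius
    tx = cg_center[0] - s * gs_center[0]
    ty = cg_center[1] - s * gs_center[1]
    return s, (tx, ty)

def apply_transform(p, s, t):
    # Transform a plate-space point: scale about the origin, then translate.
    return (s * p[0] + t[0], s * p[1] + t[1])
```

Solving this once per frame yields exactly the kind of animated translate/scale channels that can be baked out to a Flame Action setup or a Shake script.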
Geek Alert: This section contains full-frontal nerdity.
One of the most esoteric problems we had to solve in this spot had to do with the motion blur of the ball's reflection. Picture a perfectly reflective sphere spinning around a single axis: not translating, just rotating. How should its reflection of the environment motion blur? It turns out that in the case of a sphere, it really shouldn't motion blur at all. However, in mental ray (and various commercial renderers we experimented with) the reflection "smears". After discussing the problem directly with mental images, we determined that the issue was due to an optimization in mental ray which didn't transform the surface normal according to the motion transformation. In the vast majority of scenes this would never be an issue, but a spinning reflective sphere is a pathological case for that optimization, and unfortunately there seemed to be no straightforward workaround. We also couldn't kill motion blur on the reflection pass outright, because we still needed the motion blur that came from the translation. After some experimentation, I ended up writing a mental ray shader which corrected the surface normal to account for motion. Because we knew the object was a perfect sphere, I was able to take the point's object space position and determine the surface normal from that. In fact, on a sphere the object space position is the surface normal. So the shader modified
state->dot_nd and then called the shader it was connected to. The connected shader calculated correctly because a proper normal had been supplied, and to my own surprise, it worked exactly as we hoped.
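The trick is compact enough to sketch. The real shader was mental ray C code operating on the render state; this is just an illustrative Python version of the geometry, with my own function names. On a sphere centered at the object-space origin, the true outward normal at a hit point is simply that point's normalized object-space position, no matter what the motion transform did. Recomputing N and N·D from it (the role of state->dot_nd) lets the downstream reflection math, R = D - 2(N·D)N, come out stable:

```python
import math

def _normalize(v):
    x, y, z = v
    n = math.sqrt(x*x + y*y + z*z)
    return (x / n, y / n, z / n)

def corrected_reflection(obj_point, ray_dir):
    """Derive the true normal from the object-space hit point (valid only
    because the object is a perfect sphere), recompute the normal/direction
    dot product, and mirror the ray: R = D - 2(N.D)N."""
    n = _normalize(obj_point)  # on a sphere, the position IS the normal
    d = _normalize(ray_dir)
    dot_nd = n[0]*d[0] + n[1]*d[1] + n[2]*d[2]  # analogous to state->dot_nd
    return tuple(d[i] - 2.0 * dot_nd * n[i] for i in range(3))
```

Since the object-space position of a point on the sphere is unaffected by the sphere's rotation, the corrected normal is identical at every motion sample, and the reflection no longer smears.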