VFX specialist Amitaabh Naaraayan takes us through the second part of this series on making a CG movie
Last month, we looked at the concepts of Modeling and Texturing. In this issue, we shall discuss Rigging and Animation.
Before objects are rendered, they must be laid out within a scene. This is what defines the spatial relationships between objects in a scene, including location and size. Animation refers to the temporal description of an object, i.e., how it moves and deforms over time. Popular methods include keyframing, inverse kinematics, and motion capture, though many of these techniques are used in conjunction with each other. As with modeling, physical simulation is another way of specifying motion.
You can animate characters or vehicles for computer games or you can animate special effects for film or broadcast. You can create animation for serious purposes such as medical illustration or forensic presentation in the courtroom. Whatever reason you have to animate, you’ll find most 3D applications provide an environment for achieving these goals. The basic way to animate is quite simple. You animate the transform parameters of any object to change its position, rotation, and scale over time.
Most commercially available computer animation systems are based on animating with keyframes. At first, this seems like the same thing as keyframes in traditional hand-drawn animation, but it is slightly different. In hand-drawn animation, you work on the basic poses of the scene first, drawing poses of the entire character so the timing and acting can be worked out with a minimum of drawings created. Once the poses are finalised, the in-between drawings are created to complete the action.
With computer animation, keyframes are values at certain frames for the articulation controls of a model, which are usually set up in a hierarchy. The computer calculates the in-between values based on a spline curve connecting the keyframe values.
In software packages that support animation, such as Autodesk Maya, Autodesk 3ds Max, LightWave, Autodesk Softimage, Houdini and Poser, there are many parameters that can be changed for any one object.
One example of such an object is a light. Lights have many parameters including light intensity, beam size, light colour, and the texture cast by the light. Supposing an animator wants the beam size of the light to change smoothly from one value to another within a pre-defined period of time, it could be achieved by using keyframes. At the start of the animation, a beam size value is set.
Another value is set for the end of the animation. The software then automatically interpolates between the two values, creating a smooth transition.
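The in-betweening described above can be sketched in a few lines of Python. This is a hypothetical beam-size example with a simple ease-in/ease-out curve; real packages use richer spline types with editable tangents:

```python
def lerp(a, b, t):
    """Linear interpolation between two keyed values."""
    return a + (b - a) * t

def smoothstep(t):
    """Ease-in/ease-out curve, standing in for a spline segment."""
    return t * t * (3.0 - 2.0 * t)

def interpolate_key(frame, key_a, key_b, eased=True):
    """Compute an in-between value for `frame` given two (frame, value) keys."""
    (fa, va), (fb, vb) = key_a, key_b
    t = (frame - fa) / float(fb - fa)   # normalised position between the keys
    if eased:
        t = smoothstep(t)
    return lerp(va, vb, t)

# Beam size keyed at 10 on frame 1 and 40 on frame 25:
for f in (1, 13, 25):
    print(f, interpolate_key(f, (1, 10.0), (25, 40.0)))
```

At the two keyed frames the curve passes exactly through the keyed values; everywhere else the software supplies the value automatically, which is precisely what frees the animator from posing every frame by hand.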
However, the human visual system does not see in terms of frames; it perceives a continuous flow of light. Because of a phenomenon known as persistence of vision, a rapid succession of frames is perceived as continuous motion, and the higher the frames per second (FPS), the smoother the motion appears. In general, the minimum needed to avoid jerky motion is about 30 FPS; for high-motion content, encoding at around 60 FPS may be more beneficial.
When dealing with FPS, it is important to also understand other terms used throughout the industry:
PAL: Phase Alternating Line is the dominant television standard in Europe, the Middle East and Asia. The PAL standard delivers 25 FPS.
NTSC: The National Television System Committee is responsible for setting television and video standards in the United States. The NTSC standard for television defines a composite video signal with a refresh rate of 29.97 FPS. NTSC also requires these frames to be interlaced.
Telecine: Most film content is created at 24 FPS. To meet the NTSC standard, extra frames are generated to bring the rate up to roughly 30 FPS. This is commonly done through a cadence called 3:2 pulldown, in which successive film frames are spread alternately across two and three interlaced video fields. The process that removes these added frames, recovering the original 24 FPS film, is known as Inverse Telecine.
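The 3:2 pulldown cadence can be illustrated with a short Python sketch (the frame labels are hypothetical; each film frame contributes alternately two or three video fields, and every pair of fields makes one video frame):

```python
def pulldown_32(film_frames):
    """Expand 24 FPS film frames toward ~30 FPS interlaced video
    using the 3:2 pulldown cadence."""
    fields = []
    for i, frame in enumerate(film_frames):
        count = 2 if i % 2 == 0 else 3   # the repeating 2-3 field cadence
        fields.extend([frame] * count)
    # pair consecutive fields into video frames
    return [(fields[i], fields[i + 1]) for i in range(0, len(fields) - 1, 2)]

# Four film frames become five video frames, two of them mixed-field:
print(pulldown_32(["A", "B", "C", "D"]))
# → [('A', 'A'), ('B', 'B'), ('B', 'C'), ('C', 'D'), ('D', 'D')]
```

Note that two of the resulting frames mix fields from different film frames; those are the frames an Inverse Telecine pass detects and removes.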
Apart from the most basic form of animation, keyframing, 3D applications have infused many new techniques of animation within their environments. We shall discuss a few notable and popular techniques used by 3D animators.
Dynamic simulation is used in computer animation to help animators produce realistic motion, in industrial design (for example, to simulate crashes as an early step in crash testing), and in video games. Dynamics simulations can be a powerful tool when trying to generate realistic-looking effects that would be very difficult to achieve manually. Calculations for object collisions and cloth, in conjunction with fields like air, gravity and waves, are all done within the working environment. Hence, it becomes comparatively easy to simulate realistic dynamic objects such as a flag in the wind, a bouncing ball, or a water body with a floating log.
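A minimal Python sketch of such a simulation, assuming a single ball, simple Euler integration, a gravity field and a flat ground plane:

```python
def simulate_ball(y0, v0=0.0, gravity=-9.8, restitution=0.7, dt=1 / 30.0, steps=120):
    """Integrate a bouncing ball under gravity with a ground-plane collision."""
    y, v = y0, v0
    heights = []
    for _ in range(steps):
        v += gravity * dt          # the gravity field acts on velocity
        y += v * dt                # velocity moves the ball
        if y < 0.0:                # collision with the ground plane
            y = 0.0
            v = -v * restitution   # bounce back with some energy lost
        heights.append(y)
    return heights

h = simulate_ball(2.0)             # drop from 2 m; each entry is one frame
```

The animator never sets a key on the ball: every frame's height falls out of the physics, which is exactly why dynamics is attractive for effects that are tedious to pose by hand.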
Motion capture, motion tracking, or mocap are terms used to describe the process of recording movement and translating that movement onto a digital model. It is used in military, entertainment, sports, and medical applications. In filmmaking it refers to recording actions of human actors, and using that information to animate digital character models in 2D or 3D computer animation. When it includes face, fingers and captures subtle expressions, it is often referred to as performance capture.
Crowd simulation is the process of simulating the movement of a large number of objects or characters, now often appearing in 3D computer graphics for film. While simulating these crowds, observed human behaviour and interaction are taken into account to replicate collective behaviour. The need for crowd simulation arises when a scene calls for more characters than can be practically animated using conventional systems, such as skeletons/bones. Simulating crowds offers the advantage of being cost-effective, while still allowing total control of each simulated character or agent.
Animators typically create a library of motions, either for the entire character or for individual body parts. To simplify processing, these animations are sometimes baked as morphs. Alternatively, the motions can be generated procedurally - i.e. choreographed automatically by software.
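A toy Python sketch of procedurally choreographed motion, assuming point agents that steer toward a shared goal with a simple separation rule (production crowd systems layer library motions and far richer behaviours on top of this idea):

```python
def step_crowd(positions, goal, speed=0.1, min_dist=0.5):
    """Advance each agent one step toward a shared goal, avoiding neighbours."""
    new_positions = []
    for i, (x, y) in enumerate(positions):
        # steer toward the goal
        gx, gy = goal[0] - x, goal[1] - y
        mag = (gx * gx + gy * gy) ** 0.5 or 1.0
        dx, dy = gx / mag * speed, gy / mag * speed
        # separation: push away from agents that are too close
        for j, (ox, oy) in enumerate(positions):
            if i == j:
                continue
            sx, sy = x - ox, y - oy
            d = (sx * sx + sy * sy) ** 0.5
            if 0 < d < min_dist:
                dx += sx / d * speed
                dy += sy / d * speed
        new_positions.append((x + dx, y + dy))
    return new_positions

agents = [(0.0, 0.0), (0.4, 0.0)]      # two agents starting too close together
for _ in range(5):
    agents = step_crowd(agents, goal=(10.0, 0.0))
```

No agent is keyframed individually: each one's path emerges from the same two rules, which is what makes the approach scale to hundreds of characters.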
Rigging, very simply, is the process of setting up the 3D scene for animation. Because of the complexities of the mesh and the number of different objects that have to be animated, rigging is essentially used to reduce the number of keyframes, freeing the animator to focus on the aesthetics of animation instead of the technicalities of handling the software. Rigging classically consists of two main steps: skeleton embedding and skin attachment.
But first, let’s familiarise ourselves with a few terms:
Parenting: In complex animation involving several objects, hierarchies are formed so that animation transfers from parent objects to child objects, and not the other way round. A child is free to have its own transformations but, as a rule of the hierarchy, it obeys the parent's transformations and inherits them naturally.
For example, the hip could be the topmost joint in the hierarchy; from it hangs the upper torso, i.e. the spine, which at the neck bifurcates into the two arms, and so on.
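The parent-to-child transfer of transformations can be sketched in Python (translation only, for brevity; real rigs also inherit rotation and scale, and the node names here are just illustrative):

```python
class Node:
    """A scene node whose world position inherits its parent's transform."""
    def __init__(self, name, local=(0.0, 0.0), parent=None):
        self.name, self.local, self.parent = name, local, parent

    def world(self):
        # world position = parent's world position + this node's local offset
        if self.parent is None:
            return self.local
        px, py = self.parent.world()
        return (px + self.local[0], py + self.local[1])

hip = Node("hip", (0.0, 1.0))
spine = Node("spine", (0.0, 0.5), parent=hip)
neck = Node("neck", (0.0, 0.4), parent=spine)

hip.local = (2.0, 1.0)    # keying only the parent...
print(neck.world())       # ...moves every child along with it
```

One key on the hip has moved the whole chain, which is the economy of keys that parenting buys the animator.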
Bones/Skeleton: Joints, or bones, are a system provided by most character-animation packages for constructing a skeletal frame that behaves like real-life bones. Each joint has an XYZ coordinate system and an orientation; some joints rotate in all directions (e.g. the neck), while some rotate around only one axis (e.g. the elbow). All of this can be set up using the skeleton or bones system and then skinned to a single 3D mesh of a human, or a beast for that matter. There are also ready-made skeletal rigs, like the Biped in 3ds Max.
The concept of parenting is the basis of this skeletal setup. Bones further allow a single mesh object to be skinned with several bends, twists and deformations.
Skinning: Once an object is skinned, the mesh is bound to the bone transformations and behaves exactly like the skeleton. Skinning also allows for deformations like muscle bulges or overlapping skin by providing weights. Weighting the skin means deciding which part of the mesh reacts to any given joint; for example, shoulder weights would have to be distributed between the collar joints, the armpit region and the shoulder joints.
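Skin weighting can be illustrated with a tiny Python sketch, assuming translation-only joints and a single 2D vertex (real packages blend full joint transformation matrices, but the weighted-sum idea is the same):

```python
def skin_vertex(rest, joint_moves, weights):
    """Move a vertex by the weighted sum of its joints' translations."""
    x, y = rest
    dx = sum(w * j[0] for j, w in zip(joint_moves, weights))
    dy = sum(w * j[1] for j, w in zip(joint_moves, weights))
    return (x + dx, y + dy)

# A shoulder-area vertex weighted 70/30 between a collar joint and a
# shoulder joint; only the shoulder joint moves:
collar_move, shoulder_move = (0.0, 0.0), (1.0, 0.0)
v = skin_vertex((5.0, 2.0), [collar_move, shoulder_move], [0.7, 0.3])
print(v)
```

Because the vertex follows the shoulder only 30 per cent of the way, the mesh bends smoothly across the joint instead of tearing, which is exactly what weight painting controls.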
Secondary Animation: When characters have tails or tentacle-like appendages, which are connected to the body but have follow-through motion like a jiggle or wobble, these have to be rigged accordingly so that the result is achieved without setting separate manual keys. Most animation packages provide many different tools for this.
Forward and Inverse Kinematics: These are tools which allow for control between joints, such as the hip and the ankle, so that the knee (or, similarly, the fingers) is automatically taken care of. The human body is animated with a combination of both forward and inverse kinematics. Forward kinematics essentially means that the animation of the parent is transferred to the child objects/bones; inverse kinematics allows for the opposite. In the case of the leg, even though the knee is the parent of the ankle, with the help of an IK handle you can get the knee to obey the transform of the ankle.
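A minimal analytic two-bone IK solver in Python, assuming a hip-knee-ankle chain rooted at the origin with known bone lengths (IK handles in production rigs wrap exactly this kind of solve):

```python
import math

def two_bone_ik(target, l1, l2):
    """Return (hip_angle, knee_angle) in radians so the ankle reaches `target`."""
    x, y = target
    d = min(math.hypot(x, y), l1 + l2 - 1e-9)   # clamp to reachable distance
    # law of cosines gives the knee bend
    cos_knee = (l1 ** 2 + l2 ** 2 - d ** 2) / (2 * l1 * l2)
    knee = math.pi - math.acos(max(-1.0, min(1.0, cos_knee)))
    # hip angle = direction to target minus the offset from the bent knee
    cos_off = (l1 ** 2 + d ** 2 - l2 ** 2) / (2 * l1 * d)
    hip = math.atan2(y, x) - math.acos(max(-1.0, min(1.0, cos_off)))
    return hip, knee

# Solve, then run forward kinematics to confirm the ankle lands on the target:
hip, knee = two_bone_ik((1.2, -0.5), 1.0, 1.0)
ax = math.cos(hip) + math.cos(hip + knee)
ay = math.sin(hip) + math.sin(hip + knee)
print(round(ax, 3), round(ay, 3))   # → 1.2 -0.5
```

The animator keys only the ankle target; the hip and knee angles fall out of the solve, which is the whole appeal of IK for legs and arms.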
Character Rigging: Characters could mean humans (bipeds), animals (quadrupeds), or any inanimate object we wish to give human-like characteristics, such as a ball or a tree.
Animating an articulated 3D character requires manual rigging to specify its internal skeletal structure and to define how the input motion deforms its surface. As mentioned above, this process involves joints and bones, which are then skinned, or attached, to the 3D mesh.
Mechanical rigging, or rigging without bones: It is not always essential to skin objects to joints, or to have joints at all; simple hierarchical connections can be used instead. This is true for the cogs of machinery, the popular Pixar lamp, phones or robots, and it is where we use the basic rule of animating multiple objects in 3D space: parenting. Parenting forms a hierarchy, or tree, which helps us set relations between the various objects or groups of objects. The aim of a good rig is to minimise the number of keys across all the objects' properties. Props on a character can be treated the same way, e.g. eyes, a watch, a hat.
Facial Rigging: Since the human face is so expressive and involves so many different muscles, one needs to approach the rigging of facial muscles differently. One also needs to consider the phonetics of the language in question, in which the lips assume very different shapes from one word to the next.
Therefore, facial rigging requires a whole different set of procedures. Very simply, a technique called morph targets, or blend shapes, is used. Morphing, as the term suggests, is the smooth blend or transition from one expression to another; several tools and packages are available for this technique.
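At its core, morph-target blending is a weighted sum of per-vertex offsets between the neutral face and each sculpted target. A minimal Python sketch, assuming 2D vertices and a single hypothetical "smile" target:

```python
def blend_shapes(neutral, targets, weights):
    """Add weighted per-vertex deltas from each target to the neutral face."""
    result = []
    for i, (x, y) in enumerate(neutral):
        dx = sum(w * (t[i][0] - x) for t, w in zip(targets, weights))
        dy = sum(w * (t[i][1] - y) for t, w in zip(targets, weights))
        result.append((x + dx, y + dy))
    return result

neutral = [(0.0, 0.0), (1.0, 0.0)]       # two mouth-corner vertices
smile   = [(0.0, 0.2), (1.0, 0.2)]       # the same corners, raised
half = blend_shapes(neutral, [smile], [0.5])   # half-strength smile
print(half)
```

Animating the face then reduces to keying the weight values over time rather than every vertex, which is why blend shapes remain the standard tool for lip-sync and expression work.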
Scripting/Programming: Most packages have a built-in scripting language, while some expose open source code. Maya has MEL (Maya Embedded Language); 3ds Max has MAXScript. This helps when you need rigged characters to do something beyond what the stock rig offers, such as squash and stretch, as in The Incredibles.
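As an illustration (written in plain Python rather than MEL or MAXScript, and not tied to any particular package's API), a scripted squash-and-stretch helper might keep the character's volume constant by compensating the other two axes whenever one axis is keyed:

```python
def squash_stretch(scale_y):
    """Volume-preserving squash/stretch: scale X and Z so that
    scale_x * scale_y * scale_z stays at 1."""
    side = 1.0 / scale_y ** 0.5
    return (side, scale_y, side)

sx, sy, sz = squash_stretch(2.0)   # stretched to twice the height
print(sx * sy * sz)                # volume is preserved
```

A small expression like this, attached to a rig control, means the animator keys one stretch value and the squash falls out automatically.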
Amitaabh Naaraayan is a 3D and SFX professional.