SFX specialist Amitaabh Naaraayan sheds light on the art of motion capture and how it is increasingly being used in Hollywood.
Motion capture, as the name suggests, is the process of capturing motion from live actors. It requires more than just actors; it calls for talented performing artists who are a good mix of actor, stuntman and dancer. These artists wear a body suit with light points (markers) attached; the markers are usually placed to coincide with joints, or on muscles with prominent movements such as the facial muscles. The artists then rehearse, moving their bodies to fit the virtual 3D characters as the script requires.
The motion capture cameras capture the light positions as the actors move and enact their scenes.
The captured information is stored as lines (animation curves) and numeric data commonly referred to as keyframes. With the help of sophisticated software, this data is then transferred to the 3D characters. The scenes may be shot on a green-screen setup built to match the conditions of the 3D scenes in any sequence of the film; the moving characters are then simply placed in these 3D scenes.
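The animation curves mentioned above can be pictured as sorted lists of (time, value) keyframes per joint channel, with the software interpolating between them. The snippet below is a minimal illustrative sketch of that idea (the function name and linear interpolation are assumptions for illustration; production tools use richer curve types such as Béziers):

```python
from bisect import bisect_right

def sample_curve(keyframes, t):
    """Linearly interpolate an animation curve at time t.

    keyframes: sorted list of (time, value) pairs, e.g. one joint's
    rotation angle as captured at each mocap frame.
    """
    times = [k[0] for k in keyframes]
    i = bisect_right(times, t)
    if i == 0:                       # before the first key: clamp
        return keyframes[0][1]
    if i == len(keyframes):          # after the last key: clamp
        return keyframes[-1][1]
    (t0, v0), (t1, v1) = keyframes[i - 1], keyframes[i]
    w = (t - t0) / (t1 - t0)         # blend weight between the two keys
    return v0 + w * (v1 - v0)

curve = [(0.0, 0.0), (1.0, 90.0), (2.0, 45.0)]
print(sample_curve(curve, 0.5))  # 45.0
```

Evaluating every joint's curves at a given time yields one pose; doing so frame after frame replays the captured performance on the 3D character.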
Combined with new virtual-camera software, filmmakers have gained almost unlimited freedom in computer-generated storytelling. The director can now move within a computer-generated 3D environment, in and around CG actors whose infinitely looping performances have been created using standard motion-capture technology, to get the results conceived by the writers and directors. James Cameron set a new standard for filmmakers with his recent hit Avatar.
Cameron consistently maintained that Avatar was not a CG animated film; rather, it was “motion tracked and CG rendered”.
An increasing number of filmmakers have begun to use mocap to capture the entire performance, i.e. the acting, body language and voice, rather than just the voices of famous celebrities.
Motion capture based animation is essential for creating characters that move realistically, in situations that would be impractical or too dangerous for real actors. Director Steven Spielberg is said to be using motion-capture technology because it allows him to digitally recreate the look of the original Tintin comics by Hergé on the silver screen. Hergé wrote about fictional people in a real world, not in a fantasy universe. It was the real universe he was working with, and he used National Geographic to research his adventure stories. Not only are the actors represented in real time, they enter into a three-dimensional world.
Software tools for working with motion-captured data, such as Autodesk MotionBuilder, have evolved to the point where animators now have the means to edit and blend takes from multiple capture sessions and mix and match them with keyframed animation techniques, allowing great control over the style and quality of the final output, for anything ranging from realistic to ‘cartoony’ motion.
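At its simplest, blending two takes means crossfading the joint values of one take's tail into the other's head. The sketch below shows that core idea under simplified assumptions (each frame is a flat dict of joint angles; the function name and linear weights are illustrative, and tools like MotionBuilder do far more, e.g. foot-plant preservation):

```python
def crossfade(take_a, take_b, overlap):
    """Blend the last `overlap` frames of take_a into the first
    `overlap` frames of take_b, the basic idea behind joining
    two capture takes. Each take is a list of per-frame dicts
    mapping joint name -> angle."""
    out = take_a[:-overlap] if overlap else list(take_a)
    for i in range(overlap):
        w = (i + 1) / (overlap + 1)   # blend weight ramps from a to b
        fa = take_a[len(take_a) - overlap + i]
        fb = take_b[i]
        out.append({j: (1 - w) * fa[j] + w * fb[j] for j in fa})
    out.extend(take_b[overlap:])
    return out
```

For example, joining a captured walk into a captured run over a dozen overlapping frames hides the seam that a hard cut would show.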
Motion capture is accomplished by magnetic, electro-mechanical or optical technologies. While each technology has its strengths, there is not a single motion capture technology that is perfect for every possible use.
Magnetic motion capture systems utilise sensors placed on the body to measure the low-frequency magnetic field generated by a transmitter source. The sensors and source are cabled to an electronic control unit that correlates their reported locations within the field. The electronic control units are networked with a host computer that uses a software driver to represent these positions and rotations in 3D space.
Magnetic systems use six to eleven or more sensors per person to record body-joint motion. Although six-sensor systems are less expensive, they are more likely to produce ‘joint popping’, since the IK solver has to guess at much of the information it is not receiving. The sensors tend to move a bit during capture sessions, and require repeated readjustment and recalibration. Since each sensor requires its own (fairly thick) shielded cable, the tether used by magnetic systems can be quite cumbersome.
There are two main technologies used in optical motion capture: reflective (passive) markers and pulsed-LED (light-emitting diode, active) markers.
Optical motion capture systems tend to use proprietary video cameras to track the motion of reflective markers (or pulsed LEDs) attached to particular locations on the actor’s body. Single- or dual-camera systems are suitable for facial capture, while eight to 16 (or more) cameras are necessary for full-body capture. Reflective optical systems use infrared (IR) LEDs mounted around the camera lens, along with IR pass filters placed over the lens. Optical systems based on pulsed LEDs measure the infrared light emitted by the LEDs rather than light reflected from markers.
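Once each camera sees a marker, the system recovers the marker's position by intersecting the viewing rays from two or more cameras. The sketch below shows that geometric core in two dimensions under heavy simplification (the function name is illustrative; real systems solve a calibrated multi-camera problem in 3D with noise handling):

```python
def triangulate_2d(cam_a, dir_a, cam_b, dir_b):
    """Intersect two viewing rays in the plane -- the basic geometric
    idea behind optical marker reconstruction.
    cam_*: camera position (x, y); dir_*: ray direction toward the marker.
    Solves cam_a + t*dir_a = cam_b + s*dir_b for t by Cramer's rule."""
    (ax, ay), (dx1, dy1) = cam_a, dir_a
    (bx, by), (dx2, dy2) = cam_b, dir_b
    denom = dx1 * dy2 - dy1 * dx2          # zero if the rays are parallel
    t = ((bx - ax) * dy2 - (by - ay) * dx2) / denom
    return (ax + t * dx1, ay + t * dy1)

# Two cameras 4 units apart, both looking inward at 45 degrees:
print(triangulate_2d((0.0, 0.0), (1.0, 1.0), (4.0, 0.0), (-1.0, 1.0)))  # (2.0, 2.0)
```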
Optical motion capture systems have the advantage of being very configurable (you can put markers on an elephant, on fabric, on baseballs or footballs, and so on). A large active area is possible, depending on budget and space limitations. Optical systems are useful for capturing gymnastic types of moves. Optical motion capture is most often done ‘out of house’ at specialty studios, and is very popular for sports-game animation as well as motion capture for film.
Mocap can provide substantial time savings for animation projects. Motion capture can make the animation process much easier, especially when trying to recreate realistic character animation, such as the interaction of multiple 3D characters, or characters engaged in sports activities. Simple ‘ambient’ animation, such as a character standing around doing nothing, is much easier (and more realistic) when captured than if these subtleties were animated by hand.
Game development is the largest market for motion capture. With games drawing as much revenue as movies, it is easy to see why game development often calls for enormous quantities of motion capture. The immense competition to produce the ‘coolest game possible’ means that greater production capability translates into higher quality: more time is left for aesthetic finishing touches and fine-tuning of gameplay.
Real-time motion is becoming popular for live television broadcasts. Motion capture can be used to place a virtual character within a real scene, or to place live actors within a virtual scene with virtual actors, or virtual characters within a virtual scene.
Motion capture for real-time broadcast requires mock-ups of any non-standard physiology (big stomachs, tails, etc.) to keep the performer’s motions from causing the character’s limbs to interpenetrate its body. Joint limits on the shoulders and knees (such as those found in Autodesk MotionBuilder) also help maintain the believability of the character. A real-time adaptation feature such as MotionBuilder’s real-time motion mapping (from the performer’s skeleton to a differently proportioned character’s skeleton) is essential when the character’s body is very different from the actor’s body.
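The essence of mapping a performance onto a differently proportioned skeleton is that joint rotations transfer directly, while translations must be rescaled to the new body. The sketch below is a deliberately naive illustration of that principle (the function name, frame layout and single leg-length scale factor are assumptions; MotionBuilder's retargeting is far more sophisticated):

```python
def retarget(frames, performer_leg_len, character_leg_len):
    """Naive retargeting sketch: copy joint rotations as-is, but
    rescale the root translation so a short-legged character does
    not 'skate' when driven by a tall performer's strides.
    frames: list of dicts {'rotations': {...}, 'root_pos': (x, y, z)}.
    """
    scale = character_leg_len / performer_leg_len
    out = []
    for f in frames:
        out.append({
            'rotations': dict(f['rotations']),                 # angles copy over
            'root_pos': tuple(c * scale for c in f['root_pos']),  # distances shrink/grow
        })
    return out
```

So a performer with 1 m legs driving a character with 0.5 m legs would have every root displacement halved, keeping the character's feet planted where its own stride length allows.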
When combining live elements with virtual elements, the real and virtual cameras must share the same properties (perspective, focal length, depth of field, etc.), otherwise the illusion looks strange.
Motion capture is being used more and more in films nowadays, particularly for characters in situations that would be impractical or too dangerous for real actors (such as the characters falling off the ship in Titanic). Motion capture was also used extensively in Titanic for ‘filler’ characters (fitted in between real actors), and in situations with virtual-camera fly-bys over a virtual ship.
Many of these shots would have been difficult or impossible to do with real cameras and a real ship, or real models, so virtual models, actors and cameras were used. Some film characters require the use of motion capture; otherwise their animation seems fake. More and more independent companies are starting to put together desktop studios.
The idea of two or three people creating an entire movie is not that far off, if motion capture is used correctly. The Gypsy capture suit is ideal for both small and large shops: motion capture animation can be done very quickly and inexpensively, without scheduling expensive motion capture sessions at a studio.
Other industries that use motion capture include the Web, live events, scientific research, biomechanical analysis, engineering, education and VR. The Gypsy is a commonly used setup; it is easy to use and transport, and works well in most environments.
The use of PhaseSpace optical motion capture combined with MotionBuilder makes this easy. Mocap is ideal for small set-ups and can prove cost-effective and quick, as opposed to paying expensive rentals to studios. Another newer solution is FacePro, a facial motion capture toolset plug-in used with the Vicon Blade system. SkinFlex is an additional feature that helps deal with nuances such as how the skin rolls as it stretches and moves, capturing every lip curl and purse.
Digital Concepts Group’s FacePro has helped take facial motion capture to the next generation of game, film, television and advertising media.
Alternatively, motion capture data can be purchased online, which is very common among small to medium-sized production and special-effects studios. Various file formats, e.g. .bvh, .bip and .fbx, can be read by most of the software applications commonly used by artists.
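Of these, .bvh is a plain-text format whose MOTION section is just a frame count, a frame time, and one line of channel values per frame. The sketch below reads that section from an already-loaded sequence of lines (a simplifying assumption; it skips the skeleton HIERARCHY entirely, which real importers must parse):

```python
def read_bvh_motion(lines):
    """Minimal reader for the MOTION section of a .bvh file.
    Returns (frame_time, frames), where each frame is a list of
    floats, one per joint channel. HIERARCHY parsing is omitted."""
    it = iter(lines)
    for line in it:
        if line.strip() == 'MOTION':
            break
    next(it)                                    # "Frames: N" header line
    frame_time = float(next(it).split(':')[1])  # "Frame Time: 0.0333..."
    frames = [[float(v) for v in line.split()] for line in it if line.strip()]
    return frame_time, frames

sample = [
    'HIERARCHY', 'MOTION',
    'Frames: 2', 'Frame Time: 0.033333',
    '0.0 90.0 0.0', '0.1 91.5 0.2',
]
print(read_bvh_motion(sample))
```

Pairing the channel values with the joints declared in the HIERARCHY section is what turns these raw numbers back into a moving skeleton.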
Autodesk MotionBuilder (www.autodesk.com)
NaturalPoint (www.naturalpoint.com)
MoCap for Artists: Workflow and Techniques for Motion Capture by Midori Kitagawa
Understanding Motion Capture for Computer Animation and Video Games by Alberto Menache
3D Game Animation for Dummies by Kelly L. Murdock
The Animator’s Motion Capture Guide: Organizing, Managing, Editing by Matt Liverman
Amitaabh Naaraayan is a 3D and SFX professional based in Dubai.