Facial Mo Cap Puts Viewers Face to Face with a VR World

VRWERX, a Los Angeles company that creates complete VR experiences, from concept development through to distribution, recently worked with Paramount Pictures to develop ‘Paranormal Activity: The Lost Soul’. This immersive POV gaming experience extends the studio’s ‘Paranormal Activity’ horror movie series.

As players begin the game, they explore a gloomy, abandoned house. To gradually build up tension as they progress, various tricks are employed - mirrors shatter, lights flicker and doors thud, all leading up to an inevitable jump scare climax. The spooked players’ specific head movements are detected inside the headsets and generate animation triggers that cause human-like creatures to lurch toward them from the shadows.
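
A minimal sketch of how such a head-movement trigger might work, assuming a generic VR runtime that exposes the headset’s position and forward vector. Every name here is hypothetical, not taken from VRWERX’s actual implementation:

```python
# Illustrative only: fire an animation trigger when the player's gaze
# lines up with a scare point. Assumes a generic VR API that reports
# the headset pose; all names here are hypothetical.
import numpy as np

def looking_at(hmd_position, hmd_forward, target_position, threshold=0.9):
    """True when the headset's forward vector points within roughly
    25 degrees of the target (cos 25.8 degrees is about 0.9)."""
    to_target = target_position - hmd_position
    to_target = to_target / np.linalg.norm(to_target)
    return float(np.dot(hmd_forward, to_target)) > threshold

# e.g. only lurch out of the shadows once the player turns toward the doorway:
# if looking_at(hmd.position, hmd.forward, doorway_position):
#     trigger_animation("hallway_creature_lurch")
```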

Realism or Speed – or Both

VRWERX had two concerns: achieving enough realism to differentiate their experience within the crowded horror market, and being able to iterate quickly within a virtual reality pipeline.

Bradan Dotson, VRWERX production manager, said, “Characters will interact with you in a traditional game on one level, but when you’re immersed in a 360-degree virtual environment and the characters move right up to the screen, you notice many details that you wouldn’t see otherwise. We needed the facial expressions and emotions to be really on point to make sure those performances came through for people.”

Consequently, VRWERX decided to use motion capture for the facial animation, and adopted Faceware’s ProHD Headcam hardware and Analyzer and Retargeter software for this work. Faceware systems have been used for gaming and other kinds of entertainment projects ranging from the ‘NBA 2K’ titles to recent releases of the ‘Call of Duty’ franchise to feature films. While performing their roles, the actors wear the ProHD Headcam, which captures and records the motion data, and then an animation team uses Analyzer and Retargeter software to process the data and transfer it to digital characters.

Pixel System

Instead of monitoring markers via an array of fixed cameras, a microcamera inside the head rig captures HD video of the actor’s facial performance. The fixed frame rate, interchangeable-lens camera has an HD sensor and glass optics for colour, sharpness and latitude, and outputs an HD-SDI signal usable in typical production pipelines. VRWERX’s team can record to regular codecs like ProRes or DNxHD onto a deck or capture card. Because they are not relying on a camera array, the actors can work in any size of volume, and the rig has its own key lighting built in so they can look in any direction on stage.

Faceware’s tracking software, Analyzer, follows the recorded motion over time and, through a pixel-based sampling approach, converts the video into extremely detailed facial motion files ready to be transferred to a rigged digital character for animation. It works on the premise that every human face is similar, and through a series of assumptions identifies features common to all faces – including distinctive emoting areas such as the upper cheeks, lower eyelids and jaw position that are difficult to capture accurately.
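
Faceware’s tracker is proprietary, but the general idea of pixel-based, markerless tracking can be sketched with standard tools. The following illustrative example uses OpenCV’s Lucas-Kanade optical flow as a stand-in: it seeds points on facial texture and follows them from frame to frame, producing a per-frame motion record loosely analogous to Analyzer’s motion files:

```python
# Illustrative only: markerless tracking of facial texture with optical flow.
# A generic stand-in for the idea, not Faceware's actual algorithm.
import cv2
import numpy as np

cap = cv2.VideoCapture("headcam_take.mov")     # hypothetical head-rig clip
ok, frame = cap.read()
prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# Seed points on high-contrast facial texture (pores, creases, lash lines)
points = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                 qualityLevel=0.01, minDistance=7)

trajectories = [points.reshape(-1, 2)]
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Follow each texture patch from the previous frame into this one
    points, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, points, None)
    trajectories.append(points.reshape(-1, 2))
    prev_gray = gray

# A (frames, points, 2) array of positions over time - in spirit, the kind
# of dense motion record that gets retargeted onto a rig afterwards.
motion = np.stack(trajectories)
```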

From there, Retargeter transfers the captured data to the facial rig through a plug-in for Autodesk Maya, 3ds Max or MotionBuilder, whichever package the rig was built in, bringing the actor’s performance into the game world. To gain the control and flexibility needed to create convincing facial animation, the animator can combine facial poses from a library with the movement data. The software’s region-based control, which keeps the eyes, eyebrows, mouth and so on independent from one another, gives the animator a higher degree of precision and a chance to increase detail and realism.
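
As an illustration of what region-based, pose-driven control means in practice, here is a minimal sketch in Python. The pose library, region names and control values are all invented for the example; Retargeter’s own data structures are not public:

```python
# Illustrative only: blending library poses per facial region, so that
# adjusting one region never disturbs another. All names are hypothetical.
POSE_LIBRARY = {
    "brows": {"raise":  {"brow_L_up": 1.0, "brow_R_up": 1.0}},
    "eyes":  {"blink":  {"blink_L": 1.0, "blink_R": 1.0}},
    "mouth": {"open":   {"jaw_open": 1.0},
              "smile":  {"lip_corner_L": 0.8, "lip_corner_R": 0.8}},
}

def solve_frame(weights_by_region):
    """Turn per-region pose weights into rig-control values for one frame."""
    controls = {}
    for region, pose_weights in weights_by_region.items():
        for pose, weight in pose_weights.items():
            for control, value in POSE_LIBRARY[region][pose].items():
                controls[control] = controls.get(control, 0.0) + weight * value
    return controls

# e.g. a frame where the tracker reads a half-open, slightly smiling mouth
# and a full blink - the brows stay untouched:
frame = solve_frame({"mouth": {"open": 0.5, "smile": 0.3},
                     "eyes":  {"blink": 1.0}})
```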

Precision Rigging

You do not need to build the rigs in a special way – if they can be used for keyframing, they can be driven by Retargeter. During the Character Setup process you define areas of the face as groups. Each group has an associated set of attributes - bones, joints, blendshapes, morph targets, deformers and so on - that indicates to Retargeter where to apply the facial motion tracking data.
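
Since any keyframable rig qualifies, the end result of a transfer can be pictured as ordinary keyframes landing on ordinary rig attributes. A minimal sketch using Maya’s Python API, with invented group and attribute names:

```python
# Illustrative only: driving keyable rig attributes with solved values,
# grouped by facial area. The rig and attribute names are invented.
import maya.cmds as cmds

FACE_GROUPS = {
    "jaw":   ["faceRig.jaw_open"],
    "brows": ["faceRig.brow_L_up", "faceRig.brow_R_up"],
    "eyes":  ["faceRig.blink_L", "faceRig.blink_R"],
}

def apply_frame(frame, values_by_group):
    """Key each group's attributes with the solved values for one frame."""
    for group, values in values_by_group.items():
        for attr, value in zip(FACE_GROUPS[group], values):
            cmds.setKeyframe(attr, time=frame, value=value)

# e.g. apply_frame(12, {"jaw": [0.7], "eyes": [0.1, 0.1]})
```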

You define which attributes you want to affect each area of the face. Once the values of these attributes are set with Retargeter, you can animate. Because the data is applied very quickly, iterations are fast and efficient, and functions such as keyframe reduction, smoothing and a master timing tool make working with the data easier.
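
To give a sense of what cleanup passes like these do, here are generic stand-ins - a moving-average smooth and an error-threshold key reduction - written over a plain list of per-frame values. They illustrate the concepts only, not Faceware’s own implementations:

```python
# Illustrative only: generic versions of two common cleanup passes.
def smooth(curve, radius=2):
    """Moving-average smoothing over a list of per-frame values."""
    out = []
    for i in range(len(curve)):
        window = curve[max(0, i - radius): i + radius + 1]
        out.append(sum(window) / len(window))
    return out

def reduce_keys(curve, tolerance=0.01):
    """Drop frames that linear interpolation from the last kept key
    already predicts to within 'tolerance'."""
    keys = [(0, curve[0])]
    for i in range(1, len(curve) - 1):
        f0, v0 = keys[-1]
        t = (i - f0) / (i + 1 - f0)
        predicted = v0 + t * (curve[i + 1] - v0)
        if abs(curve[i] - predicted) > tolerance:
            keys.append((i, curve[i]))
    keys.append((len(curve) - 1, curve[-1]))
    return keys

# e.g. a noisy jaw channel becomes a handful of editable keys:
jaw = smooth([0.0, 0.05, 0.4, 0.45, 0.9, 0.88, 0.9, 0.5, 0.1, 0.0])
print(reduce_keys(jaw, tolerance=0.02))
```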

"This process is a good way to accelerate the facial animation pipeline while still capturing detailed facial performances," Bradan said. "Running everything through the Analyzer and Retargeter usually results in great translation from the start, which is extremely helpful. In some cases, it allows us to jump straight to the polishing stage, minimizing the amount of key frame work required.

“When you’re working in VR, ease of use is vital. The player becomes immersed to the point where they feel as if they’re actually standing next to this character. Since you don’t want to shatter that, quick iteration during development – exploring the most terrifying options for the scene – is important.”

User Perspective in a 3D Environment

Peter Busch, VP of Business Development at Faceware, feels that believable facial performance is important in virtual reality for the same reason that any single element of the experience is important. “Engaging with stories relies on drawing emotional, character-driven, nuanced performance from the faces of the characters. Faceware’s system is built to capture an entire facial performance, including micro expressions, and display those subtleties in convincing ways,” he said. “If the developer is familiar with Faceware, then facial animation for VR will be similar to what they already produce for mobile, desktop or console games.

“What changes is the user perspective and the 3D environment – the developer needs to take that into consideration when developing the game. However, the ultimate output of Faceware’s technology is always facial movement, which remains consistent throughout, and it’s still important to hire talented actors who can draw on their acting skills in imaginary worlds like VR, delivering the best facial performance possible.”

Pipeline Integration

The Faceware system is also designed to integrate directly into many different industry pipelines – VR among them – which is why it is used in both games and films. One contributing factor is markerless performance recording. Peter said, “The pixel-based approach samples and tracks the entire face, including the subtle eye movement that gives soul and emotion to any character. Rather than limiting the quality of the data by tracking points on the face, the Faceware system tracks textures and features to create very detailed output. The software can handle anything from stylized to photo-real characters.

“Markers can also cause technical issues such as occlusion or coming loose during performance capture, and they don’t work as well for two key elements of convincing CG animation - eyes and lip sync - since markers can’t be placed on the eyes or inner lips. Many users choose to apply ink markers instead, which can provide good frame-to-frame reference about what is actually moving on the performer’s face, and also help with consistency for users of the data.”

Pose-Based Workflow

Another distinctive aspect of Faceware’s system is the pose-based workflow described above, which feels familiar to animators and makes it easier to focus on the animation rather than hand-keying frames. “Recently, we worked with EA to improve their batch processing and create an automated pipeline that their entire team can use, enabling them to produce an average of 30,000 to 45,000 seconds of animation per month,” said Peter.
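
The shape of such an automated pass is straightforward to picture. In this minimal sketch, analyze and retarget are hypothetical stand-ins for the tracking and transfer steps - they are not Faceware’s actual API:

```python
# Illustrative only: an automated batch pass over a folder of capture takes.
from pathlib import Path

def analyze(video_path):
    """Stand-in for the tracking step: video in, motion data out."""
    print(f"tracking {video_path.name}")
    return {}  # placeholder motion data

def retarget(motion, rig):
    """Stand-in for the transfer step: motion data onto a named rig."""
    print(f"retargeting onto {rig}")

for take in sorted(Path("captures").glob("*.mov")):
    retarget(analyze(take), rig="hero_face")
```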

Faceware has been used in productions for more than a decade, leading to the development of functionality that emphasizes efficiency and flexibility. A pose library can be shared across a full team of animators, used by one animator to produce many shots, or even built into the kind of customized, highly automated workflows that customers like EA are creating.
 
Peter said, “What has been interesting for me personally is to see how our company can follow the changes taking place across AAA games, VFX, feature film, television, VR, AR and indie development. By spotting these trends we can keep our products working closer to the cutting edge. Many people have preconceived notions of motion capture. We aim to break these by balancing the demands of users who look for absolute realism from performance capture with the requirements of creative users who want to apply the relative motion of the performance capture to their characters in an artistic and character-friendly way.” facewaretech.com