Deakin Motion.Lab Goes Face to Face with Real-time Virtual Production

Deakin Motion.Lab, a motion capture R&D facility based at Melbourne’s Deakin University, has created and continues to develop its own virtual production pipeline called Alchemy. It assists filmmakers working on CG animation productions that use motion capture, who frequently find it difficult to connect what they see live on the capture stage with how the performances will look in the finished project.

The virtual production pipeline transfers motion and facial capture data onto on-screen characters in real time. Directors can see how a CG character or creature will react while they are on the stage with the performer. If they want to change the motion or try a different move, they can adjust it straight away, instead of waiting for lab technicians to solve the data from the entire session and returning some time later for a re-shoot.

Building a Better Pipeline

Alchemy has now been refined to the point that clients who do not need a highly polished result can skip post-production altogether, which saves the production quite a bit of time and money. One of the most recent and significant refinements to the pipeline is the integration of facial capture. For body motion, the lab uses an OptiTrack optical motion capture setup, but the lab’s virtual production supervisor Peter Divers said that the lack of facial capture had been a notable deficiency in Alchemy’s early days.

‘The Adventures of Auntie Ada’ is an on-going project at Deakin Motion.Lab.

“All of our characters had a bland, static expression until post, when we would have a chance to add keyframe facial animation, because live facial was not available until relatively recently. When Faceware released their Live server, we were among the early customers and tried to stretch the limits of what facial capture can be used for,” he said. As Faceware has improved the software over time, more of the lab’s clients, such as those producing small-budget, short-form YouTube content, now publish directly from the motion capture stage and do not go through post.

Facial Capture in the Lab

Deakin streams data from Faceware Live and from the OptiTrack setup into the Unity game engine, which displays the results in real time on monitors set up on the stage. Faceware Live’s facial system is markerless, which makes it easier to use and quicker to set up than other equipment. For example, some facial systems’ software requires separate calibration processes for each performer and for each digital character onto which capture data will be retargeted. Such initial processing gives good results but takes a fair amount of time.
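
As a rough illustration of this streaming pattern, the sketch below receives per-frame expression weights over the network and hands them to a character-update routine. The port number and JSON message shape are assumptions made for the example, not Faceware Live’s or OptiTrack’s actual protocols, and the character hand-off is a stand-in for the work the game engine would do.

    # Minimal sketch of the real-time streaming pattern, under assumed formats.
    import json
    import socket

    HOST, PORT = "0.0.0.0", 9000  # hypothetical port the capture server streams to

    def apply_to_character(weights):
        # In a real setup this would drive blendshape channels on the rigged
        # character inside the game engine; here it just prints the hand-off.
        for channel, value in weights.items():
            print(f"{channel}: {value:.2f}")

    def listen_for_frames():
        """Receive per-frame facial expression weights and pass them to the character."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.bind((HOST, PORT))
        while True:
            packet, _addr = sock.recvfrom(65535)
            frame = json.loads(packet.decode("utf-8"))
            # Assumed message shape: {"timestamp": 12.34, "weights": {"jawOpen": 0.6, ...}}
            apply_to_character(frame["weights"])

    if __name__ == "__main__":
        listen_for_frames()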

Motion.Lab and their clients don’t always have time for preparation sessions before the shoot day, in which case Faceware is very practical. Peter said, “When we put a facecam on someone, the software calibrates to that person’s face very quickly. That is especially handy during long, busy shoots or when we are working with new talent.”

Faceware’s Live Driver SDK tracks the identifying features of the face on every frame, in real-time, and immediately analyses even very subtle facial expressions, allowing the character to match an actor’s precise facial movement and express emotion. Live Driver is a lightweight client application with low hardware requirements.

To calibrate, the Live server captures a single neutral expression from the performers, which the tracking system uses to ‘find’ and capture their performances. It captures almost 180 degrees of motion, allowing performers to move freely, and calibration can be kept consistent between sessions with the same actors. Using a detailed interface, users store and recall calibration frames as an overlay to the live feed, and toggle a grid for consistent camera placement and framing. Faceware Live also allows a group of actors to be calibrated at once.
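
The general idea behind single-neutral-frame calibration can be sketched as follows: store one neutral pose per performer, then measure each subsequent frame as an offset from it. This is a generic illustration of the approach rather than Faceware’s internal method; the class, data layout and landmark values are assumptions made for the example.

    # Illustrative sketch of single-frame neutral calibration (generic approach).
    import numpy as np

    class NeutralCalibration:
        """Expressions measured as per-feature offsets from a stored neutral pose."""

        def __init__(self):
            self.neutral = None  # (N, 2) array of tracked facial feature positions

        def calibrate(self, neutral_landmarks):
            # Store one neutral frame captured at the start of a session; reloading
            # the same stored frame keeps later sessions with the actor consistent.
            self.neutral = np.asarray(neutral_landmarks, dtype=float)

        def expression_offsets(self, landmarks):
            # Per-feature displacement from the neutral pose for the current frame.
            if self.neutral is None:
                raise RuntimeError("Calibrate with a neutral frame first.")
            return np.asarray(landmarks, dtype=float) - self.neutral

    # Usage: calibrate once per performer, then evaluate each live frame against it.
    cal = NeutralCalibration()
    cal.calibrate([[100.0, 120.0], [140.0, 118.0]])                  # neutral mouth corners (made-up values)
    print(cal.expression_offsets([[98.0, 126.0], [142.0, 125.0]]))   # offsets for a smile-like frame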

Connected Director

ABC Education series 'Minibeast Heroes'.

Deakin Motion.Lab recently used their pipeline on a six-episode series called ‘Minibeast Heroes’, produced for the Australian Broadcasting Corporation’s ABC Education program. The entire production lasted only a few months, mostly spent in pre-production before a one-week shoot at their studio, directed by Stefan Wernik of Armchair Productions. The presenter, science journalist Carl Smith, portrayed a character that shrinks down to explore the lives of tiny bugs, and wore motion capture gear for the shoot so that his movements could guide the animation of his digital model.

Using Faceware during the live capture gave the director and production team a good approximation of what the character would be doing on screen, including facial expressions, showing them how he would react and whether or not that would tell the right story.

A problem often encountered on animated children’s programming is that the performances seen on screen don’t actually work very well for kids, which may happen because, typically, directors are only able to influence the production at script level. They can storyboard frame by frame, but this is completed before the animation is outsourced, and no further input is possible until the finished animation comes back to the editing room.

Peter said, “The Alchemy pipeline has positively affected that process because the director sees it all while he or she is still on the stage, making performance choices with the cast. They also have the option to experiment, direct the performer and see how a new move or expression is going to change the scene. We’ve seen stories dramatically change on the motion capture stage.”

Dr Jordan Beth Vincent, research fellow at Deakin Motion.Lab, believes this approach creates a connection that is often missing between the performer and the director at the moment the character comes to life. “It takes the performer from just recording the ADR in a sound studio somewhere - perhaps on the other side of the world - to actually becoming the character. The process becomes a true collaboration and the story benefits.”

More R&D, More Applications

Real-time motion capture is not new or exclusive to Motion.Lab. The mocap teams at some of the major visual effects studios use their equipment in this way with game engines, and similar production-specific systems have been set up as previs tools on some movie sets. But Motion.Lab’s ability to use real-time capture and retargeting, for face and body, as part of production is exciting. The team has so far produced motion capture for video games and commercials, and interest in their services has since grown and diversified.

Deakin’s ongoing plan is to explore new techniques and capabilities in an R&D environment and meanwhile attract partners from other industries, such as health and engineering, to find applications for their developments.

At SIGGRAPH 2018, the Deakin team found a surge of interest in virtual production pipelines, and they feel encouraged that the work they are pursuing is catching on elsewhere. “It’s really exciting. We want to continue working with interesting industry partners who have stories to tell - we have a platform for those stories to be told in a faster, better, cheaper way,” said Dr Vincent. “We’re developing our own content now, thinking about the kind of stories that we want to tell.”

motionlab.deakin.edu.au