Vicon Shōgun 1.3 Brings Real-Time Body Tracking Down to the Fingertips
Vicon’s upcoming release of Shōgun 1.3 will include precise finger solving, so that artists can create and view fully animated characters in real time, saving time and helping to control costs. Users will also be able to stream their own character data directly into game engines and, with new support for Pixar’s Universal Scene Description (USD) format, create content for mobile devices and augmented reality applications.
The addition of finger solving in Shōgun 1.3 means that users can record the entire body – from skeletal movements to the smallest hand gestures – making characters suitable for projects ranging from blockbuster movies to AAA games. Capturing full five-finger motion data has long been a top priority for animators. To achieve the necessary level of finger solving, Vicon partnered with the VFX studio Framestore.
Finger Precision with Framestore
Framestore’s virtual production supervisor Richard Graham said, “For several years we’ve worked with Vicon on many different projects. Because artists have been waiting for years for accurate real-time finger solving, we welcomed the opportunity to work with their team to make it a part of Shōgun. We have already deployed it successfully on a number of projects, and given that it is part of the whole-body solve, it fits straight into our real-time and offline pipelines.”
Vicon and Framestore collaborated on the project for over 18 months, basing their work on a dense 58-marker set capable of tracking subtle finger and knuckle movements. Alternatively, the number of markers can be reduced and the resulting data combined in real time with the user’s digital rig, producing a fully animated digital character capable of distinctive, intricate movement, from handwriting to playing an instrument. A process that used to take weeks of manual animation can now be completed almost instantly.
Game Engine Retargeting Pipeline and Workflow
Alongside finger solving, Shōgun 1.3 users can record data directly into either Unreal Engine 4 or the Unity game engine without needing an intermediary application. Users can retarget a performance onto any FBX skeleton while still in Shōgun and, within a few seconds, see a full-performance digital avatar animating their character inside the game engine.
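Vicon hasn’t published the internals of its retargeting step, but the core idea can be sketched in a few lines: copy each solved joint’s local rotation onto a correspondingly named joint in the target FBX skeleton, compensating for differing rest poses. The joint names and mapping below are hypothetical, and production retargeters also handle differing proportions and joint counts.

```python
# Minimal sketch of name-based skeleton retargeting, assuming both rigs
# store per-joint local rotations as (w, x, y, z) quaternions. The joint
# names and source-to-target map are hypothetical, not Vicon's conventions.
import numpy as np

JOINT_MAP = {
    "Hips": "pelvis",
    "Spine": "spine_01",
    "LeftHand": "hand_l",
    "LeftHandIndex1": "index_01_l",
}

def quat_mul(a, b):
    """Hamilton product of two (w, x, y, z) quaternions."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return np.array([
        aw * bw - ax * bx - ay * by - az * bz,
        aw * bx + ax * bw + ay * bz - az * by,
        aw * by - ax * bz + ay * bw + az * bx,
        aw * bz + ax * by - ay * bx + az * bw,
    ])

def retarget_frame(source_rotations, rest_offsets):
    """Copy each mapped joint's local rotation onto the target rig,
    pre-multiplying by the target joint's rest-pose offset so rigs with
    different bind poses still line up (a common, simplified approach)."""
    target_rotations = {}
    identity = np.array([1.0, 0.0, 0.0, 0.0])
    for src_name, dst_name in JOINT_MAP.items():
        if src_name not in source_rotations:
            continue  # joint missing this frame, e.g. an occluded finger
        q_src = source_rotations[src_name]
        q_off = rest_offsets.get(dst_name, identity)
        target_rotations[dst_name] = quat_mul(q_off, q_src)
    return target_rotations

# Usage: identity rest offsets, a single solved frame.
frame = {"Hips": np.array([1.0, 0.0, 0.0, 0.0]),
         "LeftHand": np.array([0.707, 0.0, 0.707, 0.0])}
print(retarget_frame(frame, rest_offsets={}))
```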
From there, artists can shift to a virtual production workflow, with a 3D environment as a background, changing settings on the fly and altering the scene in real time as needed. This blends film and game development into a single process, helping to deliver complex, realistic projects in far less time than traditional workflows.
This process is possible because Shōgun 1.3 can export skeletal data using the new Universal Scene Description (USD) format. In fact, Shōgun 1.3 is currently the only motion capture platform able to export USD, a file type used by major VFX companies around the world, including ILM, Framestore and Pixar. Users can export data directly to iOS devices and view it on Apple hardware, and the release also supports Apple’s ARKit for use in augmented reality projects.
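Shōgun’s own exporter is proprietary, but Pixar’s open-source USD Python API (pxr) shows what an animated skeleton looks like on disk. Below is a minimal sketch using a hypothetical two-joint hierarchy; Apple’s AR Quick Look consumes the same data once packaged as .usdz.

```python
# Minimal sketch of writing an animated skeleton to USD with Pixar's
# open-source pxr API. The two-joint hierarchy and values are hypothetical;
# this only illustrates the file format Shogun 1.3 exports into.
from pxr import Usd, UsdGeom, UsdSkel, Gf, Vt

stage = Usd.Stage.CreateNew("performer.usda")
UsdGeom.SetStageUpAxis(stage, UsdGeom.Tokens.y)
stage.SetStartTimeCode(1)
stage.SetEndTimeCode(2)

UsdSkel.Root.Define(stage, "/Performer")
skel = UsdSkel.Skeleton.Define(stage, "/Performer/Skel")
joints = ["Hips", "Hips/Spine"]
skel.GetJointsAttr().Set(joints)
skel.GetRestTransformsAttr().Set(
    Vt.Matrix4dArray([Gf.Matrix4d(1.0)] * len(joints)))

# Joint animation lives in a UsdSkelAnimation prim bound to the skeleton.
anim = UsdSkel.Animation.Define(stage, "/Performer/Skel/Anim")
anim.GetJointsAttr().Set(joints)
for frame, lift in ((1, 0.0), (2, 5.0)):
    anim.GetTranslationsAttr().Set(
        Vt.Vec3fArray([Gf.Vec3f(0, 90 + lift, 0), Gf.Vec3f(0, 10, 0)]),
        time=frame)
    anim.GetRotationsAttr().Set(
        Vt.QuatfArray([Gf.Quatf(1, 0, 0, 0)] * len(joints)), time=frame)
    anim.GetScalesAttr().Set(
        Vt.Vec3hArray([Gf.Vec3h(1, 1, 1)] * len(joints)), time=frame)

binding = UsdSkel.BindingAPI.Apply(skel.GetPrim())
binding.CreateAnimationSourceRel().AddTarget(anim.GetPath())
stage.Save()
```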
USD matters to 3D artists and productions because of the way CG pipelines for movies and games are built. Pipelines generate massive amounts of diverse 3D data used in scene description, and each application in a pipeline – modelling, shading, animation, lighting, FX, rendering – has its own proprietary form of scene description designed for its own purposes, unreadable and uneditable by any other application.
USD makes it possible to interchange assets and animations; to assemble and organise assets into virtual sets, scenes and shots; to transmit them between applications; and to edit them non-destructively – all with a single, consistent API operating on a single scenegraph.
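The non-destructive part is concrete: in pxr, a shot can reference a published asset and layer its own overrides on top without ever touching the asset file. A minimal sketch, with hypothetical file and prim names:

```python
# Minimal sketch of USD's non-destructive editing model using pxr.
# File names are hypothetical. The shot references the published asset
# and overrides it in its own layer; "asset.usda" is never modified.
from pxr import Usd, UsdGeom

# 1. A department publishes an asset.
asset = Usd.Stage.CreateNew("asset.usda")
sphere = UsdGeom.Sphere.Define(asset, "/Asset/Ball")
sphere.GetRadiusAttr().Set(1.0)
asset.GetRootLayer().Save()

# 2. A shot assembles the asset by reference and edits it non-destructively.
shot = Usd.Stage.CreateNew("shot.usda")
ball = shot.OverridePrim("/Shot/Ball")
ball.GetReferences().AddReference("asset.usda", "/Asset/Ball")

# The override is an "opinion" in the shot layer; the composed stage
# resolves it as the strongest value, while asset.usda keeps radius 1.0.
UsdGeom.Sphere(ball).GetRadiusAttr().Set(2.0)
shot.GetRootLayer().Save()

print(UsdGeom.Sphere(ball).GetRadiusAttr().Get())  # 2.0 on the composed stage
```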
Camera Masking and Automated Clean-up
The Shōgun platform is used through two separate applications. Shōgun Live handles system set-up, actor calibration, and the capture and recording of data; it now includes multi-machine support that scales processing across a number of PCs, improving performance for large captures. Camera mask painting has also been added, so that background noise such as reflective objects and unwanted light sources can be selectively blocked out within the camera grid, creating a mask that is applied during capture.
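Vicon hasn’t detailed how mask painting works internally, but conceptually a camera mask is a per-pixel grid marking regions – fixed reflections, stray lights – to ignore during marker detection. A toy NumPy illustration of that idea, with hypothetical thresholds:

```python
# Toy illustration of per-camera mask painting: pixels flagged True in the
# mask are ignored during marker detection. Grid size and thresholds are
# hypothetical, not Vicon's actual parameters.
import numpy as np

def build_static_mask(background_frames, threshold=200.0):
    """Flag pixels that stay bright across empty-volume frames, which
    suggests a fixed reflection or stray light rather than a marker."""
    # background_frames: (num_frames, height, width) grayscale intensities
    return background_frames.min(axis=0) > threshold

def detect_markers(frame, mask, threshold=200.0):
    """Return (row, col) coordinates of bright pixels outside the mask."""
    return np.argwhere((frame > threshold) & ~mask)

# Usage: film the empty volume briefly, build the mask once, apply it live.
rng = np.random.default_rng(0)
empty = rng.uniform(0, 50, size=(10, 480, 640))
empty[:, 100:110, 200:210] = 255            # persistent reflection
mask = build_static_mask(empty)

live = rng.uniform(0, 50, size=(480, 640))
live[100:110, 200:210] = 255                # reflection still present
live[300:303, 400:403] = 255                # an actual marker blob
print(len(detect_markers(live, mask)))      # 9: only the marker pixels
```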
Shōgun Post, where mocap data is loaded, checked, cleaned and exported, gains a new gap-list function that first identifies individual performers and then isolates a single performer. From there, Shōgun identifies any gaps in that performer’s movements and automatically fills them based on the expected motion: if a waving hand is missing a marker on a finger, the software reconstructs that finger, or fills a gap in a moving fist, sparing artists the time of hunting for and individually filling gaps.
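Vicon’s fill is driven by the expected movement of the body, but the simplest version of the idea is interpolating a marker’s trajectory across the missing frames. A minimal sketch, with NaN marking occluded samples:

```python
# Minimal sketch of filling gaps in a marker trajectory by cubic
# interpolation. Shogun's actual fill is informed by the body model;
# this only shows the underlying idea of reconstructing missing frames.
import numpy as np
from scipy.interpolate import CubicSpline

def fill_gaps(trajectory):
    """trajectory: (num_frames, 3) marker positions, NaN where occluded."""
    filled = trajectory.copy()
    frames = np.arange(len(trajectory))
    valid = ~np.isnan(trajectory).any(axis=1)
    spline = CubicSpline(frames[valid], trajectory[valid])
    filled[~valid] = spline(frames[~valid])
    return filled

# Usage: a finger marker dropping out for frames 3-4 of a wave.
traj = np.array([[0.0, 0.0, 0.0],
                 [1.0, 0.5, 0.0],
                 [2.0, 1.0, 0.1],
                 [np.nan, np.nan, np.nan],
                 [np.nan, np.nan, np.nan],
                 [5.0, 2.0, 0.4]])
print(fill_gaps(traj))
```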
Demos and Beta Stages
At SIGGRAPH 2019, Vicon demonstrated the Shōgun workflow by letting attendees direct their own animated movie starring live actors, in real time. The actors worked with the attendees using various virtual production tools, including real-time capture with Apple’s ARKit.
All data was captured in Shōgun 1.3 and streamed into Epic Games’ Unreal Engine – see the workflow description above. The actors appeared as animated characters within a scene populated with digital assets borrowed from the Epic Marketplace.
Shōgun 1.3 is currently in closed beta with users including Electronic Arts, Framestore, ILM, Pixar, Ubisoft and others. The public beta will be available in September, with the full version expected later in 2019. www.vicon.com