
disguise’s new integration with the Volinga.ai SaaS platform makes it possible for filmmakers and virtual production specialists to generate 3D environments very quickly. Users can capture real-world 2D content on a mobile phone, upload the footage to Volinga's AI Platform, and, through disguise’s RenderStream bi-directional protocol, translate it into immersive virtual scenes that anyone on set can interact with.

The workflow serves as a practical way to use a virtual production volume to recreate environments shot on location. Users can upload their footage via the new Volinga RenderStream plugin and modify the weather, atmosphere or other aspects of the scene – without having to recreate it in 3D from scratch. Scenes can easily be run in real-time with little optimisation needed, cutting down on lengthy pre-production work.

Volinga uses the uploaded images to train an AI model of the environment, generating an NVOL file (see below) that can be downloaded, imported into Unreal Engine and, because it is compatible with virtual production solutions like disguise, used in a virtual production volume.
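As a rough sketch of how that capture-to-NVOL round trip could be scripted, the Python below talks to a hypothetical REST API. The base URL, endpoint paths, field names and job states are all illustrative assumptions, not Volinga's documented interface.

```python
# Illustrative sketch only: the endpoints, payload fields and job states
# below are hypothetical stand-ins, not Volinga.ai's documented API.
import time
import requests

BASE_URL = "https://api.volinga.example"  # placeholder, not a real endpoint
HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}

def upload_capture(video_path: str) -> str:
    """Upload phone footage of a real-world scene and return a job ID."""
    with open(video_path, "rb") as f:
        resp = requests.post(f"{BASE_URL}/captures",
                             headers=HEADERS, files={"footage": f})
    resp.raise_for_status()
    return resp.json()["job_id"]

def wait_for_training(job_id: str, poll_seconds: int = 30) -> None:
    """Block until the AI model trained on the capture is ready."""
    while True:
        status = requests.get(f"{BASE_URL}/captures/{job_id}",
                              headers=HEADERS).json()["status"]
        if status == "ready":
            return
        if status == "failed":
            raise RuntimeError("training failed")
        time.sleep(poll_seconds)

def download_nvol(job_id: str, out_path: str) -> None:
    """Fetch the resulting NVOL file for import into Unreal Engine."""
    resp = requests.get(f"{BASE_URL}/captures/{job_id}/nvol", headers=HEADERS)
    resp.raise_for_status()
    with open(out_path, "wb") as f:
        f.write(resp.content)

job = upload_capture("location_scan.mp4")
wait_for_training(job)
download_nvol(job, "location_scan.nvol")
```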


This workflow uses NeRFs (neural radiance fields), a technique that generates 3D representations of an object or scene from 2D images using machine learning. It encodes the entire scene into an artificial neural network, which predicts the colour and density of light at any point in the scene so that new views can be rendered from different angles. Volinga’s workflow is based on a robust pipeline for 3D content generation, designed to overcome common bottlenecks in VFX and virtual production workflows, such as manual 3D asset creation and photogrammetry.
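To make the technique concrete, here is a minimal numerical sketch of generic NeRF-style volume rendering, not Volinga's actual model: a stand-in "network" maps a 3D point and view direction to colour and density, and samples along each camera ray are alpha-composited into a pixel.

```python
# Minimal NeRF-style volume rendering sketch (generic formulation,
# not Volinga's model). A stand-in "network" maps a 3D point and view
# direction to (RGB colour, volume density); samples along each camera
# ray are alpha-composited front to back into a pixel colour.
import numpy as np

def field(points, view_dir):
    """Stand-in for the trained MLP: returns (rgb, sigma) per sample.
    In a real NeRF this is a neural network fitted to the 2D captures."""
    rgb = 0.5 * (1.0 + np.sin(points))                 # fake colour in [0, 1]
    sigma = np.exp(-np.linalg.norm(points, axis=-1))   # fake density
    return rgb, sigma

def render_ray(origin, direction, near=0.1, far=5.0, n_samples=64):
    """Composite colour/density samples along one ray into a pixel."""
    t = np.linspace(near, far, n_samples)              # sample depths
    points = origin + t[:, None] * direction           # 3D sample positions
    rgb, sigma = field(points, direction)

    delta = np.diff(t, append=far)                     # spacing between samples
    alpha = 1.0 - np.exp(-sigma * delta)               # opacity per sample
    # Transmittance: probability the ray reaches each sample unoccluded.
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))
    weights = alpha * trans
    return (weights[:, None] * rgb).sum(axis=0)        # final pixel colour

pixel = render_ray(origin=np.zeros(3), direction=np.array([0.0, 0.0, 1.0]))
print(pixel)  # RGB value for this one ray
```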

The use of NeRFs means the Volinga Suite pipeline allows efficient creation of 3D environments with full parallax, which is essential for conveying depth as the perspective changes. DPs can move the camera freely on a virtual production set, without restrictions.
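That freedom follows from how NeRFs are rendered: each tracked camera pose simply defines a fresh bundle of rays to trace through the scene, so correct parallax comes for free. The sketch below derives those rays from a pose matrix and pinhole intrinsics, and would feed the render_ray function above; the pose here is an illustrative placeholder, since on a real stage it would come from the camera tracking system.

```python
# Sketch: turning a tracked camera pose into rays for the renderer above.
# The 4x4 camera-to-world matrix and intrinsics are illustrative; on a
# real stage they would come from the camera tracking system each frame.
import numpy as np

def camera_rays(c2w, width, height, focal):
    """Build one ray (origin, direction) per pixel for a pinhole camera."""
    i, j = np.meshgrid(np.arange(width), np.arange(height))
    # Directions in camera space (y up, camera looking along -z).
    dirs = np.stack([(i - width / 2) / focal,
                     -(j - height / 2) / focal,
                     -np.ones_like(i, dtype=float)], axis=-1)
    # Rotate into world space and normalise; origin is the camera position.
    world_dirs = dirs @ c2w[:3, :3].T
    world_dirs /= np.linalg.norm(world_dirs, axis=-1, keepdims=True)
    return c2w[:3, 3], world_dirs

# Each frame, the tracked pose changes and a fresh set of rays is rendered,
# which is what produces correct parallax as the physical camera moves.
pose = np.eye(4)            # placeholder pose from the tracking system
origin, dirs = camera_rays(pose, width=8, height=8, focal=10.0)
```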

RenderStream connects the physical stage with the virtual set by integrating disguise hardware, software, content engines and camera tracking. RenderStream also scales productions by distributing rendering across multiple render nodes using cluster rendering, which provides enough processing headroom to preserve detail in the final output.
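Conceptually, cluster rendering divides each frame into regions that separate render nodes compute in parallel before the results are recombined for output. The Python below illustrates that split-render-stitch idea using multiprocessing workers as stand-ins for render nodes; it is a conceptual sketch, not RenderStream's actual protocol or scheduler.

```python
# Conceptual sketch of cluster rendering: split a frame into horizontal
# strips, render each strip on a separate worker ("render node"), then
# stitch the strips back into a full frame. Illustrates the idea only;
# this is not RenderStream's wire protocol or scheduling.
import numpy as np
from multiprocessing import Pool

WIDTH, HEIGHT, NODES = 1920, 1080, 4

def render_strip(bounds):
    """Pretend render node: fill its strip of the frame with a gradient."""
    y0, y1 = bounds
    strip = np.zeros((y1 - y0, WIDTH, 3))
    strip[..., 0] = np.linspace(0, 1, y1 - y0)[:, None]  # stand-in shading
    return y0, strip

if __name__ == "__main__":
    rows = np.linspace(0, HEIGHT, NODES + 1, dtype=int)
    jobs = list(zip(rows[:-1], rows[1:]))        # one strip per render node
    frame = np.zeros((HEIGHT, WIDTH, 3))
    with Pool(NODES) as pool:
        for y0, strip in pool.map(render_strip, jobs):
            frame[y0:y0 + strip.shape[0]] = strip  # reassemble full frame
```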

“With this new integration, disguise and Volinga users can create and navigate 3D environments for virtual production, dramatically reducing the time and effort required in creating 3D environments,” said Volinga Co-Founder Fernando Rivas-Manzaneque.


So far, feedback from over 700 beta users has been positive, noting a significant reduction in the time and budget needed to create photorealistic environments. For example, users could capture a scene on an overcast day and load the resulting NeRF into content engines like Unreal Engine. disguise is aiming to include native support for 3D generative AI in its software, easing the process of capturing, manipulating and visualising 3D environments.

To begin, users can download the disguise RenderStream plugin for Volinga, upload images or videos of a 3D environment to the Volinga.ai platform, and convert them into a NeRF. This will be in the form of an NVOL file that can be rendered in both Unreal Engine and disguise RenderStream software for use in virtual sets. www.disguise.one