NVIDIA, ILM and GPL Put the Force behind Render Pipelines
Lucasfilm’s ILMxLAB was created in 2015 to build experiences for new immersive platforms. After releasing a VR experiment, ‘Trials on Tatooine’, ILMxLAB recently announced a partnership with mixed-reality company Magic Leap to create original Star Wars-related content. The Lab is also working on third-party VR experiences using ILM’s traditional VFX pipeline in combination with real-time rendering, and experimenting with new systems using its own content.
Magic Leap is developing a head-mounted virtual retinal display that superimposes 3D CG imagery over real-world objects by projecting a digital light field directly onto the retina of the eye. To do this, the company is aiming to construct a special type of light-field chip.
ILMxLAB artists are using HP Z Workstations fitted with NVIDIA Quadro GPUs to work faster. Using NVIDIA VRWorks VR SLI, which allows multiple GPUs to be assigned to specific stereo ‘eyes’, they say they’ve accelerated stereo rendering markedly. To render a VR frame in stereo, the GPU must render the same scene from two different eye positions. A normal application using only one GPU must render these two images sequentially, which means twice the CPU and GPU workload.
With the OpenGL multicast extension, it is possible to upload the same scene to two different GPUs and render it from two different viewpoints with a single OpenGL rendering stream. This distributes the rendering workload across two GPUs and avoids the CPU overhead of sending the rendering commands twice. It is a simple way to achieve measurable acceleration.
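Concretely, the extension exposes GL_NV_gpu_multicast entry points that broadcast one command stream to both GPUs while still allowing per-GPU data uploads, such as each eye’s view matrix. The C++ sketch below shows one plausible shape for the per-frame loop; it is illustrative rather than ILMxLAB’s actual code, and the helpers (drawScene(), sceneUBO, eyeTex, rightTex) are assumed to be created elsewhere.

```cpp
// Illustrative per-frame stereo loop using GL_NV_gpu_multicast (not
// ILMxLAB's actual code). Assumes a GL context on a VR SLI-linked pair
// of GPUs and a loader such as GLEW exposing the NV entry points.
#include <GL/glew.h>

static const GLbitfield GPU0 = 1 << 0;  // bit mask selecting the first GPU
static const GLbitfield GPU1 = 1 << 1;  // bit mask selecting the second GPU

void drawScene();  // hypothetical: binds sceneUBO, draws into eyeTex via an FBO

void renderStereoFrame(GLuint sceneUBO,
                       const float leftView[16], const float rightView[16],
                       GLuint eyeTex, GLuint rightTex,
                       GLsizei width, GLsizei height)
{
    // 1. Upload a *different* view matrix to each GPU through the same
    //    buffer object; this is what lets one command stream yield two eyes.
    glMulticastBufferSubDataNV(GPU0, sceneUBO, 0, 16 * sizeof(float), leftView);
    glMulticastBufferSubDataNV(GPU1, sceneUBO, 0, 16 * sizeof(float), rightView);

    // 2. Issue the scene's draw calls once; both GPUs execute them in
    //    parallel, each rendering its own eye into its copy of eyeTex.
    glRenderGpuMaskNV(GPU0 | GPU1);
    drawScene();

    // 3. Pull GPU 1's right-eye image across to GPU 0, which then holds
    //    both eyes (eyeTex = left, rightTex = right) for the HMD submit.
    glMulticastBarrierNV();   // wait for GPU 1 to finish rendering
    glMulticastCopyImageSubDataNV(1 /*srcGpu*/, GPU0 /*dstGpuMask*/,
                                  eyeTex,   GL_TEXTURE_2D, 0, 0, 0, 0,
                                  rightTex, GL_TEXTURE_2D, 0, 0, 0, 0,
                                  width, height, 1);
    glMulticastBarrierNV();   // make the copy visible before compositing
}
```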
ILMxLAB Principal Engineer and Technology Development Lead Lutz Latta said, “Since starting to use VR SLI, our rendering power has nearly doubled and we have been able to create higher fidelity renderings.”
Bringing the Star Wars universe to VR with the ‘Star Wars: Trials on Tatooine’ VR experience, now available on SteamVR, helped ILMxLAB experiment with what storytelling in VR could look like. “There is a fine line between a VR video game and an interactive cinematic experience that engages users in the story,” Lutz said. “We want engagement with the story and the world it plays in, but less of the competitive nature of a video game. ‘Trials on Tatooine’ was our first step in creating something meaningful.”
Lutz talked about the systems behind the production of cinematic VR in the NVIDIA Theatre at SIGGRAPH 2016, discussing his experience of repurposing offline-rendered, movie-quality assets for real-time rendering within the sub-11 milliseconds per frame necessary for VR. He said, “We’re experimenting with using four to eight NVIDIA graphics cards working together for rendering, closing the gap between creating movie assets and VR assets with an eye towards continually increasing frame rates.” In the meantime, ILMxLAB continues to create further immersive VR experiences.
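That budget comes straight from the 90 Hz refresh rate of current VR headsets such as the HTC Vive:

\[ t_{\text{frame}} = \frac{1}{90\ \text{Hz}} \approx 11.1\ \text{ms} \]

Any frame that misses this window is dropped or reprojected, which is far more noticeable inside a headset than on a flat screen.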
GPL Technologies and NVIDIA Show Virtualization and Deep Learning
NVIDIA also partnered with GPL Technologies at SIGGRAPH to show new systems for virtualization and deep learning. To demonstrate the potential of virtualization, the two companies built a workstation using Virtual Desktop Infrastructure, or VDI. Designed by GPL engineers around NVIDIA’s GRID system, the demo involved a remote workstation running Mechdyne TGX desktop software. TGX is a remote desktop application designed to work with NVIDIA hardware, giving users responsive access to centralized compute resources, cloud environments and VDI deployments.
GPL Technologies CEO Brian Terrell said, “NVIDIA’s GRID has made it more practical to run graphics-intensive applications such as Maya and Nuke on virtual desktops. Visual effects producers can now take advantage of VDI, including security of assets, centralized workstations and efficient use of hardware resources in a pooled environment, just as they do with virtual machine servers.”
A separate demonstration showed a workstation remotely logged into an NVIDIA Quadro VCA running an Iray GPU rendering plug-in. The demo explained how Iray facilitates the creation of photorealistic imagery by delivering immediate visual feedback. The VCA, or Visual Computing Appliance, is a network-attached appliance that makes the performance of multiple NVIDIA GPUs available over the network for integration into design workflows, and can scale linearly to achieve interactive global illumination. As well as Iray, plug-ins exist for the V-Ray renderer and the OptiX engine.
Also, for the first time at SIGGRAPH, GPL and NVIDIA introduced NVIDIA’s new DGX-1, a deep learning supercomputer in a box. In a visual effects context, deep learning refers to using machines to recognize images, thereby potentially automating many routine tasks - even certain creative ones.
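As a simplified illustration of that kind of image-recognition task, here is a minimal C++ sketch using LibTorch, a deep learning runtime that targets NVIDIA GPUs. The model file name, and the choice of LibTorch itself, are assumptions made for illustration rather than details of the DGX-1 demo.

```cpp
// Minimal image-classification sketch with LibTorch (illustrative only).
// Assumes a TorchScript classifier has been exported elsewhere as
// "classifier.pt" - a hypothetical file name.
#include <torch/script.h>
#include <iostream>

int main() {
    // Load the pretrained, scripted classifier.
    torch::jit::script::Module model = torch::jit::load("classifier.pt");
    model.eval();
    // On a GPU machine such as the DGX-1, model.to(torch::kCUDA) would
    // move inference onto the GPUs; this sketch stays on the CPU.

    // A random 224x224 RGB tensor stands in for a decoded video frame.
    torch::Tensor frame = torch::rand({1, 3, 224, 224});

    // Forward pass: the output holds one score per class.
    torch::Tensor scores = model.forward({frame}).toTensor();
    std::cout << "Predicted class index: "
              << scores.argmax(1).item<int64_t>() << "\n";
    return 0;
}
```

www.nvidia.com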