disguise develops hardware and software that helps producers bring live visual productions to complex video surfaces. However, the applications for such productions continue to expand as new types of virtual and extended reality production evolve, which has led to a need for dedicated research and development at the company.
disguise Labs was created in 2019 when Samuel Folkard, then working as Senior xR Support Specialist at disguise, led an initiative that became an R&D division. A natural synergy arose between R&D and the team’s decision to undertake a series of real-world projects, feeding the insights from those experiences back into the wider R&D and product development teams at disguise.
From this initiative grew a bigger plan. Samuel, backed by disguise Chief Collaboration Officer Abi Bowman, opened a small disguise Labs office in Auckland, New Zealand, in late 2021. Since that low-key beginning, the team has grown – starting with Sam, now Head of Labs, and two interns – into a team of 15 in a new location in Kingsland, New Zealand, with a complete XR Stage for testing and R&D.
R&D Past, Present and Future
Olivier Jean, Creative Technologist at disguise Labs, talked with Digital Media World about the team’s foundation, and about what they have been working on most recently. “Since the inception of Labs, we have always been motivated to explore new, emerging technology,” he said. “Over the last couple of years, we have conducted extensive R&D, which helps us stay ahead of the sudden bursts of enthusiasm we’ve recently seen for techniques like extended reality, virtual production, metaverses, NFTs and now AI.”
Through this work, they remain focussed on their desire to take better advantage of those techniques to support storytelling and create unique experiences, mostly operating at the intersection of research, facilitation and creation.
“Our main R&D core exists to explore different technologies that allow us to weave a tighter knit between real life and virtual environments in the context of hybrid events and hybrid spaces. It’s something that feels familiar to us from our live event background, but is also interesting as it opens the way for a potentially wider range of creative projects,” Olivier said.
“In this space, we effectively look at the technology stack of various IO devices and consider how best to integrate them. Once the basic groundwork is laid, a creative phase usually follows, during which we explore the device’s functionality to see how to make the best use of it – that is, choreographing, exploiting and deploying it for the current project.
Portals Between Realities
“That kind of exploration has led to work on a concept for a new type of interactive mechanism called ‘portals’ as a means of connecting the two realities of real life and the virtual world, and efficiently communicating between them. In large spaces, for example, gestural communication suits the high-latency nature of such a connection. Voice and music creation proved harder to achieve. So, we set out to create a digital instrument that would keep a common sync between nodes, but still allow for interaction and improvisation on the part of the users.”
Portals is just one example among the team’s many newer pursuits. More recently, over the last few months, they have been actively trialling and using the new AI image generation tools, as well as seeing how machine learning, in the context of computer vision and large language models (LLMs), will interact with and become integrated into future pipelines.
Another area in which they have a growing interest is the emergence of consumer access to and uses of AR and MR. Olivier remarked, “This has been a niche field over the last few years, but announcements like Apple’s Vision Pro wearable are signalling the strength of this trend as it emerges on the market. So far, it’s been a fascinating journey, as we have been seeing so many new approaches emerge to support the creative industries.”
Meanwhile they have been working on the foundations of such experiences, from their conceptualisation, UX design philosophy and visual design to the content development pipelines and the technology stack and integrations. What they call R&D ‘missions’ are, to a certain extent, a progressive series of tasks through which the core techniques and craft of storytelling are adapted to new media and technology.
“We ask ourselves how best to utilise a medium, and find its strengths and weaknesses,” said Olivier. “As we approach these projects, we have a good overlap of talent among the team members, all of whom have worked in the live event and experiential industries, some with specialisations in scenography, film, VFX, content creation, technological development, project management or finance. This combination means we share a common language but can also amplify our distinctive creative visions and understanding.”
Barriers to Virtual Production
Olivier Jean and Sam Folkard also talked with us specifically about Virtual Production (VP). We found that they have a lot of practical experience to share, including some common barriers to entry for teams interested in using virtual production. In particular, cost remains a subject of debate. “The reality is that high-end bespoke workflows are always going to be expensive,” Olivier said. “The positive trend we have been watching and getting involved with is the push to democratise the workflows and the equipment as much as possible, thus helping to reduce the costs.
“Another aspect to consider is that content creation costs across the industry continue to shift down. We are seeing fidelity improvements in real-time render engines, easier content creation and asset acquisition, and experimentation with AI assisted workflows. The future in general is headed toward more affordable, more accessible content pipelines, which in turn contribute to a reduced cost for in-camera VFX (ICVFX) and VP workflows.”
Nevertheless, people tend to look at the spend related to ICVFX / VP and overlook what has been saved. It’s easier to see a line item on a budget than the items that aren’t there. Marketing material abounds about how ICVFX / VP productions aren’t limited by weather conditions, can shoot the sunset for days and don’t need to travel to locations with a huge crew. “Those arguments can sound like a broken record,” said Olivier. “But they are extremely real and can result in real cost reduction.”
Another barrier to entry – or at least a factor that reduces the ROI on VP – is the lack of committed decision-making at the earlier stages of a project. A certain minimum timeframe is required to deliver successful content for VP, notably on the bigger projects. While the industry at large is working on reducing the resources and time needed to produce content, decisions still need to be made ahead of time.
Sam Folkard said, “The biggest barrier I see is the changes people have to make to the timing of a production. VFX timelines need to be reversed, that is, content has to be signed off and approved ahead of the shoot, which requires earlier buy-in from the studio, director, art department, DOP and many others. This change in workflow is a time management challenge. Convincing teams and productions to make decisions earlier is the biggest hurdle.”
Olivier and Sam feel that productions are still finding their limits. Some see the benefits and elect to utilise a virtual production and ICVFX workflow, while others can’t commit to certain parts of their creative process until post, and are better served with a straight green screen approach.
When teams are starting to work with Virtual Production, the learning curve includes pre-planning, new workflow designs and new kinds of stagecraft as much as developing and learning to use new equipment. Olivier said, “The unsung heroes of VP are previz, techviz and the virtual art dept, resulting in an integrated, virtual approach to film planning, development and refinement.”
From a creative, technical and production perspective, VP is enabling teams to collaborate by iterating together and integrating their outputs. From a VFX or Special FX perspective, this has the effect of better designing to the strength of each complementary approach.
“SFX brings realism and tangibility, while VFX offers flexibility and limitless possibilities,” he said. “The convergence of these techniques also helps to bridge the gap between physical and digital realms. By designing their approach together, better and more integrated outcomes can be achieved. However, this type of digital collaboration workflow often requires a mindset shift as well as acquiring new skills and techniques. It can initially present some challenges, but people are quick to see and enjoy the benefits and the chance to push the boundaries of what’s possible in film production.”
One of the biggest advantages of Virtual Production is the extra time that can be gained for iteration. In practical terms, the disguise Labs team has observed that virtual production produces far more precise visualisations and context for a project, and at a considerably earlier phase, facilitating better-informed decision-making.
“Virtual production has undeniably pushed the creation of digital assets further upstream in the production pipeline,” said Olivier. “For instance, concept artists increasingly incorporate 3D-based workflows. This becomes the inception point and sets the stage for the lifecycle of CG assets as they transition from concept to layout, from previsualization to in-camera VFX, and from the shoot to post-production VFX.
“Moreover, in light of the growing trend towards unified and integrated systems, as exemplified by the USD framework or Omniverse, we can glimpse the true potential for CG assets and scenes to be smoothly transferred between various DCCs, offline/online renderers, and crucially between different teams. This integration and the corresponding technologies form the foundation for a more convergent, cohesive and integrated workflow between teams and departments.”
www.disguise.one