Freefolk has promoted Rob Sheridan to VFX Supervisor in their Film and Episodic division and Paul Wight is now the company’s first Chief Operating Officer.
Golaem 9 includes a new animation engine, helps create new shots without simulations and improves crowd control, helping artists raise the quality of their animated hero characters.
Adobe’s Generative Remove feature, based on the Firefly AI model, is now part of Lightroom, and the AI-powered Lens Blur tool has now become generally available with new presets.
V-Ray 6 Benchmark updates Chaos’ software for comparing CPU and GPU render capabilities, with test looping and a new test scene for direct NVIDIA CUDA/RTX GPU mode comparison.
Maxon One 3D software for motion design, broadcast and VFX has undergone an upgrade with new tools including Cinema 4D Particles, NPR rendering in Redshift and Red Giant Geo.
Pitch Black, parent company of VFX studios FuseFX, FOLKS, Rising Sun Pictures and El Ranchito, has announced the appointment of Mikaël Damant-Sirois as Vice President of Operations.
Autodesk has prioritised connected assets, data and workflows, and efficient new time-saving tools for the 2025 versions of its content creation software.
In Substance 3D’s most recent releases, two new Firefly-supported features have been integrated directly into Adobe Substance 3D design and creative workflows.
JAMM VFX and colour studio in Los Angeles is welcoming Alvin Cruz, who is equipped with a rich background in visual effects, to serve as creative lead on projects across their client roster.
Chaos announced the company’s first foray into AI tools through three new features, as well as the first look at a new visual storytelling product for 3D assembly and animation.
Foundry Modo 17.0 comes with performance updates to accelerate modelling, rigging, animation and interactivity, and a bundled Prime version of Octane for GPU rendering.
The Mill created an edgy Super Bowl ad titled ‘Mullets’ for Kawasaki’s first Super Bowl entry with a pipeline featuring Autodesk Maya, Houdini Vellum simulations and proprietary fur.
SIGGRAPH Asia 2023 conference and exhibition in Sydney attracted 5,690 attendees from over 40 countries, making a key contribution to the computer graphics industries.
Autodesk’s Flame 2021.2 update integrates the finishing environment to make it more flexible and customisable, with better performance. A new object-based keyer identifies and generates a matte for major objects in a bounding box. This update also increases speed and improves ease of use within Flame’s Effects environment. Users have a new in-context node Search tool, and wider media format compatibility.
Like earlier releases of semantic keying in Flame for the sky and part extraction from the human body, head and face, the new Salient Keyer uses object recognition machine learning to identify and generate a matte for the most prominent, or ‘salient’, object in a bounding box. However, this new keyer is not object-specific, but helps to isolate objects in an image. The bounding box can be animated, and reframing will produce good results for recognisable objects.
Salient keyer
The speed of interaction for navigating and scrubbing larger, more complex timelines is now much quicker. The scope of a timeline search preset can be determined in advance, and machine learning models now load on-demand. Caching is improved – the caches persist so you no longer have to recreate them after re-opening a project.
Storyboard thumbnail generation is optimised. Storyboard thumbnails are now generated asynchronously so that users no longer have to wait for all segment thumbnails to appear before selecting the previous or next segment. This improves shot-to-shot navigation. Starting a session by generating thumbnails first also speeds up navigation, and thumbnails are cached, persisting when the application is restarted. However, modifying a Timeline FX requires the thumbnail to be regenerated.
A new Single / Dual panel UI layout is also now available, displaying all the tools at the artist’s disposal. The single view shows one menu at a time, whereas the dual view displays two columns showing twice the amount of information.
A new Search tool has been added to allow users to access and add nodes faster in the Batch, BFX and Action Schematics, as well as in the Image node and Gmask Tracer tools. This means Flame artists can quickly search through all nodes that can be added and attach them to schematics as they see fit.
Searching Nodes
Tools can be added to a schematic from any node bin. The new Search tool allows artists to add regular nodes in Batch, Action, OFX plugins and Matchboxes, and at the same time other community-generated Matchbox tools will also appear in the list. A new preference panel for Search allows artists to control which nodes they see and which are hidden based on a favourites, tagging and hiding system.
Compatibility with ARRI, Red, Codex X2X HDE, Pixspan and Sony XAVC formats has been updated with this release. Also, content created in Flame Family software can now be exported in Portable Network Graphics (PNG) format. www.autodesk.com
Angus Kneale, former Chief Creative Officer and Co-founder of The Mill New York, has launched Preymaker, a collective of creatives, technologists and producers. To innovate and create content for brands and companies, they use a custom cloud-based platform created with Amazon Web Services (AWS). Angus’ partners in this venture are Melanie Wickham, former Executive Producer and Director of Production, and Verity Grantham, former Chief of Staff, both from The Mill New York as well.
Angus said, “Mel, Verity and I are proud to have had a hand in The Mill’s legacy of work, calibre of artists and producers and the creative culture that inspired and supported them. We’re continuing that spirit of innovation at Preymaker with our focus on creativity, technical development and people.
Cloud Native
“Our team of artists, producers and technologists collaborate globally, entirely in the cloud, making Preymaker one of the first content makers that is 100% cloud native, which means the team can use up to date systems and software at scale, as soon as it is available. This allows continuous experimentation and innovation, which is at the heart of Preymaker’s mission to create exceptional work with our clients and partners.”
The Preymaker name comes from Angus' working farm in upstate New York, which features orchards and apiaries. The surrounding area is a wild landscape of large trees, waterfalls and wildlife. Angus said, “We use it as a metaphor for what we do, creating that same spirit of wonder, magic and awe for our clients.”
Preymaker’s home base is a production studio in SoHo, New York City, serving as a central hub and connection for a growing staff who work both remotely and on-premises.
Background
After originally working at The Mill London, Angus co-founded The Mill in the US, transforming it from a London-based boutique to a multi-national facility. He was instrumental in creating significant IP such as The Blackbird, which was a Cannes Innovation Gold Lion winner. An electric car that transforms to match the dimensions of almost any car, it can also be programmed to replicate typical driving characteristics such as acceleration curves and gearing shifts. Meanwhile, it captures footage of the surrounding environment through its camera array and stabilisation unit.
He worked with his team to bring Mascot to market, a proprietary real-time animation system that enables CGI characters to be performed and animated live using a combination of Unreal game-engine technology and motion sensors.
He also directed PETA’s ‘98% Human’ spot that condemns the entertainment industry for its abuse of animal actors and advocates the alternative potential of using lifelike computer-generated creatures. The spot received a Cannes Gold award and a standing ovation led by Dr Jane Goodall at the Great Apes Summit.
Angus has been working most recently with teams of PhD researchers using computer vision and machine learning to create and develop new systems for advertising, film and media.
The Team
Over the past 20 years, Melanie Wickham has held senior production roles at creative studios including The Mill, Absolute Post and Animal Logic. “Preymaker is an opportunity to create a community where there are no boundaries, which extends to projects of varied media and disciplines we undertake, aspirations of our team and expectations of our clients.”
Verity Grantham’s experience includes films and commercials working with Michel Gondry, Fredrik Bond, Nicolai Fuglsig, Daniel Wolfe, Martin de Thurah, Jim Jenkins, Jonathan Glazer, Anthony Minghella and Stanley Kubrick. “Our virtual, cloud-based capabilities, which we began to develop well before the pandemic shut everything down, are serving us and our clients well. Technology married strategically and imaginatively to creative is the way forward and the key to success for us and our clients.”
Preymaker has simplified the processes on the company’s cloud-based platform for clients for ease of use and accessibility. The team has kicked off its first projects collaborating with McCann, BBDO, 72andSunny and Johannes Leonardo, and directors Peter Thwaites, Daniel Wolfe, Lance Acord and David Gordon Green. preymaker.com
The NVIDIA Omniverse platform is an RTX-based 3D simulation and collaboration platform capable of simulating photoreal 3D objects and scenes in real time. NVIDIA launched the platform’s open beta at its virtual GTC event this week.
Using the platform, remote teams can collaborate simultaneously on projects in a way similar to editing an online document. Typical users and applications would be architects iterating on 3D building design, animators revising 3D scenes, and engineers collaborating on autonomous vehicles.
Artists and engineers working in robotics, automotive, architecture, engineering and construction, manufacturing and M&E all need to continuously improve their creative processes and animation pipelines over time. The Omniverse Platform acts as a hub, where new capabilities are exposed as micro-services to connected clients and applications. It aims for universal interoperability across different applications and 3D systems vendors, and its real-time scene updates are based on open-standards and protocols.
Pixar’s USD and NVIDIA’s MDL
The platform supports real-time photorealistic rendering, physics, materials and interactive workflows between 3D software packages. It is based on Pixar’s Universal Scene Description (USD), a format for universal file interchange between 3D applications, directly sharing most aspects of a 3D scene while maintaining application-specific data.
The USD scene representation has an API allowing complex property inheritance, instancing, layering, loading on demand and other features. Omniverse uses USD for interchange through its central database service, called Nucleus (see below).
Materials in Omniverse are represented by NVIDIA’s open-source MDL (Material Definition Language). NVIDIA has developed a custom schema in USD to represent material assignments and parameters, preserving these during interchange between different application-specific material definitions. This standard definition enables materials to look similar if not identical across multiple applications.
USD structure allows you to only relay the changes you have made to objects, environments and other design elements within the collaborative scene, which means edits are efficiently communicated between applications while maintaining overall integrity.
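The sparse-override idea behind USD’s layering can be sketched in plain Python: each collaborator publishes only a small layer containing the properties they changed, and composition resolves the strongest opinion per property. This is an illustrative sketch of the concept only, not the real pxr USD API, and the paths and values are invented for the example.

```python
# Illustrative sketch of USD-style sparse layering (not the pxr API).
# Each collaborator relays only a small "delta layer" of overrides;
# composing from weakest to strongest yields the shared scene state.

def compose(layers):
    """Compose layers from weakest to strongest; later layers win."""
    scene = {}
    for layer in layers:
        scene.update(layer)
    return scene

base_layer = {
    "/World/Chair.color": "grey",
    "/World/Chair.height": 1.0,
    "/World/Lamp.intensity": 500,
}

# A lighting artist relays only the property they changed.
lighting_delta = {"/World/Lamp.intensity": 750}

# A modeller relays a different, non-conflicting delta.
model_delta = {"/World/Chair.height": 1.1}

composed = compose([base_layer, model_delta, lighting_delta])
```

Because only the deltas travel, two artists editing different properties of the same scene never overwrite each other’s unrelated work.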
Inside Omniverse – Tools and Services
On top of Omniverse’s USD / MDL foundation, the platform has five main components – Omniverse Connect, Nucleus, Kit, Simulation and RTX. These components, plus the connected third-party digital content creation (DCC) tools and other connected Omniverse microservices, make up the whole Omniverse system.
Omniverse Nucleus has a set of basic services that various client applications, renderers and microservices use to share and modify representations of virtual worlds. Nucleus works through a publish/subscribe model – that is, Omniverse clients can publish modifications to digital assets and virtual worlds to the Nucleus Database (DB), or subscribe to their changes. Changes are transmitted in real-time between connected applications.
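The publish/subscribe model described above can be sketched in a few lines of Python. The class and method names here are hypothetical, invented for illustration; NVIDIA’s actual Nucleus service is a networked database, not an in-process object.

```python
# Minimal publish/subscribe sketch of how a Nucleus-style service might
# fan changes out to connected clients (illustrative only; NucleusDB
# and its methods are hypothetical names, not NVIDIA's API).

class NucleusDB:
    def __init__(self):
        self.assets = {}          # asset path -> latest published value
        self.subscribers = {}     # asset path -> list of callbacks

    def subscribe(self, path, callback):
        """Register a client callback for changes to one asset."""
        self.subscribers.setdefault(path, []).append(callback)

    def publish(self, path, value):
        """Store the change, then push it to every subscriber."""
        self.assets[path] = value
        for cb in self.subscribers.get(path, []):
            cb(path, value)

db = NucleusDB()
received = []

# A client subscribes to an asset, another client publishes a change.
db.subscribe("/World/Robot", lambda p, v: received.append((p, v)))
db.publish("/World/Robot", {"pose": [0, 0, 1]})
```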
Omniverse Connect libraries are distributed via plugins that client applications use to connect to Nucleus and to publish and subscribe to individual assets and complete worlds. Once synchronised, a software plugin will use the Omniverse Connect libraries to apply updates from outside and publish changes generated from inside – as necessary.
As the application makes changes to its USD representation of the scene, Omniverse Connect keeps track of the differences and publishes them to Nucleus for distribution to subscribers.
Omniverse Kit is a toolkit for building native Omniverse applications and microservices. It is built on a base framework with functionality accessed through light-weight extensions that are plugins authored in Python or C++. A flexible, extensible development platform for apps and microservices, Kit can be run headless or with a UI that can be customised with a UI engine.
Extensions are building blocks that users assemble in many ways to create different types of Applications. They include RTX Viewport Extensions, Content Browser Extensions, USD Widgets and Window Extensions and the Omniverse UI. Because they are all written in Python, they are highly customisable, and the catalogue of extensions is therefore expected to grow. They are supplied with complete source code to help developers create, add and modify tools and workflows.
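The assemble-from-extensions pattern can be sketched as a simple plugin registry. This is a generic illustration of the idea in the spirit of Kit’s light-weight Python extensions; the decorator, registry and extension names are all invented for the example and are not the Kit API.

```python
# Sketch of an extension/plugin registry in the spirit of Kit's
# Python extensions (hypothetical names, not NVIDIA's Kit API).

REGISTRY = {}

def extension(name):
    """Decorator that registers an extension class under a name."""
    def register(cls):
        REGISTRY[name] = cls
        return cls
    return register

@extension("content_browser")
class ContentBrowser:
    def startup(self):
        return "content browser ready"

@extension("rtx_viewport")
class RtxViewport:
    def startup(self):
        return "viewport ready"

# An application is assembled from whichever extensions it needs.
app = [REGISTRY[name]() for name in ("content_browser", "rtx_viewport")]
status = [ext.startup() for ext in app]
```

A headless microservice would simply assemble a different subset of extensions, omitting the viewport and UI pieces.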
In the Omniverse Pipeline, DCC applications, plus those the user has built using Omniverse Kit, can all be exported to the USD file format and have support for MDL materials. Using Omniverse Connector plugins, Omniverse portals are created between these apps and the Nucleus Database. The Nucleus server also supplies functionality as headless micro-services, and delivers rendered results to different visualisation clients - including VR headsets and AR devices.
Simulation in Omniverse is done through NVIDIA plug-ins or microservices for Omniverse Kit. Currently, Omniverse physics includes rigid body dynamics, destruction and fracture, vehicle dynamics and fluid dynamics. One of the first available simulation tools is NVIDIA’s PhysX, the open-source physical simulator used in computer games. The objects involved in the simulation, their properties, constraints and so on are specified in a custom USD schema. Kit has tools for editing the simulation set-up, start/stop and adjusting parameters.
Omniverse supports renderers that comply with Pixar’s Hydra architecture. One of these is the new Omniverse RTX viewport. RTX uses hardware RT cores in Turing and upcoming NVIDIA architectures for real-time ray tracing and path-tracing. Because the renderer doesn’t rasterise before ray-tracing, very large scenes can be handled in real-time. It has two modes – traditional ray tracing for fast performance and path tracing for high quality results.
Omniverse RTX natively supports multiple GPUs in a single system and will soon support interactive rendering – in which the rendered image updates in real time as changes are made in your scene – across multiple systems.
Early Access and Software Partners
The open beta of Omniverse follows a one-year early access program in which Ericsson, Foster + Partners, ILM and over 40 other companies – and as many as 400 individual creators and developers – have been evaluating the platform and sending reactions and ideas to the NVIDIA engineering team.
At this time, NVIDIA Omniverse connects to a range of content creation applications, and NVIDIA has created demos, called Apps and Experiences, to show how it works in the different workflows. Apps are built using Omniverse Kit and serve as a starting point for developers learning to create their own apps. They will continually gain new features and capabilities. Experiences, on the other hand, are packages containing all the components and extensions needed to address specific workflows.
Early adopters of NVIDIA Omniverse so far include architectural design and engineering firm Foster + Partners in the UK, which is using Omniverse to help with data exchange workflows and collaborative design processes. Woods Bagot, an architectural and consulting practice, is working with Omniverse to set up a hybrid cloud workflow for the design of complex models and visualisations of buildings, and Ericsson telecommunications is using real-world city models in Omniverse to simulate and visualise the signal propagation of its 5G network deployment.
Omniverse has support from software companies including Adobe, Autodesk, Bentley Systems, Robert McNeel & Associates and SideFX. Blender is working with NVIDIA to add USD capabilities facilitating Omniverse integration with its software. The goal is to allow artists and designers to use the collaborative functionality of Omniverse while working with their preferred applications.
Autodesk’s senior vice president for Design and Creation Products Amy Bunszel said, “Projects and teams are becoming more complex and we are confident Autodesk users from all industries will respond to Omniverse’s ability to create a more collaborative and immersive experience. This is what the future of work looks like.”
Interested users can sign up for the Omniverse open beta program on NVIDIA’s website; it will be available for download in the coming months. www.nvidia.com
VFX Legion Founder James David Hattin has announced the signing of CG Supervisor Blake Anderson to the company’s studio in British Columbia. The newest addition to the recently launched BC division’s leadership team, Blake brings a range of skills and experience managing and collaborating with large work-from-home teams of artists on feature films and episodic series.
Blake’s 15 years of experience in the industry include six years as a VFX supervisor for ABC’s hit fantasy TV series ‘Once Upon a Time’ while at Zoic Studios. ‘Wonderland’, ‘666’, ‘District 9’, ‘Stargate Universe’, ‘Stargate Atlantis’, ‘The 440’ and ‘Muppets in Oz’ are also among his credits.
A graduate of The Art Institute of Vancouver, formerly CDIS, Blake has experience as a 3D artist, CG generalist, 2D animator, matte painter, compositor and pre-vis artist. His in-depth knowledge of the process of creating visual effects brings insight to his role as CG Supervisor.
Led by Head of Production Dylan Yastremski, the new studio mirrors the LA main facility's structure while expanding the company's capabilities. The opening of the division supports the increasing demand for VFX Legion's film and TV services, working exclusively with home-based teams of talent.
Blake will be joined by the BC operation's full core team of managers, support staff and lead artists over the coming weeks. Upgraded capabilities, in addition to the increased scale of the leadership team and the company's network of home-based talent, enable the new division to accommodate a greater volume of projects simultaneously and meet the needs of larger scale projects and more complex productions.
"Establishing a local management team with the scope and experience needed to guide work-from-home artists as seamless collaborative teams enables VFX Legion to use the BC studio's expanded resources effectively," said Dylan, a partner in the new venture. "Blake brings the skills needed to optimize the advantages of our remote capabilities and the scope and calibre of Legion’s growing network of talent around the world."
"Last year we brought Blake on board to lend his talents to the feature film, 'Black Christmas'," said James. "He impressed me with his depth of experience and mastery of our custom workflow. His strong managerial skills and experience make him a great fit with VFX Legion, and we are excited to have him on board."
"The role of CG Supervision at VFX Legion presents me with a unique opportunity to work with a company that launched as a fully remote resource for visual effects,” Blake said. “Their well-established, collaborative pipeline, scope of talent, efficiency and quality come with years of experience working with home-based talent. While physical isolation and a certain disconnect are inevitable when talent is homebound, approaches to working remotely vary. I’ve worked with VFX Legion’s team on projects in the past and found the way they interacted with home-based artists, the level of accessibility, and smooth workflow made it a great collaborative experience.
“There’s no way of knowing how long social distancing will be a necessary precaution, but even looking beyond the current pandemic, VFX Legion provides artists and productions with advantages as the industry moves forward. It’s a great company that’s way ahead of the curve, and I’m thrilled to join its team.” www.VFXLegion.com
Katana lighting and look development software has been in development at Foundry for about 10 years. The Foundry team has continued to make regular updates during that time, most recently versions 3.5, which brought multi-threading, the Monitor Layer and support for USD 19.11, and 3.6, which added snapping, the Network Material Edit tool, a dockable UI and support for 3Delight 2.0.
In a recent interview with Digital Media World, Jordan Thistlewood, Director of Product – Pre-production, LookDev & Lighting at Foundry said that their upgrades to Katana tend to fall into three main themes – improving the underlying approaches to artists’ challenges, enhancing performance and refining the user experience.
He said, “Changes to the underlying functionality re-shape Katana in response to changes in the demands placed on artists over time, but do not change its fundamental architecture, which is very solid and robust. We are mainly adding more modern approaches to the programming. Performance upgrades take advantage of Katana’s ability to handle massive, complex projects across many shots – and expand on it as people’s understanding of a ‘really big show’ continues to grow.
“Examples of how Foundry addresses UX upgrades in Katana are the Hydra Viewport, the change to Qt UI plugins and the Network Material Create tool with its new shading nodes, added in 3.2. Changes don’t always happen in one step. Ahead of the Hydra Viewport, for instance, an API was added in 2.6 that allowed users to add their own viewer, and meanwhile allowed Foundry developers to add the Monitor Layer in v3.5.”
It’s also interesting to know that Katana’s API draws the Viewport in layers such as the 3D OpenGL geometry layer, the handles and so on. It looks like a single Viewport to the user, but is in fact a series of layers.
Visualising Workflows
Artists always have the potential to display complex workflows by visualising in the Katana UI anything or everything happening in the project. Jordan said, “That may sound useful, but would be overwhelming, and wouldn’t help lighting, look development or digital cinematography artists to do their jobs more efficiently or any better. Such artists need space to be creative on screen and need to see the image they are working on, larger and more clearly than anything else.”
This need has led to the development of a far less cluttered view in Katana 4.0, streamlined per user, with relevant information accessed through small HUDs, trimmed down for a specific purpose. Selection tools are available for portions of images, and users can interact with 3D objects, like lights – all functions are there close by, instead of spread across multiple monitors. Foundry also wanted to design the system to suit people currently working on Wacom tablets, or who plan to in the future, leading to UX combinations that would be easy both on a mouse and with a Wacom pen.
Interaction with Scenes
The Viewport improvements in Katana 4.0 concern the way artists interact with scenes. “Typically, you will see 3D models drawn up in a game-style system – this is a fairly quick approach and allows selections, but does not scale well for complex, crowded scenes. Drawing extensive geometry for OpenGL would bog the processor down. While raytracing cuts through the complexity and produces a beautiful result, it is slower.”
Katana now supports both methods in the Viewport, but both still need a way for artists to interact with selections. So, Foundry took Katana’s ability to select items from a render and moved it into the Viewport. Whether an artist is working with the OpenGL geometry or a raytraced image, he or she can interact with and select parts of it natively without choosing a particular method, and both have the same workflow.
An example would be a large, crowded scene with, say, 100,000 characters. The artist can render and see the result in the Viewport – and can still select and work on a single character from that render in Katana. It isn’t necessary to track the separate elements from the hierarchy.
Artist-Focussed Lighting
Katana 4.0 has an Artist-focussed Lighting Mode that allows artists to think and act like cinematographers do on set and to use the application as a digital cinematography platform. For example, cinematographers will put a lamp in a scene, calling for a light fixture that exposes correctly and looks lit. They then think of the way the lamp will light the characters and scene, and what other off-camera light sources may be needed to achieve the effect they want, while still looking as though the scene is lit by the lamp.
“This is a typical process on a real set,” said Jordan. “In Katana, artists have access to modes and HUDs allowing them to place lights according to either where they want the light to fall OR where a light source is located in 3D space. Once the locations are selected, Katana handles the scene lighting.
“When the environment light is designed correctly, the area lights can be defined. This means that a higher priority can be given to how light strikes, wraps around or otherwise affects an asset or scene, than where a lamp sits in 3D space. For all of this work, artists have the small, specific HUDs to control intensity, exposure, colour, spread and so on, similar to working at a lighting mixer on set. Artists can also still clone and duplicate lights as usual – with all relevant controls directly in front of them.”
You might also choose to create a light that hits the scene at a certain angle that will be seen through a given camera position. Given those parameters, Katana can place the light to match. It removes much of the mechanical work – that is, the steps needed to feed in precise values – so that you can light the way you think.
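The geometry behind that kind of intent-driven placement is straightforward to sketch: given the point where the light should fall, the direction it should arrive from and a distance, the lamp position can be solved directly. This is purely illustrative vector maths, not Katana’s internals, and the function names are invented for the example.

```python
# Sketch of placing a light from artist intent: given where the light
# should fall and the angle it should arrive from, solve for the lamp's
# position in 3D space. Illustrative geometry only, not Katana's API.
import math

def normalise(v):
    """Scale a vector to unit length."""
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def place_light(target, incoming_dir, distance):
    """Position a light so its rays reach `target` travelling along
    `incoming_dir` (light -> target), from `distance` units away."""
    d = normalise(incoming_dir)
    return tuple(t - c * distance for t, c in zip(target, d))

# The light should strike the origin travelling straight down (-Y)
# from 5 units away, so the lamp sits directly above the target.
pos = place_light(target=(0.0, 0.0, 0.0),
                  incoming_dir=(0.0, -1.0, 0.0),
                  distance=5.0)
```

Feeding in the desired result and letting the software solve for the light transform is exactly the inversion of the usual place-then-check workflow described above.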
Katana and Render Engines
Jordan commented that meanwhile, render engines are also developing in a similar ‘on-set’ way. He said, “It’s no longer always necessary to break down and recreate the physics of a lighting scenario. Once you place a light, tweaking and adjusting it according to how the light bounces through the scene as you work is always possible, but Katana is taking advantage of what the new rendering developments can achieve.” Since Katana has always been renderer agnostic, supporting 3Delight, Arnold, Redshift, RenderMan and V-Ray, all the new tools likewise work with each renderer plugin as well.
A new Light and Shadow Pattern tool is based on a similar approach. When casting shadows in a scene, the user can select a point on an object to cast a shadow and then select where the shadow should fall, for example, sending a shadow across a character’s face. The light will be placed accordingly. A certain amount of interaction is still needed to refine the look, but each iteration is used more effectively to arrive at the ones that follow.
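The underlying constraint is that the light must lie on the line through the shadow point and the occluding point, on the far side of the occluder. A minimal sketch of that geometry, with hypothetical names and not the actual Light and Shadow Pattern tool:

```python
# Sketch of the shadow-placement idea: pick a point on the occluding
# object and a point where its shadow should land; the light must sit
# on the line through both, beyond the occluder. Illustrative only.

def place_light_for_shadow(occluder, shadow_point, distance):
    """Return a light position `distance` units behind the occluder
    along the shadow ray (shadow_point -> occluder -> light)."""
    direction = tuple(o - s for o, s in zip(occluder, shadow_point))
    length = sum(c * c for c in direction) ** 0.5
    unit = tuple(c / length for c in direction)
    return tuple(o + c * distance for o, c in zip(occluder, unit))

# Occluder at (0, 2, 0), shadow required at the origin: the light
# is placed further up the +Y axis, 3 units past the occluder.
light = place_light_for_shadow(occluder=(0.0, 2.0, 0.0),
                               shadow_point=(0.0, 0.0, 0.0),
                               distance=3.0)
```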
Foundry has developed Multiple Simultaneous Renders to make rendering more productive for lighting artists. Earlier, it was necessary to complete, or cancel, one full render before you could stop, look, adjust and then do another. But Katana has its own separate rendering program, which feeds the rendered images back to the main Katana program UI.
Simultaneous Renders
“That separation allows a greater flow of traffic back to Katana and supports simultaneous renders, which means you can compare two renders more readily and toggle between them while they are processing,” said Jordan. “You can have several tasks underway at once on files from the same project. Katana accomplishes this kind of work very well, keeping multiple jobs on the go from one project file, such as a number of shots or asset setups for an animation.”
To use simultaneous rendering at scale, Katana has a queuing system for controlling a series of renders and showing their status in resizable thumbnails. Instead of just waiting out the processing time, artists can watch progress under different lighting set-ups, for example. For grouped assets, when a change is made to one asset, Katana can apply it to the others in the group while the render carries on.
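A queue of this kind can be sketched as a small state machine that promotes jobs up to a concurrency limit and tracks each job’s status. This is a hypothetical structure for illustration, not Katana’s internal queue.

```python
# Sketch of a render queue that runs several renders at once and
# reports per-job status (hypothetical structure, not Katana's API).
from collections import deque

class RenderQueue:
    def __init__(self, max_active=2):
        self.max_active = max_active   # renders allowed to run at once
        self.pending = deque()
        self.status = {}               # job name -> current state

    def submit(self, job):
        self.pending.append(job)
        self.status[job] = "queued"

    def tick(self):
        """Promote queued jobs to 'rendering' up to the active limit."""
        active = [j for j, s in self.status.items() if s == "rendering"]
        while self.pending and len(active) < self.max_active:
            job = self.pending.popleft()
            self.status[job] = "rendering"
            active.append(job)

    def finish(self, job):
        self.status[job] = "done"
        self.tick()                    # let the next queued job start

q = RenderQueue(max_active=2)
for shot in ("shot_010", "shot_020", "shot_030"):
    q.submit(shot)
q.tick()   # two renders start; the third waits its turn
```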
Also called Katana Foresight, the new workflows give artists the foresight to make the right choices earlier in a project, and take a level of guesswork out of rendering. They also make use of all your processing cores while you figure out what to do next!
To help users follow the renders visually as they proceed, and make faster, better decisions, Foundry is planning to ship a Contact Sheet mode with version 4.1, similar to the contact sheet in 3Delight.
Interactive Network Rendering
Scalable Interactive Network Rendering is another new workflow that helps artists make decisions earlier on with less iteration by giving them the option to use external machines to complete specific renders. At times, artists are constrained by lack of horsepower and the cost of rendering any single item. Now, they can connect those tasks, including a series of simultaneous renders, to a render farm where the work can be distributed across machines to access as much power per render as possible. Less waiting for results makes the connection between user, software and hardware more direct.
Into the future, further extension of the new rendering capabilities is high on the agenda for Katana. Foundry is now working on Multiple Live Renders, for which ‘live’ refers to making changes during rendering and letting the render refresh as it proceeds. “Currently, Katana supports only one live render at a time, but the architecture could support multiple renders at once. The artist would make a change and then watch how it looks in all versions, adjusting each one as it goes,” said Jordan.
“The upcoming contact sheet mode, as well as the access to render farm capacity, should help this process. Having immediate feedback gives artists a level of freedom when taking a project further. We’re also planning a number of smaller upgrades to Katana 4.1 soon, ranging from using streaming to optimise network traffic, interacting with Nuke and Mari, and exporting information to and from Katana. The goal for all of this is to upgrade the way artists interact with their production work.”
Katana 4.0 is expected to be released in October 2020. Foundry has also created a page where artists and readers can register their interest in Katana 4.0. www.foundry.com/