Freefolk has promoted Rob Sheridan to VFX Supervisor in their Film and Episodic division and Paul Wight is now the company’s first Chief Operating Officer.
Golaem 9 includes a new animation engine, helps create new shots without simulations, and has better crowd control, helping artists improve the quality of their animated hero characters.
Adobe’s Generative Remove feature, based on the Firefly AI model, is now part of Lightroom, and the AI-powered Lens Blur tool has now become generally available with new presets.
V-Ray 6 Benchmark updates Chaos’ software for comparing CPU and GPU render capabilities, with test looping and a new test scene for direct NVIDIA CUDA/RTX GPU mode comparison.
Maxon One 3D software for motion design, broadcast and VFX has undergone an upgrade with new tools including Cinema 4D Particles, NPR rendering in Redshift and Red Giant Geo.
Pitch Black, parent company of VFX studios FuseFX, FOLKS, Rising Sun Pictures and El Ranchito, has announced the appointment of Mikaël Damant-Sirois as Vice President of Operations.
Autodesk has prioritised connected assets, data and workflows, and efficient new time-saving tools for the 2025 versions of its content creation software.
In Substance 3D’s most recent releases, two new Firefly-supported features have been integrated directly into Adobe Substance 3D design and creative workflows.
JAMM, the VFX and colour studio in Los Angeles, is welcoming Alvin Cruz, who brings a rich background in visual effects, to serve as creative lead on projects across their client roster.
Chaos announced the company’s first foray into AI tools through three new features, as well as the first look at a new visual storytelling product for 3D assembly and animation.
Foundry Modo 17.0 comes with performance updates to accelerate modelling, rigging, animation and interactivity, and a bundled Prime version of Octane for GPU rendering.
The Mill created an edgy Super Bowl ad titled ‘Mullets’ for Kawasaki’s first Super Bowl entry with a pipeline featuring Autodesk Maya, Houdini Vellum simulations and proprietary fur.
The SIGGRAPH Asia 2023 conference and exhibition in Sydney attracted 5,690 attendees from over 40 countries, making a key contribution to the computer graphics industry.
Unreal Engine 4.27 is now available with updates supporting filmmakers, broadcasters, game developers, architectural visualisation artists, and automotive and product designers.
In-camera VFX
The use of in-camera VFX is now more efficient, with results of a quality suitable for wider applications such as broadcast and live events.
Designing set-ups in nDisplay, Unreal’s tool for LED volumes and rendering to multiple displays, is simpler to manage due to a new 3D Config Editor. All nDisplay-related features and settings are placed in a single nDisplay Root Actor to make them easier to access. Setting up projects with multiple cameras is also easier.
nDisplay now supports OpenColorIO, improving the accuracy of colour calibration so that content created in Unreal Engine matches what the physical camera captures from the LED volume.
For efficient scaling in nDisplay, multiple GPUs are supported. This also makes it possible to make the most of resolution on wide shots by dedicating a GPU for in-camera pixels, and to shoot with multiple cameras, each with its own uniquely tracked field-of-view.
A new drag-and-drop remote control web UI builder is now available to help build complex web widgets without writing code. This makes it possible for users without Unreal Engine experience to control their results from the engine on a tablet or laptop.
Camera Control
The Virtual Camera system built for Unreal Engine 4.26 now includes Multi-User Editing, a redesigned user experience and an extensible core architecture – that is, it can be extended with new functionality without modifying the original codebase. A new iOS app, Live Link Vcam, is available for virtual camera control – users can drive a Cine Camera inside Unreal Engine using a tablet or other device.
A new Level Snapshots function will save the state of a given scene and later restore any or all of its elements, for pickup shots or as part of an iteration phase. Users also have more flexibility when producing correct motion blur for travelling shots that accounts for the look a physical camera would have with a moving background.
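The snapshot idea itself is simple to picture: save the state of a scene, keep iterating, then restore all of it or only selected elements. A minimal plain-Python sketch of that pattern – hypothetical names, not Unreal's actual API:

```python
import copy

class LevelSnapshot:
    """Save a scene's element properties and restore any subset later."""
    def __init__(self, scene):
        # Deep-copy so later edits to the live scene don't alter the snapshot
        self.saved = copy.deepcopy(scene)

    def restore(self, scene, elements=None):
        # Restore every element, or only the named ones (e.g. for pickup shots)
        for name, props in self.saved.items():
            if elements is None or name in elements:
                scene[name] = copy.deepcopy(props)

scene = {"Camera": {"pos": (0, 0, 0)}, "Light": {"intensity": 1.0}}
snap = LevelSnapshot(scene)

scene["Camera"]["pos"] = (5, 2, 1)     # iterate on the shot
scene["Light"]["intensity"] = 0.3

snap.restore(scene, elements={"Light"})  # bring back only the lighting
print(scene["Light"]["intensity"])       # 1.0 – the camera change is kept
```

The selective restore is what makes the feature useful for pickup shots: only the elements that must match the earlier take are rolled back, while newer work elsewhere in the level survives.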
Recently, Epic Games and filmmakers’ collective Bullitt assembled a team to test all of these in-camera VFX tools by making a short test piece following a production workflow.
USD, Alembic and Workflow Connections
With this release, it’s now possible to export a wider variety of elements to USD, including Levels, Sublevels, Landscape, Foliage and animation sequences, and to import materials as MDL nodes. You can now also edit USD attributes from the USD Stage Editor, including through Multi-User Editing, and bind hair and fur Grooms to GeometryCache data imported from Alembic.
Datasmith is Unreal’s set of tools for importing data from various sources. In 4.27, Datasmith Runtime allows more control over how the data is imported, including access to the scene hierarchy and the ability to import .udatasmith data into a packaged application built on Unreal Engine such as the Twinmotion real-time architectural visualisation tool, or a custom real-time design review tool.
A new Archicad Exporter plugin with Direct Link functionality is available, and Direct Link has been added to the existing Rhino and SketchUp Pro plugins. Datasmith Direct Link maintains a live connection between a source DCC tool and an Unreal Engine-based application for simpler iteration. You can also aggregate data from several sources, such as Revit and Rhino, while maintaining links with each DCC tool simultaneously.
GPU Light Baking
Unreal Engine’s GPU Lightmass uses the GPU instead of the CPU to progressively render pre-computed lightmaps, using the new ray tracing capabilities of DirectX 12 (DX12) and Microsoft’s DXR framework. It was developed to reduce the time needed to generate lighting data for scenes that require global illumination, soft shadows and other complex lighting effects that are expensive to render in real time.
Also, since the results can be seen progressively, the workflow becomes interactive. Users can stop, make changes and start over without waiting for the final bake. For in-camera VFX and other work, GPU Lightmass means that virtual set lighting can be modified much faster than before, for efficiency.
VR, AR and Mixed Reality
Production-ready support for the OpenXR framework has been added to make creating extended reality content – VR, AR and mixed reality – in Unreal Engine easier. OpenXR simplifies and unifies AR/VR software development, so that applications can run on a wider variety of hardware platforms without being ported or re-written, and compliant devices can access more applications.
The Unreal OpenXR plugin allows users to target multiple XR devices with the same API. It now supports Stereo Layers, Splash Screens and querying Playspace bounds to determine which coordinate space to play camera animations relative to. Extension plugins from the Marketplace add functionality to OpenXR without waiting for new game engine releases. The VR and AR templates have a new design with more built-in features and faster project set-up.
Containers in the Cloud
Epic Games has continued to develop Pixel Streaming, which is now production-ready and has an upgraded version of WebRTC. It enables Unreal Engine, and applications built on it, to run on a cloud virtual machine so that end users anywhere can operate them as normal from a regular web browser on any device. 4.27 also adds Linux support and the ability to run Pixel Streaming from a container environment.
This new support for containers on Windows and Linux means that Unreal Engine can act as a self-contained, foundational technical layer. Containers are packages of software that encompass all of the necessary elements to run in any environment, including the cloud.
Container support includes new cloud-based development workflows and deployment strategies, such as AI/ML engine training, batch processing and rendering, and microservices. Continuous integration/continuous delivery (CI/CD) can be used to build, test, deploy and run applications in a continuous process. Unreal Engine containers can support production pipelines, develop cloud applications, deploy enterprise systems at scale and other development work. www.unrealengine.com
Autodesk’s Bifrost updates include virtual sliders for feedback port changes, unknown nodes for fixing broken graphs, expressive simulation graphs, and terminals for renderable geometry.
Animal Logic's USD ALab is a fully realised USD scene, intended to encourage further collaboration and exploration among the wider community into Pixar’s Universal Scene Description (USD). Animal Logic Group has now released USD ALab as open source software.
As an early adopter of USD, Animal Logic began transitioning their Sydney and Vancouver studios to an end-to-end USD-based pipeline, starting during production on ‘Peter Rabbit’ in 2017 and completing with ‘Peter Rabbit 2’ in March 2020.
Following its 2017 open source project AL_USDMaya, Animal Logic continues to promote broader USD adoption through the release of USD ALab, intending it to serve as a reference for many USD concepts. “We believe USD to be a foundational tool for our industry and broader communities, and we encourage the release of open source assets to educate and inspire others to conduct their own exploration,” said Group Chief Technology Officer, Darin Grant.
While open source data sets already exist, USD ALab is one of the first real-world implementations of a complete USD production scene. It is a full scene description from global assets through to shot outputs, including referencing, point instancing, assemblies, technical variants and shot-based overrides.
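The shot-based overrides mentioned above rely on USD's layered composition, where a shot sparsely overrides a shared asset without modifying it and the strongest layer's opinion wins. A simplified plain-Python sketch of that resolution idea – an illustration of the concept, not the USD API:

```python
def resolve(*layers):
    """Compose attribute opinions from a stack of layers.
    Earlier arguments are stronger, mimicking USD's
    strongest-opinion-wins resolution."""
    result = {}
    for layer in reversed(layers):   # apply weakest to strongest
        result.update(layer)
    return result

# A shared 'global' asset, like those in ALab's asset library
asset = {"displayColor": "grey", "purpose": "render", "size": 1.0}

# A shot-level layer overrides only what it needs to change
shot_override = {"displayColor": "rust"}

composed = resolve(shot_override, asset)
print(composed)
# {'displayColor': 'rust', 'purpose': 'render', 'size': 1.0}
```

Because the override layer is sparse, the shared asset stays untouched on disk and every shot that references it sees its own composed result – the property that makes a full global-assets-to-shot-outputs scene like ALab practical.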
“There are two downloads available, including guiding documents and two sets of textures,” said Supervising Assets TD, Jens Jebens. “The first download contains the ALab scene assets themselves, derived from our production assets and conformed for compatibility to allow them to load in any tool that supports USD. The second download is an optional extra, a production rendering Texture Pack that delivers 4K OpenEXR textures with udims for production style rendering.”
“Beyond the USD assets, we’ve included documentation showing some new ideas and concepts from our experience using USD, including the idea of render procedural definitions, an extremely useful concept that we have not seen in USD to date,” Grant said. “We hope that this combination forms the starting point for future contributors to present their own ideas for discussion, promotion and, hopefully, adoption.”
“The ALab concept was born from Animal Logic’s Art Department,” said Head of Production Technology, Aidan Sarsfield. “Handed a brief for something ‘uniquely Animal’, the team came up with a great story that revolves around a secret backyard shed inhabited by a mad scientist of sorts. The resulting asset suite draws on the unique aesthetic that you’ll find in our studios, and there’s also some fun Easter eggs in there that link back to 30 years of the Animal Logic brand.”
USD ALab is also among the first sets of assets to adopt the Academy Software Foundation’s asset license. Animal Logic wanted to allow the broadest use of these assets to promote education, training and demonstration by students, studios and vendors. “Initially motivated by a desire to create unencumbered assets for our own demonstration and presentation purposes, we realised that the industry at large could use something similar and pushed to release them,” Aidan said. “I’m excited to see how ALab develops in the community, particularly as we will be extending the data set over time.”
The USD ALab data set is now available, hosted on Animal Logic’s website through Amazon Web Services. animallogic.com
Christy Anzelmo at Foundry has moved up to become Chief Product Officer, leading the delivery of Foundry’s product portfolio for Media & Entertainment and Digital Design. Christy first joined Foundry in 2015 and has been working as Senior Director of Product, heading product management and product design on Foundry's VFX compositing and review tools Nuke, Hiero, Cara VR and Flix.
In this new role of CPO, she will work with the company’s product management and product design teams on the vision and roadmap for Foundry’s products, taking responsibility for introducing new tools and software.
Christy studied at Cornell University and the University of Colorado Boulder. Her background in product design and business management extends over 15 years, and includes product leadership experience in fashion, electronics and software. Over that time, she has thought of her role in terms of developing creative ideas into successful products, and has focussed on that process throughout her career.
She said, “I have also been fortunate enough to work with passionate and dedicated VFX and animation artists and the teams that support them. They have been a continuous source of inspiration and motivation as innovation went ahead for the Nuke family.”
Striking a Balance
Christy Anzelmo, Chief Product Officer at Foundry
Christy and the teams she works with need to carefully balance innovation with the core improvements and maintenance. “That’s the ongoing challenge for established products like much of Foundry's portfolio. Because our products sit at the centre of pipelines, we tend to start from a baseline of technical and hardware requirements, following the VFX Reference Platform, for example.
“Then we build in core improvement projects that benefit the majority of artist workflows, and leave some space for the 'delighter' features – the ones artists aren’t expecting – and for niche workflows and experimentation. Since my background is in fashion, I tend to think of building a product plan like merchandising a collection. You need the right amount of black t-shirts and jeans, as well as trendy new ideas to excite customers and push the product line forward.”
Cultural Connections
She mentioned two aspects of Foundry’s culture that help manage this balance. “The first is the close connection between our Research team and the product development teams. Research tends to focus on longer-term technology trends, and product teams tackle mainly core engineering and shorter-term projects, but we are all focused on the same mission of accelerating artists and teams and we work collaboratively to deliver new technology to users.
“For example, the Machine Learning tools released in Nuke 13 came from a collaboration between the Research and Nuke teams. We could see ML’s potential to accelerate image processing, but often at the expense of artist control. To regain some of that control, the Research team started several projects to explore whether implementing artist-driven ML training in Nuke would be feasible.
“Once the team established that this would be possible, the Nuke product designers, developers and product managers worked hand-in-hand with the Research engineers to develop this experiment into a production-usable toolset we could deliver in Nuke.”
The second area that Foundry’s product teams focus on is connecting and working with customers to keep a finger on the pulse of what they need, in order to adapt quickly. Last year, when the pandemic took hold, they quickly re-prioritised some longer-term projects related to remote review from Nuke Studio and Hiero, and to roaming licensing features. As the industry shifted abruptly to remote working, those new tools and features were pushed out to clients straightaway.
Changes in the Workplace
Because the last 18 months have brought so much change for users, Christy sees her product management and product design teams’ roles changing as well. Nevertheless, she believes strong product management is crucial to keeping development efforts focused on what artists need. “That’s especially true during times when artist and studio requirements are evolving quickly. I expect we will push harder to collaborate and understand the artist experience, now that on-site studio visits are less likely to happen for a while.
“With the explosion of remote work in VFX, in the next product cycles, I also see a greater emphasis on improving the user experience – that is, making Foundry tools easier to learn and use for artists working from home now, long term, as well as thinking about how we can enable more collaboration for remote teams. Luckily, we are seeing exciting technological developments that will enable new workflows of this kind, from USD to the expansion of cloud-based workflows.” www.foundry.com
Adobe released updates across the Creative Cloud applications Premiere Pro and After Effects, Substance 3D, Mixamo and the mobile painting tool, Fresco. These updates contribute to simpler, faster workflows, and launch new integrations with the open source 3D tool, Blender.
Speech to Text is a new tool and workflow in Premiere Pro that completes each step of the captioning workflow inside the NLE automatically, while leaving the creative control over the results to the user. As well as shortening the time it takes to create a transcription and captions, it gives the editor new ways to search and navigate video sequences – when you double-click on a word in the Text panel, the playhead moves to that position in the Premiere Pro timeline.
Where changes are needed, users can edit the text in the transcript. When finalised, captions are automatically created from the transcript on the Timeline, using Adobe Sensei machine learning to match the pacing of human speech. The captions can then be customised using the design tools in the Essential Graphics panel. Supporting 13 languages, Speech to Text is included with Premiere Pro or Creative Cloud All Apps subscriptions at no additional cost.
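The double-click navigation described above amounts to a lookup from transcript words to timecodes: each transcribed word carries the moment it is spoken, so clicking a word gives the playhead a target. A toy illustration in Python, with hypothetical data structures rather than Adobe's internals:

```python
# Each transcribed word carries the time (in seconds) at which it is spoken
transcript = [
    {"word": "welcome", "start": 0.0},
    {"word": "to", "start": 0.6},
    {"word": "the", "start": 0.8},
    {"word": "show", "start": 1.1},
]

def seek_to_word(transcript, index):
    """Return the playhead position for the clicked word."""
    return transcript[index]["start"]

playhead = seek_to_word(transcript, 3)      # user double-clicks "show"
print(f"Move playhead to {playhead:.2f}s")  # Move playhead to 1.10s
```

The same word-to-time mapping is what lets captions be generated with the pacing of the speech: caption blocks inherit the start times of the words they contain.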
Substance 3D and Mixamo Plugins for Blender
As well as joining the Blender Development Fund, Adobe is launching two new plugins — Substance 3D in Blender and Mixamo Auto-Control Rig Plugin for Blender. The Substance 3D in Blender plugin allows Blender users to access the Substance library of textures and 3D assets without leaving Blender itself, and use the Substance materials (SBSAR) files in Blender projects. With the plugin, users can tweak parameters and switch presets to iterate and reuse assets faster, optimizing the workflow as they go.
Mixamo speeds up the normally very time-consuming process of rigging biped characters for video games or visual effects. Mixamo’s ability to create a control rig on a character model and apply motion capture clips to it can now be accessed directly inside Blender. The plugin allows you to edit Mixamo’s mocap animations – choosing from over 2,500 clips – non-destructively in Blender. Both plugins are now available in beta.
Rendering in After Effects
Since the release of After Effects Multi-Frame Rendering for export in March 2021, more features have been developed that accelerate After Effects by using all of the cores in the system it is running on. For example, Multi-Frame Rendering for Previews accelerates projects by taking advantage of the system’s CPU cores. Speculative Preview automatically renders compositions and any associated pre-comps in the background while After Effects is idle.
For rendering out compositions, especially to H.264, Multi-Frame Rendering export from Adobe Media Encoder can render multiple compositions in the background while users continue working on others. After Effects will send a notification when renders are complete via the Creative Cloud app, and Render Queue Notifications will display on the user’s phone.
Non-destructive Layers in Fresco
Adobe Fresco is Creative Cloud’s drawing and painting app built for stylus and touch devices like iPad, Wacom tablets and phones. It has vector and raster watercolour and oil brushes, and AI-powered ‘live’ brushes. In continuous development, it now has non-destructive Adjustment Layers, Graph Grids for precise placement, expanded preferences, and masking support on iPad and Windows. Windows users can now access the Photoshop brush libraries by Kyle T. Webster, as iPad users already can.
With the Colour Adjustment Layers, you can experiment with a colour change without committing to it permanently, applying non-destructive tonal and colour edits. Hue/Saturation, Brightness/Contrast and Color Balance are accessed through the Appearances icon in the taskbar.
In Fresco's Precision panel, users can now access and control the new Graph Grids overlays, used to align the elements of a design with the overall vision, a Transform mode that snaps a layer to the centre of the canvas, and Rotation Snapping. Masking support is now available for Vector, Type and Image layers. Layer masking makes changes reversible on every layer in Fresco.
Beginning with this version, Fresco will remember the customisation of your work surface each time you open the app. It will also remember which tool you were using when you last closed a document. www.adobe.com