Impossible Objects Visualises Mexico’s Sport Heritage for UFC at Sphere Las Vegas
VFX Supervisor Morgan McDermott at Impossible Objects talks about making their immersive film for UFC’s debut at the Sphere, combining heroes and symbols of Mexican independence with UFC legends.
Chaos puts Project Arena’s Virtual Production tools to the test in a new short film, achieving accurate ICVFX with real-time raytracing and compositing. Christopher Nichols shares insights.
Moving Picture Company (MPC) has appointed Lucinda Keeler as Head of Production for its London studio, bringing over 20 years of experience and leadership in the VFX industry.
REALTIME studio has launched a Virtual Production division, following its grant from Media City Immersive Technologies Innovation Hub to develop a proprietary Virtual Production tool.
ZibraVDB plugin for Virtual Production and CG studios delivers high compression rates and fast render times, making it possible to work with very large volumetric effects in real-time.
Maxon One 2025 updates Cinema 4D, Redshift, Red Giant and Cineware, and releases ZBrush for iPad, putting ZBrush sculpting tools into a mobile device with a new UI and touch controls.
Das Element asset library software version 2.1 has new video playback controls, hierarchy tree customisation for libraries, faster set-up processes and simpler element migration.
Autodesk returned to SIGGRAPH 2024 to show software updates that include generative AI and cloud workflows for 3D animation in Maya, production scheduling and clip retiming in Flame.
Shutterstock launched a cloud-based generative 3D API, built on NVIDIA Edify AI architecture, trained on licensed Shutterstock content, as a fast way to produce realistic 3D models with AI.
Freefolk has promoted Rob Sheridan to VFX Supervisor in their Film and Episodic division and Paul Wight is now the company’s first Chief Operating Officer.
Golaem 9 includes a new animation engine, helps create new shots without simulations, and has better crowd control, helping artists improve the quality of their animated hero characters.
Adobe’s Generative Remove feature, based on the Firefly AI model, is now part of Lightroom, and the AI-powered Lens Blur tool has now become generally available with new presets.
FuseFX has announced its acquisition of Rising Sun Pictures (RSP) visual effects studio in Adelaide, Australia. Founded in 1995 by Tony Clark, Gail Fuller and Wayne Lewis, RSP has established itself as one of the world's top visual effects studios.
Co-Founder and Managing Director Tony Clark will continue to lead the studio and operate under the Rising Sun Pictures brand. Together, the combined companies have nearly 800 artists at eight locations around the world - Los Angeles, New York, Atlanta, Vancouver, Montréal, Toronto and Bogotá as well as Adelaide.
"Tony, Gail, Wayne and the entire team at Rising Sun Pictures have created one of the most well-established and respected independent studios in the world," said David Altenau, Founder and CEO of FuseFX. "Their commitment to delivering the highest quality art and service to their clients has helped establish the studio in the visual effects industry. Their previous work and position in the industry make them a fantastic partner for FuseFX."
Tony said, "We're very excited to be partnering with FuseFX, which comes at an ideal time as we grow to meet the demand over the coming years. Our vision for Rising Sun Pictures is to be a cornerstone component of the next generation global full-service visual effects company. With the FuseFX partnership, we can achieve this vision to ensure that we stay at the forefront of visual effects production and remain a trusted creative partner to our clients.
"I'd sincerely like to thank my fellow founders and shareholders for the last 25 years. We have all been crucial to the success of RSP, culminating at this pivotal moment in time. RSP will embark on an expansion plan over the next few years, and we’re grateful to be partnering with David Altenau and the team at FuseFX to help fully realise RSP's potential."
Thor: Ragnarok
Tony will be joined by RSP’s well-established executive management team, including Chief Financial Officer Gareth Eriksson, Head of Business Development Jennie Zeiher, Executive Assistant Maree Friday, Head of People & Culture Scott Buley and Head of Production & Executive Producer Meredith Meyer-Nichols. There will be no operational changes to the RSP business and the team will look to add talent after a recent expansion of the Adelaide headquarters that gives the studio a capacity of 270 crew.
The South Australian state government welcomes the news of the partnership between FuseFX and RSP. Minister for Innovation and Skills, David Pisoni said, "South Australia is enjoying a golden age in the production of film, television and streaming services."
"The state government’s incentives, in combination with federal incentives, mean that South Australia is a prime destination for visual effects production and will continue to be for years to come," Tony said.
David Altenau said, "We're thrilled to be joining forces with Rising Sun Pictures to help fuel their ambitious expansion plans and to offer an even broader range of skill sets, geographic locations and storytelling solutions to our clients at the level of quality and service they demand."
Over the past year, the studio has contributed to projects including Disney's upcoming Jungle Cruise led by VFX Supervisor Malte Sarnes and as lead vendor on New Line Cinema's Mortal Kombat under the direction of VFX Supervisor Dennis Jones. fusefx.com
Resulting from joint development work between Pixar, Animal Logic, Luma Pictures and Blue Sky Studios, USD is now integrated into Maya, and can be used to load and edit massive datasets very rapidly. Due to the full integration, artists can then work directly with the data using Maya’s native tools.
The new mayaUsdProxyShape node enables native Maya workflows directly on USD stages, which are in-memory containers of the composed USD scenegraph. This means users can work directly with USD data in common Maya editors like the Viewport, Outliner, Attribute Editor, Manipulators, Snapping, and so on. Also the new USD Layer Editor allows you to create, view and manage a USD Stage’s complex LayerStack.
Other advantages of working with USD are robust referencing functionality, nondestructive data editing workflows and support for complex variants on top of USD, which make pipelines and collaboration more efficient to help teams scale for high-volume data workflows.
USD plugin for Maya
USD Data as Native Maya Data
Users can import USD data as native Maya data, or export native Maya data as USD data, in effect using USD as a format for transferring data at high speed between Maya scenes or other applications that support USD.
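Part of what makes USD suited to this kind of interchange is that its ASCII .usda layers are plain, human-readable text. As a rough illustration, independent of Maya's actual USD plugin, a minimal layer can be written by hand; the prim name here is hypothetical:

```python
# Minimal sketch: hand-writing a tiny USD ASCII (.usda) layer as text.
# Real pipelines would use the pxr (USD) Python API or Maya's USD
# export; this only illustrates how lightweight the interchange
# format itself is.

def make_usda(prim_name: str) -> str:
    """Return a minimal .usda layer defining one transform prim."""
    return (
        "#usda 1.0\n"
        "(\n"
        '    doc = "Minimal example layer"\n'
        ")\n"
        "\n"
        f'def Xform "{prim_name}"\n'
        "{\n"
        "}\n"
    )

layer = make_usda("hero_asset")
print(layer.splitlines()[0])
```

Any application with USD support can open a layer like this directly, which is what makes the format practical as a high-speed transfer medium between scenes and tools.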
A new USD Hierarchy View Window reveals a lightweight preview of USD scene structure, to show the contents of a USD file and set the state of the scene, including variants, before import. New features in Maya’s Outliner scene management editor make it possible to identify and work with USD data in the Outliner alongside standard Maya objects, such as different Data Branch colours, USD icons and badges and right-click contextual menus.
“Increasing support for open-source standards is a major area of focus for Autodesk, across all industries,” said Jocelyn Moffatt, Industry Marketing Manager, Entertainment. “The M&E team has worked toward USD integration, which is now crucial for innovation in visual effects and animation. Across tools, our goal is to simplify the complexities of working with massive datasets, aid in collaboration between artists and studios, and help teams creatively by managing technical pipeline challenges.”
USD stages
Paul Molodowitch, Lead Pipeline Technical Director, Luma Pictures, said, “We jumped at the opportunity to collaborate with Autodesk on USD for Maya, as we are strong believers in the power and usefulness of open-source projects. It was a complicated project to tackle, since it involved unifying the work of two separate codebases, and overseeing the contributions from many other sources.”
Animation Tools
Among Maya’s animation tool updates is a new Ghosting Editor that allows artists to see animation spacing – the change in an object’s position from one frame to the next – over time, making it easier to see necessary edits and how poses work together in animations. The Time Editor has been improved with support for cached playback, without the need to playblast, and for additive animation clips. When a clip is set to ‘additive’, its motion can be added on top of the motion of an underlying clip to create a new animation. This can help keep the total number of clips low.
Time editor
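The additive behaviour can be pictured as simple per-frame addition: the layered clip contributes offsets on top of the base clip's motion. This is a conceptual sketch, not Time Editor code, and the values are invented for illustration:

```python
# Hypothetical sketch of how an 'additive' clip layers on a base clip:
# per frame, the additive clip's offsets are summed onto the base
# motion, producing a new animation without authoring a new clip.
base_clip = [0.0, 1.0, 2.0, 3.0]      # base translate-X per frame
additive_clip = [0.0, 0.5, 0.5, 0.0]  # layered adjustment per frame

result = [b + a for b, a in zip(base_clip, additive_clip)]
```

Because the adjustment lives in its own layer, the same base clip can be reused under different additive clips, which is how the total clip count stays low.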
In the Graph Editor, which graphically represents the interpolation between keyframes as curves, filters are added to only display your selected animation curves. A new Peak Removal filter cleans up unwanted spikes and peaks in curves and the Smooth (Gaussian) filter improves control over the range and width of blur effects.
Maya 2022 has several procedural, topology-independent rigging workflows. Component Tags and Deformer Falloffs are new ways of defining membership and weighting, as well as seamlessly sharing that data between geometry and deformers. With Component Tags, geometry can store named sets of components directly on a shape node. The sets can then be passed to and used by other nodes. By reducing the number of nodes and connections required for deformation, the tags result in more efficient deformer graphs.
New rigging workflows
Deformer Falloffs can also be shared and reused in a topologically independent way with many commonly used deformers, including Skin Cluster, Cluster, BlendShape and all nonlinear deformers. Two new deformers have also been added – Solidify and Morph. The Morph deformer blends smoothly from one shape to another and, with the component lookup feature, morphs a shape using only a subset of its components.
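Conceptually, a Component Tag behaves like a named set of component ids stored with the shape, which deformers consume as membership. The structure and names below are illustrative only, not Maya's actual data model:

```python
# Conceptual sketch of Component Tags: named sets of component ids
# stored with a shape, which deformers can consume as membership.
# Tag names and ids here are hypothetical.

component_tags = {
    "collar": {12, 13, 14, 15},
    "cuffs": {40, 41, 42},
}

def deformer_membership(tag_names, tags):
    """Union the tagged component sets a deformer should affect."""
    members = set()
    for name in tag_names:
        members |= tags.get(name, set())
    return members

affected = deformer_membership(["collar", "cuffs"], component_tags)
```

Passing sets by name rather than wiring per-component connections is what reduces the node and connection count in the deformation graph.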
Modelling and Rendering in Maya
Modellers in Maya have a new Sweep Mesh tool to procedurally generate geometry and adjust attributes such as profile shape and size. Using the new Game Vertex Count plugin, game artists are able to more accurately estimate how assets in Maya impact their in-game vertex count budgets – before exporting them to game engines. This feature also includes targeted settings for Unity and Unreal.
Modelling updates
Other modelling upgrades include the extrude thickness tool, faster lasso selection and pivot improvements. Match, translation and scaling enhancements give more precise control over scene transforms.
Arnold 6.2, Maya’s standard renderer, has new post-processing nodes including the Light Mixer and Bloom for better control of lighting effects, and tools for automatic denoising after each render. GPU improvements, including support for shadow linking and faster start-up, help to render scenes more efficiently. Improvements to USD support in Arnold add support for physical camera parameters, search paths, autobump visibility, per-face material assignments, and reading stages from the shared stage cache via the cache id parameter.
Arnold 6.2 in Maya
Also, Arnold’s integration with OpenColorIO v2 means artists can take advantage of OCIO’s native implementation of ACES (Academy Color Encoding System) and processing improvements directly in Arnold.
Overall, Maya now has a faster, up-to-date user experience with a shorter startup time and customisable preferences, splash screen improvements, and script editor updates that increase efficiency and control. Python 3 is the new default programming language for Maya on Windows, Linux and macOS, although on Windows and Linux, Maya can still be started in Python 2 mode.
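For studios migrating scripts, the Python 3 switch mostly means updating Python 2 idioms. These are generic Python changes, not Maya-specific APIs; the frame data is invented for illustration:

```python
# Typical Python 2 -> 3 changes that studio scripts need:
# print is a function, integer division is explicit, and
# dict.keys() returns a view rather than a list.

frames = {1001: "keyed", 1002: "held"}

# Python 2 habit: keys = frames.keys(); keys.sort()
keys = sorted(frames.keys())      # Python 3: sort the view explicitly
print("first frame:", keys[0])    # print() is now a function

midpoint = (1001 + 1002) // 2     # '//' keeps integer division explicit
```

Tools like 2to3 automate much of this, but division and bytes/str handling usually still need review by hand.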
3ds Max Workflows – Modelling and Security
3ds Max workflows have been optimised with new texture baking, modelling and rendering capabilities. To protect scene integrity, new security improvements powered by the Scene Security Tools plugin include Malware Removal that automatically detects and eliminates malicious scripts from scene files and startup scripts. Scene Script Execution protects against malicious scripts embedded in 3ds Max scene files, regardless of whether the scripts use Maxscript, Python or .NET commands, by blocking the execution of unsafe commands.
Relax Modifier
Efficiency upgrades for workflows are added to the most common modelling tools including Smart Extrude, Slice Modifier, Symmetry Modifier and AutoSmooth. A volume preserve option has been added to the Relax modifier, which performs additional calculations to reduce small detail and noise from models while retaining the shape and definition of the overall mesh. Reducing this small, noisy data – for example from scan and sculpt data – with Relax can improve the processing time of the Autodesk Retopology Tools.
The Extrude modifier now contains significant performance improvements to the speed and interactivity of the initial extrude calculation, and to the responsiveness of the output when adjusting the amount parameter, allowing complex splines, which used to take minutes to process, to be operated on in a few seconds.
Bake to texture
Simplified navigation and selection of baked map types allow users to bake frequently used maps in a few clicks without requiring complex material setups. Also, all floating viewports can now be viewed full screen without a border using a simple hotkey shortcut.
Rendering and Arnold
For speed and interactivity, Quicksilver render settings are now QT-based, and the Viewport Bloom settings have also been synced to the Quicksilver settings. With the new Viewport Ambient Occlusion sampling value in the Viewport configuration settings, users can refine their Viewport lookdev, and optimise their GPU performance by adjusting the Viewport Ambient Occlusion quality.
Arnold 6.2 in 3ds Max
Imagers can now be applied, removed, re-ordered and edited directly in a dedicated tab of the Arnold RenderView for post-process rendering. A new Light Mixer imager makes it possible to interactively edit the contribution of each light group AOV during and after rendering, without restarting the render. Bloom or ‘glow’, the Noice Denoiser and OptiX Denoiser are all available now as post-processing effects.
MotionBuilder Efficiency
MotionBuilder now adopts Python 3 as the new default programming language on Windows and Linux. For developers, the MotionBuilder Python Command Line tool has been improved to allow new capabilities such as file processing and rendering from the Command Line. The MotionBuilder API has also been expanded, making it easier to manage multiple script tabs in the Python Editor, and the startup experience can be customised to differentiate between different projects and tool versions.
A number of updates improve the overall animation experience and reduce the number of steps animators need to take to reach the desired result. Among these updates is the ability to visualise real Quaternion Rotation properties within the FCurve Editor, a new Add to Body Part property that simplifies keyframing for character extensions, and updates in the Character Controls that save time by reducing the number of actions required to expand IK auxiliaries when selecting individual effectors.
Maya, 3ds Max and MotionBuilder 2022 are available as standalone subscriptions or with the Autodesk Media & Entertainment Collection. www.autodesk.com
Foundry has launched into the Nuke 13.0 series of releases with a flexible set of machine learning tools, a new Hydra 3D viewport renderer and extended monitor out functionality, better workflows for collaborative review and Python 3 support.
With the Machine Learning Toolset, artists can use and control machine learning directly in Nuke 13. Foundry’s AI Research team developed the tools for artists who want to create bespoke effects in relatively little time for such applications as enhancing resolution, removing motion blur, tracker marker removal, beauty work and garbage matting.
Machine Learning CopyCat Node
Within the artificial intelligence research (AIR) menu is a new suite of plugins. The main node is the CopyCat node, which replicates custom effects by learning from a set of training images. The user feeds the node pairs of before/after images, and the node learns the transformation that takes one to the other.
Upscale Node
Once the training is complete, the Create Inference knob will automatically create an inference node that applies the result of the training to the supplied input image. Effects trained using CopyCat can be shared with multiple artists, who load the effect using the inference node and apply it to individual shots. If the result is not right for a given situation, artists can continue training the data set with new image data from their shot.
Two pre-trained tools are included in Nuke 13.0. The Upscale node upscales footage by a factor of two, similar to Nuke's built-in Scale node but with more refined results, especially when working with fine detail. The Deblur node removes motion blur from the input and is a good match for working with stabilised footage. Its mask input helps isolate the effect to a specific area.
Initial Support for USD Hydra
Nuke 13.0 includes Foundry’s initial support for USD Hydra in Nuke. It upgrades Nuke's 3D Viewport to use hdStorm, one of the delegates in the USD system of render delegates that serve as bridges between the Hydra viewport and a renderer. Nuke's viewport can now more accurately render a representation of the 3D scene from the ScanlineRender node. Users can also switch back to Nuke's old viewer to compare and choose between them.
USD Hydra support
The new Hydra viewer should support most of the familiar workflows for moving geometry around a scene, and particle systems and projection setups should display correctly. The main benefit of the Hydra viewport is Nuke’s ability to display lights, shadows, materials and textures, including moving shadows and animated textures, more accurately. It supports artistic decision making while working in the 3D viewport without switching back to the ScanlineRender’s 2D outputs.
At this stage the Hydra display is still not an entirely accurate representation of the output from the Scanline render, but it is much closer than has been achieved in Nuke before, and gives consistency with other applications using Hydra.
Direct USD Ingest
Nuke 13.0 users can load Camera, Light and Axis data from a USD file using native 3D Nodes, and carry USD data directly into Nuke without converting it to a different format. The newly supported data types will be separated out into the relevant Nuke 3D nodes. USD data can also be read directly within each of the updated nodes. For example, users can either automatically populate the camera node with all the cameras contained within the USD file and then toggle between them with the new scenegraph UI, or automatically populate the cameras into separate nodes.
USD ingest
Similar workflows for the light node bring point, spot and directional lights exported from other content creation tools into Nuke. The Axis node reads in a USD file and uses any primitives contained inside to generate position data for the Axis Node knobs, which is useful when matching a camera or geometry to another point in a scene. These extensions are being open-sourced so that pipelines can further extend and customise the nodes for their specific USD setup. Note that the USD version supported in Nuke has been upgraded to version 20.08.
Monitor Out Extension
The Monitor Out system in Nuke and Nuke Studio is extended across the Nuke software with a more stable, consistent experience – plus independent output transform controls and support for Nuke Studio’s floating window. With this window, a new Monitor Out properties panel allows Nuke users to view images on a second display without the need for a Monitor Out card.
Nuke Monitor Out floating window
The new Output Transform Control allows independent colour settings to be applied to improve viewing accuracy regardless of the monitor the software is connected to. As a first step with HDR workflows, a beta feature now adds the ability to display HDR images on XDR- and EDR-enabled monitors on macOS.
A new Input Process knob will change the resolution of the output sent to your Monitor Out window, which helps when working at larger resolutions and monitoring to a lower resolution device. New Flip controls allow you to view images in new contexts and adjust the gamma and gain of the viewer, and toggle the display on your second monitor. This gives users more control during reviews.
In Nuke Studio, you can also switch directly between the Timeline and Node Graph with no disruption to Monitor Out output. The new Interactive mode allows greater control over how image buffers are displayed on the second screen, supporting side-by-side display of buffers. Customisable safe zone guides can also be overlaid.
Remote Sync Review
The extended Sync Review improves collaboration by allowing dispersed teams to review together or remotely. Sync Review can be used for client reviews or daily sessions, whole sequences or selected playlists, and helps make sure everyone is working towards the same goal. Teams can connect an unlimited number of Nuke Studio, Hiero or HieroPlayer sessions to a single sync session, so that teams can see projects in context, and sync all the relevant reviewing actions from playback and annotations to editorial changes. Teams can use any of the comparison modes such as side-by-side or wipe, and also make soft changes. All selected clips will update automatically to new versions.
Cryptomatte, Python 3, Annotations
Cryptomatte is now integrated natively into Nuke with a readjusted UI and Python 3 support, and has been moved into the Keyer section of the menu. Cryptomatte was developed by Psyop to create ID mattes, or image masks, automatically, with support for motion blur, transparency and depth of field based on data available at render time. Because it ships with Nuke, downloading the third-party gizmo is no longer necessary.
Cryptomatte
This version of Cryptomatte has basic backwards compatibility, a new icon and a vertical matte list to make viewing selected mattes easier. In the new manifest source, users indicate whether the Cryptomatte manifest is embedded in the input image as metadata, or if it is contained in a separate sidecar manifest file. The wildcard functionality added to the matte list means you can make complex selections using an asterisk symbol.
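The asterisk wildcard behaves like familiar glob-style matching over matte names. As an illustration of the selection semantics only – Cryptomatte's own matching lives inside the plugin, and the matte names here are hypothetical – Python's fnmatch shows the idea:

```python
# Illustration of wildcard matte selection semantics using Python's
# fnmatch module; the matte names are hypothetical examples of the
# ID-matte names a Cryptomatte manifest might contain.
from fnmatch import fnmatch

matte_names = ["tree_01", "tree_02", "rock_01", "character_hero"]

def select_mattes(pattern: str, names):
    """Return every matte name matching an asterisk wildcard pattern."""
    return [n for n in names if fnmatch(n, pattern)]

selected = select_mattes("tree_*", matte_names)
```

A single pattern like this can pull in a whole family of objects at once, which is what makes complex selections practical in the matte list.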
HieroPlayer now has the same annotation capabilities as Nuke Studio and Hiero. You can add annotations to the sequence or to each version, to keep track of the changes applied across versions.
Adding annotations
Consequently, artists can interact in HieroPlayer sync sessions with access to all annotations and collaborate in real time. Sync sessions are now fully interactive between Hiero, Nuke Studio and HieroPlayer. Users can import and create timelines, manually or from their asset manager, creating a playlist for review or making last-minute additions to daily reviews.
Nuke 13.0 includes support for the VFX Reference Platform 2020, and a major upgrade to its Python API, updating to Python 3.7.7. As of Nuke 13.0, Python 2 support will no longer be available and Python scripts and integrations will need to be updated. Alongside this Python update, Nuke’s other 3rd party libraries have also been upgraded. www.foundry.com
SideFX Houdini Engine for UE4 and Houdini Engine for Unity are now available for commercial customers for free. Previously free for artists using Houdini Indie, this now gives commercial artists and studios the ability to deploy procedural assets created in Houdini to the UE4 and Unity real-time 3D platforms for use in game and XR development, virtual production and design visualisations.
Through the power of Houdini Engine, procedural tools and assets built in Houdini with custom-tailored interfaces can be brought into UE4 and Unity, and used by game artists whether they are familiar with Houdini or not. Houdini Engine does the processing work on the assets, and delivers the results back to the editor. These procedural assets work within the editor for content creation and are baked out before going to runtime.
The Houdini Engine plug-ins have been used on numerous shipped games including King’s Candy Crush, eXiin’s Ary and the Secret of Seasons, and Fishing Cactus’ Nanotale - Typing Chronicles.
Houdini Engine for Unity
The UE4 plug-in has been recently updated to a second version that has a redesigned core architecture and is more modular and lightweight. This version includes a new interface, support for world composition, blueprint support and a wealth of improvements and enhancements.
Customers can access up to 10 of these licenses per studio through the SideFX website, and request as many as they need through their account manager. For other host applications, such as Autodesk Maya, Autodesk 3ds Max and proprietary plug-ins, and for batch processing on the farm, Houdini Engine licenses are available for rent. These licenses are also available as volume rentals for medium and large studios.
Houdini Engine Indie will continue to be free for limited commercial projects where the indie studio brings in less than $100K USD. www.sidefx.com
Artists can light multiple shots at the same time. Each shot can have global or precise per-shot changes.
Foundry’s Katana 4.0 has a new lighting mode and user experience, and updated USD capabilities. A new set of rendering workflows called Foresight is the main update in version 4.0. It comprises two new approaches, Multiple Simultaneous Renders and Networked Interactive Rendering, resulting in a fast, scalable feedback process that gives artists a chance to check their creative decisions ahead of final render.
Look Development
The look development architecture and UX in Katana 4.0, accessed through its tools and a shading node UI, allows look development to continue at the same time as shot production, whether artists are working on a single complex asset or a series of variations. Other tools can be used for procedural shot-based fixes or tweaks that all members of the production team can view and follow. Artists can also use Katana to drive and control look development with production metadata, so that teams can balance automation with manual work and achieve both efficiency and a high quality result.
A single Network Material Create node may be used to create multiple materials that share shading nodes. The ability to create complex groups of materials gives artists more freedom.
Artists interact with nodes built as part of a node graph system that can handle very complex shading networks. The workflows associated with these tools are, in turn, compatible with Katana’s complex pipeline workflows.
The Network Material Create node is able to create and manage multiple materials inside one node. Each material can have networks for any active renderer plugin, plus USD Preview Surface materials. Using USD Preview Surface, artists can use Katana 4.0 to view materials, lights and shadows in the Hydra Viewer without rendering.
The Viewer is Katana’s viewport driven by Pixar’s USD Hydra system that was designed to work with modern graphics cards and handle massive scale. Due to a rewrite of the bridge that connects Katana to Hydra and the HdStorm render delegate, which aggregates and shares GPU render resources, users have better viewer performance and a more robust interpretation of USD information.
Here is an example of the PBR support available through HdStorm in Katana’s Hydra powered viewport.
Using Katana’s Network Material Edit node, look development and lighting artists make procedural edits to their existing network materials. They can customise a material from a material library for a specific purpose, for example, or make procedural shot edits to approved art direction. The Network Material Edit node’s UI visualises the full network, including materials designed in other packages and imported by USD, plus any edits, and each criterion is filterable.
Apart from using the Network Material Create node to create and manage multiple materials, its workflows include storing materials parameters for the surface, displacement or light shaders for multiple renderers at the same time, and constructing Network Materials using a combination of texture maps and renderer procedural nodes in a specialised UI. Nodes can be shared between multiple shading networks. You can develop looks for variations of assets using Katana’s parent and child material toolset, and place complex shading networks inside Katana’s Shading Groups to simplify sharing and reuse.
Lighting Workflows for Artists
This animation is lit from multiple camera angles, in this case showing multiple shots, but it could also be multiple frames of the same shot. Each change can be viewed across all the outcomes it affects, which improves continuity and reduces revision cycles.
Using the lighting workflows in Katana 4.0, artists can create, place and edit lights in a way similar to the way cinematographers work live on-set. The UI was built for speed and ease of use so that artists and teams can respond directly to art direction. Users can work right on top of the image during a live render session with either gesture based controls, a mouse or a Wacom drawing tablet. Katana’s renderer plugins draw rendered pixels on top of the GL pixel information from the Hydra Viewport.
A major component of the new digital cinematography workflows is the Monitor Layer, used to view the output of the renderer plugin directly in the viewer. Objects can be selected directly from the image using image-based selection tools.
The artist gets to create and edit lights by interacting directly with the image and scene objects – that is, controlling lights based on where they illuminate, or where the light itself should be positioned. Like a cinematographer, you can think of the environment both in terms of practical light sources such as lamps, and in terms of lights that support the scene in reference to the practical lights.
The interactive HUDs from the Lighting Tools allow direct control of any light or group of lights directly from the viewer. The artist can work full screen directly on their image.
That means the usual trial and error process spent adjusting numerical values or 3D transforms is no longer necessary. Instead, you can work in Katana’s viewer with a heads-up display, simplified to focus on controlling properties such as intensity, exposure and colour. Users select which colour and numerical controls are shown in the HUD. You can place a small HUD on each light and set up a HUD spreadsheet across your selected lights. This kind of work is what the viewer is designed for, using as much or as little on-screen control as you need to manage simple or complex lighting scenarios, straight from the viewer.
The idea is for artists to spend less time in the node graph and more time lighting, using the lighting tools to do more of the common tasks in the viewer – such as renaming, deleting and adopting lights for editing. Each of the controls is available for any of the available GafferThree nodes in the active node graph – which are controlled directly from the viewer as well. Sequence-based lighting edits can be managed through multiple GafferThree nodes.
Lighting Production Tools
Beyond the hands-on artist’s side of Katana’s lighting mode, the tools are designed for efficiency, so that fewer artists are needed to manage large numbers of shots at high quality in the least time. Artists can use deferred loading of scene information, procedural workflows, collaborative tools, viewer and live rendering feedback to speed up their work.
In this GL environment, with an asset drawn with ray tracing, the artist has fine-grained control over how they interact with the scene.
Light creation and editing are now both handled in Katana’s GafferThree node, interacting with single lights directly or controlling multiple lights at the same time via the Template Materials. You can edit previously created lights procedurally, allowing lights to have multiple looks across a sequence of shots. By referencing lighting from a library, you can make specific updates without losing the ability to inherit changes made to the lighting library.
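The library-referencing behaviour described above can be sketched in plain Python – this is a conceptual illustration only, not Katana’s actual API; the light and shot names are invented for the example. A shot resolves each light from the shared library, applies its own overrides last, and still inherits any later edits made to the library entry:

```python
# Conceptual sketch only (not Katana's API): shots inherit lights from a
# shared library, and per-shot overrides win without blocking later
# library-wide updates.
library = {"key": {"intensity": 1.0, "color": (1.0, 0.95, 0.9)}}

shot_overrides = {
    "sh010": {},                           # inherits the library as-is
    "sh020": {"key": {"intensity": 2.5}},  # a different look for this shot
}

def resolve_light(shot, name):
    look = dict(library[name])             # start from the library look
    look.update(shot_overrides.get(shot, {}).get(name, {}))
    return look

a = resolve_light("sh010", "key")["intensity"]  # straight from the library
library["key"]["intensity"] = 1.5               # a library-wide update...
b = resolve_light("sh010", "key")["intensity"]  # ...flows into sh010
c = resolve_light("sh020", "key")["intensity"]  # sh020 keeps its override
```

The key design point is that the override is applied at resolve time rather than baked in, which is what lets a light carry multiple looks across a sequence while staying connected to the library.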
Katana interactively communicates all rendering edits as individual changes to the user’s rendering plugin, allowing the software to access very specific information. Instead of coping with crashes, artists can use Interactive Render Filters to override render settings for better performance, without changing the settings for the final render.
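The Interactive Render Filter idea – cheap settings layered over the final-frame settings without touching them – behaves much like a chained lookup. A minimal sketch, with invented setting names and nothing from Katana’s real API:

```python
from collections import ChainMap

# Conceptual sketch only (not Katana's API): interactive overrides sit in
# a layer above the final-frame settings, so fast-feedback values never
# alter what the final render will use.
final_settings = {"samples": 256, "resolution": (3840, 2160), "motion_blur": True}
interactive_overrides = {"samples": 16, "resolution": (960, 540)}

# Lookups hit the override layer first, then fall through to the finals.
interactive = ChainMap(interactive_overrides, final_settings)
```

Here `interactive["samples"]` resolves to the cheap override while `final_settings` remains untouched, mirroring how the filters affect only the interactive session.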
Through Katana’s configurable UI, lighting artists configure each session to make the most of the current task and project, interacting with the lights and shadows of the Hydra Viewport, in the rendered image in the Monitor Layer or in the Monitor Tab. Feedback on the full history of current and past renders can be viewed in the new Catalog system UI (more about Catalog below).
Foresight Rendering
Katana’s scalable interactive rendering and a new set of APIs now make it possible to simultaneously render multiple images as artists work across shots, frames, assets, asset variations and other tasks from within one Katana project file. They can multitask while waiting for renders to deliver feedback on art direction, reducing the iteration cycle time. Using these Multiple Simultaneous Interactive Renders, an artist can also make one choice that affects multiple shots or assets from a single control, and validate multiple outcomes simultaneously.
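The fan-out pattern – one artist choice validated across several simultaneous renders – can be sketched with standard Python concurrency. This is a stand-in illustration, not the Foresight API; the shot names and the `preview_render` function are invented:

```python
from concurrent.futures import ThreadPoolExecutor

# Conceptual sketch only: one edit (a new exposure value) fans out to
# several simultaneous preview "renders", one per shot, instead of being
# checked shot by shot in sequence.
exposure = 1.8  # the single artist choice being validated

def preview_render(shot):
    # stand-in for a real interactive render of one shot
    return f"{shot} rendered at exposure {exposure}"

shots = ["sh010", "sh020", "sh030"]
with ThreadPoolExecutor() as pool:
    previews = list(pool.map(preview_render, shots))
```

The point of the pattern is that feedback for all shots arrives while the artist carries on working, rather than gating each iteration on a single render finishing.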
These toys all share a common vinyl material. The material nodes that make each toy unique only change the texture maps that are applied to the model, while the properties that make it look like vinyl come from a common parent material. In the Katana Foresight workflow, vinyl looks can be changed from one material node and viewed on all the assets that use it at the same time, as multiple live or preview renders.
Machines can also be networked for faster renders and scalable feedback. Because rendering requires computational power, Networked Interactive Rendering has been developed for artists to use networked machines, other than the one they’re working on, to facilitate Multiple Simultaneous Interactive Renders for a single Katana session.
Accessing the extra power makes traditional test rendering via batch farm renders unnecessary. Without affecting their own workstations, artists can see and respond to a render’s interactive progress in the Katana UI instead of waiting for a finished frame.
The farm rendering APIs in Katana support connections to render farm management applications like Deadline, Qube, Tractor or custom in-house tools. Users can then deploy existing render farm resources in dedicated pools for interactive renders during the day, and return them to the pool for final frame renders at night.
Katana 4.0 ships with tools that serve as examples for studios intending to customise their use of Katana Foresight workflows or connect them to a different render farm management application. Katana Queue is a small render queue system built to manage renders on an artist’s local machine, or a set of networked machines. You can also put your machine to work when multitasking with a scalable rendering tool controlled through a new process control interface called the Katana Queue Tab.
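A local render queue of the kind Katana Queue represents can be reduced to a worker pulling jobs in submission order while the artist keeps working. The following is a generic sketch with invented job names, not the real Katana Queue implementation:

```python
import queue
import threading

# Conceptual sketch only (not the real Katana Queue): a worker thread
# drains a FIFO job queue on the artist's machine while the main thread
# keeps submitting work.
jobs = queue.Queue()
finished = []

def worker():
    while True:
        job = jobs.get()
        if job is None:          # sentinel: shut the worker down
            break
        finished.append(f"{job} done")   # stand-in for running the render
        jobs.task_done()

t = threading.Thread(target=worker)
t.start()
for name in ("asset_turntable", "sh010_beauty", "sh020_beauty"):
    jobs.put(name)
jobs.put(None)                   # no more work
t.join()
```

Because a single worker drains a FIFO queue, jobs complete in the order they were submitted – the behaviour an artist would expect from a personal queue.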
Creating multiple simultaneous renders poses the question of how to view them. The Catalog is a UI for managing multiple renders. It can show thumbnails at a user-defined size, displayed as a vertical strip, and update them while the images are rendered. The Graph State Variables and Interactive Render Filters are listed as combined or separate columns, to keep track of what each render shows.
Katana’s Monitor and Catalog allow an artist to see more than one render at a time. The Monitor can show two large images side by side or one on top of the other. The Catalog can now show larger thumbnails that update dynamically as the render progresses.
From the Catalog, an artist can choose two images to show at a larger resolution in the Monitor Tab. Images can be displayed side by side, one on top of the other, or as a wipe comparison, with panning and zoom either synced or independent. Artists can compare two images or two parts of the same image while they work. Meanwhile, the front buffer always drives the Monitor Layer in the Viewer, which can match the scene state and allow you to use Katana 4.0’s lighting environment.
Collaboration
Katana is designed for look development and lighting teams to work together. In the node graph workflows, for example, artists define a series of ordered steps that can be shared with other team members, or reused for other parts of the project. With Live Groups, users group and publish parts of the node graph and save them to disk for sharing or reuse. Similar to a referencing system, you can manage versions of these groups, and publish changes to them wherever they are used.
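Live Group publishing and referencing follows a familiar versioned-asset pattern, which can be sketched in plain Python – an illustration of the idea only, with invented names, not Katana’s Live Group API:

```python
# Conceptual sketch only: each publish creates a new version on "disk";
# references either track the latest publish or pin a specific version.
published = {}  # group name -> list of published versions

def publish(name, nodes):
    published.setdefault(name, []).append(list(nodes))
    return len(published[name])          # the new version number

def reference(name, version=None):
    versions = published[name]
    return versions[-1] if version is None else versions[version - 1]

v1 = publish("env_lights", ["dome", "sun"])
v2 = publish("env_lights", ["dome", "sun", "bounce_card"])

latest = reference("env_lights")         # tracks the newest publish
pinned = reference("env_lights", v1)     # stays on version 1
```

A reference that tracks the latest publish picks up team changes automatically, while pinning gives a shot a stable version – the trade-off any referencing system manages.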
Without writing any C++ code, TDs can build custom tools that perform anything from simple tasks to complex sets of actions. Specific tools can be scripted with Lua via Katana’s OpScript node. Macros, by contrast, package a set of nodes into a shareable tool without scripting. By exposing only selected controls, the user is left with a simpler, more straightforward interface. You can also create SuperTools – tools with a custom UI that can dynamically create nodes. The nodes inside are created and controlled using Python scripts, and presented as a single new node with a customisable interface.
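The macro principle – a network of steps wrapped into one tool with only selected controls exposed – can be illustrated with a partial function in plain Python. The rig parameters here are invented for the example; this is not Katana’s macro mechanism itself:

```python
from functools import partial

# Conceptual sketch only: a "macro" wraps a full set of parameters and
# exposes just the controls the user should see, baking the rest into
# fixed defaults.
def light_rig(intensity, color, falloff, shadow_samples):
    return {"intensity": intensity, "color": color,
            "falloff": falloff, "shadow_samples": shadow_samples}

# Expose only intensity and colour; falloff and shadow sampling are hidden.
simple_light_rig = partial(light_rig, falloff="quadratic", shadow_samples=64)

rig = simple_light_rig(intensity=2.0, color=(1.0, 0.9, 0.8))
```

The user of `simple_light_rig` sees a two-control interface while the packaged defaults still take effect underneath.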
Pipeline Development
Katana includes further functionality to support pipeline development. The software’s USD capabilities continue to expand. As well as the Hydra viewport described above, they now include open-source USD import and export nodes, allowing teams to use USD in production with Katana.
The materials and lighting of the lion can be adjusted while viewing the outcome from multiple camera angles during a live render. The distance from the camera and the viewing angle both affect how the artist would adjust lights or materials.
OpenColorIO (OCIO) colour management is a part of all image-related operations, making certain that colour spaces are correct. The software supports Python, Lua, C and C++ scripting and programming languages, plus libraries such as ILM’s Imath. The software also includes APIs for connecting to asset management tools, for example to make your own custom tools with logic, custom UI and dynamic properties. Other APIs allow you to add your own render engine, and to make complex connections between Katana and render farms.
Katana's rendering system is compatible with existing renderer plugins including V-Ray, Arnold, 3Delight, RenderMan and Redshift. All use the same API, which can power custom renderer plugins. The 3Delight plugin is open source and can be used as a production reference.
Katana ships with a rendering plugin from 3Delight’s developers, and supports interactive live rendering. Live rendering responds to any changes in the scene made to lights, shaders, cameras, geometry, volumes, materials or global settings. Special workflows like light mixing adjust lighting interactively on finished frames, while the edits are fed back directly to lights or groups of lights in the GafferThree lighting tools. The OSL-based shading engine is compatible with both Katana and Maya and allows direct transfer of look development files between them. www.foundry.com