‘Ron’s Gone Wrong’ is DNEG Animation’s first feature film, following four years of hard work and the creation of a brand new animation pipeline. The film was made for Locksmith Animation and Twentieth Century Studios, with DNEG Animation President Tom Jacomb serving as a co-producer on the project.
DNEG Animation was the sole digital production partner on the movie, producing 100 minutes of fast-paced animation. Their teams created over 1,600 final shots, including 132 robot character variants, over 1,900 human character variants and extensive stylised environments.
The story concerns Barney, a shy, fumbling schoolboy learning to negotiate the social media age with a sense of humour. In Barney’s world, the latest digitally connected device is a walking, talking Best Friend Out of the Box, called a B*Bot. However, Barney’s B*Bot, named Ron, is not quite like the ones his friends have, and malfunctions from day one, paving the way for comedy, adventure and ultimately, a much more genuine kind of friendship.
DNEG Animation needed to create and implement some completely new tools and techniques to deliver the filmmakers’ vision. Instead of starting from scratch, they adapted DNEG’s mature VFX pipeline to create a new, robust animation pipeline capable of delivering a full animated feature.
Digital Media World talked to Philippe Denis, the VFX Supervisor at DNEG Animation, to find out about their approach to the task. “A large number of both animators and VFX artists were involved, and the production wanted to take advantage of their collective expertise – all together – to make the most of DNEG’s pipeline,” he said.
From Shots to Sequences
“One of the most fundamental changes to make was creating a sequence-based workflow instead of focussing on shot production, as you would for visual effects. In animation, the standard metric is the sequence. An animation team is assigned a sequence and will only move to the next one when the first one is approved. There might be a degree of overlap in between, but generally, the team goes from one to the next.”
The approach is the same for character FX, FX, crowd work and lighting. For the lighting workflow, for example, which was built in Katana and RenderMan, Lead Lighters establish the look as well as the lighting rig by working on a few pre-selected shots called key shots. Each key shot is representative of a sequence or part of a sequence. When that key lighting phase is done, not only is the look of the sequence in place – lighting direction, colours, exposure and so on – but a lighting rig is ready to be shared across all the shots that make up the sequence.
Philippe said, “This becomes the rig set-up that the Production Lighters will start working with, and then modify to suit each shot. By working this way, we only need to spend time figuring out the visual and technical challenges on a few shots, and then only scale up to the full sequence once everything is established.
“Because we have to create every single shot of an animated movie in every single sequence, it makes sense to work this way. But on the VFX side, the process can be different. Only a couple of shots in a sequence may require CG work, which means that developing a rig per sequence serves no real purpose.
“At other times, shots come in separately from the client, not as a sequence block, and need to be treated one by one. Also, CG is often driven by the plate and the goal is to integrate it as well as possible with the plate, so again, the work should be approached shot by shot. Having said that, many projects today include entirely CG shots and sequences, without a plate. In such cases, the approach can be similar to the animation process. Increasingly, I see the differences between animation and VFX falling away.”
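To picture the key-shot workflow in code: the sketch below shows a key-shot rig being propagated to every shot in a sequence, with Production Lighters then overriding individual shots. The rig structure, parameters and shot names are all hypothetical stand-ins for a real Katana set-up.

```python
from dataclasses import dataclass, field, replace

@dataclass(frozen=True)
class LightingRig:
    """Hypothetical stand-in for a Katana lighting rig."""
    key_light_direction: tuple
    exposure: float
    overrides: dict = field(default_factory=dict)

def propagate_key_rig(key_rig, shots):
    """Give every shot in the sequence its own copy of the key-shot rig.

    Production Lighters then adjust each copy per shot rather than
    building a rig from scratch."""
    return {shot: replace(key_rig, overrides={}) for shot in shots}

# Key shot "sq010_sh0040" stands in for the whole sequence.
key_rig = LightingRig(key_light_direction=(0.3, -1.0, 0.2), exposure=1.25)
sequence_shots = ["sq010_sh0010", "sq010_sh0020", "sq010_sh0030", "sq010_sh0040"]
shot_rigs = propagate_key_rig(key_rig, sequence_shots)

# A Production Lighter tweaks one shot without touching the sequence look.
shot_rigs["sq010_sh0020"] = replace(shot_rigs["sq010_sh0020"],
                                    overrides={"exposure": 1.4})
```

The point of the structure is that the creative and technical decisions live in the key rig, while per-shot work reduces to small overrides.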
Motion Graphics and Vector Shapes
From the start, the team also collaborated closely with DNEG’s Motion Graphics (MGFX) team to create the B*Bots’ faces. The robots’ build is extremely simple, which meant that everything about them as characters had to be expressed through their faces, which both convey the emotions of the 132 robot variants in pixels and serve as device interfaces.
Most of the robots show downloadable displays – the team called them ‘skins’ – for faces, which are colourful and fun for the kids but are more about moods and style, and ultimately don’t express real emotion from within. Since Barney’s friend Ron is a defective robot, unable to download any skins, his design was very minimalist compared to the other B*Bots. His face was only made of two eyes and a mouth. However, as the hero of the movie, he also needed a full range of motion and expressions, and therefore the animators needed complete control of his performance.
This meant the teams would need two approaches for the Bots. “In order to achieve this, the best approach was to drive Ron’s face with vector shapes. The different poses could then be rasterized in real time, allowing the animators to review their progress as they worked. The intention was to give full control to the animators and make the experience of animating Ron as close as possible to animating a normal face,” said Philippe.
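As a loose illustration of the idea – and only that, since the production rig lived inside professional animation software – a vector shape such as a circular eye can be rasterized onto a coarse pixel grid using nothing more than the circle equation. The grid size and the shape list here are invented for the sketch.

```python
def rasterize(shapes, width=24, height=12):
    """Rasterize simple vector shapes (circles) onto a pixel grid.

    Each shape is (cx, cy, radius) in normalised 0..1 coordinates."""
    grid = [["." for _ in range(width)] for _ in range(height)]
    for cx, cy, r in shapes:
        for y in range(height):
            for x in range(width):
                # sample the pixel centre against the circle equation
                dx = (x + 0.5) / width - cx
                dy = (y + 0.5) / height - cy
                if dx * dx + dy * dy <= r * r:
                    grid[y][x] = "#"
    return ["".join(row) for row in grid]

# A neutral pose: two eyes and a mouth, like Ron's minimal face.
neutral = [(0.3, 0.35, 0.08), (0.7, 0.35, 0.08), (0.5, 0.7, 0.12)]
for line in rasterize(neutral):
    print(line)
```

Because the shapes stay vector until the final rasterization, a pose change is just new circle parameters, which is what makes real-time review cheap.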
More Than Pixels
Further to that, the DNEG R&D team developed another control for pixelating Ron’s face, which brought vitality to Ron’s personality and character. Philippe said, “Animation Supervisor Steven Meyer, the lead animator for Ron, came up with great poses and expressions able to define who Ron really was. Eventually, once they were familiar with the rig, the team of animators could collaborate with the directors Jean-Philippe Vine and Sarah Smith, successfully adding a lot of creativity and developing Ron’s performance even further.”
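A pixelation control of this kind can be imagined as a downsampling pass over an already-rasterized face. This is a hypothetical, stdlib-only sketch, not DNEG’s R&D tool: each block of pixels collapses to a single on/off value by majority vote.

```python
def pixelate(grid, block=2):
    """Coarsen a rasterized face by collapsing block x block cells
    into one value, mimicking a dial on the face's resolution."""
    h, w = len(grid), len(grid[0])
    out = []
    for y in range(0, h, block):
        row = []
        for x in range(0, w, block):
            cells = [grid[y + j][x + i]
                     for j in range(block) for i in range(block)
                     if y + j < h and x + i < w]
            # majority vote: the block is "on" if most cells are on
            row.append("#" if cells.count("#") * 2 >= len(cells) else ".")
        out.append("".join(row))
    return out

face = ["..##..",
        "..##..",
        "......",
        ".####."]
for line in pixelate(face, block=2):
    print(line)
```

With `block=1` the face passes through unchanged, so a single parameter sweeps from crisp to chunky, which is the kind of expressive dial the animators were given.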
Then there were the normal, properly functioning B*Bots. The MGFX artists designed 132 different skins as motion graphics textures, ranging from animals, American football players and zombies to cars, motorcycles, abstract paintings and more. These bots also needed to perform and act, although less than Ron.
“To limit the scope of the work, we chose 12 skins that needed a full rigging solution,” said Philippe. “For these hero skins, our motion graphics department, supervised by Eliot Hobdell, with the help of Production Designer Aurelien Predal and Director Jean-Philippe Vine, came up with the different expressions and phonemes required for their performances. The rig could then trigger these expressions in the same way that a rig triggers blend shapes, ensuring that the animators were in control of the performances.”
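Triggering pre-made expressions “in the same way that a rig triggers blend shapes” might be sketched as follows. The class, control names and texture files are all hypothetical; one real difference to note is that skins are discrete textures, so rather than blending meshes, the rig has to resolve which expression to display.

```python
class SkinRig:
    """Hypothetical sketch of a rig that keys named expression
    controls and resolves them to pre-authored skin textures."""

    def __init__(self, expressions):
        # expression or phoneme name -> pre-rendered texture identifier
        self.expressions = dict(expressions)
        self.weights = {name: 0.0 for name in self.expressions}

    def set_weight(self, name, value):
        # clamp to the 0..1 range a blend-shape-style control expects
        self.weights[name] = max(0.0, min(1.0, value))

    def active_texture(self, default="neutral"):
        # skins are discrete textures, so the highest-weighted
        # expression wins rather than being blended
        name = max(self.weights, key=self.weights.get)
        if self.weights[name] == 0.0:
            return self.expressions.get(default, default)
        return self.expressions[name]

football_skin = SkinRig({
    "neutral": "football_neutral.tex",
    "smile": "football_smile.tex",
    "phoneme_oo": "football_oo.tex",
})
football_skin.set_weight("smile", 1.0)
print(football_skin.active_texture())
```

The animator-facing controls look like any other facial rig; only the resolution step underneath differs.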
One robot uses both techniques. The B*Bot belonging to the character Savannah had both a motion graphics skin, and a cat face that was made of vector shapes. They also created 20 skins that were fully loop-able, giving them the scope to create a sense of animation across a limited range of B*Bots populating a crowd, while leaving the other 100 skins purely static.
“The result of these developments is a testament to the massive collaboration between rigging, animation, motion graphics and surfacing as well as lighting and compositing,” Philippe commented. “Bear in mind that all of these skins could be switched by the animators, going from a car skin to an ‘alert’ skin for instance, and that the bots could receive message emojis and other graphics on top of their skins.”
Certain scenes in the film feature crowds that needed generic characters. DNEG’s VFX artists generally build variants for crowds by creating heads, bodies, clothes and grooms that they combine randomly. They used a similar approach on the Animation side, but varied the process by using those elements to build variants that they specifically wanted – that is, not working randomly. In other words, they opted for a more deterministic approach. They also found that USD’s asset packaging was a useful way of doing this.
Philippe said, “First, our team needed a way of allowing the Design Department at Locksmith Animation to make choices about the characters early on in the process, which we did by creating a variety of different design elements using Photoshop layers. Then, we asked the designers to assemble, in Photoshop, specific variants that they liked in order to give us a clear roadmap to follow throughout the build and crowd process.
“Once we ingested their Photoshop files, our task then was to create and assign each model type – body1, body2, face1, face2, hair1, hair2, shirt1 and so on – in the order the designers had indicated. However, from that point, USD gave us the ability to preview the states and looks of the characters using usdview without having to open any 3D software packages or do any rendering. Consequently, the creation of the character generics became much more flexible and efficient to manage.”
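The deterministic build step might be pictured with a small Python sketch. The component names echo those in the article, but the library, file names and function are hypothetical, and the real packaging used USD variants rather than dictionaries: the point is that every character is an explicit designer pick, never a random draw.

```python
# Hypothetical component library, mirroring the Photoshop layer names
# the designers worked with (body1, face1, hair1, shirt1 and so on).
LIBRARY = {
    "body": {"body1": "body1.usd", "body2": "body2.usd"},
    "face": {"face1": "face1.usd", "face2": "face2.usd"},
    "hair": {"hair1": "hair1.usd", "hair2": "hair2.usd"},
    "shirt": {"shirt1": "shirt1.usd", "shirt2": "shirt2.usd"},
}

def assemble_variant(picks):
    """Build one generic character from the designers' explicit picks,
    rather than combining components at random."""
    missing = [slot for slot in LIBRARY if slot not in picks]
    if missing:
        raise ValueError(f"designer picks incomplete: {missing}")
    return {slot: LIBRARY[slot][name] for slot, name in picks.items()}

# One variant as a designer might have specified it in Photoshop.
kid_01 = assemble_variant(
    {"body": "body2", "face": "face1", "hair": "hair2", "shirt": "shirt1"}
)
print(kid_01)
```

In the real pipeline each slot would be a USD variant set on the character prim, which is what lets usdview flip between states without any rendering.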
When it came to the final approval of the generic characters, they carried this out as much as possible in an actual crowd context, not character by character. The idea was to make the best creative choice possible, by making decisions in the best context possible. By reviewing the characters in a crowd context – using a small, medium and large crowd set-up – they made sure that the characters were pleasing as a group both from a model and surfacing standpoint.
Philippe said, “For instance, for the children at school, as soon as we started generating characters, we created a generic crowd set-up for both kids and B*Bots so that we could control how well they fitted together. We kept updating the crowd every time we created more characters so that we could make creative decisions and adjust our choices early on, but still be assured they would work together before going into production.”
The entire world of Barney’s home town of None Such was built as a procedural library, an approach DNEG’s pipeline was well suited to. The town was set in a hilly location, and was designed by placing 3D cameras into the storyboards. Observing the storyboard camera angles and views, and building the town as they went through the story, allowed the team to build the terrain and buildings at the same time.
Philippe called this process contextual decision-making. “We were able to address and answer all of the usual story-based questions before production got underway, even for the very complex environments such as the Bubble HQ showroom and store. We could light the sets and then address surfacing after,” he said.
Reflecting on the way this initial feature animation project at DNEG Animation was handled, Philippe returned to the example of VFX compositing and lighting. “In Animation, we benefited greatly from the experience and tools that DNEG has developed over the years in VFX. Our lighters were compositing their own shots and, even though they are very talented, well-rounded artists, they are not specialised compositors,” he said.
“But we wanted to keep the same process of making lighting artists responsible for their compositing, as it simplifies the review process and the revision, involving only one department instead of two. VFX relies much more heavily on compositing because of the fine tuning it requires to integrate the CG elements into a live action plate. A lot of complexity and subtlety can be added at compositing time.”
The animators were keen to take advantage of this knowledge at the studio to improve their process and the quality of their final images in animation. For this reason, Jean-François Leroux from VFX joined the team as a Compositing Supervisor. His role was to share his knowledge by introducing new tools and processes to the rest of the team.
Jean-François also took part in the key lighting process Philippe mentioned earlier, brainstorming with the two Lighting Supervisors, Matt Waters and Pietro Materossi, as well as the Lead Lighters. This sometimes led to a more efficient way of doing things in compositing, as opposed to lighting, while pushing the visuals. In those cases, Jean-François was in charge of creating the compositing template for the sequence, in the same way a Lead Lighter creates a lighting rig.
Jean-François became the supervisor of a large team of compositors in Mumbai who were in charge of Image Finaling, or IMF. Philippe said, “This is the last phase of production, occurring just before post-production and the colour timing. The goal is to fine-tune the images and make sure they are perfectly clean.
“To facilitate that work, Jean-François established the Nuke file template, so that the Lighters would name and ingest their render layers in the same way. By adding some standardisation to the process, the compositors in charge of IMF could focus on the work, not on figuring out how the compositing files were put together.”

www.dneg.com
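A naming convention like the one such a template enforces can be checked mechanically before any Nuke script is opened. The pattern below is an invented example of such a convention, not DNEG’s actual one.

```python
import re

# Hypothetical naming convention for render layers ingested into a
# shared Nuke template: <sequence>_<shot>_<layer>_v<version>.exr
LAYER_PATTERN = re.compile(
    r"^(?P<sequence>sq\d{3})_"
    r"(?P<shot>sh\d{4})_"
    r"(?P<layer>[a-z]+(?:[A-Z][a-z]+)*)_"
    r"v(?P<version>\d{3})\.exr$"
)

def validate_layer_name(filename):
    """Return the parsed fields, or None if the name breaks the
    convention, so compositors can trust how scripts are put together."""
    match = LAYER_PATTERN.match(filename)
    return match.groupdict() if match else None

print(validate_layer_name("sq010_sh0020_beautyKey_v003.exr"))
print(validate_layer_name("final_render_new2.exr"))  # non-conforming
```

Standardising names this way is what lets every compositor open any shot’s script and know where each render layer plugs in.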