
Jumping from Ubisoft’s video game, the characters and creatures in 'Prince of Persia' inhabit a mythical world of adventure and romance. In-depth interviews with artists from four major VFX houses reveal the diverse new tools and techniques used to skilfully bring the fantasy to life. From Digital Media World Magazine


Framestore’s Visual Effects Supervisor Ben Morris and team put creativity, R&D and a heavy dose of hard work into the project. Although only 124 shots remain in the final edit, the artists worked on over 200.

On Set in Morocco
Ben spent six weeks on set in Morocco working with the film’s VFX Supervisor Tom Wood. From week two of the shoot, the production ran two units. While Tom worked with the main unit, Ben would take second unit to work out the best way to shoot scenes requiring effects, set up witness cameras and act as the eyes and ears of the VFX team, should the unit make decisions that would affect their work later.

He also ensured they would have enough tracking data back in London, updated camera and VFX floor sheets, and collected references, textures and HDRI for CG, set augmentation and unexpected shots.

Framestore contributed a fair amount to the concepts and story for the shots they were assigned. Ben notes that this is a growing trend in VFX. The critical sand room sequence the team handled at the end of the film takes place in an entirely CG setting, and is driven by what the FX artists have done with the environment, the look and even the final action.

Concept Drawings
Developing their concepts virtually from scratch made the project dynamic and creative for the team. Framestore’s VFX Art Director Kevin Jenkins spent over four weeks in the production Art Department after his drawings had won approval, working through this sequence with Production Designer Wolf Kroeger. It required numerous iterations and cycles to get right, but their close involvement with the scene, which was integral to the story and had emerged late in the production schedule, was a huge asset. They could judge and control how complex it became and the way it was shot, literally as it happened. They would make a decision in the morning and a build would begin the same afternoon.
Their main preoccupation was handling and rendering the huge volumes of sand required in wide shots. Seeking the simplest solution, they considered such options as creating it as an animated texture on a plane, or a matte painting, and assessed whether the movement of the sand would be visible at certain distances. In the end, they chose to render all the sand in the same way – as particles. Preparing animated textures, for example, would either require shooting huge amounts of sand, for which they lacked time and opportunity, or generating it through a simulation and render, converting to a texture sequence and putting it on a plane. They would still need interaction, falling debris, surges and airborne material. However, rendering it all as particles would cover all situations and give a consistent look.

Universal Particles
Furthermore, the compositors wouldn’t have to balance out multiple techniques for generating the look of the sand. They would receive a coherent package of rendered elements all shaded the same way and responding to the light consistently. Lead Effects Developer Alex Rothwell wrote a DSO and interface to RenderMan that allowed them to scale the particle renders, and this became the basis of an efficient, controllable pipeline.

The particles could be generated from either Houdini or Maya. They wanted to be software agnostic to suit the skills of potential team members joining the project. The pipeline also had to be data-centric. As long as simulations of particles could be generated and stored in a universal or common cache format, the RenderMan DSO pipeline could take data from any of these applications, and tools were developed in both that complemented each other. They ended up with a team of Houdini and Maya artists who could all write out the same p-cache file format that Alex developed for the job.

Ben explained, “To the renderer, those simulations all looked identical. We could concatenate them, manage them all off-line out of the packages and write them to disk. We could write a description file for the render which would serve as a shopping list specifying, for example, ‘These four caches should go there, those from Maya over here . . .’ and so on. They would be put together through the render pipeline.”
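The ‘shopping list’ description file Ben mentions can be sketched as a small manifest builder. This is an illustrative reconstruction only – the `CacheEntry` fields, the line format and the `build_shopping_list` helper are assumptions, not Framestore’s actual p-cache pipeline:

```python
from dataclasses import dataclass


@dataclass
class CacheEntry:
    """One simulated particle cache, regardless of source package."""
    path: str      # p-cache file on disk
    source: str    # "maya" or "houdini" -- the renderer treats both identically
    offset: tuple  # world-space placement of this cache


def build_shopping_list(entries):
    """Serialise a render description: which caches go where.

    The renderer would read this list and concatenate the caches;
    the field layout here is illustrative, not the production format.
    """
    lines = []
    for e in entries:
        lines.append(f"{e.path} {e.source} {e.offset[0]} {e.offset[1]} {e.offset[2]}")
    return "\n".join(lines)


manifest = build_shopping_list([
    CacheEntry("sand_fall_v03.pc", "houdini", (0.0, 12.5, 3.0)),
    CacheEntry("debris_hit_v11.pc", "maya", (4.2, 0.0, -1.0)),
])
```

The key idea is that, because every package writes the same cache format, the manifest never needs to care which application a simulation came from.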

Light Occlusion
Alex Rothwell also modified the in-house software occVox, originally developed to illuminate Aslan’s hair in the Narnia films. The occlusion calculation they aimed to build into their lighting pipeline for the particles actually proved similar to the challenge of achieving self-occlusion within a volume of hair. “When developing the lighting for Aslan's mane, we found that traditional shadowing techniques ground to a halt due to the very large number of individual hairs involved in the calculations,” Alex explained.

“Also, because of how fine the hairs were, unless our calculations were very precise – and consequently very slow – a lot of noisy artefacts appeared in the final image. By grouping hairs into sets based on location and thickness, and by representing these sets in a special way, we could calculate the light interaction between large numbers of hairs quickly and without introducing any artefacts.

“When shading the sand, we found a similar problem - large numbers of small elements that all contribute to the lighting of their adjacent elements and surrounding geometry. By adjusting the primary occVox algorithms to group sand particles instead of hair, we could use the existing code to generate quick, accurate lighting interaction between literally billions of points.”
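The grouping idea Alex describes can be illustrated with a toy version: bin the particles into voxels, then compute occlusion against voxel counts rather than against billions of individual points. The sketch below, a simplification and not the real occVox algorithm, only marches light travelling straight down; `voxelise`, `occlusion_from_above` and the absorption constant are all hypothetical:

```python
import math
from collections import defaultdict


def voxelise(points, cell=1.0):
    """Group particles into voxels, as occVox groups hairs or sand grains
    (illustrative: a real implementation stores densities, not raw counts)."""
    grid = defaultdict(int)
    for x, y, z in points:
        grid[(math.floor(x / cell), math.floor(y / cell), math.floor(z / cell))] += 1
    return grid


def occlusion_from_above(grid, voxel, absorb=0.1):
    """Approximate self-occlusion for one voxel: attenuate light travelling
    straight down (-y) by the particle counts of the voxels above it."""
    i, j, k = voxel
    blockers = sum(n for (a, b, c), n in grid.items() if a == i and c == k and b > j)
    return math.exp(-absorb * blockers)  # transmittance in [0, 1]
```

Because the per-voxel counts are precomputed, the cost of shading scales with the number of voxels, not the number of particles – the property that made the approach viable for “literally billions of points”.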
The large amount of dust rising from the avalanche of sand partly shared the same system, although dust has quite different characteristics. Some of it was rendered volumetrically and some as fluid renders. Some came from Houdini, which worked well because it could be driven by the action of the simulation of the collapsing debris and the sand itself. Some of the dust was made from 2D elements.

Animated Collapse
The interactions between the sand and the massive chunks of collapsing architecture were handled shot by shot. Most shots were started by giving the animators very low-res architecture, chopped into large chunks, and NURBS or poly animation planes for the sand. They would block out the structure and height of the sand, showing areas of sink or of undulation and also, as post-vis, they would animate the dynamic structure of the rocks. These were totally hand-animated to get the timing, cut, editorial pace and action they were looking for.

Once the animations were approved at a rough level, the effects artists would paint the low-res sand planes, from which they would simulate the high-res collapse and destruction of the architecture. They substituted in more accurate representations of the bricks and their shattering, simulating the action with the in-house dynamic solver F-bounce and using the animation ground plane as a collision surface for the chunks of architecture. Simultaneously, they would simulate flow down the sand planes, affected by intersections and collisions with these objects, and calculate surge profiles and undulation on the sand plane that Dastan slid down.

Layered Simulations
“It was a multilayered cake. We locked one simulation for rigid bodies, did the next simulation on top of that for splashes of sand where bricks hit the flow, back spray coming off the rear of structures through the arches. Then another rigid body layer might shatter larger bricks into smaller ones, based on simulation contact and impact. We layered them all up, hoping we’d never have to go back and start again. But of course once we started constructing the physical simulations, change was inevitable," said Ben. “Therefore, we stored the simulations in a modular fashion. If we did have to go back and re-simulate anything, it could be done in isolation.
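Ben’s modular storage of the “multilayered cake” can be sketched as a caching scheme where each layer’s result is kept on disk and only a dirtied layer is recomputed. The class and method names below are illustrative stand-ins, not Framestore’s pipeline:

```python
class LayeredSim:
    """Run simulation layers in order, caching each result so that a single
    layer can be re-simulated in isolation (names are illustrative)."""

    def __init__(self):
        self.layers = []   # (name, fn) in the order they were locked
        self.cache = {}    # name -> simulated result

    def add(self, name, fn):
        self.layers.append((name, fn))

    def simulate(self, dirty=None):
        """Recompute only layers that are missing or explicitly dirtied;
        return the names of the layers that actually ran."""
        runs = []
        for name, fn in self.layers:
            if name not in self.cache or name == dirty:
                self.cache[name] = fn()
                runs.append(name)
        return runs
```

On a first pass every layer runs; after a change, only the dirtied layer is re-simulated while the locked caches around it are reused.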

“It’s not enough to make a dramatic sequence like this physically accurate and correct. It had to look the way the team, the director or Tom actually wanted it to look. We had to be able to massage and direct it.” Nevertheless, the artists watched demolitions of old and new buildings as reference, even footage of the crumbling towers on 9/11. Blown-up power station cooling towers and blocks of flats were useful too.

Six-Day Shoot
These sand-architecture interactions were constructed and approved one stage or layer at a time. Classes of falling sand - trickling, splashing or waterfalling - were written as individual caches also, and brought together in the render. The p-cache format thus became the core of the render pipeline.

Short on time, Framestore built minimal practical props for the six-day shoot planned for the sequence. They needed a door for Tamina and Jake to run through into the sand room, and a plinth, or mechanical slide on a rail, that Jake slid down in some early shots. The most problematic was an inclined sand slope for Jake and the stunt doubles to use in the main part of the sequence. At first, they tried using a heavy, awkward slope which they struggled to keep covered with rough intractable sand that interfered with the actors’ actions. But they gave up after wasting two days and decided to let Jake slide down a smooth board and add CG sand in post.

Later, towards the end of the sequence, Jake hangs perilously from a tower holding all of the sand. Framestore built the top surface of this for him to scramble up before his final back kick. These were the only practical elements in the sequence – everything else was CG, but they had made sure that the concept artwork they had submitted to the DP and cinematographers was enough to tell them how the final space would look. A lot of story and sequence editing followed, of course, distilling it all into a tight, 2-minute action sequence.

Reconstruction
Digital reconstruction of the scene back in London was scheduled over a 12-month period, and involved both 2D green screen and CG elements. Ben and Jake himself had been keen to do as much live action as possible, despite the very short time they had, though on the wider shots they had to rely on CG. The first six months of reconstruction were spent working with the VFX Editor, their own editor and the animation team to post-vis the whole sequence. At one point, it went up to 150 shots, then dropped to 50, back and forth as it progressed through different stages. Sometimes it included much more parkour with Jake bouncing between the arches. Ben recalls considering something close to 45 different versions before locking it at about 60-plus shots.

The lighting scheme came from a mix of on-set reference and concept art. Whenever they shot practical elements of the actors, they took HDRI reference, although once the collapse had begun they changed the lighting so often that they did some relighting on the 2D elements as well. In the latter part of the sequence, Lighting Supervisor Rob Allman and Kevin Jenkins art-directed much of the look of the lighting, referencing old master paintings, moody Rembrandts and so forth. Tom Wood was open to these ideas, moving his vision to striking visuals, dark backgrounds falling off to near blacks, strong light from above and rich saturations in the reds and other old master-style lighting techniques.

Style Bible
Kevin’s own concepts became a ‘bible’ for their style overall. Tom, Mike Newell and Jerry Bruckheimer approved his drawings readily, even while they were still shooting. They were literally Rob’s guide for light and let everyone know how the sequence should look across post-production. “Many projects let the looks develop as the team works but in this case, Kevin’s concepts led the team, which gave Rob a clear target and the compositors also had the same goal. For every key shot, there was a painting to follow. At times he was producing one per day, and the production approved them because they knew how complex the shots were and wanted the lighters to have a target to work toward immediately without waiting for elements.

“The concepts drove the scale and magnitude of the simulations as well, and he created some of the visuals well in advance of starting the shots. It was a long but exciting project, in which we couldn’t have predicted that this sequence would turn out as it did.”

Charming Snakes
The other main portion of Framestore’s award concerned some villainous snakes used as the weapon of the lead Hassansin, the hero Dastan’s treacherous adversary. Armed with the script pages dealing with the snakes, and direction from a consultation with Tom Wood on looks, Framestore’s art department leapt straight into concepts. Terrifying, nasty snakes were called for, so they studied live snakes, looked at scientific references from cobras to brilliant coral snakes, puff adders, black and green mambas, placed all of them on large mood boards and then went over them with Tom, who favoured the desert vipers with horns and rougher scales.

They decided they should be mostly black, broken up with banding and a lighter belly. They put in large Hollywood fangs for danger and produced an amalgam of real snakes to create the vicious sidewinder they needed, going slightly beyond reality especially regarding their tunnelling ability – how well could a snake really tunnel with horns and rough scales?
Following a rapid approval on the design, the build got underway while Ben was still in Morocco. Looking for a way to apply the large, rough scales, they considered basic displacements but rejected them because of the size and coarse shape of the scales. The final solution is a bit like a feather system controlling inclination, twist, elevation and size of all the scales on the surface.

Scaling Up
Their existing feather system was too complex, however, and only worked with proxies. They needed to show the animators the final dimension of the snake. Alex Rothwell wrote a Maya plug-in to apply to a naked snake that generates full-resolution geometry scales. They then applied bump and displacement to these scales as the last level of detail.
One of its main advantages was allowing the animators to see the final shape of the body, and the Character TD to see how the scales reacted to interactions with surfaces – both people and terrain – in their playblasts, rather than waiting to render. It does retain many of the parameters and controls of a feather system, but is tuned especially for these scales.
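The feather-style parameters described above – inclination, twist, elevation and size per scale – can be sketched as a simple per-attachment-point generator. The specific falloff rules below are invented for illustration; only the four parameter names come from the article:

```python
import math


def place_scales(points, base_size=1.0):
    """Per-scale parameters in the spirit of a feather system: inclination,
    twist, elevation and size for each attachment point. The falloff rules
    are illustrative assumptions, not the production setup."""
    scales = []
    for u, v in points:  # u: position along the body, v: 0 belly -> 1 back
        scales.append({
            "inclination": 35.0 * (1.0 - v),                # lie flatter on the back
            "twist": (180.0 * u) % 360.0,                   # rotate around the girth
            "elevation": 0.05 * math.sin(u * math.pi * 8),  # subtle ripple down the spine
            "size": base_size * (0.6 + 0.4 * v),            # larger scales along the back
        })
    return scales
```

Because the parameters are generated procedurally from surface position, the same controls can re-dress the whole body at once – the property that let the plug-in show animators final-resolution scales in playblasts.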
Rigging and animations were difficult. “Snakes have often been a challenge,” Ben said.

“Theoretically, you should be able to put them on a path and get them moving. But considering these snakes’ activities in the film, even a kinematic solve becomes hard. For one shot you’ll want the head to drive the action but for another you’ll want it reversed, or to anchor all movement from one point. These demands made the rig complex. It took a long time to design and build what the animators needed and at one point they cheated and used two snakes in one shot, concealing the join, which gave them more flexibility to carry out their manoeuvres.

Super Snake Rig
“The more subtle almost static shots - such as the snake on Tamina’s chest when she lies unconscious in the temple, or the snake coming out of the ground to strike at the oasis – were the hardest because you needed to control every part of the body down the spine, and couldn’t do it all with forward kinematics. At least we have a super snake rig now, which has so far been adapted for the Medusa in ‘Clash of the Titans’.”

The shot covering the snake’s demise crept up on the team sooner than they expected and they never had a chance to set up witness cameras for enough tracking reference. So, they had to make do with a stand-in – a bicycle inner tube stuffed with raw salmon and tuna, fake blood and stringy additions. They placed basic white tracking markers on the outside of the tube and got the Hassansin's actor to perform the scene, before and as he sliced the snake open. They simply did as much tracking as they could despite the squashy subject.
They ended up rendering the guts in the interior in CG, based on the lighting and look of their prop. Doing it this way also meant they could constrict muscles based on a real snake’s death throes, and allowed better slicing of the scales with more accurate alignment and animation.

Dagger Rewind
Double Negative worked on four main scenes in ‘Prince of Persia’ involving about 200 shots portraying the magical aspects of the story. These included three scenes in which the hero Dastan’s magic dagger is activated, releasing the sands of time and sending the character holding it back into the past. For this time-rewinding effect, the director wanted the character to separate from his body so that he could stand back and see the world rewinding.

Double Negative’s Sequence Lead Viktor Rietveld worked specifically on this ‘dagger rewind’ effect, which, with a few limitations, allows an effects team to redesign live action shots in post production. For this project, it represented a challenge and an opportunity because they needed to create two independent versions of the same character - one moving forward called a ‘ghost’ character and one moving backwards, the ‘rewind’ character - shown within a single shot.

Locked Cameras

To achieve their effect, the team sets up as many as 8 or 9 locked cameras on set all pointing at the character from different directions. Afterwards, the character is extracted from the footage by rotoscoping or green screen. The collected images are projected onto a solid 3D object, such as a sphere or cube, which is carved away from around the silhouettes. The result is an accurate 3D object representing the actor. Double Negative’s own tracking software tracks matching points from each projection and makes comparisons between the different cameras as the basis for a mesh, which it can then smooth out and refine. The technique gives a good approximate surface.
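The carving step can be illustrated with a toy visual hull: a voxel block survives only if its projection lands inside the character’s silhouette in every view. This sketch uses axis-aligned orthographic views for simplicity – an assumption, since the real Event Capture setup solves full perspective cameras and then refines a mesh:

```python
def carve_visual_hull(size, silhouettes):
    """Carve a size*size*size voxel cube using orthographic silhouettes.

    silhouettes maps a projection axis (0, 1 or 2) to the set of 2D pixels
    that lie inside the character in that view. A voxel is kept only if it
    projects into the silhouette of every camera (toy version of 'carving
    away around the silhouettes')."""
    hull = set()
    for x in range(size):
        for y in range(size):
            for z in range(size):
                p = (x, y, z)
                # project p by dropping the view axis, then test each silhouette
                if all(tuple(c for i, c in enumerate(p) if i != axis) in sil
                       for axis, sil in silhouettes.items()):
                    hull.add(p)
    return hull
```

Adding more views tightens the hull, which mirrors why eight or nine locked cameras were worth the trouble on set.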

The technique was first developed as Event Capture for ‘Quantum of Solace’, when it was used to complete a sequence in which the actors jump into free fall from an airplane. Applying it to ‘Prince of Persia’ required a few interesting refinements. For example, the actors’ hair didn’t come through the process accurately, demanding help from the matchmoving team, who worked on strands of hair and flapping clothes, lining up images from all the cameras used in a sequence. The result was a workable 3D character that could be projected into live action footage.

Texture Perfect
A shader was also developed that projected each camera’s image back onto the 3D object, the initial intention being to apply a perfect texture so that when a virtual camera eventually shot the complete object, it would be realistic enough. A problem arose, however, because the photographic textures were derived from locked camera positions, and the specular highlights tended to jump over an image rather than move smoothly over the surface as they do in real photography.

This had to be corrected manually with paint work, and the textures had to be split into dozens of layers. The compositors had to combine these layers carefully, starting with eight beauty, or incandescent, layers. Then a vector would be projected into the virtual camera. Viktor wrote a simple Shake script that produced alpha channels, pre-multiplied them with the beauty pass, and then combined them together to produce a smooth appearance. Initially, they would have 16 to 18 layers, which is not that many, but then they would add small patches here and there as required.
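The layer combination Viktor describes rests on the standard premultiplied ‘over’ operator, which a Shake script would chain across the beauty layers. A minimal sketch, with toy RGBA tuples standing in for image buffers:

```python
def over(fg, bg):
    """Premultiplied 'over': fg and bg are (r, g, b, a) tuples whose colour
    channels are already multiplied by alpha."""
    r, g, b, a = fg
    R, G, B, A = bg
    inv = 1.0 - a
    return (r + R * inv, g + G * inv, b + B * inv, a + A * inv)


def combine_layers(layers):
    """Fold layers front-to-back with 'over', front layer first."""
    out = layers[0]
    for layer in layers[1:]:
        out = over(out, layer)
    return out
```

Chaining 16 to 18 such layers, plus patch layers, reproduces the “smooth appearance” the compositors were after, because each layer only fills in where the layers in front of it are transparent.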

Shadow Pass

They also needed a shadow pass to indicate which portion of their projection was incorrect. Considering each camera as if it were projecting a light, they knew that shadows would be cast wherever the projection was wrong – either failing to project or projecting double – and these had to be removed. They would highlight each problem area with a white pixel and output it as a different channel so they could keep track of what needed replacing with other layers.

Ultimately, their combined efforts allowed them to turn their virtual camera right around live action characters with realistic textures. Once all data was in place, it worked well and gave lots of flexibility inside the camera and could be used on both the ‘rewind’ and the ‘ghost’ characters.

Flexibility was important to the director Mike Newell regarding this effect, as he thought he might want to wait until some time after the live action shoot to decide on camera moves. “Of course, it’s an expensive technique and not practical to use all the time, but it was worth it in this instance, especially since we were producing the two characters in each shot," said Viktor. "One was moving forward in time normally, while the other moved backward, but used the same camera motion. Motion control cameras would have been the only alternative here, but these have limited flexibility on set.”

Tracking Points

One of the technique's strong points is that it produces plenty of tracking points. What their tracking software does in particular is compare all of these points from each camera, comparing their relative position in each view and resulting in an accurate 3D configuration that can be re-projected from the cameras to produce realistic textures.

The angle between each camera is where the 3D camera has freedom to move. The more cameras used, the more flexibility it will have. For some shots the array of cameras was wide and extensive, nine cameras with 20° between them.

For the scene when Dastan activates the dagger for the first time, the camera has to travel right around him through 120°, from close up to quite far, shooting three characters all running around. But they had captured enough data to do it, despite the length of the shot, some 300 to 400 frames. This became especially important when, just one month before delivery, a change was required in the 3D camera, which would have been impossible with a straight live action sequence.

Photo Trails
The team also had to develop lead trails or photo trails for the rewind characters - these are generated by 3D objects or characters acting out a sequence - ‘gluing’ the performances together in 3D space and effectively smearing the textures together. Viktor explained, “As the character moves from frame to frame, Houdini can interpolate each frame so it has about eight in-between copies. Each copy, through the length of the whole shot, was then rendered slightly transparent with motion blur at a full 360° shutter on each in-between. This produces a full-length smear of the character across the whole shot. The technique was used in front of the character moving in time. The trails behind them were done with an in-house fluid simulator, Squirt.
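The in-betweening Viktor describes can be sketched in a few lines: for each frame interval, generate interpolated copies that together smear the motion. A toy version with 1D positions standing in for full geometry (linear interpolation is an assumption; Houdini can interpolate more smoothly):

```python
def photo_trail(frames, copies=8):
    """Generate 'copies' interpolated in-betweens per frame interval, plus
    the final frame. Each returned sample would be rendered slightly
    transparent with motion blur to build the full-length smear."""
    trail = []
    for a, b in zip(frames, frames[1:]):
        for i in range(copies):
            t = i / copies
            trail.append(a + (b - a) * t)  # linear in-between
    trail.append(frames[-1])
    return trail
```

With about eight copies per frame, a 300-frame shot yields roughly 2,400 overlapping samples, which is what turns a discrete performance into a continuous trail.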

“It was a difficult effect to grasp, even while we were working with it,” Viktor admitted. “On occasion it was a little mind boggling, moving backwards and forwards at once. We used a combination of Maya and Houdini in our pipeline, that is, it was based on Maya, which includes most of the plug-ins we have developed, plus Double Negative’s in-house volumetric renderer, DNB. We need Houdini for many of our effects. Houdini is a fantastic data wrangler. We also developed a proprietary file format for working between Maya and Houdini, which we used especially for particle handling. It can handle and control the millions of particles we needed for the ghost effect.”

Ghost Effect
Sand, referring to the ‘sands of time’, was a unifying theme across all scenes in ‘Prince of Persia’, linking the sand inside the dagger’s handle, the rewinding effect and the look of the destructive sand at the end of the movie. Maintaining the theme, the ghost effect essentially turned the team’s complete, cleaned-up, matchmoved 3D Dastan into sand. Double Negative have in-house fluid dynamics software, typically used for sand, which they could also apply to the ghost effect: particles that swirl with the same kinds of motion that moving fluids have. In fact, the fluids generated with this software use fluid implicit particles to drive the simulations and can simply be rendered.
Using the application with Houdini, CG Supervisor Justin Martin made a tool that converts the skin into something more like a volume, called a level set, a voxel approach to the surface. Onto this surface, he placed many millions of particles that swirl around with ‘curl noise’, which ensures all the sand particles move properly over the surface. In effect, the entire surface becomes made of sand.
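The ‘curl noise’ mentioned above is a standard trick: take the curl of a smooth noise potential, which yields a divergence-free velocity field, so particles swirl over the surface instead of bunching up or thinning out. A minimal 2D sketch, with an analytic potential standing in for real 3D noise:

```python
import math


def potential(x, y):
    """Smooth scalar field standing in for a 3D noise function (illustrative)."""
    return math.sin(1.3 * x) * math.cos(1.7 * y)


def curl_velocity(x, y, eps=1e-4):
    """2D curl noise: v = (d(psi)/dy, -d(psi)/dx), via central differences.
    The curl of any smooth potential is divergence-free, which is what keeps
    the sand particles flowing evenly over the level-set surface."""
    dpdx = (potential(x + eps, y) - potential(x - eps, y)) / (2 * eps)
    dpdy = (potential(x, y + eps) - potential(x, y - eps)) / (2 * eps)
    return (dpdy, -dpdx)
```

In production the potential would be a 3D noise evaluated near the level set, but the divergence-free property is the same.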

Combing Vectors
One problem was trying to make the hair into sand as well. For close-ups, Viktor had to use paint effects in Maya, applying sand-like textures to strands of hair that they had simulated. Otherwise, most of the sand was dispersed from Dastan’s body, and the Effects TDs could ‘comb’ vectors for streams of particles, dictating how the sand should flow. In general, it needed to move away from the dagger, so an influence field for this was added. Then the fluid dynamics software took over and started emitting particles from his body.
Again, once all factors were in place, the system was simple and worked well. But there was a lot of data, handled with some ten different simulations for a complete body – one for the arm, for the dagger, the hand, another for the hair, all blended together in 2D to extend the simulations, apply large and small grains of sand, add more grains and so on.

Light Fantastic
“Only a few years ago,” said Viktor, “we couldn’t have managed the render for this. Even RenderMan wouldn’t have been able to manage so many particles. We were using our particle system to instance particles to make sure we had enough and, fortunately, RenderMan was upgraded in the meantime to be able to render them without resorting to tricks and instancing. That gave us more freedom.

“We could colourblend other effects into the look, such as emitting light from the dagger or from Dastan’s tattoos. We created some very effective lighting from various sources for the particles that, before version 13 of RenderMan, may not have been possible. This has made a huge difference to our team’s lighting techniques.”

Lighting for the rewind characters was a special case. Regulating the lighting from the different cameras was handled with a primary grade given to the footage at scanning time, so the compositors would all have the same colours to work with. “However, once this grade was baked in, the render was fast and straightforward,” Viktor explained. “Those reconstructed characters did not need relighting, whereas the ghost characters certainly did and were completely re-lit. Dastan’s face was partly live action, partly CG, which needed to be matched to the lighting on set. But sometimes, for more control, we would light the set completely flat and relight everything later.” Most sets for the rewind shots and sequences were almost completely replaced with CG – including backgrounds, extra characters and the ghosts themselves.

Sandglass
The brief for the sandglass scene at the end of the movie was to create a digital environment that looked completely real, but had an enormous light-emitting crystal tower in the centre full of moving twisting sand. The sand needed to reveal images from the past inside the crystal, and at a certain point the chamber had to collapse all around the actors. Finally, the sand inside the crystal begins to escape, destroying everything around it and knocking down walls and stalagmites.

Creating the underground rock cavern wasn't so problematic, but the crystal presented a bigger challenge - what does a 300 foot crystal filled with light and emitting sand look like? After finding some unconvincing photo references, the team eventually based much of their work on crystals they bought from a local alternative medicine shop. With these they captured footage while shining lasers through them and lighting them with different light sources, emulating results such as refraction and flaws that made them feel like crystal.

The greatest challenge of the sequence was achieving the scale. The team visited a quarry, capturing a large collection of photos to reconstruct the rock surface in 3D, and projected textures onto the geometry, creating a realistic rock surface built to real scale. Further assistance came from the quantities of falling material, after researching and studying how rocks would fall from the roof and break up, dragging dust and debris with them. Getting the speed of falling material right helped to convey the scale, as did adding atmosphere. The floating dust particles, barely visible as dust, gave the cavern a realistic feeling of space and distance.

Mythical Alamut
MPC’s VFX team, assigned to create the mystical city of Alamut and the Golden Palace, started on previsualisation in about May 2008. The brief was for a grand, majestic setting that stood out from the other Persian cities in the film. VFX Supervisor Stephane Ceretti supervised on a number of shoots in Morocco and at Pinewood Studios for their 250 shots.
The production Art Department provided striking painted concepts of Alamut, primarily hazy and silhouetted to characterise it as a mythical place. A concept layout in 3D was also provided, which was used as the base for the overall structure of the city and its central rock and tower locations. The proportions and design were then developed further at MPC in 2D and 3D. Detailed concepts were provided for certain parts of the set, such as the Golden Palace, sitting on the central rock overlooking the city.

Over India
A large variety of photographic reference was gathered primarily by Stephane and Tom Wood as well as MPC's photographer, James Kelly. Sources included the set locations in Morocco, a reference shoot that James carried out in India for three and a half weeks at around 50 locations, and book sources such as ‘Over India’, in which aerial views are shot with an SLR attached to a kite.

From the start, the team knew that there would be complex shots with sweeping camera moves through Alamut in various lighting conditions. Previs was undertaken, but the full extent and number of the shots required was not firmly decided until the shoots were completed and the editing had been locked. This meant that the cityscape needed a good degree of flexibility and, therefore, it made sense to construct Alamut fully in 3D, although certain shots were supplemented with matte painting where 3D refinement would have created unnecessary work.

In the end a variety of city shots were done including accurately placing Alamut into a helicopter plate shot in Morocco, extending the Moroccan Eastgate set with Alamut behind and, of course, some fully CG city shots.

Town Planning
Once the concept was signed off, the CG city was built using a TownPlanner tool to randomly distribute a series of different building types based on a grid structure. “It was developed by James Leaning in the R&D department to allow chunks of city with thousands of buildings and props to be laid out quickly and naturally,” Stephane said. “A ‘floor plan’ of polygon planes was used as an input to define the broad structure and TownPlanner allowed for control over things such as roads, arches to join houses, prop variations and random offsets for cloth simulations. Separately, a packaging tool would add different types of props, including washing lines, trees, piles of wood and pots.”
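The grid-based distribution Stephane describes can be sketched as a seeded procedural layout: building types scattered over a grid with small random offsets and rotations. This is a toy stand-in for TownPlanner, which also handles roads, arches, prop packaging and cloth offsets; the field names are assumptions:

```python
import random


def plan_town(width, depth, building_types, seed=0):
    """Distribute building types over a width*depth grid with random
    variation, small positional offsets and snapped rotations."""
    rng = random.Random(seed)  # seeded so a layout can be reproduced exactly
    layout = []
    for i in range(width):
        for j in range(depth):
            layout.append({
                "type": rng.choice(building_types),
                "position": (i + rng.uniform(-0.1, 0.1),
                             j + rng.uniform(-0.1, 0.1)),
                "rotation": rng.choice([0, 90, 180, 270]),
            })
    return layout
```

Seeding the generator is the important design choice: the same inputs always produce the same city, so a layout approved by supervisors can be regenerated or extended without shifting every house.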

To create the bulk of the city, the artists built a selection of 30 houses, from plain buildings to minarets along with a large selection of props and market stalls. Around 50 different props were created and cloth simulations done for laundry on the roofs. Each house had texture variations and a number of other hero or unique buildings were created as focus points. They could control the variation and types of housing in an area in real-time with an OpenGL preview. Close-up shots of the city involved additional layout work and dressing by hand.

Grand Scale
“Alamut city comprised about 20,000 buildings and 180,000 props in total,” said Stephane. “We had made a smaller city for ‘Fred Claus’, where we experienced issues with the pipeline when dealing with a large number of assets and multiple departments working in parallel. With this in mind, before ‘Prince of Persia’ started, we further developed the environments pipeline and tools to allow us to create Alamut at the grand scale the city required.”

The pipeline has tools that allow artists to lay out the collection of 30 buildings cached on disk as a virtual layout, without any geometry existing in the scene. This can be viewed in Maya using the custom OpenGL preview node and released into Shot Packages, which are passed along to artists such as crowd or lighting TDs.
This OpenGL preview node allows artists to change the level of detail (LOD) in Maya, with options such as a fast cube representation up to the fully detailed render models. Artists can also update the layout version within the preview node without breaking connections in their scenes. This makes it much easier for departments like lighting and layout to work in parallel when required.
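The key property described here — swapping the layout version without breaking downstream connections — can be sketched as a stable reference object that resolves to versioned caches on disk. Class name and paths are invented for illustration, not Framestore's or MPC's actual API.

```python
# Hypothetical versioned layout reference: downstream scenes hold a stable
# handle whose version can be bumped without rebuilding any connections.
class LayoutReference:
    def __init__(self, name, version=1):
        self.name = name
        self.version = version

    def cache_path(self):
        # resolve to the on-disk cache for the current version
        return f"/jobs/pop/layouts/{self.name}/v{self.version:03d}/layout.cache"

    def update(self, version):
        # swap the version in place; anything holding this reference
        # sees the new layout the next time it reads cache_path()
        self.version = version

ref = LayoutReference("alamut_city", 3)
ref.update(5)
print(ref.cache_path())
```

Because lighting and layout scenes reference the handle rather than the geometry itself, both departments can iterate in parallel, exactly as described above.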

Level of Detail
“Due to the large number of houses and props seen in certain shots, we built three levels of detail (LODs) - a preview, low and high resolution version of each - to optimise work in Maya and rendering,” Stephane said. “We could then pick at the lighting stage which LOD to render. For certain hero shots we put extra custom work into the high-resolution buildings seen extremely close up. We found at the start that we needed to be careful about how many objects we had in the city. It was easy to lay out too much using TownPlanner, resulting in slower rendering times and bigger memory overheads.”
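A distance-based LOD pick of the kind described can be sketched in a few lines. The thresholds and LOD names here are invented; in the real pipeline the choice was exposed to lighting artists in Maya rather than hard-coded.

```python
# Hypothetical LOD table: (maximum camera distance, LOD name).
# Near buildings get the full render mesh, far ones a fast stand-in.
LODS = [
    (50.0, "high"),              # hero buildings near camera
    (300.0, "low"),              # mid-ground: simplified geometry
    (float("inf"), "preview"),   # far background: cube representation
]

def pick_lod(distance):
    """Return the cheapest LOD acceptable at this camera distance."""
    for max_dist, name in LODS:
        if distance <= max_dist:
            return name

print(pick_lod(10), pick_lod(200), pick_lod(5000))
```

The pay-off is exactly the trade-off Stephane describes: laying out 20,000 buildings is cheap, but rendering them all at full resolution is not, so most of the city resolves to the preview or low LOD in any given frame.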
Various devices were used to bring the city to life, including subtle movement of clouds over the city, dust clouds from markets, chimney smoke, and swaying cloth and palm trees. Alamut was populated with CG crowds created with MPC's crowd system ALICE, which is now efficient enough to be a sound alternative to using 2D elements. As for terrain, because TownPlanner takes a polygon plane as an input, the result depends on what is supplied to it, and it builds within that with a variety of object orientation options. The team also used it to lay out the surrounding forestation and found it quite flexible.

“Rendering the city for the complex 3D shots was heavy, but a lot of work was done early on and throughout to optimise,” said Stephane. “This included work on the pipeline and the way the city is loaded into PRMan from the packaging, along with a number of more straightforward things such as PTC baking for global illumination and separating shadow maps for static and animated objects to reduce the number of per-frame shadows required. We rendered the trees as a separate pass and the city normally in two passes to allow comp control and to keep within a reasonable memory threshold.”

CG Armies
MPC was also responsible for creating CG armies for the siege sequence, which were simulated using ALICE as well. Some of the shots used over 10,000 agents, including soldiers with flags, citizens, horses and camels. To create realistic animation, ALICE was integrated with fur and cloth simulation software to achieve realistic dynamics for the hero agents. A variety of model, prop and texture variation packages were created using a spreadsheet, producing armies with enough variation without having to work too much on individuals. ALICE works with this list and, combined with per-agent shading variations such as slight colour offsets, it allows for more than enough variation.
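The spreadsheet-driven variation scheme described above can be sketched as a deterministic mapping from agent id to a variation package plus a small per-agent colour offset. The package names are invented; the point is that thousands of agents look varied with no per-individual hand work.

```python
import hashlib

# Hypothetical variation packages, standing in for the production spreadsheet.
PACKAGES = [
    {"model": "soldier_a", "prop": "spear", "texture": "tunic_red"},
    {"model": "soldier_b", "prop": "flag", "texture": "tunic_brown"},
    {"model": "citizen_a", "prop": "basket", "texture": "djellaba_grey"},
]

def agent_variation(agent_id):
    """Deterministically assign a package and a slight shading offset,
    so re-running the simulation gives every agent the same look."""
    h = int(hashlib.md5(str(agent_id).encode()).hexdigest(), 16)
    pkg = PACKAGES[h % len(PACKAGES)]
    colour_offset = ((h >> 8) % 100) / 1000.0  # small 0.0-0.099 shade shift
    return {**pkg, "colour_offset": colour_offset}

print(agent_variation(7))
```

Hashing the agent id, rather than drawing fresh random numbers, keeps the look stable across re-simulations and overnight farm runs.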

MPC used its library of motion capture clips, and a dedicated two-day shoot at Centroid at Pinewood Studios, supervised by Crowd Simulation Lead Adam Davies and Stephane, captured specific motions, from armies charging into battle and guards patrolling the city to panicked citizens and Dastan’s elite men rappelling up a wall.

Cloth Simulations

“The biggest challenge was setting up the ALICE cloth pipeline to run cloth simulations for the crowd costumes and flags. This added an extra level of realism to the crowds, particularly for flags and long tunics, or djellabas. It would be run after a crowd simulation was approved and could be simulated on the render farm overnight,” Stephane said.

“We received excellent photographic and plate reference for the soldiers from the shoots, and we built the agents directly from this. For this show we only had one quality of agent, built to a high level - the crowd often tends to come closer to the camera than expected! We used the same motion capture clips throughout, and using the ALICE cloth meant we didn't need to use Syflex for the crowd closer to camera.”

City Building
Sue Rowe, Visual Effects Supervisor for Cinesite, was on set throughout the shoot in Morocco and later at Pinewood Studios, collecting data for her team and planning how they would handle the tasks they had been awarded, comprising 280 shots. The shoot ran from the end of August into December. “Although the art department had provided quite adequate sets, in many cases Director Mike Newell and Producer Jerry Bruckheimer came back from reviewing the footage wanting the whole production to be more - bigger and better. Can we add a couple of hundred buildings? Have a grander palace? Can that helicopter camera move swing even wider?” said Sue.

To create augmentations, the Cinesite team did some 2D compositing, but most of their work was CG. For the two cities they created - Nazaf, King Sharaman’s home, and Avrat, his father’s burial place - they chose to use 3D after experience on other films. “You might plan a city as a matte painting, then discover you need to move the camera, requiring a matte painting projected onto geometry for 2½D moves, and finally realise that these methods are not cost effective. It’s also frustrating for a director to be told when and where to lock off the camera, or to be limited to a nodal pan,” said Sue. “Most directors expect to have full freedom.”

The cities were built from a combination of art department concepts and a detailed survey of the sets at the time of shooting, photographing everything down to the last mosaic tile. Sue did further research at museums in London, in particular an exhibition at Tate Britain called ‘The Lure of the East’, showing works by British artists depicting the cultures and landscapes of the Near and Middle East, which she attended with Tom Wood.

Local Lighting
It was a good introduction to the aesthetic they sought – smoky, dark areas contrasting with bright, hot places. As northern Europeans, this quality of light struck them both, and after spending six weeks in Morocco immersed in the environment they knew the look to go for. The look of Cinesite’s cities was very different from elegant Alamut with its Indian influences. Nazaf and Avrat are run-down war cities, comprising grittier urban sprawl.

While the set was limited to a façade representing only a tenth of the complete city, it gave the team enough information to build the remainder in a modular fashion, with a variety of different doors, arches and minarets to mix and match in buildings of a similar design and feel, but all individually made. Sue reckons about 50 per cent of the shots were done in Morocco; when they returned to Pinewood Studios, they could create effective backgrounds for Dastan’s parkour jumps, performed on wires on a blue screen set.

The team’s real challenge was matching the lighting. They had taken shots on location as a guide, but of course the DP could never replicate that lighting exactly on the studio set. The artists touched up every set, adding more contrast and dark, shadowy areas. Their 2D Supervisor Matt Kasmir dedicated himself to matching the light in the two locations. They had taken silver and grey balls and HDRI cameras out for lighting reference, using a four-camera 360° array to capture the light in a single shot, while their camera wrangling team recorded camera heights, tilts and angles, and lens grids for distortion. DP John Seale also used zoom lenses, not typical for a VFX film, which change focal length and lens distortion and make tracking 3D objects into live action more complicated and time consuming. This data was encoded into the shots.

Lion Hunt

Sue is particularly happy with the CG lioness the team created for a hunting sequence. She was on set when the real lioness the production had sourced was filmed on blue screen, but the animal was too contented and sleek for the role, refusing to roar on cue as the story’s half-starved, dangerous lioness had to do. Sue was quick to suggest a CG replacement.
Sue had worked on a black panther for ‘The Golden Compass’ not long before, and started with the panther rig and fur, building up the lioness from there. “This lioness needed to be quite stylised, and Jerry Bruckheimer asked for bigger teeth, a wider mouth and mangy fur. It was on screen only for a few shots, but the way it suited the story made it worthwhile. We had a photo shoot at London’s lion park for hi-res images of claws, eyes, how moist the nose should be, even a lion with diseased fur. It’s important to capture how it should move and how a lioness moves differently to other big cats.”

Sue has learned that getting the surface of the skin right is critical for big cats, due to the strong layer of muscle under a thin layer of sebaceous fat, with fur on top of that. “When the cat jumps onto a rock, the muscle snaps into place but there is some secondary movement in the fat layer to account for, then over this the fur wrinkles in a special way. These details add to the realism. Unlike an alien, an ‘unknown’, people know instinctively how a lion will look and move.”

Living Crowds
The Massive crowd simulation software was used to populate the cities. The team photographed the cities’ soldiers and citizens on a turntable through 360°, plus close-ups of faces and hands. The images were projected onto CG doubles, which were rigged, and cloth simulations were done. One of the opening shots, starting at the base of the city and rising to reveal the palace, is populated with hundreds of these agents, as well as camels, dogs, horses and goats. “It’s having movement and variety in such shots that sells them,” explained Sue. “We had motion capture sessions to collect data of people carrying out all kinds of likely activities, which we could snap onto the rigs.”

Other software in the pipeline was Shake and Nuke for 2D compositing, and Maya for the 3D. City dust and smoke was generated with Houdini and Maya fluids, and matte paintings for sky replacements were made in Photoshop.

Animated Weapons
Creating CG weapons for the Hassansin fight sequences was a lot of fun for the team. The hand animation of the whips, chains, claws, fire and blades was challenging because they had to match and coordinate the movement of the weapons to a choreographed, live action fight. The actors had enacted the sequences with only handles and stubs in their hands, making moves as if they had been hit.

The team needed to work out the locations of all elements in 3D space while looking at a 2D image of the footage. They relied on LIDAR scans of the set and, by combining these with motion tracking, could work out exactly where the actors were and how far the whips should stretch, or how much their shape should change, to make the necessary moves and hits against props to enhance the actors’ work. 3D Supervisor Artemis Oikonomopoulou was a great help, adding glinting lights to the flashing blades.


Younger Skin
The team also had to do some face replacements onto live action doubles, usually standing in for the actors to save time. “Face replacements are becoming more common in films. We took photo reference of Jake’s face, for example, and tracked it onto the face of his double,” said Sue. “But we also did ‘youthening’ work on the faces of the King and his brother in some scenes occurring about 20 years earlier in the story, for which Mike Newell was reluctant to replace the main actors with younger ones. We shot the scene with and without the actors, and took hi-res images of younger people who closely resembled them under the same lighting conditions.

“In post, we tracked the younger-looking skin onto the faces of the actors to replace the skin under the eyes, thinned the noses, and added eyelashes and hair. We used our in-house plug-in Motion Analyser to track the skin’s surface and locate specific flaws in 3D space. It keeps the new skin accurately in place by analysing motion vectors, maintaining the location of the object as it moves through the shots. It is also being used on other faces, including talking animals such as the dogs in 'Marmaduke'.”

Words: Adriene Hurst
Images: Walt Disney Studios
Featured in Digital Media World.