PFTrack 2017 Improves Camera and Texture Extraction Support

PFTrack 2017c

The 2017 releases of PFTrack add more ways to work with depth maps, optimisations for texture extraction via photogrammetry, and support for metadata from RED and ARRI cameras.

All of the PFDepth nodes have now been integrated into PFTrack, giving quicker access to a wide range of tools for creating and manipulating depth maps, including the Z-Depth Tracker, Merge, Edit, Filter, Composite and Cache nodes. Updated Z-Depth Solver and Z-Depth Object nodes are also included.

PFTrack can now be used to prepare clips for z-based compositing, with rotoscoping tools available for editing depth.
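
To illustrate the idea behind z-based compositing (a sketch of the general technique, not PFTrack's implementation), a merge of two layers can select the nearer sample at every pixel:

    import numpy as np

    def z_merge(fg_rgb, fg_depth, bg_rgb, bg_depth):
        # Depth-keyed merge: at each pixel, keep the sample that is
        # closer to camera (smaller depth value wins).
        nearer = fg_depth < bg_depth                 # (H, W) boolean mask
        return np.where(nearer[..., None], fg_rgb, bg_rgb)

Here the colour arrays are (H, W, 3) and the depth arrays (H, W); rotoscoped depth edits amount to altering the depth arrays before the merge.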

The stereo camera and image pipeline has been extended so that users can build a Stereo Camera node that automatically positions the right-eye camera once the left eye has been tracked. Stereo Disparity Solver, Disparity Adjust and Disparity-to-Depth conversion nodes are now part of the pipeline, for fixing issues such as stereo keystone misalignment and common left/right-eye colour and focus mismatches.
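
Disparity-to-depth conversion itself rests on a standard relationship for a rectified stereo pair: depth Z = f * B / d, where f is the focal length in pixels, B the interaxial baseline and d the horizontal disparity. A minimal numpy sketch of that conversion (illustrative, not the node's code):

    import numpy as np

    def disparity_to_depth(disparity, focal_px, baseline):
        # Z = f * B / d for a rectified pair; zero or negative
        # disparities are treated as unknown (NaN) to avoid dividing
        # by zero.
        d = np.where(disparity > 0, disparity, np.nan)
        return focal_px * baseline / d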

Using Z-Depth data, left- and right-eye images can be rendered from a single clip.
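
One common way to do this (a sketch of depth-image-based rendering in general, not necessarily how PFTrack renders the second eye) is to forward-warp each pixel horizontally by the disparity implied by its depth:

    import numpy as np

    def synthesise_second_eye(rgb, depth, focal_px, baseline):
        # Shift each pixel by d = f * B / Z. Pixels are written from
        # far to near so that nearer samples win; disoccluded regions
        # are left as zero-filled holes for later in-painting.
        h, w = depth.shape
        out = np.zeros_like(rgb)
        disparity = focal_px * baseline / np.maximum(depth, 1e-6)
        order = np.argsort(depth, axis=None)[::-1]   # far first
        ys, xs = np.unravel_index(order, depth.shape)
        xt = np.clip((xs - disparity[ys, xs]).astype(int), 0, w - 1)
        out[ys, xt] = rgb[ys, xs]
        return out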

User Interface

In the user interface, PFTrack's node creation panel now organises nodes into groups to make them easier to find, and commonly used nodes can be placed in a new custom node group for quicker access. Tree layouts can be saved as XML preset files to help construct common sets of nodes more quickly; these files can be copied onto other machines or shared with other users to distribute common layouts.

PFTrack 2017b

Digital Cinema Cameras

PFTrack now supports reading ARRI RAW media files. ARRI metadata can also be read from DPX, OpenEXR or QuickTime ProRes files, and camera and lens metadata is read automatically from RED and ARRI source files.

There is also wider support for importing custom XML metadata through the Clip Input node, which now also handles Cooke /i data. All metadata is passed through the tree and made accessible to Python scripts and export nodes.
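
As a rough idea of what custom metadata import involves, the snippet below parses a hypothetical per-frame lens metadata file using Python's standard library; the element and attribute names are invented for illustration and are not PFTrack's actual schema:

    import xml.etree.ElementTree as ET

    # Hypothetical schema: one <frame> element per frame of the clip.
    doc = """<clip>
      <frame number="1" focal="32.0" focus_distance="1.85"/>
      <frame number="2" focal="32.0" focus_distance="1.92"/>
    </clip>"""

    for frame in ET.fromstring(doc).iter("frame"):
        print(frame.get("number"), frame.get("focal"),
              frame.get("focus_distance"))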

Texture Extraction

For artists using photogrammetry to extract textures, an optimised texture map can now be generated automatically in the Photo Mesh node as part of the mesh simplification process. Exposure and brightness differences in the source media are corrected automatically to produce the best-quality texture map, and the exposure-balanced images are passed downstream, where they can be used in the Texture Extraction node for manual texture painting if required.
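
The kind of correction involved can be illustrated with a simple per-image gain balance (one basic approach, not PFTrack's actual algorithm):

    import numpy as np

    def exposure_gains(images):
        # Scale each source photo so its median luminance matches the
        # median across the whole set, removing gross exposure and
        # brightness differences before texture extraction.
        medians = np.array([np.median(img.mean(axis=2)) for img in images])
        target = np.median(medians)
        return target / medians          # multiplicative gain per image

    # balanced = [img * g for img, g in zip(images, exposure_gains(images))]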

Normal, displacement and occlusion maps can also be generated during simplification, ensuring that the simplified mesh retains as much visual fidelity as possible. Occlusion maps can be generated for either sky occlusion or local surface occlusion.
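
For context, a normal map can be derived from a displacement (height) map by finite differences; the sketch below shows that standard derivation, whereas PFTrack's baking transfers detail from the full-resolution mesh:

    import numpy as np

    def normals_from_displacement(height, scale=1.0):
        # Tangent-space normal from height slopes:
        # n = normalise(-dh/dx, -dh/dy, 1/scale).
        h = height.astype(np.float64)
        gy, gx = np.gradient(h)          # slopes along y and x
        n = np.dstack((-gx, -gy, np.full_like(h, 1.0 / scale)))
        n /= np.linalg.norm(n, axis=2, keepdims=True)
        return 0.5 * n + 0.5             # pack [-1, 1] into [0, 1] RGB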

PFTrack 2017a

Experimental RGBD Pipeline

PFTrack now has an experimental RGBD pipeline for depth sensors. Z-Depth data captured by an external sensor can be attached to an RGB clip and passed down the tracking tree, and the Auto Track and User Track nodes have been updated to read z-depth values for each tracker at every frame.
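
Reading a z-depth value turns each 2D tracker into a full 3D measurement. A minimal sketch under a simple pinhole model (illustrative only; focal_px and the principal point (cx, cy) are assumed camera intrinsics, not PFTrack parameters):

    import numpy as np

    def tracker_to_camera_space(u, v, z, focal_px, cx, cy):
        # Back-project a tracker at pixel (u, v) with sensor depth z
        # into a 3D point in camera space.
        x = (u - cx) * z / focal_px
        y = (v - cy) * z / focal_px
        return np.array([x, y, z])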

The Camera Solver node can use tracker z-depth values to help solve for camera motion, which may reduce drift in long shots and improve accuracy on complex camera moves. Depth data also provides 3D information for nodal pans and establishes real-world scene scale without extra steps, and a dedicated Z-Depth Mesh node can convert depth maps into a coloured triangular mesh.
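
The underlying idea of such a conversion can be sketched as follows (a minimal illustration, not the node's implementation; in practice triangles spanning large depth discontinuities would also be discarded):

    import numpy as np

    def depth_to_mesh(depth, rgb, focal_px, cx, cy):
        # Back-project every pixel into camera space, then connect
        # neighbouring pixels into two triangles per grid cell,
        # producing a coloured triangular mesh.
        h, w = depth.shape
        v, u = np.mgrid[0:h, 0:w]
        z = depth.ravel()
        verts = np.column_stack(((u.ravel() - cx) * z / focal_px,
                                 (v.ravel() - cy) * z / focal_px, z))
        colours = rgb.reshape(-1, 3)
        idx = np.arange(h * w).reshape(h, w)
        a, b = idx[:-1, :-1].ravel(), idx[:-1, 1:].ravel()
        c, d = idx[1:, :-1].ravel(), idx[1:, 1:].ravel()
        faces = np.concatenate((np.column_stack((a, b, c)),
                                np.column_stack((b, d, c))))
        return verts, colours, faces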

An iOS application, due for release during 2017, will allow depth data to be recorded using an iPad and the Occipital Structure Sensor capture device. More information is available on The Pixel Farm's website: www.thepixelfarm.co.uk