production pipeline

We record the main layer of visuals, the two characters' performance, using the Azure Kinect depth camera and Depthkit software.

In practice, the performance capture consists of the talent performing the visuals to the audio recording, which serves as the master script. This is also the stage at which we block the scene.

We’ll be using two Azure Kinect cameras daisy-chained in a studio, allowing us to capture a wider area of the body surfaces, from front and back. We then export this point-cloud data and import it into Maya as animated meshes (OBJ sequence or Alembic).
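For readers unfamiliar with per-frame mesh exports, the OBJ-sequence path can be sketched in plain Python. This is only an illustration of the data shape, not part of our actual toolchain: the folder layout and function names here are hypothetical, and in practice Depthkit and Maya handle the transfer.

```python
from pathlib import Path

def read_obj_vertices(obj_text):
    """Extract (x, y, z) vertex positions from OBJ-format text."""
    verts = []
    for line in obj_text.splitlines():
        if line.startswith("v "):  # vertex position lines in the OBJ format
            _, x, y, z = line.split()[:4]
            verts.append((float(x), float(y), float(z)))
    return verts

def load_obj_sequence(folder):
    """Load every frame of an exported OBJ sequence, sorted by filename,
    so frame order matches the recorded performance."""
    return [read_obj_vertices(p.read_text())
            for p in sorted(Path(folder).glob("*.obj"))]
```

An OBJ sequence is simply one such file per frame; Alembic packs the same animated geometry into a single file, which is why it is usually the more practical choice for long takes.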

Most of our art direction happens in Maya, where we turn the meshes into dynamic particles. The advantage of dynamic particles is that we can simulate physical interactions with the forms. We’ve chosen to create this universe in Maya because it has an incredibly powerful set of tools for particle effects and animation. In Maya, we also create the other layers of animation that correspond to the layers of visual experience in the VR world of our piece.
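The core idea behind “dynamic particles” can be shown with a minimal sketch: each mesh vertex seeds a particle, and a force field updates its velocity and position every frame. This is a toy explicit-Euler integrator for illustration only; in production, Maya’s nParticles and fields do this work, and the gravity constant here is just an assumed example force.

```python
GRAVITY = (0.0, -9.81, 0.0)  # assumed downward force, like a Maya gravity field

def step_particles(positions, velocities, dt):
    """Advance each particle one time step with explicit Euler integration:
    velocity picks up the force, position picks up the velocity."""
    new_pos, new_vel = [], []
    for (px, py, pz), (vx, vy, vz) in zip(positions, velocities):
        vx += GRAVITY[0] * dt
        vy += GRAVITY[1] * dt
        vz += GRAVITY[2] * dt
        new_vel.append((vx, vy, vz))
        new_pos.append((px + vx * dt, py + vy * dt, pz + vz * dt))
    return new_pos, new_vel
```

Because the particles respond to forces rather than playing back fixed keyframes, the captured performance can visibly react to events in the scene.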

The end result of this process is 4 layers of mesh animation with particle behaviour. These layers are:

1.   visuals seen in the exterior space
2.   visuals seen in the intimate space
3a. visuals seen in the inner space through character A’s POV
3b. visuals seen in the inner space through character B’s POV

All these final layers are then exported from Maya to Unity, where they are put into place and made responsive to the spectator’s position in the virtual-reality scene. The lighting of the scene will be intuitively rebuilt in Unity using a skybox and a light map brought in from Maya.
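The spectator-position logic above can be sketched as a simple distance test against a character. This is a Python illustration of the behaviour, not our Unity C# script: the radius thresholds and function name are hypothetical placeholders for values we tune in the engine.

```python
import math

# Hypothetical thresholds (in metres) for the piece's three spaces.
INTIMATE_RADIUS = 2.0
INNER_RADIUS = 0.5

def active_layer(spectator_pos, character_pos):
    """Pick which visual layer the spectator sees, based on their
    distance to a character in the scene."""
    dist = math.dist(spectator_pos, character_pos)
    if dist < INNER_RADIUS:
        return "inner"      # layer 3a or 3b, depending on the character
    if dist < INTIMATE_RADIUS:
        return "intimate"   # layer 2
    return "exterior"       # layer 1
```

In the piece itself this switch would be evaluated every frame against the headset’s tracked position, so moving toward or away from a character crossfades between the exterior, intimate, and inner layers.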