Last month I tried a new way to generate and use facial mocap from a single-camera rig. Since that test I've been thinking about the design of the second version of the helmet; it will carry a one- or two-camera setup, and I think I'll go with two cameras from now on, because two cameras give you real depth information instead of the flat tracking you get from a single camera. I shot the session with a GoPro HD Hero 2 at 100fps.
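To illustrate why the second camera buys you depth: with two views of the same bead you can triangulate its 3D position, which a single camera cannot do. Here is a minimal sketch of linear (DLT) triangulation with numpy; the two projection matrices and the bead position are made-up values for the demo, not calibration data from my rig.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Recover a 3D point from its 2D projections in two cameras.
    Each view contributes two rows to a homogeneous system A X = 0,
    solved via SVD (the classic linear/DLT method)."""
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # dehomogenize

# Two hypothetical pinhole cameras: one at the origin, one shifted 10 cm sideways.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-0.1], [0.0], [0.0]])])

X_true = np.array([0.02, -0.01, 0.30])  # a bead ~30 cm from the cameras
h = np.append(X_true, 1.0)
x1 = (P1 @ h)[:2] / (P1 @ h)[2]          # projection in camera 1
x2 = (P2 @ h)[:2] / (P2 @ h)[2]          # projection in camera 2

X_est = triangulate(P1, P2, x1, x2)
print(np.allclose(X_est, X_true, atol=1e-6))  # True
```

With only one camera you only get `x1`, and any point along that ray projects to the same pixel; the second view resolves the ambiguity.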
First problem: how can we use the footage as a source for tracking when it has such a strong fisheye look? We can't as-is, so I removed the spherical distortion as much as I could with some plugins in AE, and to ease the tracking process, black-and-white and noise reduction filters were welcome. The result isn't perfect, but this is just a test.
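For anyone curious about what the undistortion step actually computes: the usual approach models the fisheye as radial distortion and inverts it per pixel. Here is a sketch of the Brown radial model with a simple fixed-point inversion; the coefficients are made-up barrel-distortion values for the demo, not measured GoPro calibration.

```python
import numpy as np

def distort(p, k1, k2):
    """Apply the Brown radial distortion model to a normalized image point."""
    r2 = p[0] ** 2 + p[1] ** 2
    return p * (1 + k1 * r2 + k2 * r2 ** 2)

def undistort(p_d, k1, k2, iters=20):
    """Invert the radial model by fixed-point iteration:
    repeatedly divide the distorted point by the scale factor
    evaluated at the current estimate."""
    p = p_d.copy()
    for _ in range(iters):
        r2 = p[0] ** 2 + p[1] ** 2
        p = p_d / (1 + k1 * r2 + k2 * r2 ** 2)
    return p

k1, k2 = -0.3, 0.1                 # illustrative barrel-distortion coefficients
p = np.array([0.4, 0.25])          # a normalized point away from the center
p_d = distort(p, k1, k2)           # where the fisheye lens actually puts it
p_u = undistort(p_d, k1, k2)       # recovered straightened position
print(np.allclose(p_u, p, atol=1e-6))  # True
```

An AE lens plugin does essentially this for every pixel, resampling the frame through the inverted mapping.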
The next steps were quite easy and boring; a lot of people have already done this, so thanks to them for their help. Every white bead was tracked in Autodesk MatchMover, then the data were exported as an .rz2 file.
To use an .rz2 file in Maya you need to convert it to a file format the software can read; that's where Survey Solver 2D enters the game.
Once all that was done, I exported the Maya scene with the 2D trackers to 3ds Max as FBX.
Why is that? I’m a Max user.
Inside 3ds Max I quickly modeled a face and built a basic rig, adding "position" and "look at" constraints so that the bones are driven by the tracked data.
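The "look at" part of that setup boils down to simple vector math: aim one axis of the bone at a tracker and pick a stable up vector. A minimal sketch (the bead position is a hypothetical value, and I'm assuming the bone aims down its local +X axis, which is the 3ds Max bone convention):

```python
import numpy as np

def look_at_rotation(bone_pos, target_pos, up=np.array([0.0, 0.0, 1.0])):
    """Build a 3x3 rotation whose local +X axis points from the bone
    at the target, with the remaining axes derived from an up vector."""
    x = target_pos - bone_pos
    x = x / np.linalg.norm(x)       # aim axis
    y = np.cross(up, x)
    y = y / np.linalg.norm(y)       # side axis, perpendicular to up and aim
    z = np.cross(x, y)              # recomputed up, completes the basis
    return np.column_stack([x, y, z])

bone = np.array([0.0, 0.0, 0.0])
tracker = np.array([1.0, 2.0, 0.5])   # hypothetical tracked bead position
R = look_at_rotation(bone, tracker)

aim = R @ np.array([1.0, 0.0, 0.0])   # bone's local X expressed in world space
print(np.allclose(aim, tracker / np.linalg.norm(tracker)))  # True
```

The constraint solver in Max does the same thing every frame, so as the tracked beads move, the bones rotate to follow them.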
Finally how does it look?
Not that bad (oh come on!). With a better bone rig/skin and some morph targets, this one-camera rig solution would no doubt do a good job on cartoon or low-complexity characters, and it's really lightweight too.
A little more information:
– For the next helmet, the cameras will be placed further away from the face and set to zoom mode, and small LEDs will be added to make the eyes visible so they can be tracked,
– The white beads are little polypropylene beads you can find inside some pillows; they are fixed with liquid latex and stay in place through the whole motion capture session, even in extreme face poses,
– 100fps is not a real need; 50 or 25 would do the job, unless you are shooting a human beatboxer.