MoCap W4 '25: Analyzer Catchup and Performance

  • Writer: Hannah Chung
  • Mar 27
  • 2 min read

This week I caught up on the Analyzer practice we were given in Week 2. I needed to go over it again because some of the steps differed from what we did last year. Luckily, there is also no need to track the mouth in this scene (a blessing).


In class this week, we also had a performance practice lesson in the MoCap lab. The session was mostly for the actors in our groups, but we all attended. It was interesting watching the actors (and extras) get into character. They ran through a series of preparation exercises to get in the zone, and were then given a chance to act out the scene. Through verbal phrases and side gestures, such as pulling or building confidence, you could really see the progression across the different takes. I enjoyed watching Jasmine, as I think she nailed the dominating confidence that Spooner gives off, taking her time to show intention and adding a bit of sass.

The performers taking part in a warm up.

As for the Analyzer work, I followed the ROM list to create keyframes for the most influential poses, then trained and tracked the brows in the brow ROM footage. I followed the same process for the eyes, which had considerably more poses than the brows.

Here's a video of the final tracked eye footage:

It isn't perfect, but since this was only a practice run, I decided to roll with it. For Jasmine's footage, I will definitely be more particular about matching the exact points of the dots on the eyes.


One thing I noticed the first time I ran the eye tracking was that the markers fell off the upper eyelid during the blinks. I retrained the model with a keyframe halfway through the blink, which significantly improved the accuracy on the second tracking pass.
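As a rough illustration of why that mid-blink keyframe helps (this is a toy sketch with invented numbers, not Analyzer's actual tracking method): a tracker guided only by the fully-open and fully-closed poses effectively interpolates between them, and a blink's motion isn't linear, so the guess drifts mid-blink. Adding a labelled frame halfway through cuts that error.

```python
import math

def lerp_track(keyframes, frame):
    """Piecewise-linear interpolation between labelled (frame, y) keyframes."""
    ks = sorted(keyframes)
    for (f0, y0), (f1, y1) in zip(ks, ks[1:]):
        if f0 <= frame <= f1:
            t = (frame - f0) / (f1 - f0)
            return y0 + t * (y1 - y0)
    raise ValueError("frame outside keyframe range")

# Made-up eyelid trajectory over a 10-frame blink: the lid drops fast,
# then settles -- deliberately nonlinear.
def true_y(frame):
    return 50.0 - 30.0 * math.sqrt(frame / 10.0)

frames = [f / 2 for f in range(21)]  # sample at half-frame resolution

def max_error(key_frames):
    labelled = [(f, true_y(f)) for f in key_frames]  # keyframes labelled exactly
    return max(abs(lerp_track(labelled, f) - true_y(f)) for f in frames)

err_two = max_error([0, 10])       # only open + closed poses labelled
err_three = max_error([0, 5, 10])  # plus a mid-blink keyframe

print(f"max error without mid-blink keyframe: {err_two:.2f} px")
print(f"max error with mid-blink keyframe:    {err_three:.2f} px")
```

The same idea applies to any pose the training set doesn't cover well: a keyframe in the middle of the fast motion gives the model something to anchor to.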

Once both the eyes and brows had been tracked from the ROM footage, I was able to export the tracking model to create training frames and a .mat file. These files would then be used to track the data in the actual performance footage.
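On the .mat side: these are just MATLAB-style containers of named arrays, which SciPy can read and write. The snippet below round-trips a toy "tracking model" to show the format; the names and contents here are entirely invented, not Analyzer's real export structure.

```python
import numpy as np
from scipy.io import savemat, loadmat

# Invented stand-in for an exported tracking model: marker names plus one
# (x, y) position per marker. Real Analyzer exports will differ.
toy_model = {
    "marker_names": np.array(["brow_L", "brow_R", "lid_upper"]),
    "positions": np.random.rand(3, 2),
}
savemat("toy_model.mat", toy_model)

loaded = loadmat("toy_model.mat")
print(loaded["positions"].shape)  # one row per marker, (x, y) columns
```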


I opened the performance footage, set up the first frame with the markers in the same positions as in the ROM footage, and finally imported the tracking model. With the data successfully imported, I could then track the eyes and brows without needing to add any more keyframes.

Here is the performance footage with the imported ROM tracking model:

This Analyzer pipeline saves a lot of time, but it is less accurate than doing the work manually (for obvious reasons). I recognise that, moving forward, we will probably end up adjusting the tracking to minimise the jitter.
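For a sense of what "minimising the jitter" can mean in practice, here is a minimal sketch (not Analyzer's actual cleanup tools): a centred moving average over a noisy 1-D marker trajectory, with jitter measured as the mean frame-to-frame displacement. The signal and window size are invented for illustration.

```python
def moving_average(values, window=5):
    """Smooth a trajectory with a centred moving average."""
    half = window // 2
    smoothed = []
    for i in range(len(values)):
        lo, hi = max(0, i - half), min(len(values), i + half + 1)
        chunk = values[lo:hi]
        smoothed.append(sum(chunk) / len(chunk))
    return smoothed

def jitter(track):
    """Mean absolute frame-to-frame displacement."""
    return sum(abs(b - a) for a, b in zip(track, track[1:])) / (len(track) - 1)

# Toy marker track: slow drift plus alternating +/-1 px tracking noise.
raw = [f * 0.1 + (1 if f % 2 else -1) for f in range(50)]
smooth = moving_average(raw)

print(f"jitter before smoothing: {jitter(raw):.2f} px/frame")
print(f"jitter after smoothing:  {jitter(smooth):.2f} px/frame")
```

The trade-off is that heavier smoothing also softens genuine fast motion (like blinks), which is why hand-adjusting the tracking rather than blindly filtering it is usually the safer fix.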





