MoCap W3 '25: Photogrammetry Scan into a MetaHuman
- Hannah Chung

- Mar 20
This week I used Unreal Engine's MetaHuman Creator for the first time. The application felt a lot like a Sims-style character creator, and the real-time adjustments were so fun to watch. With the animations, it felt more alive than Autodesk's character creator model.
This was the base model I started with. I didn't end up changing a lot of the facial features, but it was interesting exploring the different customisation tools like face redness, freckles, eyebrows, teeth colour, etc. The options were extensive. You could also blend multiple pre-made MetaHumans together to create a new one.

Here is my altered MetaHuman:

MetaHuman Creator also gives you the option to change clothes and body types. Here is my MetaHuman in different poses from the body animation series.


Another feature of MetaHuman Creator is the rendering studio, which lets you try out different lighting setups. I liked the Red Lantern lighting, as it felt the most cinematic.

After playing with lighting, I messed around with Level of Detail (LOD) settings. A high LOD setting made it look as realistic as a MetaHuman can be, with much more detail, clarity and depth, while a low LOD made it look like a low-poly game asset made of paper.
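For anyone curious how the switching works: Unreal picks a MetaHuman's LOD automatically based on how much screen space the character takes up (and, a bit confusingly, LOD 0 is the most detailed while LOD 7 is the coarsest). Here's a tiny Python sketch of that selection logic; the thresholds are made up for illustration and aren't Unreal's actual values.

```python
# Minimal sketch of screen-size-based LOD selection (illustrative only;
# the thresholds below are invented, not Unreal's actual values).

# Each LOD index trades detail for performance: 0 = full detail, 7 = coarsest.
# The engine picks an LOD from the mesh's projected screen size (0.0-1.0).
LOD_SCREEN_SIZES = [0.9, 0.6, 0.4, 0.25, 0.15, 0.08, 0.04, 0.0]

def select_lod(screen_size: float) -> int:
    """Return the first LOD whose screen-size threshold the mesh still meets."""
    for lod_index, threshold in enumerate(LOD_SCREEN_SIZES):
        if screen_size >= threshold:
            return lod_index
    return len(LOD_SCREEN_SIZES) - 1  # fallback: coarsest LOD

# A close-up face gets LOD 0 (every wrinkle); a distant figure gets LOD 6.
print(select_lod(0.95))  # -> 0, the full-detail look
print(select_lod(0.05))  # -> 6, the low-poly "paper" look
```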


Next we got called for our photogrammetry session! This took place in its own small studio area. Jasmine (our actress) put on the cap, sat in the chair in front of the 10 cameras, and went through eight 45-degree turns.



At each turn, all 10 cameras took a photo from their different angles. By the time she got back to her original position, 72 photos of her had been taken in a full 360. The photos were then imported into an application called DigiCam.
Once all the photos were taken, Blair (our new MoCap technician) showed us how he took the photos from DigiCam and imported them into a photogrammetry program called RealityCapture, which is made by Epic Games, the company behind Unreal Engine.

The software then aligned the images, matching the discernible areas of her face across the photos. It created a fragmented frame, and the reconstruction box needed to be adjusted so that it fit well around her head. It was also important not to clip features like the tip of her nose.
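I don't know RealityCapture's internals, but alignment generally boils down to finding the same distinctive points (eye corners, freckles and so on) in overlapping photos and matching them up. Here's a rough Python sketch of that idea using OpenCV's SIFT detector; the photo file names are hypothetical.

```python
# Rough sketch of photo alignment's first step: detect distinctive
# features in two overlapping photos and match them. This uses OpenCV's
# SIFT, not RealityCapture's actual (proprietary) pipeline, and the
# file names are hypothetical.
import cv2

img1 = cv2.imread("turn_01_cam_03.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("turn_02_cam_03.jpg", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, desc1 = sift.detectAndCompute(img1, None)  # keypoints + descriptors
kp2, desc2 = sift.detectAndCompute(img2, None)

# Match each descriptor to its 2 nearest neighbours, then keep only
# matches clearly better than the runner-up (Lowe's ratio test).
matcher = cv2.BFMatcher()
matches = matcher.knnMatch(desc1, desc2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]

print(f"{len(good)} confident matches between the two photos")
```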

With all the photos aligned and the model box trimmed to size, the next step was reconstructing her head. This was the longest part of the process, taking about four minutes, but by far the most rewarding, as it resulted in a 3D model of her head that looked like it was made of plaster. It was super exciting seeing her head in 3D space.
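Reconstruction is where those matched 2D points become 3D ones: since alignment worked out where each camera was, a point seen in two photos can be triangulated where the two viewing rays cross. Here's a toy Python sketch of that for a single vertex, with made-up camera matrices rather than real calibration data.

```python
# Toy sketch of triangulation, the core of reconstruction: given the same
# point seen from two known camera poses, recover its 3D position.
# The projection matrices are made-up values, not real calibration data.
import numpy as np
import cv2

# Camera 1 at the origin, camera 2 shifted 1 unit to the right.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

# The same facial feature, as image coordinates in each photo (2xN arrays).
pts1 = np.array([[0.5], [0.5]])
pts2 = np.array([[0.25], [0.5]])

# Triangulate; OpenCV returns homogeneous coordinates (4xN).
point_h = cv2.triangulatePoints(P1, P2, pts1, pts2)
point_3d = (point_h[:3] / point_h[3]).ravel()
print(point_3d)  # ~[2. 2. 4.], one vertex of the plaster-looking mesh
```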



Then the model underwent a colouring and texturing process.

Finally, the software created a texture file by UV unwrapping the model. This produced quite possibly the ugliest UV map I've seen in my time at AUT, but apparently that's not a concern for this project.
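For context on why scan UV maps get so ugly: UV unwrapping gives every 3D vertex a 2D coordinate on a flat texture image, and a whole head can't flatten without being cut into stretched islands. The Python sketch below shows the simplest possible unwrap, a naive planar projection, just to make the 3D-to-2D mapping concrete; real unwrappers are far more involved.

```python
# Minimal sketch of the simplest UV unwrap: project each 3D vertex
# straight onto the XY plane and normalise into [0, 1] texture space.
# Real photogrammetry unwrappers split the mesh into islands and
# minimise stretch, which is why scan UV maps look so chaotic.
import numpy as np

def planar_unwrap(vertices: np.ndarray) -> np.ndarray:
    """Map Nx3 vertex positions to Nx2 UV coordinates in [0, 1]."""
    xy = vertices[:, :2]               # drop Z: flatten onto the XY plane
    lo, hi = xy.min(axis=0), xy.max(axis=0)
    return (xy - lo) / (hi - lo)       # normalise to the unit square

# Four corners of a tilted quad (a stand-in for a tiny patch of the head).
verts = np.array([[0.0, 0.0, 1.0],
                  [2.0, 0.0, 1.2],
                  [2.0, 1.0, 0.8],
                  [0.0, 1.0, 1.0]])
print(planar_unwrap(verts))  # UVs: (0,0), (1,0), (1,1), (0,1)
```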

The last part of the session was devoted to uploading Jasmine's scan into Unreal Engine to turn it into a MetaHuman base. I added the MetaHuman plugin to Unreal Engine 5, then imported her head scan, added the texture, and assigned her a generic body.

After that, I aligned a neutral frame, and then Unreal tracked the face, adding markers for range-of-motion (ROM) movements.

I used MetaHuman Identity Solve to create this base mesh for Jasmine's head.

Then I saved the file and was able to access it in MetaHuman Creator. This was such a wacky moment because the digital Jasmine was moving. After knowing my friend for nearly five years, it was shocking and uncanny to see her digital double, but incredibly cool too.


MetaHuman Jazzy! Each member of the group then made a different version of Jasmine using MetaHuman Creator. Here is my finished MetaHuman Jazzy:


And here she is in some action poses:
