Technolust, Images Courtesy of IRIS VR

Digital Humans in Technolust

Quantum Capture

Last modification: 30 Aug, 2017
9 mins

Technolust’s gritty world is enough to satisfy any cyberpunk fan. It combines an apocalyptic open-world atmosphere, a compelling narrative, and photo-realistic characters.

We talked to Morgan Young and Craig Alguire from the photogrammetry studio Quantum Capture to learn about the workflow behind Technolust’s lifelike characters.

What is Technolust about?

It’s an open-world cyberpunk exploration game that runs on the Oculus Rift. It’s set in a dystopian future where corporations have taken control. You play as a hacker trying to infiltrate the corporations. It’s an exploration game with all sorts of mini-games tucked away inside the main experience. There are also a few spin-offs. We’re planning to release it on the HTC Vive in the coming months.

What did you guys think when Blair (creator of Technolust) first told you about the game?

We thought it was awesome. You boot it up and it feels like Blade Runner. It really nailed the aesthetic. It draws from cyberpunk references from the ’80s, like the book Neuromancer. Also, Blair did such a great job with music and sound. He used a 3D audio component. That blew me away! The feeling of immersion I got from his short demo was more than any other experience I had tried. That demo came from one guy with only a couple of weeks’ work. So, we felt very strongly that this game would be awesome.

Blair made this game almost entirely on his own. Obviously, he had help and resources available to him, but he was the lead developer, the lead writer, designer, and he worked directly with Oculus to solve a lot of the issues and bugs inside the SDK. He’s been a very vocal person in the VR scene and is responsible for a lot of improvements.

Quantum Capture's 112 DSLR cameras used for their photogrammetry rig, Images Courtesy of Quantum Capture

Tell me about your pipeline and how you use your photogrammetry rig for the characters in Technolust.

We start out with a live performer. We bring them inside of our photogrammetry rig, which consists of 112 DSLR cameras, and we synchronize all of the cameras so they fire at once. We wind up with 112-odd photos of the actor from multiple angles.
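The simultaneous trigger can be pictured with a small Python sketch. This is purely illustrative (a real rig fires hardware shutters over dedicated trigger lines; the thread-per-camera model and the `trigger` function here are assumptions): every camera thread waits at a barrier, and all 112 release together.

```python
import threading

NUM_CAMERAS = 112  # size of Quantum Capture's rig

# Every trigger thread blocks at the barrier until all 112 have arrived,
# then they all proceed at once - the software analogue of a synced shutter.
barrier = threading.Barrier(NUM_CAMERAS)
captured = []
lock = threading.Lock()

def trigger(camera_id: int) -> None:
    # A real rig would send a hardware shutter signal here;
    # recording the camera id stands in for taking the photo.
    barrier.wait()
    with lock:
        captured.append(camera_id)

threads = [threading.Thread(target=trigger, args=(i,)) for i in range(NUM_CAMERAS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(len(captured))  # 112 photos, one per camera
```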

We then fuse all of those photos together inside Agisoft PhotoScan to create a high-res, hyper-real 3D model. From there, we export the 3D model into Zbrush to clean up some posing, and then it goes into 3ds Max where I do a lot of retopology and UV layouts. From 3ds Max, it goes into Maya. I use HumanIK for the base any time I can because it keeps everything within Maya's native tool set.

We set up a skeleton and all the blendshapes if there are any. For facial performances, we have up to 40 different head-scanned blendshapes. I then take those blendshapes and separate them into the left side and right side and add them to the skinned character. We then layer in any motion capture work and export right out of Maya to the game engine.
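The left/right blendshape split can be sketched in plain, Maya-free Python. This is a hypothetical illustration (the toy vertex data, the `BLEND_ZONE` width, and the x = 0 midline convention are all assumptions, not Quantum Capture's tooling): each vertex's blendshape delta is weighted by which side of the face midline the vertex sits on, with a small falloff band so the centre of the face doesn't tear.

```python
# Splitting one full-face blendshape target into left and right halves.
# Each vertex has a rest position and a target position; the delta between
# them is scaled by a side weight derived from the vertex's x-coordinate.

BLEND_ZONE = 0.1  # width of the falloff band around the midline (model units)

def side_weight(x: float) -> float:
    """Weight for the LEFT shape: 1 on the +x side, 0 on the -x side."""
    t = (x + BLEND_ZONE) / (2 * BLEND_ZONE)
    return max(0.0, min(1.0, t))

def split_blendshape(rest, target):
    """Return (left_target, right_target) vertex lists from one full target."""
    left, right = [], []
    for (rx, ry, rz), (tx, ty, tz) in zip(rest, target):
        dx, dy, dz = tx - rx, ty - ry, tz - rz
        w = side_weight(rx)
        left.append((rx + w * dx, ry + w * dy, rz + w * dz))
        right.append((rx + (1 - w) * dx, ry + (1 - w) * dy, rz + (1 - w) * dz))
    return left, right

# Tiny example: three vertices (left, centre, right of the face),
# with a "smile" target that lifts every vertex by 0.2 in y.
rest = [(1.0, 0.0, 0.0), (0.0, 0.0, 0.0), (-1.0, 0.0, 0.0)]
smile = [(1.0, 0.2, 0.0), (0.0, 0.2, 0.0), (-1.0, 0.2, 0.0)]
left_smile, right_smile = split_blendshape(rest, smile)
print(left_smile[0])  # full delta on the left vertex
print(left_smile[2])  # no delta on the right vertex
```

In production the same idea would run on the actual scan meshes inside Maya, but the weighting logic is the core of the split.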

While Craig is rigging and skinning inside of Maya, I'm generally using Mudbox to clean up the textures. Mudbox is the only tool that lets me paint across multiple UV maps. So, if I have several unwraps for one character, like for the heads and hands, I'll have those unwrapped on separate UV sheets inside of Mudbox. I can load the model in such a way that I can paint corrections across the model surface, and it'll respect the changes across all of the UV unwraps. It's great for painting out seams across multiple UVs and things like that.

Technolust character rigged and skinned inside Autodesk Maya (WIP), Images Courtesy of Quantum Capture

How did you guys get into photo-scanning?

At the time, I was working on Assassin’s Creed and Craig was working on Far Cry at Ubisoft. We stumbled upon the scanning technology right around the same time that the DK1’s Kickstarter was going on. We started with some rudimentary scanning at Craig’s place, with 12 DSLRs in his living room. From there, we were interested in seeing this in VR. We managed to get Blair’s head scan into a Unity scene. It was awesome; it was the first time we saw a photo-real 3D scan in VR. We definitely knew we were on to something at that point.

We took it to Ubisoft and tried to sell them on the idea, but they didn’t really see the long-term vision yet. So, we went on our own to see what we could do with it. Both of our skillsets were perfectly applicable: I’m a 3D Artist and Craig is a Tech Artist, among other things. We met with Chris Abel, Blair’s partner in IRIS VR. Chris was blown away by the scanning concept. He was in a situation where he could get the funds for us to leave Ubisoft and grow our camera rig. Our photogrammetry rig went from 12 DSLRs to 80, then 92, and now 112. We started working with Blair to scan the characters in his game, Technolust.

Since leaving Ubisoft, our focus has been on achieving photo-real 3D scans specifically to showcase inside of VR. There was no looking back at that point. Now we run a production services studio that focuses completely on VR development. We do motion capture, 3D scanning, and 360 video. There’s been a tremendous amount of interest since we started.

Still from Technolust, Images Courtesy of IRIS VR

What was the biggest challenge in creating these characters?

The biggest challenge was working on a pipeline that has yet to be developed. Managing the scan data efficiently and translating it into game-ready assets was difficult. Scan meshes are millions and millions of polys and they are unusable inside a real-time application. So, it was difficult to make them efficient and optimized enough to run inside of a game engine while retaining all the detail that we’re able to achieve with the photogrammetry rig. Even just coming up with that pipeline going from scans to real time was a challenge. We do that primarily with all Autodesk products - we couldn’t do it without them!
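As a rough illustration of that scan-to-real-time optimization step (not Quantum Capture's actual tooling, which the interview doesn't detail), here is a simple vertex-clustering decimation in Python. The toy mesh and the `CELL` size are made-up assumptions: vertices are snapped into a coarse grid, each occupied cell becomes one output vertex, and triangles that collapse within a cell are dropped.

```python
# Vertex-clustering decimation: one common way to reduce a multi-million-poly
# scan mesh toward a real-time polygon budget.

CELL = 0.5  # grid cell size; larger cells mean heavier reduction

def decimate(vertices, triangles, cell=CELL):
    cells = {}     # grid cell key -> new vertex index
    new_verts = []
    remap = []     # old vertex index -> new vertex index
    for (x, y, z) in vertices:
        key = (round(x / cell), round(y / cell), round(z / cell))
        if key not in cells:
            cells[key] = len(new_verts)
            new_verts.append((key[0] * cell, key[1] * cell, key[2] * cell))
        remap.append(cells[key])
    new_tris = []
    for (a, b, c) in triangles:
        a2, b2, c2 = remap[a], remap[b], remap[c]
        if a2 != b2 and b2 != c2 and a2 != c2:  # drop collapsed triangles
            new_tris.append((a2, b2, c2))
    return new_verts, new_tris

# Toy mesh: two nearby vertices fall into the same cell, so one triangle
# collapses and is removed.
verts = [(0.0, 0.0, 0.0), (0.1, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
tris = [(0, 1, 3), (1, 2, 3)]
v2, t2 = decimate(verts, tris)
print(len(v2), len(t2))  # fewer vertices, fewer triangles
```

Production pipelines additionally bake the lost high-frequency detail from the original scan into normal maps, which is how the photogrammetry detail survives the polygon reduction.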

Which products did you use and how did they help you with what you were trying to achieve?

I found that for facial rigging, Maya seems to be well-suited. The tools work well and intuitively, especially the pose editor. It has been more manageable for character posing, blendshapes and stuff like that.

HumanIK has also been awesome for our stuff. All our character 3D scans are accompanied by a full-body scan of the entire human form. We'll do one 3D scan of the human from head to toe, and then we'll concentrate all of our cameras on a tighter volume and capture high-res 3D scans of the actor's facial poses. We'll get up to 50 of those per character, and then we have to retopologize all of these scans, assemble them into one model, and parametrize them with all the blendshapes that we capture from the 3D scans. All the toolsets inside of Maya make that workflow much easier.

Still from Technolust, Images Courtesy of IRIS VR

What do you guys see for the future of gaming? Is Technolust a warning?

A warning? (laughs) That’s Blair’s vision of the future.

From a technological standpoint, I see immersive gaming becoming mainstream in the next couple of years. You can see this with Pokemon Go - more and more platforms will adapt to deliver more interactive and immersive media. I think playing games on a traditional computer will still be around, but there will be other alternatives. VR will be a big driver of that.

"Technolust: The Revolution Will Be Televised" game cover art
Technolust, Images Courtesy of IRIS VR

Quantum Capture and IRIS VR brought the characters of Technolust to life with the help of Maya, 3ds Max, Mudbox, Zbrush, PhotoScan, and the Unity engine. Listen to our podcast with Technolust’s creator Blair Renaud to learn more about the game. Join the resistance on Steam or Oculus Home.

  • 3ds Max
  • Maya
  • Mudbox
  • VR