Ever since Ninja Theory showed off their Hellblade Live/Senua demo at GDC 2016 I’ve been thinking: how the f*ck can I pull that off? Sure, I don’t have Epic Games, Cubic Motion or 3lateral as partners, but that doesn’t mean I can’t recreate everything on a budget, right? Ninja Theory’s Hellblade team is only 16 people, not 160, so it’s not like they have that many more people than I do. There must be something they’re doing that I can leverage.

For those of you who have absolutely no idea what I’m talking about, here’s the GDC presentation:


Doing digital real time cinematography has been something I’ve been wanting to do for a while, and I’m glad to see that a small company like Ninja Theory managed to pull it all off. They didn’t do it alone, though, so let’s see what parts of the equation each company provided.

Epic Games : Unreal Engine 4. The version Ninja Theory uses is the same as what you and I can get. So it’s not like the folks at Epic are doing anything special for them specifically.

3lateral : These guys specialize in creating characters and animations. Tameem Antoniades of Ninja Theory mentioned in that GDC video 3lateral having some really nice face scanning and face rigging solutions so it’s likely that’s the part they played in all this.

Cubic Motion : These were the guys Ninja Theory used for the facial solver.

Then there’s Ninja Theory themselves, who were the guys putting it all together.

The GDC video was kinda impressive, but at Siggraph 2016 they decided to step things up and leverage Unreal Engine 4’s new Sequencer to do some live 3D cinematography, and this is what I’ve really been looking forward to. While the demo at GDC was really just them demoing the facial animation, the Siggraph demo is full live 3D cinematography.

Here’s the video:

So now the question is… who else was involved? In addition to the previously listed partners, they’ve added a few new ones here. Namely, House of Moves, Ikinema, Technoprops and Nvidia.

So let’s see what these companies do:

Nvidia : Yes, they make hardware, but they also do a lot of software through their Gameworks division. What they ended up doing here I’m not too sure. I wouldn’t be surprised if all they did was provide the new Pascal Titan X to run everything on. UE4 is really optimized for a single graphics card, and to get the fidelity needed to do all this in real time without lag, they’d need a pretty beefy graphics card. The biggest, baddest GPU on the block at the time would have been the Pascal-based Titan X.

Technoprops : These guys do facial capture systems. It definitely looks like the Technoprops headset was being used in the Siggraph video.

Ikinema : These guys offer a full-body IK solver for more natural motion when applied to animation. It’s likely the Ikinema IK solver was used to clean up the animations in real time, since mocap usually requires some cleanup work. Really brilliant use of the tech, if that’s what they actually used it for.
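To give a feel for what an IK solver actually does under the hood (this is just the textbook two-bone case, not anything from Ikinema’s actual product), here’s a minimal planar sketch: given two bone lengths and a target point, solve the joint angles with the law of cosines. A full-body solver like Ikinema’s does vastly more, but the core idea is the same.

```python
import math

def two_bone_ik(l1, l2, tx, ty):
    """Solve a planar two-bone IK chain (e.g. shoulder-elbow-wrist).

    Given bone lengths l1, l2 and a target (tx, ty) relative to the
    chain root, return (shoulder_angle, elbow_bend) in radians, where
    elbow_bend is the deviation from a fully straight arm.
    """
    # Clamp the target to the reachable ring so the solver never fails.
    dist = max(abs(l1 - l2), min(l1 + l2, math.hypot(tx, ty)))
    # Law of cosines gives the interior elbow angle.
    cos_elbow = (l1 ** 2 + l2 ** 2 - dist ** 2) / (2 * l1 * l2)
    elbow = math.pi - math.acos(max(-1.0, min(1.0, cos_elbow)))
    # Shoulder angle = angle to the target minus the offset the bend adds.
    cos_offset = (l1 ** 2 + dist ** 2 - l2 ** 2) / (2 * l1 * dist)
    shoulder = math.atan2(ty, tx) - math.acos(max(-1.0, min(1.0, cos_offset)))
    return shoulder, elbow
```

The clamping is the part that matters for mocap cleanup: when a noisy marker puts a target out of reach, the solver still returns the nearest valid pose instead of blowing up.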

House of Moves : These guys are a mocap studio and were probably the ones that owned the mocap stage and set up the virtual camera systems. In the full hour-long Siggraph video, you’ll see that Tameem is operating a virtual camera with a House of Moves sticker on it.

That’s great but now I gotta figure out what all the hardware is.

We already know that the facial mocap headcam was provided by Technoprops, using the facial solver built by Cubic Motion. Looking at the mocap stage, the cameras don’t look like Optitrack cameras and the virtual cameras don’t look like the Optitrack Insight. So I had to do some digging. After a couple of days of looking for an Optitrack Insight VCS alternative, I decided to look at the House of Moves website and BINGO! It looks like everything is made by Vicon instead of Optitrack, except there was no mention of which VCS they were using. So I decided to dig even further by going to the Vicon website, and there it was: the Vicon Camera System.

There was no pricing available, and for some strange reason I had never heard of them before. Looking at their resellers and offices, they have no presence in North America because that market is dominated by Optitrack.

So now I have all the puzzle pieces together. How can I put together a cheaper alternative?

Unreal Engine 4 : I get the same build Ninja Theory gets, and so can you. If anything, they might get a nightly build with a few extra features, but with 4.13 just released, I’m pretty sure whatever Ninja Theory had for the Siggraph demo, I have too.

Animation & Rigging : Yeah, we don’t have super experienced animators and riggers, but I’m sure we can get at least halfway there. Abyssian Knights is an animated project where we’re going for a 2D look, so we don’t need super realistic rigs. In fact, we want it to look like it has hand-drawn physics (read: improper, broken, artist-driven physics to make things look cool).

Motion Capture : There’s no way I can afford a full mocap stage. Even at the cheapest it would still be $10k+ and take up space. We currently use Noitom’s Perception Neuron, which doesn’t require a mocap stage and costs about 1/10th the price of the cheapest full-stage mocap system.
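As I understand it, the Perception Neuron’s Axis Neuron software can broadcast standard BVH motion frames over the network, where each frame is just a line of floats ordered by the joint hierarchy. As a rough sketch of what consuming that stream looks like (the exact channel layout is an assumption here; check the BVH header your software actually emits), here’s a minimal frame parser:

```python
def parse_bvh_frame(line, joint_names):
    """Split one whitespace-separated BVH motion line into per-joint channels.

    Assumes the common BVH layout: the root joint gets 6 channels
    (position XYZ + rotation), every other joint gets 3 rotation
    channels. Returns a dict mapping joint name -> list of floats.
    """
    values = [float(v) for v in line.split()]
    frame = {}
    i = 0
    for n, name in enumerate(joint_names):
        count = 6 if n == 0 else 3  # root carries translation too
        frame[name] = values[i:i + count]
        i += count
    return frame
```

Once frames are parsed like this, it’s “just” a matter of piping the rotations onto a skeleton inside the engine every tick.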

Facial Mocap : This seemed to be a two-part solution for Ninja Theory: the headcam from Technoprops and the solver from Cubic Motion. I’m pretty sure Technoprops and Cubic Motion are service companies and wouldn’t sell their solution to some indie like me, and while I could probably write my own ghetto facial solver using OpenCV, it would be easier to find someone who’s got a full solution. Enter Faceware! They offer an indie package of their facial mocap solution. It’s not exactly super cheap, but it’s still affordable for indies, and the best part is they offer both a software and a hardware solution. We were originally going to use Faceshift for the Abyssian Knights project and previously had a license, but since it’s no longer possible to renew it, I’d rather move on to a different solution.
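For the curious, the “ghetto facial solver” idea boils down to: track 2D landmarks on the face (OpenCV or dlib would give you those) and map distances between them to blendshape weights. This sketch is only the mapping half, with a made-up normalization constant, but it shows why even a crude solver is feasible:

```python
import math

def mouth_open_weight(upper_lip, lower_lip, eye_l, eye_r, open_max=0.35):
    """Map tracked 2D landmarks to a 0-1 'mouth open' blendshape weight.

    The lip gap is normalized by interocular distance so the weight
    stays stable as the performer moves toward or away from the headcam.
    open_max (the gap-to-IOD ratio at a fully open mouth) is a guess
    you'd calibrate per performer.
    """
    iod = math.hypot(eye_r[0] - eye_l[0], eye_r[1] - eye_l[1])
    gap = math.hypot(lower_lip[0] - upper_lip[0], lower_lip[1] - upper_lip[1])
    return max(0.0, min(1.0, (gap / iod) / open_max))
```

A real solver like Cubic Motion’s or Faceware’s is doing this across dozens of shapes with far smarter fitting, which is exactly why buying beats building here.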

There is still ONE piece of the puzzle missing though: the virtual camera system. I mean, I COULD get Optitrack’s Insight Mini, but the controller alone isn’t enough. I’d still need Motive, which is $999, plus a 6DoF tracker of some sort, or I could spend $2,300 for the V120:Duo and a Motive Tracker license. Thankfully the plugin to stream data into Unreal Engine 4 is free, but it still requires Motive, so there has to be a better solution. The hunt begins. If anyone has a good solution here, please let me know, although I get the feeling I might have to build one of my own.

Update: Nov 17, 2016 – Still trying to sort out the virtual camera system. Going to attempt using the motion capture sensors I have and stream the data into Unreal. Pretty sure I’ll have to rig the camera onto an actor, but this might work! Will post results when I get something working.
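The “rig the camera onto an actor” part is really just a transform parenting problem: the tracked actor gives you a world position and rotation, and the camera rides a fixed local offset from it. Here’s a yaw-only sketch of that math (a real rig composes full 3D rotations, and in-engine you’d just attach a camera component and let Unreal do this for you):

```python
import math

def attach_camera(parent_pos, parent_yaw_deg, local_offset):
    """World position of a camera parented to a tracked actor.

    Rotates the camera's local offset by the actor's yaw (rotation about
    the vertical axis only, for simplicity) and adds the actor's world
    position -- the same composition full attachment does in 3D.
    """
    yaw = math.radians(parent_yaw_deg)
    ox, oy, oz = local_offset
    wx = parent_pos[0] + ox * math.cos(yaw) - oy * math.sin(yaw)
    wy = parent_pos[1] + ox * math.sin(yaw) + oy * math.cos(yaw)
    return (wx, wy, parent_pos[2] + oz)
```

So as long as the mocap sensors can drive the actor’s transform every frame, the camera comes along for free.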