Posted by Bellsey, 2 February 2012 12:00 am
During the webinar I showed some sample clips at certain points in the workflow. Because of the webinar recording and bandwidth, the playback quality isn't always great, so you can watch the clips below and on my YouTube channel.
The first clip is from the lip-sync workflow, using the sample content that ships with Softimage. All I did here was import the relevant data files and click to create the basic lip-sync; no other tweaking. It's not perfect, but Face Robot does a good initial job, certainly enough to build on and improve.
For the second clip in the webinar video, I used a different audio and text file for the lip-sync. This data came from Blur Studio and the Age of Conan cinematic. There was also mocap data available, but I used only the audio file and its associated text. Again, I didn't do any tweaking, just followed the simple steps, and even at the default settings it works pretty well, better than the first test. As before, it's not entirely perfect, but it can easily be tweaked and improved. There's no animation other than the lip-sync, so it does look stilted.
Now, in this third clip, which I didn't show in the webinar, I've combined the lip-sync with some mocap; with that secondary animation added, the result looks a lot better overall.
And finally, this is the playblast after I completed the Face Robot game export process. As with the other videos, I haven't tweaked the animation at all; I just followed the step-by-step workflow. The capture was taken straight from the viewport. The head mesh is approximately 2,500 polygons with a bone count of 42.
Remember, if you'd like to know more about Face Robot, including where to find the tutorials, you can visit the Softimage public wiki here: