I guess that for straightforward lipsync Voice-o-Matic does the job; the price looks reasonable and the results are quite good. But if I were going to spend $350 I would go one step further and maybe buy ZignTrack or something similar, just to add eye, blink and brow movement to the head.
Right now I'm using the Faceware suite (Lite; Pro is waaaay out of my wallet) and I'm trying to improve the results by combining Facerobot with ICE, in order to set up different kinds of poses that can be shared between various characters. Faceware works by setting up 45 expressions which drive the face, but I'm kinda building a system that allows a custom character pose, which is not available in the Lite version I'm using. The inspiration for that is the Janimation head tech demo, which uses 27 expressions built inside ZBrush, then exported into Facerobot with a custom rig (but based on Facerobot), and the results are stunning. The workflow is quite time consuming, so in the end it comes down to how much time you want to spend tweaking everything in order to have a perfect and believable character.

Right now I'm scanning my face in different poses (U shape, mouth open, anger, surprise and so on) in order to have a reference for the motion of the face and of the skin. What I will do is retarget those expressions to a new character using the relative point positions from one character to the other, so that the poses are shared and I have a good reference, and probably (I need to check, but it's definitely doable) add more custom poses using ICE and some custom shapes to refine the process. If everything works well I might end up with pretty good results, but I'm still testing everything out, so a demonstration (or a tutorial) won't come out any time soon.

Again, in the end it depends what you want to achieve, and personally I like to work with Softimage and Facerobot, even if it's an old tool from 2007, but it suits my needs ;)

Cheers
Nicolas

2014-02-13 11:21 GMT+01:00 Tim Leydecker <[email protected]>:

> Thanks for your info, Nicolas.
>
> I'm going through the MotionBuilder Help at the moment, looking through its
> audio-driven facial animation workflow options:
>
> http://download.autodesk.com/global/docs/motionbuilder2014/en-us/index.html?url=files/Animating_faces_Audiodriven_facial_animation_workflow.htm,topicNumber=d30e55972
>
> Personally, I like working in Softimage a lot, but I have to come up with
> something that is open, extensible and allows for easy progressive
> refinement as part of the modeling stage.
>
> I am leaning towards setting up Character Poses as individual viseme
> snapshots in Maya and then driving these with Voice-o-matic as a means of
> getting a block animation pass.
>
> That seems to have the least amount of overhead and be closest to spending
> as much time as possible actually creating and refining good phoneme poses,
> compared to spending a significant amount of time setting up a more evolved
> system.
>
> I do want to avoid having to lock the model and then end up with garbage
> in > garbage out. It's most important to have ways of getting at good poses
> and to be able to refine the input poses easily by "just" adjusting a
> character pose and directly seeing the result updated in the blocked-in
> animation. I'm positive Voice-o-matic will allow me to do exactly that.
>
> But now, this will have to wait. Opinion built. Request filed. Action tbd.
>
> Thanks for all you guys' insights!
>
> Cheers,
>
> tim
>
> On 13.02.2014 09:22, Nicolas Esposito wrote:
>
>> I tried both Voice-o-Matic and FaceFX, but in the end I preferred to use
>> Facerobot combined with the lipsync tool in order to have decent lipsync
>> based on audio (and text).
>> It can be tricky working with Facerobot, but with a bit of trial and error
>> you can get really nice results and have a basic facial animation without
>> going crazy with all the options available.
>>
>> For alien creatures it depends how complex the mesh is; generally it
>> requires more setup with the regions, but other than that it works the same.
>> Also, you can easily create new phonemes and corrective shapes when you're
>> importing the visemes, so honestly I would rather not buy Voice-o-Matic,
>> since I can do the same thing in Facerobot with Lipsync...
>> Maybe that's why he dropped Softimage support, since the same toolset is
>> already available.
>>
>>
>> 2014-02-13 0:02 GMT+01:00 Tim Leydecker <[email protected]>:
>>
>> Thanks guys,
>>
>> unfortunately, I have to weigh my good nature against a limited
>> amount of time.
>>
>> The best Voice-o-matic samples I've dug up this evening mostly
>> revolve around toonish and on-the-edge examples of a character.
>> Whip is lovely.
>>
>> http://www.youtube.com/watch?v=K1suyOYNMV4&list=PL0D3D3A1137CFCF90&index=36
>>
>> The common tip I see repeated every time is to reduce the smoothing to 1.
>>
>> The end result in the above video would give me enough to test my
>> viseme shapes.
>>
>> I'll see if I can give the Maya version of Voice-o-matic a try.
>> I like the patient way that guy from Montreal explains his thing.
>>
>> It might be possible to have a Face Robot control group to pitch
>> against, even if only for a test (or to satisfy my short attention
>> span and playfulness).
>>
>> I'm just the modeler, but I want to be sure the stuff I hand over
>> can animate nicely...
>>
>> But first, I want/have to get FiberMesh curves solved with Yeti and
>> rendered in Arnold.
>>
>> Which is why I would pick the Voice-o-matic Maya version.
>>
>> I can model *.obj wherever I want (my personal 3D-love tour), but at
>> some point things will end up in Maya.
>>
>> Cheers,
>>
>> tim
>>
>>
>> On 12.02.2014 23:34, Luc-Eric Rousseau wrote:
>>
>> FaceFX is using the exact same voice recognition library as what I
>> used in FaceRobot. Of course, workflow is more important than voice
>> recognition tech.
>>
>> The Di-O-Matic guy is quite friendly; we've talked a few times. It's
>> his own custom voice recognition engine. He's here in Montreal. It's
>> worth talking to them. He might have dropped the XSI plugin due to
>> lack of interest, I don't know. I've never heard anyone talk about
>> the plugin. It says up to Softimage 2013.
>>
>> If you're a cheap bastard with loads of free time, you can get the
>> data out of Face Robot with the ImportSpeech command without ever
>> using Face Robot. But phoneme recognition is a tiny part of facial
>> animation and that's probably not worth the trouble. There is lipsync
>> stuff in MotionBuilder too, if you have the suite.
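
A footnote on the retargeting idea above: transferring scanned expressions by relative point position boils down to carrying per-vertex deltas from one neutral head onto another. Below is a minimal NumPy sketch of just that step; everything in it (the function name, the crude bounding-box scaling, the assumption that both meshes share vertex order) is my own illustration, not what ICE or Facerobot actually does:

```python
# Hypothetical sketch of relative-point expression retargeting.
# Assumes both heads share vertex count and ordering, which a real
# setup would have to solve first with a correspondence/wrap step.
import numpy as np

def retarget_expression(src_neutral, src_pose, dst_neutral):
    """Move each target vertex by the source's delta, scaled to the
    target head's proportions (per-axis bounding-box scale, very crude)."""
    src_neutral = np.asarray(src_neutral, dtype=float)
    src_pose = np.asarray(src_pose, dtype=float)
    dst_neutral = np.asarray(dst_neutral, dtype=float)

    # How far each scanned point moved relative to the neutral scan
    deltas = src_pose - src_neutral

    # Relative scale between the two heads, per axis
    src_size = src_neutral.max(axis=0) - src_neutral.min(axis=0)
    dst_size = dst_neutral.max(axis=0) - dst_neutral.min(axis=0)
    scale = np.where(src_size > 1e-9, dst_size / src_size, 1.0)

    return dst_neutral + deltas * scale
```

In practice the two heads won't share vertex order, so the real work is building the point correspondence; this only shows the delta-transfer part that makes the poses shareable.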


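And on the viseme-snapshot blocking pass discussed above: whichever tool does the audio analysis, the output essentially amounts to phoneme timings turned into weight keys on viseme poses. A toy sketch of that conversion, with a made-up phoneme-to-viseme mapping and key shape (this is not Voice-o-matic's or Face Robot's API, just the general idea):

```python
# Illustrative only: map phoneme timings to viseme weight keyframes
# for a blocking pass. The mapping table and the in/hold/out ramp
# are invented for the example, not taken from any real tool.

PHONEME_TO_VISEME = {  # tiny sample mapping, not a full phoneme set
    "AA": "open", "AE": "open", "IY": "wide",
    "UW": "U_shape", "M": "closed", "B": "closed", "P": "closed",
}

def viseme_keys(phonemes, ramp=0.05):
    """phonemes: list of (phoneme, start_sec, end_sec) tuples.
    Returns {viseme: [(time, weight), ...]} with a simple
    ramp-in / hold / ramp-out key pattern per phoneme."""
    keys = {}
    for ph, start, end in phonemes:
        vis = PHONEME_TO_VISEME.get(ph)
        if vis is None:
            continue  # unmapped phoneme: leave the face where it is
        track = keys.setdefault(vis, [])
        track += [(start - ramp, 0.0), (start, 1.0),
                  (end, 1.0), (end + ramp, 0.0)]
    return keys
```

A real pass would also merge overlapping keys on the same viseme and smooth between neighbouring phonemes (co-articulation), which is exactly the tweaking time mentioned in the thread.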