That's a cool idea!

Cheers,

tim



On 12.02.2014 22:19, Mirko Jankovic wrote:
If it's for lip sync only and you need an auto lip sync solution, you can always 
export the mouth controls only from Face Robot onto your characters.
Meaning something like:

Create your own mesh
Rig it in Face Robot
Create the lip sync in the Face Robot scene, separated from the rest of your mesh
Export as emdl only the jaw and lip nulls, a couple of the big yellow ones
Auto lip sync in the Face Robot scene, plot and export the action for the mouth 
controls only
Then import those controls onto the rest of your character rig, import the 
plotted action and there you go.
I did that recently for about 2.5 minutes of lip sync and in the end it doesn't 
look bad at all.

It sounds more complicated than it actually is.


On Wed, Feb 12, 2014 at 10:07 PM, Tim Leydecker <[email protected]> wrote:

    Thanks Steven,


    http://www.facefx.com/documentation/2013.3/W250


    gives a good idea about a default character setup.

    I glanced through some of tutorials and clicked through the docs.

    I can see why FaceFX can provide more in-depth control and possibly
    allow superior results when fed both the audio and the text to analyse.

    The reason why I'm still leaning towards Voice-O-Matic is pretty nicely
    summed up in this example of storing keyframe data on "proxies":

    (Voice-O-Matic) http://www.youtube.com/watch?v=71hp0z0oUvc

    That comes close to the workflow I expect to face.

    Getting help with blocking in the lip sync, possibly even before the
    phoneme shapes and the personalisation of a character can be regarded
    as finalized, for lack of a better description.

    Thanks again for pointing me to FaceFX; the site has very good info.

    Cheers,

    tim

    On 12.02.2014 21:07, Steven Caron wrote:

        I just went through this; I couldn't get that good of results out of
Voice-O-Matic. I ended up using FaceFX.

        http://facefx.com/

        The pipeline for FaceFX is a bit tricky, since it sits on top of FBX,
but they have an API with which you can extract the auto lip sync curves back
into Softimage without using FBX. The results I got with FaceFX were awesome
right out of the gate; I only needed to add one extra target/shape.



        On Wed, Feb 12, 2014 at 11:54 AM, Tim Leydecker <[email protected]> wrote:

             Hi guys,


             can you share your findings when using voice-o-matic

             from http://www.di-o-matic.com/products/


             maybe even compared to FaceRobot?

             Voice-O-Matic seems to help a lot in blocking in a lip sync
             audio/blendshapes animation using a basic set of 8-9 blendshapes.

             "Only" having to set up approx. 10 good shapes to get to a lip
             sync animation track ready for further finetuning seems very
             desirable.

             I am under the impression that Face Robot would require more
             setup work, especially when facing an alien, e.g. a not really
             humanoid basemesh.

             Can you recommend an alternative program to Voice-O-Matic?


             Cheers,

             tim





