Hi Luc-Eric,

After reading the VBScript more carefully I figured out what was going on, and I experimented a bit with mixing PhonemeFromSpeech and GetPhonemeBlend, with interesting results.

I also realized that I would need to get, store and apply the values for each phoneme at each frame from the results of the script, then apply that to (for example) the Y position of a null, which would go from 0 to 1 according to the weight of the phoneme. I have no idea how to do that; not knowing scripting has unfortunately always been a problem :(
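(A rough sketch of that gather-and-apply step, in Python since Softimage also exposes its commands to Python. `get_phoneme_blend` below is a hypothetical stand-in for the real `GetPhonemeBlendFromSpeechClip` call, faked with a ramp so the sketch runs outside Softimage; check the SDK page for the actual signature.)

```python
# Sketch: gather one phoneme's blend weight at every frame, building a
# {frame: weight} table that can then drive e.g. a null's Y position.

def get_phoneme_blend(clip, phoneme, frame):
    # HYPOTHETICAL stand-in for Softimage's GetPhonemeBlendFromSpeechClip
    # (which takes the frame as a parameter). Here we fake a 0..1 ramp
    # between frames 10 and 30 so the sketch is runnable on its own.
    return max(0.0, min(1.0, (frame - 10) / 20.0))

def gather_weights(clip, phoneme, first_frame, last_frame):
    """Return {frame: weight} for one phoneme across the clip range."""
    return {f: get_phoneme_blend(clip, phoneme, f)
            for f in range(first_frame, last_frame + 1)}

weights = gather_weights("speech_clip", "ah", 0, 40)
```

Applying a weight to the driver null would then just be `null.posy = weights[frame]` for each frame.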
Anyway, I'm quite familiar with the game export process, including exporting the game head rig to Maya, so that is not a problem. The main reason for me was to speed up the lip sync process in order to export animation data from FaceRobot to Maya. At this point I'm thinking of a workaround, which in theory would go like this (bear with me for a minute :D):

> Phonemes are set by the position of the FaceRobot controllers around the mouth, the jaw position and the tongue.
> The neutral pose of the face, once the FaceRobot rig is built, is the "zero" pose for everyone, and so is a zero value for the phoneme as well.
> I store into an array the SRT values of the controllers (for example for the phoneme "ah"), and do the same for all the other controllers when they're at weight 1.
> At this point, in ICE, I compare the "neutral pose" controller SRT to ALL the other controller SRTs (when the phoneme weight is 1, basically), so that I have a blend from 0 to 1 from the neutral pose to the phoneme.

Basically, by comparing the array values (neutral to phoneme) I should be able to get the blend between them. I've almost never worked with arrays in ICE, but in theory it should work (tell me, guys, if I'm totally wrong).

Alternatively, using the game export rig, I could create a set of nulls parented to the game export rig bones, in which I can dampen/increase the position/rotation of the bones, so that I can retarget and adjust the mouth movement manually once I've parented the set of nulls onto my character. Not very professional, but at least it's something I can do.

Any other alternatives are welcome.

Cheers
Nicolas

2014-11-27 14:22 GMT+01:00 Luc-Eric Rousseau <[email protected]>:

> You have to loop and call GetPhonemeBlendFromSpeechClip for each frame;
> the frame is the third parameter.
>
> The script sample from the documentation doesn't print all 13 properties
> returned from GetPhonemesFromSpeechClip, it just prints the first three.
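(As a sanity check of the neutral-to-phoneme comparison idea sketched earlier in the thread: given the stored controller SRT values for the neutral pose and for a phoneme at weight 1, the 0-to-1 blend of an in-between pose can be recovered by projecting the current offset onto the neutral-to-phoneme delta, a least-squares fit when several controllers are combined. A minimal Python sketch with made-up controller values:)

```python
# Sketch: recover a 0..1 blend weight by comparing the current pose to a
# stored neutral pose and a stored phoneme-at-weight-1 pose. The arrays
# would in practice hold the controllers' SRT components; values here are
# made up for illustration.

def blend_weight(neutral, phoneme, current):
    """Least-squares scalar t such that current ~= neutral + t*(phoneme - neutral)."""
    delta = [p - n for n, p in zip(neutral, phoneme)]
    offset = [c - n for n, c in zip(neutral, current)]
    denom = sum(d * d for d in delta)
    if denom == 0.0:
        return 0.0  # phoneme pose equals neutral; no blend to measure
    t = sum(o * d for o, d in zip(offset, delta)) / denom
    return max(0.0, min(1.0, t))  # clamp: phoneme weights live in 0..1

neutral = [0.0, 0.0, 0.0]      # e.g. jaw/lip controller values at rest
phoneme_ah = [0.0, -1.0, 0.5]  # same controllers at "ah" with weight 1
halfway = [0.0, -0.5, 0.25]
print(blend_weight(neutral, phoneme_ah, halfway))  # -> 0.5
```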
> I mean, it's not looping over the attributes or anything; it's
> literally printing the table's indices 0, 1 and 2.
>
> Also, before going into this too deeply, I would suggest looking at the
> export options for Face Robot.
> Try "export animation rig":
>
> http://download.autodesk.com/global/docs/softimage2013/en_us/userguide/files/face_export_ExportingAnimationRigs.htm
>
> You'll need to have Crosswalk for Maya though (i.e. dotXSI import).
>
> On Thu, Nov 27, 2014 at 3:33 AM, Nicolas Esposito <[email protected]> wrote:
>
>> Hi Luc-Eric,
>> Yes! That is exactly what I was looking for!
>> Actually, going through the SDK pages I found out that there are also an
>> "SIExtractPhonemes" script and a "GetPhonemeBlendFromSpeechClip" script,
>> which work.
>>
>> A couple of questions, since I'm very unfamiliar with scripting (except
>> when it comes to copy&pasting existing scripts :D ):
>>
>> The documentation for "GetPhonemeFromSpeechClip" says that it returns 13
>> attributes (start/end time, phoneme weight, shape weight, and so on), but
>> if I run the VB example I get just what you see in the attached image (so
>> just the start/end time and the phoneme), while using
>> GetPhonemeBlendFromSpeechClip I get the phoneme weight at the current frame.
>>
>> A couple of things:
>> - I would like to use the first script (GetPhonemeFromSpeechClip) to
>> gather all the information about the time and weight of every single
>> phoneme, but some information is missing when I run the script, and I
>> don't know if I'm doing something wrong or if I need to add something
>> (actually, mixing it with GetPhonemeBlendFromSpeechClip kinda works).
>> - I would like to modify GetPhonemeBlendFromSpeechClip so that it gathers
>> the values for every frame, instead of only the current frame on the
>> timeline.
>> - Would it be possible, after I run the script, to gather the
>> information and separate it in a usable way?
>> Consider that I would like to link the weight value of each phoneme to a
>> null, so that I get values from 0 to 1 for each frame and can bake the
>> results from it.
>>
>> Cheers
>>
>> Nicolas
>>
>> 2014-11-26 21:26 GMT+01:00 Luc-Eric Rousseau <[email protected]>:
>>
>>> The command GetPhonemeFromSpeechClip will help you get the data out of
>>> the speech clip (which is a clip on the animation mixer). Does that help?
>>>
>>> http://download.autodesk.com/global/docs/softimage2014/en_us/sdkguide/si_cmds/GetPhonemeFromSpeechClip.html
>>>
>>> On Wed, Nov 26, 2014 at 10:45 AM, Nicolas Esposito <[email protected]> wrote:
>>>
>>>> Hi guys,
>>>> I'm trying to transfer some phoneme shapes from FaceRobot to Maya.
>>>> The main purpose would be to extract all the phonemes from FaceRobot
>>>> (while setting up the lip sync), transfer all the meshes to Maya, set
>>>> them up as blendshapes and animate them automatically via SDKs, in order
>>>> to have blendshape facial animation driven automatically.
>>>> Once that is done I would be able to transfer the same blendshape
>>>> animation to different characters by swapping the blendshape "master"
>>>> with the "Bake topology" tool.
>>>>
>>>> I'm having some trouble getting animation data (curves or whatever)
>>>> out of the lip sync viewer.
>>>>
>>>> In the first attached image (FR_1.jpg) you can see the lip sync viewer
>>>> with the phonemes displayed together with the audio (the audio is the
>>>> default one from Softimage).
>>>> If I edit a phoneme I can set its values manually.
>>>> The value is set in the script editor (FR_2.jpg), but I notice that the
>>>> phoneme is not listed by its name (it should be "ah") but as 0, meaning
>>>> it is listed as the first phoneme in the sentence.
>>>> So, as you can see in the next image (FR_3.jpg), the phoneme "C" is
>>>> listed as the 7th phoneme in the sentence, and so on...
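(On the point earlier in the thread that the documentation sample only prints the table's first three entries: if GetPhonemesFromSpeechClip really returns 13 values per phoneme in one flat array, as the SDK page suggests, the full set can be recovered by walking the array in strides of 13. A hedged Python sketch with a made-up two-phoneme array; the field layout beyond start time, end time and phoneme name is illustrative, not the SDK's actual ordering:)

```python
FIELDS_PER_PHONEME = 13  # per the SDK page cited in the thread

def split_phoneme_records(flat, stride=FIELDS_PER_PHONEME):
    """Group a flat result array into one record (list) per phoneme."""
    if len(flat) % stride != 0:
        raise ValueError("result length is not a multiple of the stride")
    return [flat[i:i + stride] for i in range(0, len(flat), stride)]

# Made-up data: two phonemes, 13 values each (start time, end time,
# phoneme name, then ten further attributes we just number here).
fake_result = ([0, 12, "ah"] + list(range(10)) +
               [12, 20, "C"] + list(range(10)))

for rec in split_phoneme_records(fake_result):
    start, end, name = rec[0], rec[1], rec[2]  # what the doc sample prints
    print(name, start, end, rec[3:])           # ...plus the remaining ten
```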
>>>> This is a big issue, because what I would like to do is get the value
>>>> (0 to 1) of the phoneme "ah" (and all the others) at runtime, so that
>>>> the value can drive the blendshapes in Maya.
>>>>
>>>> I also tried to see if the visemes list under the Mixer node (see FR_4)
>>>> would give me some values, but as far as I can see neither the
>>>> properties nor the animation editor give me a curve or the values of a
>>>> phoneme at a specific frame.
>>>>
>>>> I'm also open to other solutions/suggestions (maybe just bake the bone
>>>> animation, import it into Maya and create some scripts with
>>>> constraint/offset controllers in order to apply the lip sync to my
>>>> character inside Maya, but the blendshape method looks faster and less
>>>> complicated to set up)...
>>>>
>>>> Suggestions?
>>>>
>>>> Cheers
>>>>
>>>> FR_1.JPG
>>>> <https://docs.google.com/file/d/0ByXsl1-14iQEQzBMdkJ6allfSms/edit?usp=drive_web>
>>>> FR_2.JPG
>>>> <https://docs.google.com/file/d/0ByXsl1-14iQEUEQzcFlaYmJDSXM/edit?usp=drive_web>
>>>> FR_3.JPG
>>>> <https://docs.google.com/file/d/0ByXsl1-14iQETXYwS1ZuSXc3bWs/edit?usp=drive_web>
>>>> FR_4.JPG
>>>> <https://docs.google.com/file/d/0ByXsl1-14iQEODBvQlpCOHhkQkk/edit?usp=drive_web>
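(For the Maya side of the pipeline discussed in this thread: once per-frame weights have been baked out per phoneme, one simple bridge is to generate a MEL snippet that keys the matching blendshape weights. `setKeyframe -time -value` is standard MEL; the target attribute name `blendShape1.ah` below is a placeholder for whatever the actual rig uses.)

```python
def weights_to_mel(attr, weights):
    """Emit one MEL setKeyframe command per frame for a blendshape weight.

    attr    -- target attribute, e.g. "blendShape1.ah" (placeholder name)
    weights -- {frame: weight} table baked from the speech clip
    """
    lines = ['setKeyframe -time %d -value %g %s;' % (f, w, attr)
             for f, w in sorted(weights.items())]
    return "\n".join(lines)

# Tiny example table; in practice this would come from the baking step.
mel = weights_to_mel("blendShape1.ah", {1: 0.0, 2: 0.5, 3: 1.0})
print(mel)
```

Pasting the resulting snippet into Maya's script editor would key the curve, which could then be retargeted or cleaned up in the graph editor.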

