Hi Nicolas,

Right, a phoneme watcher on one mesh to drive phonemes on another, is that it? So it's a kind of inverse of the picture I gave you. Perhaps, if you set up each phoneme in the ICE tree, you could then compare the distance to itself on the mesh as it moves. The average of the array puts out a scalar value, so when this hits zero you will have a match. If you add a rescale node you can make it go from 0 to 1. You would have to store the original shape positions first to compare against. It's a thought anyway.
Max

On Wed, Dec 3, 2014 at 2:44 PM, Nicolas Esposito <[email protected]> wrote:

> Hi Max,
> Thanks for the answer...a couple of minutes ago I was scrolling through Paul
> Smith's tutorial and I saw exactly what your image shows, which is very
> useful :)
>
> Actually I'm trying a workaround on my previous discussion here on the
> mailing list ( Lipsync from Facerobot - get anim values ).
>
> The suggestion by Luc-Eric is exactly what I'm looking for, but not being
> able to script is a huge problem for me :)
>
> The description above is my workaround: I extracted all
> the phonemes as single meshes from the main mesh.
>
> Since during the lipsync each phoneme value goes from 0 to 1, I'm trying to
> set the base mesh point positions to be my "0" value and each phoneme mesh's
> point positions to be my "1" value. During the animation I have all those
> point positions, which correspond to my shapes, so I have a 100% match when
> the phoneme is spoken ( obviously with a blend between phonemes when they
> are spoken ).
>
> What I find difficult is how to convert all those 3D vectors so that the base
> mesh point positions are my "0" and ( for example ) the phoneme "ah"
> point positions are my "1", so that at each frame I have a blend
> value for every phoneme, which I can use to drive the shapes I previously
> created.
>
> Hope it's clear
>
> 2014-12-03 13:39 GMT+01:00 Max Crow <[email protected]>:
>
>> Hi Nicolas,
>>
>> Get the point position, add it to the shape position, then linearly
>> interpolate from the original point position and you have an ICE shape
>> manager. Very cool and useful.
>>
>> Hope this helps.
>> Max
>>
>> https://www.dropbox.com/s/orxwfdphyl6x75y/Shape%20Manager%20in%20ICE.jpg?dl=0
>>
>> On Wed, Dec 3, 2014 at 10:49 AM, Nicolas Esposito <[email protected]>
>> wrote:
>>
>>> Hi all,
>>>
>>> I'm having some difficulties with comparing two shapes using point
>>> positions...let me explain.
>>>
>>> Basically I have the base mesh, and I created a couple of blend shapes
>>> via the shape manager.
>>> The shapes are animated and driven externally.
>>>
>>> I'm not so good with ICE and I may be saying bs, so bear with me :)
>>> What I would like to do is get the original mesh point positions ( so
>>> an array of point positions ) and compare those to all the blend shapes I
>>> have.
>>> By doing this I will have an array of values which indicates the
>>> difference between them ( base mesh compared with the first blend shape,
>>> base mesh compared with the second blend shape, and so on ); I would like
>>> to reduce that array to a single value in order to output a value from 0
>>> to 1 so that:
>>> Base mesh point positions: value is 0
>>> First blend shape point positions: value is 1
>>>
>>> Basically ( if possible! ) I would like to check when the base shape
>>> point positions match ( or how close they are to ) the first blend
>>> shape, exactly how the shape manager's shape values are displayed when
>>> keyed manually.
>>> The main point is to have the blend shape value ( again, 0 is the base
>>> mesh, 1 is the value when the first blend shape is matched ) for each
>>> frame, using the point positions as my source.
>>>
>>> Doable? I'm having context issues using point positions, and I don't know
>>> how I could reduce the array to a single value in order to
>>> output and rescale that value to be 0 ( for the base mesh ) and 1 ( for
>>> the first blend shape ).
>>>
>>> Any help will be appreciated
>>>
>>> Cheers
>>>
>>> Nicolas
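[Editor's note: the approach Max describes at the top of the thread — average the per-point distance between the moving mesh and a stored phoneme shape, then rescale so the rest pose reads 0 and a full match reads 1 — can be sketched outside ICE in plain Python. This is a minimal sketch, not the ICE tree itself; the function name and the assumption that positions are lists of (x, y, z) tuples are hypothetical.]

```python
import math

def blend_weight(current, rest, target):
    """Estimate a 0..1 blend weight from point positions.

    current, rest, target: equal-length lists of (x, y, z) tuples.
    Returns 0.0 when the mesh sits at the rest pose and 1.0 when it
    matches the phoneme target, mirroring the average-distance +
    rescale idea described in the thread.
    """
    def avg_dist(a, b):
        # Average Euclidean distance between corresponding points
        # (the "average of the array" giving a scalar).
        total = sum(math.dist(p, q) for p, q in zip(a, b))
        return total / len(a)

    full = avg_dist(rest, target)       # distance at weight 1.0
    if full == 0.0:
        return 0.0                      # degenerate: target == rest
    remaining = avg_dist(current, target)
    # Rescale: remaining distance of `full` -> 0, zero distance -> 1.
    w = 1.0 - remaining / full
    return max(0.0, min(1.0, w))        # clamp to [0, 1]
```

Note this recovers the weight exactly only when points move on a straight line from rest to target, as they do for a single linear blend shape; with several phonemes blended at once the per-shape values are approximations.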

