I think that, with the exception of 'yes' and 'no', the idea of translating
head motions to 'construct' a facial expression or a gesture, or even to
implement FACS
<http://en.wikipedia.org/wiki/Facial_Action_Coding_System> (I don't think
Philip meant that), is much more complex, which is probably why I interpreted
Philip's post the way I did.
I also do not recommend using the very narrow channel of information that
head motions provide to drive a wide range of avatar gestures: the channel is
so noisy that too many ambiguities will arise, and the mapping will cease to
be useful.
So what looks like 'simple' low-hanging fruit might ultimately be
problematic.
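
To make the ambiguity point concrete, here is a minimal, purely illustrative
sketch in Python (nothing here is actual viewer code; the function name, the
sample window, and the NOISE_FLOOR value are all hypothetical assumptions).
It only tries to tell a 'yes' nod from a 'no' shake, because once you try to
pack more gesture labels into the same noisy pitch/yaw signal the decision
thresholds start to overlap:

# Hypothetical sketch: classify ~1 second of head-tracker samples as a
# 'yes' nod, a 'no' shake, or nothing. Only coarse, high-amplitude motion
# survives the sensor jitter.
from statistics import pstdev

NOISE_FLOOR = 2.0  # assumed tracker jitter in degrees (illustrative value)

def classify_head_motion(pitch_deg, yaw_deg):
    """pitch_deg, yaw_deg: equal-length lists of head angles over the window."""
    pitch_swing = pstdev(pitch_deg)  # vertical swing -> candidate nod
    yaw_swing = pstdev(yaw_deg)      # horizontal swing -> candidate shake

    # Anything close to the noise floor is indistinguishable from jitter.
    if max(pitch_swing, yaw_swing) < 3 * NOISE_FLOOR:
        return None

    # A binary decision is about as fine-grained as this channel allows;
    # more gesture classes would need thresholds so tight that noise
    # would flip them constantly.
    return "yes_nod" if pitch_swing > yaw_swing else "no_shake"

print(classify_head_motion([0, 8, -7, 9, -8, 7], [0, 1, -1, 0, 1, -1]))  # yes_nod

Even this binary version needs a swing of roughly three times the jitter to be
stable; anything subtler than a nod or a shake sits inside the noise.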

On Thu, May 21, 2009 at 10:34 PM, Melinda Green <meli...@superliminal.com> wrote:

> my understanding is that Philip and Merov's
> intent is to simply translate user head movements into avatar gestures.
>



-- 
Rameshsharma Ramloll, PhD
Research Assistant Professor, Idaho State University, Pocatello
Tel: 208-282-5333
More info at http://tr.im/RRamloll
