Not everybody will wear a human face in VR, so there is merit in working with avatar gestures. A more specific interface would require a one-to-one relationship between the human face and the avatar face, and different gestures may be needed when the human face performs certain actions. Keeping things at the level of avatar gestures lets us avoid relying on that one-to-one relationship.

A middle layer between the human facial gestures and the avatar can translate them into the current shape of the avatar face. The avatar may have several heads (I've seen seven-headed dragons), and maybe someone wants to control them all with human facial gestures. Something too specific to a one-to-one relationship wouldn't allow that to happen.
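Here is a minimal sketch of what that middle layer could look like, in Python. All the names (GestureTranslator, the gesture strings) are made up for illustration; nothing here is an existing Second Life API. The point is that the mapping is one-to-many, so one human gesture can drive any number of heads:

# Hypothetical middle layer: recognized human facial gestures come in,
# and a per-avatar table translates them into gestures appropriate to
# the avatar's current shape.

HUMAN_SMILE = "human.smile"
HUMAN_NOD = "human.nod"

class GestureTranslator:
    """Maps a recognized human facial gesture to avatar gestures."""
    def __init__(self, mapping):
        # mapping: human gesture -> list of avatar gestures.
        # One-to-many, so a single human gesture can drive
        # several heads at once.
        self.mapping = mapping

    def translate(self, human_gesture):
        return self.mapping.get(human_gesture, [])

# A seven-headed dragon: one human smile bares every set of teeth,
# while only the lead head nods.
dragon = GestureTranslator({
    HUMAN_SMILE: ["dragon.head%d.bare_teeth" % i for i in range(7)],
    HUMAN_NOD:   ["dragon.head0.nod"],
})

print(dragon.translate(HUMAN_SMILE))

Because the table lives with the avatar rather than with the face tracker, a different avatar can supply a completely different mapping without touching the recognition side.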
I think a channel that sends recognized avatar gestures would be less noisy than a channel that continuously sends every head motion and facial position (a sketch of that event channel closes this message).

Moriz Gupte wrote: I think that with the exception of 'yes' and 'no', the thought of translating head motions to 'construct' a facial expression or a gesture expression or even implement FACS (I don't think Philip meant that)
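As an illustration of the event-channel point above, here is a small Python sketch, again with made-up names and no real wire protocol. A recognizer produces per-frame confidence scores, but the channel only emits a message when a gesture crosses its threshold, instead of streaming every frame:

# Hypothetical event channel: send a "start"/"stop" message when a
# recognized gesture turns on or off, rather than 60 frames/sec of
# raw head and face positions.

class GestureChannel:
    def __init__(self, send, threshold=0.8):
        self.send = send          # callable that puts a message on the wire
        self.threshold = threshold
        self.active = set()       # gestures currently "on"

    def update(self, scores):
        # scores: gesture name -> recognizer confidence for this frame.
        for gesture, score in scores.items():
            if score >= self.threshold and gesture not in self.active:
                self.active.add(gesture)
                self.send(("start", gesture))
            elif score < self.threshold and gesture in self.active:
                self.active.remove(gesture)
                self.send(("stop", gesture))

channel = GestureChannel(send=print)
channel.update({"smile": 0.9})   # -> ('start', 'smile')
channel.update({"smile": 0.9})   # no traffic; state is unchanged
channel.update({"smile": 0.2})   # -> ('stop', 'smile')

A held smile costs two messages total instead of a continuous stream, which is where the noise reduction comes from.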