Somehow I think it should go into the voice chat tab of the preferences dialog along with the rest of the voice preferences. I have reason to believe that many voice users become familiar with that dialog. I've been selling an LSL-based lip sync system for over a year now; it has a semi-transparent pair of lips that sits in the lower left corner of the viewer screen, and clicking it brings up a menu that lets the user turn lip sync on or off or control various features, using the normal LSL blue dialog menus. The dialog has built-in descriptive text for each option and can also display a notecard with additional help text. I've had good feedback from my users about the menu system. The only trouble people have reported is when the voice system fails, or when they misunderstand the gestures that must be installed for my HUD to function.
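For anyone curious how that kind of click-to-menu HUD is typically wired up, here is a rough sketch using the standard llDialog/llListen pattern. The channel number, button labels, and notecard name are placeholders for illustration only; this is not the actual script I sell, just the general shape of it.

    // Rough illustration only: a touch-driven llDialog menu of the kind
    // described above. Channel, button labels, and notecard name are
    // placeholders, not the real product's.

    integer MENU_CHANNEL = -4711;            // arbitrary private channel
    string  HELP_NOTECARD = "LipSync Help";  // assumed to be in the prim's inventory

    default
    {
        state_entry()
        {
            // Listen for replies to the dialog on our private channel.
            llListen(MENU_CHANNEL, "", NULL_KEY, "");
        }

        touch_start(integer total_number)
        {
            // Clicking the lips pops up the normal blue dialog menu.
            llDialog(llDetectedKey(0),
                "Lip sync controls:\n" +
                "On   - enable lip sync\n" +
                "Off  - disable lip sync\n" +
                "Help - get the help notecard",
                ["On", "Off", "Help"],
                MENU_CHANNEL);
        }

        listen(integer channel, string name, key id, string message)
        {
            if (message == "On")
            {
                // The real system would start triggering its gestures here.
                llOwnerSay("Lip sync enabled.");
            }
            else if (message == "Off")
            {
                llOwnerSay("Lip sync disabled.");
            }
            else if (message == "Help")
            {
                // Hand over the additional help text as a notecard.
                llGiveInventory(id, HELP_NOTECARD);
            }
        }
    }

In the real HUD the On/Off branches also have to cope with voice failing underneath it, which is where most of the support questions come from.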
On Sat, May 2, 2009 at 3:43 PM, Philip Rosedale <[email protected]> wrote:

> I think the right direction is to assume that the majority of usage
> would favor having it on, but that it should be easy to turn off.
>
> Ron Blechner wrote:
>
> We are still at the point where the 'uncanny valley' nature of the
> feature can make it unnerving, and that problem is unlikely to be
> easily solved soon in realtime with low CPU load.
>
> This is a really interesting point, as avatars I've seen in world
> range from clear to the left of the uncanny valley, right through it,
> to acceptably beyond it to the right. It's a large part of the reason
> my avatar is still using the default skin texture, wears sunglasses,
> and has cartoony hair. There's no easy answer to this issue, as even
> as technology changes, people will still have a range of different
> avatar looks. My reasonable guess, based on my experience with
> lipsync in other virtual world platforms and games, is that lipsync
> alone does not cause much of an issue with the "freaky factor," as
> one may call it. I believe facial expressions are a much trickier
> issue with regard to the uncanny valley. (That said, I still think
> it's absolutely essential for the future of virtual worlds that
> facial expressions get a move on.)
>
> I wonder, then, what sort of evidence / research is needed? Maybe
> this is a case where we all can just "do our homework" - there's
> plenty of machinima on YouTube using Second Life with lipsync and
> without it. And it's not difficult to find a variety of different
> avatars - from cartoony to hyper-realistic - in which we can look at
> how lipsync looks ourselves. Personally, I think doing this kind of
> research will absolutely reinforce the idea of putting lipsync on by
> default, but I'm certainly biased. :) ... I might also add that
> non-human avatars have used Animation Overrider tools to replace the
> typing and trigger prim animations that move mouths. (Yes, I'm
> referencing the furries!)
>
> I'd also be interested to hear anyone's first-hand experiences where
> the lipsync is unnerving; Philip, have you had that experience
> yourself?
>
> -Ron / Hiro
>
> CTO, Involve, Inc.
> www.involve3d.com
> SL: Hiro Pendragon
