Hello Ann,

I think it's fair to understand this as work in progress; I imagine most folks will not expect high fidelity in the beginning (I have no data to back this up). In my case, and that of my audience, the lip motions serve more as a mediating cue, as discussed earlier (a cue that is nevertheless weakened by the fact that the third-person view is the only 'usable' view in SL, and when we stand the lip motions are sometimes hard to catch). However, when we sit around a table those cues are terribly effective.

I run meetings every week in SL. In the beginning I had to implement a tiny HUD (worn on top) with five basic functions (Wave, Yes, No, Clap, Away); a sketch of the idea follows below. People used to wave to manage turn taking. With lip sync they don't need to wave, and they feel more comfortable interrupting. So even at this low level of fidelity, it helps.

What about the green indicators? Well, they don't seem as effective. We do use them during voice troubleshooting, but during conversations they are a visual encumbrance, obtrusive almost. At close range, around a table, they are meaningless: it takes some brain processing to work out which white dot belongs to whom. That destroys immersion; focus moves from the face to the dots above the heads. I am tempted to get into the design of notifications here, and the need to fine-tune their degree of obtrusiveness, but that would be labouring the point.
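For the curious, the HUD I mentioned boils down to very little script. What follows is a minimal LSL sketch, not my actual implementation: the dialog channel is picked at random, and it assumes five animations named after the buttons ("wave", "yes", "no", "clap", "away") sit in the HUD prim's inventory.

    // Minimal meeting-cue HUD. Assumes animations named after the
    // buttons below exist in this prim's inventory.
    list    gButtons = ["Wave", "Yes", "No", "Clap", "Away"];
    integer gHandle;    // current listen handle, so we can replace it

    default
    {
        attach(key id)
        {
            // Ask once, when worn, for the right to animate the wearer.
            if (id != NULL_KEY)
                llRequestPermissions(id, PERMISSION_TRIGGER_ANIMATION);
        }

        touch_start(integer num)
        {
            // A negative random channel keeps ordinary chat out of the way.
            integer channel = -1 - (integer)llFrand(1000000.0);
            llListenRemove(gHandle);
            gHandle = llListen(channel, "", llGetOwner(), "");
            llDialog(llGetOwner(), "Pick a cue:", gButtons, channel);
        }

        listen(integer channel, string name, key id, string message)
        {
            // The button label doubles as the animation name, lowercased.
            if (llGetPermissions() & PERMISSION_TRIGGER_ANIMATION)
                llStartAnimation(llToLower(message));
        }
    }

Nothing clever there, and that is rather the point: five coarse cues were enough to manage turn taking before lip sync arrived.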
Having spent some time designing for autistic children, I am aware of face-processing training strategies involving games that start, in the early stages, with cartoon facial expressions (just animated eyebrows, for example) before progressing to real human expressions. So this fidelity argument can have many strands. Keeping the deaf in mind might appear to be an inclusive approach, but not necessarily.

On a different note: has 'bad kung fu movie dubbing' evolved into an artistic medium? Just wondering. For those who have time for some levity: http://tinyurl.com/ckrey5

And more generally, at what point do the 'fidelity' and 'uncanny valley' arguments become relevant? Given the kind of CPU load we expect to need to achieve our fidelity goals, I cannot see a high-fidelity SL next year or the year after that (unless a Carmack is born somewhere and finds a way to squeeze something more out of existing hardware).

Alternative strategies: would a mashup of video conferencing technology and SL be useful, without destroying the immersive nature of SL? I hope to see a video conferencing company partner with SL in the same way that Vivox did.

Ramesh

(I think I have to use my RL name. SL has fragmented my identity: too many alts. This is not entirely bad, btw.)

On Mon, May 4, 2009 at 11:08 AM, Mike Monkowski <[email protected]> wrote:
> Ann Otoole wrote:
> > RE: "lip sync" capabilities in Second Life
> >
> > Let me know when a deaf person can read the avatar lips and understand
> > what is being said through the microphone and I will fully support it
> > not only being on by default but no option to turn it off. Until then it
> > looks like a bad kung fu movie dubbing job.
>
> The technology exists to make lip sync good enough for lipreading, but
> until Vivox opens the source for SLVoice, giving access to the audio
> stream, this "bad kung fu movie dubbing" is all that is possible. Also,
> lipreading-quality lip sync would require real CPU cycles. It would,
> however, be possible to make it more realistic with very little CPU
> resource.
>
> Mike
_______________________________________________
Policies and (un)subscribe information available here:
http://wiki.secondlife.com/wiki/SLDev
Please read the policies before posting to keep unmoderated posting privileges
