Most of us old-timers probably expected voice I/O to be a common part of personal computing by now. But here we are in 2008, and I don't see even early signs of voice emerging into the mainstream. Products like Dragon NaturallySpeaking have some popularity, but my sense is that they're used far more for dictation than for any sort of command-and-response interface. Both Mac OS X and Windows Vista ship with built-in speech recognition, but does anybody use it (or even know it's there)?

So my question for the group is: why? Is it due to technical shortcomings, like poor recognition accuracy or trouble with background noise? Are there social issues, like not wanting to be overheard or feeling silly talking to a machine?

Or is it that splicing a voice-based UI into current graphical interfaces just doesn't give a satisfactory user experience?

This, to me, is the most intriguing possibility. Voice command today reminds me of the earliest mice for PCs, which generated arrow keystrokes as you moved them around; although they were ostensibly compatible with existing applications, they just didn't work well enough to justify using them. Could it be that an effective voice-based UI requires more fundamental integration into the OS and applications? Perhaps we need an OS-defined structure for a spoken command syntax and vocabulary, rather than just expecting users to speak menu items?
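To make that concrete, here is a toy sketch of what I mean, in Python. All the names in it (SpeechRegistry, Command, dispatch) are invented for illustration; this isn't any real Mac OS X or Vista API, just one possible shape for an OS-level service that applications declare their vocabulary to:

    # Hypothetical sketch: applications declare a verb/object vocabulary
    # up front, so the recognizer matches against a known grammar instead
    # of trying to decode free-form speech against menu text.

    from dataclasses import dataclass

    @dataclass
    class Command:
        verb: str                # e.g. "open", "delete"
        objects: list[str]       # nouns the verb can act on
        action: callable = print # handler invoked on a match

    class SpeechRegistry:
        """Toy stand-in for an OS speech service holding app vocabularies."""
        def __init__(self):
            self.commands: list[Command] = []

        def register(self, command: Command) -> None:
            self.commands.append(command)

        def dispatch(self, utterance: str) -> bool:
            # Naive keyword matching; a real recognizer would constrain
            # its decoding to this grammar, which is the whole point:
            # a small, closed vocabulary is far easier to recognize
            # reliably than arbitrary dictation.
            words = utterance.lower().split()
            for cmd in self.commands:
                if cmd.verb in words and any(o in words for o in cmd.objects):
                    cmd.action(f"{cmd.verb} -> {utterance}")
                    return True
            return False

    registry = SpeechRegistry()
    registry.register(Command(verb="open", objects=["inbox", "calendar"]))
    registry.dispatch("please open my inbox")   # matches; runs the handler

The matching logic above is deliberately crude; the interesting part is the contract. If the OS knows every command an application can accept, recognition becomes a constrained-grammar problem rather than open dictation spliced onto a GUI.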

Why aren't we talking to our computers yet? Should we be?
