Jeff, I think all of the reasons you mentioned apply. In my opinion,
speech recognition and synthesis add very little within the
keyboard/mouse/screen paradigm we are currently in.

Studying this also reveals how much information there is in the way
we say things, human to human, and how tricky that is for computers
to analyze. A "Hmm" can mean many different things depending on
timing, intonation, etc. Gabriel Skantze at KTH built a rather nice
"pedestrian navigation" system that tries to overcome this.

http://www.speech.kth.se/~gabriel/software.html


. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Posted from the new ixda.org
http://www.ixda.org/discuss?post=29005


