I am a little concerned that we are increasingly breaking down a metaphor, the 
'virtual interface', without realizing what that abstraction buys us.  At the 
moment, we have the concept of a hypothetical pointer and a hypothetical 
keyboard (with some abstract state, such as focus) that you can actually 
drive using a whole range of physical modalities.  If we develop UIs that are 
specific to people actually speaking, we have 'torn the veil' of that abstract 
interface.  What happens to people who cannot speak, for example? Or who cannot 
speak the needed language well enough to be recognized?
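To make that concrete, here is a minimal sketch in the web context (the
'#save' button and save() action are hypothetical).  Binding to the
abstract 'click' activation event, which user agents synthesize for
mouse, keyboard (Enter/Space on a button), touch, and assistive
technology alike, keeps the veil intact; binding to 'mousedown' or to a
speech-specific trigger would tear it:

    // Hypothetical action; stands in for whatever the UI does.
    function save(): void {
      console.log('saved');
    }

    const button = document.querySelector<HTMLButtonElement>('#save');

    // Too specific: ties the action to one physical modality.
    // button?.addEventListener('mousedown', save);

    // Abstract: any modality that drives the virtual pointer or
    // keyboard (mouse, Enter/Space, touch, switch access, voice
    // control) reaches the same handler.
    button?.addEventListener('click', save);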


David Singer
Multimedia and Software Standards, Apple Inc.