Joseph said: "I haven't heard of any core, structural, intrinsic way
in which any software or hardware system can be designed that allows
for visual complexity to be more easily translated into audio or
other input using an assistive technology."

The technology is here; it's just not evenly distributed. Read all
about it: http://www.nfb.org/Images/nfb/Video/K-NFB Reader_Custom.wmv

Andrei: "But why should that be a software overlay on the system
instead of built into the system itself?"

In a word, it's an interface, a presentation layer.  You don't
re-interpret the visual spectrum into audio deep down in the OS (why
not at the scripting level, e.g. JScript?).  It must be an open,
standards-based solution to be truly universal and robust.

Note that the K-NFB is nothing more than a digital camera and an OCR
module: a simple solution that adapts to almost any environment,
subject text, and user.

It can't be rocket science to build a dynamic OCR reader for today's
powerful desktops.
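To make the point concrete, the whole thing is conceptually three
stages: capture, recognize, speak. Here is a minimal Python sketch of
that pipeline with every component stubbed out; all function names and
return values are invented for illustration, and a real reader would
plug in an actual camera driver, an OCR engine (e.g. Tesseract), and a
speech synthesizer behind the same three seams.

```python
# Hypothetical sketch of a capture -> OCR -> speech pipeline, in the
# spirit of the K-NFB Reader. Every function is a stub; the names and
# outputs are placeholders, not a real API.

def capture_image(camera_id: int = 0) -> bytes:
    """Grab a frame from the camera (stubbed with placeholder pixels)."""
    return b"\x00" * (640 * 480)  # stand-in for a real frame buffer

def recognize_text(frame: bytes) -> str:
    """Run OCR over the frame (stubbed; a real reader would call an OCR engine)."""
    return "EXIT -> Gate 12"  # stand-in for recognized text

def speak(text: str) -> str:
    """Hand recognized text to a speech synthesizer (stubbed as a string)."""
    return "[speaking] " + text

def read_scene(camera_id: int = 0) -> str:
    """The full loop: capture, recognize, present as audio."""
    frame = capture_image(camera_id)
    text = recognize_text(frame)
    return speak(text)

print(read_scene())  # [speaking] EXIT -> Gate 12
```

The point of the sketch is the shape, not the stubs: because each
stage sits behind a plain interface, the same loop adapts to any
camera, any OCR engine, and any output modality.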


. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Posted from the new ixda.org
http://gamma.ixda.org/discuss?post=21080

