Andrei, let's try to understand the problem before we focus on the
specifics of the engineered solution.  We should first put a bit more
resolution on what it takes to be "disabled".

Let me start with a few observations I've made.  I've never seen
someone with limited motor control use an Apple; it's primarily tablet
PCs or dedicated hardware.  Of the group that falls under "blind",
I'd like to first better define the problem as visual acuity, which
can be broken down into two major groups: Hyperopia, or
'farsightedness', and Myopia, or 'nearsightedness'.  -Very- generally
speaking, the first affects the elderly and the second is an eye
defect.  And Hyperopia is going to affect a significantly larger
percentage of computer users as we all age and our eyesight fails us
- I feel that's the business case for an eventual application-based
solution.

I hope others will chime in with their observations...

That said... if I may "bite", I'm surprised to see a talented
designer such as yourself delve straight into a proposed solution.
If I may summarize your argument for fixing the OS with one comment
you made: "Text doesn't appear on your screen unless it was coded
and rendered to do so."

I, as a reader of what is on the screen in front of me, do not care
about the code needed to compile/interpret/render the information in
front of me.  I just read; humans are system agnostic.  I can read
applications or web pages, games or graphics - what I describe as the
'visual spectrum' of information I take in.

My approach to compensating for a lack of ability to interpret the
visual spectrum would be to look for a solution not based in a
specific OS or W3C standard.  I'm suggesting a new layer that
interfaces and translates between the user and the presentation
layer.  Build a reader engineered to interpret WPF, JavaScript,
Vista or OS X's Core Animation and you shoot yourself in the foot
when that specific technology becomes obsolete.

I'm suggesting a system, based on my experience working with people
with disabilities, that tracks the user's eye (if possible) and reads
that section back to them.  Regardless, it monitors all activity on
the screen, from OS dialogs to JavaScript interactions, reads those,
and translates them into audio or touch (the other two major
'spectrums', or senses, through which we receive information).  In
short, an Optical Character Recognition based Intelligent Agent.
Forget about interpreting what it takes to get those electrons to the
monitor; let's just dynamically snapshot the screen and OCR it.
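To make the snapshot-and-read idea concrete, here's a minimal sketch
of the agent's loop in Python.  The screen-capture, OCR, and
text-to-speech back-ends are all hypothetical stubs of my own (a real
build would plug in an actual screenshot API, OCR engine, and speech
synthesizer); only the control flow is shown - grab a frame, OCR it,
and read back only what has changed since the last snapshot:

```python
# Sketch of the OCR-based reading agent's loop.  capture_screen(),
# ocr(), and speak() are placeholder stubs, NOT real APIs; here
# "frames" are plain strings so the logic is runnable anywhere.

def capture_screen(frames):
    # Stand-in for a real periodic screen grab; yields snapshots.
    for frame in frames:
        yield frame

def ocr(frame):
    # Stand-in for a real OCR engine; our fake frames are already text.
    return frame.strip()

def speak(text, spoken_log):
    # Stand-in for a text-to-speech call; records what would be read.
    spoken_log.append(text)

def monitor(frames):
    """Read back only what changed between successive snapshots."""
    spoken = []
    previous = None
    for frame in capture_screen(frames):
        text = ocr(frame)
        if text and text != previous:  # re-read only new content
            speak(text, spoken)
        previous = text
    return spoken

# Example: an OS dialog appears, stays on screen, then changes.
print(monitor(["Save changes?", "Save changes?", "File saved."]))
# ['Save changes?', 'File saved.']
```

The diff-against-previous step matters: without it the agent would
re-read the entire screen on every snapshot instead of announcing
just the new dialog or changed region.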

I feel it's going to be more fruitful to address the point of
failure, not the system.  The underlying system will change; the
issue of deficient vision will not.  That point of failure lies
between the user and the system presentation layer.  I think I'll
call this approach User Centered Design (o;

cheers :-pauric


. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Posted from the new ixda.org
http://gamma.ixda.org/discuss?post=21080


________________________________________________________________
Welcome to the Interaction Design Association (IxDA)!
To post to this list ....... [EMAIL PROTECTED]
Unsubscribe ................ http://gamma.ixda.org/unsubscribe
List Guidelines ............ http://gamma.ixda.org/guidelines
List Help .................. http://gamma.ixda.org/help
