On Wed, Dec 21, 2005 at 02:47:11AM -0800, Alan wrote:
> >And with your helpful assurance, the current inaccessible version of
> >OpenDocument is pushed into state mandates where it takes an Act of God or
> >an Act of State Government to get a new version of the standard (which
> >probably won't fix these kinds of problems anyway) to be adopted.
> >
> The CIO of Mass. has publicly stated that ODF will not be implemented 
> for disabled workers until there is a reasonable, accessible alternative.
> Further, he has made good accessibility support a key point of the 
> contracts currently being bid for implementation.

At this point the most accessible format is still ASCII.


> >On behalf of the blind, thanks a lot.  At least with word documents we had
> >a chance of sorting these things out if someone bothered to set the
> >metadata on their images, and form entries just didn't exist.
> 
> I was under the impression that the screen readers and other tools 
> available for Office and kin were rather hackish and that a change in 
> the file formats generally made a mess of things until someone 
> reverse-engineered the changes.
> Is this not the case?

Particularly now in the age of GUIs, yes.  It used to be that you wrote a
small set of rules for a given program, and you would set a mode when you
started it up to tell the reader what to read and what not to.  It was
real easy back in the DOS daze because all you had to do was tell it what
kinds of screen updates to read to you as they happened, plus a couple of
ways to read random bits of the screen on demand.

Nowadays in Windows it requires a major partnership between whoever writes
the screen reader and whoever writes the thing being read to make sure the
two programs cooperate with each other.  Word is very well supported.
Excel not so much.  Access only a little.  Winamp, forget it.  iTunes,
much the same.  Explorer is okay.  Firefox isn't.

Screen reading on the Mac faces all the same problems it does on Windows,
except that Windows started getting support for the basics ten years ago.
The Mac is just starting.

Support for screen reading in Linux is either pretty okayish using
screader (based on screen, using the same control-key-prefix notion screen
does for all of its commands) or yasr (an improved alternative to screader,
but again trying to use up only one control key as a prefix), or even more
infantile than on the Mac under GNOME and the like.


The problem with a screen reader is easy to explain to those who remember
the old 8-bit home computers.  If you were lucky enough to have a diskette
drive back then, you had a random-access storage medium.  This is what the
visual screen is like.  Whatever you look at, whatever you read, whatever
you do with your eyes, that is what you've got.  Even when reading a
passage, we don't read linearly.  We bounce our eyes back up to a word we
read two lines earlier to make sure we read it right, to enhance our
understanding of the context we're looking at now.  A good college student
learns to read far more randomly than that--glancing forward and back
quickly to form a mental outline of the reading, possibly going so far as
to start formulating the questions they will want answered by the time
they're done reading the current section, and so on.

When you attempt to turn visual information into auditory information, you
lose this ability.  It becomes like the cassette recorder those of us too
poor for a diskette drive had to use.  When dealing with linear media such
as books on cassette or as MP3s, we have the same controls you get with
cassette and CD players.

When dealing with interactive computer speech, if we have the ability to
read anything that changes automatically and anything we want on demand
(i.e., as we could under DOS with a tuned reader), we have some
random-access ability once we learn the screen reader's silence command.
An expert user will often be heard listening to a bunch of sharply cut
phrases at high speed as they tell the software to read something, listen
just long enough to hear what they need, shut the thing up, and read
something else.

There are two problems.  First, in order for this to work, you've gotta
get really good at doing it.  It takes a lot of training and practice to
figure out how exactly to navigate around the screen with ease in a given
application.  Second, you're navigating around the screen--hence the name
screen reader.  The application that provides the audio interface is just
bolted onto the side of another application, which may or may not be well
suited to it.


There is an alternative to this.  The first one I used (back in 1990) was
a program called KeySoft on a Toshiba T1000 (no suffix) with built-in
speech hardware, sold as the PulseData Keynote PC+.  This thing ran under
DOS, and was designed to be heard, not seen.  It was primarily a text
editor, though it had a basic scientific calculator mode, a very simple
file manager, and a line-oriented terminal emulator as extra utilities.
You get the same out of vim or emacs today.

What was different about KeySoft was that the Alt key had a sticker over
the top labelling it Read.  Generally speaking, you used Ctrl-thing to
issue an editor command, and Read-thing to issue a reading command.  The
control command would navigate to the next paragraph, the read command
would simply start reading the next paragraph.  It would read back what
you typed (letters or words) as you typed, and it would read as you
navigated.  Obviously the Ctrl/Read mapping wasn't 100% consistent, since
Read-H was help and Ctrl-H was, well, yeah.  ;)

KeySoft was unique because the screen display was really an afterthought.
It displayed just one line of the file--a slice of the current paragraph.
This was very good for those who used separate boxes that display one line
(or part of a line) in Braille.  There was a view command that would
attempt to render a screenful of the file in roughly WYSIWYG format
centered on the cursor, but that was mostly so sighted people could stand
to use KeySoft for more than a few sentences.

Today KeySoft runs under Windows CE and is part of a hardware device we
call a notetaker.  Think about the form factor of the Sony Picturebook of
a few years ago or the Toshiba Libretto or the Psion NetBook or other
subnotebooks that use a stylus because they lack room for a trackpad.  Now
take the screen off completely, give it a QWERTY laptop keyboard or a nine
key Braille keypad (possibly with added function and arrow keys), maybe
add one of those Braille displays, and price it anywhere from $1200 to
$7000.  Essentially it's a PDA that's way too big to go in a pocket for
ten times the price.  Yeah, it now includes a web browser.

The competitor to KeySoft is the software used on the PACmate, which
derives from the Braille'N'Speak.  The lineage is important only because
KeySoft was originally an ASCII/QWERTY thing and the PACmate software was
originally a Braille thing.  Today both do much the same thing.

In the free software world, we have emacspeak.  The problem is that
despite T. V. Raman's claims otherwise, emacspeak does seem like it has
been bolted onto emacs rather than built into it.  All of the commands are
tied to the Ctrl-E prefix, some modules don't work right with it, and
maybe if I knew more about emacs I would understand some of the things it
reads, but a totally blind person would find that they needed to
understand emacs in order to learn emacspeak, and they needed to be able
to use emacspeak to learn emacs.  The elegance of having one bucky bit
mean do something in the editor and another mean read something, and
mostly having to learn just one set of commands...  Well, I miss it.


Eventually what I'll do is reread some stuff on best practices for working
with arbitrarily large files where large portions may be inserted or
deleted.  Some solution that avoids loading the whole file into one
contiguous block of memory--periodically realloc()ing it as the file
approaches the limit, with lots of memmove()s involved--would be good.  ;)
I wouldn't think twice about using that kind of thing on a 16K buffer, but
when you start talking about 16M buffers and larger it's just not the
right solution anymore.  =)

Writing a text editor really has more to do with implementing the above
without bugs and providing a good interface to it all.  Then you start
developing macro languages, syntax highlighting, editor modes, and
generally complicate the problem significantly beyond reason.  ;)

> >Yeah, I know, this is open source--if I have a problem with it I need to
> >go and learn the codebase, figure out how to solve the problem, gather
> >support within the community for my proposed solution to it, write a
> >patch, and try to plead my case for it to be incorporated upstream.  Then
> >I can try to get the state to use the new version.  As if.
> >
> You could join the current working group for accessibility issues and 
> make your concerns known.
> http://www.oasis-open.org/committees/tc_home.php?wg_abbrev=office-accessibility

Others already have.  The response they've gotten universally is that Word
doesn't do these things either, so it's okay.  My voice won't change that,
and it won't stop people from calling access an application issue either.

Besides that, the plate is pretty full right now, just writing a text file
reader.  Yes, reader.  As in out loud.

-- 
"We are what we repeatedly do.  Excellence, therefore, is not an act,
but a habit."
        -- Aristotle
_______________________________________________
EUGLUG mailing list
[email protected]
http://www.euglug.org/mailman/listinfo/euglug
