Re: Accessibility for person with a motor disability

2018-03-21 Thread Eric Johansson


On 3/21/2018 3:03 PM, Mats L wrote:
> Eric,
>
> It's always very good to have a person speaking on behalf of himself
> or herself as a user representing an actual need. There is a lack of
> this for people with mobility based access problems in these forums
> for free software, compared to the areas of low or no vision. Part of
> the problem is the really wide and diverse range of needs regarding
> physical access - disabilities as well as abilities. (Regarding
> cognitive disabilities and needs there is a general lack of people at
> all able and interested in speaking and doing things on behalf of
> those needs in the GNU/Linux world.) So thanks for stepping in! I'll
> be looking at your ideas with great interest.
I also think part of the challenge with providing mobility-related
accessibility features is that those of us who are technical enough to
understand what to do no longer have the hands to make it happen, so we
count on people like you to work with us building prototype features. I
did that with togglename when I had enough money to pay somebody to
write code for me.
>
>  But remember that speech input is a dead end for a large part of
> users with mobility based access problems, those who have impaired or
> no speech.
Yes. Speech recognition is useful only if the person has a functioning
vocal apparatus. I'm reminded of this every time I get a cold. :-)

>
> Eye-gaze input is another hot area where it seems unrealistic to
> expect any decently competitive and user-friendly solutions in the
> free software domain in any near future.
>
> This said, I think it's very good to have Alex ask these questions
> about what's available. A decent awareness about the state of the art
> is always a necessary starting point for some improvement. And people
> have difficulties even finding their way to existing solutions. Things
> like decent head-tracking, on-screen keyboards (OSK) etc are really
> important to have available, and are life savers for some users, even
> though there is a huge potential of improvements.

This is a place where a foundation could really help. Like you point
out, all the accessibility features we have are really important and can
mean the difference between watching the clock tick for the rest of your
life and being able to participate in society at some level.

I wish there were some foundation money to help us build new interfaces:
not accessibility tools, but different interfaces for using tools like
speech recognition, eye tracking, head tracking etc. to operate in
larger chunks rather than emulating the fine-grained motions of a mouse
or keyboard.

> One thing that makes me frustrated is the sustained tendency of
> unnecessary fragmentation and lack of collaboration even in this area
> of handling basic accessibility needs. Why don't for example the
> people involved in maintaining and developing Caribou and Onboard team
> up and unite on one common OSK with a wider range of functionality and
> options - for all GNU/Linux distros and flavours, and with support
> from them?
>
> Dasher is really an example of
> the kind of needs based, unorthodox and innovative solutions for text
> input that you were asking for. Have you tried it? As a second best,
> compared to excellent speech recognition, I think it could be relevant
> for you? But there I guess we now also have a problem with continued
> maintenance since David MacKay so unfortunately passed away.
>
> Maintaining decent accessibility for all in an ever changing ICT
> universe is not an easy task, and particularly not on the free
> software platforms it seems, so far ...

You've hit on a really big issue. Too much fragmentation, not enough
concentrated support to solve the problem once and use it everywhere.

Dasher doesn't work for me because I can't move the mouse fast enough or
accurately enough to pick off letters, and I'm terrible at spelling. By
the way, that's a serious side effect of using speech recognition: your
ability to spell degrades...

For a while, I was going to the local a11y meetup, and when I described
my issues, I got back a bunch of blank looks. These people had no idea
how to deal with accessibility needs like mine. Part of the challenge of
using speech recognition is not just the speech recognition and the
application modifications; it's that using speech recognition in an open
office is kind of counterproductive to other people's work. It's about
as easy to relax and speak there as it would be to try to take a pee in
a bucket in an open office.

So if you want, I'll be glad to keep chiming in and be as constructive
as possible. If someone feels like trying to prototype fitting the
Dragon browser extensions into something like Electron, I'd be glad to
work with them.

--- eric

-- 
Ubuntu-accessibility mailing list
Ubuntu-accessibility@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-accessibility


Re: Accessibility for person with a motor disability

2018-03-21 Thread Eric Johansson


On 3/21/2018 11:30 AM, Alex ARNAUD wrote:
> Le 21/03/2018 à 15:27, Eric Johansson a écrit :
>> On 3/20/2018 5:35 AM, Alex ARNAUD wrote:
>>>
>>> What is as you know the most efficient way to write text with a
>>> head-tracking software?
>> I'm frustrated by this kind of question because frequently it is the
>> wrong question. You should be asking what is the appropriate interface
>> to enable the person with a disability to write, and more importantly,
>> edit text. Much of this thread has been proposing answers based on
>> what's available, not what the person needs.
>
> I understand what you mean. I just don't know what people with motor
> disabilities need. I'm trying to understand what is available, and
> I'll check with an association what the users use in practice. I'm at
> the first step of a long road.
We need more than just an accessibility tool; we need a different way of
accessing the functionality and data embedded in applications. I've been
trying for years to figure out how to write code by speech, and here's
the current state of my thinking. I wrote this as a proposal to GitHub
for a talk at GitHub Universe.

https://docs.google.com/document/d/1M14DEoC2uTWtQv1HtRyUwK5NKT6Wb0vutu98F9Yl1b0/edit?usp=sharing

It just occurred to me that another example of building your own
interface is what I'm doing right now. I'm extracting bank statements to
give to my accountant for tax prep. When you download a statement, my
bank labels every statement PDF.pdf. Yeah, I was thinking the same
thing. So I built a grammar where I can say "statement in June" and it
creates a file name of "1234-2018-06.pdf". I still have to display the
PDF and then click the download button before I can get to the point
where I need to enter a filename, but being able to generate filenames
by speech makes it much easier.
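To make the idea concrete, here is a minimal sketch of the mapping such
a grammar performs. This is plain Python so the logic is visible; in
practice it would be wired into a speech-grammar framework, and the
account prefix, year, and phrase shape are my own assumed placeholders,
not the actual grammar.

```python
# Hypothetical sketch: map a spoken phrase like "statement in June"
# to a filename such as "1234-2018-06.pdf".
import calendar

ACCOUNT = "1234"  # assumed account prefix
YEAR = 2018       # assumed statement year

# month name (lowercased) -> zero-padded month number
MONTHS = {name.lower(): f"{i:02d}"
          for i, name in enumerate(calendar.month_name) if name}

def statement_filename(phrase: str) -> str:
    """Turn 'statement in June' into '1234-2018-06.pdf'."""
    month_word = phrase.split()[-1].lower()
    return f"{ACCOUNT}-{YEAR}-{MONTHS[month_word]}.pdf"

print(statement_filename("statement in June"))  # -> 1234-2018-06.pdf
```

The point is that one utterance produces the whole filename in a single
step, instead of spelling it out character by character.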

>
>> I can't use keyboards much because of a repetitive stress injury. I
>> would say that the most efficient way to write text with a head tracking
>> software is to not even try at all. It's the wrong tool. For many kinds
>> of mobility-based disabilities (RSI, arthritis, amputation etc.) speech
>> recognition would be a better tool.
>
> Which tool are you using on your GNU/Linux distribution for doing
> speech recognition ?

I'm not using a GNU/Linux distribution because, well, people have
promised speech recognition on Linux for as long as I've been disabled
and it just hasn't happened. What I use is Windows with
NaturallySpeaking and whatever hacks I can get to drive free software.
I'm missing tons of functionality that's present in NaturallySpeaking
plus Word (i.e. Select-and-Say and easy misrecognition correction), but
I do what I can.

I think it's safe to assume that we will not see speech recognition on
Linux in the near future. There are at least half a dozen projects I can
name off the top of my head that were going to provide speech
recognition on Linux "any day now". If you're going to use speech
recognition today, the recognition environment must be available now.

The question then becomes: what can we do if we put speech recognition
"in a separate machine", like a VM or an Android phone? The idea is to
isolate the nonfree components so that a disabled person can make a
living, participate online etc. using a mostly free environment. I
propose this because the assumption that every machine should be
equipped with the accessibility tools the user needs raises the cost of
accessibility and limits the disabled user to just one machine that has
been customized for them. If, on the other hand, we put the
accessibility interface in a separate box like a smartphone and provide
a gateway to drive applications, then many more machines could be made
accessible at very low overhead.
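One way to picture the gateway: the box running the nonfree recognizer
frames each recognized utterance and ships it over the network to a
small free listener on the target machine, which replays it as input.
The sketch below shows only the wire framing (length-prefixed UTF-8);
the protocol and names are my own assumption, not an existing tool, and
the actual input injection is out of scope here.

```python
# Hypothetical sketch of a speech-gateway wire protocol: each recognized
# utterance is length-prefixed so utterances can travel back-to-back on
# one TCP stream and be peeled off in order by the listener.
import struct

def frame_utterance(text: str) -> bytes:
    """Length-prefix a recognized utterance for the wire."""
    payload = text.encode("utf-8")
    return struct.pack(">I", len(payload)) + payload

def read_utterance(buf: bytes) -> tuple[str, bytes]:
    """Decode one framed utterance, returning (text, remaining bytes)."""
    (length,) = struct.unpack(">I", buf[:4])
    payload, rest = buf[4:4 + length], buf[4 + length:]
    return payload.decode("utf-8"), rest

# Two utterances travel as one stream; the listener reads them in order.
stream = frame_utterance("open terminal") + frame_utterance("git status")
first, rest = read_utterance(stream)
second, _ = read_utterance(rest)
```

With something like this, only the gateway box needs the nonfree
recognizer; any machine running the small listener becomes accessible.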

Remember what I said about solving the disabled person's needs? As I
said to one free software advocate: take care of the needs of the
disabled person first, making them as independent as possible, able to
earn a living, able to live a life. Advocate free software second, if it
fits their needs. I know this is not a popular attitude in some circles,
but quite frankly, if I had to wait for speech recognition from the free
software community, I would be living on disability, wasting my life. I
wouldn't be able to work, I wouldn't be able to go to school, and I just
can't tell you how many things you lose when your hands don't work
right.







Re: Accessibility for person with a motor disability

2018-03-21 Thread Eric Johansson
On 3/20/2018 5:35 AM, Alex ARNAUD wrote:
>
> What is as you know the most efficient way to write text with a
> head-tracking software?
I'm frustrated by this kind of question because frequently it is the
wrong question. You should be asking what is the appropriate interface
to enable the person with a disability to write, and more importantly,
edit text. Much of this thread has been proposing answers based on
what's available, not what the person needs.

I can't use keyboards much because of a repetitive stress injury. I
would say that the most efficient way to write text with a head tracking
software is to not even try at all. It's the wrong tool. For many kinds
of mobility-based disabilities (RSI, arthritis, amputation etc.) speech
recognition would be a better tool.

Your question touches a hot spot for me because I've been living with a
disability for about 25 years now. I've also seen, for those same 25
years, people without disabilities proposing the same solutions over and
over again, either unable or unwilling to hear that those solutions are,
at best, crap and, at worst, humiliating.

As a person with a disability, I will tell you: anytime you try to
emulate/simulate a mouse and keyboard with tools like on-screen
keyboards, eye tracking etc., you are solving the wrong problem. The
right problem (in my opinion) is digging into applications, revealing
internal information, and providing access to internal controls so that
you can build an interface that matches the person's disability.
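As a toy illustration of "drive the application's internals, not its
mouse": if an application exposes its actions by name, a speech command
can invoke one directly in a single step. Everything below is a
hypothetical sketch, not a real toolkit or accessibility API.

```python
# Hypothetical sketch: expose an application's internal actions by name
# so any interface (speech, eye tracking, switch access) can invoke them
# directly, instead of synthesizing mouse movements and clicks.
class Application:
    def __init__(self):
        self.saved = False
        # internal controls published under speakable names
        self.actions = {"save document": self.save}

    def save(self):
        self.saved = True

def dispatch(app: Application, utterance: str) -> bool:
    """Route a recognized utterance straight to an internal action."""
    action = app.actions.get(utterance.lower())
    if action is None:
        return False
    action()
    return True

app = Application()
dispatch(app, "save document")
```

One utterance, one internal action: no pointer emulation, and the same
action table can serve whatever input modality matches the person's
disability.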

It's also very important to build the interface so it lets the person
automate or extend that interface without counting on anybody else to
create that extension. For example, my hands don't work right, so if I'm
going to extend my speech recognition interface, I need to do it with
speech recognition.

So I would go back to your disabled person and really look at what they
need. If they have enough physical ability to use speech recognition,
then it will make them more independent than head trackers or on-screen
keyboards ever could.

--- eric



