Re: Microsoft Talks Raising the Bar on Accessibility

2019-07-08 Thread Tom Kingston via Talk
NVDA and VoiceOver both have voice rotors like Window-Eyes, and both
support automatic language switching.


On 7/8/2019 2:56 AM, David via Talk wrote:

Please, let me give you all a tiny example of what still needs
attention before a screen reader like Narrator becomes productive.
And yes, it may seem a tiny thing to many, but it has a lot of impact.
And it is but one of the many things to be considered.

For most people living in the USA, or in places that are mainly
English-speaking, this might never have been an issue. The same goes
if you live anywhere else and only know one language. But for many
people, everyday activity looks a bit more challenging. Those who
live in Canada, non-English European countries, and most parts of
Africa will know what I am talking about. In Canada, for one, you have
to deal with information in both English and French. Though you might
have taken the shortcut and learned to cope with your English
synthesizer attempting to read French words, you will agree it is not
optimal. And I promise you, try letting it read some Icelandic,
Swedish or Greek, and you will likely be lost. These languages often
have national characters that are not part of the English alphabet.
Here I am going to commend Eloquence for at least *trying* to
pronounce something that comes close enough. But try any of the other
synths on the main market, and you will find they typically skip those
characters in the text. Imagine your English screen being read out to
you with every occurrence of the letters A, G and K left unspoken. How
productive would that be?
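To make the effect concrete, here is a toy Python sketch, purely
illustrative and not any real synthesizer's behavior, of a synth that
silently drops characters outside the alphabet it knows:

```python
# Toy model of an English-only synthesizer: any character outside its
# supported set is silently skipped, the way some synths drop national
# characters such as the Danish "ae" ligature. Illustrative only.
import string

SUPPORTED = set(string.ascii_letters + string.punctuation + " ")

def speak(text):
    """Return what an English-only synth would actually voice."""
    return "".join(c for c in text if c in SUPPORTED)

print(speak("Vi ses i morgen, kære ven"))  # the æ simply vanishes
```

The listener hears a word with a letter missing and has no indication
that anything was skipped at all.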

So, for a multilingual user, the way to go is to have two or more
synthesizers installed, each handling one of the languages you work
with. It is not at all rare for users living outside the English
world to have to switch between languages numerous times a day, even
several times an hour. You are checking something on the net that is
in English. You get a message from your mom, which is in your local
language. Then you need to get in touch with someone in another
country, like here on the list, and again you are dealing with
English. And you simply have to pay that phone bill today, and your
banking site is back in your native language. Oh wait, you even have a
friend who speaks a third language, and you sure want to drop her a
nice word of encouragement. One African person I talked to told me
that they could speak no fewer than seven different languages.

Now, you can easily, and cheaply enough, get hold of electronic voices
for most major languages today. As long as they are SAPI voices, they
will tie in with your screen reader without too many problems. That
is, on Windows. The thing is, how easy is it to swap between the
different voices? For the last decade it has been a swift thing in
Window-Eyes, since you can use the VoiceRotor app, which became part
of the standard installation of the screen reader. Using this app, all
you need to do is press a hotkey, wait a second while the next voice
is activated, and you are good to go. NVDA does lack such a hotkey.
Swapping synthesizers requires you to go in and out of at least one,
and sometimes two, different menus.
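The single-hotkey swap being asked for is conceptually simple: it is a
round-robin over the configured synthesizers. A minimal Python sketch
of the rotor logic (SynthCycler is a hypothetical name for
illustration, not part of NVDA's or any screen reader's actual API):

```python
# Minimal round-robin model of a "voice rotor": one hotkey press
# advances to the next configured synthesizer/language pair.
# SynthCycler is a hypothetical name, not a real screen-reader API.
class SynthCycler:
    def __init__(self, synths):
        if not synths:
            raise ValueError("need at least one synthesizer")
        self.synths = list(synths)
        self.index = 0

    @property
    def current(self):
        return self.synths[self.index]

    def next_synth(self):
        """What a single hotkey press would do: activate the next voice."""
        self.index = (self.index + 1) % len(self.synths)
        return self.current

rotor = SynthCycler(["English", "French", "Greek"])
print(rotor.current)       # starts on English
print(rotor.next_synth())  # one keypress: French
print(rotor.next_synth())  # another keypress: Greek
print(rotor.next_synth())  # wraps back around to English
```

The whole point is that one keypress replaces a round trip through
menus, exactly what VoiceRotor provided on Window-Eyes.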

Android has quite a number of languages to choose from. And if you
don't want to stick with Google's voice for your native tongue, you
simply buy another voice and get it installed. Still, you cannot
easily swap between them: in and out of menus, requiring you to
perform several gestures for each swap. Now imagine you are checking
your email this morning, and there are 25 new messages for you. The
first three are in one language, then come 2 in another, then 4 more
in the first, then 1 more in the other, and so forth. If each change
of language takes you 10 seconds, how long will it take to check your
mail? I'd let you do your own time math. Smiles.
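The time math is easy to sketch. Every boundary between two runs of
same-language messages costs one switch; a quick Python estimate,
where the run lengths are made up purely for illustration:

```python
# Estimate the overhead of per-message language switching.
# Run lengths are illustrative: 3 messages in language A, 2 in B, etc.
runs = [3, 2, 4, 1, 5, 2, 3, 1, 4]   # 25 messages in total
SECONDS_PER_SWITCH = 10

switches = len(runs) - 1              # one switch at every run boundary
overhead = switches * SECONDS_PER_SWITCH
print(f"{switches} switches, {overhead} seconds lost")  # 8 switches, 80 seconds
```

And that is just one mail check; repeat it across a working day and
the half-hour figure below stops looking far-fetched.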

See, for most multilingual users this is the scenario of their daily
computer activity. And what do you think your employer will tell you,
if you have to spend half an hour each working day just swapping
synthesizers? Something about productivity, my guess would be.

A similar thing could be said when it comes to Braille output. Just as
English has Grade-two Braille, so do other languages. But since the
words differ from one language to another, and Grade-two is meant to
minimize space requirements and increase readability, the Grade-two of
another language has to be quite different from the one you know in
English. In English, you have one word consisting of only the letter
A. That is why that letter by itself cannot serve as an abbreviation.
But in Scandinavian languages, for instance, there are no words made
up of only that one letter. So, in the Grade-two of those languages,
an A on its own will be the abbreviation of the word "at". Again, in
English you have the combination TH occurring several times even in
one and the same phrase. It is of high value to you to shorten that
down to one character. In several European languages, that TH
combination
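The point made above, that contraction tables are language-specific,
can be sketched with a toy lookup. The dictionaries here are invented
fragments for illustration, not real braille translation tables:

```python
# Toy Grade-two contraction lookup: the same stand-alone letter expands
# differently depending on which language's table is active.
# These dictionaries are illustrative fragments, not real braille tables.
CONTRACTIONS = {
    "en": {"a": "a"},    # in English, a lone "a" is just the word "a"
    "scand": {"a": "at"},  # in a Scandinavian table, a lone "a" reads "at"
}

def expand(letter, lang):
    """Expand a stand-alone braille letter under a language's table."""
    return CONTRACTIONS[lang].get(letter, letter)

print(expand("a", "en"))     # -> a
print(expand("a", "scand"))  # -> at
```

So a braille display, like a synthesizer, has to know which language
table to apply, and switching that table has the very same
productivity cost as switching voices.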
