Hi Rick,

    I create the WE object for those projects; that's the first thing to do in
terms of getting the screen reader up and running. My games use two different
techniques: in the Trek game I leave the option for screen display open, so I
have to use a Python module for hooking the keyboard, whereas Battleship uses
events embedded in Python and has no screen display, which lets me use
keyboard events at a higher gaming level.
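
    For anyone curious, creating the WE object from Python is only a couple of
lines. Here is a minimal sketch, assuming pywin32 is installed and that
Window-Eyes registers its COM server under the "GWSpeak.Speak" ProgID (check
your install); the msvcrt call stands in for the Battleship-style keyboard
events, while the Trek game needs a hook module instead:

    import win32com.client
    import msvcrt  # console key events; fine when there is no screen display

    # Create the Window-Eyes COM object (ProgID assumed above).
    we = win32com.client.Dispatch("GWSpeak.Speak")
    we.SpeakString("Game starting")  # SpeakString is the usual speech call

    # Battleship-style input: let Python hand us the key directly.
    key = msvcrt.getch()
    we.SpeakString("You pressed " + key.decode("ascii", "replace"))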

    In the Trek game I wanted sighted people to see a simple grid, whereas in
Battleship everything is described.

    As I had mentioned in my last email posting before getting this one, he is
correct in using C++ for one reason: it is easier to get to the event. But it
requires a lot more detail, which is a pain.

    The biggest problem in any scripting language is getting the memory 
mapping, the structure of that map, and plugging in the numbers required by 
that event. Python has nice structures, but as he says, it is slower because of 
the level at which Python runs.
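
    To make that concrete, here is a sketch of what "getting the structure of
the map" means in Python: you describe the foreign structure with ctypes and
plug in the numbers the event expects. The structure name and fields below are
made up purely for illustration:

    import ctypes

    # Hypothetical layout of an event block in a memory map;
    # the name and fields are illustrative only.
    class EVENTBLOCK(ctypes.Structure):
        _fields_ = [("event_id", ctypes.c_uint32),
                    ("hwnd",     ctypes.c_void_p),
                    ("flags",    ctypes.c_uint32)]

    block = EVENTBLOCK()
    block.event_id = 0x0001  # plug in the number the event requires
    print(ctypes.sizeof(block), "bytes, laid out like the C struct")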

    I wrote, but have not posted, a Clipboard program/app that takes the data
type from the clipboard, and if it is a file listing, I can get the list of
files you have copied to the clipboard, including the path, without copying or
moving the files. That might come in handy when you only want the names from a
folder or path. The key to getting this information is knowing the structure
of the data and where it is located.
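
    A minimal sketch of that trick with pywin32, assuming the files were
copied in Explorer so the clipboard holds the CF_HDROP format:

    import win32clipboard

    win32clipboard.OpenClipboard()
    try:
        # CF_HDROP is the format Explorer uses for copied files;
        # pywin32 hands it back as a tuple of full paths.
        fmt = win32clipboard.CF_HDROP
        if win32clipboard.IsClipboardFormatAvailable(fmt):
            for path in win32clipboard.GetClipboardData(fmt):
                print(path)  # name and path; the file never moves
    finally:
        win32clipboard.CloseClipboard()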

    As a Python user I had worked with the man who wrote most of the UI hooks
or objects for the DOS internal commands. There was an issue with exposing
MSAA, and he was amazed that it did not work with the WE model...even though
he exposed that event...

    I will take a closer look at the events and see what I come up with. Some
can come from WMI, once you know what to call, as I did for the Uninstall app,
which also needs an upgrade: I changed delays into timer events so as not to
slow down initialization of all the WE apps. Those timer events are the async
behavior I am talking about for getting UI events, so you can move on to the
next event without holding up the system.
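
    A rough illustration of both halves of that: the sketch below pulls
installed-program data from WMI and fires it from a timer event instead of a
blocking delay. The Win32_Product query is just one example of "what to call",
and it can be slow on some systems:

    import threading
    import win32com.client

    def list_installed():
        wmi = win32com.client.GetObject(r"winmgmts:\\.\root\cimv2")
        for app in wmi.ExecQuery("SELECT Name, Version FROM Win32_Product"):
            print(app.Name, app.Version)

    # Timer event instead of time.sleep(): initialization keeps going,
    # and the WMI work fires later without holding anything up.
    threading.Timer(2.0, list_installed).start()
    print("initialization continues immediately")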

    To be honest, I have not played much with objects inside the DOM, but that
is going to be my next project, to expand on the "Breaking News" app. Most of
them are standard and easy to control. I have found little difference among
most web pages I have listed in my app tree; I have over 50 web sites I look
at, and most work in terms of getting text. Some purposely want an event
triggered before exposing their data, for tracking and login reasons; some use
global paths, and I do miss a few of them because I base most on an original
path. HTML5 does use objects more, and I just have not played with them yet,
but like the DOM, you have to know the default paths for some of the stuff.
Playing videos and such is much easier using HTML5...
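
    When I do dig into the DOM, this is the sort of thing I mean: a sketch
using the Internet Explorer COM object. The URL is just a placeholder, and a
real page needs a smarter wait than polling Busy:

    import time
    import win32com.client

    ie = win32com.client.Dispatch("InternetExplorer.Application")
    ie.Visible = False
    ie.Navigate("http://example.com")  # placeholder URL
    while ie.Busy:                     # crude wait for the page to load
        time.sleep(0.5)

    # Walk the standard DOM objects and pull out their text.
    for para in ie.Document.getElementsByTagName("p"):
        print(para.innerText)
    ie.Quit()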

    Time to get working outside now, for fall is rapidly approaching and I
want to get my so-called house work done, at least the outside work, before
the snow flies...

        Bruce



  Sent: Monday, September 08, 2014 7:53 AM
  Subject: Bruce And Next Conceptual Modeling


  Hi Bruce: After scraping you get into the domain of actually envisioning how 
a screen reader might work to try and emulate some features programmatically.

  I was talking to a few folks over the weekend about this.

  Here is a good response from one of the fellows who worked on the NVDA 
Project.

  It gives the 5k overview of what they work with; notice he mentions scraping 
and parsing, which is what you are doing in your app.

  Many of the other technical items and structures are handled by WE for the 
most part, and some are exposed in the WE Object Model. My stumble in my last 
attempt at an "External Script" involved low-level handling of keyboard input 
and the fact that the WE model didn't get at the DOM code I needed to make the 
Forms Designer in vb.net read, while native (managed UIA, actually) did expose 
the objects in the designer.

  Anyway, I thought you might get a kick out of seeing the 5k overview of how 
the NVDA Screen Reader is viewed by a programmer and how he envisions some of 
the technicals fitting together from his viewpoint.

  Note I don't use NVDA, never worked on it, and don't agree with everything 
he says - he is just another programmer - but the keywords and technologies 
might be of interest as you develop your programming / scripting skillsets.

  Hi Rick,

  About the purpose of a screen reader versus self-voicing applications: in my 
opinion, it is better to create something that can provide access to the 
majority of applications, with the rest coming in as external modules 
conforming to a set of the screen reader's APIs.  Just as human readers should 
be flexible in handling various document types and conventions, a screen 
reader should be quite flexible to handle most of the controls found in many 
applications and emerging standards.  There's a saying in programming circles 
I like: do it right the first time, then worry about corner cases later.  I 
believe we can apply this concept to screen reader technologies: first get it 
to read what can be read, then worry about corner cases (inaccessible 
controls, apps, standards, etc.) later as time goes by.

  Having worked on the NVDA project like some of us on this list, I can 
confidently say that developing a screen reader is a major undertaking.  In a 
nutshell, a screen reader is a program which interprets elements of the user 
environment in ways which allow blind and visually impaired computer users to 
use the environment effectively.  This process, involving various modules (see 
below), could take months to years to perfect, and the developer of a screen 
reader must be a visionary willing to learn today's standards and provide a 
way to interpret these standards (ARIA, User Interface Automation, threads, 
etc.) for the benefit of his or her screen reader users.

  Taking NVDA as an example, a screen reader is composed of various modules 
which must (and I emphasize the word "must") work together to gather, 
interpret and present user interface elements.  These modules include, in 
order of importance, a remote code communicator and injector (used to retrieve 
useful UI elements from programs), a screen scraper and parser, accessibility 
API handlers and responders (MSAA, UIA, JAB, etc.), input processors 
(keyboard, braille display, touchscreen, speech recognition, etc.) and output 
presenters (speech, braille, motion, etc.).  In addition, a screen reader, 
being privileged software, must be aware of mechanisms provided by the 
operating system for security, memory access, network connectivity, input 
hooks and performance (UI presentation and IPC (inter-process communication)), 
as well as being sensitive to changes in UI environments across OS versions 
and among different accessibility API versions, implementations and compliance 
by the OS and third-party applications (and, in the case of websites, 
compliance with web standards and guidelines on accessibility to ease the work 
of the screen reader in presenting a site's content).
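
  To give a feel for how those modules hang together, here is a toy pipeline 
in Python - purely illustrative, not NVDA's actual code, and the class names 
are invented for the example:

      # Toy pipeline: input processor -> accessibility handler -> output.
      class InputProcessor:
          def read(self):
              return input("key> ")  # stands in for keyboard/braille input

      class AccessibilityHandler:
          def query(self, key):
              # A real handler would ask MSAA/UIA/JAB about the focused element.
              return "button '%s', focused" % key

      class OutputPresenter:
          def present(self, text):
              print(text)  # stands in for speech or braille output

      inp, api, out = InputProcessor(), AccessibilityHandler(), OutputPresenter()
      out.present(api.query(inp.read()))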

  As for coding such software, many would use C++ or another compiled language 
for speed, although some components might be coded in a scripting language 
like Python for ease of maintenance and extension at the cost of performance.  
For a screen reader like NVDA, which uses C++ for the low-level communications 
subsystem (NVDA Helper) and Python for the high-level code, it is important to 
decide which components must run at full speed and which components should be 
easily extendable by developers and users.
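
  The usual shape of that split in Python is a thin scripting layer over a 
compiled library through ctypes.  A small sketch, using stock user32.dll here 
rather than anything NVDA-specific, just to show the pattern:

      import ctypes

      # Same idea as the NVDA Helper split: the compiled layer (here,
      # plain user32.dll) does the fast low-level work; Python wraps it.
      user32 = ctypes.WinDLL("user32", use_last_error=True)
      hwnd = user32.GetForegroundWindow()

      buf = ctypes.create_unicode_buffer(256)
      user32.GetWindowTextW(hwnd, buf, len(buf))
      print("focused window:", buf.value)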

  Although many users would want a screen reader to be a performance champion 
in UI presentation, many would point out that the technology in use (not just 
the screen reading algorithms, most of which are parsing and communications, 
but also the framework in use and its compatibility requirements) would 
prevent the screen reader from taking advantage of the latest capabilities of 
operating systems, hardware features and programming models and interfaces.

  I mention NVDA a lot here because its source code has a lot of potential to 
be studied in academia in developing better screen reading algorithms and to 
provide a historical (and active) example of how a screen reader is developed.

  Hope this helps.

  Cheers,
  Joseph

  Now, Bruce, I have dealt with most of the above technical items at one time 
or another, so I understand what he is trying to say, but I don't agree with 
some of it for a "Targeted" project limited to a single product like Internet 
Explorer or, say, Visual Studio or SQL Management Studio.

  I don't believe in reinventing the wheel, so if I ever do something along 
these lines I would either work in an existing scripting language or create a 
very, very limited project, either self-voicing or using one of the existing 
screen readers' API capabilities.

  Anyway, thought you'd get a kick out of seeing the stuff the screen reader 
guys deal with on a daily basis; you might get some ideas for future research 
if interested.

  I don't use NVDA, as I find WindowEyes usually adequate, and I use Narrator 
if needed to supplement it where Narrator will help.

  Good Hunting:

  Rick USA

