It seems clear to me that VO doesn't use any sort of OSM at all.
OSMs are unreliable and will eventually be rendered mostly
unnecessary. Even Windows screen readers have been moving away from
them by degrees. VoiceOver, from what I can gather, moves from
object to object, or control to control, within the current Window,
obtaining this information from the OS. It has no need to build an
offscreen model, because it has more direct access to the OS, being
an integrated part of it. Offscreen models were primarily used by
Windows screen readers to try to keep track of all the info being
written to the display by Windows as well as running applications,
something that I do not believe at all applies to VO.
The plus side of this, in the long term, will be more accurate and
more reliable access to the OS and applications. The downside, at
present, is that it limits access to older applications running in
OS X that don't use the Cocoa frameworks.
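The contrast between the two architectures can be sketched as a toy model. All class and attribute names below are invented for illustration; neither VoiceOver nor any Windows screen reader exposes an API like this. The point is only the shape of the design: one approach queries the OS's accessibility tree on demand, the other maintains a private cached copy of what was drawn to the screen.

```python
class Control:
    """A node in a toy accessibility tree, as the OS might expose it."""
    def __init__(self, role, label, children=None):
        self.role = role
        self.label = label
        self.children = children or []

def walk(control):
    """Move from control to control, querying the live tree directly --
    roughly the VoiceOver approach: no cached copy of the screen."""
    yield (control.role, control.label)
    for child in control.children:
        yield from walk(child)

class OffscreenModel:
    """A Windows-style OSM: a private buffer rebuilt by intercepting
    text written to the display. Because it is a copy, it can drift
    out of sync with what is actually on screen."""
    def __init__(self):
        self.buffer = []
    def on_draw_text(self, text):
        # Called for every intercepted draw call.
        self.buffer.append(text)

window = Control("window", "Mail", [
    Control("button", "Send"),
    Control("text field", "Subject"),
])
print(list(walk(window)))
# → [('window', 'Mail'), ('button', 'Send'), ('text field', 'Subject')]
```

The direct-query approach has nothing to keep in sync, which is the reliability advantage described above; the OSM only knows about applications whose drawing it managed to intercept.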
On Apr 14, 2007, at 3:49 AM, Joshue O Connor wrote:
It would be really useful to get a definite idea of exactly how VO
works. Rich and Alastair mention that it is possibly a combination of
the DOM and visual navigation, which in this context we could refer to
as the off screen model (OSM). But I would suggest that this may only
apply in the context of browsing the web. When using other screen
readers, say JFW, to browse the web, this is done in a "virtual" mode
using the off screen model. When the user comes across a form element
into which they need to enter some content, they then enter "forms
mode". This is in effect JFW actually interacting with the page itself
and not with a "virtual" version or using the OSM. HAL interacts
directly with the DOM AFAIK and does not use the OSM at all.
Also, when a VO user is doing more OS-level interaction, the mode
must be different. Does it still use the DOM, or other DOM-like
structures, depending on the application being used?
Josh
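The virtual-buffer / forms-mode switch described in the quoted message can be sketched as a toy model. Everything here is invented for illustration and is not JAWS's actual implementation: in "virtual" mode keystrokes navigate a cached copy of the page built from the DOM, while in "forms mode" keystrokes pass through to the live widget on the page itself.

```python
class ScreenReader:
    """Toy sketch of a JAWS-style browse/forms mode toggle."""
    def __init__(self, dom):
        # "Virtual" buffer: a flat copy of the page, built once
        # from the DOM when the page loads.
        self.virtual_buffer = [node["text"] for node in dom]
        self.forms_mode = False
        self.cursor = 0

    def next_item(self):
        """Virtual mode: arrow keys move through the cached buffer,
        not the live page."""
        item = self.virtual_buffer[self.cursor]
        self.cursor = min(self.cursor + 1, len(self.virtual_buffer) - 1)
        return item

    def press_key(self, key, widget):
        """Forms mode: keystrokes go straight to the live widget."""
        if self.forms_mode:
            widget["value"] += key
        # (in virtual mode the same key would navigate the buffer)

dom = [{"text": "Welcome"}, {"text": "Name:"}]
edit = {"value": ""}
sr = ScreenReader(dom)
sr.forms_mode = True   # user landed on a form field
sr.press_key("J", edit)
print(edit["value"])   # → J
```

The design consequence is the one Josh describes: the virtual buffer is a snapshot, so typing has to be routed around it, whereas a reader that talks to the DOM directly (as he says HAL does) never needs the mode switch.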