On 5/1/07, Willie Walker <[EMAIL PROTECTED]> wrote:
> > I know pretty much everything I need to know on the compiz-side of
> > things, but accessibility is new to me, and I'm hoping for a few
> > pointers for where to look for information, or rather what
> > information to look for.
>
> The current API for driving a magnifier is the gnome-mag API. It's a
> pretty complex API, and I think there may be opportunity for
> improvement with something such as Compiz. From my standpoint (that of
> a consumer), the main concepts behind gnome-mag are:
>
> 1) The magnifier is a system service to be accessed by an assistive
> technology (AT). AT's discover (and activate if necessary) the
> gnome-mag magnifier and talk to it.
>
> 2) Each AT needs to write its own configuration GUI for the magnifier.
>
> 3) The AT tells the magnifier to create one or more 'zoomers', each of
> which magnifies an area of the display. Each zoomer can have various
> settings associated with it, and the AT can update the region of
> interest for each zoomer at any time.
>
> 4) In use, the AT tells each zoomer to magnify an area of the screen,
> typically in response to mouse, object focus, caret movement events,
> and/or speech progress.
>
> With Compiz, I think there's opportunity for the magnifier to be more
> autonomous, but continue to offer itself up for interaction with other
> assistive technologies.
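To make the zoomer / region-of-interest idea above concrete, here is a minimal sketch of the geometry involved. This is my own illustration (the function name and signature are invented, not gnome-mag's actual API): given a tracked point, a zoom factor, and the zoomer's viewport size, it works out which source rectangle to magnify, clamped to the screen.

```python
def region_of_interest(cx, cy, zoom, view_w, view_h, screen_w, screen_h):
    """Compute the source rectangle a zoomer should magnify.

    (cx, cy) is the point being tracked (mouse, focus, or caret),
    zoom is the magnification factor, and (view_w, view_h) is the
    size of the zoomer's on-screen viewport. The result is clamped
    to the screen so the magnified area never leaves the display.
    """
    # The source rectangle shrinks as the zoom factor grows.
    src_w = view_w / zoom
    src_h = view_h / zoom
    # Center the rectangle on the tracked point ...
    x = cx - src_w / 2
    y = cy - src_h / 2
    # ... then clamp it to the screen bounds.
    x = max(0, min(x, screen_w - src_w))
    y = max(0, min(y, screen_h - src_h))
    return (x, y, src_w, src_h)
```

An AT updating the region of interest would, in gnome-mag terms, amount to recomputing this rectangle on every tracked event and handing it to the zoomer.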
One of the things I wish to accomplish is to let users who otherwise
wouldn't use ATs get the benefits of this project. The benefit for the
people who truly need this would be that the zoom functionality would
receive a lot more attention, both with regard to development time and
feedback on features. I can already do almost everything with Compiz's
existing functionality (tracking focus, mouse movement, and so on), but
the things I can't achieve with Compiz alone are caret movement and the
speech progress you speak of.

> As a loose analogy to this, one might consider what is done with
> BrlTTY/BrlAPI. BrlTTY runs as its own process, providing interaction
> with braille displays, most notably for virtual consoles. BrlAPI
> provides an interface for other applications to access the braille
> display while BrlTTY is running. By default, when Orca detects BrlTTY
> is running, it assumes the user wants braille and goes ahead and uses
> it.
>
> Something similar could be done with Compiz. For example:
>
> 1) Compiz could provide its own sophisticated GUI for the magnifier,
> allowing for at least what (and perhaps more than) could be
> accomplished with gnome-mag. This UI would also include the ability to
> create/destroy/modify new magnifier views of the display. I suspect a
> typical use case would be that the user has a main 'dynamic' magnified
> region that tracks where they are on the display and 0 or more
> 'static' magnified regions that monitor fixed areas on the display.
> This would be a HUGE improvement, and would not require all other AT's
> to write their own magnifier GUI.

Compiz has never had the ability to magnify anything except the entire
screen with the zoom plugin(s), so this will be interesting, but it
shouldn't take too much time. I can imagine a few different attributes
for a magnified area:

- Completely static. This would stay focused on the same area,
  regardless of what happens elsewhere.
- Follow focus.
- Follow caret/mouse.
- Follow both focus and caret/mouse.
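Those attributes could be modelled roughly like this. All names here are my own invention for the sake of the sketch, not existing Compiz or gnome-mag identifiers:

```python
from enum import Enum

class TrackMode(Enum):
    STATIC = "static"            # stays on a fixed area, regardless
    FOCUS = "focus"              # follows window/object focus
    POINTER = "pointer"          # follows caret/mouse movement
    FOCUS_AND_POINTER = "both"   # follows both kinds of events

def pick_target(mode, static_pt, focus_pt, pointer_pt, last_event):
    """Decide which point a magnified area should center on.

    last_event is "focus" or "pointer", naming the most recent event,
    so FOCUS_AND_POINTER can follow whichever source moved last.
    """
    if mode is TrackMode.STATIC:
        return static_pt
    if mode is TrackMode.FOCUS:
        return focus_pt
    if mode is TrackMode.POINTER:
        return pointer_pt
    # FOCUS_AND_POINTER: follow the most recent of the two sources.
    return focus_pt if last_event == "focus" else pointer_pt
```

The interesting design question is what FOCUS_AND_POINTER should do when the two sources disagree; following whichever moved last is one plausible answer, and exactly the kind of thing end users would have to weigh in on.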
I imagine writing (or reading) while zoomed in, and then suddenly being
thrown to a different area because of a focus change, would be a bit of
an annoyance. Separate tracking modes would let users keep a generic
region that alerts them to focus changes (or to new windows, in the
case of focus stealing prevention). Since this just occurred to me, I
haven't thought it out too thoroughly, but it shouldn't be too hard to
implement; I'm just concerned with the usability of it.

> 2) Compiz itself could more easily provide the support for the
> various mouse tracking methods rather than requiring the assistive
> technology to do so.

I think this is the logical thing to do, especially since the zoom
plugin will have to keep track of the mouse for input-enabled zoom
anyway. In Compiz's case, I don't think it makes any sense to let
another application tell it where the mouse is. Compiz knows this
better than anything else.

> 3) Compiz could expose itself as a simpler service to the screen
> reader, with an API that provides mostly hints/recommendations about
> the area of the screen to magnify. One would need to engage end users
> in a discussion about what they really want to achieve, but I'd guess
> that the API would be more for suggesting where the 'dynamic'
> magnified region would go. Again, talking with end users would be a
> requirement.

This is where I draw a blank; I'll have to play with gnome-mag and Orca
a bit to really understand what Compiz should and shouldn't need to
expose to other ATs. Most of the users I've spoken to are not in need
of ATs, so their concerns are, I would imagine, quite different from
those of the people who need this the most.

> 4) If need be, the assistive technology could provide font/text hints
> about the portions of the screen being magnified, but I suspect
> Compiz itself could probably accomplish this by using the AT-SPI
> directly.

It would make sense to let Compiz use AT-SPI directly to fetch this, if
possible.
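For the caret-movement side of this, a rough sketch of what consuming AT-SPI events could look like. The event name "object:text-caret-moved" and the pyatspi calls are real AT-SPI pieces, but everything else (the helper, the margin, how a magnifier would wire this in) is only my assumption about how Compiz-side code might use them:

```python
def caret_extents_to_roi(x, y, w, h, margin=20):
    """Pad the caret's on-screen extents so the zoomer shows some
    surrounding context, not just the bare caret rectangle.
    (Pure helper; hypothetical, not part of any existing API.)"""
    return (x - margin, y - margin, w + 2 * margin, h + 2 * margin)

def run_caret_tracker():
    # pyatspi delivers "object:text-caret-moved" events whenever the
    # caret moves in an accessible text widget; the caret's screen
    # extents come from the source object's Text interface.
    import pyatspi

    def on_caret_moved(event):
        text = event.source.queryText()
        x, y, w, h = text.getCharacterExtents(text.caretOffset,
                                              pyatspi.DESKTOP_COORDS)
        roi = caret_extents_to_roi(x, y, w, h)
        print("magnify", roi)  # a real magnifier would retarget a zoomer

    pyatspi.Registry.registerEventListener(on_caret_moved,
                                           "object:text-caret-moved")
    pyatspi.Registry.start()  # blocks, pumping AT-SPI events

# run_caret_tracker() would be invoked by the magnifier process;
# it blocks inside Registry.start().
```

Speech progress would presumably work the same way, keyed off whatever events the screen reader emits rather than caret movement.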
But if we're taking hints from an assistive technology, we could
probably fetch this from it too. I'm inclined to go for a solution
where both options would be possible, at least at an early stage.

> I know I'm going to repeat myself, but I think now is the time to
> really engage end users in terms of gathering their requirements and
> then develop the architecture from that.
>
> Hope this helps. You are going to be exciting and useful stuff for
> the community.

It helps a lot. I have a meeting scheduled with Henrik tomorrow to
discuss this, so I'll hopefully have an even better idea of how I'll go
about this project then, but any input is helpful.

--
Regards, Kristian

--
Ubuntu-accessibility mailing list
Ubuntu-accessibility@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-accessibility