[Re-sending my message from April 6 which was emailed but is not showing up in the web interface: apologies to anybody who sees this twice!]
Hi Eric,

I'm an engineer on Chrome, and I've been working with the Polymer team on making sure that we address Custom Element accessibility. I agree with you that it is critically important that we don't leave users with disabilities behind in the Web Components world. I am cautiously optimistic, however.

Firstly, web developers are already a long way down the path of creating complicated, mouse-driven UIs which lack semantic information. Web Components gives us an opportunity to formalize techniques for creating robust custom elements with good accessibility. For example, the ARIA working group are also currently discussing proposals for allowing ARIA to be extended by developers (see the discussion beginning with this email: http://lists.w3.org/Archives/Public/public-pfwg/2014Apr/0034.html) to better support novel types of UI. One problem with ARIA is that it is not supported by the dominant voice control software, Dragon, which is still using very outdated technology - we do have work to do on the standards side, but we need those standards to be supported by the relevant tools as well. There is also work underway to formalize the possible actions and events which may occur on an element: http://www.w3.org/TR/indie-ui-events/ - although this is currently in the early stages and in need of more support from the community.

Secondly, Web Components gives developers a chance to try out novel ways of supporting accessibility-related use cases, as argued by Steve Faulkner: http://blog.paciellogroup.com/2014/04/usability-accessibility-opportunities-web-compenent-world/ - which could, in the future, possibly become part of the standard HTML spec.

Our recent article, http://www.polymer-project.org/articles/accessible-web-components.html, outlines the strategies the Polymer team are using to address accessibility.
Here is how we are trying to address your four points using the technology available today:

> 1) read the state of anything that can be displayed or changed via a
> GUI. this is a getter function

We annotate custom elements with ARIA role, name, state and value properties. This provides the state information which can be queried by speech recognition technology, via platform APIs such as IAccessible2, UI Automation and NSAccessible, and allows you to query the interface by name.

> 2) change the state of anything that can be changed by a GUI. This is a
> putter function.

This is where there is currently a gap in the specifications, and authors are forced to implement their own interactions. The IndieUI spec proposes one possible mechanism for addressing this: http://www.w3.org/TR/indie-ui-events/#intro-example-valuechangerequest . To fill this gap for now, we suggest using data binding to translate user gestures, via keyboard or mouse events, into changes to attributes on the custom elements, which are then reflected in the ARIA attributes accessible via platform APIs.

> 3) do something. This is usually the action associated with a link or
> button but can also drive mouse over or any other event causing an action.

Similarly, acting on an element is currently handled via keyboard and mouse events, and this could be supported at a higher level by something like IndieUI actions (http://www.w3.org/TR/indie-ui-events/#actions). Currently, Polymer elements listen for mouse and keyboard events, which are used to drive actions on the element. As you say, these events can be simulated by assistive technology via the platform accessibility APIs. We do recommend having a robust keyboard story and ensuring that elements are focusable, to avoid having to perform fiddly mouse interactions.

> 4) tell me when something changes. These event notifications allow you
> to use hand/mouse at the same time as speech and it lets the speech
> system stay in sync with what's being displayed.
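To make points 1-3 above concrete, here is a minimal, illustrative sketch - not actual Polymer code. The element behaviour is plain JavaScript, and a small stub stands in for a DOM element so the pattern is visible outside the browser; all the names here (makeElement, createFancySlider) are invented for illustration.

```javascript
// Stub exposing the small slice of the Element API the sketch needs.
function makeElement() {
  const attrs = {};
  return {
    setAttribute: (name, value) => { attrs[name] = String(value); },
    getAttribute: (name) => (name in attrs ? attrs[name] : null),
  };
}

// A custom slider-like control. The ARIA role/state/value annotations are
// what platform APIs (IAccessible2, UI Automation, NSAccessible) surface
// to assistive technology such as speech recognition software.
function createFancySlider(el, { min = 0, max = 100, value = 50 } = {}) {
  // 1) Getter: state is always readable from the ARIA attributes.
  el.setAttribute('role', 'slider');
  el.setAttribute('tabindex', '0');        // keyboard focusable
  el.setAttribute('aria-valuemin', min);
  el.setAttribute('aria-valuemax', max);
  el.setAttribute('aria-valuenow', value);

  // 2) Putter: any change to the value is reflected back into ARIA state,
  //    so platform APIs always see the current value.
  function setValue(v) {
    const clamped = Math.min(max, Math.max(min, v));
    el.setAttribute('aria-valuenow', clamped);
  }

  // 3) Action: one handler serves real keyboard input and events
  //    simulated by assistive technology alike.
  function onKeyDown(key) {
    const current = Number(el.getAttribute('aria-valuenow'));
    if (key === 'ArrowRight') setValue(current + 1);
    if (key === 'ArrowLeft') setValue(current - 1);
  }

  return { setValue, onKeyDown };
}
```

In a real element the keydown/mouse listeners would call these functions, and data binding would keep the ARIA attributes in sync with the element's internal value.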
ARIA provides an aria-live attribute, which is also implicit in certain role values, to notify users when the content of certain regions changes.

I would greatly appreciate it if you could let us know if you see any areas where we could improve this strategy to address your needs. Also, it would help to hear more about your specific experience: what browser and assistive technology are you using? Are there any sites which work well for you now which could be used as a model for best practices?

Thanks,
Alice

On Thursday, April 10, 2014 8:17:17 PM UTC-7, Kaleb Hornsby wrote:
>
> I've been experimenting with aria attributes in Polymer elements, and they
> seem to do OK. Eric, what have your experiences been with polymer and
> WAI-ARIA?
>
> On Monday, April 7, 2014 9:15:21 PM UTC-4, Eric wrote:
>>
>> As a disabled person using speech recognition for the past 20 years, I
>> see absolutely nothing in your project addressing my needs and I'm
>> feeling left out. in fact, you are reinventing the same old
>> accessibility framework that was first published in roughly the early
>> 1990s. gluing accessibility to a GUI which at best is a horrible
>> mismatch.
>>
>> Here's what I need as a speech recognition user.
>>
>> 1) read the state of anything that can be displayed or changed via a
>> GUI. this is a getter function
>> 2) change the state of anything that can be changed by a GUI. This is a
>> putter function.
>> 3) do something. This is usually the action associated with a link or
>> button but can also drive mouse over or any other event causing an
>> action.
>> 4) tell me when something changes. These event notifications allow you
>> to use hand/mouse at the same time as speech and it lets the speech
>> system stay in sync with what's being displayed.
>>
>> By these four simple interfaces, you eliminate the mismatch between a
>> GUI and a speech interface.
>> No longer do you have to navigate through
>> menus and clicks and links to get to a data element which you can't even
>> speak because it has no name. Your grammar can short-circuit all that
>> and go change the data element in question.
>>
>> Remember, GUI navigation is tall and narrow; speech navigation is wide
>> and shallow. Don't try to force one to be the other.
>>
>> If you look at the other accessibility requirements, you will see
>> similar mismatches, and this API set, or something very similar to it,
>> would make it possible to address the needs of the new accessibility
>> interface and provide a simpler, more direct interface.

---
To view this discussion on the web visit https://groups.google.com/d/msgid/polymer-dev/355f2ff1-501b-4ab7-90ec-3c5040bf76be%40googlegroups.com.
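For anyone following this thread, the aria-live approach mentioned above for point 4 (change notification) can also be sketched briefly. This is illustrative only: in markup the region would be something like `<div role="status" aria-live="polite">`, and here a stub object stands in for the element so the pattern runs outside a browser; the function names are invented.

```javascript
// Sketch of point 4: an ARIA live region. Assistive technology announces
// updates to the region's text content without moving focus, keeping the
// speech system in sync with what's being displayed.
function makeLiveRegion(politeness = 'polite') {
  return {
    attributes: { 'aria-live': politeness, role: 'status' },
    textContent: '',
  };
}

// Writing new text into the region is the whole protocol: the browser's
// accessibility layer observes the mutation and notifies the AT.
function announce(region, message) {
  region.textContent = message;
}
```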
