p.s. In the case of a Braille user, how would the user be notified of a
polite update?  I assume in this case only a notification is provided
rather than automatically changing the POR.  Would a message be forced
onto the Braille display?  And what means would be used to transfer the
POR to the newly updated live region?

Pete

On 7/7/2011 11:34 PM, Pete Brunet wrote:
>
> On 7/7/2011 10:34 PM, James Teh wrote:
>> On 8/07/2011 3:46 AM, Pete Brunet wrote:
>>> The case of text descriptions is similar, i.e. after reading the
>>> description the "return to prior POR" key could cause a return to the
>>> video object (a non-text POR).
>> I really don't think the live region aspect of this is relevant to 
>> braille users. For braille users, scrolling the display if they're 
>> already focused in the description object makes more sense. Being 
>> constantly bounced back and forth between PORs seems to me a fairly 
>> unlikely use case when consuming media. In any case:
> How do you envision non-video live regions working for a Braille user? 
> How would the user get notified of an update and how would the user move
> to the newly updated content and then back to the point of interruption?
>
> One way to handle it would be to have the Braille device contents
> automatically change to that of the live region.  The change of contents
> would be the indication of a live region update.  There would need to be
> a means to get back to the point of interruption, and perhaps a Braille
> key could be used for that function.
>
> In the case of text descriptions there wouldn't be any back and forth.
> While listening to the audio track, the hands would stay on the Braille
> device, reading the text descriptions as they are received; after
> consuming each one, a Braille key would be pressed to request a
> resumption of playback.  That Braille key could be the same one used in
> the case of non-video live regions.
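
To make that concrete, here's a rough sketch of the state the AT would
need to keep: save the POR when a live region update arrives, route the
update text to the Braille display, and restore the saved POR when the
dedicated Braille key is pressed.  The types and names below are invented
purely for illustration, not any particular screen reader's internals.

// Illustrative only: invented types, not any AT's real implementation.
#include <string>

struct PointOfRegard {
    void *accessible;   // object the user was reading
    long  caretOffset;  // offset within it, if it was text
};

class BrailleLiveRegionHandler {
public:
    // A live region (e.g. a text description) has updated.
    void onLiveRegionUpdate(const PointOfRegard &current,
                            const std::wstring &text) {
        savedPOR = current;           // remember the point of interruption
        showOnBrailleDisplay(text);   // the changed contents are the notification
    }

    // Bound to the Braille key that requests a return / resumption.
    PointOfRegard onReturnKey() const {
        return savedPOR;              // the AT restores the POR itself
    }

private:
    void showOnBrailleDisplay(const std::wstring &) { /* device-specific */ }
    PointOfRegard savedPOR{};
};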
>>> There would have to be some way to indicate the prior POR object,
>>> perhaps a new IA2 relation that links back to either the prior form
>>> control or caret offset in the case of a legacy live region or to the
>>> video element in the case of this new concept of a video text
>>> description live region.
>> I don't think the API needs to cover this, not least because the POR 
>> might have been somewhere else entirely (e.g. screen reader review). If 
>> an AT wants to implement this, it should just keep track of the last POR 
>> itself, though as I noted above, I'm not convinced this is really useful.
>>
>>> In the latter case the AT could activate
>>> IAccessibleAction::doAction on the accessible object representing the
>>> video object to signal completion.
>> That's actually a really interesting point and would avoid the need for 
>> a separate method. However, it would require a better action interface 
>> which can accept predefined action constants, as iterating through all 
>> actions is fairly inefficient. I'm thinking something similar to the 
>> proposed relations change.
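
For reference, here's roughly what that iteration looks like today with
the existing IAccessibleAction interface.  This is only a sketch: the
"resumeMedia" action name is made up (nothing like it is defined yet),
and the header name depends on how the generated IA2 headers are packaged.

#include <windows.h>
#include <cwchar>
#include "ia2_api_all.h"   // generated IA2 header; exact name varies by packaging

// videoAction: IAccessibleAction queried from the accessible object for
// the video element.  Returns S_OK if a matching action was invoked.
HRESULT SignalDescriptionConsumed(IAccessibleAction *videoAction)
{
    long nActions = 0;
    HRESULT hr = videoAction->nActions(&nActions);
    if (FAILED(hr))
        return hr;

    // Without predefined constants the AT has to walk every action and
    // compare names, which is the inefficiency noted above.
    for (long i = 0; i < nActions; ++i) {
        BSTR name = nullptr;
        if (SUCCEEDED(videoAction->get_name(i, &name)) && name) {
            // Hypothetical action name; nothing like it is specified today.
            const bool match = (wcscmp(name, L"resumeMedia") == 0);
            SysFreeString(name);
            if (match)
                return videoAction->doAction(i);
        }
    }
    return S_FALSE;  // the video didn't expose a matching action
}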
>>
>> Jamie
>>

-- 
*Pete Brunet*
                                                                
a11ysoft - Accessibility Architecture and Development
(512) 467-4706 (work), (512) 689-4155 (cell)
Skype: pete.brunet
IM: ptbrunet (AOL, Google), [email protected] (MSN)
http://www.a11ysoft.com/about/
Ionosphere: WS4G
_______________________________________________
Accessibility-ia2 mailing list
[email protected]
https://lists.linux-foundation.org/mailman/listinfo/accessibility-ia2
