Hi Rich, I was referring to Sylvia's request to handle the streaming
of text descriptions, which are the text equivalent of the audio
descriptions used to describe visual content that is not obvious
from a video's audio track. The text descriptions are not displayed;
they are meant for consumption by an AT. The UA's video renderer
would signal the availability of each new text description as it is
encountered in the video track, and the UA would then fire a
WinEvent.

This is a non-traditional scenario for a Braille user, but it's
similar to the case of a live region, so I was interested in the live
region UX for the Braille user. After reading what Jamie said, I
think the experience would be: the user enables text descriptions via
a standard UI mechanism, activates a video via another standard UI
mechanism, and then places his hands on the Braille display waiting
for the text descriptions. After reading a text description, the user
would signal readiness for more content (because the UA pauses the
video until the TTS is done or the Braille user is done).

I don't know how that would be accomplished. What UI means have
Braille device managers provided for the Braille user to move the POR
(point of regard) from the live region back to the prior POR? Perhaps
in the case of text descriptions the same method could be used to
signal that the user is ready for the video to continue.

Pete

On 7/6/2011 8:00 AM, Richard Schwerdtfeger wrote:
--
Pete Brunet
a11ysoft - Accessibility Architecture and Development
(512) 467-4706 (work), (512) 689-4155 (cell)
Skype: pete.brunet
IM: ptbrunet (AOL, Google), [email protected] (MSN)
http://www.a11ysoft.com/about/
Ionosphere: WS4G
_______________________________________________
Accessibility-ia2 mailing list
[email protected]
https://lists.linux-foundation.org/mailman/listinfo/accessibility-ia2
