Hi everyone,

I have been looking at how the video element might work in an adaptive
streaming context where the available media are specified by some kind of
manifest file (e.g. an MPEG DASH Media Presentation Description) rather than
in HTML.

In this context there may be several choices available as to what to present,
many but not all of them related to accessibility:
- multiple audio languages
- text tracks in multiple languages
- audio description of video
- video with open captions (in various languages)
- video with sign language
- audio with director's commentary
- etc.

It seems natural that for text tracks, loading the manifest could cause the 
video element to be populated with associated <track> elements, allowing the 
application to discover the choices and activate/deactivate the tracks.
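
For example, something along the following lines (a rough sketch only,
assuming the tracks surface through the existing HTMLMediaElement.textTracks
list and that the script runs after the manifest has been parsed) is the kind
of discovery and selection I have in mind:

  const video = document.querySelector('video');
  if (video) {
    // Discover the choices the manifest exposed as text tracks.
    for (const track of Array.from(video.textTracks)) {
      console.log(track.kind, track.language, track.label);
    }
    // Activate, say, the French subtitles and disable everything else.
    for (const track of Array.from(video.textTracks)) {
      track.mode = (track.kind === 'subtitles' && track.language === 'fr')
        ? 'showing'
        : 'disabled';
    }
  }

If the <track> elements are also added to the DOM as children of the video
element, the same information could equally be read from the markup.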

But this covers only text tracks. I know discussions are underway on what to
do for other media types, but my question is whether it would be better to have
a consistent solution for selection amongst the available media that applies
to all media types?
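
Purely to illustrate what I mean by "consistent" (the names below are
hypothetical, not existing or proposed API), selection amongst audio and video
alternatives could then look just like the text track case:

  // Hypothetical shape for any alternative listed in the manifest,
  // whether it is a text, audio or video rendition.
  interface MediaAlternative {
    kind: string;      // e.g. 'main', 'description', 'sign', 'commentary'
    language: string;  // BCP 47 language tag
    label: string;
    enabled: boolean;
  }

  // The application enumerates and selects the same way for every media type.
  function selectAlternative(list: MediaAlternative[],
                             kind: string, language: string): void {
    for (const alt of list) {
      alt.enabled = (alt.kind === kind && alt.language === language);
    }
  }

  // e.g. selectAlternative(video.audioAlternatives, 'description', 'en');
  // where audioAlternatives is, again, purely hypothetical.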

Thanks,

Mark Watson
Netflix
