On 13-01-22 6:34 PM, Adam Roach wrote:

> I have to admit that [WebVTT's] relationship to WebRTC isn't
> immediately obvious (at least to me). Could you give a short
> executive summary of how you see them interacting?

AFAIK they don't interact at all, currently.

The 'webvtt' project we're talking about is our implementation of the
HTML <track> element. That's several pieces: a JavaScript API for
manipulating text tracks, i.e. textual data associated with a playback
timeline; a file format for expressing those tracks; and native controls
for selecting among available text tracks and displaying them in sync
with an HTML <video> element. This all assumes traditional delivery of
static media files over HTTP.

<video src=somefile.webm controls>
 <track src=somefile.vtt>
</video>

WebRTC is a framework for sending audio, video, and data streams
directly between web user agents. It focusses on low latency and does
not use HTTP for stream transmission, although the audio and video
streams can be played through HTML <video> and <audio> elements.

While there has been some discussion of supporting text streams, TTY,
caption data, etc. in WebRTC, my understanding is that there's no
consensus to standardize any of them at this time, so there's no direct
connection between the two features.

What can be done is to send WebVTT 'cues', as the individual timeline
elements are called, over a WebRTC data connection, and then use the
TextTrack API to insert them into the playback context of a <video>
element. This is only a few lines of code, and can simplify
implementation of any particular subtitle or captioning delivery in the
context of WebRTC.
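As a rough sketch of what that glue code could look like (the channel,
track label, and variable names here are illustrative assumptions, not
anything specified by WebRTC or WebVTT):

```javascript
// Sender side: serialize a cue's timing and payload as JSON and push it
// over an RTCDataChannel (or anything with a send() method).
function sendCue(channel, startTime, endTime, text) {
  channel.send(JSON.stringify({ startTime, endTime, text }));
}

// Receiver side: parse the message and insert it into a TextTrack as a
// VTTCue, so the <video> element's native display machinery shows it.
function receiveCue(message, track) {
  const { startTime, endTime, text } = JSON.parse(message);
  track.addCue(new VTTCue(startTime, endTime, text));
}

// Browser-only wiring (assumes an existing videoEl and dataChannel):
// const track = videoEl.addTextTrack('captions', 'Live captions', 'en');
// track.mode = 'showing';
// dataChannel.onmessage = (e) => receiveCue(e.data, track);
```

The cues then render in sync with the video's timeline exactly as cues
loaded from a static .vtt file would.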

Does that help with the relative context?
 -r
_______________________________________________
dev-media mailing list
[email protected]
https://lists.mozilla.org/listinfo/dev-media
