Hi WebKit team,

I’m curious about the original rationale behind the restriction that
prevents concurrent audible playback of multiple <video> elements. Was it
primarily introduced to save battery life?

In practice, this behavior appears to have unintended side effects. There is
a reproducible way to bypass the restriction: start playback with the video
muted, then unmute it immediately afterwards. However, videos played this
way are often paused again later, seemingly at random and sometimes very
frequently, leading to a “play/pause ping-pong” between Safari/WebKit
pausing playback and JavaScript restarting it. This erratic behavior may
actually *increase* battery consumption, even though playback appears
smooth from the user’s perspective.
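
For concreteness, here is a minimal sketch of the bypass and the restart
loop (the element lookup and error handling are illustrative only):

    // Minimal sketch of the mute/unmute bypass and the restart loop.
    const video = document.querySelector('video');
    if (video) {
      video.muted = true;
      video.play().then(() => {
        // Unmuting right after play() succeeds effectively bypasses the
        // audible-playback restriction.
        video.muted = false;
      });
      video.addEventListener('pause', () => {
        // WebKit may pause the element at any point; blindly restarting
        // playback here (a real app would filter out user-initiated
        // pauses) is what produces the play/pause ping-pong.
        video.play().catch(() => {});
      });
    }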

Even if this workaround is eventually blocked, developers who rely on
concurrent playback (e.g., outside of WebRTC contexts) will turn to more
complex solutions, such as decoding video and audio with WebCodecs and/or
WebAssembly and rendering via <canvas> and an AudioContext. While
technically feasible, these approaches are likely to be significantly less
power-efficient than simply allowing multiple <video> elements to play
concurrently.
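
A rough sketch of that WebCodecs path, assuming an application-level
demuxer that is not shown here; the codec string is a placeholder:

    // Sketch of the WebCodecs fallback. A demuxer (not shown) is assumed
    // to supply EncodedVideoChunk objects; audio would need a parallel
    // AudioDecoder feeding an AudioContext.
    const canvas = document.createElement('canvas');
    const ctx = canvas.getContext('2d');
    const decoder = new VideoDecoder({
      output: (frame) => {
        ctx?.drawImage(frame, 0, 0);
        frame.close(); // frames must be closed promptly to free memory
      },
      error: (e) => console.error('decode error:', e),
    });
    decoder.configure({ codec: 'avc1.42E01E' }); // placeholder H.264 profile
    // for (const chunk of demuxedChunks) decoder.decode(chunk);
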
Another similarly inefficient workaround would be to synthesize a
MediaStream using the VideoTrackGenerator API and
AudioContext.createMediaStreamDestination().
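
Roughly, assuming decoded VideoFrames are piped into the generator’s
writable stream (e.g., by a decoder like the one above):

    // Sketch of the MediaStream-synthesis route (VideoTrackGenerator is
    // the shape WebKit implements; Chromium exposes
    // MediaStreamTrackGenerator instead).
    const generator = new VideoTrackGenerator();
    const audioCtx = new AudioContext();
    const dest = audioCtx.createMediaStreamDestination();
    const stream = new MediaStream([
      generator.track,
      ...dest.stream.getAudioTracks(),
    ]);
    const video = document.createElement('video');
    video.srcObject = stream; // presented to WebKit as a "live" stream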

Lastly, another issue: creating a MediaElementSource from a <video> element
and routing its audio through a shared AudioContext does not lift the
playback restriction, whereas the restriction *is* lifted when the <video>
element itself is muted. This feels inconsistent and may point to a
separate bug.
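
For example, with a shared AudioContext:

    // The inconsistent case: the element's audio is routed through a
    // shared AudioContext, yet the playback restriction still applies,
    // unlike the muted case described earlier.
    const audioCtx = new AudioContext();
    const video = document.querySelector('video');
    if (video) {
      const source = audioCtx.createMediaElementSource(video);
      source.connect(audioCtx.destination);
      video.play().catch(() => {}); // still subject to the restriction
    }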

Could you please clarify the motivation behind this restriction, and
whether there are any plans to revisit or improve its behavior?

Thank you!