=Summary/benefits:

"The AudioWorklet object allows developers to supply scripts
 (such as JavaScript or WebAssembly code) to process audio on the
 rendering thread, supporting custom AudioNodes." [[Concepts]]

Allowing scripts to process audio on the rendering thread is
important for low-latency, general client-side generation or
processing of audio, free from the problems of main-thread delays.
This is what game developers, for example, have wanted for some
time.  Other parts of the Web Audio API have been presented in
the past as solutions, but they were not a good fit in general.

A MessageChannel permits, for example, decoding audio on a web
worker thread and delivering it directly to the audio thread,
with no main-thread involvement.
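As a sketch of that pattern (all names here are illustrative, not
from the API): one port of a MessageChannel is transferred to a
decoder worker and the other to the AudioWorkletNode, so decoded
frames flow worker-to-audio-thread without touching the main thread.

```javascript
// Create a channel whose two ports will be handed to a decoder
// worker and to an AudioWorkletNode's processor, respectively.
function makeDecoderChannel() {
  const { port1, port2 } = new MessageChannel();
  return { decoderPort: port1, processorPort: port2 };
}

// Main-thread wiring, in a browser (sketch only; decoderWorker and
// node are hypothetical):
//
//   const { decoderPort, processorPort } = makeDecoderChannel();
//   decoderWorker.postMessage({ port: decoderPort }, [decoderPort]);
//   node.port.postMessage({ port: processorPort }, [processorPort]);
//
// The worker then posts decoded Float32Arrays on its port, and the
// AudioWorkletProcessor receives them in an onmessage handler that
// runs on the rendering thread.
```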

Custom AudioNodes would be important as a way for developers to
extend the Web Audio API.  Ideally, we would then no longer even
need to add more special-purpose nodes to the API surface.  See,
however, the Usability/interoperability concerns below.
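For concreteness, a minimal custom node might look like the
following sketch (the "noise-processor" name and the split into a
plain kernel function are illustrative).  registerProcessor and
AudioWorkletProcessor exist only inside an AudioWorkletGlobalScope,
hence the guard around the registration.

```javascript
// Per-block DSP kernel: fill one channel with white noise in [-1, 1).
function fillNoise(channel) {
  for (let i = 0; i < channel.length; i++) {
    channel[i] = Math.random() * 2 - 1;
  }
  return channel;
}

// Worklet-side registration (only meaningful on the rendering thread).
if (typeof registerProcessor === "function") {
  registerProcessor("noise-processor", class extends AudioWorkletProcessor {
    process(inputs, outputs) {
      for (const channel of outputs[0]) {
        fillNoise(channel);
      }
      return true; // keep rendering
    }
  });
}
```

On the main thread, the node would then be created with
`await context.audioWorklet.addModule("noise-processor.js")`
followed by `new AudioWorkletNode(context, "noise-processor")`,
and connected like any other AudioNode.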

Some work has already landed in Gecko, but I'm not aware of a
previous explicit intent to implement.

=Bug:
https://bugzilla.mozilla.org/show_bug.cgi?id=1062849

=Link to standard:
https://webaudio.github.io/web-audio-api/#audioworklet

=Platform coverage:
Desktop + Android.

=Estimated or target release:
When ready,
which may require resolution of GC/interoperability concerns below.

=Preference behind which this will be implemented: 
dom.audioWorklet.enabled and dom.worklet.enabled

=Is this feature enabled by default in sandboxed iframes?
No.
=If not, is there a proposed sandbox flag to enable it?
Yes, "allow-scripts".

=DevTools bug:
https://bugzilla.mozilla.org/show_bug.cgi?id=1458445

=Do other browser engines implement this? 
Blink: since Chrome 66, Opera 53.
https://www.chromestatus.com/feature/4588498229133312
Edge: bug assigned.
https://developer.microsoft.com/en-us/microsoft-edge/platform/issues/15812544/
WebKit: no indication of intent.
https://bugs.webkit.org/show_bug.cgi?id=182506

=web-platform-tests:
https://github.com/w3c/web-platform-tests/tree/master/webaudio/the-audio-api/the-audioworklet-interface

=Secure contexts:
Yes.

=Usability/interoperability concerns:

AudioNodes are typically set up, scheduled, and forgotten.  Once
they have finished what they were scheduled to do and upstream
nodes have also finished, their associated resources can be
reclaimed, though some effects on downstream nodes remain.

AudioWorkletNode, as currently specified, is different, however.
There is provision, through an [[active source]] flag, for an
AudioWorkletProcessor to indicate that, if there are no further
inputs, it no longer needs to perform further processing.
However, the client needs to disconnect the inputs when they have
finished.  If the input nodes are simply forgotten (as is
typical), then processing continues indefinitely (unless the
whole AudioContext is stopped).  The need for clients to keep
track of whether inputs to these nodes have finished makes
AudioWorkletNodes with inputs second-class nodes in practice.
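To make the required bookkeeping concrete, a client currently has
to do something like the following (connectUntilEnded is a
hypothetical helper, not part of the API):

```javascript
// Hypothetical helper: connect a scheduled source to an
// AudioWorkletNode and disconnect it again once it has finished,
// so that the processor's [[active source]] flag can take effect
// and process() calls can stop.
function connectUntilEnded(source, workletNode) {
  source.connect(workletNode);
  source.addEventListener("ended", () => source.disconnect(workletNode), {
    once: true,
  });
}
```

Without such an explicit disconnect, i.e. with the usual
set-up-and-forget pattern, process() keeps being called for the
lifetime of the AudioContext.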

A solution based on silent input rather than connection count was
proposed in https://github.com/WebAudio/web-audio-api/issues/1453
but this appears to have been rejected.

It seems that Chrome works around this by choosing to garbage
collect input nodes even when their presence is specified to
require (observable) AudioWorkletProcessor.process() calls.
The garbage collection is performed in a way that halts the
process() calls (which stops sound production), so the
AudioWorkletProcessor can subsequently also be garbage collected
if there are no rooted references, as usual.

Having the behavior of an AudioWorkletProcessor depend on whether
or not the client maintains references to input nodes is not
something I'd like to implement.  It would be comparable to an
audio element stopping playback whenever an implementation
chooses to perform garbage collection after the client has
removed its last reference.  It is contrary to
[[TAG design principles]].
The Chrome approach seems to be based on a different understanding
of [[AudioNode Lifetime]].

Because Chrome reclaims CPU and memory resources even when the
client does not disconnect inputs from an AudioWorkletNode,
authors are likely to forget to track input nodes, in which case
their applications will have performance problems in
implementations with deterministic behavior.

=Security/privacy concerns:

The audio thread runs with reasonably precise timing, providing a
clock edge.

The additional surface from AudioWorklet is that author script
can run on the audio thread, at that precise time, and send
messages to other threads, rather than there being merely
browser-internal messages from the audio thread to the main
thread.

This may be mitigated to some extent by slightly reduced precision
from [[AudioIPC]] and perhaps by using tail dispatch for messages
from the audio thread.

[[Concepts]] https://webaudio.github.io/web-audio-api/#AudioWorklet-concepts
[[active source]] https://webaudio.github.io/web-audio-api/#active-source
[[TAG design principles]] https://w3ctag.github.io/design-principles/#js-gc
[[AudioNode Lifetime]] https://github.com/WebAudio/web-audio-api/issues/1471
[[AudioIPC]] https://bugzilla.mozilla.org/show_bug.cgi?id=1362220
_______________________________________________
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform
