On Tue, 07 Jul 2009 03:44:25 +0200, Charles Pritchard <ch...@jumis.com> wrote:

Ian Hickson wrote:
On Mon, 6 Jul 2009, Charles Pritchard wrote:

Ian Hickson wrote:

On Mon, 6 Jul 2009, Charles Pritchard wrote:

This is on the list of things to consider in a future version. At this point I don't really want to add new features yet because otherwise we'll never get the browser vendors caught up to implementing the same spec. :-)

Consider a programmable <audio> element as a priority.

Could you elaborate on what your use cases are? Is it just the ability to manually decode audio tracks?

Some users could manually decode a Vorbis audio stream.

I'm interested in altering pitch and pre-mixing channels. I believe some of these things are explored in CSS already.

There are accessibility use cases for the visually impaired, and I think those will be explored further.


If you could elaborate on these use cases that would be really useful. How do you envisage using these features on Web pages?

Use a sound of varying pitch to hint at the location of the user's mouse (is it hovering over a button, is it x or y pixels away from the edge of the screen, how close is it to the center?).

Alter the pitch of a sound to make a very cheap MIDI instrument.

Pre-mix a few generated sounds, because the client processor is slow.

Alter the pitch of an actual audio recording, and pre-mix it, to give different-sounding voices to pre-recorded readings of a single text, as has been tried with "male" and "female" sound fonts.

Support very simple audio codecs, and programmable synthesizers.
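
Concretely, the pitch-altering and pre-mixing cases above mostly reduce to resampling and summing blocks of PCM samples. A rough sketch in plain JavaScript (the helper names are made up, nothing here is standardized):

    // Naive pitch shift by linear-interpolation resampling.
    // samples: array of PCM values in [-1, 1]; ratio > 1 raises the
    // pitch (and shortens the clip), ratio < 1 lowers it.
    function resample(samples, ratio) {
      var out = [];
      for (var pos = 0; pos < samples.length - 1; pos += ratio) {
        var i = Math.floor(pos);
        var frac = pos - i;
        out.push(samples[i] * (1 - frac) + samples[i + 1] * frac);
      }
      return out;
    }

    // Pre-mixing is then just summing (and clamping) sample by sample.
    function mix(a, b) {
      var out = [];
      for (var i = 0; i < Math.min(a.length, b.length); i++)
        out.push(Math.max(-1, Math.min(1, a[i] + b[i])));
      return out;
    }

    // Example: shift a recorded voice up three semitones.
    // var higherVoice = resample(voiceSamples, Math.pow(2, 3 / 12));

All of that is cheap arithmetic over a sample buffer; what is missing is a way to get such a buffer into (or out of) the <audio> element.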

The API must support a playback buffer.

putAudioBuffer(in AudioData)
    [error if the AudioData properties are not supported]
createAudioData(in sampleHz, bitsPerSample, length)
    [error if the properties are not supported]
AudioData(sampleHz, bitsPerSample, length, AudioDataArray)
AudioDataArray(length, IndexGetter, IndexSetter), 8 bits per property

I think that's about it.
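
To make the shape of that concrete, this is roughly how I would expect a script to use it, assuming the methods live on the <audio> element (the names follow the strawman above, the sampleHz/array property names are guesses, and none of this exists in any browser):

    // Hypothetical use of the strawman API; nothing here is implemented.
    var audio = document.getElementsByTagName('audio')[0];
    // One second of 8-bit, 44100 Hz audio.
    var data = audio.createAudioData(44100, 8, 44100);
    for (var i = 0; i < data.length; i++) {
      // 440 Hz sine wave as unsigned 8-bit samples.
      data.array[i] = Math.round(127.5 + 127.5 * Math.sin(2 * Math.PI * 440 * i / 44100));
    }
    audio.putAudioBuffer(data);  // would throw if the format is unsupported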

(There has been some discussion of supporting an "audio canvas" before, but a lack of compelling use cases has really been the main blocker. Without a good understanding of the use cases, it's hard to design an API.)



For all of the simpler use cases, you can already generate sounds yourself with a data URI. For example, this is 2 samples of silence: "data:audio/wav;base64,UklGRigAAABXQVZFZm10IBAAAAABAAEARKwAAIhYAQACABAAZGF0YQQAAAAAAAAA".
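
To sketch how far that gets you (a hand-rolled RIFF header, mono 8-bit PCM assumed, and playSamples is just a name I made up):

    // Build a mono, 8-bit PCM WAV file in a string and play it via a data URI.
    // samples: array of unsigned bytes (0-255); rate: samples per second.
    function playSamples(samples, rate) {
      function u32(n) {  // 32-bit little-endian integer as a 4-char string
        return String.fromCharCode(n & 255, (n >> 8) & 255,
                                   (n >> 16) & 255, (n >> 24) & 255);
      }
      function u16(n) {  // 16-bit little-endian integer as a 2-char string
        return String.fromCharCode(n & 255, (n >> 8) & 255);
      }
      var data = String.fromCharCode.apply(null, samples);
      var wav = 'RIFF' + u32(36 + data.length) + 'WAVE' +
                'fmt ' + u32(16) + u16(1) + u16(1) +       // PCM, one channel
                u32(rate) + u32(rate) + u16(1) + u16(8) +  // byte rate, block align, bits/sample
                'data' + u32(data.length) + data;
      new Audio('data:audio/wav;base64,' + btoa(wav)).play();
    }

    // One second of a 440 Hz tone at 8 kHz:
    // var tone = [];
    // for (var i = 0; i < 8000; i++)
    //   tone.push(Math.round(127.5 + 127.5 * Math.sin(2 * Math.PI * 440 * i / 8000)));
    // playSamples(tone, 8000);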

It might be worthwhile to implement the API you want as a JavaScript library and see if you can actually do useful things with it. If the use cases are compelling and require native browser support to be performant enough, perhaps it could go into a future version of HTML.
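
For instance, a thin shim over the data URI trick above could mimic the proposed names (again purely hypothetical, and 8-bit only):

    // Library-level stand-in for the strawman API, built on playSamples()
    // from the sketch above. Names mirror the proposal; nothing is native.
    function createAudioData(sampleHz, bitsPerSample, length) {
      if (bitsPerSample !== 8)
        throw new Error('this shim only handles 8-bit samples');
      return { sampleHz: sampleHz, bitsPerSample: bitsPerSample,
               length: length, array: new Array(length) };
    }
    function putAudioBuffer(audioData) {
      playSamples(audioData.array, audioData.sampleHz);
    }

That would be enough to try the use cases above and see where script alone falls short.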

--
Philip Jägenstedt
Core Developer
Opera Software
