Title: [113769] trunk
Revision
113769
Author
[email protected]
Date
2012-04-10 14:28:57 -0700 (Tue, 10 Apr 2012)

Log Message

AudioParam must support connections from audio-rate signals
https://bugs.webkit.org/show_bug.cgi?id=83524

Source/WebCore:

Reviewed by Eric Carlson.

In the Web Audio API, it's possible to connect one AudioNode to another AudioNode.
Similarly, we should allow an AudioNode to connect to an AudioParam, thus controlling
a parameter with an audio-rate signal.  This is important in many audio processing
applications.
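
The behavior this enables can be modeled numerically (a plain JavaScript sketch; `applyAudioRateGain` is an illustrative name, not WebKit API). The control signal's per-sample values take the place of the parameter's static value, so a constant-1 source scaled by a linear ramp reproduces the ramp exactly, which is precisely what the new layout test checks:

```javascript
// Numerical model of audio-rate parameter control: the control signal's
// per-sample values replace the parameter's static value.
function applyAudioRateGain(sourceData, gainControlData) {
    const out = new Float32Array(sourceData.length);
    for (let i = 0; i < sourceData.length; ++i)
        out[i] = sourceData[i] * gainControlData[i];
    return out;
}

// A constant-1 source scaled by a linear ramp reproduces the ramp exactly.
const n = 8;
const constantOne = new Float32Array(n).fill(1);
const ramp = Float32Array.from({ length: n }, (_, i) => i / n);
const rendered = applyAudioRateGain(constantOne, ramp);
console.log(rendered.every((v, i) => v === ramp[i])); // true
```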

Test: webaudio/audioparam-connect-audioratesignal.html

Simple method name change of AudioParam::hasTimelineValues() to AudioParam::hasSampleAccurateValues().
* Modules/webaudio/AudioGainNode.cpp:
(WebCore::AudioGainNode::process):

* Modules/webaudio/AudioNode.cpp:
(WebCore::AudioNode::connect): Add connect() method from AudioNode -> AudioParam.
(WebCore):
(WebCore::AudioNode::disconnect):
(WebCore::AudioNode::finishDeref):
Use AudioNodeOutput::disconnectAll() instead of AudioNodeOutput::disconnectAllInputs().
* Modules/webaudio/AudioNode.h: Add connect() method from AudioNode -> AudioParam.
(WebCore):
(AudioNode):
* Modules/webaudio/AudioNode.idl: Add connect() method from AudioNode -> AudioParam.

Implement support for an AudioNodeOutput to fanout to multiple AudioParams.
* Modules/webaudio/AudioNodeOutput.cpp:
(WebCore::AudioNodeOutput::AudioNodeOutput):
(WebCore::AudioNodeOutput::updateRenderingState): Update rendering state related to AudioParams.
(WebCore::AudioNodeOutput::pull): pull() must now take into account fanout to AudioParams for in-place processing.
(WebCore::AudioNodeOutput::fanOutCount):
(WebCore):
(WebCore::AudioNodeOutput::paramFanOutCount): New method keeping track of number of connections to AudioParams.
(WebCore::AudioNodeOutput::renderingParamFanOutCount): New method keeping track of number of connections to AudioParams for rendering.
(WebCore::AudioNodeOutput::addParam): Add a connection to an AudioParam.
(WebCore::AudioNodeOutput::removeParam): Remove a connection to an AudioParam.
(WebCore::AudioNodeOutput::disconnectAllParams): Remove all connections to AudioParams.
(WebCore::AudioNodeOutput::disconnectAll): New method to disconnect all AudioNodeInputs and AudioParams.
* Modules/webaudio/AudioNodeOutput.h:
(AudioNodeOutput):

Allow an AudioParam to accept a connection from an AudioNodeOutput, thus being controlled
by an audio-rate signal.
* Modules/webaudio/AudioParam.cpp:
(WebCore::AudioParam::calculateSampleAccurateValues): Calculates sample-accurate values from timeline or an AudioNode.
(WebCore):
(WebCore::AudioParam::calculateAudioRateSignalValues): Calculates sample-accurate values from an AudioNode.
(WebCore::AudioParam::calculateTimelineValues): Calculates sample-accurate values scheduled on the timeline.
(WebCore::AudioParam::connect): Connect from an AudioNodeOutput for control from an audio-rate signal.
(WebCore::AudioParam::disconnect): Disconnect from an AudioNodeOutput.
* Modules/webaudio/AudioParam.h:
(WebCore):
(WebCore::AudioParam::AudioParam):
(WebCore::AudioParam::hasSampleAccurateValues): Renamed from hasTimelineValues(); returns true
if we either have timeline values or have been connected from an AudioNode.
(AudioParam):

Simple method name change of AudioParam::hasTimelineValues() to AudioParam::hasSampleAccurateValues().
* Modules/webaudio/Oscillator.cpp:
(WebCore::Oscillator::calculateSampleAccuratePhaseIncrements):
(WebCore::Oscillator::process):

LayoutTests:

Reviewed by Eric Carlson.

* webaudio/audioparam-connect-audioratesignal-expected.txt: Added.
* webaudio/audioparam-connect-audioratesignal.html: Added.
* webaudio/resources/audio-testing.js:
(createLinearRampBuffer):
(createConstantBuffer):


Diff

Modified: trunk/LayoutTests/ChangeLog (113768 => 113769)


--- trunk/LayoutTests/ChangeLog	2012-04-10 21:25:50 UTC (rev 113768)
+++ trunk/LayoutTests/ChangeLog	2012-04-10 21:28:57 UTC (rev 113769)
@@ -1,3 +1,16 @@
+2012-04-10  Chris Rogers  <[email protected]>
+
+        AudioParam must support connections from audio-rate signals
+        https://bugs.webkit.org/show_bug.cgi?id=83524
+
+        Reviewed by Eric Carlson.
+
+        * webaudio/audioparam-connect-audioratesignal-expected.txt: Added.
+        * webaudio/audioparam-connect-audioratesignal.html: Added.
+        * webaudio/resources/audio-testing.js:
+        (createLinearRampBuffer):
+        (createConstantBuffer):
+
 2012-04-10  Abhishek Arya  <[email protected]>
 
         Crash due to captions list not updated after section recalc.

Added: trunk/LayoutTests/webaudio/audioparam-connect-audioratesignal-expected.txt (0 => 113769)


--- trunk/LayoutTests/webaudio/audioparam-connect-audioratesignal-expected.txt	                        (rev 0)
+++ trunk/LayoutTests/webaudio/audioparam-connect-audioratesignal-expected.txt	2012-04-10 21:28:57 UTC (rev 113769)
@@ -0,0 +1,6 @@
+PASS Rendered signal is of correct length.
+PASS Rendered signal exactly matches the audio-rate gain changing signal.
+PASS successfullyParsed is true
+
+TEST COMPLETE
+

Added: trunk/LayoutTests/webaudio/audioparam-connect-audioratesignal.html (0 => 113769)


--- trunk/LayoutTests/webaudio/audioparam-connect-audioratesignal.html	                        (rev 0)
+++ trunk/LayoutTests/webaudio/audioparam-connect-audioratesignal.html	2012-04-10 21:28:57 UTC (rev 113769)
@@ -0,0 +1,112 @@
+<!DOCTYPE html>
+
+<!--
+Tests that an audio-rate signal (AudioNode output) can be connected to an AudioParam.
+Specifically, this tests that an audio-rate signal coming from an AudioBufferSourceNode
+playing an AudioBuffer containing a specific curve can be connected to an AudioGainNode's
+.gain attribute (an AudioParam).  Another AudioBufferSourceNode will be the audio source
+having its gain changed.  We load this one with an AudioBuffer containing a constant value of 1.
+Thus it's easy to check that the resultant signal should be equal to the gain-scaling curve.
+-->
+
+<html>
+<head>
+<link rel="stylesheet" href="">
+<script src=""></script>
+<script src=""></script>
+
+</head>
+<body>
+
+<script>
+
+var sampleRate = 44100.0;
+var lengthInSeconds = 1;
+
+var context = 0;
+var constantOneBuffer = 0;
+var linearRampBuffer = 0;
+
+function checkResult(event) {
+    var renderedBuffer = event.renderedBuffer;
+    var renderedData = renderedBuffer.getChannelData(0);
+    var expectedData = linearRampBuffer.getChannelData(0);
+    var n = renderedBuffer.length;
+
+    if (n == linearRampBuffer.length) {
+        testPassed("Rendered signal is of correct length.");
+    } else {
+        testFailed("Rendered signal is not of correct length.");
+    }
+
+    // Check that the rendered result exactly matches the buffer used to control gain.
+    // This is because we're changing the gain of a signal having constant value 1.
+    var success = true;
+    for (var i = 0; i < n; ++i) {
+        if (renderedData[i] != expectedData[i]) {
+            success = false;
+            break;
+        }
+    }
+
+    if (success) {
+        testPassed("Rendered signal exactly matches the audio-rate gain changing signal.");
+    } else {
+        testFailed("Rendered signal differs from the audio-rate gain changing signal.");
+    }
+
+    finishJSTest();
+}
+
+function runTest() {
+    if (window.layoutTestController) {
+        layoutTestController.dumpAsText();
+        layoutTestController.waitUntilDone();
+    }
+
+    window.jsTestIsAsync = true;
+
+    var sampleFrameLength = sampleRate * lengthInSeconds;
+
+    // Create offline audio context.
+    context = new webkitAudioContext(1, sampleFrameLength, sampleRate);
+
+    // Create buffer used by the source which will have its gain controlled.
+    constantOneBuffer = createConstantBuffer(context, sampleFrameLength, 1);
+
+    // Create buffer used to control gain.
+    linearRampBuffer = createLinearRampBuffer(context, sampleFrameLength);
+
+    // Create the two sources.
+
+    var constantSource = context.createBufferSource();
+    constantSource.buffer = constantOneBuffer;
+
+    var gainChangingSource = context.createBufferSource();
+    gainChangingSource.buffer = linearRampBuffer;
+
+    // Create a gain node controlling the gain of constantSource and make the connections.
+    var gainNode = context.createGainNode();
+    constantSource.connect(gainNode);
+    gainNode.connect(context.destination);
+
+    // Connect an audio-rate signal to control the .gain AudioParam.
+    // This is the heart of what is being tested.
+    gainChangingSource.connect(gainNode.gain);
+
+    // Start both sources at time 0.
+    constantSource.noteOn(0);
+    gainChangingSource.noteOn(0);
+
+    context.oncomplete = checkResult;
+    context.startRendering();
+}
+
+runTest();
+successfullyParsed = true;
+
+</script>
+<script src=""></script>
+
+</body>
+</html>

Modified: trunk/LayoutTests/webaudio/resources/audio-testing.js (113768 => 113769)


--- trunk/LayoutTests/webaudio/resources/audio-testing.js	2012-04-10 21:25:50 UTC (rev 113768)
+++ trunk/LayoutTests/webaudio/resources/audio-testing.js	2012-04-10 21:28:57 UTC (rev 113769)
@@ -115,6 +115,30 @@
     return audioBuffer;
 }
 
+// Create a buffer of the given length with a linear ramp having values 0 <= x < 1.
+function createLinearRampBuffer(context, sampleFrameLength) {
+    var audioBuffer = context.createBuffer(1, sampleFrameLength, context.sampleRate);
+    var n = audioBuffer.length;
+    var dataL = audioBuffer.getChannelData(0);
+
+    for (var i = 0; i < n; ++i)
+        dataL[i] = i / n;
+
+    return audioBuffer;
+}
+
+// Create a buffer of the given length having a constant value.
+function createConstantBuffer(context, sampleFrameLength, constantValue) {
+    var audioBuffer = context.createBuffer(1, sampleFrameLength, context.sampleRate);
+    var n = audioBuffer.length;
+    var dataL = audioBuffer.getChannelData(0);
+
+    for (var i = 0; i < n; ++i)
+        dataL[i] = constantValue;
+
+    return audioBuffer;
+}
+
 // Convert time (in seconds) to sample frames.
 function timeToSampleFrame(time, sampleRate) {
     return Math.floor(0.5 + time * sampleRate);
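
The `timeToSampleFrame()` helper shown in context above rounds a time in seconds to the nearest sample frame; `floor(x + 0.5)` is round-half-up for the non-negative times these tests use. A quick sketch of its behavior:

```javascript
// Round-to-nearest-frame conversion, as in audio-testing.js.
function timeToSampleFrame(time, sampleRate) {
    return Math.floor(0.5 + time * sampleRate);
}

console.log(timeToSampleFrame(1, 44100));         // 44100
console.log(timeToSampleFrame(0.5, 44100));       // 22050
console.log(timeToSampleFrame(1 / 44100, 44100)); // 1
```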

Modified: trunk/Source/WebCore/ChangeLog (113768 => 113769)


--- trunk/Source/WebCore/ChangeLog	2012-04-10 21:25:50 UTC (rev 113768)
+++ trunk/Source/WebCore/ChangeLog	2012-04-10 21:28:57 UTC (rev 113769)
@@ -1,3 +1,69 @@
+2012-04-10  Chris Rogers  <[email protected]>
+
+        AudioParam must support connections from audio-rate signals
+        https://bugs.webkit.org/show_bug.cgi?id=83524
+
+        Reviewed by Eric Carlson.
+        
+        In the Web Audio API, it's possible to connect one AudioNode to another AudioNode.
+        Similarly, we should allow an AudioNode to connect to an AudioParam, thus controlling
+        a parameter with an audio-rate signal.  This is important in many audio processing
+        applications.
+
+        Test: webaudio/audioparam-connect-audioratesignal.html
+
+        Simple method name change of AudioParam::hasTimelineValues() to AudioParam::hasSampleAccurateValues().
+        * Modules/webaudio/AudioGainNode.cpp:
+        (WebCore::AudioGainNode::process):
+
+        * Modules/webaudio/AudioNode.cpp:
+        (WebCore::AudioNode::connect): Add connect() method from AudioNode -> AudioParam.
+        (WebCore):
+        (WebCore::AudioNode::disconnect):
+        (WebCore::AudioNode::finishDeref):
+        Use AudioNodeOutput::disconnectAll() instead of AudioNodeOutput::disconnectAllInputs().
+        * Modules/webaudio/AudioNode.h: Add connect() method from AudioNode -> AudioParam.
+        (WebCore):
+        (AudioNode):
+        * Modules/webaudio/AudioNode.idl: Add connect() method from AudioNode -> AudioParam.
+
+        Implement support for an AudioNodeOutput to fanout to multiple AudioParams.
+        * Modules/webaudio/AudioNodeOutput.cpp:
+        (WebCore::AudioNodeOutput::AudioNodeOutput):
+        (WebCore::AudioNodeOutput::updateRenderingState): Update rendering state related to AudioParams.
+        (WebCore::AudioNodeOutput::pull): pull() must now take into account fanout to AudioParams for in-place processing.
+        (WebCore::AudioNodeOutput::fanOutCount):
+        (WebCore):
+        (WebCore::AudioNodeOutput::paramFanOutCount): New method keeping track of number of connections to AudioParams.
+        (WebCore::AudioNodeOutput::renderingParamFanOutCount): New method keeping track of number of connections to AudioParams for rendering.
+        (WebCore::AudioNodeOutput::addParam): Add a connection to an AudioParam.
+        (WebCore::AudioNodeOutput::removeParam): Remove a connection to an AudioParam.
+        (WebCore::AudioNodeOutput::disconnectAllParams): Remove all connections to AudioParams.
+        (WebCore::AudioNodeOutput::disconnectAll): New method to disconnect all AudioNodeInputs and AudioParams.
+        * Modules/webaudio/AudioNodeOutput.h:
+        (AudioNodeOutput):
+
+        Allow an AudioParam to accept a connection from an AudioNodeOutput, thus being controlled
+        by an audio-rate signal.
+        * Modules/webaudio/AudioParam.cpp:
+        (WebCore::AudioParam::calculateSampleAccurateValues): Calculates sample-accurate values from timeline or an AudioNode.
+        (WebCore):
+        (WebCore::AudioParam::calculateAudioRateSignalValues): Calculates sample-accurate values from an AudioNode.
+        (WebCore::AudioParam::calculateTimelineValues): Calculates sample-accurate values scheduled on the timeline.
+        (WebCore::AudioParam::connect): Connect from an AudioNodeOutput for control from an audio-rate signal.
+        (WebCore::AudioParam::disconnect): Disconnect from an AudioNodeOutput.
+        * Modules/webaudio/AudioParam.h:
+        (WebCore):
+        (WebCore::AudioParam::AudioParam):
+        (WebCore::AudioParam::hasSampleAccurateValues): Renamed from hasTimelineValues(); returns true
+        if we either have timeline values or have been connected from an AudioNode.
+        (AudioParam):
+
+        Simple method name change of AudioParam::hasTimelineValues() to AudioParam::hasSampleAccurateValues().
+        * Modules/webaudio/Oscillator.cpp:
+        (WebCore::Oscillator::calculateSampleAccuratePhaseIncrements):
+        (WebCore::Oscillator::process):
+
 2012-04-10  Patrick Gansterer  <[email protected]>
 
         [CMake] Enable USE_FOLDERS property

Modified: trunk/Source/WebCore/Modules/webaudio/AudioGainNode.cpp (113768 => 113769)


--- trunk/Source/WebCore/Modules/webaudio/AudioGainNode.cpp	2012-04-10 21:25:50 UTC (rev 113768)
+++ trunk/Source/WebCore/Modules/webaudio/AudioGainNode.cpp	2012-04-10 21:28:57 UTC (rev 113769)
@@ -64,7 +64,7 @@
     else {
         AudioBus* inputBus = input(0)->bus();
 
-        if (gain()->hasTimelineValues()) {
+        if (gain()->hasSampleAccurateValues()) {
             // Apply sample-accurate gain scaling for precise envelopes, grain windows, etc.
             ASSERT(framesToProcess <= m_sampleAccurateGainValues.size());
             if (framesToProcess <= m_sampleAccurateGainValues.size()) {
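
The branch above chooses between two gain paths: with sample-accurate values (timeline automation or, after this patch, an audio-rate connection), each frame gets its own gain; otherwise one smoothed gain scales the whole render quantum. A JS model of that decision (property names here are illustrative, not WebKit's):

```javascript
// Model of AudioGainNode::process()'s gain selection (sketch, not WebKit source).
function processGain(inputData, param) {
    const out = new Float32Array(inputData.length);
    if (param.hasSampleAccurateValues) {
        // One gain value per frame: precise envelopes, grain windows, etc.
        const gains = param.sampleAccurateValues;
        for (let i = 0; i < inputData.length; ++i)
            out[i] = inputData[i] * gains[i];
    } else {
        // Single de-zippered gain for the whole quantum.
        for (let i = 0; i < inputData.length; ++i)
            out[i] = inputData[i] * param.smoothedValue;
    }
    return out;
}
```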

Modified: trunk/Source/WebCore/Modules/webaudio/AudioNode.cpp (113768 => 113769)


--- trunk/Source/WebCore/Modules/webaudio/AudioNode.cpp	2012-04-10 21:25:50 UTC (rev 113768)
+++ trunk/Source/WebCore/Modules/webaudio/AudioNode.cpp	2012-04-10 21:28:57 UTC (rev 113769)
@@ -31,6 +31,7 @@
 #include "AudioContext.h"
 #include "AudioNodeInput.h"
 #include "AudioNodeOutput.h"
+#include "AudioParam.h"
 #include "ExceptionCode.h"
 #include <wtf/Atomics.h>
 #include <wtf/MainThread.h>
@@ -149,6 +150,30 @@
     context()->incrementConnectionCount();
 }
 
+void AudioNode::connect(AudioParam* param, unsigned outputIndex, ExceptionCode& ec)
+{
+    ASSERT(isMainThread());
+    AudioContext::AutoLocker locker(context());
+
+    if (!param) {
+        ec = SYNTAX_ERR;
+        return;
+    }
+
+    if (outputIndex >= numberOfOutputs()) {
+        ec = INDEX_SIZE_ERR;
+        return;
+    }
+
+    if (context() != param->context()) {
+        ec = SYNTAX_ERR;
+        return;
+    }
+
+    AudioNodeOutput* output = this->output(outputIndex);
+    param->connect(output);
+}
+
 void AudioNode::disconnect(unsigned outputIndex, ExceptionCode& ec)
 {
     ASSERT(isMainThread());
@@ -161,7 +186,7 @@
     }
 
     AudioNodeOutput* output = this->output(outputIndex);
-    output->disconnectAllInputs();
+    output->disconnectAll();
 }
 
 void AudioNode::processIfNecessary(size_t framesToProcess)
@@ -337,7 +362,7 @@
             if (!m_isMarkedForDeletion) {
                 // All references are gone - we need to go away.
                 for (unsigned i = 0; i < m_outputs.size(); ++i)
-                    output(i)->disconnectAllInputs(); // this will deref() nodes we're connected to...
+                    output(i)->disconnectAll(); // This will deref() nodes we're connected to.
 
                 // Mark for deletion at end of each render quantum or when context shuts down.
                 context()->markForDeletion(this);
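
The new `AudioNode::connect(AudioParam*, ...)` overload validates its arguments in a fixed order: a null param raises SYNTAX_ERR, an out-of-range output index raises INDEX_SIZE_ERR, and a cross-context connection raises SYNTAX_ERR. A JS model of that sequence (names illustrative):

```javascript
// Model of the validation sequence in AudioNode::connect(AudioParam*, ...).
// Returns the DOMException name that would be raised, or null on success.
function validateParamConnect(node, param, outputIndex) {
    if (!param)
        return "SYNTAX_ERR";
    if (outputIndex >= node.numberOfOutputs)
        return "INDEX_SIZE_ERR";
    if (node.context !== param.context)
        return "SYNTAX_ERR";
    return null;
}

const ctx = {};
const node = { context: ctx, numberOfOutputs: 1 };
console.log(validateParamConnect(node, null, 0));             // "SYNTAX_ERR"
console.log(validateParamConnect(node, { context: ctx }, 2)); // "INDEX_SIZE_ERR"
console.log(validateParamConnect(node, { context: {} }, 0));  // "SYNTAX_ERR"
console.log(validateParamConnect(node, { context: ctx }, 0)); // null
```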

Modified: trunk/Source/WebCore/Modules/webaudio/AudioNode.h (113768 => 113769)


--- trunk/Source/WebCore/Modules/webaudio/AudioNode.h	2012-04-10 21:25:50 UTC (rev 113768)
+++ trunk/Source/WebCore/Modules/webaudio/AudioNode.h	2012-04-10 21:28:57 UTC (rev 113769)
@@ -37,6 +37,7 @@
 class AudioContext;
 class AudioNodeInput;
 class AudioNodeOutput;
+class AudioParam;
 
 typedef int ExceptionCode;
 
@@ -116,6 +117,7 @@
 
 // Called from main thread by corresponding JavaScript methods.
     void connect(AudioNode*, unsigned outputIndex, unsigned inputIndex, ExceptionCode&);
+    void connect(AudioParam*, unsigned outputIndex, ExceptionCode&);
     void disconnect(unsigned outputIndex, ExceptionCode&);
 
     virtual float sampleRate() const { return m_sampleRate; }

Modified: trunk/Source/WebCore/Modules/webaudio/AudioNode.idl (113768 => 113769)


--- trunk/Source/WebCore/Modules/webaudio/AudioNode.idl	2012-04-10 21:25:50 UTC (rev 113768)
+++ trunk/Source/WebCore/Modules/webaudio/AudioNode.idl	2012-04-10 21:28:57 UTC (rev 113769)
@@ -33,6 +33,9 @@
         void connect(in AudioNode destination, in [Optional=DefaultIsUndefined] unsigned long output, in [Optional=DefaultIsUndefined] unsigned long input)
             raises(DOMException);
 
+        void connect(in AudioParam destination, in [Optional=DefaultIsUndefined] unsigned long output)
+            raises(DOMException);
+
         void disconnect(in [Optional=DefaultIsUndefined] unsigned long output)
             raises(DOMException);
     };

Modified: trunk/Source/WebCore/Modules/webaudio/AudioNodeOutput.cpp (113768 => 113769)


--- trunk/Source/WebCore/Modules/webaudio/AudioNodeOutput.cpp	2012-04-10 21:25:50 UTC (rev 113768)
+++ trunk/Source/WebCore/Modules/webaudio/AudioNodeOutput.cpp	2012-04-10 21:28:57 UTC (rev 113769)
@@ -31,6 +31,7 @@
 #include "AudioBus.h"
 #include "AudioContext.h"
 #include "AudioNodeInput.h"
+#include "AudioParam.h"
 #include <wtf/Threading.h>
 
 namespace WebCore {
@@ -42,6 +43,7 @@
     , m_actualDestinationBus(0)
     , m_isEnabled(true)
     , m_renderingFanOutCount(0)
+    , m_renderingParamFanOutCount(0)
 {
     ASSERT(numberOfChannels <= AudioContext::maxNumberOfChannels());
 
@@ -80,6 +82,7 @@
 {
     updateNumberOfChannels();
     m_renderingFanOutCount = fanOutCount();
+    m_renderingParamFanOutCount = paramFanOutCount();
 }
 
 void AudioNodeOutput::updateNumberOfChannels()
@@ -110,7 +113,7 @@
 AudioBus* AudioNodeOutput::pull(AudioBus* inPlaceBus, size_t framesToProcess)
 {
     ASSERT(context()->isAudioThread());
-    ASSERT(m_renderingFanOutCount > 0);
+    ASSERT(m_renderingFanOutCount > 0 || m_renderingParamFanOutCount > 0);
     
     // Causes our AudioNode to process if it hasn't already for this render quantum.
     // We try to do in-place processing (using inPlaceBus) if at all possible,
@@ -118,7 +121,7 @@
     // In this case pull() is called multiple times per rendering quantum, and the processIfNecessary() call below will
     // cause our node to process() only the first time, caching the output in m_internalOutputBus for subsequent calls.    
     
-    bool isInPlace = inPlaceBus && inPlaceBus->numberOfChannels() == numberOfChannels() && m_renderingFanOutCount == 1;
+    bool isInPlace = inPlaceBus && inPlaceBus->numberOfChannels() == numberOfChannels() && (m_renderingFanOutCount + m_renderingParamFanOutCount) == 1;
 
     // Setup the actual destination bus for processing when our node's process() method gets called in processIfNecessary() below.
     m_actualDestinationBus = isInPlace ? inPlaceBus : m_internalBus.get();
@@ -134,15 +137,26 @@
     return m_actualDestinationBus;
 }
 
+unsigned AudioNodeOutput::fanOutCount()
+{
+    ASSERT(context()->isGraphOwner());
+    return m_inputs.size();
+}
+
+unsigned AudioNodeOutput::paramFanOutCount()
+{
+    ASSERT(context()->isGraphOwner());
+    return m_params.size();
+}
+
 unsigned AudioNodeOutput::renderingFanOutCount() const
 {
     return m_renderingFanOutCount;
 }
 
-unsigned AudioNodeOutput::fanOutCount()
+unsigned AudioNodeOutput::renderingParamFanOutCount() const
 {
-    ASSERT(context()->isGraphOwner());
-    return m_inputs.size();
+    return m_renderingParamFanOutCount;
 }
 
 void AudioNodeOutput::addInput(AudioNodeInput* input)
@@ -178,6 +192,46 @@
     }
 }
 
+void AudioNodeOutput::addParam(AudioParam* param)
+{
+    ASSERT(context()->isGraphOwner());
+
+    ASSERT(param);
+    if (!param)
+        return;
+
+    m_params.add(param);
+}
+
+void AudioNodeOutput::removeParam(AudioParam* param)
+{
+    ASSERT(context()->isGraphOwner());
+
+    ASSERT(param);
+    if (!param)
+        return;
+
+    m_params.remove(param);
+}
+
+void AudioNodeOutput::disconnectAllParams()
+{
+    ASSERT(context()->isGraphOwner());
+
+    for (ParamsIterator i = m_params.begin(); i != m_params.end(); ++i) {
+        AudioParam* param = *i;
+        param->disconnect(this);
+    }
+
+    m_params.clear();
+}
+
+void AudioNodeOutput::disconnectAll()
+{
+    disconnectAllInputs();
+    disconnectAllParams();
+}
+
 void AudioNodeOutput::disable()
 {
     ASSERT(context()->isGraphOwner());
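
The key change to `pull()` above is that the in-place test now counts AudioParam consumers as well: in-place processing is only safe when there is exactly one consumer in total and the in-place bus has the right channel count. A JS sketch of the predicate:

```javascript
// Model of the updated in-place test in AudioNodeOutput::pull():
// in-place processing is safe only when there is exactly one consumer
// (AudioNodeInput or AudioParam) and channel counts match.
function canProcessInPlace(inPlaceChannels, outputChannels,
                           renderingFanOutCount, renderingParamFanOutCount) {
    return inPlaceChannels === outputChannels &&
           renderingFanOutCount + renderingParamFanOutCount === 1;
}

console.log(canProcessInPlace(2, 2, 1, 0)); // true: single input connection
console.log(canProcessInPlace(1, 1, 0, 1)); // true: single param connection
console.log(canProcessInPlace(2, 2, 1, 1)); // false: two consumers
console.log(canProcessInPlace(1, 2, 1, 0)); // false: channel mismatch
```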

Modified: trunk/Source/WebCore/Modules/webaudio/AudioNodeOutput.h (113768 => 113769)


--- trunk/Source/WebCore/Modules/webaudio/AudioNodeOutput.h	2012-04-10 21:25:50 UTC (rev 113768)
+++ trunk/Source/WebCore/Modules/webaudio/AudioNodeOutput.h	2012-04-10 21:28:57 UTC (rev 113769)
@@ -57,18 +57,17 @@
     // Called from context's audio thread.
     AudioBus* bus() const;
 
-    // fanOutCount() is the number of AudioNodeInputs that we're connected to.
-    // This function should not be called in audio thread rendering code, instead renderingFanOutCount() should be used.
-    // It must be called with the context's graph lock.
-    unsigned fanOutCount();
-
     // renderingFanOutCount() is the number of AudioNodeInputs that we're connected to during rendering.
     // Unlike fanOutCount() it will not change during the course of a render quantum.
     unsigned renderingFanOutCount() const;
 
-    // It must be called with the context's graph lock.
-    void disconnectAllInputs();
+    // renderingParamFanOutCount() is the number of AudioParams that we're connected to during rendering.
+    // Unlike paramFanOutCount() it will not change during the course of a render quantum.
+    unsigned renderingParamFanOutCount() const;
 
+    // Must be called with the context's graph lock.
+    void disconnectAll();
+
     void setNumberOfChannels(unsigned);
     unsigned numberOfChannels() const { return m_numberOfChannels; }
     bool isChannelCountKnown() const { return numberOfChannels() > 0; }
@@ -87,12 +86,29 @@
     AudioNode* m_node;
 
     friend class AudioNodeInput;
+    friend class AudioParam;
     
     // These are called from AudioNodeInput.
     // They must be called with the context's graph lock.
     void addInput(AudioNodeInput*);
     void removeInput(AudioNodeInput*);
+    void addParam(AudioParam*);
+    void removeParam(AudioParam*);
 
+    // fanOutCount() is the number of AudioNodeInputs that we're connected to.
+    // This method should not be called in audio thread rendering code, instead renderingFanOutCount() should be used.
+    // It must be called with the context's graph lock.
+    unsigned fanOutCount();
+
+    // Similar to fanOutCount(), paramFanOutCount() is the number of AudioParams that we're connected to.
+    // This method should not be called in audio thread rendering code, instead renderingParamFanOutCount() should be used.
+    // It must be called with the context's graph lock.
+    unsigned paramFanOutCount();
+
+    // Must be called with the context's graph lock.
+    void disconnectAllInputs();
+    void disconnectAllParams();
+
     // updateInternalBus() updates m_internalBus appropriately for the number of channels.
     // It is called in the constructor or in the audio thread with the context's graph lock.
     void updateInternalBus();
@@ -120,10 +136,14 @@
     HashSet<AudioNodeInput*> m_inputs;
     typedef HashSet<AudioNodeInput*>::iterator InputsIterator;
     bool m_isEnabled;
-    
-    // For the purposes of rendering, keeps track of the number of inputs we're connected to.
-    // This value should only be changed at the very start or end of the rendering quantum.
+
+    // For the purposes of rendering, keeps track of the number of inputs and AudioParams we're connected to.
+    // These values should only be changed at the very start or end of the rendering quantum.
     unsigned m_renderingFanOutCount;
+    unsigned m_renderingParamFanOutCount;
+
+    HashSet<AudioParam*> m_params;
+    typedef HashSet<AudioParam*>::iterator ParamsIterator;
 };
 
 } // namespace WebCore

Modified: trunk/Source/WebCore/Modules/webaudio/AudioParam.cpp (113768 => 113769)


--- trunk/Source/WebCore/Modules/webaudio/AudioParam.cpp	2012-04-10 21:25:50 UTC (rev 113768)
+++ trunk/Source/WebCore/Modules/webaudio/AudioParam.cpp	2012-04-10 21:28:57 UTC (rev 113769)
@@ -30,6 +30,7 @@
 #include "AudioParam.h"
 
 #include "AudioNode.h"
+#include "AudioNodeOutput.h"
 #include "AudioUtilities.h"
 #include "FloatConversion.h"
 #include <wtf/MathExtras.h>
@@ -95,11 +96,43 @@
 
 void AudioParam::calculateSampleAccurateValues(float* values, unsigned numberOfValues)
 {
-    bool isSafe = context() && context()->isAudioThread() && values;
+    bool isSafe = context() && context()->isAudioThread() && values && numberOfValues;
     ASSERT(isSafe);
     if (!isSafe)
         return;
 
+    if (m_audioRateSignal)
+        calculateAudioRateSignalValues(values, numberOfValues);
+    else
+        calculateTimelineValues(values, numberOfValues);
+}
+
+void AudioParam::calculateAudioRateSignalValues(float* values, unsigned numberOfValues)
+{
+    // FIXME: support fan-in (multiple audio connections to this parameter with unity-gain summing).
+    // https://bugs.webkit.org/show_bug.cgi?id=83610
+    ASSERT(m_audioRateSignal);
+
+    AudioBus* bus = m_audioRateSignal->pull(0, numberOfValues);
+    bool isBusGood = bus && bus->numberOfChannels() && bus->length() >= numberOfValues;
+    ASSERT(isBusGood);
+    if (!isBusGood)
+        return;
+
+    if (bus->numberOfChannels() == 1) {
+        // The normal case is to deal with a mono audio-rate signal.
+        memcpy(values, bus->channel(0)->data(), sizeof(float) * numberOfValues);
+    } else {
+        // Do a standard mixdown to one channel if necessary.
+        AudioBus wrapperBus(1, numberOfValues, false);
+        wrapperBus.setChannelMemory(0, values, numberOfValues);
+        wrapperBus.copyFrom(*bus); // Mixdown.
+    }
+    m_value = values[0]; // Update to first value.
+}
+
+void AudioParam::calculateTimelineValues(float* values, unsigned numberOfValues)
+{
     // Calculate values for this render quantum.
     // Normally numberOfValues will equal AudioNode::ProcessingSizeInFrames (the render quantum size).
     float sampleRate = context()->sampleRate();
@@ -111,6 +144,35 @@
     m_value = m_timeline.valuesForTimeRange(startTime, endTime, narrowPrecisionToFloat(m_value), values, numberOfValues, sampleRate, sampleRate);
 }
 
+void AudioParam::connect(AudioNodeOutput* audioRateSignal)
+{
+    ASSERT(context()->isGraphOwner());
+    ASSERT(audioRateSignal);
+    if (!audioRateSignal)
+        return;
+
+    if (m_audioRateSignal && m_audioRateSignal != audioRateSignal) {
+        // Because we don't currently support fan-in we must explicitly disconnect from an old output.
+        m_audioRateSignal->removeParam(this);
+    }
+
+    audioRateSignal->addParam(this);
+    m_audioRateSignal = audioRateSignal;
+}
+
+void AudioParam::disconnect(AudioNodeOutput* audioRateSignal)
+{
+    ASSERT(context()->isGraphOwner());
+    ASSERT(audioRateSignal);
+    if (!audioRateSignal)
+        return;
+
+    // FIXME: support fan-in (multiple audio connections to this parameter with unity-gain summing).
+    // https://bugs.webkit.org/show_bug.cgi?id=83610
+    if (m_audioRateSignal == audioRateSignal)
+        m_audioRateSignal = 0;
+}
+
 } // namespace WebCore
 
 #endif // ENABLE(WEB_AUDIO)
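
In `calculateAudioRateSignalValues()` above, a mono control signal is copied straight into the parameter's value array, while a multichannel signal is mixed down to mono first via `AudioBus::copyFrom()`. A JS model of that shape (the equal-weight average below is an assumption standing in for the real down-mix rules):

```javascript
// Model of AudioParam::calculateAudioRateSignalValues() (sketch, not WebKit
// source). channels is an array of Float32Array, one per bus channel.
function controlValuesFromBus(channels, numberOfValues) {
    const values = new Float32Array(numberOfValues);
    if (channels.length === 1) {
        // The normal case: a mono audio-rate signal is copied directly.
        values.set(channels[0].subarray(0, numberOfValues));
    } else {
        // Stand-in for the AudioBus::copyFrom() mixdown: average channels.
        for (let i = 0; i < numberOfValues; ++i) {
            let sum = 0;
            for (const channel of channels)
                sum += channel[i];
            values[i] = sum / channels.length;
        }
    }
    return values;
}
```

The patch additionally caches `values[0]` in `m_value`, so consumers of the parameter's scalar value see the most recent control value.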

Modified: trunk/Source/WebCore/Modules/webaudio/AudioParam.h (113768 => 113769)


--- trunk/Source/WebCore/Modules/webaudio/AudioParam.h	2012-04-10 21:25:50 UTC (rev 113768)
+++ trunk/Source/WebCore/Modules/webaudio/AudioParam.h	2012-04-10 21:28:57 UTC (rev 113769)
@@ -39,6 +39,8 @@
 
 namespace WebCore {
 
+class AudioNodeOutput;
+
 class AudioParam : public RefCounted<AudioParam> {
 public:
     static const double DefaultSmoothingConstant;
@@ -58,6 +60,7 @@
         , m_units(units)
         , m_smoothedValue(defaultValue)
         , m_smoothingConstant(DefaultSmoothingConstant)
+        , m_audioRateSignal(0)
     {
     }
     
@@ -96,13 +99,20 @@
     void setValueCurveAtTime(Float32Array* curve, float time, float duration) { m_timeline.setValueCurveAtTime(curve, time, duration); }
     void cancelScheduledValues(float startTime) { m_timeline.cancelScheduledValues(startTime); }
 
-    bool hasTimelineValues() { return m_timeline.hasValues(); }
+    bool hasSampleAccurateValues() { return m_timeline.hasValues() || m_audioRateSignal; }
     
     // Calculates numberOfValues parameter values starting at the context's current time.
     // Must be called in the context's render thread.
     void calculateSampleAccurateValues(float* values, unsigned numberOfValues);
 
+    // Connect an audio-rate signal to control this parameter.
+    void connect(AudioNodeOutput*);
+    void disconnect(AudioNodeOutput*);
+
 private:
+    void calculateAudioRateSignalValues(float* values, unsigned numberOfValues);
+    void calculateTimelineValues(float* values, unsigned numberOfValues);
+
     RefPtr<AudioContext> m_context;
     String m_name;
     double m_value;
@@ -116,6 +126,11 @@
     double m_smoothingConstant;
     
     AudioParamTimeline m_timeline;
+
+    // An audio-rate signal directly providing parameter values.
+    // FIXME: support fan-in (multiple audio connections to this parameter with unity-gain summing).
+    // https://bugs.webkit.org/show_bug.cgi?id=83610
+    AudioNodeOutput* m_audioRateSignal;
 };
 
 } // namespace WebCore

Modified: trunk/Source/WebCore/Modules/webaudio/Oscillator.cpp (113768 => 113769)


--- trunk/Source/WebCore/Modules/webaudio/Oscillator.cpp	2012-04-10 21:25:50 UTC (rev 113768)
+++ trunk/Source/WebCore/Modules/webaudio/Oscillator.cpp	2012-04-10 21:28:57 UTC (rev 113769)
@@ -112,14 +112,14 @@
     if (!isGood)
         return false;
 
-    bool hasTimelineValues = false;
+    bool hasSampleAccurateValues = false;
     bool hasFrequencyChanges = false;
     float* phaseIncrements = m_phaseIncrements.data();
 
     float finalScale = m_waveTable->rateScale();
 
-    if (m_frequency->hasTimelineValues()) {
-        hasTimelineValues = true;
+    if (m_frequency->hasSampleAccurateValues()) {
+        hasSampleAccurateValues = true;
         hasFrequencyChanges = true;
 
         // Get the sample-accurate frequency values and convert to phase increments.
@@ -132,8 +132,8 @@
         finalScale *= frequency;
     }
 
-    if (m_detune->hasTimelineValues()) {
-        hasTimelineValues = true;
+    if (m_detune->hasSampleAccurateValues()) {
+        hasSampleAccurateValues = true;
 
         // Get the sample-accurate detune values.
         float* detuneValues = hasFrequencyChanges ? m_detuneValues.data() : phaseIncrements;
@@ -157,20 +157,18 @@
         finalScale *= detuneScale;
     }
 
-    if (hasTimelineValues) {
+    if (hasSampleAccurateValues) {
         // Convert from frequency to wavetable increment.
         vsmul(phaseIncrements, 1, &finalScale, phaseIncrements, 1, framesToProcess);
     }
 
-    return hasTimelineValues;
+    return hasSampleAccurateValues;
 }
 
 void Oscillator::process(size_t framesToProcess)
 {
     AudioBus* outputBus = output(0)->bus();
 
-    outputBus->zero();
-
     if (!isInitialized() || !outputBus->numberOfChannels()) {
         outputBus->zero();
         return;
@@ -206,14 +204,14 @@
 
     float rateScale = m_waveTable->rateScale();
     float invRateScale = 1 / rateScale;
-    bool hasTimelineValues = calculateSampleAccuratePhaseIncrements(framesToProcess);
+    bool hasSampleAccurateValues = calculateSampleAccuratePhaseIncrements(framesToProcess);
 
     float frequency = 0;
     float* higherWaveData = 0;
     float* lowerWaveData = 0;
     float tableInterpolationFactor;
 
-    if (!hasTimelineValues) {
+    if (!hasSampleAccurateValues) {
         frequency = m_frequency->smoothedValue();
         float detune = m_detune->smoothedValue();
         float detuneScale = powf(2, detune / 1200);
@@ -234,7 +232,7 @@
         readIndex = readIndex & readIndexMask;
         readIndex2 = readIndex2 & readIndexMask;
 
-        if (hasTimelineValues) {
+        if (hasSampleAccurateValues) {
             incr = *phaseIncrements++;
 
             frequency = invRateScale * incr;
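
In the oscillator code above, the renamed `hasSampleAccurateValues` flag decides between per-sample phase increments and a single smoothed frequency/detune pair. Either way the detune contribution is the same: detune is in cents, so 1200 cents doubles the frequency. A sketch of the scalar path (model, not WebKit source):

```javascript
// Scalar frequency path of Oscillator::process(): detune is expressed in
// cents, so a detune of 1200 cents scales frequency by exactly 2.
function effectiveFrequency(frequency, detuneCents) {
    return frequency * Math.pow(2, detuneCents / 1200);
}

console.log(effectiveFrequency(440, 0));    // 440
console.log(effectiveFrequency(440, 1200)); // 880
```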
_______________________________________________
webkit-changes mailing list
[email protected]
http://lists.webkit.org/mailman/listinfo.cgi/webkit-changes
