Title: [199775] trunk
Revision: 199775
Author: cdu...@apple.com
Date: 2016-04-20 10:27:37 -0700 (Wed, 20 Apr 2016)

Log Message

Drop [UsePointersEvenForNonNullableObjectArguments] from WebAudio
https://bugs.webkit.org/show_bug.cgi?id=156777

Reviewed by Darin Adler.

Source/WebCore:

Drop [UsePointersEvenForNonNullableObjectArguments] from WebAudio and
modernize the interface a bit.

There is no major Web-exposed behavioral change except for
the exception type thrown when passing null (now always TypeError).
Tests were updated to add coverage for this.
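The behavioral change above can be sketched from the caller's point of view. This is an illustrative sketch, not WebKit bindings code: the function names and the "InvalidStateError" choice for the old path are assumptions used only to contrast the two behaviors.

```javascript
// Before this change, a null argument for a non-nullable object parameter
// reached the C++ implementation, which raised an implementation-chosen DOM
// exception (modeled here as InvalidStateError).
function createMediaStreamSourceBefore(mediaStream) {
    if (!mediaStream) {
        const e = new Error("The object is in an invalid state.");
        e.name = "InvalidStateError";
        throw e;
    }
    // ... create and return the node ...
}

// After this change, the generated bindings reject null up front with a
// TypeError, matching what Web IDL requires for non-nullable arguments.
function createMediaStreamSourceAfter(mediaStream) {
    if (mediaStream === null || typeof mediaStream !== "object")
        throw new TypeError("Type error");
    // ... create and return the node ...
}
```

Only the exception type observed by script changes; valid (non-null) arguments behave as before.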

* Modules/webaudio/AsyncAudioDecoder.cpp:
(WebCore::AsyncAudioDecoder::decodeAsync):
(WebCore::AsyncAudioDecoder::DecodingTask::DecodingTask):
(WebCore::AsyncAudioDecoder::DecodingTask::decode): Deleted.
* Modules/webaudio/AsyncAudioDecoder.h:
(WebCore::AsyncAudioDecoder::DecodingTask::audioData):
* Modules/webaudio/AudioContext.cpp:
(WebCore::AudioContext::lazyInitialize):
(WebCore::AudioContext::createBuffer):
(WebCore::AudioContext::decodeAudioData):
(WebCore::AudioContext::createBufferSource):
(WebCore::AudioContext::createMediaElementSource):
(WebCore::AudioContext::createMediaStreamSource):
(WebCore::AudioContext::createMediaStreamDestination):
(WebCore::AudioContext::createScriptProcessor):
(WebCore::AudioContext::createBiquadFilter):
(WebCore::AudioContext::createWaveShaper):
(WebCore::AudioContext::createPanner):
(WebCore::AudioContext::createConvolver):
(WebCore::AudioContext::createDynamicsCompressor):
(WebCore::AudioContext::createAnalyser):
(WebCore::AudioContext::createGain):
(WebCore::AudioContext::createDelay):
(WebCore::AudioContext::createChannelSplitter):
(WebCore::AudioContext::createChannelMerger):
(WebCore::AudioContext::createOscillator):
(WebCore::AudioContext::createPeriodicWave):
(WebCore::AudioContext::derefFinishedSourceNodes):
(WebCore::AudioContext::refNode):
(WebCore::AudioContext::derefNode):
(WebCore::AudioContext::notifyNodeFinishedProcessing): Deleted.
(WebCore::AudioContext::derefUnfinishedSourceNodes): Deleted.
(WebCore::AudioContext::lock): Deleted.
* Modules/webaudio/AudioContext.h:
* Modules/webaudio/AudioContext.idl:
* Modules/webaudio/MediaElementAudioSourceNode.cpp:
(WebCore::MediaElementAudioSourceNode::create):
(WebCore::MediaElementAudioSourceNode::MediaElementAudioSourceNode):
(WebCore::MediaElementAudioSourceNode::process):
* Modules/webaudio/MediaElementAudioSourceNode.h:
(WebCore::MediaElementAudioSourceNode::mediaElement):
* Modules/webaudio/OscillatorNode.idl:

LayoutTests:

Added test coverage for the type of the exception thrown when
passing null to various WebAudio APIs.

* webaudio/audiobuffer-expected.txt:
* webaudio/audiobuffer.html:
* webaudio/createMediaStreamSource-null-expected.txt: Added.
* webaudio/createMediaStreamSource-null.html: Added.
* webaudio/decode-audio-data-basic-expected.txt:
* webaudio/decode-audio-data-basic.html:
* webaudio/mediaelementaudiosourcenode-expected.txt:
* webaudio/mediaelementaudiosourcenode.html:
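The tests above all use the js-test `shouldThrow(expression, expectedExceptionString)` helper. A simplified sketch of how that helper matches an exception against its expected string form (the real helper lives in WebKit's js-test support scripts and reports via testPassed/testFailed; this version returns the message instead):

```javascript
// Simplified js-test-style shouldThrow: evaluates `expression`, stringifies
// any exception, and compares it to the evaluated expected-string argument
// (which is itself a quoted string, e.g. "'TypeError: Type error'").
function shouldThrow(expression, expectedExceptionString) {
    let actual;
    try {
        eval(expression);
    } catch (e) {
        actual = String(e);
    }
    if (actual === undefined)
        return `FAIL ${expression} did not throw exception.`;
    if (actual === eval(expectedExceptionString))
        return `PASS ${expression} threw exception ${actual}.`;
    return `FAIL ${expression} threw ${actual}, expected ${eval(expectedExceptionString)}.`;
}
```

This is why the expected-results files show lines like `PASS context.createMediaStreamSource(null) threw exception TypeError: Type error.`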

Diff

Modified: trunk/LayoutTests/ChangeLog (199774 => 199775)


--- trunk/LayoutTests/ChangeLog	2016-04-20 16:14:14 UTC (rev 199774)
+++ trunk/LayoutTests/ChangeLog	2016-04-20 17:27:37 UTC (rev 199775)
@@ -1,3 +1,22 @@
+2016-04-20  Chris Dumez  <cdu...@apple.com>
+
+        Drop [UsePointersEvenForNonNullableObjectArguments] from WebAudio
+        https://bugs.webkit.org/show_bug.cgi?id=156777
+
+        Reviewed by Darin Adler.
+
+        Added test coverage for the type of the exception thrown when
+        passing null to various WebAudio APIs.
+
+        * webaudio/audiobuffer-expected.txt:
+        * webaudio/audiobuffer.html:
+        * webaudio/createMediaStreamSource-null-expected.txt: Added.
+        * webaudio/createMediaStreamSource-null.html: Added.
+        * webaudio/decode-audio-data-basic-expected.txt:
+        * webaudio/decode-audio-data-basic.html:
+        * webaudio/mediaelementaudiosourcenode-expected.txt:
+        * webaudio/mediaelementaudiosourcenode.html:
+
 2016-04-20  Brady Eidson  <beid...@apple.com>
 
         Addressing additional review feedback for:

Modified: trunk/LayoutTests/webaudio/audiobuffer-expected.txt (199774 => 199775)


--- trunk/LayoutTests/webaudio/audiobuffer-expected.txt	2016-04-20 16:14:14 UTC (rev 199774)
+++ trunk/LayoutTests/webaudio/audiobuffer-expected.txt	2016-04-20 17:27:37 UTC (rev 199775)
@@ -12,6 +12,7 @@
 PASS getChannelData(2) returns a Float32Array object.
 PASS getChannelData(3) returns a Float32Array object.
 PASS Exception has been thrown correctly when index is not less than numberOfChannels.
+PASS context.createBuffer(null, false) threw exception TypeError: Type error.
 PASS successfullyParsed is true
 
 TEST COMPLETE

Modified: trunk/LayoutTests/webaudio/audiobuffer.html (199774 => 199775)


--- trunk/LayoutTests/webaudio/audiobuffer.html	2016-04-20 16:14:14 UTC (rev 199774)
+++ trunk/LayoutTests/webaudio/audiobuffer.html	2016-04-20 17:27:37 UTC (rev 199775)
@@ -49,6 +49,8 @@
     testPassed("Exception has been thrown correctly when index is not less than numberOfChannels.");
 }
 
+shouldThrow("context.createBuffer(null, false)", "'TypeError: Type error'");
+
 </script>
 
 <script src=""

Added: trunk/LayoutTests/webaudio/createMediaStreamSource-null-expected.txt (0 => 199775)


--- trunk/LayoutTests/webaudio/createMediaStreamSource-null-expected.txt	                        (rev 0)
+++ trunk/LayoutTests/webaudio/createMediaStreamSource-null-expected.txt	2016-04-20 17:27:37 UTC (rev 199775)
@@ -0,0 +1,10 @@
+Tests that passing null to context.createMediaStreamSource() throws a TypeError.
+
+On success, you will see a series of "PASS" messages, followed by "TEST COMPLETE".
+
+
+PASS context.createMediaStreamSource(null) threw exception TypeError: Type error.
+PASS successfullyParsed is true
+
+TEST COMPLETE
+

Added: trunk/LayoutTests/webaudio/createMediaStreamSource-null.html (0 => 199775)


--- trunk/LayoutTests/webaudio/createMediaStreamSource-null.html	                        (rev 0)
+++ trunk/LayoutTests/webaudio/createMediaStreamSource-null.html	2016-04-20 17:27:37 UTC (rev 199775)
@@ -0,0 +1,17 @@
+<!DOCTYPE html>
+<html>
+<head>
+<script src=""
+<script src=""
+</head>
+<body>
+<script>
+description("Tests that passing null to context.createMediaStreamSource() throws a TypeError.");
+
+var context = new webkitAudioContext();
+shouldThrow("context.createMediaStreamSource(null)", "'TypeError: Type error'");
+
+</script>
+<script src=""
+</body>
+</html>

Modified: trunk/LayoutTests/webaudio/decode-audio-data-basic-expected.txt (199774 => 199775)


--- trunk/LayoutTests/webaudio/decode-audio-data-basic-expected.txt	2016-04-20 16:14:14 UTC (rev 199774)
+++ trunk/LayoutTests/webaudio/decode-audio-data-basic-expected.txt	2016-04-20 17:27:37 UTC (rev 199775)
@@ -3,7 +3,7 @@
 On success, you will see a series of "PASS" messages, followed by "TEST COMPLETE".
 
 
-PASS decodeAudioData raises exception correctly when arraybuffer parameter is null.
+PASS context.decodeAudioData(null, function(){}, function(){}); threw exception TypeError: Type error.
 PASS The resources/media/24bit-44khz.wav test: successCallback has been called correctly.
 PASS The resources/media/invalid-audio-file.txt test: errorCallback has been called correctly.
 PASS successfullyParsed is true

Modified: trunk/LayoutTests/webaudio/decode-audio-data-basic.html (199774 => 199775)


--- trunk/LayoutTests/webaudio/decode-audio-data-basic.html	2016-04-20 16:14:14 UTC (rev 199774)
+++ trunk/LayoutTests/webaudio/decode-audio-data-basic.html	2016-04-20 17:27:37 UTC (rev 199775)
@@ -17,12 +17,8 @@
 
 var context = new webkitAudioContext();
 
-try {
-    context.decodeAudioData(null, function(){}, function(){});
-    testFailed("decodeAudioData should raise exception when arraybuffer parameter is null.");
-} catch(e) {
-    testPassed("decodeAudioData raises exception correctly when arraybuffer parameter is null.");
-}
+// decodeAudioData should raise exception when arraybuffer parameter is null.
+shouldThrow("context.decodeAudioData(null, function(){}, function(){});", "'TypeError: Type error'");
 
 var decodeCaseArray = [{url: "resources/media/24bit-44khz.wav", result: true},
                        {url: "resources/media/invalid-audio-file.txt", result: false}];

Modified: trunk/LayoutTests/webaudio/mediaelementaudiosourcenode-expected.txt (199774 => 199775)


--- trunk/LayoutTests/webaudio/mediaelementaudiosourcenode-expected.txt	2016-04-20 16:14:14 UTC (rev 199774)
+++ trunk/LayoutTests/webaudio/mediaelementaudiosourcenode-expected.txt	2016-04-20 17:27:37 UTC (rev 199775)
@@ -9,6 +9,7 @@
 PASS connect() exception thrown for illegal input index.
 PASS audioNode.connect(context.destination) succeeded.
 PASS createMediaElementSource() threw error when called twice on same HTMLMediaElement.
+PASS context.createMediaElementSource(null) threw exception TypeError: Type error.
 PASS successfullyParsed is true
 
 TEST COMPLETE

Modified: trunk/LayoutTests/webaudio/mediaelementaudiosourcenode.html (199774 => 199775)


--- trunk/LayoutTests/webaudio/mediaelementaudiosourcenode.html	2016-04-20 16:14:14 UTC (rev 199774)
+++ trunk/LayoutTests/webaudio/mediaelementaudiosourcenode.html	2016-04-20 17:27:37 UTC (rev 199775)
@@ -79,6 +79,8 @@
         testPassed("createMediaElementSource() threw error when called twice on same HTMLMediaElement.");
     }
 
+    shouldThrow("context.createMediaElementSource(null)", "'TypeError: Type error'");
+
     finishJSTest();
 }
 

Modified: trunk/Source/WebCore/ChangeLog (199774 => 199775)


--- trunk/Source/WebCore/ChangeLog	2016-04-20 16:14:14 UTC (rev 199774)
+++ trunk/Source/WebCore/ChangeLog	2016-04-20 17:27:37 UTC (rev 199775)
@@ -1,3 +1,60 @@
+2016-04-20  Chris Dumez  <cdu...@apple.com>
+
+        Drop [UsePointersEvenForNonNullableObjectArguments] from WebAudio
+        https://bugs.webkit.org/show_bug.cgi?id=156777
+
+        Reviewed by Darin Adler.
+
+        Drop [UsePointersEvenForNonNullableObjectArguments] from WebAudio and
+        modernize the interface a bit.
+
+        There is no major Web-exposed behavioral change except for
+        the exception type thrown when passing null (now always TypeError).
+        Tests were updated to add coverage for this.
+
+        * Modules/webaudio/AsyncAudioDecoder.cpp:
+        (WebCore::AsyncAudioDecoder::decodeAsync):
+        (WebCore::AsyncAudioDecoder::DecodingTask::DecodingTask):
+        (WebCore::AsyncAudioDecoder::DecodingTask::decode): Deleted.
+        * Modules/webaudio/AsyncAudioDecoder.h:
+        (WebCore::AsyncAudioDecoder::DecodingTask::audioData):
+        * Modules/webaudio/AudioContext.cpp:
+        (WebCore::AudioContext::lazyInitialize):
+        (WebCore::AudioContext::createBuffer):
+        (WebCore::AudioContext::decodeAudioData):
+        (WebCore::AudioContext::createBufferSource):
+        (WebCore::AudioContext::createMediaElementSource):
+        (WebCore::AudioContext::createMediaStreamSource):
+        (WebCore::AudioContext::createMediaStreamDestination):
+        (WebCore::AudioContext::createScriptProcessor):
+        (WebCore::AudioContext::createBiquadFilter):
+        (WebCore::AudioContext::createWaveShaper):
+        (WebCore::AudioContext::createPanner):
+        (WebCore::AudioContext::createConvolver):
+        (WebCore::AudioContext::createDynamicsCompressor):
+        (WebCore::AudioContext::createAnalyser):
+        (WebCore::AudioContext::createGain):
+        (WebCore::AudioContext::createDelay):
+        (WebCore::AudioContext::createChannelSplitter):
+        (WebCore::AudioContext::createChannelMerger):
+        (WebCore::AudioContext::createOscillator):
+        (WebCore::AudioContext::createPeriodicWave):
+        (WebCore::AudioContext::derefFinishedSourceNodes):
+        (WebCore::AudioContext::refNode):
+        (WebCore::AudioContext::derefNode):
+        (WebCore::AudioContext::notifyNodeFinishedProcessing): Deleted.
+        (WebCore::AudioContext::derefUnfinishedSourceNodes): Deleted.
+        (WebCore::AudioContext::lock): Deleted.
+        * Modules/webaudio/AudioContext.h:
+        * Modules/webaudio/AudioContext.idl:
+        * Modules/webaudio/MediaElementAudioSourceNode.cpp:
+        (WebCore::MediaElementAudioSourceNode::create):
+        (WebCore::MediaElementAudioSourceNode::MediaElementAudioSourceNode):
+        (WebCore::MediaElementAudioSourceNode::process):
+        * Modules/webaudio/MediaElementAudioSourceNode.h:
+        (WebCore::MediaElementAudioSourceNode::mediaElement):
+        * Modules/webaudio/OscillatorNode.idl:
+
 2016-04-20  Brady Eidson  <beid...@apple.com>
 
         Addressing additional review feedback for:

Modified: trunk/Source/WebCore/Modules/webaudio/AsyncAudioDecoder.cpp (199774 => 199775)


--- trunk/Source/WebCore/Modules/webaudio/AsyncAudioDecoder.cpp	2016-04-20 16:14:14 UTC (rev 199774)
+++ trunk/Source/WebCore/Modules/webaudio/AsyncAudioDecoder.cpp	2016-04-20 17:27:37 UTC (rev 199775)
@@ -51,14 +51,11 @@
     m_threadID = 0;
 }
 
-void AsyncAudioDecoder::decodeAsync(ArrayBuffer* audioData, float sampleRate, PassRefPtr<AudioBufferCallback> successCallback, PassRefPtr<AudioBufferCallback> errorCallback)
+void AsyncAudioDecoder::decodeAsync(Ref<ArrayBuffer>&& audioData, float sampleRate, RefPtr<AudioBufferCallback>&& successCallback, RefPtr<AudioBufferCallback>&& errorCallback)
 {
     ASSERT(isMainThread());
-    ASSERT(audioData);
-    if (!audioData)
-        return;
 
-    auto decodingTask = std::make_unique<DecodingTask>(audioData, sampleRate, successCallback, errorCallback);
+    auto decodingTask = std::make_unique<DecodingTask>(WTFMove(audioData), sampleRate, WTFMove(successCallback), WTFMove(errorCallback));
     m_queue.append(WTFMove(decodingTask)); // note that ownership of the task is effectively taken by the queue.
 }
 
@@ -87,20 +84,16 @@
     }
 }
 
-AsyncAudioDecoder::DecodingTask::DecodingTask(ArrayBuffer* audioData, float sampleRate, PassRefPtr<AudioBufferCallback> successCallback, PassRefPtr<AudioBufferCallback> errorCallback)
-    : m_audioData(audioData)
+AsyncAudioDecoder::DecodingTask::DecodingTask(Ref<ArrayBuffer>&& audioData, float sampleRate, RefPtr<AudioBufferCallback>&& successCallback, RefPtr<AudioBufferCallback>&& errorCallback)
+    : m_audioData(WTFMove(audioData))
     , m_sampleRate(sampleRate)
-    , m_successCallback(successCallback)
-    , m_errorCallback(errorCallback)
+    , m_successCallback(WTFMove(successCallback))
+    , m_errorCallback(WTFMove(errorCallback))
 {
 }
 
 void AsyncAudioDecoder::DecodingTask::decode()
 {
-    ASSERT(m_audioData.get());
-    if (!m_audioData.get())
-        return;
-
     // Do the actual decoding and invoke the callback.
     m_audioBuffer = AudioBuffer::createFromAudioFileData(m_audioData->data(), m_audioData->byteLength(), false, sampleRate());
     

Modified: trunk/Source/WebCore/Modules/webaudio/AsyncAudioDecoder.h (199774 => 199775)


--- trunk/Source/WebCore/Modules/webaudio/AsyncAudioDecoder.h	2016-04-20 16:14:14 UTC (rev 199774)
+++ trunk/Source/WebCore/Modules/webaudio/AsyncAudioDecoder.h	2016-04-20 17:27:37 UTC (rev 199775)
@@ -51,17 +51,17 @@
     ~AsyncAudioDecoder();
 
     // Must be called on the main thread.
-    void decodeAsync(JSC::ArrayBuffer* audioData, float sampleRate, PassRefPtr<AudioBufferCallback> successCallback, PassRefPtr<AudioBufferCallback> errorCallback);
+    void decodeAsync(Ref<JSC::ArrayBuffer>&& audioData, float sampleRate, RefPtr<AudioBufferCallback>&& successCallback, RefPtr<AudioBufferCallback>&& errorCallback);
 
 private:
     class DecodingTask {
         WTF_MAKE_NONCOPYABLE(DecodingTask);
     public:
-        DecodingTask(JSC::ArrayBuffer* audioData, float sampleRate, PassRefPtr<AudioBufferCallback> successCallback, PassRefPtr<AudioBufferCallback> errorCallback);
+        DecodingTask(Ref<JSC::ArrayBuffer>&& audioData, float sampleRate, RefPtr<AudioBufferCallback>&& successCallback, RefPtr<AudioBufferCallback>&& errorCallback);
         void decode();
         
     private:
-        JSC::ArrayBuffer* audioData() { return m_audioData.get(); }
+        JSC::ArrayBuffer& audioData() { return m_audioData; }
         float sampleRate() const { return m_sampleRate; }
         AudioBufferCallback* successCallback() { return m_successCallback.get(); }
         AudioBufferCallback* errorCallback() { return m_errorCallback.get(); }
@@ -69,7 +69,7 @@
 
         void notifyComplete();
 
-        RefPtr<JSC::ArrayBuffer> m_audioData;
+        Ref<JSC::ArrayBuffer> m_audioData;
         float m_sampleRate;
         RefPtr<AudioBufferCallback> m_successCallback;
         RefPtr<AudioBufferCallback> m_errorCallback;

Modified: trunk/Source/WebCore/Modules/webaudio/AudioContext.cpp (199774 => 199775)


--- trunk/Source/WebCore/Modules/webaudio/AudioContext.cpp	2016-04-20 16:14:14 UTC (rev 199774)
+++ trunk/Source/WebCore/Modules/webaudio/AudioContext.cpp	2016-04-20 17:27:37 UTC (rev 199775)
@@ -1,5 +1,6 @@
 /*
- * Copyright (C) 2010, Google Inc. All rights reserved.
+ * Copyright (C) 2010 Google Inc. All rights reserved.
+ * Copyright (C) 2016 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -212,7 +213,7 @@
     if (m_isAudioThreadFinished)
         return;
 
-    if (m_destinationNode.get()) {
+    if (m_destinationNode) {
         m_destinationNode->initialize();
 
         if (!isOfflineContext()) {
@@ -377,10 +378,10 @@
     return downcast<Document>(m_scriptExecutionContext);
 }
 
-PassRefPtr<AudioBuffer> AudioContext::createBuffer(unsigned numberOfChannels, size_t numberOfFrames, float sampleRate, ExceptionCode& ec)
+RefPtr<AudioBuffer> AudioContext::createBuffer(unsigned numberOfChannels, size_t numberOfFrames, float sampleRate, ExceptionCode& ec)
 {
     RefPtr<AudioBuffer> audioBuffer = AudioBuffer::create(numberOfChannels, numberOfFrames, sampleRate);
-    if (!audioBuffer.get()) {
+    if (!audioBuffer) {
         ec = NOT_SUPPORTED_ERR;
         return nullptr;
     }
@@ -388,84 +389,62 @@
     return audioBuffer;
 }
 
-PassRefPtr<AudioBuffer> AudioContext::createBuffer(ArrayBuffer* arrayBuffer, bool mixToMono, ExceptionCode& ec)
+RefPtr<AudioBuffer> AudioContext::createBuffer(ArrayBuffer& arrayBuffer, bool mixToMono, ExceptionCode& ec)
 {
-    ASSERT(arrayBuffer);
-    if (!arrayBuffer) {
+    RefPtr<AudioBuffer> audioBuffer = AudioBuffer::createFromAudioFileData(arrayBuffer.data(), arrayBuffer.byteLength(), mixToMono, sampleRate());
+    if (!audioBuffer) {
         ec = SYNTAX_ERR;
         return nullptr;
     }
 
-    RefPtr<AudioBuffer> audioBuffer = AudioBuffer::createFromAudioFileData(arrayBuffer->data(), arrayBuffer->byteLength(), mixToMono, sampleRate());
-    if (!audioBuffer.get()) {
-        ec = SYNTAX_ERR;
-        return nullptr;
-    }
-
     return audioBuffer;
 }
 
-void AudioContext::decodeAudioData(ArrayBuffer* audioData, PassRefPtr<AudioBufferCallback> successCallback, PassRefPtr<AudioBufferCallback> errorCallback, ExceptionCode& ec)
+void AudioContext::decodeAudioData(Ref<ArrayBuffer>&& audioData, RefPtr<AudioBufferCallback>&& successCallback, RefPtr<AudioBufferCallback>&& errorCallback)
 {
-    if (!audioData) {
-        ec = SYNTAX_ERR;
-        return;
-    }
-    m_audioDecoder.decodeAsync(audioData, sampleRate(), successCallback, errorCallback);
+    m_audioDecoder.decodeAsync(WTFMove(audioData), sampleRate(), WTFMove(successCallback), WTFMove(errorCallback));
 }
 
-PassRefPtr<AudioBufferSourceNode> AudioContext::createBufferSource()
+Ref<AudioBufferSourceNode> AudioContext::createBufferSource()
 {
     ASSERT(isMainThread());
     lazyInitialize();
-    RefPtr<AudioBufferSourceNode> node = AudioBufferSourceNode::create(*this, m_destinationNode->sampleRate());
+    Ref<AudioBufferSourceNode> node = AudioBufferSourceNode::create(*this, m_destinationNode->sampleRate());
 
     // Because this is an AudioScheduledSourceNode, the context keeps a reference until it has finished playing.
     // When this happens, AudioScheduledSourceNode::finish() calls AudioContext::notifyNodeFinishedProcessing().
-    refNode(node.get());
+    refNode(node);
 
     return node;
 }
 
 #if ENABLE(VIDEO)
-PassRefPtr<MediaElementAudioSourceNode> AudioContext::createMediaElementSource(HTMLMediaElement* mediaElement, ExceptionCode& ec)
+RefPtr<MediaElementAudioSourceNode> AudioContext::createMediaElementSource(HTMLMediaElement& mediaElement, ExceptionCode& ec)
 {
-    ASSERT(mediaElement);
-    if (!mediaElement) {
-        ec = INVALID_STATE_ERR;
-        return nullptr;
-    }
-        
     ASSERT(isMainThread());
     lazyInitialize();
     
     // First check if this media element already has a source node.
-    if (mediaElement->audioSourceNode()) {
+    if (mediaElement.audioSourceNode()) {
         ec = INVALID_STATE_ERR;
         return nullptr;
     }
         
-    RefPtr<MediaElementAudioSourceNode> node = MediaElementAudioSourceNode::create(*this, mediaElement);
+    Ref<MediaElementAudioSourceNode> node = MediaElementAudioSourceNode::create(*this, mediaElement);
 
-    mediaElement->setAudioSourceNode(node.get());
+    mediaElement.setAudioSourceNode(node.ptr());
 
     refNode(node.get()); // context keeps reference until node is disconnected
-    return node;
+    return WTFMove(node);
 }
 #endif
 
 #if ENABLE(MEDIA_STREAM)
-PassRefPtr<MediaStreamAudioSourceNode> AudioContext::createMediaStreamSource(MediaStream* mediaStream, ExceptionCode& ec)
+RefPtr<MediaStreamAudioSourceNode> AudioContext::createMediaStreamSource(MediaStream& mediaStream, ExceptionCode& ec)
 {
     ASSERT(isMainThread());
 
-    ASSERT(mediaStream);
-    if (!mediaStream) {
-        ec = INVALID_STATE_ERR;
-        return nullptr;
-    }
-
-    auto audioTracks = mediaStream->getAudioTracks();
+    auto audioTracks = mediaStream.getAudioTracks();
     if (audioTracks.isEmpty()) {
         ec = INVALID_STATE_ERR;
         return nullptr;
@@ -486,14 +465,14 @@
 
     lazyInitialize();
 
-    auto node = MediaStreamAudioSourceNode::create(*this, *mediaStream, *providerTrack);
+    auto node = MediaStreamAudioSourceNode::create(*this, mediaStream, *providerTrack);
     node->setFormat(2, sampleRate());
 
-    refNode(&node.get()); // context keeps reference until node is disconnected
-    return &node.get();
+    refNode(node); // context keeps reference until node is disconnected
+    return WTFMove(node);
 }
 
-PassRefPtr<MediaStreamAudioDestinationNode> AudioContext::createMediaStreamDestination()
+Ref<MediaStreamAudioDestinationNode> AudioContext::createMediaStreamDestination()
 {
     // FIXME: Add support for an optional argument which specifies the number of channels.
     // FIXME: The default should probably be stereo instead of mono.
@@ -502,112 +481,112 @@
 
 #endif
 
-PassRefPtr<ScriptProcessorNode> AudioContext::createScriptProcessor(size_t bufferSize, ExceptionCode& ec)
+RefPtr<ScriptProcessorNode> AudioContext::createScriptProcessor(size_t bufferSize, ExceptionCode& ec)
 {
     // Set number of input/output channels to stereo by default.
     return createScriptProcessor(bufferSize, 2, 2, ec);
 }
 
-PassRefPtr<ScriptProcessorNode> AudioContext::createScriptProcessor(size_t bufferSize, size_t numberOfInputChannels, ExceptionCode& ec)
+RefPtr<ScriptProcessorNode> AudioContext::createScriptProcessor(size_t bufferSize, size_t numberOfInputChannels, ExceptionCode& ec)
 {
     // Set number of output channels to stereo by default.
     return createScriptProcessor(bufferSize, numberOfInputChannels, 2, ec);
 }
 
-PassRefPtr<ScriptProcessorNode> AudioContext::createScriptProcessor(size_t bufferSize, size_t numberOfInputChannels, size_t numberOfOutputChannels, ExceptionCode& ec)
+RefPtr<ScriptProcessorNode> AudioContext::createScriptProcessor(size_t bufferSize, size_t numberOfInputChannels, size_t numberOfOutputChannels, ExceptionCode& ec)
 {
     ASSERT(isMainThread());
     lazyInitialize();
     RefPtr<ScriptProcessorNode> node = ScriptProcessorNode::create(*this, m_destinationNode->sampleRate(), bufferSize, numberOfInputChannels, numberOfOutputChannels);
 
-    if (!node.get()) {
+    if (!node) {
         ec = INDEX_SIZE_ERR;
         return nullptr;
     }
 
-    refNode(node.get()); // context keeps reference until we stop making _javascript_ rendering callbacks
+    refNode(*node); // context keeps reference until we stop making _javascript_ rendering callbacks
     return node;
 }
 
-PassRefPtr<BiquadFilterNode> AudioContext::createBiquadFilter()
+Ref<BiquadFilterNode> AudioContext::createBiquadFilter()
 {
     ASSERT(isMainThread());
     lazyInitialize();
     return BiquadFilterNode::create(*this, m_destinationNode->sampleRate());
 }
 
-PassRefPtr<WaveShaperNode> AudioContext::createWaveShaper()
+Ref<WaveShaperNode> AudioContext::createWaveShaper()
 {
     ASSERT(isMainThread());
     lazyInitialize();
     return WaveShaperNode::create(*this);
 }
 
-PassRefPtr<PannerNode> AudioContext::createPanner()
+Ref<PannerNode> AudioContext::createPanner()
 {
     ASSERT(isMainThread());
     lazyInitialize();
     return PannerNode::create(*this, m_destinationNode->sampleRate());
 }
 
-PassRefPtr<ConvolverNode> AudioContext::createConvolver()
+Ref<ConvolverNode> AudioContext::createConvolver()
 {
     ASSERT(isMainThread());
     lazyInitialize();
     return ConvolverNode::create(*this, m_destinationNode->sampleRate());
 }
 
-PassRefPtr<DynamicsCompressorNode> AudioContext::createDynamicsCompressor()
+Ref<DynamicsCompressorNode> AudioContext::createDynamicsCompressor()
 {
     ASSERT(isMainThread());
     lazyInitialize();
     return DynamicsCompressorNode::create(*this, m_destinationNode->sampleRate());
 }
 
-PassRefPtr<AnalyserNode> AudioContext::createAnalyser()
+Ref<AnalyserNode> AudioContext::createAnalyser()
 {
     ASSERT(isMainThread());
     lazyInitialize();
     return AnalyserNode::create(*this, m_destinationNode->sampleRate());
 }
 
-PassRefPtr<GainNode> AudioContext::createGain()
+Ref<GainNode> AudioContext::createGain()
 {
     ASSERT(isMainThread());
     lazyInitialize();
     return GainNode::create(*this, m_destinationNode->sampleRate());
 }
 
-PassRefPtr<DelayNode> AudioContext::createDelay(ExceptionCode& ec)
+RefPtr<DelayNode> AudioContext::createDelay(ExceptionCode& ec)
 {
     const double defaultMaxDelayTime = 1;
     return createDelay(defaultMaxDelayTime, ec);
 }
 
-PassRefPtr<DelayNode> AudioContext::createDelay(double maxDelayTime, ExceptionCode& ec)
+RefPtr<DelayNode> AudioContext::createDelay(double maxDelayTime, ExceptionCode& ec)
 {
     ASSERT(isMainThread());
     lazyInitialize();
-    RefPtr<DelayNode> node = DelayNode::create(*this, m_destinationNode->sampleRate(), maxDelayTime, ec);
+    Ref<DelayNode> node = DelayNode::create(*this, m_destinationNode->sampleRate(), maxDelayTime, ec);
     if (ec)
         return nullptr;
-    return node;
+    return WTFMove(node);
 }
 
-PassRefPtr<ChannelSplitterNode> AudioContext::createChannelSplitter(ExceptionCode& ec)
+RefPtr<ChannelSplitterNode> AudioContext::createChannelSplitter(ExceptionCode& ec)
 {
     const unsigned ChannelSplitterDefaultNumberOfOutputs = 6;
     return createChannelSplitter(ChannelSplitterDefaultNumberOfOutputs, ec);
 }
 
-PassRefPtr<ChannelSplitterNode> AudioContext::createChannelSplitter(size_t numberOfOutputs, ExceptionCode& ec)
+RefPtr<ChannelSplitterNode> AudioContext::createChannelSplitter(size_t numberOfOutputs, ExceptionCode& ec)
 {
     ASSERT(isMainThread());
     lazyInitialize();
 
     RefPtr<ChannelSplitterNode> node = ChannelSplitterNode::create(*this, m_destinationNode->sampleRate(), numberOfOutputs);
 
-    if (!node.get()) {
+    if (!node) {
         ec = INDEX_SIZE_ERR;
         return nullptr;
     }
@@ -615,20 +594,20 @@
     return node;
 }
 
-PassRefPtr<ChannelMergerNode> AudioContext::createChannelMerger(ExceptionCode& ec)
+RefPtr<ChannelMergerNode> AudioContext::createChannelMerger(ExceptionCode& ec)
 {
     const unsigned ChannelMergerDefaultNumberOfInputs = 6;
     return createChannelMerger(ChannelMergerDefaultNumberOfInputs, ec);
 }
 
-PassRefPtr<ChannelMergerNode> AudioContext::createChannelMerger(size_t numberOfInputs, ExceptionCode& ec)
+RefPtr<ChannelMergerNode> AudioContext::createChannelMerger(size_t numberOfInputs, ExceptionCode& ec)
 {
     ASSERT(isMainThread());
     lazyInitialize();
 
     RefPtr<ChannelMergerNode> node = ChannelMergerNode::create(*this, m_destinationNode->sampleRate(), numberOfInputs);
 
-    if (!node.get()) {
+    if (!node) {
         ec = INDEX_SIZE_ERR;
         return nullptr;
     }
@@ -636,21 +615,21 @@
     return node;
 }
 
-PassRefPtr<OscillatorNode> AudioContext::createOscillator()
+Ref<OscillatorNode> AudioContext::createOscillator()
 {
     ASSERT(isMainThread());
     lazyInitialize();
 
-    RefPtr<OscillatorNode> node = OscillatorNode::create(*this, m_destinationNode->sampleRate());
+    Ref<OscillatorNode> node = OscillatorNode::create(*this, m_destinationNode->sampleRate());
 
     // Because this is an AudioScheduledSourceNode, the context keeps a reference until it has finished playing.
     // When this happens, AudioScheduledSourceNode::finish() calls AudioContext::notifyNodeFinishedProcessing().
-    refNode(node.get());
+    refNode(node);
 
     return node;
 }
 
-PassRefPtr<PeriodicWave> AudioContext::createPeriodicWave(Float32Array* real, Float32Array* imag, ExceptionCode& ec)
+RefPtr<PeriodicWave> AudioContext::createPeriodicWave(Float32Array* real, Float32Array* imag, ExceptionCode& ec)
 {
     ASSERT(isMainThread());
     
@@ -674,28 +653,28 @@
     ASSERT(isGraphOwner());
     ASSERT(isAudioThread() || isAudioThreadFinished());
     for (auto& node : m_finishedNodes)
-        derefNode(node);
+        derefNode(*node);
 
     m_finishedNodes.clear();
 }
 
-void AudioContext::refNode(AudioNode* node)
+void AudioContext::refNode(AudioNode& node)
 {
     ASSERT(isMainThread());
     AutoLocker locker(*this);
     
-    node->ref(AudioNode::RefTypeConnection);
-    m_referencedNodes.append(node);
+    node.ref(AudioNode::RefTypeConnection);
+    m_referencedNodes.append(&node);
 }
 
-void AudioContext::derefNode(AudioNode* node)
+void AudioContext::derefNode(AudioNode& node)
 {
     ASSERT(isGraphOwner());
     
-    node->deref(AudioNode::RefTypeConnection);
+    node.deref(AudioNode::RefTypeConnection);
 
-    ASSERT(m_referencedNodes.contains(node));
-    m_referencedNodes.removeFirst(node);
+    ASSERT(m_referencedNodes.contains(&node));
+    m_referencedNodes.removeFirst(&node);
 }
 
 void AudioContext::derefUnfinishedSourceNodes()

Modified: trunk/Source/WebCore/Modules/webaudio/AudioContext.h (199774 => 199775)


--- trunk/Source/WebCore/Modules/webaudio/AudioContext.h	2016-04-20 16:14:14 UTC (rev 199774)
+++ trunk/Source/WebCore/Modules/webaudio/AudioContext.h	2016-04-20 17:27:37 UTC (rev 199775)
@@ -1,5 +1,6 @@
 /*
- * Copyright (C) 2010, Google Inc. All rights reserved.
+ * Copyright (C) 2010 Google Inc. All rights reserved.
+ * Copyright (C) 2016 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -38,7 +39,6 @@
 #include <atomic>
 #include <wtf/HashSet.h>
 #include <wtf/MainThread.h>
-#include <wtf/PassRefPtr.h>
 #include <wtf/RefCounted.h>
 #include <wtf/RefPtr.h>
 #include <wtf/ThreadSafeRefCounted.h>
@@ -101,11 +101,11 @@
     void incrementActiveSourceCount();
     void decrementActiveSourceCount();
     
-    PassRefPtr<AudioBuffer> createBuffer(unsigned numberOfChannels, size_t numberOfFrames, float sampleRate, ExceptionCode&);
-    PassRefPtr<AudioBuffer> createBuffer(ArrayBuffer*, bool mixToMono, ExceptionCode&);
+    RefPtr<AudioBuffer> createBuffer(unsigned numberOfChannels, size_t numberOfFrames, float sampleRate, ExceptionCode&);
+    RefPtr<AudioBuffer> createBuffer(ArrayBuffer&, bool mixToMono, ExceptionCode&);
 
     // Asynchronous audio file data decoding.
-    void decodeAudioData(ArrayBuffer*, PassRefPtr<AudioBufferCallback>, PassRefPtr<AudioBufferCallback>, ExceptionCode& ec);
+    void decodeAudioData(Ref<ArrayBuffer>&&, RefPtr<AudioBufferCallback>&&, RefPtr<AudioBufferCallback>&&);
 
     AudioListener* listener() { return m_listener.get(); }
 
@@ -121,32 +121,32 @@
     const AtomicString& state() const;
 
     // The AudioNode create methods are called on the main thread (from JavaScript).
-    PassRefPtr<AudioBufferSourceNode> createBufferSource();
+    Ref<AudioBufferSourceNode> createBufferSource();
 #if ENABLE(VIDEO)
-    PassRefPtr<MediaElementAudioSourceNode> createMediaElementSource(HTMLMediaElement*, ExceptionCode&);
+    RefPtr<MediaElementAudioSourceNode> createMediaElementSource(HTMLMediaElement&, ExceptionCode&);
 #endif
 #if ENABLE(MEDIA_STREAM)
-    PassRefPtr<MediaStreamAudioSourceNode> createMediaStreamSource(MediaStream*, ExceptionCode&);
-    PassRefPtr<MediaStreamAudioDestinationNode> createMediaStreamDestination();
+    RefPtr<MediaStreamAudioSourceNode> createMediaStreamSource(MediaStream&, ExceptionCode&);
+    Ref<MediaStreamAudioDestinationNode> createMediaStreamDestination();
 #endif
-    PassRefPtr<GainNode> createGain();
-    PassRefPtr<BiquadFilterNode> createBiquadFilter();
-    PassRefPtr<WaveShaperNode> createWaveShaper();
-    PassRefPtr<DelayNode> createDelay(ExceptionCode&);
-    PassRefPtr<DelayNode> createDelay(double maxDelayTime, ExceptionCode&);
-    PassRefPtr<PannerNode> createPanner();
-    PassRefPtr<ConvolverNode> createConvolver();
-    PassRefPtr<DynamicsCompressorNode> createDynamicsCompressor();    
-    PassRefPtr<AnalyserNode> createAnalyser();
-    PassRefPtr<ScriptProcessorNode> createScriptProcessor(size_t bufferSize, ExceptionCode&);
-    PassRefPtr<ScriptProcessorNode> createScriptProcessor(size_t bufferSize, size_t numberOfInputChannels, ExceptionCode&);
-    PassRefPtr<ScriptProcessorNode> createScriptProcessor(size_t bufferSize, size_t numberOfInputChannels, size_t numberOfOutputChannels, ExceptionCode&);
-    PassRefPtr<ChannelSplitterNode> createChannelSplitter(ExceptionCode&);
-    PassRefPtr<ChannelSplitterNode> createChannelSplitter(size_t numberOfOutputs, ExceptionCode&);
-    PassRefPtr<ChannelMergerNode> createChannelMerger(ExceptionCode&);
-    PassRefPtr<ChannelMergerNode> createChannelMerger(size_t numberOfInputs, ExceptionCode&);
-    PassRefPtr<OscillatorNode> createOscillator();
-    PassRefPtr<PeriodicWave> createPeriodicWave(Float32Array* real, Float32Array* imag, ExceptionCode&);
+    Ref<GainNode> createGain();
+    Ref<BiquadFilterNode> createBiquadFilter();
+    Ref<WaveShaperNode> createWaveShaper();
+    RefPtr<DelayNode> createDelay(ExceptionCode&);
+    RefPtr<DelayNode> createDelay(double maxDelayTime, ExceptionCode&);
+    Ref<PannerNode> createPanner();
+    Ref<ConvolverNode> createConvolver();
+    Ref<DynamicsCompressorNode> createDynamicsCompressor();
+    Ref<AnalyserNode> createAnalyser();
+    RefPtr<ScriptProcessorNode> createScriptProcessor(size_t bufferSize, ExceptionCode&);
+    RefPtr<ScriptProcessorNode> createScriptProcessor(size_t bufferSize, size_t numberOfInputChannels, ExceptionCode&);
+    RefPtr<ScriptProcessorNode> createScriptProcessor(size_t bufferSize, size_t numberOfInputChannels, size_t numberOfOutputChannels, ExceptionCode&);
+    RefPtr<ChannelSplitterNode> createChannelSplitter(ExceptionCode&);
+    RefPtr<ChannelSplitterNode> createChannelSplitter(size_t numberOfOutputs, ExceptionCode&);
+    RefPtr<ChannelMergerNode> createChannelMerger(ExceptionCode&);
+    RefPtr<ChannelMergerNode> createChannelMerger(size_t numberOfInputs, ExceptionCode&);
+    Ref<OscillatorNode> createOscillator();
+    RefPtr<PeriodicWave> createPeriodicWave(Float32Array* real, Float32Array* imag, ExceptionCode&);
 
     // When a source node has no more processing to do (has finished playing), then it tells the context to dereference it.
     void notifyNodeFinishedProcessing(AudioNode*);
@@ -303,8 +303,8 @@
     // In turn, these nodes reference all nodes they're connected to.  All nodes are ultimately connected to the AudioDestinationNode.
     // When the context dereferences a source node, it will be deactivated from the rendering graph along with all other nodes it is
     // uniquely connected to.  See the AudioNode::ref() and AudioNode::deref() methods for more details.
-    void refNode(AudioNode*);
-    void derefNode(AudioNode*);
+    void refNode(AudioNode&);
+    void derefNode(AudioNode&);
 
     // ActiveDOMObject API.
     void stop() override;
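The header changes above follow a consistent pattern: create methods that cannot fail now return `Ref<T>` (never null), while methods whose creation can fail and set an `ExceptionCode` return `RefPtr<T>`. As a rough illustration of why that distinction helps call sites (a minimal sketch using toy stand-ins, not WTF's actual `Ref`/`RefPtr` implementation):

```cpp
#include <cassert>
#include <memory>
#include <utility>

// Toy analogue of WTF::Ref<T>: a smart pointer that is never null, so
// callers need no null check. (WTF's real Ref is intrusive; this sketch
// uses shared_ptr purely for illustration.)
template<typename T>
class Ref {
public:
    explicit Ref(std::shared_ptr<T> ptr) : m_ptr(std::move(ptr)) { assert(m_ptr); }
    T& get() const { return *m_ptr; } // no null path exists
private:
    std::shared_ptr<T> m_ptr; // invariant: never null
};

struct GainNode { float gain = 1.0f; };
struct DelayNode { double maxDelayTime; };

// Infallible factory, like createGain()/createPanner(): returns Ref.
Ref<GainNode> createGain()
{
    return Ref<GainNode>(std::make_shared<GainNode>());
}

// Fallible factory, like createDelay(maxDelayTime, ec): nullptr models
// the path where WebCore would set an ExceptionCode and return null.
std::shared_ptr<DelayNode> createDelay(double maxDelayTime)
{
    if (maxDelayTime <= 0)
        return nullptr;
    return std::make_shared<DelayNode>(DelayNode { maxDelayTime });
}
```

Callers of the `Ref`-returning factories can use the result directly, while callers of the `RefPtr`-style ones must still check for null.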

Modified: trunk/Source/WebCore/Modules/webaudio/AudioContext.idl (199774 => 199775)


--- trunk/Source/WebCore/Modules/webaudio/AudioContext.idl	2016-04-20 16:14:14 UTC (rev 199774)
+++ trunk/Source/WebCore/Modules/webaudio/AudioContext.idl	2016-04-20 17:27:37 UTC (rev 199775)
@@ -31,13 +31,12 @@
 };
 
 [
-    EnabledBySetting=WebAudio,
-    Conditional=WEB_AUDIO,
     ActiveDOMObject,
+    Conditional=WEB_AUDIO,
     CustomConstructor,
-    UsePointersEvenForNonNullableObjectArguments,
-    InterfaceName=webkitAudioContext,
+    EnabledBySetting=WebAudio,
     ExportMacro=WEBCORE_EXPORT,
+    InterfaceName=webkitAudioContext,
 ] interface AudioContext : EventTarget {
     // All rendered audio ultimately connects to destination, which represents the audio hardware.
     readonly attribute AudioDestinationNode destination;
@@ -62,22 +61,19 @@
     readonly attribute unsigned long activeSourceCount;
     
     [RaisesException] AudioBuffer createBuffer(unsigned long numberOfChannels, unsigned long numberOfFrames, unrestricted float sampleRate);
-    [RaisesException] AudioBuffer createBuffer(ArrayBuffer? buffer, boolean mixToMono);
+    [RaisesException] AudioBuffer createBuffer(ArrayBuffer buffer, boolean mixToMono);
 
     // Asynchronous audio file data decoding.
-    [RaisesException] void decodeAudioData(ArrayBuffer audioData, AudioBufferCallback successCallback, optional AudioBufferCallback errorCallback);
+    // FIXME: successCallback should be optional and the callbacks should not be nullable. This should also return a Promise.
+    void decodeAudioData(ArrayBuffer audioData, AudioBufferCallback? successCallback, optional AudioBufferCallback? errorCallback);
 
     // Sources
     AudioBufferSourceNode createBufferSource();
 
-#if defined(ENABLE_VIDEO) && ENABLE_VIDEO
-    [RaisesException] MediaElementAudioSourceNode createMediaElementSource(HTMLMediaElement mediaElement);
-#endif
+    [Conditional=VIDEO, RaisesException] MediaElementAudioSourceNode createMediaElementSource(HTMLMediaElement mediaElement);
 
-#if defined(ENABLE_MEDIA_STREAM) && ENABLE_MEDIA_STREAM
-    [RaisesException] MediaStreamAudioSourceNode createMediaStreamSource(MediaStream mediaStream);
-    MediaStreamAudioDestinationNode createMediaStreamDestination();
-#endif
+    [Conditional=MEDIA_STREAM, RaisesException] MediaStreamAudioSourceNode createMediaStreamSource(MediaStream mediaStream);
+    [Conditional=MEDIA_STREAM] MediaStreamAudioDestinationNode createMediaStreamDestination();
 
     // Processing nodes
     GainNode createGain();
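The IDL change above drops the nullability marker from arguments such as `createBuffer(ArrayBuffer buffer, ...)`, which is what makes the bindings reject null with a TypeError before the implementation runs; the C++ side can then take a reference instead of a pointer. A hypothetical, simplified sketch of that binding pattern (the names and the error message below are illustrative, not WebCore's generated code):

```cpp
#include <cassert>
#include <cstddef>
#include <stdexcept>

struct ArrayBuffer { size_t byteLength = 0; };

struct TypeError : std::runtime_error {
    using std::runtime_error::runtime_error;
};

// Implementation side: a reference parameter, so there is no null case
// to handle inside the method.
size_t createBufferImpl(ArrayBuffer& buffer)
{
    return buffer.byteLength;
}

// Binding side: a non-nullable IDL argument means the generated binding
// throws TypeError for null before calling the implementation.
size_t jsCreateBuffer(ArrayBuffer* jsArgument)
{
    if (!jsArgument)
        throw TypeError("argument must be an instance of ArrayBuffer");
    return createBufferImpl(*jsArgument);
}
```

This matches the note in the change log that the only Web-exposed behavioral difference is the exception type thrown when passing null (now always TypeError).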

Modified: trunk/Source/WebCore/Modules/webaudio/MediaElementAudioSourceNode.cpp (199774 => 199775)


--- trunk/Source/WebCore/Modules/webaudio/MediaElementAudioSourceNode.cpp	2016-04-20 16:14:14 UTC (rev 199774)
+++ trunk/Source/WebCore/Modules/webaudio/MediaElementAudioSourceNode.cpp	2016-04-20 17:27:37 UTC (rev 199775)
@@ -40,12 +40,12 @@
 
 namespace WebCore {
 
-Ref<MediaElementAudioSourceNode> MediaElementAudioSourceNode::create(AudioContext& context, HTMLMediaElement* mediaElement)
+Ref<MediaElementAudioSourceNode> MediaElementAudioSourceNode::create(AudioContext& context, HTMLMediaElement& mediaElement)
 {
     return adoptRef(*new MediaElementAudioSourceNode(context, mediaElement));
 }
 
-MediaElementAudioSourceNode::MediaElementAudioSourceNode(AudioContext& context, HTMLMediaElement* mediaElement)
+MediaElementAudioSourceNode::MediaElementAudioSourceNode(AudioContext& context, HTMLMediaElement& mediaElement)
     : AudioNode(context, context.sampleRate())
     , m_mediaElement(mediaElement)
     , m_sourceNumberOfChannels(0)
@@ -104,7 +104,7 @@
 {
     AudioBus* outputBus = output(0)->bus();
 
-    if (!mediaElement() || !m_sourceNumberOfChannels || !m_sourceSampleRate) {
+    if (!m_sourceNumberOfChannels || !m_sourceSampleRate) {
         outputBus->zero();
         return;
     }
@@ -119,7 +119,7 @@
         return;
     }
 
-    if (AudioSourceProvider* provider = mediaElement()->audioSourceProvider()) {
+    if (AudioSourceProvider* provider = mediaElement().audioSourceProvider()) {
         if (m_multiChannelResampler.get()) {
             ASSERT(m_sourceSampleRate != sampleRate());
             m_multiChannelResampler->process(provider, outputBus, numberOfFrames);

Modified: trunk/Source/WebCore/Modules/webaudio/MediaElementAudioSourceNode.h (199774 => 199775)


--- trunk/Source/WebCore/Modules/webaudio/MediaElementAudioSourceNode.h	2016-04-20 16:14:14 UTC (rev 199774)
+++ trunk/Source/WebCore/Modules/webaudio/MediaElementAudioSourceNode.h	2016-04-20 17:27:37 UTC (rev 199775)
@@ -41,11 +41,11 @@
     
 class MediaElementAudioSourceNode : public AudioNode, public AudioSourceProviderClient {
 public:
-    static Ref<MediaElementAudioSourceNode> create(AudioContext&, HTMLMediaElement*);
+    static Ref<MediaElementAudioSourceNode> create(AudioContext&, HTMLMediaElement&);
 
     virtual ~MediaElementAudioSourceNode();
 
-    HTMLMediaElement* mediaElement() { return m_mediaElement.get(); }                                        
+    HTMLMediaElement& mediaElement() { return m_mediaElement; }
 
     // AudioNode
     void process(size_t framesToProcess) override;
@@ -58,7 +58,7 @@
     void unlock();
 
 private:
-    MediaElementAudioSourceNode(AudioContext&, HTMLMediaElement*);
+    MediaElementAudioSourceNode(AudioContext&, HTMLMediaElement&);
 
     double tailTime() const override { return 0; }
     double latencyTime() const override { return 0; }
@@ -66,7 +66,7 @@
     // As an audio source, we will never propagate silence.
     bool propagatesSilence() const override { return false; }
 
-    RefPtr<HTMLMediaElement> m_mediaElement;
+    Ref<HTMLMediaElement> m_mediaElement;
     Lock m_processMutex;
 
     unsigned m_sourceNumberOfChannels;

Modified: trunk/Source/WebCore/Modules/webaudio/OscillatorNode.idl (199774 => 199775)


--- trunk/Source/WebCore/Modules/webaudio/OscillatorNode.idl	2016-04-20 16:14:14 UTC (rev 199774)
+++ trunk/Source/WebCore/Modules/webaudio/OscillatorNode.idl	2016-04-20 17:27:37 UTC (rev 199775)
@@ -25,7 +25,6 @@
 // OscillatorNode is an audio generator of periodic waveforms.
 [
     Conditional=WEB_AUDIO,
-    UsePointersEvenForNonNullableObjectArguments,
     JSGenerateToJSObject,
 ] interface OscillatorNode : AudioNode {
 
@@ -55,7 +54,7 @@
     [Conditional=LEGACY_WEB_AUDIO, RaisesException] void noteOn(unrestricted double when);
     [Conditional=LEGACY_WEB_AUDIO, RaisesException] void noteOff(unrestricted double when);
 
-    void setPeriodicWave(PeriodicWave wave);
+    void setPeriodicWave(PeriodicWave? wave); // FIXME: The parameter should not be nullable.
 
     attribute EventHandler onended;
 };