Title: [120913] trunk
Revision: 120913
Author: [email protected]
Date: 2012-06-21 03:23:15 -0700 (Thu, 21 Jun 2012)

Log Message

Speech JavaScript API: SpeechRecognition should hook up with ActiveDOMObject more
https://bugs.webkit.org/show_bug.cgi?id=89217

Reviewed by Adam Barth.

Source/WebCore:

Previously, the SpeechRecognition was kept alive while waiting for
pending events by making the embedder hold a reference to the object.
We should do this by using ActiveDOMObject's setPendingActivity() instead.

Also, override ActiveDOMObject::stop() to get notified when the user
leaves the page.

Test: fast/speech/scripted/navigate-away.html

* Modules/speech/SpeechRecognition.cpp:
(WebCore::SpeechRecognition::start):
(WebCore::SpeechRecognition::didEnd):
(WebCore::SpeechRecognition::stop):
(WebCore):
* Modules/speech/SpeechRecognition.h:
(SpeechRecognition):

Tools:

Add a method for checking whether the mock speech
recognition was aborted.

Also redo the way the mock posts tasks: instead of posting them all at once,
maintain an internal queue of task objects and call postTask() for them
one at a time. This means that when, for example, the page is navigated
away and abort() is called, that call doesn't end up behind a bunch
of previously posted events on the event loop.

* DumpRenderTree/chromium/LayoutTestController.cpp:
(LayoutTestController::LayoutTestController):
(LayoutTestController::wasMockSpeechRecognitionAborted):
* DumpRenderTree/chromium/LayoutTestController.h:
(LayoutTestController):
* DumpRenderTree/chromium/MockWebSpeechRecognizer.cpp:
(WebKit::ClientCallTask::ClientCallTask):
(WebKit::ResultTask::ResultTask):
(WebKit::NoMatchTask::NoMatchTask):
(WebKit::ErrorTask::ErrorTask):
(MockWebSpeechRecognizer::start):
(MockWebSpeechRecognizer::abort):
(MockWebSpeechRecognizer::setError):
(MockWebSpeechRecognizer::MockWebSpeechRecognizer):
(MockWebSpeechRecognizer::startTaskQueue):
(MockWebSpeechRecognizer::StepTask::runIfValid):
* DumpRenderTree/chromium/MockWebSpeechRecognizer.h:
(MockWebSpeechRecognizer::hasBeenAborted):
(MockWebSpeechRecognizer):
(MockWebSpeechRecognizer::taskList):
(Task):
(MockWebSpeechRecognizer::Task::Task):
(MockWebSpeechRecognizer::Task::~Task):
(StepTask):
(MockWebSpeechRecognizer::StepTask::StepTask):

LayoutTests:

Add a layout test to check that speech recognition gets aborted when
navigating away from the page.

* fast/speech/scripted/navigate-away-expected.txt: Added.
* fast/speech/scripted/navigate-away-iframe-expected.txt: Added.
* fast/speech/scripted/navigate-away-iframe.html: Added.
* fast/speech/scripted/navigate-away.html: Added.

Diff

Modified: trunk/LayoutTests/ChangeLog (120912 => 120913)


--- trunk/LayoutTests/ChangeLog	2012-06-21 10:06:23 UTC (rev 120912)
+++ trunk/LayoutTests/ChangeLog	2012-06-21 10:23:15 UTC (rev 120913)
@@ -1,3 +1,18 @@
+2012-06-20  Hans Wennborg  <[email protected]>
+
+        Speech JavaScript API: SpeechRecognition should hook up with ActiveDOMObject more
+        https://bugs.webkit.org/show_bug.cgi?id=89217
+
+        Reviewed by Adam Barth.
+
+        Add a layout test to check that speech recognition gets aborted when
+        navigating away from the page.
+
+        * fast/speech/scripted/navigate-away-expected.txt: Added.
+        * fast/speech/scripted/navigate-away-iframe-expected.txt: Added.
+        * fast/speech/scripted/navigate-away-iframe.html: Added.
+        * fast/speech/scripted/navigate-away.html: Added.
+
 2012-06-21  Mikhail Pozdnyakov  <[email protected]>
 
         [EFL] Gardening after r120847

Added: trunk/LayoutTests/fast/speech/scripted/navigate-away-expected.txt (0 => 120913)


--- trunk/LayoutTests/fast/speech/scripted/navigate-away-expected.txt	                        (rev 0)
+++ trunk/LayoutTests/fast/speech/scripted/navigate-away-expected.txt	2012-06-21 10:23:15 UTC (rev 120913)
@@ -0,0 +1,19 @@
+Test behaviour when navigating away from a page using the Speech JavaScript API
+
+On success, you will see a series of "PASS" messages, followed by "TEST COMPLETE".
+
+
+The iframe was loaded.
+PASS 'webkitSpeechRecognition' in self is true
+PASS webkitSpeechRecognition == null is false
+PASS testRunner.wasMockSpeechRecognitionAborted() is false
+iframe: Created SpeechRecognition
+iframe: calling start()
+iframe: onstart
+iframe: navigating away
+The iframe was navigated away.
+PASS testRunner.wasMockSpeechRecognitionAborted() is true
+PASS successfullyParsed is true
+
+TEST COMPLETE
+
Property changes on: trunk/LayoutTests/fast/speech/scripted/navigate-away-expected.txt
___________________________________________________________________

Added: svn:eol-style

Added: trunk/LayoutTests/fast/speech/scripted/navigate-away-iframe-expected.txt (0 => 120913)


--- trunk/LayoutTests/fast/speech/scripted/navigate-away-iframe-expected.txt	                        (rev 0)
+++ trunk/LayoutTests/fast/speech/scripted/navigate-away-iframe-expected.txt	2012-06-21 10:23:15 UTC (rev 120913)
@@ -0,0 +1 @@
+This file is meant to be used as part of the navigate-away.html test.
Property changes on: trunk/LayoutTests/fast/speech/scripted/navigate-away-iframe-expected.txt
___________________________________________________________________

Added: svn:eol-style

Added: trunk/LayoutTests/fast/speech/scripted/navigate-away-iframe.html (0 => 120913)


--- trunk/LayoutTests/fast/speech/scripted/navigate-away-iframe.html	                        (rev 0)
+++ trunk/LayoutTests/fast/speech/scripted/navigate-away-iframe.html	2012-06-21 10:23:15 UTC (rev 120913)
@@ -0,0 +1,48 @@
+<!DOCTYPE HTML>
+<html>
+<head>
+<script src=""></script>
+</head>
+<body id="body">
+<script type="text/javascript">
+
+function log(msg) {
+    parent.postMessage(msg, '*');
+}
+
+function setDefaultHandlers(r) {
+    for (var prop in r) {
+        if (prop.match('^on')) {
+            r[prop] = function() {
+                log('unexpected ' + event.type + ' event!');
+            }
+        }
+    }
+}
+
+function run() {
+    if (window.top == window.self) {
+        // We are not in an iframe.
+        document.getElementById('body').innerHTML = 'This file is meant to be used as part of the navigate-away.html test.';
+        return;
+    }
+
+    window.speechreco = new webkitSpeechRecognition();
+    log('Created SpeechRecognition');
+    setDefaultHandlers(speechreco);
+
+    speechreco.onstart = function() {
+        log('onstart');
+        log('navigating away');
+        window.location = 'data:text/html,Navigated away.';
+    };
+
+    log('calling start()');
+    speechreco.start();
+}
+
+window.onload = run;
+</script>
+This iframe creates a speechreco object.
+</html>
+
Property changes on: trunk/LayoutTests/fast/speech/scripted/navigate-away-iframe.html
___________________________________________________________________

Added: svn:eol-style

Added: trunk/LayoutTests/fast/speech/scripted/navigate-away.html (0 => 120913)


--- trunk/LayoutTests/fast/speech/scripted/navigate-away.html	                        (rev 0)
+++ trunk/LayoutTests/fast/speech/scripted/navigate-away.html	2012-06-21 10:23:15 UTC (rev 120913)
@@ -0,0 +1,45 @@
+<!DOCTYPE HTML>
+<html>
+<head>
+<script src=""></script>
+</head>
+<body>
+<script type="text/javascript">
+description('Test behaviour when navigating away from a page using the Speech JavaScript API');
+
+function run() {
+    // Check availability of constructors.
+    shouldBeTrue("'webkitSpeechRecognition' in self");
+    shouldBeFalse("webkitSpeechRecognition == null");
+
+    if (window.testRunner)
+        shouldBeFalse("testRunner.wasMockSpeechRecognitionAborted()");
+
+    window.onmessage = function() {
+        debug('iframe: ' + event.data);
+    };
+}
+
+window.iframeonloadcount = 0;
+function iframeonload() {
+    ++iframeonloadcount;
+
+    if (iframeonloadcount === 1) {
+        // The iframe has loaded for the first time.
+        debug('The iframe was loaded.');
+        return;
+    }
+
+    debug('The iframe was navigated away.');
+    if (window.testRunner)
+        shouldBeTrue("testRunner.wasMockSpeechRecognitionAborted()");
+    finishJSTest();
+}
+
+window.onload = run;
+window.jsTestIsAsync = true;
+</script>
+<script src=""></script>
+<iframe id="iframe" src="" onload="iframeonload()"></iframe>
+</body>
+</html>
Property changes on: trunk/LayoutTests/fast/speech/scripted/navigate-away.html
___________________________________________________________________

Added: svn:eol-style

Modified: trunk/Source/WebCore/ChangeLog (120912 => 120913)


--- trunk/Source/WebCore/ChangeLog	2012-06-21 10:06:23 UTC (rev 120912)
+++ trunk/Source/WebCore/ChangeLog	2012-06-21 10:23:15 UTC (rev 120913)
@@ -1,3 +1,27 @@
+2012-06-20  Hans Wennborg  <[email protected]>
+
+        Speech JavaScript API: SpeechRecognition should hook up with ActiveDOMObject more
+        https://bugs.webkit.org/show_bug.cgi?id=89217
+
+        Reviewed by Adam Barth.
+
+        Previously, the SpeechRecognition was kept alive while waiting for
+        pending events by making the embedder hold a reference to the object.
+        We should do this by using ActiveDOMObject's setPendingActivity() instead.
+
+        Also, override ActiveDOMObject::stop() to get notified when the user
+        leaves the page.
+
+        Test: fast/speech/scripted/navigate-away.html
+
+        * Modules/speech/SpeechRecognition.cpp:
+        (WebCore::SpeechRecognition::start):
+        (WebCore::SpeechRecognition::didEnd):
+        (WebCore::SpeechRecognition::stop):
+        (WebCore):
+        * Modules/speech/SpeechRecognition.h:
+        (SpeechRecognition):
+
 2012-06-21  Taiju Tsuiki  <[email protected]>
 
         Web Inspector: Move requestId allocation from FileSystem frontend to backend

Modified: trunk/Source/WebCore/Modules/speech/SpeechRecognition.cpp (120912 => 120913)


--- trunk/Source/WebCore/Modules/speech/SpeechRecognition.cpp	2012-06-21 10:06:23 UTC (rev 120912)
+++ trunk/Source/WebCore/Modules/speech/SpeechRecognition.cpp	2012-06-21 10:23:15 UTC (rev 120913)
@@ -46,7 +46,10 @@
 
 void SpeechRecognition::start()
 {
-    ASSERT(m_controller); // FIXME: Spec should say what to do if we are already started.
+    ASSERT(m_controller);
+    // FIXME: Spec should say what to do if we are already started.
+
+    setPendingActivity(this);
     m_controller->start(this, m_grammars.get(), m_lang, m_continuous, m_maxAlternatives);
 }
 
@@ -121,6 +124,7 @@
 void SpeechRecognition::didEnd()
 {
     dispatchEvent(Event::create(eventNames().endEvent, /*canBubble=*/false, /*cancelable=*/false));
+    unsetPendingActivity(this);
 }
 
 const AtomicString& SpeechRecognition::interfaceName() const
@@ -133,6 +137,12 @@
     return ActiveDOMObject::scriptExecutionContext();
 }
 
+void SpeechRecognition::stop()
+{
+    if (hasPendingActivity())
+        abort();
+}
+
 SpeechRecognition::SpeechRecognition(ScriptExecutionContext* context)
     : ActiveDOMObject(context, this)
     , m_grammars(SpeechGrammarList::create()) // FIXME: The spec is not clear on the default value for the grammars attribute.

Modified: trunk/Source/WebCore/Modules/speech/SpeechRecognition.h (120912 => 120913)


--- trunk/Source/WebCore/Modules/speech/SpeechRecognition.h	2012-06-21 10:06:23 UTC (rev 120912)
+++ trunk/Source/WebCore/Modules/speech/SpeechRecognition.h	2012-06-21 10:23:15 UTC (rev 120913)
@@ -49,15 +49,13 @@
     static PassRefPtr<SpeechRecognition> create(ScriptExecutionContext*);
     ~SpeechRecognition();
 
+    // Attributes.
     PassRefPtr<SpeechGrammarList> grammars() { return m_grammars; }
     void setGrammars(PassRefPtr<SpeechGrammarList> grammars) { m_grammars = grammars; }
-
     String lang() { return m_lang; }
     void setLang(const String& lang) { m_lang = lang; }
-
     bool continuous() { return m_continuous; }
     void setContinuous(bool continuous) { m_continuous = continuous; }
-
     unsigned long maxAlternatives() { return m_maxAlternatives; }
     void setMaxAlternatives(unsigned long maxAlternatives) { m_maxAlternatives = maxAlternatives; }
 
@@ -80,10 +78,13 @@
     void didStart();
     void didEnd();
 
-    // EventTarget
+    // EventTarget.
     virtual const AtomicString& interfaceName() const OVERRIDE;
     virtual ScriptExecutionContext* scriptExecutionContext() const OVERRIDE;
 
+    // ActiveDOMObject.
+    virtual void stop() OVERRIDE;
+
     using RefCounted<SpeechRecognition>::ref;
     using RefCounted<SpeechRecognition>::deref;
 

Modified: trunk/Tools/ChangeLog (120912 => 120913)


--- trunk/Tools/ChangeLog	2012-06-21 10:06:23 UTC (rev 120912)
+++ trunk/Tools/ChangeLog	2012-06-21 10:23:15 UTC (rev 120913)
@@ -1,3 +1,45 @@
+2012-06-20  Hans Wennborg  <[email protected]>
+
+        Speech JavaScript API: SpeechRecognition should hook up with ActiveDOMObject more
+        https://bugs.webkit.org/show_bug.cgi?id=89217
+
+        Reviewed by Adam Barth.
+
+        Add a method for checking whether the mock speech
+        recognition was aborted.
+
+        Also redo the way the mock posts tasks: instead of posting them all at once,
+        maintain an internal queue of task objects and call postTask() for them
+        one at a time. This means that when, for example, the page is navigated
+        away and abort() is called, that call doesn't end up behind a bunch
+        of previously posted events on the event loop.
+
+        * DumpRenderTree/chromium/LayoutTestController.cpp:
+        (LayoutTestController::LayoutTestController):
+        (LayoutTestController::wasMockSpeechRecognitionAborted):
+        * DumpRenderTree/chromium/LayoutTestController.h:
+        (LayoutTestController):
+        * DumpRenderTree/chromium/MockWebSpeechRecognizer.cpp:
+        (WebKit::ClientCallTask::ClientCallTask):
+        (WebKit::ResultTask::ResultTask):
+        (WebKit::NoMatchTask::NoMatchTask):
+        (WebKit::ErrorTask::ErrorTask):
+        (MockWebSpeechRecognizer::start):
+        (MockWebSpeechRecognizer::abort):
+        (MockWebSpeechRecognizer::setError):
+        (MockWebSpeechRecognizer::MockWebSpeechRecognizer):
+        (MockWebSpeechRecognizer::startTaskQueue):
+        (MockWebSpeechRecognizer::StepTask::runIfValid):
+        * DumpRenderTree/chromium/MockWebSpeechRecognizer.h:
+        (MockWebSpeechRecognizer::hasBeenAborted):
+        (MockWebSpeechRecognizer):
+        (MockWebSpeechRecognizer::taskList):
+        (Task):
+        (MockWebSpeechRecognizer::Task::Task):
+        (MockWebSpeechRecognizer::Task::~Task):
+        (StepTask):
+        (MockWebSpeechRecognizer::StepTask::StepTask):
+
 2012-06-21  Mario Sanchez Prada  <[email protected]>
 
         [GTK] Add support for window.showModalDialog in WebKit2GTK+

Modified: trunk/Tools/DumpRenderTree/chromium/LayoutTestController.cpp (120912 => 120913)


--- trunk/Tools/DumpRenderTree/chromium/LayoutTestController.cpp	2012-06-21 10:06:23 UTC (rev 120912)
+++ trunk/Tools/DumpRenderTree/chromium/LayoutTestController.cpp	2012-06-21 10:23:15 UTC (rev 120913)
@@ -118,6 +118,7 @@
 #if ENABLE(SCRIPTED_SPEECH)
     bindMethod("addMockSpeechRecognitionResult", &LayoutTestController::addMockSpeechRecognitionResult);
     bindMethod("setMockSpeechRecognitionError", &LayoutTestController::setMockSpeechRecognitionError);
+    bindMethod("wasMockSpeechRecognitionAborted", &LayoutTestController::wasMockSpeechRecognitionAborted);
 #endif
     bindMethod("addOriginAccessWhitelistEntry", &LayoutTestController::addOriginAccessWhitelistEntry);
     bindMethod("addUserScript", &LayoutTestController::addUserScript);
@@ -1966,6 +1967,13 @@
     if (MockWebSpeechRecognizer* recognizer = m_shell->webViewHost()->mockSpeechRecognizer())
         recognizer->setError(arguments[0].toInt32(), cppVariantToWebString(arguments[1]));
 }
+
+void LayoutTestController::wasMockSpeechRecognitionAborted(const CppArgumentList&, CppVariant* result)
+{
+    result->set(false);
+    if (MockWebSpeechRecognizer* recognizer = m_shell->webViewHost()->mockSpeechRecognizer())
+        result->set(recognizer->wasAborted());
+}
 #endif
 
 void LayoutTestController::startSpeechInput(const CppArgumentList& arguments, CppVariant* result)

Modified: trunk/Tools/DumpRenderTree/chromium/LayoutTestController.h (120912 => 120913)


--- trunk/Tools/DumpRenderTree/chromium/LayoutTestController.h	2012-06-21 10:06:23 UTC (rev 120912)
+++ trunk/Tools/DumpRenderTree/chromium/LayoutTestController.h	2012-06-21 10:23:15 UTC (rev 120913)
@@ -375,6 +375,7 @@
 #if ENABLE(SCRIPTED_SPEECH)
     void addMockSpeechRecognitionResult(const CppArgumentList&, CppVariant*);
     void setMockSpeechRecognitionError(const CppArgumentList&, CppVariant*);
+    void wasMockSpeechRecognitionAborted(const CppArgumentList&, CppVariant*);
 #endif
     void startSpeechInput(const CppArgumentList&, CppVariant*);
 

Modified: trunk/Tools/DumpRenderTree/chromium/MockWebSpeechRecognizer.cpp (120912 => 120913)


--- trunk/Tools/DumpRenderTree/chromium/MockWebSpeechRecognizer.cpp	2012-06-21 10:06:23 UTC (rev 120912)
+++ trunk/Tools/DumpRenderTree/chromium/MockWebSpeechRecognizer.cpp	2012-06-21 10:23:15 UTC (rev 120913)
@@ -37,31 +37,31 @@
 
 // Task class for calling a client function that does not take any parameters.
 typedef void (WebSpeechRecognizerClient::*ClientFunctionPointer)(const WebSpeechRecognitionHandle&);
-class ClientCallTask : public MethodTask<MockWebSpeechRecognizer> {
+class ClientCallTask : public MockWebSpeechRecognizer::Task {
 public:
     ClientCallTask(MockWebSpeechRecognizer* mock, ClientFunctionPointer function)
-        : MethodTask<MockWebSpeechRecognizer>(mock)
+        : MockWebSpeechRecognizer::Task(mock)
         , m_function(function)
     {
     }
 
-    virtual void runIfValid() OVERRIDE { (m_object->client()->*m_function)(m_object->handle()); }
+    virtual void run() OVERRIDE { (m_recognizer->client()->*m_function)(m_recognizer->handle()); }
 
 private:
     ClientFunctionPointer m_function;
 };
 
 // Task for delivering a result event.
-class ResultTask : public MethodTask<MockWebSpeechRecognizer> {
+class ResultTask : public MockWebSpeechRecognizer::Task {
 public:
     ResultTask(MockWebSpeechRecognizer* mock, const WebString transcript, float confidence)
-        : MethodTask<MockWebSpeechRecognizer>(mock)
+        : MockWebSpeechRecognizer::Task(mock)
         , m_transcript(transcript)
         , m_confidence(confidence)
     {
     }
 
-    virtual void runIfValid() OVERRIDE
+    virtual void run() OVERRIDE
     {
         WebVector<WebString> transcripts(static_cast<size_t>(1));
         WebVector<float> confidences(static_cast<size_t>(1));
@@ -70,7 +70,7 @@
         WebSpeechRecognitionResult res;
         res.assign(transcripts, confidences, true);
 
-        m_object->client()->didReceiveResult(m_object->handle(), res, 0, WebVector<WebSpeechRecognitionResult>());
+        m_recognizer->client()->didReceiveResult(m_recognizer->handle(), res, 0, WebVector<WebSpeechRecognitionResult>());
     }
 
 private:
@@ -79,23 +79,23 @@
 };
 
 // Task for delivering a nomatch event.
-class NoMatchTask : public MethodTask<MockWebSpeechRecognizer> {
+class NoMatchTask : public MockWebSpeechRecognizer::Task {
 public:
-    NoMatchTask(MockWebSpeechRecognizer* mock) : MethodTask<MockWebSpeechRecognizer>(mock) { }
-    virtual void runIfValid() OVERRIDE { m_object->client()->didReceiveNoMatch(m_object->handle(), WebSpeechRecognitionResult()); }
+    NoMatchTask(MockWebSpeechRecognizer* mock) : MockWebSpeechRecognizer::Task(mock) { }
+    virtual void run() OVERRIDE { m_recognizer->client()->didReceiveNoMatch(m_recognizer->handle(), WebSpeechRecognitionResult()); }
 };
 
 // Task for delivering an error event.
-class ErrorTask : public MethodTask<MockWebSpeechRecognizer> {
+class ErrorTask : public MockWebSpeechRecognizer::Task {
 public:
     ErrorTask(MockWebSpeechRecognizer* mock, int code, const WebString& message)
-        : MethodTask<MockWebSpeechRecognizer>(mock)
+        : MockWebSpeechRecognizer::Task(mock)
         , m_code(code)
         , m_message(message)
     {
     }
 
-    virtual void runIfValid() OVERRIDE { m_object->client()->didReceiveError(m_object->handle(), m_message, static_cast<WebSpeechRecognizerClient::ErrorCode>(m_code)); }
+    virtual void run() OVERRIDE { m_recognizer->client()->didReceiveError(m_recognizer->handle(), m_message, static_cast<WebSpeechRecognizerClient::ErrorCode>(m_code)); }
 
 private:
     int m_code;
@@ -111,29 +111,32 @@
 
 void MockWebSpeechRecognizer::start(const WebSpeechRecognitionHandle& handle, const WebSpeechRecognitionParams& params, WebSpeechRecognizerClient* client)
 {
+    m_wasAborted = false;
     m_handle = handle;
     m_client = client;
 
-    postTask(new ClientCallTask(this, &WebSpeechRecognizerClient::didStart));
-    postTask(new ClientCallTask(this, &WebSpeechRecognizerClient::didStartAudio));
-    postTask(new ClientCallTask(this, &WebSpeechRecognizerClient::didStartSound));
-    postTask(new ClientCallTask(this, &WebSpeechRecognizerClient::didStartSpeech));
+    m_taskQueue.append(adoptPtr(new ClientCallTask(this, &WebSpeechRecognizerClient::didStart)));
+    m_taskQueue.append(adoptPtr(new ClientCallTask(this, &WebSpeechRecognizerClient::didStartAudio)));
+    m_taskQueue.append(adoptPtr(new ClientCallTask(this, &WebSpeechRecognizerClient::didStartSound)));
+    m_taskQueue.append(adoptPtr(new ClientCallTask(this, &WebSpeechRecognizerClient::didStartSpeech)));
 
     if (!m_mockTranscripts.isEmpty()) {
         ASSERT(m_mockTranscripts.size() == m_mockConfidences.size());
 
         for (size_t i = 0; i < m_mockTranscripts.size(); ++i)
-            postTask(new ResultTask(this, m_mockTranscripts[i], m_mockConfidences[i]));
+            m_taskQueue.append(adoptPtr(new ResultTask(this, m_mockTranscripts[i], m_mockConfidences[i])));
 
         m_mockTranscripts.clear();
         m_mockConfidences.clear();
     } else
-        postTask(new NoMatchTask(this));
+        m_taskQueue.append(adoptPtr(new NoMatchTask(this)));
 
-    postTask(new ClientCallTask(this, &WebSpeechRecognizerClient::didEndSpeech));
-    postTask(new ClientCallTask(this, &WebSpeechRecognizerClient::didEndSound));
-    postTask(new ClientCallTask(this, &WebSpeechRecognizerClient::didEndAudio));
-    postTask(new ClientCallTask(this, &WebSpeechRecognizerClient::didEnd));
+    m_taskQueue.append(adoptPtr(new ClientCallTask(this, &WebSpeechRecognizerClient::didEndSpeech)));
+    m_taskQueue.append(adoptPtr(new ClientCallTask(this, &WebSpeechRecognizerClient::didEndSound)));
+    m_taskQueue.append(adoptPtr(new ClientCallTask(this, &WebSpeechRecognizerClient::didEndAudio)));
+    m_taskQueue.append(adoptPtr(new ClientCallTask(this, &WebSpeechRecognizerClient::didEnd)));
+
+    startTaskQueue();
 }
 
 void MockWebSpeechRecognizer::stop(const WebSpeechRecognitionHandle& handle, WebSpeechRecognizerClient* client)
@@ -150,8 +153,10 @@
     m_handle = handle;
     m_client = client;
 
-    // FIXME: Implement.
-    ASSERT_NOT_REACHED();
+    m_taskQueue.clear();
+    m_wasAborted = true;
+    m_taskQueue.append(adoptPtr(new ClientCallTask(this, &WebSpeechRecognizerClient::didEnd)));
+    startTaskQueue();
 }
 
 void MockWebSpeechRecognizer::addMockResult(const WebString& transcript, float confidence)
@@ -162,12 +167,15 @@
 
 void MockWebSpeechRecognizer::setError(int code, const WebString& message)
 {
-    m_taskList.revokeAll();
-    postTask(new ErrorTask(this, code, message));
-    postTask(new ClientCallTask(this, &WebSpeechRecognizerClient::didEnd));
+    m_taskQueue.clear();
+    m_taskQueue.append(adoptPtr(new ErrorTask(this, code, message)));
+    m_taskQueue.append(adoptPtr(new ClientCallTask(this, &WebSpeechRecognizerClient::didEnd)));
+    startTaskQueue();
 }
 
 MockWebSpeechRecognizer::MockWebSpeechRecognizer()
+    : m_wasAborted(false)
+    , m_taskQueueRunning(false)
 {
 }
 
@@ -175,5 +183,32 @@
 {
 }
 
+void MockWebSpeechRecognizer::startTaskQueue()
+{
+    if (m_taskQueueRunning)
+        return;
+    postTask(new StepTask(this));
+    m_taskQueueRunning = true;
+}
 
+void MockWebSpeechRecognizer::StepTask::runIfValid()
+{
+    ASSERT(m_object->m_taskQueueRunning);
+
+    if (m_object->m_taskQueue.isEmpty()) {
+        m_object->m_taskQueueRunning = false;
+        return;
+    }
+
+    m_object->m_taskQueue[0]->run();
+    m_object->m_taskQueue.remove(0);
+
+    if (m_object->m_taskQueue.isEmpty()) {
+        m_object->m_taskQueueRunning = false;
+        return;
+    }
+
+    postTask(new StepTask(m_object));
+}
+
 #endif // ENABLE(SCRIPTED_SPEECH)

Modified: trunk/Tools/DumpRenderTree/chromium/MockWebSpeechRecognizer.h (120912 => 120913)


--- trunk/Tools/DumpRenderTree/chromium/MockWebSpeechRecognizer.h	2012-06-21 10:06:23 UTC (rev 120912)
+++ trunk/Tools/DumpRenderTree/chromium/MockWebSpeechRecognizer.h	2012-06-21 10:23:15 UTC (rev 120913)
@@ -53,19 +53,43 @@
     // Methods accessed by layout tests:
     void addMockResult(const WebKit::WebString& transcript, float confidence);
     void setError(int code, const WebKit::WebString& message);
+    bool wasAborted() const { return m_wasAborted; }
 
-    TaskList* taskList() { return &m_taskList; }
+    // Methods accessed from Task objects:
     WebKit::WebSpeechRecognizerClient* client() { return m_client; }
     WebKit::WebSpeechRecognitionHandle& handle() { return m_handle; }
+    TaskList* taskList() { return &m_taskList; }
 
+    class Task {
+    public:
+        Task(MockWebSpeechRecognizer* recognizer) : m_recognizer(recognizer) { }
+        virtual ~Task() { }
+        virtual void run() = 0;
+    protected:
+        MockWebSpeechRecognizer* m_recognizer;
+    };
+
 private:
     MockWebSpeechRecognizer();
+    void startTaskQueue();
 
     TaskList m_taskList;
     WebKit::WebSpeechRecognitionHandle m_handle;
     WebKit::WebSpeechRecognizerClient* m_client;
     Vector<WebKit::WebString> m_mockTranscripts;
     Vector<float> m_mockConfidences;
+    bool m_wasAborted;
+
+    // Queue of tasks to be run.
+    Vector<OwnPtr<Task> > m_taskQueue;
+    bool m_taskQueueRunning;
+
+    // Task for stepping the queue.
+    class StepTask : public MethodTask<MockWebSpeechRecognizer> {
+    public:
+        StepTask(MockWebSpeechRecognizer* object) : MethodTask<MockWebSpeechRecognizer>(object) { }
+        virtual void runIfValid() OVERRIDE;
+    };
 };
 
 #endif // ENABLE(SCRIPTED_SPEECH)
_______________________________________________
webkit-changes mailing list
[email protected]
http://lists.webkit.org/mailman/listinfo.cgi/webkit-changes
