Title: [272417] trunk/Source/WebKit
Revision: 272417
Author: [email protected]
Date: 2021-02-05 08:54:00 -0800 (Fri, 05 Feb 2021)

Log Message

SpeechRecognitionPermissionManager should not handle requests that are already cancelled
https://bugs.webkit.org/show_bug.cgi?id=221296

Patch by Sihui Liu <[email protected]> on 2021-02-05
Reviewed by Youenn Fablet.

It is possible that the client asks to stop/abort pending requests that are still waiting for their permission check to be done. In our current implementation, SpeechRecognitionPermissionManager would perform permission checks for these cancelled requests, which is unnecessary and may cause extra prompts. We should let SpeechRecognitionPermissionManager check whether a request is still valid before checking permissions.

No test is added as this behavior change is not observable without resetting the TCC permission between requests.

* UIProcess/SpeechRecognitionPermissionManager.cpp:
(WebKit::SpeechRecognitionPermissionManager::request):
(WebKit::SpeechRecognitionPermissionManager::startNextRequest):
(WebKit::SpeechRecognitionPermissionManager::startProcessingRequest):
(WebKit::SpeechRecognitionPermissionManager::continueProcessingRequest):
(WebKit::SpeechRecognitionPermissionManager::requestUserPermission):
* UIProcess/SpeechRecognitionPermissionManager.h:
* UIProcess/SpeechRecognitionPermissionRequest.h:
(WebKit::SpeechRecognitionPermissionRequest::create):
(WebKit::SpeechRecognitionPermissionRequest::~SpeechRecognitionPermissionRequest):
(WebKit::SpeechRecognitionPermissionRequest::request):
(WebKit::SpeechRecognitionPermissionRequest::SpeechRecognitionPermissionRequest):
(WebKit::SpeechRecognitionPermissionRequest::origin const): Deleted.
(WebKit::SpeechRecognitionPermissionRequest::lang const): Deleted.
(WebKit::SpeechRecognitionPermissionRequest::frameIdentifier const): Deleted.
* UIProcess/SpeechRecognitionServer.cpp:
(WebKit::SpeechRecognitionServer::requestPermissionForRequest):
(WebKit::SpeechRecognitionServer::sendUpdate):
* UIProcess/SpeechRecognitionServer.h:
* UIProcess/WebPageProxy.cpp:
(WebKit::WebPageProxy::requestSpeechRecognitionPermission):
* UIProcess/WebPageProxy.h:
* UIProcess/WebProcessProxy.cpp:
(WebKit::WebProcessProxy::createSpeechRecognitionServer):

Diff

Modified: trunk/Source/WebKit/ChangeLog (272416 => 272417)


--- trunk/Source/WebKit/ChangeLog	2021-02-05 16:38:48 UTC (rev 272416)
+++ trunk/Source/WebKit/ChangeLog	2021-02-05 16:54:00 UTC (rev 272417)
@@ -1,3 +1,42 @@
+2021-02-05  Sihui Liu  <[email protected]>
+
+        SpeechRecognitionPermissionManager should not handle requests that are already cancelled
+        https://bugs.webkit.org/show_bug.cgi?id=221296
+
+        Reviewed by Youenn Fablet.
+
+        It is possible that client asks to stop/abort pending requests, which are waiting for permission check to be 
+        done. In our current implementation, SpeechRecognitionPermissionManager would perform permission checks for
+        these cancelled requests, which is unnecessary and may cause extra prompts. We should let 
+        SpeechRecognitionPermissionManager check if request is still valid before checking permissions.
+
+        No test is added as this behavior change is not observable without resetting the TCC permission between requests.
+
+        * UIProcess/SpeechRecognitionPermissionManager.cpp:
+        (WebKit::SpeechRecognitionPermissionManager::request):
+        (WebKit::SpeechRecognitionPermissionManager::startNextRequest):
+        (WebKit::SpeechRecognitionPermissionManager::startProcessingRequest):
+        (WebKit::SpeechRecognitionPermissionManager::continueProcessingRequest):
+        (WebKit::SpeechRecognitionPermissionManager::requestUserPermission):
+        * UIProcess/SpeechRecognitionPermissionManager.h:
+        * UIProcess/SpeechRecognitionPermissionRequest.h:
+        (WebKit::SpeechRecognitionPermissionRequest::create):
+        (WebKit::SpeechRecognitionPermissionRequest::~SpeechRecognitionPermissionRequest):
+        (WebKit::SpeechRecognitionPermissionRequest::request):
+        (WebKit::SpeechRecognitionPermissionRequest::SpeechRecognitionPermissionRequest):
+        (WebKit::SpeechRecognitionPermissionRequest::origin const): Deleted.
+        (WebKit::SpeechRecognitionPermissionRequest::lang const): Deleted.
+        (WebKit::SpeechRecognitionPermissionRequest::frameIdentifier const): Deleted.
+        * UIProcess/SpeechRecognitionServer.cpp:
+        (WebKit::SpeechRecognitionServer::requestPermissionForRequest):
+        (WebKit::SpeechRecognitionServer::sendUpdate):
+        * UIProcess/SpeechRecognitionServer.h:
+        * UIProcess/WebPageProxy.cpp:
+        (WebKit::WebPageProxy::requestSpeechRecognitionPermission):
+        * UIProcess/WebPageProxy.h:
+        * UIProcess/WebProcessProxy.cpp:
+        (WebKit::WebProcessProxy::createSpeechRecognitionServer):
+
 2021-02-05  Chris Dumez  <[email protected]>
 
         [GPUProcess][iOS] Audio is lost after media playback recovers from the GPUProcess crash

Modified: trunk/Source/WebKit/UIProcess/SpeechRecognitionPermissionManager.cpp (272416 => 272417)


--- trunk/Source/WebKit/UIProcess/SpeechRecognitionPermissionManager.cpp	2021-02-05 16:38:48 UTC (rev 272416)
+++ trunk/Source/WebKit/UIProcess/SpeechRecognitionPermissionManager.cpp	2021-02-05 16:54:00 UTC (rev 272417)
@@ -75,9 +75,9 @@
         request->complete(WebCore::SpeechRecognitionError { WebCore::SpeechRecognitionErrorType::NotAllowed, "Permission manager has exited"_s });
 }
     
-void SpeechRecognitionPermissionManager::request(const String& lang, const WebCore::ClientOrigin& origin, WebCore::FrameIdentifier frameIdentifier, SpeechRecognitionPermissionRequestCallback&& completiontHandler)
+void SpeechRecognitionPermissionManager::request(WebCore::SpeechRecognitionRequest& request, SpeechRecognitionPermissionRequestCallback&& completiontHandler)
 {
-    m_requests.append(SpeechRecognitionPermissionRequest::create(lang, origin, frameIdentifier, WTFMove(completiontHandler)));
+    m_requests.append(SpeechRecognitionPermissionRequest::create(request, WTFMove(completiontHandler)));
     if (m_requests.size() == 1)
         startNextRequest();
 }
@@ -84,6 +84,9 @@
 
 void SpeechRecognitionPermissionManager::startNextRequest()
 {
+    while (!m_requests.isEmpty() && !m_requests.first()->request())
+        m_requests.removeFirst();
+
     if (m_requests.isEmpty())
         return;
 
@@ -118,7 +121,7 @@
         }
 
 #if HAVE(SPEECHRECOGNIZER)
-        if (!checkSpeechRecognitionServiceAvailability(m_requests.first()->lang())) {
+        if (!checkSpeechRecognitionServiceAvailability(m_requests.first()->request()->lang())) {
             completeCurrentRequest(WebCore::SpeechRecognitionError { WebCore::SpeechRecognitionErrorType::ServiceNotAllowed, "Speech recognition service is not available"_s });
             return;
         }
@@ -135,6 +138,13 @@
 
 void SpeechRecognitionPermissionManager::continueProcessingRequest()
 {
+    ASSERT(!m_requests.isEmpty());
+    auto recognitionRequest = m_requests.first()->request();
+    if (!recognitionRequest) {
+        completeCurrentRequest();
+        return;
+    }
+        
     if (m_speechRecognitionServiceCheck == CheckResult::Unknown) {
         requestSpeechRecognitionServiceAccess();
         return;
@@ -148,13 +158,13 @@
     ASSERT(m_microphoneCheck == CheckResult::Granted);
 
     if (m_userPermissionCheck == CheckResult::Unknown) {
-        requestUserPermission();
+        requestUserPermission(*recognitionRequest);
         return;
     }
     ASSERT(m_userPermissionCheck == CheckResult::Granted);
 
     if (!m_page.isViewVisible()) {
-        completeCurrentRequest(WebCore::SpeechRecognitionError { WebCore::SpeechRecognitionErrorType::NotAllowed, "Page is not visible to user" });
+        completeCurrentRequest(WebCore::SpeechRecognitionError { WebCore::SpeechRecognitionErrorType::NotAllowed, "Page is not visible to user"_s });
         return;
     }
 
@@ -210,12 +220,9 @@
 #endif
 }
 
-void SpeechRecognitionPermissionManager::requestUserPermission()
+void SpeechRecognitionPermissionManager::requestUserPermission(WebCore::SpeechRecognitionRequest& recognitionRequest)
 {
-    ASSERT(!m_requests.isEmpty());
-
-    auto& currentRequest = m_requests.first();
-    auto clientOrigin = currentRequest->origin();
+    auto clientOrigin = recognitionRequest.clientOrigin();
     auto requestingOrigin = clientOrigin.clientOrigin.securityOrigin();
     auto topOrigin = clientOrigin.topOrigin.securityOrigin();
     auto decisionHandler = [this, weakThis = makeWeakPtr(*this)](bool granted) {
@@ -230,7 +237,7 @@
 
         continueProcessingRequest();
     };
-    m_page.requestUserMediaPermissionForSpeechRecognition(currentRequest->frameIdentifier(), requestingOrigin, topOrigin, WTFMove(decisionHandler));
+    m_page.requestUserMediaPermissionForSpeechRecognition(recognitionRequest.frameIdentifier(), requestingOrigin, topOrigin, WTFMove(decisionHandler));
 }
 
 void SpeechRecognitionPermissionManager::decideByDefaultAction(const WebCore::SecurityOrigin& origin, CompletionHandler<void(bool)>&& completionHandler)

Modified: trunk/Source/WebKit/UIProcess/SpeechRecognitionPermissionManager.h (272416 => 272417)


--- trunk/Source/WebKit/UIProcess/SpeechRecognitionPermissionManager.h	2021-02-05 16:38:48 UTC (rev 272416)
+++ trunk/Source/WebKit/UIProcess/SpeechRecognitionPermissionManager.h	2021-02-05 16:54:00 UTC (rev 272417)
@@ -39,7 +39,7 @@
     enum class CheckResult { Denied, Granted, Unknown };
     explicit SpeechRecognitionPermissionManager(WebPageProxy&);
     ~SpeechRecognitionPermissionManager();
-    void request(const String& lang, const WebCore::ClientOrigin&, WebCore::FrameIdentifier, SpeechRecognitionPermissionRequestCallback&&);
+    void request(WebCore::SpeechRecognitionRequest&, SpeechRecognitionPermissionRequestCallback&&);
 
     void decideByDefaultAction(const WebCore::SecurityOrigin&, CompletionHandler<void(bool)>&&);
     WebPageProxy& page() { return m_page; }
@@ -51,7 +51,7 @@
     void completeCurrentRequest(Optional<WebCore::SpeechRecognitionError>&& = WTF::nullopt);
     void requestMicrophoneAccess();
     void requestSpeechRecognitionServiceAccess();
-    void requestUserPermission();
+    void requestUserPermission(WebCore::SpeechRecognitionRequest& request);
 
     WebPageProxy& m_page;
     Deque<Ref<SpeechRecognitionPermissionRequest>> m_requests;

Modified: trunk/Source/WebKit/UIProcess/SpeechRecognitionPermissionRequest.h (272416 => 272417)


--- trunk/Source/WebKit/UIProcess/SpeechRecognitionPermissionRequest.h	2021-02-05 16:38:48 UTC (rev 272416)
+++ trunk/Source/WebKit/UIProcess/SpeechRecognitionPermissionRequest.h	2021-02-05 16:54:00 UTC (rev 272417)
@@ -29,6 +29,7 @@
 #include <WebCore/ClientOrigin.h>
 #include <WebCore/FrameIdentifier.h>
 #include <WebCore/SpeechRecognitionError.h>
+#include <WebCore/SpeechRecognitionRequest.h>
 #include <wtf/CompletionHandler.h>
 
 namespace WebKit {
@@ -37,11 +38,17 @@
 
 class SpeechRecognitionPermissionRequest : public RefCounted<SpeechRecognitionPermissionRequest> {
 public:
-    static Ref<SpeechRecognitionPermissionRequest> create(const String& lang, const WebCore::ClientOrigin& origin, WebCore::FrameIdentifier frameIdentifier, SpeechRecognitionPermissionRequestCallback&& completionHandler)
+    static Ref<SpeechRecognitionPermissionRequest> create(WebCore::SpeechRecognitionRequest& request, SpeechRecognitionPermissionRequestCallback&& completionHandler)
     {
-        return adoptRef(*new SpeechRecognitionPermissionRequest(lang, origin, frameIdentifier, WTFMove(completionHandler)));
+        return adoptRef(*new SpeechRecognitionPermissionRequest(request, WTFMove(completionHandler)));
     }
 
+    ~SpeechRecognitionPermissionRequest()
+    {
+        if (m_completionHandler)
+            m_completionHandler(WebCore::SpeechRecognitionError { WebCore::SpeechRecognitionErrorType::NotAllowed, "Request is cancelled"_s });
+    }
+
     void complete(Optional<WebCore::SpeechRecognitionError>&& error)
     {
         auto completionHandler = std::exchange(m_completionHandler, { });
@@ -48,21 +55,16 @@
         completionHandler(WTFMove(error));
     }
 
-    const WebCore::ClientOrigin& origin() const { return m_origin; }
-    const String& lang() const { return m_lang; }
-    WebCore::FrameIdentifier frameIdentifier() const { return m_frameIdentifier; }
+    WebCore::SpeechRecognitionRequest* request() { return m_request.get(); }
 
 private:
-    SpeechRecognitionPermissionRequest(const String& lang, const WebCore::ClientOrigin& origin, WebCore::FrameIdentifier frameIdentifier, SpeechRecognitionPermissionRequestCallback&& completionHandler)
-        : m_lang(lang)
-        , m_origin(origin)
-        , m_frameIdentifier(frameIdentifier)
+    SpeechRecognitionPermissionRequest(WebCore::SpeechRecognitionRequest& request, SpeechRecognitionPermissionRequestCallback&& completionHandler)
+        : m_request(makeWeakPtr(request))
         , m_completionHandler(WTFMove(completionHandler))
     { }
 
-    String m_lang;
-    WebCore::ClientOrigin m_origin;
-    WebCore::FrameIdentifier m_frameIdentifier;
+
+    WeakPtr<WebCore::SpeechRecognitionRequest> m_request;
     SpeechRecognitionPermissionRequestCallback m_completionHandler;
 };
 

Modified: trunk/Source/WebKit/UIProcess/SpeechRecognitionServer.cpp (272416 => 272417)


--- trunk/Source/WebKit/UIProcess/SpeechRecognitionServer.cpp	2021-02-05 16:38:48 UTC (rev 272416)
+++ trunk/Source/WebKit/UIProcess/SpeechRecognitionServer.cpp	2021-02-05 16:54:00 UTC (rev 272417)
@@ -63,13 +63,10 @@
 
 void SpeechRecognitionServer::requestPermissionForRequest(WebCore::SpeechRecognitionRequest& request)
 {
-    m_permissionChecker(request.lang(), request.clientOrigin(), request.frameIdentifier(), [this, weakThis = makeWeakPtr(this), weakRequest = makeWeakPtr(request)](auto error) mutable {
-        if (!weakThis)
+    m_permissionChecker(request, [this, weakThis = makeWeakPtr(this), weakRequest = makeWeakPtr(request)](auto error) mutable {
+        if (!weakThis || !weakRequest)
             return;
 
-        if (!weakRequest)
-            return;
-
         auto identifier = weakRequest->clientIdentifier();
         auto request = m_requests.take(identifier);
         if (error) {
@@ -159,7 +156,6 @@
 
 void SpeechRecognitionServer::sendUpdate(const WebCore::SpeechRecognitionUpdate& update)
 {
-    WTFLogAlways("[%p]SpeechRecognitionServer::sendUpdate update.type[%d], update.clientIdentifier[%" PRIu64 "]", this, update.type(), update.clientIdentifier().toUInt64());
     send(Messages::WebSpeechRecognitionConnection::DidReceiveUpdate(update));
 }
 

Modified: trunk/Source/WebKit/UIProcess/SpeechRecognitionServer.h (272416 => 272417)


--- trunk/Source/WebKit/UIProcess/SpeechRecognitionServer.h	2021-02-05 16:38:48 UTC (rev 272416)
+++ trunk/Source/WebKit/UIProcess/SpeechRecognitionServer.h	2021-02-05 16:54:00 UTC (rev 272417)
@@ -46,7 +46,7 @@
 class WebProcessProxy;
 
 using SpeechRecognitionServerIdentifier = WebCore::PageIdentifier;
-using SpeechRecognitionPermissionChecker = Function<void(const String&, const WebCore::ClientOrigin&, WebCore::FrameIdentifier, SpeechRecognitionPermissionRequestCallback&&)>;
+using SpeechRecognitionPermissionChecker = Function<void(WebCore::SpeechRecognitionRequest&, SpeechRecognitionPermissionRequestCallback&&)>;
 using SpeechRecognitionCheckIfMockSpeechRecognitionEnabled = Function<bool()>;
 
 class SpeechRecognitionServer : public CanMakeWeakPtr<SpeechRecognitionServer>, public IPC::MessageReceiver, private IPC::MessageSender {

Modified: trunk/Source/WebKit/UIProcess/WebPageProxy.cpp (272416 => 272417)


--- trunk/Source/WebKit/UIProcess/WebPageProxy.cpp	2021-02-05 16:38:48 UTC (rev 272416)
+++ trunk/Source/WebKit/UIProcess/WebPageProxy.cpp	2021-02-05 16:54:00 UTC (rev 272417)
@@ -10406,12 +10406,12 @@
     });
 }
 
-void WebPageProxy::requestSpeechRecognitionPermission(const String& lang, const WebCore::ClientOrigin& clientOrigin, WebCore::FrameIdentifier frameIdentifier, CompletionHandler<void(Optional<SpeechRecognitionError>&&)>&& completionHandler)
+void WebPageProxy::requestSpeechRecognitionPermission(WebCore::SpeechRecognitionRequest& request, CompletionHandler<void(Optional<SpeechRecognitionError>&&)>&& completionHandler)
 {
     if (!m_speechRecognitionPermissionManager)
         m_speechRecognitionPermissionManager = makeUnique<SpeechRecognitionPermissionManager>(*this);
 
-    m_speechRecognitionPermissionManager->request(lang, clientOrigin, frameIdentifier, WTFMove(completionHandler));
+    m_speechRecognitionPermissionManager->request(request, WTFMove(completionHandler));
 }
 
 void WebPageProxy::requestSpeechRecognitionPermissionByDefaultAction(const WebCore::SecurityOrigin& origin, CompletionHandler<void(bool)>&& completionHandler)

Modified: trunk/Source/WebKit/UIProcess/WebPageProxy.h (272416 => 272417)


--- trunk/Source/WebKit/UIProcess/WebPageProxy.h	2021-02-05 16:38:48 UTC (rev 272416)
+++ trunk/Source/WebKit/UIProcess/WebPageProxy.h	2021-02-05 16:54:00 UTC (rev 272417)
@@ -227,6 +227,7 @@
 class RunLoopObserver;
 class SelectionData;
 class SharedBuffer;
+class SpeechRecognitionRequest;
 class TextIndicator;
 class ValidationBubble;
 
@@ -1825,7 +1826,7 @@
     void setMediaCaptureReportingDelay(Seconds captureReportingDelay) { m_mediaCaptureReportingDelay = captureReportingDelay; }
     size_t suspendMediaPlaybackCounter() { return m_suspendMediaPlaybackCounter; }
 
-    void requestSpeechRecognitionPermission(const String& lang, const WebCore::ClientOrigin&, WebCore::FrameIdentifier, SpeechRecognitionPermissionRequestCallback&&);
+    void requestSpeechRecognitionPermission(WebCore::SpeechRecognitionRequest&, SpeechRecognitionPermissionRequestCallback&&);
     void requestSpeechRecognitionPermissionByDefaultAction(const WebCore::SecurityOrigin&, CompletionHandler<void(bool)>&&);
     void requestUserMediaPermissionForSpeechRecognition(WebCore::FrameIdentifier, const WebCore::SecurityOrigin&, const WebCore::SecurityOrigin&, CompletionHandler<void(bool)>&&);
 

Modified: trunk/Source/WebKit/UIProcess/WebProcessProxy.cpp (272416 => 272417)


--- trunk/Source/WebKit/UIProcess/WebProcessProxy.cpp	2021-02-05 16:38:48 UTC (rev 272416)
+++ trunk/Source/WebKit/UIProcess/WebProcessProxy.cpp	2021-02-05 16:54:00 UTC (rev 272417)
@@ -1722,13 +1722,13 @@
 
     ASSERT(!m_speechRecognitionServerMap.contains(identifier));
     auto& speechRecognitionServer = m_speechRecognitionServerMap.add(identifier, nullptr).iterator->value;
-    auto permissionChecker = [weakPage = makeWeakPtr(targetPage)](auto& lang, auto& origin, auto frameIdentifier, auto&& completionHandler) mutable {
+    auto permissionChecker = [weakPage = makeWeakPtr(targetPage)](auto& request, auto&& completionHandler) mutable {
         if (!weakPage) {
             completionHandler(WebCore::SpeechRecognitionError { SpeechRecognitionErrorType::NotAllowed, "Page no longer exists"_s });
             return;
         }
 
-        weakPage->requestSpeechRecognitionPermission(lang, origin, frameIdentifier, WTFMove(completionHandler));
+        weakPage->requestSpeechRecognitionPermission(request, WTFMove(completionHandler));
     };
     auto checkIfMockCaptureDevicesEnabled = [weakPage = makeWeakPtr(targetPage)]() {
         return weakPage && weakPage->preferences().mockCaptureDevicesEnabled();