Title: [272434] trunk
Revision: 272434
Author: [email protected]
Date: 2021-02-05 12:27:19 -0800 (Fri, 05 Feb 2021)

Log Message

Enable audio capture for speech recognition in GPUProcess
https://bugs.webkit.org/show_bug.cgi?id=221457

Reviewed by Eric Carlson.

Source/WebCore:

Add a fake deviceId to play nice with capture ASSERTs.
Covered by updated tests.

* Modules/speech/SpeechRecognitionCaptureSource.cpp:
(WebCore::SpeechRecognitionCaptureSource::createRealtimeMediaSource):

Source/WebKit:

Allow creating remote sources without any constraints.
To do so, serialize a MediaConstraints with isValid = false through IPC and treat it as "no constraints" in the capture process.

Make sure to send sandbox extensions and authorizations so that GPUProcess can capture when a speech recognition audio capture request is made.

In the GPUProcess audio capture case, send the capture request to WebProcess, as is done on iOS.
WebProcess is then responsible for getting audio samples from GPUProcess and forwarding them to UIProcess.
A future refactoring should move speech recognition itself to GPUProcess.
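The "isValid = false means no constraints" trick described above can be sketched in isolation. This is a minimal stand-in, not WebKit's actual MediaConstraints or IPC encoder: the sender turns a null pointer into a default-constructed struct whose validity flag stays false, and the receiver maps an invalid struct back to "no constraints" instead of treating it as an error.

```cpp
#include <cassert>
#include <string>

// Hypothetical stand-in for WebCore::MediaConstraints; the real type
// already carries an isValid flag, which the patch reuses as an
// "absent" marker on the wire.
struct MediaConstraints {
    bool isValid { false };
    std::string audioDeviceId;
};

// Encode side (WebProcess): a null pointer becomes a default-constructed
// struct whose isValid flag remains false.
MediaConstraints encodeForIPC(const MediaConstraints* constraints)
{
    return constraints ? *constraints : MediaConstraints { };
}

// Decode side (capture process): an invalid struct is interpreted as
// "no constraints" rather than as a malformed message.
const MediaConstraints* decodeFromIPC(const MediaConstraints& decoded)
{
    return decoded.isValid ? &decoded : nullptr;
}
```

The advantage over adding a new optional field is that existing IPC message signatures keep passing a plain MediaConstraints by value.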

* UIProcess/Cocoa/UserMediaCaptureManagerProxy.cpp:
(WebKit::UserMediaCaptureManagerProxy::createMediaSourceForCaptureDeviceWithConstraints):
* UIProcess/UserMediaPermissionRequestManagerProxy.cpp:
(WebKit::UserMediaPermissionRequestManagerProxy::grantRequest):
* UIProcess/WebPageProxy.cpp:
(WebKit::WebPageProxy::createRealtimeMediaSourceForSpeechRecognition):
* WebProcess/Speech/SpeechRecognitionRealtimeMediaSourceManager.cpp:
(WebKit::SpeechRecognitionRealtimeMediaSourceManager::grantSandboxExtensions):
(WebKit::SpeechRecognitionRealtimeMediaSourceManager::createSource):
* WebProcess/cocoa/RemoteRealtimeMediaSource.cpp:
(WebKit::RemoteRealtimeMediaSource::create):
(WebKit::RemoteRealtimeMediaSource::RemoteRealtimeMediaSource):
(WebKit::RemoteRealtimeMediaSource::createRemoteMediaSource):
(WebKit::RemoteRealtimeMediaSource::~RemoteRealtimeMediaSource):
(WebKit::RemoteRealtimeMediaSource::cloneVideoSource):
(WebKit::RemoteRealtimeMediaSource::gpuProcessConnectionDidClose):
* WebProcess/cocoa/RemoteRealtimeMediaSource.h:
* WebProcess/cocoa/UserMediaCaptureManager.cpp:
(WebKit::UserMediaCaptureManager::AudioFactory::createAudioCaptureSource):
(WebKit::UserMediaCaptureManager::VideoFactory::createVideoCaptureSource):
(WebKit::UserMediaCaptureManager::DisplayFactory::createDisplayCaptureSource):

LayoutTests:

* fast/speechrecognition/ios/restart-recognition-after-stop.html:
* fast/speechrecognition/ios/start-recognition-then-stop.html:
* fast/speechrecognition/start-recognition-then-stop.html:
* fast/speechrecognition/start-second-recognition.html:

Diff

Modified: trunk/LayoutTests/ChangeLog (272433 => 272434)


--- trunk/LayoutTests/ChangeLog	2021-02-05 20:18:09 UTC (rev 272433)
+++ trunk/LayoutTests/ChangeLog	2021-02-05 20:27:19 UTC (rev 272434)
@@ -1,3 +1,15 @@
+2021-02-05  Youenn Fablet  <[email protected]>
+
+        Enable audio capture for speech recognition in GPUProcess
+        https://bugs.webkit.org/show_bug.cgi?id=221457
+
+        Reviewed by Eric Carlson.
+
+        * fast/speechrecognition/ios/restart-recognition-after-stop.html:
+        * fast/speechrecognition/ios/start-recognition-then-stop.html:
+        * fast/speechrecognition/start-recognition-then-stop.html:
+        * fast/speechrecognition/start-second-recognition.html:
+
 2021-02-05  Patrick Angle  <[email protected]>
 
         Web Inspector: Implement backend support for maintaining a list of Grid layout contexts

Modified: trunk/LayoutTests/fast/speechrecognition/ios/restart-recognition-after-stop.html (272433 => 272434)


--- trunk/LayoutTests/fast/speechrecognition/ios/restart-recognition-after-stop.html	2021-02-05 20:18:09 UTC (rev 272433)
+++ trunk/LayoutTests/fast/speechrecognition/ios/restart-recognition-after-stop.html	2021-02-05 20:27:19 UTC (rev 272434)
@@ -1,4 +1,4 @@
-<!DOCTYPE html><!-- webkit-test-runner [ CaptureAudioInGPUProcessEnabled=false ] -->
+<!DOCTYPE html>
 <html>
 <body>
 <script src=""

Modified: trunk/LayoutTests/fast/speechrecognition/ios/start-recognition-then-stop.html (272433 => 272434)


--- trunk/LayoutTests/fast/speechrecognition/ios/start-recognition-then-stop.html	2021-02-05 20:18:09 UTC (rev 272433)
+++ trunk/LayoutTests/fast/speechrecognition/ios/start-recognition-then-stop.html	2021-02-05 20:27:19 UTC (rev 272434)
@@ -1,4 +1,4 @@
-<!DOCTYPE html><!-- webkit-test-runner [ CaptureAudioInGPUProcessEnabled=false ] -->
+<!DOCTYPE html>
 <html>
 <body>
 <script src=""

Modified: trunk/LayoutTests/fast/speechrecognition/start-recognition-then-stop.html (272433 => 272434)


--- trunk/LayoutTests/fast/speechrecognition/start-recognition-then-stop.html	2021-02-05 20:18:09 UTC (rev 272433)
+++ trunk/LayoutTests/fast/speechrecognition/start-recognition-then-stop.html	2021-02-05 20:27:19 UTC (rev 272434)
@@ -1,4 +1,4 @@
-<!DOCTYPE html><!-- webkit-test-runner [ CaptureAudioInGPUProcessEnabled=false ] -->
+<!DOCTYPE html>
 <html>
 <body>
 <script src=""

Modified: trunk/LayoutTests/fast/speechrecognition/start-second-recognition.html (272433 => 272434)


--- trunk/LayoutTests/fast/speechrecognition/start-second-recognition.html	2021-02-05 20:18:09 UTC (rev 272433)
+++ trunk/LayoutTests/fast/speechrecognition/start-second-recognition.html	2021-02-05 20:27:19 UTC (rev 272434)
@@ -1,4 +1,4 @@
-<!DOCTYPE html><!-- webkit-test-runner [ CaptureAudioInGPUProcessEnabled=false ] -->
+<!DOCTYPE html>
 <html>
 <body>
 <script src=""

Modified: trunk/Source/WebCore/ChangeLog (272433 => 272434)


--- trunk/Source/WebCore/ChangeLog	2021-02-05 20:18:09 UTC (rev 272433)
+++ trunk/Source/WebCore/ChangeLog	2021-02-05 20:27:19 UTC (rev 272434)
@@ -1,3 +1,16 @@
+2021-02-05  Youenn Fablet  <[email protected]>
+
+        Enable audio capture for speech recognition in GPUProcess
+        https://bugs.webkit.org/show_bug.cgi?id=221457
+
+        Reviewed by Eric Carlson.
+
+        Add a fake deviceId to play nice with capture ASSERTs.
+        Covered by updated tests.
+
+        * Modules/speech/SpeechRecognitionCaptureSource.cpp:
+        (WebCore::SpeechRecognitionCaptureSource::createRealtimeMediaSource):
+
 2021-02-05  Patrick Angle  <[email protected]>
 
         Web Inspector: Implement backend support for maintaining a list of Grid layout contexts

Modified: trunk/Source/WebCore/Modules/speech/SpeechRecognitionCaptureSource.cpp (272433 => 272434)


--- trunk/Source/WebCore/Modules/speech/SpeechRecognitionCaptureSource.cpp	2021-02-05 20:18:09 UTC (rev 272433)
+++ trunk/Source/WebCore/Modules/speech/SpeechRecognitionCaptureSource.cpp	2021-02-05 20:27:19 UTC (rev 272434)
@@ -64,7 +64,7 @@
 
 CaptureSourceOrError SpeechRecognitionCaptureSource::createRealtimeMediaSource(const CaptureDevice& captureDevice)
 {
-    return RealtimeMediaSourceCenter::singleton().audioCaptureFactory().createAudioCaptureSource(captureDevice, { }, { });
+    return RealtimeMediaSourceCenter::singleton().audioCaptureFactory().createAudioCaptureSource(captureDevice, "SpeechID"_s, { });
 }
 
 SpeechRecognitionCaptureSource::SpeechRecognitionCaptureSource(SpeechRecognitionConnectionClientIdentifier clientIdentifier, DataCallback&& dataCallback, StateUpdateCallback&& stateUpdateCallback, Ref<RealtimeMediaSource>&& source)

Modified: trunk/Source/WebKit/ChangeLog (272433 => 272434)


--- trunk/Source/WebKit/ChangeLog	2021-02-05 20:18:09 UTC (rev 272433)
+++ trunk/Source/WebKit/ChangeLog	2021-02-05 20:27:19 UTC (rev 272434)
@@ -1,3 +1,41 @@
+2021-02-05  Youenn Fablet  <[email protected]>
+
+        Enable audio capture for speech recognition in GPUProcess
+        https://bugs.webkit.org/show_bug.cgi?id=221457
+
+        Reviewed by Eric Carlson.
+
+        Allow creating remote sources without any constraints.
+        To do so, serialize a MediaConstraints with isValid = false through IPC and treat it as "no constraints" in the capture process.
+
+        Make sure to send sandbox extensions and authorizations so that GPUProcess can capture when a speech recognition audio capture request is made.
+
+        In the GPUProcess audio capture case, send the capture request to WebProcess, as is done on iOS.
+        WebProcess is then responsible for getting audio samples from GPUProcess and forwarding them to UIProcess.
+        A future refactoring should move speech recognition itself to GPUProcess.
+
+        * UIProcess/Cocoa/UserMediaCaptureManagerProxy.cpp:
+        (WebKit::UserMediaCaptureManagerProxy::createMediaSourceForCaptureDeviceWithConstraints):
+        * UIProcess/UserMediaPermissionRequestManagerProxy.cpp:
+        (WebKit::UserMediaPermissionRequestManagerProxy::grantRequest):
+        * UIProcess/WebPageProxy.cpp:
+        (WebKit::WebPageProxy::createRealtimeMediaSourceForSpeechRecognition):
+        * WebProcess/Speech/SpeechRecognitionRealtimeMediaSourceManager.cpp:
+        (WebKit::SpeechRecognitionRealtimeMediaSourceManager::grantSandboxExtensions):
+        (WebKit::SpeechRecognitionRealtimeMediaSourceManager::createSource):
+        * WebProcess/cocoa/RemoteRealtimeMediaSource.cpp:
+        (WebKit::RemoteRealtimeMediaSource::create):
+        (WebKit::RemoteRealtimeMediaSource::RemoteRealtimeMediaSource):
+        (WebKit::RemoteRealtimeMediaSource::createRemoteMediaSource):
+        (WebKit::RemoteRealtimeMediaSource::~RemoteRealtimeMediaSource):
+        (WebKit::RemoteRealtimeMediaSource::cloneVideoSource):
+        (WebKit::RemoteRealtimeMediaSource::gpuProcessConnectionDidClose):
+        * WebProcess/cocoa/RemoteRealtimeMediaSource.h:
+        * WebProcess/cocoa/UserMediaCaptureManager.cpp:
+        (WebKit::UserMediaCaptureManager::AudioFactory::createAudioCaptureSource):
+        (WebKit::UserMediaCaptureManager::VideoFactory::createVideoCaptureSource):
+        (WebKit::UserMediaCaptureManager::DisplayFactory::createDisplayCaptureSource):
+
 2021-02-05  Kate Cheney  <[email protected]>
 
         ASSERTION FAILED: Completion handler should always be called under WebKit::VideoFullscreenManagerProxy::forEachSession

Modified: trunk/Source/WebKit/UIProcess/Cocoa/UserMediaCaptureManagerProxy.cpp (272433 => 272434)


--- trunk/Source/WebKit/UIProcess/Cocoa/UserMediaCaptureManagerProxy.cpp	2021-02-05 20:18:09 UTC (rev 272433)
+++ trunk/Source/WebKit/UIProcess/Cocoa/UserMediaCaptureManagerProxy.cpp	2021-02-05 20:27:19 UTC (rev 272434)
@@ -238,24 +238,26 @@
     m_connectionProxy->removeMessageReceiver(Messages::UserMediaCaptureManagerProxy::messageReceiverName());
 }
 
-void UserMediaCaptureManagerProxy::createMediaSourceForCaptureDeviceWithConstraints(RealtimeMediaSourceIdentifier id, const CaptureDevice& device, String&& hashSalt, const MediaConstraints& constraints, CompletionHandler<void(bool succeeded, String invalidConstraints, WebCore::RealtimeMediaSourceSettings&&, WebCore::RealtimeMediaSourceCapabilities&&)>&& completionHandler)
+void UserMediaCaptureManagerProxy::createMediaSourceForCaptureDeviceWithConstraints(RealtimeMediaSourceIdentifier id, const CaptureDevice& device, String&& hashSalt, const MediaConstraints& mediaConstraints, CompletionHandler<void(bool succeeded, String invalidConstraints, WebCore::RealtimeMediaSourceSettings&&, WebCore::RealtimeMediaSourceCapabilities&&)>&& completionHandler)
 {
     if (!m_connectionProxy->willStartCapture(device.type()))
         return completionHandler(false, "Request is not allowed"_s, RealtimeMediaSourceSettings { }, { });
 
+    auto* constraints = mediaConstraints.isValid ? &mediaConstraints : nullptr;
+
     CaptureSourceOrError sourceOrError;
     switch (device.type()) {
     case WebCore::CaptureDevice::DeviceType::Microphone:
-        sourceOrError = RealtimeMediaSourceCenter::singleton().audioCaptureFactory().createAudioCaptureSource(device, WTFMove(hashSalt), &constraints);
+        sourceOrError = RealtimeMediaSourceCenter::singleton().audioCaptureFactory().createAudioCaptureSource(device, WTFMove(hashSalt), constraints);
         break;
     case WebCore::CaptureDevice::DeviceType::Camera:
-        sourceOrError = RealtimeMediaSourceCenter::singleton().videoCaptureFactory().createVideoCaptureSource(device, WTFMove(hashSalt), &constraints);
+        sourceOrError = RealtimeMediaSourceCenter::singleton().videoCaptureFactory().createVideoCaptureSource(device, WTFMove(hashSalt), constraints);
         if (sourceOrError)
             sourceOrError.captureSource->monitorOrientation(m_orientationNotifier);
         break;
     case WebCore::CaptureDevice::DeviceType::Screen:
     case WebCore::CaptureDevice::DeviceType::Window:
-        sourceOrError = RealtimeMediaSourceCenter::singleton().displayCaptureFactory().createDisplayCaptureSource(device, &constraints);
+        sourceOrError = RealtimeMediaSourceCenter::singleton().displayCaptureFactory().createDisplayCaptureSource(device, constraints);
         break;
     case WebCore::CaptureDevice::DeviceType::Speaker:
     case WebCore::CaptureDevice::DeviceType::Unknown:

Modified: trunk/Source/WebKit/UIProcess/UserMediaPermissionRequestManagerProxy.cpp (272433 => 272434)


--- trunk/Source/WebKit/UIProcess/UserMediaPermissionRequestManagerProxy.cpp	2021-02-05 20:18:09 UTC (rev 272433)
+++ trunk/Source/WebKit/UIProcess/UserMediaPermissionRequestManagerProxy.cpp	2021-02-05 20:27:19 UTC (rev 272434)
@@ -240,8 +240,10 @@
     ALWAYS_LOG(LOGIDENTIFIER, request.userMediaID(), ", video: ", request.videoDevice().label(), ", audio: ", request.audioDevice().label());
 
     if (auto callback = request.decisionCompletionHandler()) {
+        m_page.willStartCapture(request, [callback = WTFMove(callback)]() mutable {
+            callback(true);
+        });
         m_grantedRequests.append(makeRef(request));
-        callback(true);
         return;
     }
 

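The grantRequest() hunk above moves the decision callback inside a completion handler so that capture authorization (e.g. for GPUProcess) is set up before the page learns the request was granted. The following sketch shows that reordering with simplified, hypothetical types; willStartCapture() and the log are illustrative, not WebKit's API.

```cpp
#include <cassert>
#include <functional>
#include <string>
#include <vector>

// Illustrative stand-in for WebPageProxy: willStartCapture() represents
// setting up GPUProcess capture authorization, then invoking the
// continuation it was given.
struct Page {
    std::vector<std::string> log;

    void willStartCapture(std::function<void()>&& completion)
    {
        log.push_back("authorize-gpu-capture");
        completion();
    }
};

// Sketch of the patched grantRequest(): before the change the decision
// handler ran immediately; after it, the handler only runs once
// willStartCapture() has completed.
void grantRequest(Page& page, std::function<void(bool)>&& decisionHandler)
{
    page.willStartCapture([&page, handler = std::move(decisionHandler)]() mutable {
        page.log.push_back("decision");
        handler(true);
    });
}
```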
Modified: trunk/Source/WebKit/UIProcess/WebPageProxy.cpp (272433 => 272434)


--- trunk/Source/WebKit/UIProcess/WebPageProxy.cpp	2021-02-05 20:18:09 UTC (rev 272433)
+++ trunk/Source/WebKit/UIProcess/WebPageProxy.cpp	2021-02-05 20:27:19 UTC (rev 272434)
@@ -10441,13 +10441,13 @@
 
 WebCore::CaptureSourceOrError WebPageProxy::createRealtimeMediaSourceForSpeechRecognition()
 {
-    if (preferences().captureAudioInGPUProcessEnabled())
-        return CaptureSourceOrError { "Not implemented for GPU process" };
-
     auto captureDevice = SpeechRecognitionCaptureSource::findCaptureDevice();
     if (!captureDevice)
         return CaptureSourceOrError { "No device is available for capture" };
 
+    if (preferences().captureAudioInGPUProcessEnabled())
+        return CaptureSourceOrError { SpeechRecognitionRemoteRealtimeMediaSource::create(m_process->ensureSpeechRecognitionRemoteRealtimeMediaSourceManager(), *captureDevice) };
+
 #if PLATFORM(IOS_FAMILY)
     return CaptureSourceOrError { SpeechRecognitionRemoteRealtimeMediaSource::create(m_process->ensureSpeechRecognitionRemoteRealtimeMediaSourceManager(), *captureDevice) };
 #else

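The WebPageProxy hunk above reorders the checks: the device lookup now happens before the GPUProcess preference is consulted, so a missing device is reported even when GPUProcess capture is enabled (which previously short-circuited to "Not implemented"). A hypothetical distillation of the resulting control flow, with stand-in names rather than WebKit's actual types:

```cpp
#include <cassert>
#include <optional>
#include <string>

// Possible outcomes of createRealtimeMediaSourceForSpeechRecognition()
// after the patch (names are illustrative).
enum class CapturePath { NoDevice, GPUProcessRemoteSource, PlatformDefault };

CapturePath chooseCapturePath(const std::optional<std::string>& captureDevice,
                              bool captureAudioInGPUProcessEnabled)
{
    if (!captureDevice)
        return CapturePath::NoDevice; // device check now precedes the feature flag
    if (captureAudioInGPUProcessEnabled)
        return CapturePath::GPUProcessRemoteSource; // remote source backed by GPUProcess
    return CapturePath::PlatformDefault; // remote source on iOS, in-process elsewhere
}
```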
Modified: trunk/Source/WebKit/WebProcess/Speech/SpeechRecognitionRealtimeMediaSourceManager.cpp (272433 => 272434)


--- trunk/Source/WebKit/WebProcess/Speech/SpeechRecognitionRealtimeMediaSourceManager.cpp	2021-02-05 20:18:09 UTC (rev 272433)
+++ trunk/Source/WebKit/WebProcess/Speech/SpeechRecognitionRealtimeMediaSourceManager.cpp	2021-02-05 20:27:19 UTC (rev 272434)
@@ -182,13 +182,13 @@
 {
     m_sandboxExtensionForTCCD = SandboxExtension::create(WTFMove(sandboxHandleForTCCD));
     if (!m_sandboxExtensionForTCCD)
-        LOG_ERROR("Failed to create sandbox extension for tccd");
+        RELEASE_LOG_ERROR(Media, "Failed to create sandbox extension for tccd");
     else
         m_sandboxExtensionForTCCD->consume();
 
     m_sandboxExtensionForMicrophone = SandboxExtension::create(WTFMove(sandboxHandleForMicrophone));
     if (!m_sandboxExtensionForMicrophone)
-        LOG_ERROR("Failed to create sandbox extension for microphone");
+        RELEASE_LOG_ERROR(Media, "Failed to create sandbox extension for microphone");
     else
         m_sandboxExtensionForMicrophone->consume();
 }
@@ -212,7 +212,7 @@
 {
     auto result = SpeechRecognitionCaptureSource::createRealtimeMediaSource(device);
     if (!result) {
-        LOG_ERROR("Failed to create realtime source");
+        RELEASE_LOG_ERROR(Media, "Failed to create realtime source");
         send(Messages::SpeechRecognitionRemoteRealtimeMediaSourceManager::RemoteCaptureFailed(identifier), 0);
         return;
     }

Modified: trunk/Source/WebKit/WebProcess/cocoa/RemoteRealtimeMediaSource.cpp (272433 => 272434)


--- trunk/Source/WebKit/WebProcess/cocoa/RemoteRealtimeMediaSource.cpp	2021-02-05 20:18:09 UTC (rev 272433)
+++ trunk/Source/WebKit/WebProcess/cocoa/RemoteRealtimeMediaSource.cpp	2021-02-05 20:27:19 UTC (rev 272434)
@@ -46,11 +46,11 @@
 using namespace PAL;
 using namespace WebCore;
 
-Ref<RealtimeMediaSource> RemoteRealtimeMediaSource::create(const CaptureDevice& device, const MediaConstraints& constraints, String&& name, String&& hashSalt, UserMediaCaptureManager& manager, bool shouldCaptureInGPUProcess)
+Ref<RealtimeMediaSource> RemoteRealtimeMediaSource::create(const CaptureDevice& device, const MediaConstraints* constraints, String&& name, String&& hashSalt, UserMediaCaptureManager& manager, bool shouldCaptureInGPUProcess)
 {
-    auto source = adoptRef(*new RemoteRealtimeMediaSource(RealtimeMediaSourceIdentifier::generate(), device.type(), WTFMove(name), WTFMove(hashSalt), manager, shouldCaptureInGPUProcess));
+    auto source = adoptRef(*new RemoteRealtimeMediaSource(RealtimeMediaSourceIdentifier::generate(), device, constraints, WTFMove(name), WTFMove(hashSalt), manager, shouldCaptureInGPUProcess));
     manager.addSource(source.copyRef());
-    source->createRemoteMediaSource(device, constraints);
+    source->createRemoteMediaSource();
     return source;
 }
 
@@ -70,14 +70,17 @@
     return RealtimeMediaSource::Type::None;
 }
 
-RemoteRealtimeMediaSource::RemoteRealtimeMediaSource(RealtimeMediaSourceIdentifier identifier, CaptureDevice::DeviceType deviceType, String&& name, String&& hashSalt, UserMediaCaptureManager& manager, bool shouldCaptureInGPUProcess)
-    : RealtimeMediaSource(sourceTypeFromDeviceType(deviceType), WTFMove(name), String::number(identifier.toUInt64()), WTFMove(hashSalt))
+RemoteRealtimeMediaSource::RemoteRealtimeMediaSource(RealtimeMediaSourceIdentifier identifier, const CaptureDevice& device, const MediaConstraints* constraints, String&& name, String&& hashSalt, UserMediaCaptureManager& manager, bool shouldCaptureInGPUProcess)
+    : RealtimeMediaSource(sourceTypeFromDeviceType(device.type()), WTFMove(name), String::number(identifier.toUInt64()), WTFMove(hashSalt))
     , m_identifier(identifier)
     , m_manager(manager)
-    , m_deviceType(deviceType)
+    , m_device(device)
     , m_shouldCaptureInGPUProcess(shouldCaptureInGPUProcess)
 {
-    switch (m_deviceType) {
+    if (constraints)
+        m_constraints = *constraints;
+
+    switch (m_device.type()) {
     case CaptureDevice::DeviceType::Microphone:
 #if PLATFORM(IOS_FAMILY)
         RealtimeMediaSourceCenter::singleton().audioCaptureFactory().setActiveSource(*this);
@@ -97,14 +100,9 @@
     }
 }
 
-void RemoteRealtimeMediaSource::createRemoteMediaSource(const CaptureDevice& device, const MediaConstraints& constraints)
+void RemoteRealtimeMediaSource::createRemoteMediaSource()
 {
-    if (m_shouldCaptureInGPUProcess) {
-        m_device = device;
-        m_constraints = constraints;
-    }
-
-    connection()->sendWithAsyncReply(Messages::UserMediaCaptureManagerProxy::CreateMediaSourceForCaptureDeviceWithConstraints(identifier(), device, deviceIDHashSalt(), constraints), [this, protectedThis = makeRef(*this)](bool succeeded, auto&& errorMessage, auto&& settings, auto&& capabilities) {
+    connection()->sendWithAsyncReply(Messages::UserMediaCaptureManagerProxy::CreateMediaSourceForCaptureDeviceWithConstraints(identifier(), m_device, deviceIDHashSalt(), m_constraints), [this, protectedThis = makeRef(*this)](bool succeeded, auto&& errorMessage, auto&& settings, auto&& capabilities) {
         if (!succeeded) {
             didFail(WTFMove(errorMessage));
             return;
@@ -123,7 +121,7 @@
     if (m_shouldCaptureInGPUProcess)
         WebProcess::singleton().ensureGPUProcessConnection().removeClient(*this);
 
-    switch (m_deviceType) {
+    switch (m_device.type()) {
     case CaptureDevice::DeviceType::Microphone:
 #if PLATFORM(IOS_FAMILY)
         RealtimeMediaSourceCenter::singleton().audioCaptureFactory().unsetActiveSource(*this);
@@ -184,7 +182,7 @@
     if (!connection()->send(Messages::UserMediaCaptureManagerProxy::Clone { m_identifier, identifier }, 0))
         return *this;
 
-    auto cloneSource = adoptRef(*new RemoteRealtimeMediaSource(identifier, deviceType(), String { m_settings.label().string() }, deviceIDHashSalt(), m_manager, m_shouldCaptureInGPUProcess));
+    auto cloneSource = adoptRef(*new RemoteRealtimeMediaSource(identifier, m_device, &m_constraints, String { m_settings.label().string() }, deviceIDHashSalt(), m_manager, m_shouldCaptureInGPUProcess));
     cloneSource->setSettings(RealtimeMediaSourceSettings { m_settings });
     m_manager.addSource(cloneSource.copyRef());
     return cloneSource;
@@ -329,7 +327,7 @@
         return;
 
     m_manager.didUpdateSourceConnection(*this);
-    createRemoteMediaSource(m_device, m_constraints);
+    createRemoteMediaSource();
     // FIXME: We should update the track according current settings.
     if (isProducingData())
         startProducingData();

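The RemoteRealtimeMediaSource restructuring above moves the device and constraints into the constructor and keeps them as members, so createRemoteMediaSource() takes no arguments and can simply be replayed when the GPUProcess connection closes. A simplified sketch of that design, with hypothetical stand-in types rather than WebKit's:

```cpp
#include <cassert>
#include <string>

// Simplified stand-ins for the WebCore types.
struct CaptureDevice { std::string persistentId; };
struct MediaConstraints { bool isValid { false }; };

class RemoteSource {
public:
    RemoteSource(CaptureDevice device, const MediaConstraints* constraints)
        : m_device(std::move(device))
    {
        if (constraints)
            m_constraints = *constraints; // copied once, at construction
        createRemoteMediaSource();
    }

    // No parameters needed: everything required to (re)create the remote
    // end is already stored on the source. Here we just count the
    // "create" messages that would be sent over IPC.
    void createRemoteMediaSource() { ++m_createMessagesSent; }

    // On GPUProcess crash, the same creation message is replayed from the
    // stored state.
    void gpuProcessConnectionDidClose() { createRemoteMediaSource(); }

    int createMessagesSent() const { return m_createMessagesSent; }

private:
    CaptureDevice m_device;
    MediaConstraints m_constraints; // default (isValid = false) means "none"
    int m_createMessagesSent { 0 };
};
```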
Modified: trunk/Source/WebKit/WebProcess/cocoa/RemoteRealtimeMediaSource.h (272433 => 272434)


--- trunk/Source/WebKit/WebProcess/cocoa/RemoteRealtimeMediaSource.h	2021-02-05 20:18:09 UTC (rev 272433)
+++ trunk/Source/WebKit/WebProcess/cocoa/RemoteRealtimeMediaSource.h	2021-02-05 20:27:19 UTC (rev 272434)
@@ -54,7 +54,7 @@
 #endif
 {
 public:
-    static Ref<WebCore::RealtimeMediaSource> create(const WebCore::CaptureDevice&, const WebCore::MediaConstraints&, String&& name, String&& hashSalt, UserMediaCaptureManager&, bool shouldCaptureInGPUProcess);
+    static Ref<WebCore::RealtimeMediaSource> create(const WebCore::CaptureDevice&, const WebCore::MediaConstraints*, String&& name, String&& hashSalt, UserMediaCaptureManager&, bool shouldCaptureInGPUProcess);
     ~RemoteRealtimeMediaSource();
 
     WebCore::RealtimeMediaSourceIdentifier identifier() const { return m_identifier; }
@@ -72,7 +72,7 @@
     void remoteAudioSamplesAvailable(const WTF::MediaTime&, const WebCore::PlatformAudioData&, const WebCore::AudioStreamDescription&, size_t);
 
 private:
-    RemoteRealtimeMediaSource(WebCore::RealtimeMediaSourceIdentifier, WebCore::CaptureDevice::DeviceType, String&& name, String&& hashSalt, UserMediaCaptureManager&, bool shouldCaptureInGPUProcess);
+    RemoteRealtimeMediaSource(WebCore::RealtimeMediaSourceIdentifier, const WebCore::CaptureDevice&, const WebCore::MediaConstraints*, String&& name, String&& hashSalt, UserMediaCaptureManager&, bool shouldCaptureInGPUProcess);
 
     // RealtimeMediaSource
     void startProducingData() final;
@@ -88,7 +88,7 @@
     const WebCore::RealtimeMediaSourceSettings& settings() final { return m_settings; }
     const WebCore::RealtimeMediaSourceCapabilities& capabilities() final;
     void whenReady(CompletionHandler<void(String)>&&) final;
-    WebCore::CaptureDevice::DeviceType deviceType() const final { return m_deviceType; }
+    WebCore::CaptureDevice::DeviceType deviceType() const final { return m_device.type(); }
     Ref<RealtimeMediaSource> clone() final;
 
 #if ENABLE(GPU_PROCESS)
@@ -96,7 +96,7 @@
     void gpuProcessConnectionDidClose(GPUProcessConnection&) final;
 #endif
 
-    void createRemoteMediaSource(const WebCore::CaptureDevice&, const WebCore::MediaConstraints&);
+    void createRemoteMediaSource();
     void didFail(String&& errorMessage);
     void setAsReady();
     void setCapabilities(WebCore::RealtimeMediaSourceCapabilities&&);
@@ -107,8 +107,10 @@
     WebCore::RealtimeMediaSourceCapabilities m_capabilities;
     WebCore::RealtimeMediaSourceSettings m_settings;
 
+    WebCore::CaptureDevice m_device;
+    WebCore::MediaConstraints m_constraints;
+
     std::unique_ptr<WebCore::ImageTransferSessionVT> m_imageTransferSession;
-    WebCore::CaptureDevice::DeviceType m_deviceType { WebCore::CaptureDevice::DeviceType::Unknown };
 
     Deque<ApplyConstraintsHandler> m_pendingApplyConstraintsCallbacks;
     bool m_shouldCaptureInGPUProcess { false };
@@ -116,8 +118,6 @@
     bool m_hasRequestedToEnd { false };
     String m_errorMessage;
     CompletionHandler<void(String)> m_callback;
-    WebCore::CaptureDevice m_device;
-    WebCore::MediaConstraints m_constraints;
 };
 
 } // namespace WebKit

Modified: trunk/Source/WebKit/WebProcess/cocoa/UserMediaCaptureManager.cpp (272433 => 272434)


--- trunk/Source/WebKit/WebProcess/cocoa/UserMediaCaptureManager.cpp	2021-02-05 20:18:09 UTC (rev 272433)
+++ trunk/Source/WebKit/WebProcess/cocoa/UserMediaCaptureManager.cpp	2021-02-05 20:27:19 UTC (rev 272434)
@@ -144,9 +144,6 @@
 
 CaptureSourceOrError UserMediaCaptureManager::AudioFactory::createAudioCaptureSource(const CaptureDevice& device, String&& hashSalt, const MediaConstraints* constraints)
 {
-    if (!constraints)
-        return { };
-
 #if !ENABLE(GPU_PROCESS)
     if (m_shouldCaptureInGPUProcess)
         return CaptureSourceOrError { "Audio capture in GPUProcess is not implemented"_s };
@@ -158,7 +155,7 @@
         DeprecatedGlobalSettings::setShouldManageAudioSessionCategory(true);
 #endif
 
-    return RemoteRealtimeMediaSource::create(device, *constraints, { }, WTFMove(hashSalt), m_manager, m_shouldCaptureInGPUProcess);
+    return RemoteRealtimeMediaSource::create(device, constraints, { }, WTFMove(hashSalt), m_manager, m_shouldCaptureInGPUProcess);
 }
 
 void UserMediaCaptureManager::AudioFactory::setShouldCaptureInGPUProcess(bool value)
@@ -168,15 +165,12 @@
 
 CaptureSourceOrError UserMediaCaptureManager::VideoFactory::createVideoCaptureSource(const CaptureDevice& device, String&& hashSalt, const MediaConstraints* constraints)
 {
-    if (!constraints)
-        return { };
-
 #if !ENABLE(GPU_PROCESS)
     if (m_shouldCaptureInGPUProcess)
         return CaptureSourceOrError { "Video capture in GPUProcess is not implemented"_s };
 #endif
 
-    return RemoteRealtimeMediaSource::create(device, *constraints, { }, WTFMove(hashSalt), m_manager, m_shouldCaptureInGPUProcess);
+    return RemoteRealtimeMediaSource::create(device, constraints, { }, WTFMove(hashSalt), m_manager, m_shouldCaptureInGPUProcess);
 }
 
 #if PLATFORM(IOS_FAMILY)
@@ -188,10 +182,7 @@
 
 CaptureSourceOrError UserMediaCaptureManager::DisplayFactory::createDisplayCaptureSource(const CaptureDevice& device, const MediaConstraints* constraints)
 {
-    if (!constraints)
-        return { };
-
-    return RemoteRealtimeMediaSource::create(device, *constraints, { }, { }, m_manager, false);
+    return RemoteRealtimeMediaSource::create(device, constraints, { }, { }, m_manager, false);
 }
 
 void UserMediaCaptureManager::didUpdateSourceConnection(RemoteRealtimeMediaSource& source)