Title: [286650] trunk/Source
Revision
286650
Author
[email protected]
Date
2021-12-08 03:05:54 -0800 (Wed, 08 Dec 2021)

Log Message

Let RemoteAudioMediaStreamTrackRendererInternalUnitManager::Unit switch to VPIO unit if VPIO is running
https://bugs.webkit.org/show_bug.cgi?id=233316

Reviewed by Eric Carlson.

Source/WebCore:

Add support for an external provider of speaker data to the audio capture unit.
WebKit uses this to render the MediaStreamTracks of the process doing capture through VPIO when it is running.
If the audio capture unit's speaker format does not match the producer format, we reconfigure the unit to align with the producer format.
If audio is being rendered by the audio capture unit but capture has stopped, we keep the audio capture unit running.
This allows muting and unmuting the microphone without any audio glitch.
We make sure to stop the audio capture unit once it is no longer used for rendering or capture.
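The lifecycle above can be sketched as a small standalone model (a hypothetical sketch; `SharedAudioUnitModel` and its members are illustrative stand-ins for `BaseAudioSharedUnit`, not WebKit's actual code): the unit keeps running while either capture or rendering uses it, and stopping capture while rendering only stops producing microphone samples.

```cpp
#include <cassert>

// Illustrative model of the shared audio unit lifecycle: the unit is torn
// down only once it is used for neither capture nor rendering.
class SharedAudioUnitModel {
public:
    void startProducing()
    {
        m_isProducingMicrophoneSamples = true;
        ++m_producingCount;
        m_running = true;
    }

    void stopProducing()
    {
        if (m_producingCount && --m_producingCount)
            return;
        if (m_isRenderingAudio) {
            // Keep the unit running for rendering; just stop producing
            // microphone samples so mute/unmute is glitch-free.
            m_isProducingMicrophoneSamples = false;
            return;
        }
        m_running = false; // No renderer either: stop the unit.
    }

    void setIsRenderingAudio(bool value)
    {
        m_isRenderingAudio = value;
        if (m_isRenderingAudio || m_producingCount)
            return;
        m_running = false; // Neither rendering nor capturing: stop the unit.
    }

    bool isRunning() const { return m_running; }
    bool isProducingMicrophoneSamples() const { return m_isProducingMicrophoneSamples; }

private:
    int m_producingCount { 0 };
    bool m_isRenderingAudio { false };
    bool m_isProducingMicrophoneSamples { true };
    bool m_running { false };
};
```

Stopping capture while a renderer is registered leaves the unit running but muted, which is what makes microphone mute/unmute glitch-free.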

If the unit is suspended and we restart an audio source, we unsuspend the unit, which unmutes sources.
We do not want to start audio sources from within the start of an audio source.
For that reason, we unmute the sources asynchronously.
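The deferred unmute can be sketched as follows (a hypothetical sketch: the task queue stands in for WTF's `callOnMainThread`, and `UnitModel`/`Client` are illustrative names; the weak-pointer guard mirrors the `WeakPtr` check in the patch).

```cpp
#include <functional>
#include <memory>
#include <vector>

struct Client {
    bool muted { true };
};

// Illustrative model: resuming the unit must not re-enter source start-up,
// so the unmute is queued rather than run inline.
class UnitModel : public std::enable_shared_from_this<UnitModel> {
public:
    explicit UnitModel(std::vector<std::function<void()>>& queue)
        : m_queue(queue) { }

    void addClient(Client& client) { m_clients.push_back(&client); }
    void suspend() { m_suspended = true; }

    void resume()
    {
        m_suspended = false;
        // Defer the unmute: doing it synchronously here could start
        // sources from within the start of a source.
        m_queue.push_back([weakThis = std::weak_ptr<UnitModel>(shared_from_this())] {
            auto strongThis = weakThis.lock();
            if (!strongThis || strongThis->m_suspended)
                return; // Unit is gone or was re-suspended meanwhile.
            for (auto* client : strongThis->m_clients)
                client->muted = false;
        });
    }

private:
    std::vector<std::function<void()>>& m_queue;
    std::vector<Client*> m_clients;
    bool m_suspended { true };
};
```

Note the guard re-checks the suspended state when the queued task finally runs, so a suspend that lands in between leaves the clients muted.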

Manually tested.

* WebCore.xcodeproj/project.pbxproj:
* platform/mediastream/mac/BaseAudioSharedUnit.cpp:
* platform/mediastream/mac/BaseAudioSharedUnit.h:
* platform/mediastream/mac/CoreAudioCaptureSource.cpp:
* platform/mediastream/mac/CoreAudioCaptureSource.h:

Source/WebKit:

We track which process was the last to start audio capture.
For that process, we register its RemoteAudioMediaStreamTrackRendererInternalUnitManager::Unit with the capture unit.
This allows RemoteAudioMediaStreamTrackRendererInternalUnitManager::Unit to switch between RemoteIO, when the capture unit is not running, and the capture unit when it is.
This also allows removing the x5 gain workaround.
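The "last process to start capture" broadcast can be sketched like this (a hypothetical sketch: `ConnectionModel` and `ProcessModel` are illustrative stand-ins for GPUConnectionToWebProcess and GPUProcess): when one connection starts audio capture, every connection is told, and only the matching one considers itself last.

```cpp
#include <cassert>
#include <vector>

// Illustrative model of tracking which web process connection was the
// last to start audio capture.
class ConnectionModel {
public:
    void processIsStartingToCaptureAudio(ConnectionModel& process)
    {
        // Only the connection that matches the starter keeps the flag.
        m_isLastToCaptureAudio = (this == &process);
    }
    bool isLastToCaptureAudio() const { return m_isLastToCaptureAudio; }

private:
    bool m_isLastToCaptureAudio { false };
};

class ProcessModel {
public:
    void addConnection(ConnectionModel& connection) { m_connections.push_back(&connection); }

    // Broadcast to every connection, mirroring the loop in
    // GPUProcess::processIsStartingToCaptureAudio.
    void startCapturingAudio(ConnectionModel& starter)
    {
        for (auto* connection : m_connections)
            connection->processIsStartingToCaptureAudio(starter);
    }

private:
    std::vector<ConnectionModel*> m_connections;
};
```

Exactly one connection holds the flag at a time, so at most one renderer unit registers as the speaker samples producer of the capture unit.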

* GPUProcess/GPUConnectionToWebProcess.cpp:
* GPUProcess/GPUConnectionToWebProcess.h:
* GPUProcess/GPUProcess.cpp:
* GPUProcess/GPUProcess.h:
* GPUProcess/webrtc/RemoteAudioMediaStreamTrackRendererInternalUnitManager.cpp:
* GPUProcess/webrtc/RemoteAudioMediaStreamTrackRendererInternalUnitManager.h:
* UIProcess/Cocoa/UserMediaCaptureManagerProxy.cpp:
* UIProcess/Cocoa/UserMediaCaptureManagerProxy.h:

Modified Paths

Diff

Modified: trunk/Source/WebCore/ChangeLog (286649 => 286650)


--- trunk/Source/WebCore/ChangeLog	2021-12-08 10:08:35 UTC (rev 286649)
+++ trunk/Source/WebCore/ChangeLog	2021-12-08 11:05:54 UTC (rev 286650)
@@ -1,3 +1,29 @@
+2021-12-08  Youenn Fablet  <[email protected]>
+
+        Let RemoteAudioMediaStreamTrackRendererInternalUnitManager::Unit switch to VPIO unit if VPIO is running
+        https://bugs.webkit.org/show_bug.cgi?id=233316
+
+        Reviewed by Eric Carlson.
+
+        Add support for an external provider of speaker data to the audio capture unit.
+        WebKit uses this to render the MediaStreamTracks of the process doing capture through VPIO when it is running.
+        If the audio capture unit's speaker format does not match the producer format, we reconfigure the unit to align with the producer format.
+        If audio is being rendered by the audio capture unit but capture has stopped, we keep the audio capture unit running.
+        This allows muting and unmuting the microphone without any audio glitch.
+        We make sure to stop the audio capture unit once it is no longer used for rendering or capture.
+
+        If the unit is suspended and we restart an audio source, we unsuspend the unit, which unmutes sources.
+        We do not want to start audio sources from within the start of an audio source.
+        For that reason, we unmute the sources asynchronously.
+
+        Manually tested.
+
+        * WebCore.xcodeproj/project.pbxproj:
+        * platform/mediastream/mac/BaseAudioSharedUnit.cpp:
+        * platform/mediastream/mac/BaseAudioSharedUnit.h:
+        * platform/mediastream/mac/CoreAudioCaptureSource.cpp:
+        * platform/mediastream/mac/CoreAudioCaptureSource.h:
+
 2021-12-08  Myles C. Maxfield  <[email protected]>
 
         [MacCatalyst] Various build fixes

Modified: trunk/Source/WebCore/WebCore.xcodeproj/project.pbxproj (286649 => 286650)


--- trunk/Source/WebCore/WebCore.xcodeproj/project.pbxproj	2021-12-08 10:08:35 UTC (rev 286649)
+++ trunk/Source/WebCore/WebCore.xcodeproj/project.pbxproj	2021-12-08 11:05:54 UTC (rev 286650)
@@ -1227,6 +1227,7 @@
 		41F062140F5F192600A07EAC /* InspectorDatabaseResource.h in Headers */ = {isa = PBXBuildFile; fileRef = 41F062120F5F192600A07EAC /* InspectorDatabaseResource.h */; };
 		41F1D21F0EF35C2A00DA8753 /* ScriptCachedFrameData.h in Headers */ = {isa = PBXBuildFile; fileRef = 41F1D21D0EF35C2A00DA8753 /* ScriptCachedFrameData.h */; settings = {ATTRIBUTES = (Private, ); }; };
 		41FABD2D1F4DFE4A006A6C97 /* DOMCacheEngine.h in Headers */ = {isa = PBXBuildFile; fileRef = 41FABD2B1F4DFE42006A6C97 /* DOMCacheEngine.h */; settings = {ATTRIBUTES = (Private, ); }; };
+		41FCCC3B2746675600892AD6 /* CoreAudioCaptureSource.h in Headers */ = {isa = PBXBuildFile; fileRef = 3F3BB5831E709EE400C701F2 /* CoreAudioCaptureSource.h */; settings = {ATTRIBUTES = (Private, ); }; };
 		41FCD6B923CE015500C62567 /* SampleBufferDisplayLayer.h in Headers */ = {isa = PBXBuildFile; fileRef = 414598BE23C8AAB8002B9CC8 /* SampleBufferDisplayLayer.h */; settings = {ATTRIBUTES = (Private, ); }; };
 		41FCD6BB23CE027700C62567 /* LocalSampleBufferDisplayLayer.h in Headers */ = {isa = PBXBuildFile; fileRef = 414598C023C8AD78002B9CC8 /* LocalSampleBufferDisplayLayer.h */; settings = {ATTRIBUTES = (Private, ); }; };
 		427DA71D13735DFA007C57FB /* JSServiceWorkerInternals.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 427DA71B13735DFA007C57FB /* JSServiceWorkerInternals.cpp */; };
@@ -33633,6 +33634,7 @@
 				5120BBAF1F1CECE700EFEBF1 /* CookieStorageObserver.h in Headers */,
 				3F8020351E9E47BF00DEC61D /* CoreAudioCaptureDevice.h in Headers */,
 				3F8020371E9E47C500DEC61D /* CoreAudioCaptureDeviceManager.h in Headers */,
+				41FCCC3B2746675600892AD6 /* CoreAudioCaptureSource.h in Headers */,
 				07AFF4221EFB144900B545B3 /* CoreAudioCaptureSourceIOS.h in Headers */,
 				CD7D33481C7A16BF00041293 /* CoreVideoSoftLink.h in Headers */,
 				862F129E18C1576F005C54AF /* CountedUserActivity.h in Headers */,

Modified: trunk/Source/WebCore/platform/mediastream/mac/BaseAudioSharedUnit.cpp (286649 => 286650)


--- trunk/Source/WebCore/platform/mediastream/mac/BaseAudioSharedUnit.cpp	2021-12-08 10:08:35 UTC (rev 286649)
+++ trunk/Source/WebCore/platform/mediastream/mac/BaseAudioSharedUnit.cpp	2021-12-08 11:05:54 UTC (rev 286650)
@@ -84,6 +84,8 @@
     if (m_suspended)
         resume();
 
+    setIsProducingMicrophoneSamples(true);
+
     if (++m_producingCount != 1)
         return;
 
@@ -177,10 +179,25 @@
     if (m_producingCount && --m_producingCount)
         return;
 
+    if (m_isRenderingAudio) {
+        setIsProducingMicrophoneSamples(false);
+        return;
+    }
+
     stopInternal();
     cleanupAudioUnit();
 }
 
+void BaseAudioSharedUnit::setIsRenderingAudio(bool value)
+{
+    m_isRenderingAudio = value;
+    if (m_isRenderingAudio || m_producingCount)
+        return;
+
+    stopInternal();
+    cleanupAudioUnit();
+}
+
 void BaseAudioSharedUnit::reconfigure()
 {
     ASSERT(isMainThread());
@@ -210,8 +227,13 @@
 
     ASSERT(!m_producingCount);
 
-    forEachClient([](auto& client) {
-        client.setMuted(false);
+    callOnMainThread([weakThis = WeakPtr { this }] {
+        if (!weakThis || weakThis->m_suspended)
+            return;
+
+        weakThis->forEachClient([](auto& client) {
+            client.setMuted(false);
+        });
     });
 
     return 0;

Modified: trunk/Source/WebCore/platform/mediastream/mac/BaseAudioSharedUnit.h (286649 => 286650)


--- trunk/Source/WebCore/platform/mediastream/mac/BaseAudioSharedUnit.h	2021-12-08 10:08:35 UTC (rev 286649)
+++ trunk/Source/WebCore/platform/mediastream/mac/BaseAudioSharedUnit.h	2021-12-08 11:05:54 UTC (rev 286650)
@@ -32,6 +32,7 @@
 #include <wtf/HashSet.h>
 #include <wtf/Lock.h>
 #include <wtf/MediaTime.h>
+#include <wtf/WeakPtr.h>
 #include <wtf/text/WTFString.h>
 
 namespace WebCore {
@@ -41,7 +42,7 @@
 class CoreAudioCaptureSource;
 class PlatformAudioData;
 
-class BaseAudioSharedUnit {
+class BaseAudioSharedUnit : public CanMakeWeakPtr<BaseAudioSharedUnit> {
 public:
     BaseAudioSharedUnit();
     virtual ~BaseAudioSharedUnit() = default;
@@ -98,6 +99,12 @@
     const String& persistentID() const { return m_capturingDevice ? m_capturingDevice->first : emptyString(); }
     uint32_t captureDeviceID() const { return m_capturingDevice ? m_capturingDevice->second : 0; }
 
+    void setIsRenderingAudio(bool);
+
+protected:
+    void setIsProducingMicrophoneSamples(bool value) { m_isProducingMicrophoneSamples = value; }
+    bool isProducingMicrophoneSamples() const { return m_isProducingMicrophoneSamples; }
+
 private:
     OSStatus startUnit();
 
@@ -106,6 +113,7 @@
     int m_sampleRate;
     bool m_suspended { false };
     bool m_needsReconfiguration { false };
+    bool m_isRenderingAudio { false };
 
     int32_t m_producingCount { 0 };
 
@@ -114,6 +122,8 @@
     HashSet<CoreAudioCaptureSource*> m_clients;
     Vector<CoreAudioCaptureSource*> m_audioThreadClients WTF_GUARDED_BY_LOCK(m_audioThreadClientsLock);
     Lock m_audioThreadClientsLock;
+
+    bool m_isProducingMicrophoneSamples { true };
 };
 
 } // namespace WebCore

Modified: trunk/Source/WebCore/platform/mediastream/mac/CoreAudioCaptureSource.cpp (286649 => 286650)


--- trunk/Source/WebCore/platform/mediastream/mac/CoreAudioCaptureSource.cpp	2021-12-08 10:08:35 UTC (rev 286649)
+++ trunk/Source/WebCore/platform/mediastream/mac/CoreAudioCaptureSource.cpp	2021-12-08 11:05:54 UTC (rev 286650)
@@ -46,6 +46,7 @@
 #include <pal/spi/cf/CoreAudioSPI.h>
 #include <sys/time.h>
 #include <wtf/Algorithms.h>
+#include <wtf/Lock.h>
 #include <wtf/MainThread.h>
 #include <wtf/NeverDestroyed.h>
 
@@ -77,6 +78,10 @@
     static BaseAudioSharedUnit& singleton()  { return unit(); }
     CoreAudioSharedUnit();
 
+    void registerSpeakerSamplesProducer(CoreAudioSpeakerSamplesProducer&);
+    void unregisterSpeakerSamplesProducer(CoreAudioSpeakerSamplesProducer&);
+    bool isRunning() const { return m_ioUnitStarted; }
+
 private:
     static size_t preferredIOBufferSize();
 
@@ -103,7 +108,7 @@
     OSStatus processMicrophoneSamples(AudioUnitRenderActionFlags&, const AudioTimeStamp&, UInt32, UInt32, AudioBufferList*);
 
     static OSStatus speakerCallback(void*, AudioUnitRenderActionFlags*, const AudioTimeStamp*, UInt32, UInt32, AudioBufferList*);
-    OSStatus provideSpeakerData(AudioUnitRenderActionFlags&, const AudioTimeStamp&, UInt32, UInt32, AudioBufferList*);
+    OSStatus provideSpeakerData(AudioUnitRenderActionFlags&, const AudioTimeStamp&, UInt32, UInt32, AudioBufferList&);
 
     void unduck();
 
@@ -119,7 +124,6 @@
     uint64_t m_latestMicTimeStamp { 0 };
 
     CAAudioStreamDescription m_speakerProcFormat;
-    RefPtr<AudioSampleBufferList> m_speakerSampleBuffer;
 
     double m_DTSConversionRatio { 0 };
 
@@ -133,7 +137,6 @@
     void checkTimestamps(const AudioTimeStamp&, uint64_t, double);
 
     String m_ioUnitName;
-    uint64_t m_speakerProcsCalled { 0 };
 #endif
 
     uint64_t m_microphoneProcsCalled { 0 };
@@ -140,6 +143,9 @@
     uint64_t m_microphoneProcsCalledLastTime { 0 };
     Timer m_verifyCapturingTimer;
     static constexpr Seconds verifyCaptureInterval = 10_s;
+
+    Lock m_speakerSamplesProducerLock;
+    CoreAudioSpeakerSamplesProducer* m_speakerSamplesProducer WTF_GUARDED_BY_LOCK(m_speakerSamplesProducerLock) { nullptr };
 };
 
 CoreAudioSharedUnit& CoreAudioSharedUnit::unit()
@@ -268,6 +274,9 @@
 
 OSStatus CoreAudioSharedUnit::configureMicrophoneProc()
 {
+    if (!isProducingMicrophoneSamples())
+        return noErr;
+
     AURenderCallbackStruct callback = { microphoneCallback, this };
     auto err = PAL::AudioUnitSetProperty(m_ioUnit, kAudioOutputUnitProperty_SetInputCallback, kAudioUnitScope_Global, inputBus, &callback, sizeof(callback));
     if (err) {
@@ -306,16 +315,22 @@
         return err;
     }
 
-    AudioStreamBasicDescription speakerProcFormat = { };
-
+    AudioStreamBasicDescription speakerProcFormat;
     UInt32 size = sizeof(speakerProcFormat);
-    err = PAL::AudioUnitGetProperty(m_ioUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input, outputBus, &speakerProcFormat, &size);
-    if (err) {
-        RELEASE_LOG_ERROR(WebRTC, "CoreAudioSharedUnit::configureSpeakerProc(%p) unable to get input stream format, error %d (%.4s)", this, (int)err, (char*)&err);
-        return err;
+    {
+        Locker locker { m_speakerSamplesProducerLock };
+        if (m_speakerSamplesProducer)
+            speakerProcFormat = m_speakerSamplesProducer->format().streamDescription();
+        else {
+            err = PAL::AudioUnitGetProperty(m_ioUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input, outputBus, &speakerProcFormat, &size);
+            if (err) {
+                RELEASE_LOG_ERROR(WebRTC, "CoreAudioSharedUnit::configureSpeakerProc(%p) unable to get input stream format, error %d (%.4s)", this, (int)err, (char*)&err);
+                return err;
+            }
+            speakerProcFormat.mSampleRate = sampleRate();
+        }
     }
 
-    speakerProcFormat.mSampleRate = sampleRate();
     err = PAL::AudioUnitSetProperty(m_ioUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input, outputBus, &speakerProcFormat, size);
     if (err) {
         RELEASE_LOG_ERROR(WebRTC, "CoreAudioSharedUnit::configureSpeakerProc(%p) unable to get input stream format, error %d (%.4s)", this, (int)err, (char*)&err);
@@ -322,7 +337,6 @@
         return err;
     }
 
-    m_speakerSampleBuffer = AudioSampleBufferList::create(speakerProcFormat, preferredIOBufferSize() * 2);
     m_speakerProcFormat = speakerProcFormat;
 
     return err;
@@ -336,42 +350,21 @@
 }
 #endif
 
-OSStatus CoreAudioSharedUnit::provideSpeakerData(AudioUnitRenderActionFlags& /*ioActionFlags*/, const AudioTimeStamp& timeStamp, UInt32 /*inBusNumber*/, UInt32 inNumberFrames, AudioBufferList* ioData)
+OSStatus CoreAudioSharedUnit::provideSpeakerData(AudioUnitRenderActionFlags& flags, const AudioTimeStamp& timeStamp, UInt32 /*inBusNumber*/, UInt32 inNumberFrames, AudioBufferList& ioData)
 {
-    // Called when the audio unit needs data to play through the speakers.
-#if !LOG_DISABLED
-    ++m_speakerProcsCalled;
-#endif
-
-    if (m_speakerSampleBuffer->sampleCapacity() < inNumberFrames) {
-        if (m_activeSources.isEmpty())
-            return 0;
-        RELEASE_LOG_ERROR(WebRTC, "CoreAudioSharedUnit::provideSpeakerData: speaker sample buffer size (%d) too small for amount of sample data requested (%d)!", m_speakerSampleBuffer->sampleCapacity(), (int)inNumberFrames);
-        // FIXME: This fails the capture, we should thus either reconfigure the audio unit or notify all clients that capture is failing.
-        return kAudio_ParamError;
+    if (!m_speakerSamplesProducerLock.tryLock()) {
+        AudioSampleBufferList::zeroABL(ioData, static_cast<size_t>(inNumberFrames * m_speakerProcFormat.bytesPerFrame()));
+        flags = kAudioUnitRenderAction_OutputIsSilence;
+        return noErr;
     }
 
-    if (m_activeSources.isEmpty())
-        return 0;
-
-    double adjustedHostTime = m_DTSConversionRatio * timeStamp.mHostTime;
-    uint64_t sampleTime = timeStamp.mSampleTime;
-#if !LOG_DISABLED
-    checkTimestamps(timeStamp, sampleTime, adjustedHostTime);
-#endif
-    m_speakerSampleBuffer->setTimes(adjustedHostTime, sampleTime);
-
-    AudioBufferList& bufferList = m_speakerSampleBuffer->bufferList();
-    for (uint32_t i = 0; i < bufferList.mNumberBuffers; ++i)
-        bufferList.mBuffers[i] = ioData->mBuffers[i];
-
-    bool firstSource = true;
-    for (auto& source : m_activeSources) {
-        source->pullSamples(bufferList, inNumberFrames, adjustedHostTime, sampleTime, firstSource ? AudioSampleDataSource::Copy : AudioSampleDataSource::Mix);
-        firstSource = false;
+    Locker locker { AdoptLock, m_speakerSamplesProducerLock };
+    if (!m_speakerSamplesProducer) {
+        AudioSampleBufferList::zeroABL(ioData, static_cast<size_t>(inNumberFrames * m_speakerProcFormat.bytesPerFrame()));
+        flags = kAudioUnitRenderAction_OutputIsSilence;
+        return noErr;
     }
-
-    return noErr;
+    return m_speakerSamplesProducer->produceSpeakerSamples(inNumberFrames, ioData, timeStamp.mSampleTime, timeStamp.mHostTime, flags);
 }
 
 OSStatus CoreAudioSharedUnit::speakerCallback(void *inRefCon, AudioUnitRenderActionFlags* ioActionFlags, const AudioTimeStamp* inTimeStamp, UInt32 inBusNumber, UInt32 inNumberFrames, AudioBufferList* ioData)
@@ -379,7 +372,7 @@
     ASSERT(ioActionFlags);
     ASSERT(inTimeStamp);
     auto dataSource = static_cast<CoreAudioSharedUnit*>(inRefCon);
-    return dataSource->provideSpeakerData(*ioActionFlags, *inTimeStamp, inBusNumber, inNumberFrames, ioData);
+    return dataSource->provideSpeakerData(*ioActionFlags, *inTimeStamp, inBusNumber, inNumberFrames, *ioData);
 }
 
 OSStatus CoreAudioSharedUnit::processMicrophoneSamples(AudioUnitRenderActionFlags& ioActionFlags, const AudioTimeStamp& timeStamp, UInt32 inBusNumber, UInt32 inNumberFrames, AudioBufferList* /*ioData*/)
@@ -441,7 +434,6 @@
     }
 
     m_microphoneSampleBuffer = nullptr;
-    m_speakerSampleBuffer = nullptr;
 #if !LOG_DISABLED
     m_ioUnitName = emptyString();
 #endif
@@ -478,6 +470,8 @@
 
 OSStatus CoreAudioSharedUnit::startInternal()
 {
+    setIsProducingMicrophoneSamples(true);
+
     OSStatus err;
     if (!m_ioUnit) {
         err = setupAudioUnit();
@@ -491,8 +485,20 @@
 
     unduck();
 
+    {
+        Locker locker { m_speakerSamplesProducerLock };
+        if (m_speakerSamplesProducer)
+            m_speakerSamplesProducer->captureUnitIsStarting();
+    }
+
     err = PAL::AudioOutputUnitStart(m_ioUnit);
     if (err) {
+        {
+            Locker locker { m_speakerSamplesProducerLock };
+            if (m_speakerSamplesProducer)
+                m_speakerSamplesProducer->captureUnitHasStopped();
+        }
+
         RELEASE_LOG_ERROR(WebRTC, "CoreAudioSharedUnit::start(%p) AudioOutputUnitStart failed with error %d (%.4s)", this, (int)err, (char*)&err);
         cleanupAudioUnit();
         ASSERT(!m_ioUnit);
@@ -531,6 +537,11 @@
         RELEASE_LOG_ERROR(WebRTC, "CoreAudioSharedUnit::stop(%p) AudioOutputUnitStop failed with error %d (%.4s)", this, (int)err, (char*)&err);
         return;
     }
+    {
+        Locker locker { m_speakerSamplesProducerLock };
+        if (m_speakerSamplesProducer)
+            m_speakerSamplesProducer->captureUnitHasStopped();
+    }
 
     m_ioUnitStarted = false;
 }
@@ -573,6 +584,35 @@
     return err;
 }
 
+void CoreAudioSharedUnit::registerSpeakerSamplesProducer(CoreAudioSpeakerSamplesProducer& producer)
+{
+    setIsRenderingAudio(true);
+
+    CoreAudioSpeakerSamplesProducer* oldProducer;
+    {
+        Locker locker { m_speakerSamplesProducerLock };
+        oldProducer = m_speakerSamplesProducer;
+        m_speakerSamplesProducer = &producer;
+    }
+    if (oldProducer && oldProducer != &producer)
+        oldProducer->captureUnitHasStopped();
+
+    if (hasAudioUnit() && producer.format() != m_speakerProcFormat)
+        reconfigure();
+}
+
+void CoreAudioSharedUnit::unregisterSpeakerSamplesProducer(CoreAudioSpeakerSamplesProducer& producer)
+{
+    {
+        Locker locker { m_speakerSamplesProducerLock };
+        if (m_speakerSamplesProducer != &producer)
+            return;
+        m_speakerSamplesProducer = nullptr;
+    }
+
+    setIsRenderingAudio(false);
+}
+
 static CaptureSourceOrError initializeCoreAudioCaptureSource(Ref<CoreAudioCaptureSource>&& source, const MediaConstraints* constraints)
 {
     if (constraints) {
@@ -674,6 +714,21 @@
     CoreAudioSharedUnit::unit().devicesChanged(devices);
 }
 
+void CoreAudioCaptureSourceFactory::registerSpeakerSamplesProducer(CoreAudioSpeakerSamplesProducer& producer)
+{
+    CoreAudioSharedUnit::unit().registerSpeakerSamplesProducer(producer);
+}
+
+void CoreAudioCaptureSourceFactory::unregisterSpeakerSamplesProducer(CoreAudioSpeakerSamplesProducer& producer)
+{
+    CoreAudioSharedUnit::unit().unregisterSpeakerSamplesProducer(producer);
+}
+
+bool CoreAudioCaptureSourceFactory::isAudioCaptureUnitRunning()
+{
+    return CoreAudioSharedUnit::unit().isRunning();
+}
+
 CoreAudioCaptureSource::CoreAudioCaptureSource(String&& deviceID, String&& label, String&& hashSalt, uint32_t captureDeviceID, BaseAudioSharedUnit* overrideUnit)
     : RealtimeMediaSource(RealtimeMediaSource::Type::Audio, WTFMove(label), WTFMove(deviceID), WTFMove(hashSalt))
     , m_captureDeviceID(captureDeviceID)

Modified: trunk/Source/WebCore/platform/mediastream/mac/CoreAudioCaptureSource.h (286649 => 286650)


--- trunk/Source/WebCore/platform/mediastream/mac/CoreAudioCaptureSource.h	2021-12-08 10:08:35 UTC (rev 286649)
+++ trunk/Source/WebCore/platform/mediastream/mac/CoreAudioCaptureSource.h	2021-12-08 11:05:54 UTC (rev 286650)
@@ -102,9 +102,20 @@
     BaseAudioSharedUnit* m_overrideUnit { nullptr };
 };
 
+class CoreAudioSpeakerSamplesProducer {
+public:
+    virtual ~CoreAudioSpeakerSamplesProducer() = default;
+    // Main thread
+    virtual const CAAudioStreamDescription& format() = 0;
+    virtual void captureUnitIsStarting() = 0;
+    virtual void captureUnitHasStopped() = 0;
+    // Background thread.
+    virtual OSStatus produceSpeakerSamples(size_t sampleCount, AudioBufferList&, uint64_t sampleTime, double hostTime, AudioUnitRenderActionFlags&) = 0;
+};
+
 class CoreAudioCaptureSourceFactory : public AudioCaptureFactory {
 public:
-    static CoreAudioCaptureSourceFactory& singleton();
+    WEBCORE_EXPORT static CoreAudioCaptureSourceFactory& singleton();
 
     void beginInterruption();
     void endInterruption();
@@ -112,6 +123,10 @@
 
     void devicesChanged(const Vector<CaptureDevice>&);
 
+    WEBCORE_EXPORT void registerSpeakerSamplesProducer(CoreAudioSpeakerSamplesProducer&);
+    WEBCORE_EXPORT void unregisterSpeakerSamplesProducer(CoreAudioSpeakerSamplesProducer&);
+    WEBCORE_EXPORT bool isAudioCaptureUnitRunning();
+
 private:
     CaptureSourceOrError createAudioCaptureSource(const CaptureDevice&, String&&, const MediaConstraints*) override;
     CaptureDeviceManager& audioCaptureDeviceManager() final;

Modified: trunk/Source/WebKit/ChangeLog (286649 => 286650)


--- trunk/Source/WebKit/ChangeLog	2021-12-08 10:08:35 UTC (rev 286649)
+++ trunk/Source/WebKit/ChangeLog	2021-12-08 11:05:54 UTC (rev 286650)
@@ -1,3 +1,25 @@
+2021-12-08  Youenn Fablet  <[email protected]>
+
+        Let RemoteAudioMediaStreamTrackRendererInternalUnitManager::Unit switch to VPIO unit if VPIO is running
+        https://bugs.webkit.org/show_bug.cgi?id=233316
+
+        Reviewed by Eric Carlson.
+
+        We track which process was the last to start audio capture.
+        For that process, we register its RemoteAudioMediaStreamTrackRendererInternalUnitManager::Unit with the capture unit.
+        This allows RemoteAudioMediaStreamTrackRendererInternalUnitManager::Unit to switch between RemoteIO, when the
+        capture unit is not running, and the capture unit when it is.
+        This also allows removing the x5 gain workaround.
+
+        * GPUProcess/GPUConnectionToWebProcess.cpp:
+        * GPUProcess/GPUConnectionToWebProcess.h:
+        * GPUProcess/GPUProcess.cpp:
+        * GPUProcess/GPUProcess.h:
+        * GPUProcess/webrtc/RemoteAudioMediaStreamTrackRendererInternalUnitManager.cpp:
+        * GPUProcess/webrtc/RemoteAudioMediaStreamTrackRendererInternalUnitManager.h:
+        * UIProcess/Cocoa/UserMediaCaptureManagerProxy.cpp:
+        * UIProcess/Cocoa/UserMediaCaptureManagerProxy.h:
+
 2021-12-08  Myles C. Maxfield  <[email protected]>
 
         [MacCatalyst] Various build fixes

Modified: trunk/Source/WebKit/GPUProcess/GPUConnectionToWebProcess.cpp (286649 => 286650)


--- trunk/Source/WebKit/GPUProcess/GPUConnectionToWebProcess.cpp	2021-12-08 10:08:35 UTC (rev 286649)
+++ trunk/Source/WebKit/GPUProcess/GPUConnectionToWebProcess.cpp	2021-12-08 11:05:54 UTC (rev 286650)
@@ -199,7 +199,14 @@
         m_process.setTCCIdentity();
     }
 #endif
-
+#if PLATFORM(COCOA) && ENABLE(MEDIA_STREAM)
+    void startProducingData(RealtimeMediaSource::Type type) final
+    {
+        if (type != RealtimeMediaSource::Type::Audio)
+            return;
+        m_process.startCapturingAudio();
+    }
+#endif
 #if HAVE(IOSURFACE_SET_OWNERSHIP_IDENTITY)
     std::optional<task_id_token_t> webProcessIdentityToken() const final
     {
@@ -901,6 +908,20 @@
     m_captureOrigin = originData.securityOrigin();
 }
 
+#if PLATFORM(COCOA) && ENABLE(MEDIA_STREAM)
+void GPUConnectionToWebProcess::startCapturingAudio()
+{
+    gpuProcess().processIsStartingToCaptureAudio(*this);
+}
+
+void GPUConnectionToWebProcess::processIsStartingToCaptureAudio(GPUConnectionToWebProcess& process)
+{
+    m_isLastToCaptureAudio = this == &process;
+    if (m_audioMediaStreamTrackRendererInternalUnitManager)
+        m_audioMediaStreamTrackRendererInternalUnitManager->notifyLastToCaptureAudioChanged();
+}
+#endif
+
 #if !PLATFORM(COCOA)
 bool GPUConnectionToWebProcess::setCaptureAttributionString() const
 {

Modified: trunk/Source/WebKit/GPUProcess/GPUConnectionToWebProcess.h (286649 => 286650)


--- trunk/Source/WebKit/GPUProcess/GPUConnectionToWebProcess.h	2021-12-08 10:08:35 UTC (rev 286649)
+++ trunk/Source/WebKit/GPUProcess/GPUConnectionToWebProcess.h	2021-12-08 11:05:54 UTC (rev 286650)
@@ -138,6 +138,12 @@
     bool allowsDisplayCapture() const { return m_allowsDisplayCapture; }
 #endif
 
+#if PLATFORM(COCOA) && ENABLE(MEDIA_STREAM)
+    void startCapturingAudio();
+    void processIsStartingToCaptureAudio(GPUConnectionToWebProcess&);
+    bool isLastToCaptureAudio() const { return m_isLastToCaptureAudio; }
+#endif
+
 #if ENABLE(APP_PRIVACY_REPORT)
     void setTCCIdentity();
 #endif
@@ -292,9 +298,10 @@
 #if PLATFORM(COCOA) && ENABLE(MEDIA_STREAM)
     std::unique_ptr<UserMediaCaptureManagerProxy> m_userMediaCaptureManagerProxy;
     std::unique_ptr<RemoteAudioMediaStreamTrackRendererInternalUnitManager> m_audioMediaStreamTrackRendererInternalUnitManager;
+    bool m_isLastToCaptureAudio { false };
+
     Ref<RemoteSampleBufferDisplayLayerManager> m_sampleBufferDisplayLayerManager;
-#endif
-#if PLATFORM(COCOA) && ENABLE(MEDIA_STREAM)
+
     std::unique_ptr<RemoteMediaRecorderManager> m_remoteMediaRecorderManager;
 #endif
 #if ENABLE(MEDIA_STREAM)

Modified: trunk/Source/WebKit/GPUProcess/GPUProcess.cpp (286649 => 286650)


--- trunk/Source/WebKit/GPUProcess/GPUProcess.cpp	2021-12-08 10:08:35 UTC (rev 286649)
+++ trunk/Source/WebKit/GPUProcess/GPUProcess.cpp	2021-12-08 11:05:54 UTC (rev 286650)
@@ -507,6 +507,14 @@
     completionHandler(GPUConnectionToWebProcess::objectCountForTesting());
 }
 
+#if PLATFORM(COCOA) && ENABLE(MEDIA_STREAM)
+void GPUProcess::processIsStartingToCaptureAudio(GPUConnectionToWebProcess& process)
+{
+    for (auto& connection : m_webProcessConnections.values())
+        connection->processIsStartingToCaptureAudio(process);
+}
+#endif
+
 } // namespace WebKit
 
 #endif // ENABLE(GPU_PROCESS)

Modified: trunk/Source/WebKit/GPUProcess/GPUProcess.h (286649 => 286650)


--- trunk/Source/WebKit/GPUProcess/GPUProcess.h	2021-12-08 10:08:35 UTC (rev 286649)
+++ trunk/Source/WebKit/GPUProcess/GPUProcess.h	2021-12-08 11:05:54 UTC (rev 286650)
@@ -111,6 +111,10 @@
 
     void webProcessConnectionCountForTesting(CompletionHandler<void(uint64_t)>&&);
 
+#if PLATFORM(COCOA) && ENABLE(MEDIA_STREAM)
+    void processIsStartingToCaptureAudio(GPUConnectionToWebProcess&);
+#endif
+
 private:
     void lowMemoryHandler(Critical, Synchronous);
 

Modified: trunk/Source/WebKit/GPUProcess/webrtc/RemoteAudioMediaStreamTrackRendererInternalUnitManager.cpp (286649 => 286650)


--- trunk/Source/WebKit/GPUProcess/webrtc/RemoteAudioMediaStreamTrackRendererInternalUnitManager.cpp	2021-12-08 10:08:35 UTC (rev 286649)
+++ trunk/Source/WebKit/GPUProcess/webrtc/RemoteAudioMediaStreamTrackRendererInternalUnitManager.cpp	2021-12-08 11:05:54 UTC (rev 286650)
@@ -43,19 +43,18 @@
 #include "SharedRingBufferStorage.h"
 #include <WebCore/CAAudioStreamDescription.h>
 #include <WebCore/CARingBuffer.h>
+#include <WebCore/CoreAudioCaptureSource.h>
 #include <WebCore/WebAudioBufferList.h>
 #endif
 
-#if PLATFORM(IOS_FAMILY)
-#include <WebCore/AudioSessionIOS.h>
-#endif
-
 namespace WebKit {
 
-class RemoteAudioMediaStreamTrackRendererInternalUnitManager::Unit : public CanMakeWeakPtr<RemoteAudioMediaStreamTrackRendererInternalUnitManager::Unit> {
+class RemoteAudioMediaStreamTrackRendererInternalUnitManager::Unit
+    : public CanMakeWeakPtr<RemoteAudioMediaStreamTrackRendererInternalUnitManager::Unit>
+    , public WebCore::CoreAudioSpeakerSamplesProducer {
     WTF_MAKE_FAST_ALLOCATED;
 public:
-    Unit(AudioMediaStreamTrackRendererInternalUnitIdentifier, Ref<IPC::Connection>&&, CompletionHandler<void(const WebCore::CAAudioStreamDescription&, size_t)>&&);
+    Unit(AudioMediaStreamTrackRendererInternalUnitIdentifier, Ref<IPC::Connection>&&, bool shouldRegisterAsSpeakerSamplesProducer, CompletionHandler<void(const WebCore::CAAudioStreamDescription&, size_t)>&&);
     ~Unit();
 
     void start(const SharedMemory::Handle&, const WebCore::CAAudioStreamDescription&, uint64_t numberOfFrames, IPC::Semaphore&&);
@@ -64,12 +63,17 @@
     OSStatus render(size_t sampleCount, AudioBufferList&, uint64_t sampleTime, double hostTime, AudioUnitRenderActionFlags&);
     void notifyReset();
 
+    void setShouldRegisterAsSpeakerSamplesProducer(bool);
+
 private:
     void storageChanged(SharedMemory*, const WebCore::CAAudioStreamDescription&, size_t);
 
-#if PLATFORM(IOS_FAMILY)
-    void categoryDidChange(WebCore::AudioSession::CategoryType);
-#endif
+    // CoreAudioSpeakerSamplesProducer
+    const WebCore::CAAudioStreamDescription& format() final { return m_description; }
+    void captureUnitIsStarting() final;
+    void captureUnitHasStopped() final;
+    // Background thread.
+    OSStatus produceSpeakerSamples(size_t sampleCount, AudioBufferList&, uint64_t sampleTime, double hostTime, AudioUnitRenderActionFlags&) final;
 
     AudioMediaStreamTrackRendererInternalUnitIdentifier m_identifier;
     Ref<IPC::Connection> m_connection;
@@ -82,11 +86,8 @@
     std::unique_ptr<WebCore::CARingBuffer> m_ringBuffer;
 #endif
     bool m_isPlaying { false };
-#if PLATFORM(IOS_FAMILY)
-    float m_volume { 1.0 };
-    WebCore::AudioStreamDescription::PCMFormat m_format { WebCore::AudioStreamDescription::PCMFormat::Float32 };
-    WebCore::AudioSessionIOS::CategoryChangedObserver m_categoryChangeObserver;
-#endif
+    WebCore::CAAudioStreamDescription m_description;
+    bool m_shouldRegisterAsSpeakerSamplesProducer { false };
 };
 
 RemoteAudioMediaStreamTrackRendererInternalUnitManager::RemoteAudioMediaStreamTrackRendererInternalUnitManager(GPUConnectionToWebProcess& gpuConnectionToWebProcess)
@@ -101,7 +102,7 @@
 void RemoteAudioMediaStreamTrackRendererInternalUnitManager::createUnit(AudioMediaStreamTrackRendererInternalUnitIdentifier identifier, CompletionHandler<void(const WebCore::CAAudioStreamDescription&, size_t)>&& callback)
 {
     ASSERT(!m_units.contains(identifier));
-    m_units.add(identifier, makeUniqueRef<RemoteAudioMediaStreamTrackRendererInternalUnitManager::Unit>(identifier, m_gpuConnectionToWebProcess.connection(), WTFMove(callback)));
+    m_units.add(identifier, makeUniqueRef<RemoteAudioMediaStreamTrackRendererInternalUnitManager::Unit>(identifier, m_gpuConnectionToWebProcess.connection(), m_gpuConnectionToWebProcess.isLastToCaptureAudio(), WTFMove(callback)));
 }
 
 void RemoteAudioMediaStreamTrackRendererInternalUnitManager::deleteUnit(AudioMediaStreamTrackRendererInternalUnitIdentifier identifier)
@@ -131,6 +132,13 @@
         unit->setAudioOutputDevice(deviceId);
 }
 
+void RemoteAudioMediaStreamTrackRendererInternalUnitManager::notifyLastToCaptureAudioChanged()
+{
+    // FIXME: When supporting multiple units to different speakers, we should only select the unit matching the VPIO output device.
+    for (auto& unit : m_units.values())
+        unit->setShouldRegisterAsSpeakerSamplesProducer(m_gpuConnectionToWebProcess.isLastToCaptureAudio());
+}
+
 static WebCore::AudioMediaStreamTrackRendererInternalUnit::RenderCallback renderCallback(RemoteAudioMediaStreamTrackRendererInternalUnitManager::Unit& unit)
 {
     return [&unit](auto sampleCount, auto& list, auto sampleTime, auto hostTime, auto& flags) {
@@ -145,13 +153,11 @@
     };
 }
 
-RemoteAudioMediaStreamTrackRendererInternalUnitManager::Unit::Unit(AudioMediaStreamTrackRendererInternalUnitIdentifier identifier, Ref<IPC::Connection>&& connection, CompletionHandler<void(const WebCore::CAAudioStreamDescription&, size_t)>&& callback)
+RemoteAudioMediaStreamTrackRendererInternalUnitManager::Unit::Unit(AudioMediaStreamTrackRendererInternalUnitIdentifier identifier, Ref<IPC::Connection>&& connection, bool shouldRegisterAsSpeakerSamplesProducer, CompletionHandler<void(const WebCore::CAAudioStreamDescription&, size_t)>&& callback)
     : m_identifier(identifier)
     , m_connection(WTFMove(connection))
     , m_localUnit(WebCore::AudioMediaStreamTrackRendererInternalUnit::createLocalInternalUnit(renderCallback(*this), resetCallback(*this)))
-#if PLATFORM(IOS_FAMILY)
-    , m_categoryChangeObserver([this](auto&, auto category) { categoryDidChange(category); })
-#endif
+    , m_shouldRegisterAsSpeakerSamplesProducer(shouldRegisterAsSpeakerSamplesProducer)
 {
     m_localUnit->retrieveFormatDescription([weakThis = WeakPtr { *this }, this, callback = WTFMove(callback)](auto&& description) mutable {
         if (!weakThis || !description) {
@@ -163,9 +169,6 @@
         m_frameChunkSize = std::max(WebCore::AudioUtilities::renderQuantumSize, tenMsSampleSize);
         callback(*description, m_frameChunkSize);
     });
-#if PLATFORM(IOS_FAMILY)
-    WebCore::AudioSessionIOS::addAudioSessionCategoryChangedObserver(m_categoryChangeObserver);
-#endif
 }
 
 RemoteAudioMediaStreamTrackRendererInternalUnitManager::Unit::~Unit()
@@ -178,6 +181,19 @@
     m_connection->send(Messages::GPUProcessConnection::ResetAudioMediaStreamTrackRendererInternalUnit { m_identifier }, 0);
 }
 
+void RemoteAudioMediaStreamTrackRendererInternalUnitManager::Unit::setShouldRegisterAsSpeakerSamplesProducer(bool value)
+{
+    if (m_shouldRegisterAsSpeakerSamplesProducer == value)
+        return;
+
+    m_shouldRegisterAsSpeakerSamplesProducer = value;
+    if (!m_isPlaying)
+        return;
+
+    if (m_shouldRegisterAsSpeakerSamplesProducer)
+        WebCore::CoreAudioCaptureSourceFactory::singleton().registerSpeakerSamplesProducer(*this);
+}
+
 void RemoteAudioMediaStreamTrackRendererInternalUnitManager::Unit::start(const SharedMemory::Handle& handle, const WebCore::CAAudioStreamDescription& description, uint64_t numberOfFrames, IPC::Semaphore&& semaphore)
 {
     if (m_isPlaying)
@@ -188,9 +204,14 @@
     m_isPlaying = true;
     m_ringBuffer = WebCore::CARingBuffer::adoptStorage(makeUniqueRef<ReadOnlySharedRingBufferStorage>(handle), description, numberOfFrames).moveToUniquePtr();
     m_renderSemaphore = WTFMove(semaphore);
-#if PLATFORM(IOS_FAMILY)
-    m_format = description.format();
-#endif
+    m_description = description;
+
+    if (m_shouldRegisterAsSpeakerSamplesProducer) {
+        WebCore::CoreAudioCaptureSourceFactory::singleton().registerSpeakerSamplesProducer(*this);
+        if (WebCore::CoreAudioCaptureSourceFactory::singleton().isAudioCaptureUnitRunning())
+            return;
+    }
+
     m_localUnit->start();
 }
 
@@ -197,6 +218,7 @@
 void RemoteAudioMediaStreamTrackRendererInternalUnitManager::Unit::stop()
 {
     m_isPlaying = false;
+    WebCore::CoreAudioCaptureSourceFactory::singleton().unregisterSpeakerSamplesProducer(*this);
     m_localUnit->stop();
 }
 
@@ -205,7 +227,7 @@
     m_localUnit->setAudioOutputDevice(deviceId);
 }
 
-OSStatus RemoteAudioMediaStreamTrackRendererInternalUnitManager::Unit::render(size_t sampleCount, AudioBufferList& list, uint64_t, double, AudioUnitRenderActionFlags&)
+OSStatus RemoteAudioMediaStreamTrackRendererInternalUnitManager::Unit::render(size_t sampleCount, AudioBufferList& list, uint64_t, double, AudioUnitRenderActionFlags& flags)
 {
     ASSERT(!isMainRunLoop());
 
@@ -214,11 +236,9 @@
     if (m_ringBuffer->fetchIfHasEnoughData(&list, sampleCount, m_readOffset)) {
         m_readOffset += sampleCount;
         status = noErr;
-
-#if PLATFORM(IOS_FAMILY)
-        if (m_volume != 1)
-            WebCore::AudioSampleBufferList::applyGain(list, m_volume, m_format);
-#endif
+    } else {
+        WebCore::AudioSampleBufferList::zeroABL(list, static_cast<size_t>(sampleCount * m_description.bytesPerFrame()));
+        flags = kAudioUnitRenderAction_OutputIsSilence;
     }
 
     auto requestedSamplesCount = m_generateOffset;
@@ -229,14 +249,26 @@
     return status;
 }
 
-#if PLATFORM(IOS_FAMILY)
-void RemoteAudioMediaStreamTrackRendererInternalUnitManager::Unit::categoryDidChange(WebCore::AudioSession::CategoryType type)
+void RemoteAudioMediaStreamTrackRendererInternalUnitManager::Unit::captureUnitIsStarting()
 {
-    m_volume = type == WebCore::AudioSession::CategoryType::PlayAndRecord ? 5 : 1;
-    RELEASE_LOG(WebRTC, "RemoteAudioMediaStreamTrackRendererInternalUnitManager::Unit::categoryDidChange %d, setting volume to %f", (int)type, m_volume);
+    // The capture unit is starting and audio will be rendered through it rather than our local unit, so stop the local unit.
+    if (m_isPlaying)
+        m_localUnit->stop();
 }
-#endif
 
+void RemoteAudioMediaStreamTrackRendererInternalUnitManager::Unit::captureUnitHasStopped()
+{
+    // Capture unit has stopped and audio will no longer be rendered through it so start the local unit.
+    if (m_isPlaying)
+        m_localUnit->start();
+}
+
+// Background thread.
+OSStatus RemoteAudioMediaStreamTrackRendererInternalUnitManager::Unit::produceSpeakerSamples(size_t sampleCount, AudioBufferList& list, uint64_t sampleTime, double hostTime, AudioUnitRenderActionFlags& flags)
+{
+    return render(sampleCount, list, sampleTime, hostTime, flags);
+}
+
 } // namespace WebKit
 
 #endif // ENABLE(GPU_PROCESS) && ENABLE(MEDIA_STREAM)

Modified: trunk/Source/WebKit/GPUProcess/webrtc/RemoteAudioMediaStreamTrackRendererInternalUnitManager.h (286649 => 286650)


--- trunk/Source/WebKit/GPUProcess/webrtc/RemoteAudioMediaStreamTrackRendererInternalUnitManager.h	2021-12-08 10:08:35 UTC (rev 286649)
+++ trunk/Source/WebKit/GPUProcess/webrtc/RemoteAudioMediaStreamTrackRendererInternalUnitManager.h	2021-12-08 11:05:54 UTC (rev 286650)
@@ -60,6 +60,8 @@
     bool hasUnits() { return !m_units.isEmpty(); }
     class Unit;
 
+    void notifyLastToCaptureAudioChanged();
+
 private:
     // Messages
     void createUnit(AudioMediaStreamTrackRendererInternalUnitIdentifier, CompletionHandler<void(const WebCore::CAAudioStreamDescription&, size_t)>&& callback);

Modified: trunk/Source/WebKit/UIProcess/Cocoa/UserMediaCaptureManagerProxy.cpp (286649 => 286650)


--- trunk/Source/WebKit/UIProcess/Cocoa/UserMediaCaptureManagerProxy.cpp	2021-12-08 10:08:35 UTC (rev 286649)
+++ trunk/Source/WebKit/UIProcess/Cocoa/UserMediaCaptureManagerProxy.cpp	2021-12-08 11:05:54 UTC (rev 286650)
@@ -350,6 +350,7 @@
 #if ENABLE(APP_PRIVACY_REPORT)
     m_connectionProxy->setTCCIdentity();
 #endif
+    m_connectionProxy->startProducingData(proxy->source().type());
     proxy->start();
 }
 

Modified: trunk/Source/WebKit/UIProcess/Cocoa/UserMediaCaptureManagerProxy.h (286649 => 286650)


--- trunk/Source/WebKit/UIProcess/Cocoa/UserMediaCaptureManagerProxy.h	2021-12-08 10:08:35 UTC (rev 286649)
+++ trunk/Source/WebKit/UIProcess/Cocoa/UserMediaCaptureManagerProxy.h	2021-12-08 11:05:54 UTC (rev 286650)
@@ -65,6 +65,7 @@
 #if ENABLE(APP_PRIVACY_REPORT)
         virtual void setTCCIdentity() { }
 #endif
+        virtual void startProducingData(WebCore::RealtimeMediaSource::Type) { }
     };
     explicit UserMediaCaptureManagerProxy(UniqueRef<ConnectionProxy>&&);
     ~UserMediaCaptureManagerProxy();