Another option is accessing the WebAudio context through the SDL_Audio 
wrapper (if the OpenAL wrapper is not an option). I had to do this on iOS 
Safari in order to 'unlock' audio on the first user input event.

Here's some copy-pasta from my code: it accesses the SDL wrapper to 
initialize audio, then uses the embedded WebAudio context to create 
and play an empty buffer (inside an EM_ASM block). I think once you have 
the WebAudio context you can do other things with it.

EM_BOOL SoundMgr::emscOnInputEvent(void* userData) {
    // on iOS, WebAudio can only be initialized from a touch event,
    // so we initialize soloud from within here, and tell the soundMgr
    // that it is ok now to play audio 
    //
    // https://paulbakaus.com/tutorials/html5/web-audio-on-ios/
    SoundMgr* self = (SoundMgr*) userData;
    if (!self->audioValid) {
        EM_ASM(
            SDL.openAudioContext();
            if (SDL.webAudioAvailable()) {
                var buffer = SDL.audioContext.createBuffer(1, 1, 22050);
                var source = SDL.audioContext.createBufferSource();
                source.buffer = buffer;
                source.connect(SDL.audioContext.destination);
                if (typeof source.start === "function") {
                    source.start(0);
                }
                else if (typeof source.noteOn === "function") {
                    // older iOS WebKit only has the legacy noteOn() API
                    source.noteOn(0);
                }
            }
        );
        self->soloud.init();

        // load all the delay load items that have piled up
        for (const auto& item : self->delayLoadItems) {
            self->initSource(item.sourceId, item.stream);
        }
        self->delayLoadItems.Clear();
        self->audioValid = true;
    }
    return false;
}
#endif
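For context, registering such a handler looks roughly like this (untested 
sketch, not my exact code: the html5.h touch callbacks take an event type 
and event struct, so you need a small trampoline around a static method 
like emscOnInputEvent; the SoundMgr names just follow the snippet above):

```cpp
#include <emscripten/html5.h>

// trampoline with the signature emscripten_set_touchstart_callback()
// expects; it just forwards to the unlock handler shown above
static EM_BOOL touchTrampoline(int eventType,
                               const EmscriptenTouchEvent* /*event*/,
                               void* userData) {
    return SoundMgr::emscOnInputEvent(userData);
}

void SoundMgr::setupAudioUnlock() {
    // listen on the whole document so any first tap unlocks audio
    // ("#document" is one of the special html5.h target strings)
    emscripten_set_touchstart_callback("#document", this, true,
                                       touchTrampoline);
}
```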

On Wednesday, August 23, 2017 at 01:32:07 UTC+8, Brion Vibber wrote:
>
> Web Audio is the JavaScript-side API that's being used; OpenAL serves as a 
> C-side API in front of it.
>
> If you can get the Web Audio AudioContext object, you should be able to 
> capture that via context.createMediaStreamDestination():
>
> https://developer.mozilla.org/en-US/docs/Web/API/AudioContext/createMediaStreamDestination
>
> https://developer.mozilla.org/en-US/docs/Web/API/MediaStreamAudioDestinationNode
>
> The node's "stream" property will be a MediaStream that you can send into 
> RTC (at which point my knowledge of this system gets a lot fuzzier :D)
>
> -- brion
>
>
> On Tue, Aug 22, 2017 at 10:21 AM, Kai Kuehne <[email protected]> wrote:
>
>> Hi,
>> I'm using the MediaCapture-API to capture the video rendered
>> to a Canvas by emscripten to send it to another browser via
>>  a RTCPeerConnection.
>>
>> How do I access the audio that is being played by emscripten
>> to also send it across the connection?
>>
>> I found a ticket [1] that mentions that one can export the AL object
>> and access the audio that way. Is this still relevant?
>>
>> To be honest, since I'm still in the early stages of development,
>> I don't know whether the audio is currently being handled by
>> OpenAL or WebAudio.
>>
>> Thanks!
>>
>> [1] https://github.com/kripken/emscripten/issues/3599
>>
>> -- 
>> You received this message because you are subscribed to the Google Groups 
>> "emscripten-discuss" group.
>> To unsubscribe from this group and stop receiving emails from it, send an 
>> email to [email protected] <javascript:>.
>> For more options, visit https://groups.google.com/d/optout.
>>
>
>
