Hi Jason,

Thanks for the quick tutorial :-)
I've just checked in what will be the interface for getting audio data from an ImageStream. Basically we have two new pure virtual base classes: osg::AudioStream (which handles the reading of the audio data) and osg::AudioSink(Interface), which will be subclassed to integrate the audio library that renders the audio - this is where OpenAL/SDL etc. would come in. One attaches the AudioSink to the AudioStream to wire up the reading to the playing.

The AudioStream objects will be provided by the ffmpeg plugin, with all the audio streams available in the movie being attached as a list of AudioStreams to the osg::ImageStream that ffmpeg reads. osg::AudioStream could also be used elsewhere.

I still have some work to do on the internals of the ffmpeg plugin to get it to provide the concrete AudioStream classes and to attach the audio streams to the final image stream; this should be complete early next week.

Cheers,
Robert.

On Fri, Feb 27, 2009 at 8:13 PM, Jason Daly <jd...@ist.ucf.edu> wrote:
> Robert Osfield wrote:
>
> Hi Jason,
>
> On Thu, Feb 26, 2009 at 9:10 PM, Jason Daly <jd...@ist.ucf.edu> wrote:
>
> OpenAL is an obvious possibility for this. The OpenAL-Soft implementation
> at http://kcat.strangesoft.net/openal.html supports almost all of the
> platforms that OSG supports, and there are also hardware implementations of
> OpenAL for some sound cards (mostly Creative Labs). There is also a
> dedicated OSX implementation.
>
> You wouldn't necessarily need to go so far as to integrate osgAudio yet.
> Simple streaming from ffmpeg could probably be implemented with just a few
> lines of OpenAL code.
>
> Do you have any links to a tutorial that illustrates what these few
> lines of code might be?
> No, but off the top of my head:
>
> // Opening the device looks like this:
>
> ALCdevice * device = alcOpenDevice(NULL);
> ALCcontext * context = alcCreateContext(device, NULL);
>
> alcMakeContextCurrent(context);
> alcProcessContext(context);
>
> // Set the distance model to NONE, so there is no distance
> // attenuation
> alDistanceModel(AL_NONE);
>
> // Initialize the listener's state (orientation is the "at"
> // vector followed by the "up" vector)
> ALfloat orientation[6] = { 0.0, 0.0, -1.0, 0.0, 1.0, 0.0 };
> alListener3f(AL_POSITION, 0.0, 0.0, 0.0);
> alListener3f(AL_VELOCITY, 0.0, 0.0, 0.0);
> alListenerf(AL_GAIN, 1.0);
> alListenerfv(AL_ORIENTATION, orientation);
>
> ...
>
> // Setting up playback with a double-buffered streaming mechanism looks like
> this:
>
> ALuint buffers[2];
> ALuint source;
>
> alGenBuffers(2, buffers);
> alGenSources(1, &source);
>
> alBufferData(buffers[0], AL_FORMAT_MONO16, audioData, dataLength, 48000);
> alBufferData(buffers[1], AL_FORMAT_MONO16, audioData, dataLength, 48000);
> alSourceQueueBuffers(source, 2, buffers);
> alSourcePlay(source);
>
> ...
>
> // This is how you keep the buffers full
>
> ALint processed;
> alGetSourcei(source, AL_BUFFERS_PROCESSED, &processed);
> if (processed > 0)
> {
>     // Reclaim the finished buffer, refill it, and queue it again
>     ALuint freed;
>     alSourceUnqueueBuffers(source, 1, &freed);
>     alBufferData(freed, AL_FORMAT_MONO16, audioData, dataLength, 48000);
>     alSourceQueueBuffers(source, 1, &freed);
> }
>
> OK, maybe it's more than a few lines, but it's pretty straightforward. If I
> had more time, I'd contribute it myself. I'm sorry that I can't spare the
> time right now.
>
> --"J"
>
> _______________________________________________
> osg-users mailing list
> osg-users@lists.openscenegraph.org
> http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org