Hi,

> ---------------------------------------------------------------------------
> pcm_dev_format_s struc {rate, format, buffer_frag_size}

[minus buffer_frag_size, as indicated in your later e-mail]

> pcm_sample_format_s struc {rate, format, bytesize}
> ---------------------------------------------------------------------------
> pcm_dev_open(*pcm_dev_format_s) return handle
> pcm_dev_close(handle)
> ?( pcm_dev_getformat(handle) return *pcm_dev_format_s )
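
Just to make sure I'm reading these correctly, here's roughly how I'd put
those declarations down in C (the field types and the opaque handle type
are guesses on my part, not part of your proposal):

  struct pcm_dev_format_s {
      int rate;      /* samples per second, e.g. 44100 */
      int format;    /* sample encoding (S16_LE, U8, ...) */
  };

  struct pcm_sample_format_s {
      int rate;
      int format;
      int bytesize;  /* size of the sample data in bytes */
  };

  typedef struct pcm_dev *pcm_dev_handle_t;  /* opaque device handle */

  pcm_dev_handle_t pcm_dev_open(struct pcm_dev_format_s *fmt);
  void pcm_dev_close(pcm_dev_handle_t dev);
  struct pcm_dev_format_s *pcm_dev_getformat(pcm_dev_handle_t dev);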

If I understand your API correctly, the OPL emu layer (or whichever other
emulation layer might end up being written) will be using the callback
function, which will provide this information anyway.
The sound formats of other resources are fixed, so from this point of view
there's no need to be able to retrieve this information, except for
debugging purposes.
The only case where this could be relevant would be 'incompatible'
samples, i.e. samples whose sample rates have an LCM above what the
hardware offers (or whose LCM with the preconfigured device rate S is
greater than S). In these cases, it would be possible to transform them
and cache the results; a minimal sketch of the check I have in mind is
below (purely illustrative, the names are made up).
But I don't think that we'll encounter this problem in SCI unless we allow
external samples, and those could be restricted to fit our needs.
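
A sample would only need to be transformed and cached if its rate does not
divide the rate S the device is set to, i.e. if the LCM exceeds S:

  static long gcd(long a, long b)
  {
      while (b) { long t = a % b; a = b; b = t; }
      return a;
  }

  static long lcm(long a, long b)
  {
      return a / gcd(a, b) * b;
  }

  /* returns nonzero iff the sample can't be played at device_rate as-is */
  int sample_needs_transform(long sample_rate, long device_rate)
  {
      return lcm(sample_rate, device_rate) > device_rate;
  }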

> pcm_sample_play(handle, *databuffer, *pcm_sample_format_s) return shandle

The question of 'who takes responsibility for the buffer' should be
answered in the API docs here. My suggestion would be that either the
caller takes this responsibility, since we'll probably be using static
buffers for OPL emu output, or that this can be chosen freely.
In the former case, the caller must be able to determine when a sound has
finished playing, so that it may release the resource lock on the
resource containing the sample. While this could be done with the
pcm_sample_getpos() call returning -1 (or whatever), that would require
continual polling to detect this point. OTOH, this is also the 'safest'
way to do it, since it can be done in synchrony with the main
process/thread.
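
In code, the polling variant would look roughly like this (a sketch under
the assumption that getpos() returns -1 once the sample has finished; the
shandle_t type and resource_unlock() are made up by me):

  shandle_t sh = pcm_sample_play(dev, sample->data, &sample->format);

  /* ... once per iteration of the main loop ... */
  if (pcm_sample_getpos(dev, sh) == -1) {
      resource_unlock(sample->resource);  /* finished; buffer may go away now */
      sh = 0;
  }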

> pcm_sample_stop(handle, shandle)
> pcm_sample_getpos(handle, shandle) return int32


> pcm_callback_buffersetup(handle, *make_buffer_function(*pcm_dev_format))

The callback also needs a buffer to write to.
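
One way of handling that (just a suggestion of mine, not part of the
proposal): the pcm layer owns the buffer and hands it to the callback to
fill, e.g.

  /* fills 'buf' with up to 'frames' frames in the device format,
     returns the number of frames actually written */
  typedef int (*pcm_buffer_callback_t)(void *buf, int frames,
                                       struct pcm_dev_format_s *fmt);

  void pcm_callback_buffersetup(pcm_dev_handle_t dev, pcm_buffer_callback_t cb);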

> pcm_callback_stop(handle)

Implies that there is only one callback function; if we needed more than
that, we'd have to do multiplexing on a lower level. But I don't see how
we could need more than one of these, so this looks appropriate.

> pcm_volume...?

Should be handled at the mixer level, i.e. it should be present here:
some sound hardware does not provide a hardware mixer, so we must not
rely on the sound driver to provide this functionality.

(You could check for this at run-time, of course, using the hw mixer
where available and falling back to emulation where not, similar to what
the gfx layer does with mouse pointers and stippled lines).
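
For the fallback case, a software volume stage in this layer is cheap; a
sketch assuming signed 16 bit samples and a volume in the range 0..128:

  void pcm_apply_volume_s16(short *buf, int samples, int volume)
  {
      int i;

      for (i = 0; i < samples; i++)
          buf[i] = (short) (((int) buf[i] * volume) >> 7);  /* scale by volume/128 */
  }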


> Things noted: Only one callback_buffer thingy

Should be OK for us, as noted.

>               top pcm device layer has to do mixing/conversion
>               ALSA rocks!

No doubt about that.
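
The mixing itself shouldn't be too painful in the top layer either;
something along the lines of a saturating add into the output buffer
(again assuming signed 16 bit samples, name made up):

  void pcm_mix_s16(short *out, const short *in, int samples)
  {
      int i;

      for (i = 0; i < samples; i++) {
          int v = (int) out[i] + (int) in[i];

          if (v > 32767)
              v = 32767;
          else if (v < -32768)
              v = -32768;
          out[i] = (short) v;
      }
  }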


Well, I like it; as far as I can tell, this handles our requirements quite
adequately. There is one problem I see as far as driver development is
concerned, though, and that problem is the callback function.
On systems where we have direct hardware control, it'll be easy to invoke
this from an IRQ handler. Unfortunately, we don't support any of these 
systems at this time. Do sound APIs provide this mechanism in general?
It'd certainly be helpful... From what I gathered, ALSA supports
poll()ing on the pcm handles, which would allow us to do this from a
clone()d thread (it's A_L_SA, after all...) if ALSA does not do this
natively. But I don't know about OSS or the sound APIs provided by other
OSes...
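
For the record, here's roughly what I imagine the threaded variant to look
like. Everything below (the fd, the callback pointer, the buffer) is
hypothetical and would have to come from the pcm device layer; this is not
ALSA's actual API, and whether the loop runs in a clone()d thread or a
pthread doesn't matter for the sketch:

  #include <poll.h>
  #include <unistd.h>

  /* placeholders -- to be provided by the pcm device layer: */
  extern int pcm_fd;                  /* fd handed to us by the sound API */
  extern volatile int pcm_shutdown;   /* set when the device gets closed */
  extern unsigned char pcm_buffer[];  /* buffer set up for the callback */
  extern int pcm_buffer_bytes;
  extern int (*pcm_callback)(void *buf, int bytes);

  static void *pcm_feeder_thread(void *arg)
  {
      struct pollfd pfd;

      pfd.fd = pcm_fd;
      pfd.events = POLLOUT;

      while (!pcm_shutdown) {
          poll(&pfd, 1, -1);                           /* block until the device wants data */
          pcm_callback(pcm_buffer, pcm_buffer_bytes);  /* let the emu layer fill the buffer */
          write(pcm_fd, pcm_buffer, pcm_buffer_bytes); /* push it out to the device */
      }

      return NULL;
  }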

llap,
 Christoph

