Hi,

> > > Then the midi output layer is not called directly by the midi device
> > > drivers. The midi output is responsible for parsing
> > > the blocks and do the deltatime waiting.


Right now, waiting is done by the sound server, because that is the layer
implementing the communication mechanisms. Under the draft architecture,
communication would become unidirectional, with bidirectional
communication as an optional feature to improve synchrony. We'd still
have to decide whether to do the timing in the sound server or in the
driver. Consider a simple example: the proposed SDL PCM callback driver
versus the UNIX sound server using ossraw. Here we cannot handle timing
in the MIDI output layer, because ossraw wouldn't know whether to return
immediately (SDL PCM callback) or to wait (possibly polling for events on
the input pipes, something I've always found quite convenient in the
current implementation). And I don't see why EVERY MIDI output layer
should contain code to distinguish between these uses.

Of course, I have to admit that the SDL PCM callback server is only a
proposal at this point...


llap,
 Christoph

