On Mon, 14 Jan 2002, Paul Davis wrote:

> >I'm not at all sure why the callback mechanism is such an issue.  Windows uses
> >callbacks for its standard sound layer as well as with DirectSound.  I'm not
> >sure why the callback model is so difficult to incorporate into an application.
>
> imagine a standard tracker-style program. it has a UI of some kind
> that defines a pattern of sound it should play.  it can choose the
> size of the chunks it wants to generate.  with a "push" model (aka
> "read/write" model) for audio i/o, all it has to do is write the chunk
> to the audio interface and wait for the write(2) to return, then move
> on to compute the next chunk.
>
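For anyone following along: the push model Paul describes boils down to a
loop like the sketch below. This is my own illustration, not code from any
real tracker; compute_next_chunk and the chunk size are invented, and the
OSS ioctl format setup is omitted.

#include <fcntl.h>
#include <unistd.h>

#define CHUNK_FRAMES 1024

/* hypothetical tracker engine: fills buf with the next chunk of audio */
extern void compute_next_chunk(short *buf, int nframes);

int main(void)
{
    short buf[CHUNK_FRAMES * 2];          /* stereo, 16-bit samples */
    int fd = open("/dev/dsp", O_WRONLY);  /* OSS device; format setup omitted */

    for (;;) {
        compute_next_chunk(buf, CHUNK_FRAMES);
        /* write(2) blocks until the driver has room for the data;
           that blocking is the program's entire scheduling mechanism */
        write(fd, buf, sizeof(buf));
    }
}

The program picks CHUNK_FRAMES itself and never has to care what period
size the driver is using.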
> with a callback system, this isn't possible. instead, it now has to
> keep track of where it is in the pattern each time it gets called to
> process `nframes', and it has no control over the size of
> `nframes'. if the pattern length(s) aren't a nice round divisor or
> multiple of `nframes', this can get tricky. i know this because i
> ported rythmnlab to use JACK; rythmnlab is a polymetric pattern audio
> sequencer, and figuring out how to compute the next nframes at any
> point in time was not easy (for me, at least). perhaps a program
> written with a callback model from the start would be easy, but
> rythmnlab was not, and i got quite a headache from this stuff :)
>
Maybe that's an argument to start moving to the callback model sooner rather than later ;-)
heh
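
For what it's worth, the bookkeeping Paul describes might look something
like this as a JACK process callback. This is my own rough sketch, not
rythmnlab code; the tracker_state struct and its fields are invented.

#include <string.h>
#include <jack/jack.h>

/* state that must survive between callback invocations */
typedef struct {
    jack_port_t   *out;
    float         *pattern;      /* precomputed pattern audio, mono */
    jack_nframes_t pattern_len;  /* pattern length in frames, > 0 */
    jack_nframes_t pos;          /* current position within the pattern */
} tracker_state;

static int process(jack_nframes_t nframes, void *arg)
{
    tracker_state *ts = (tracker_state *) arg;
    float *buf = (float *) jack_port_get_buffer(ts->out, nframes);
    jack_nframes_t done = 0;

    /* nframes need not divide the pattern length, so copy in pieces,
       wrapping back to the start of the pattern whenever we hit the end */
    while (done < nframes) {
        jack_nframes_t left  = ts->pattern_len - ts->pos;
        jack_nframes_t chunk = left < nframes - done ? left : nframes - done;
        memcpy(buf + done, ts->pattern + ts->pos, chunk * sizeof(float));
        done   += chunk;
        ts->pos = (ts->pos + chunk) % ts->pattern_len;
    }
    return 0;
}

You register it with jack_set_process_callback(client, process, &state).
With several patterns of different lengths (the polymetric case) you carry
one position per pattern, which I can see getting hairy.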

> secondly, there are many, many existing applications that have been
> written on the assumption that they can call write() or read() (or the
> ALSA equivalents), and just go to sleep till the audio interface
> driver wakes them up again. in several programs (Csound would be a
> classic example), this design is absolutely fundamental to the
> operation of the program. Changing such programs is never impossible
> (Michael Gogins got Csound working as a VST plugin a few months ago),
> but is often quite hard, and developers may well find themselves
> saying "why am i doing this?"
>
> >suit high-end apps while not making it too difficult for low-end apps.  It
> >really is time that something was done.  What can I do to help?
>
> write code.
>
>       - we need work on supporting other data types (MIDI would be
>         very interesting, and quite hard)
>       - port an existing linux audio app to use JACK. i am particularly
>         interested in SpiralLoops, but it has the same design as
>         Csound (using blocking write(2) to schedule itself), which
>         makes it not very easy to do.
>
I'm not sure I understand why this would help position JACK as the
standard Linux sound server. It seems like we need to get some kind of
discussion going between the aRts and JACK developers, since aRts
already holds the position that JACK would like to share. There's no
doubt that JACK has something to offer in terms of lower latency. Since
you are the primary developer of JACK, what do you think?

The MIDI implementation would no doubt be over my head for quite some
time, but I wouldn't mind working on some documentation or an example
command-line JACK wave player (if such a thing doesn't already exist).
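
Something along these lines, maybe? An untested skeleton; it assumes
libsndfile for loading, handles mono files only, and leaves connecting
the output port to the hardware to some other tool:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <jack/jack.h>
#include <sndfile.h>

typedef struct {
    jack_port_t *out;
    float       *data;   /* whole file preloaded into memory */
    sf_count_t   len;    /* total frames in the file */
    sf_count_t   pos;    /* next frame to play */
} player;

static int process(jack_nframes_t nframes, void *arg)
{
    player *p = (player *) arg;
    float *buf = (float *) jack_port_get_buffer(p->out, nframes);
    jack_nframes_t i;

    /* play the file once through, then emit silence */
    for (i = 0; i < nframes; i++)
        buf[i] = p->pos < p->len ? p->data[p->pos++] : 0.0f;
    return 0;
}

int main(int argc, char **argv)
{
    SF_INFO info;
    SNDFILE *sf;
    player p;
    jack_client_t *client;

    if (argc < 2) {
        fprintf(stderr, "usage: %s mono-file.wav\n", argv[0]);
        return 1;
    }

    memset(&info, 0, sizeof(info));
    sf = sf_open(argv[1], SFM_READ, &info);
    if (!sf || info.channels != 1) {
        fprintf(stderr, "%s: not a readable mono file\n", argv[1]);
        return 1;
    }

    p.len  = info.frames;
    p.pos  = 0;
    p.data = malloc(p.len * sizeof(float));
    sf_read_float(sf, p.data, p.len);
    sf_close(sf);

    client = jack_client_new("wavplay");
    p.out  = jack_port_register(client, "out", JACK_DEFAULT_AUDIO_TYPE,
                                JackPortIsOutput, 0);
    jack_set_process_callback(client, process, &p);
    jack_activate(client);

    getchar();   /* crude: run until the user presses return */
    jack_client_close(client);
    free(p.data);
    return 0;
}

Build with something like "gcc -o wavplay wavplay.c -ljack -lsndfile".
Note there is no error checking on the JACK calls and no notification
when the file finishes; it just plays silence until you hit return.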

> write explanatory documentation.
>
> >Where can these comparisons be found btw?
>
> Karl MacMillan's paper presented at ICMC last fall. He can probably
> post the URL/reference. It compared the latency performance of audio
> APIs on several different desktop OSes.
>
> --p
>

Chris

