On Wed, Nov 14 2001 at 10:49:39am -0500, Andy Wingo wrote:

> [...] if data is read the callback is called, with a straight-up
> function call. so no jumps are used in that sense.
But the point is that data is read/written at every tick, which means every JACK client must run during every tick; that is, if a tick is 3 ms long and you have 10 clients running, you need 10 context switches every 3 ms. I believe this is exactly what JACK tries to avoid; if it weren't so, getting low latency would cost a *lot* of processing power.

From the LAAGA draft documentation ( http://www.eca.cx/laaga/spec/index.html ):

> Let's say that we have 5 processes producing audio, and one of them
> handles the audio hardware i/o. Now to avoid buffer underruns, during one
> hardware interrupt cycle, all 5 processes must have enough processor time
> to produce the next block of audio data. The more processes are involved,
> the higher the process switching overhead becomes.
> [...]
> One way to avoid the possible IPC troubles is to locate all audio
> producing code into the audio engine process (audio engine refers to the
> process, or more specifically the thread, responsible for audio hardware
> i/o). In other words audio producers/consumers (=clients) are loaded as
> plugins to the engine (=server). But this approach, too, has its problems.
>
> The biggest problem of this approach is the increased client side
> complexity. Client applications must be divided into realtime critical
> (audio producing/consuming) and non-realtime parts (user interfaces,
> network and file i/o, etc). The critical parts are loaded to the server
> process as plugins, while the non-critical part run in a separate lower
> priority process. Some kind of IPC is also needed between the realtime and
> non-realtime parts of the client. And to make things even more difficult,
> care must be taken that this communication never blocks the realtime
> critical process.

And from an ancient message by Paul on why alsa-lib wasn't enough:

> Establishing connections: alsa-lib offers one model for this, based on the
> shared memory IPC mechanisms. Such a solution cannot scale to setups with
> lots of clients running at low latencies because of the overhead of
> context switching and the associated memory/cache performance loss.

So, it seems to me things are a little more complicated than that...

See ya,
Nelson
