On Thu, 11 Jul 2002, Martijn Sipkema wrote:
> Oh, you mean a mixer that doesn't only mix jack ports?
> Is that a common situation?
Yes. I really can't say how common this is, but the majority of the bigger audio apps do seem to have a relatively complex internal signal flow. It is common to access all audio data producers (files, soundcard devices, audio servers, network connections, etc) and consumers (basically the same list) through generic interfaces. And as all nodes are equal, a configuration setting is needed to determine how many samples the graph will process at a time. Hmm, you could of course pick a random input node at every iteration, check how many samples it has available, let the engine process that many samples and then restart the lottery, but that doesn't exactly sound very good.

The bad side is that this design, although elegant (abstracting is one weak spot all cs people share ;)), is not optimal for low-latency operation. As the numerous (and voluminous) discussions here on lad have shown, you actually need to select one node to be the master node. For instance, to implement proper JACK support you can't just implement a JACK source/sink object and connect it to your graph; you need to have the JACK callback drive the whole graph. But even if you do this, all the assumptions and restrictions defined by your internal interfaces remain.

It's good to remember that most apps will continue to support the read/write audio APIs. I don't think the Csound project will drop support for all other audio APIs once it's ported to JACK. ;) For instance, in theory it's perfectly possible to construct an ecasound setup which simultaneously accesses ALSA, JACK, OSS and aRts nodes. This of course doesn't make any sense, but it is possible and I'd like to keep it this way.

>> If using a large FIFO between a non-rt and rt threads, then the best
>> solution is to make the FIFO nonblocking using atomic operations. This is

> Actually using mutexes (or better spinlocks as Victor noted) is a portable
> way to build an atomic operation.
> POSIX does not offer atomic_add stuff
> (yet) AFAIK.

Unfortunately no, POSIX doesn't provide them. But with the current Linux kernel, atomic operations are the only robust solution. Spinlocks as used in RTLinux are not of much use to us, as we can't disable hw-interrupts at will in user-space. Mutual exclusion doesn't work either, as we can't guarantee that the non-rt thread will release the mutex in time.

>> With full-blown priority inheritance and mutual exclusion between the
>> threads, the rt-thread would then block for seconds in the above
>> example and who knows about the worst-case upper bound!

> I don't fully understand this.

Let's say that the non-rt thread causes a page fault before it manages to release the mutex -> no way to calculate the upper bound.

>> But as it is, theoretically speaking the worst-case time is 'infinity'. ;)

> That is a long time. :)
> I think even Linux can do better than that for worst case latency.

But I'm not sure if this is something we want to limit. Letting Linux use non-deterministic algorithms has many benefits. Currently my view is that by clearly dividing audio software into rt and non-rt parts, we can get the best of both worlds.

Let's say you have a JACK setup with clients (surprise, surprise) A, B and C. A and B are fully deterministic (DSP-processing only), while C is a file input client. If the disk i/o subsystem stalls for a long period of time (buffers are being flushed, etc), C will still finish its process() on time, but it must deliver a silent buffer of audio. The rest of the setup will continue operation without timeouts. This is IMHO what we should aim at.

-- 
http://www.eca.cx
Audio software for Linux!
