>hadn't assumed that the latency between application and hardware would
>be the bottleneck, but rather that the latency between applications
>would be.
>
>I guess I should have been more specific: I was thinking that the
>greatest benefits might be in zero-copy I/O for IPC, to support a
>chain of filters/effects a la those in commercial programs like
>"Reason" or similar.
>
>Of course, that's irrelevant if all such viable systems use explicit
>shared memory to pass data through different effects. (I personally
>think that message passing or pipes are easier abstractions than
>shared memory, but I realize that that's an unpopular view.)
Well, yes, JACK (http://jackit.sf.net/) does this with shm. If message
passing were zero copy, JACK could use that too, but it's hard to see
any actual benefit, since the underlying mechanism is the same: there's
a chunk of memory somewhere, and multiple ways to reference it.
However, it's a bit irrelevant to JACK, because the implementation
details are 100% hidden from applications. If pipes were zero copy, the
same thing would apply.

I think that the choice of abstraction comes down to the metaphor
that's at the root of the IPC model. There are designs in which a
message-passing system would make more sense than shm, because the
internal design is based on the metaphor of passing buffers of data
around. By contrast, JACK's metaphor is a series of "ports" where data
can be read from or written to, and here the "chunk of memory to
read/write on" is a better fit. (Some sketches at the end of this mail
try to make these points concrete.)

>I guess that the kind of application I had in mind was something like
>GNU Octal, but development on that seems to have stagnated.

JACK is an attempt to provide the kind of functionality that Octal
(amongst other things) does, but at the inter-process, not
inter-thread-of-some-kind, level.

>>scheduling latency is really the only issue i've ever had
>>with the kernel.
>
>I just read one of the KURT papers
>(http://www.ittc.ku.edu/kurt/papers/KURT-vienna-paper.ps). I agree;
>dynamically introducing a different scheduling policy when
>applications need it (or, as a first cut, ask for it) seems like the
>Right Thing to do.

Be sure to read Luca's report on firm timers too (the URL was posted
here recently). It looks really good, and is arguably a better solution
than KURT, at least for x86 systems.

>One interesting (but unfortunately likely irrelevant to audio) thing
>that some dynamic optimization research has been aiming for lately is
>a sort of bidding/guaranteeing mechanism. For example, an application
[ ... ]

Yes, I suspect it is likely irrelevant. The problem with resource
allocation mechanisms in the realm of scheduling is that they go to the
absolute heart of the OS design. It's not really like filesystem
design, which, if handled properly, is a black box from the kernel's
point of view (e.g. Linux's VFS). Scheduling latency and resolution are
things you either decide to make flexible, and then pay a (small) price
for in the common case, or lock in at compile time, presumably having
chosen the ideal for the intended use. Linus has so far been firmly in
favor of the latter, though there are some signs he may be softening on
this. There was some interesting work done on pluggable schedulers, and
I myself did an implementation of scheduler groups with their own
schedulers a couple of years ago. I lost it in a disk crash and never
had the interest to pick it up again.
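To make the shm point concrete, here is a minimal sketch of the kind of
zero-copy handoff involved. This is generic POSIX shm, not JACK's
actual implementation; the segment name "/audio_buf" and the buffer
size are invented for illustration.

/* Two cooperating processes map the same segment, so "passing"
 * audio between them copies nothing.  Generic POSIX sketch; the
 * shm name and buffer size are invented for illustration. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define BUF_FRAMES 1024

int main(void)
{
    /* producer and consumer both do this; O_CREAT makes it safe
     * for either side to arrive first */
    int fd = shm_open("/audio_buf", O_CREAT | O_RDWR, 0600);
    if (fd < 0) { perror("shm_open"); return 1; }
    if (ftruncate(fd, BUF_FRAMES * sizeof(float)) < 0) {
        perror("ftruncate"); return 1;
    }

    float *buf = mmap(NULL, BUF_FRAMES * sizeof(float),
                      PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (buf == MAP_FAILED) { perror("mmap"); return 1; }
    close(fd);                /* the mapping outlives the fd */

    /* a writer fills the buffer in place; a reader in another
     * process sees the same bytes, with no copy anywhere */
    memset(buf, 0, BUF_FRAMES * sizeof(float));

    munmap(buf, BUF_FRAMES * sizeof(float));
    return 0;
}

(Compile with -lrt on Linux.) Whether you call this "shared memory",
"zero-copy message passing" or "zero-copy pipes" is exactly the
multiple-ways-to-reference-it point above.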
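And here is what the port metaphor looks like from the client side: a
minimal pass-through client. This is sketched from memory rather than
copied from JACK's example clients, so treat jack/jack.h as the
authority on the exact signatures.

/* Minimal pass-through JACK client.  The point: the client never
 * sees the shm, it just gets a buffer to read or write on each
 * process() cycle.  Sketched from memory; see jack/jack.h. */
#include <jack/jack.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static jack_port_t *in_port, *out_port;

static int process(jack_nframes_t nframes, void *arg)
{
    /* these buffers may well be the same chunk of shm used by the
     * clients up- and downstream; we neither know nor care */
    jack_default_audio_sample_t *in  = jack_port_get_buffer(in_port, nframes);
    jack_default_audio_sample_t *out = jack_port_get_buffer(out_port, nframes);
    (void) arg;
    memcpy(out, in, nframes * sizeof(jack_default_audio_sample_t));
    return 0;
}

int main(void)
{
    jack_client_t *client = jack_client_new("passthru");
    if (!client) { fprintf(stderr, "can't contact jackd\n"); return 1; }

    jack_set_process_callback(client, process, NULL);
    in_port  = jack_port_register(client, "in", JACK_DEFAULT_AUDIO_TYPE,
                                  JackPortIsInput, 0);
    out_port = jack_port_register(client, "out", JACK_DEFAULT_AUDIO_TYPE,
                                  JackPortIsOutput, 0);

    if (jack_activate(client)) { fprintf(stderr, "can't activate\n"); return 1; }
    for (;;) sleep(1);        /* process() runs in JACK's own thread */
    return 0;
}

jackd has to be running, and the ports still have to be connected to
something before audio flows, but note that nothing in the client tells
you whether the buffers travel by shm, messages or pipes.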
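Finally, on the "ask for it" first cut in the KURT quote: the existing
POSIX interface already lets an application request a realtime
scheduling policy, which is roughly what JACK's realtime threads rely
on today. A sketch (the priority value is an arbitrary choice, and you
need the privileges to do this):

/* An application asking for low scheduling latency through the
 * existing POSIX knobs.  Priority 10 is arbitrary; this needs
 * root (or equivalent) to succeed. */
#include <sched.h>
#include <stdio.h>
#include <sys/mman.h>

int request_rt(void)
{
    struct sched_param p;
    p.sched_priority = 10;             /* 1..99 under SCHED_FIFO */
    if (sched_setscheduler(0, SCHED_FIFO, &p) < 0) {
        perror("sched_setscheduler");  /* stay with SCHED_OTHER */
        return -1;
    }
    /* keep page faults from adding latency on top of the scheduler */
    mlockall(MCL_CURRENT | MCL_FUTURE);
    return 0;
}

What KURT and the firm-timer work add, roughly speaking, is what
SCHED_FIFO alone can't give you: timing resolution finer than the
kernel's tick, and bounded latency even when the kernel itself is busy.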
--p