> >> For instance if you have a mixer element in the signal graph, it is
> >> just easier if all the inputs deliver the same amount of data at
> >> every iteration.
> >
> > Hmm, why? I can see that it is a requirement that at every iteration
> > the same amount of data is available at the input(s) as is requested
> > on the output(s). I don't see what makes a mixer that much easier to
> > implement if the amount of data to process is the same for every
> > iteration.
>
> If all JACK inputs have x samples of audio and a non-JACK input y
> samples, 'x != y', and you need to mix to a JACK output with x samples
> of space, you have a problem. A redesign is needed to make sure that
> this never happens.
Oh, you mean a mixer that doesn't only mix JACK ports? Is that a common
situation?

> If using a large FIFO between a non-rt and an rt thread, then the best
> solution is to make the FIFO nonblocking using atomic operations. This
> is an essential technique when making robust user-space audio apps.
> Real-life implementations can be found in ardour (disk butler
> subsystem), ecasound (classes AUDIO_IO_BUFFERED_PROXY and
> AUDIO_IO_PROXY_SERVER) and EVO (formerly known as linuxsampler).

Actually, using mutexes (or better, spinlocks, as Victor noted) is a
portable way to build an atomic operation. POSIX does not offer
atomic_add and the like (yet), AFAIK.

> >> other subsystems block without deterministic worst-case bounds. No
> >> amount of priority (given by priority inheritance) will save your
> >> butt if the disk head is physically in the wrong place when you
> >> need it. On a
> >
> > When the disk is not able to supply the samples in time, then there
> > is a problem :) Using a fast disk and buffering will normally be
> > sufficient.
>
> Ah, but that is a different issue. The question is about response
> time, not bandwidth.

When using a large enough buffer, the question is again only bandwidth.

> For instance ecasound's current disk i/o subsystem (run in a non-rt
> thread) sometimes stalls for multiple seconds (!) on my machine (two
> IDE disks on the same bus), but still the audio processing keeps on
> going without xruns. The disk i/o system just has to buffer huge
> amounts of data to cover even the longest delays. Of course if you are
> really running out of disk i/o capacity, then fancy locking mechanisms
> won't save you.

That's right.

> With full-blown priority inheritance and mutual exclusion between the
> threads, the rt-thread would then block for seconds in the above
> example, and who knows about the worst-case upper bound!

I don't fully understand this.
> >> The correct solution is to partition your audio code into real-time
> >> capable and non-real-time parts and make sure that the
> >> non-real-time part is never ever able to block the real-time part.
> >> In essence this is very close
> >
> > As I see it, it isn't a problem when the non-real-time part blocks
> > the real-time part, as long as there is a worst-case bounded block
> > time for it.
>
> But as it is, theoretically speaking the worst-case time is
> 'infinity'. ;)

That is a long time. :) I think even Linux can do better than that for
worst-case latency.

[...]

> I admit, this is a real problem. If sampling_rate/interrupt_period is
> fractional, the only way for a JACK driver to keep up is to set JACK's
> buffer size to ceil(srate/iperiod) and then alternate between
> process(nframes) and process(nframes-1).

It is also (almost) impossible to know what 'nframes' will be. So
determining an upper bound and just using what is available is the best
solution for this kind of hardware.

> Ok, I guess here's the first real case against const-nframes.

I thought I had already mentioned this several times :)

> On the other hand, at least with ALSA you'd be in trouble anyway, as
> ALSA will wake your driver only when period_count samples are
> available. If you set period_count to floor(srate/iperiod) you will be
> woken up on every interrupt, but you will slowly fall behind and
> eventually issue two process() calls per iteration (as you described).

Perhaps this could be changed/added in ALSA?

[...]

> So like Paul said, do we need to support these soundcards...? For
> JACK-style operation both of the above scenarios are really, really
> bad.

I don't have hardware that behaves like this :) But still, the Yamahas
are common hardware. Is the Korg card the 1212? I think it would still
be nice to support them.

> >> Not a problem as there's no 2^x limitation.
> >
> > Isn't there? For FFT?
> The trick is that with the majority of available soundcards the user
> is able to set period_count to 2^x samples. JACK clients using FFT are
> free to raise an error if a non-2^x buffer size is active. This is a
> pretty good situation from both the developer's and the user's point
> of view.

I would like it if the application just used a larger latency for the
FFT but still worked. For the general-case FFT (if I am right) the
latency is at least the FFT size plus the hardware buffer size (and I
think there is something to be said for an even larger latency, to be
easier on the CPU and not have to calculate the whole FFT in one
period).

> Basically the same approach is now used with regard to the nframes
> issue. If you try to use ecasound with a JACK driver (well, if there
> were such a driver) with non-const-nframes, ecasound would just raise
> an error and exit. In a way this makes sense, as until I rewrite
> ecasound (which will take months of work if I decide to do it) to
> properly handle non-const-nframes, the user is better off using some
> other app that is optimized for the driver. In a JACK setup, an
> application that almost works efficiently enough is not of much use.

Is adding an intermediate buffer for drivers with non-const nframes much
work?

--martijn
