Hi,

I have a program which, at 2 periods per buffer with a 128-sample period size and a 48 kHz sampling rate, takes 9 ms to pass input through to its outputs. However, with the same hardware and settings, I notice that JACK+Ardour2 takes only 6 ms (this is through the program, not through a direct monitoring mode; effects can be applied to the input). The extra latency I see in my program grows with the buffer size: it is one buffer's worth of extra latency in our program.

I have a few questions about this for which I haven't been able to find sufficient material with Google. One difference I notice between JACK and our software is that JACK uses ALSA's mmap mode. I am unclear on the advantages of mmap's begin/commit access over the plain-vanilla readi/writei calls. If mmap offers a latency advantage, and not just a CPU advantage, how does its timing work compared to read/write? And where is my extra buffer's worth of latency coming from?
Cheers,
Louis

_______________________________________________
Linux-audio-dev mailing list
[email protected]
http://lists.linuxaudio.org/mailman/listinfo/linux-audio-dev
