On 11/12/2012 02:54 AM, Pierre-Louis Bossart wrote:

Not sure what you mean by "sink" and "ring buffer". When mixing, data
goes from the sink-input / "client-server" buffer into the DMA buffer
directly.

Please look at protocol-native.c. I am not sure why there is this
division of latency in two; for low latency you can probably decrease
the client buffer somewhat.

         /* So, the user asked us to adjust the latency of the stream
          * buffer according to the what the sink can provide. The
          * tlength passed in shall be the overall latency. Roughly
          * half the latency will be spent on the hw buffer,

This refers to the DMA buffer.

the other
          * half of it in the async buffer queue we maintain for each
          * client.

This refers to the client-server buffer.


          * In between we'll have a safety space of size
          * 2*minreq.

This does not refer to any additional buffer. It's trying to account for the latencies involved in getting data from the client process to the hardware. In addition, this is what Arun is currently trying to correct/improve, I believe.


          * Why the 2*minreq? When the hw buffer is completely
          * empty and needs to be filled, then our buffer must have
          * enough data to fulfill this request immediately and thus
          * have at least the same tlength as the size of the hw
          * buffer. It additionally needs space for 2 times minreq
          * because if the buffer ran empty and a partial fillup
          * happens immediately on the next iteration, we need to be
          * able to fulfill it and also give the application minreq
          * time to fill it up again for the next request. That makes
          * 2 times minreq extra. */

         if (tlength_usec > minreq_usec*2)
             sink_usec = (tlength_usec - minreq_usec*2)/2;
         else
             sink_usec = 0;

         pa_log_debug("Adjust latency mode enabled, configuring sink latency to half of overall latency.");

and events up to 4ms apart.  Has anyone tried the changes we pushed
recently at the kernel level to properly handle the ring buffer pointer
and delay? I believe some of the underruns may be due to the ~1ms
inaccuracy that we had before these changes.  If your driver is already
giving you a 25% precision error, no wonder things are broken?

I wasn't aware of this, but a 1 ms precision error shouldn't be a complete deal-breaker, I believe? It might worsen the latency by a ms or two, but not much more than that.

Right now we have bigger issues, such as why nobody is responding to
messages such as this one [1] :-(

Quite frankly I did not understand the problem you are facing and what
these measurements show. Maybe you're on to something but it's hard to
provide feedback here.

If I interpret the values correctly, we have 10 ms of scheduling latency. I.e., PulseAudio requests the thread to wake up and execute (we have RT priority) at a specific point in time, but the kernel does other things for 10 ms before allowing PulseAudio's userspace code to execute. I'm not certain that this is the case: it could be that I'm interpreting the values wrong, that the tracer is broken (or just adds a lot of overhead), or something else. But I haven't got any LKML responses at all, so I'll see if I can investigate this further myself.


--
David Henningsson, Canonical Ltd.
https://launchpad.net/~diwic
_______________________________________________
pulseaudio-discuss mailing list
[email protected]
http://lists.freedesktop.org/mailman/listinfo/pulseaudio-discuss
