On Wed, 2013-09-25 at 02:49 +1000, Patrick Shirkey wrote:
> Hi,
>
> A quick update for those who are following this thread.
>
> We are tracing the audio latency when running a combination of JACK
> and PA.
>
> We are currently looking at the PA Stream Buffer as a potential
> bottleneck.
>
> During testing I have seen latency as low as 4ms round trip but also as
> high as 1300ms and the results are not stable on my hda_intel sound
> device.
I think you earlier said you are using an x86 desktop for testing. What I'd try to do is to prevent deep C-states. Indeed, if the package on which you run pulseaudio/jack/other related processes is able to enter a deep C-state, there is an exit latency associated with it.

To make a long story short, there is the /dev/cpu_dma_latency file, where you can write the latency you can tolerate (in microseconds). The kernel will translate this to the deepest C-state the processor may enter. You can write 0 there, which means the CPU won't ever enter any C-state and will busy-loop when idle. Bad for power consumption, but you can experiment to see whether this lessens the latency variation that you observe. If you write a larger number, the CPU will at least enter C1, which is already a lot better for PM.

You can verify which C-states you hit with the 'turbostat' tool or with powertop. The former comes, I think, from the kernel-tools package in Fedora. Play with the latency numbers and use these tools to check which C-states they correspond to.

Ah, and there is a trick. You should open /dev/cpu_dma_latency, write your latency (as ASCII or binary, both are OK), and _do not close it_. As soon as you close it, the kernel switches back to the default latency constraint. See the first snippet at the end of this mail.

Also, advanced drivers usually use the kernel PM QoS infrastructure to tell the system when they cannot tolerate high latency. When I do 'git grep PM_QOS_CPU_DMA_LATENCY' in the kernel, I do not see the HDA driver doing this. The second snippet at the end of this mail shows roughly what that would look like.

Anyway, this may not solve the issue, but I'd suggest trying it out to see if it at least partially helps. And I am very interested to hear whether it does or not, or maybe you have already tried this out.
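Here is a minimal sketch of the user-space side of the trick (my own untested example, not from any existing code; the 10 us target is an arbitrary value, pick what your use case tolerates):

/*
 * Hold a CPU wakeup latency constraint via /dev/cpu_dma_latency for
 * as long as this process lives. The kernel drops the constraint
 * the moment the file descriptor is closed.
 */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	int32_t target_us = 10;	/* tolerable exit latency, microseconds */
	int fd = open("/dev/cpu_dma_latency", O_WRONLY);

	if (fd < 0) {
		perror("open /dev/cpu_dma_latency");
		return 1;
	}
	if (write(fd, &target_us, sizeof(target_us)) != sizeof(target_us)) {
		perror("write /dev/cpu_dma_latency");
		close(fd);
		return 1;
	}

	pause();	/* keep the fd open; run the audio tests meanwhile */
	close(fd);
	return 0;
}

Run it as root in one terminal, leave it sitting there, and do your JACK/PA measurements in another.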
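And here is a toy kernel module showing the in-kernel PM QoS side, i.e. roughly what a driver such as HDA could do (names and the 10 us value are mine, and this is purely illustrative, since as noted above the HDA driver does not currently do this):

#include <linux/module.h>
#include <linux/pm_qos.h>

static struct pm_qos_request cpu_lat_req;

static int __init cpu_lat_demo_init(void)
{
	/* tell the PM core we cannot tolerate > 10 us of wakeup latency */
	pm_qos_add_request(&cpu_lat_req, PM_QOS_CPU_DMA_LATENCY, 10);
	return 0;
}

static void __exit cpu_lat_demo_exit(void)
{
	/* drop the constraint so deep C-states become available again */
	pm_qos_remove_request(&cpu_lat_req);
}

module_init(cpu_lat_demo_init);
module_exit(cpu_lat_demo_exit);
MODULE_LICENSE("GPL");

A real driver would typically add the request only around latency-sensitive streaming and relax it again with pm_qos_update_request() when idle.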
--
Best Regards,
Artem Bityutskiy