On 28.03.2016 16:16, Pierre-Louis Bossart wrote:
On 3/22/16 4:11 AM, Georg Chini wrote:
Hi,

when a sink is started, there is some delay before the first sample is
really played. This delay is a constant part of the sink latency that will
always be present, so the minimum sink latency cannot go below that start
delay. Would it be acceptable to adjust the latency range for the device
after each unsuspend to reflect that?

The USB devices I have access to, for example, have a startup delay in the
range of 10 ms, but advertise a latency range that starts at 0.5 ms, which
does not make a lot of sense in my opinion. The startup delay is not
constant, so the minimum possible latency would vary.

On the source side the startup delay is not relevant since it does not
delay the signal.
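For illustration, a minimal sketch of what such an adjustment could look
like after each unsuspend. All names here are placeholders, not the actual
PulseAudio API:

/* Sketch: clamp the sink's minimum latency to the measured startup
 * delay after each unsuspend. measure_startup_delay() and
 * sink_set_latency_range() are hypothetical stand-ins for whatever
 * the real sink implementation provides. */
#include <stdint.h>

typedef uint64_t usec_t;

struct sink {
    usec_t min_latency;   /* lower bound of the advertised range */
    usec_t max_latency;   /* upper bound of the advertised range */
};

/* Hypothetical: e.g. derived from the trigger time vs. the first
 * audible sample; around 10 ms for the USB devices mentioned above. */
extern usec_t measure_startup_delay(struct sink *s);

/* Hypothetical: publishes the new latency range to clients. */
extern void sink_set_latency_range(struct sink *s, usec_t min, usec_t max);

static void on_unsuspend(struct sink *s)
{
    usec_t startup = measure_startup_delay(s);

    /* The minimum latency can never go below the startup delay, and
     * the delay is not constant, hence re-measuring on every
     * unsuspend. */
    if (startup > s->min_latency)
        sink_set_latency_range(s, startup, s->max_latency);
}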

Sorry I missed this thread last week.
At the risk of being pedantic, maybe you should consider two different
concepts:
- cold latency: the time it takes for the audio device to render the first
  sample when it is first opened
- continuous latency: the time it takes to hear a sample after it is
  written to the ring buffer


Yes, but it looks like the ALSA USB driver assumes that a sample can be
heard immediately after it leaves the buffer, which is not true. So there
is some additional continuous latency that is not reported. This is
visible as a "startup delay": the time between the moment the first sample
is written to the buffer and the moment when the driver reports that the
first chunk of audio has been played.
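One way to observe that gap from user space is to time the distance
between writing the first chunk and the driver reporting that part of it
has been consumed. A rough sketch using standard alsa-lib calls
(busy-waiting and error handling simplified; assumes the PCM is open and
prepared and the first write meets the start threshold):

#include <alsa/asoundlib.h>
#include <time.h>

static double startup_delay_sec(snd_pcm_t *pcm,
                                const void *chunk, snd_pcm_uframes_t frames)
{
    struct timespec t0, t1;
    snd_pcm_status_t *status;
    snd_pcm_status_alloca(&status);

    clock_gettime(CLOCK_MONOTONIC, &t0);
    snd_pcm_writei(pcm, chunk, frames);   /* starts the stream */

    /* Poll until the reported delay drops below what we queued,
     * i.e. the driver claims frames have left the ring buffer. */
    for (;;) {
        snd_pcm_status(pcm, status);
        if ((snd_pcm_uframes_t) snd_pcm_status_get_delay(status) < frames)
            break;
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);

    return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
}

Note that this only captures the point at which the driver reports the
samples as consumed, not when they become audible, which is exactly the
unreported part described above.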

The cold latency is mostly what happens in the .prepare step at the driver level. It's very hard to estimate in software since you can't observe the analog output. It can also vary depending on platform states.

Well, I can observe the analog output. In my setup I have an oscilloscope
connected to the input and output of the loopback; that is how I detected
the difference between configured and real latency in the first place.

The ALSA driver does, however, tell you when it started pushing samples out after the .prepare step: the status.trigger_time value tells you what your starting point should be. All the smoothers should start from this value, not from the time you started pushing samples into the sink.
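For reference, fetching that value with alsa-lib looks roughly like this
(assumes the stream is running and timestamping has been enabled via the
sw_params):

#include <alsa/asoundlib.h>

/* Sketch: read the trigger timestamp to use as t0 for the smoother. */
static void get_trigger_time(snd_pcm_t *pcm, snd_htimestamp_t *t0)
{
    snd_pcm_status_t *status;
    snd_pcm_status_alloca(&status);

    snd_pcm_status(pcm, status);
    snd_pcm_status_get_trigger_htstamp(status, t0);
}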

The observable latency is the time it takes from pushing a sample into the
sink to the moment it reaches the analog output. This is the same for the
first sample as for all following samples, see above.
For the smoother itself you can use any arbitrarily chosen time after the
trigger_time as the starting point; you only have to remember the number
of frames that have already been played at that time. I experimented with
using the start time supplied by the driver and could not see a
significant advantage over the first approach.
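To make that concrete: what matters for the estimate is only the anchor
pair (time, frames played at that time), not the particular choice of t0.
A trivial sketch of such a linear position estimate:

#include <stdint.h>

struct anchor {
    double   t0;           /* seconds; any time >= trigger_time */
    uint64_t frames_at_t0; /* frames already played at t0 */
};

/* Estimated playback position at time `now`, given a fixed sample
 * rate. A real smoother would also have to track clock drift. */
static uint64_t estimated_frames(const struct anchor *a,
                                 double now, unsigned rate)
{
    return a->frames_at_t0 + (uint64_t)((now - a->t0) * rate);
}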

The latency specified by PulseAudio is only related to the continuous latency, i.e. the total buffering. There is no need to take the startup delay into account as long as you use the trigger_time as your t0. Some devices don't report trigger_time accurately, but in the case of USB there were patches to fix this last year; look for the subject
"ALSA: usb: update trigger timestamp on first non-zero URB submitted"
The trigger_time tells you when the first audio is submitted to the USB bus, but not when it really reaches the speakers. There is all that USB processing in between.
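In other words, the driver-reported position only covers the ring buffer,
and a fixed, unreported output delay would have to be added on top. As a
rough model (the constant is a placeholder, not a value from any driver):

#include <alsa/asoundlib.h>

/* Placeholder: e.g. ~10 ms on the USB devices mentioned earlier. */
#define UNREPORTED_OUTPUT_DELAY_SEC 0.010

static double observable_latency_sec(snd_pcm_sframes_t delay_frames,
                                     unsigned rate)
{
    /* Queued audio still in the ring buffer, plus the device-side
     * processing the driver does not report. */
    return (double) delay_frames / rate + UNREPORTED_OUTPUT_DELAY_SEC;
}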

