On Tue, Mar 03, 2026 at 02:38:11PM +0100, Kurt Kanzenbach wrote:
> > It would be great, if you shared the numbers. Did Miroslav already test 
> > this?
> 
> Great question. I did test with ptp4l and synchronization looks fine,
> below 10ns back to back as expected. I did not test with ntpperf,
> because I was never able to reproduce the NTP regression to the same
> extent as Miroslav reported. Therefore, Miroslav is on Cc in case he
> wants to test it. Let's see.

I ran the same test with the I350 as before and there still seems to be a
regression, but interestingly it's quite different from the previous versions
of the patch. It behaves as if there were a load-sensitive on/off switch.

Without the patch:

               |          responses            |        response time (ns)
rate   clients |  lost invalid   basic  xleave |    min    mean     max stddev
150000   15000   0.00%   0.00%   0.00% 100.00%    +4188  +36475 +193328  16179
157500   15750   0.02%   0.00%   0.02%  99.96%    +6373  +42969 +683894  22682
165375   16384   0.03%   0.00%   0.00%  99.97%    +7911  +43960 +692471  24454
173643   16384   0.06%   0.00%   0.00%  99.94%    +8323  +45627 +707240  28452
182325   16384   0.06%   0.00%   0.00%  99.94%    +8404  +47292 +722524  26936
191441   16384   0.00%   0.00%   0.00% 100.00%    +8930  +51738 +223727  14272
201013   16384   0.05%   0.00%   0.00%  99.95%    +9634  +53696 +776445  23783
211063   16384   0.00%   0.00%   0.00% 100.00%   +14393  +54558 +329546  20473
221616   16384   2.59%   0.00%   0.05%  97.36%   +23924 +321205 +518192  21838
232696   16384   7.00%   0.00%   0.10%  92.90%   +33396 +337709 +575661  21017
244330   16384  10.82%   0.00%   0.15%  89.03%   +34188 +340248 +556237  20880

With the patch:

               |          responses            |        response time (ns)
rate   clients |  lost invalid   basic  xleave |    min    mean     max stddev
150000   15000   5.11%   0.00%   0.00%  94.88%    +4426 +460642 +640884  83746
157500   15750  11.54%   0.00%   0.26%  88.20%   +14434 +543656 +738355  30349
165375   16384  15.61%   0.00%   0.31%  84.08%   +35822 +515304 +833859  25596
173643   16384  19.58%   0.00%   0.37%  80.05%   +20762 +568962 +900100  28118
182325   16384  23.46%   0.00%   0.42%  76.13%   +41829 +547974 +804170  27890
191441   16384  27.23%   0.00%   0.46%  72.31%   +15182 +557920 +798212  28868
201013   16384  30.51%   0.00%   0.49%  69.00%   +15980 +560764 +805576  29979
211063   16384   0.06%   0.00%   0.00%  99.94%   +12668  +80487 +410555  62182
221616   16384   2.94%   0.00%   0.05%  97.00%   +21587 +342769 +517566  23359
232696   16384   6.94%   0.00%   0.10%  92.96%   +16581 +336068 +484574  18453
244330   16384  11.45%   0.00%   0.14%  88.41%   +23608 +345023 +564130  19177

At 211063 requests per second and higher the performance looks the
same. But at the lower rates there is a clear drop. The higher mean
response time (the difference between the server's TX and RX timestamps)
indicates that more of the provided TX timestamps are hardware
timestamps, and the chrony server timestamp statistics confirm that.

So, my interpretation is that, as with the earlier version of the
patch, it's trading performance for timestamp quality at the lower
rates, but unlike the earlier version it's not losing performance at
the higher rates. That seems acceptable to me. Admins of busy servers
might need to decide whether to keep HW timestamping enabled. In
theory, chrony could have an option to do that automatically.

Thanks,

-- 
Miroslav Lichvar
