Hi Miroslav,

please find my inline answers below.

On Wed, 26 Apr 2023 at 11:15, Miroslav Lichvar <
mlich...@redhat.com> wrote:

> On Wed, Apr 26, 2023 at 10:43:52AM +0200, Luigi 'Comio' Mantellini wrote:
> > Hi Miroslav,
> >
> > Sync is already sent using a constant interval as required by Standard
> even
> > if it is lower than the nominal frequency for the actual implementation
> > (stop and rearm).
>
> It's not constant. It's randomized by the scheduling of the ptp4l
> process. If you see two servers sending sync messages at the same
> time, you can expect their transmissions to slowly drift away from
> each other.
>

You are right, but I don't see what advantage that drift provides. With two
servers on the same CPU you still get no collision between TX Syncs, because
the timer is armed at a different time for each server (and for each master
port). Servers on different nodes will drift anyway, since there is no
shared clock in that case. Besides, in a typical scenario you have just a
couple of servers handling two domains, so a collision is improbable and
should not be a problem.
In addition, periodic timers make it easy to detect ticks missed because of
scheduling issues. This is very useful when debugging a node whose CPU is
not dedicated.


>
> > The sync is forged by master as a one-to-many message that does not
> > saturate the link, the randomization is required for the messages from
> > Clients to Master (Delay-Req) in order to avoid congestion on Master.
> > General messages are randomized also.
>
> The same thing needs to be considered for multiple servers in
> different domains on the same communication path.
>

Considering servers on the same CPU, using periodic timers does not imply
that TX Syncs collide, because each server arms its timers at a different
time (for each port).
I still do not understand why you bring up congestion/collision. The
messages that can overload the CPU or the network are already randomized,
and Sync does not need to be randomized at all.


> > Speaking about messages collision in TX path, this is not an issue here
> > because your HW timestamp will (should) set the correct TS value at the
> end
> > of TX queue. On the RX side, the TS is applied just before the RX queue.
>
> If all NICs and switches/routers had perfect HW timestamping, I might
> agree.
>
> > As reported by Richard, the conformance is not impacted by the proposed
> fix.
>
> You have still not explained what is the issue you are trying to fix.
>
> Is there some specification that requires the average sync interval to
> be within 0.1% of the logSyncInterval? I hope it's not someone
> complaining about a test report not having a round number.
>

During testing and qualification, the SW implementation was compared with
HW implementations on telecom-grade devices. A customer reported a warning
about the TX tolerance to me.
You are right that 0.1% is not an issue in itself, but I proposed my patch
to improve the precision and also to add a debug message when we lose ticks
in transmission (e.g. under CPU overload).

ciao


-- 
*Luigi 'Comio' Mantellini*
My Professional Profile <http://www.linkedin.com/in/comio>

*"UNIX is very simple, it just needs a genius to understand its
simplicity." [cit.]*
_______________________________________________
Linuxptp-devel mailing list
Linuxptp-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/linuxptp-devel