Hi Miroslav Lichvar,

Thanks for your review. Please find our comments inline.

Thanks,
Amar B S

-----Original Message-----
From: Miroslav Lichvar [mailto:mlich...@redhat.com] 
Sent: 10 May 2021 16:58
To: Amar Subramanyam <asubraman...@altiostar.com>
Cc: linuxptp-devel@lists.sourceforge.net; Ramana Reddy <rre...@altiostar.com>; 
Karthikkumar Valoor <kval...@altiostar.com>
Subject: Re: [Linuxptp-devel] [PATCH] Sync issues observed when ptp4l is run 
with jbod and client only mode (clientOnly=1 and boundary_clock_jbod=1)

On Fri, May 07, 2021 at 11:37:06AM +0000, Amar Subramanyam wrote:
>> Continuous BMCA triggering on port 2 causes a SYNCHRONIZATION FAULT on 
>> port 1. This causes port 1 to jump between SLAVE and UNCALIBRATED 
>> repeatedly.

> I'm sorry if I sound like a broken record, but what exactly is the cause of 
> the fault? Is it the clock check seeing timestamps from two clocks that are 
> not synchronized? Do the faults disappear if you set --sanity_freq_limit=0?

What happens here exactly is that the continuous triggering of the BMCA 
increases mono_interval (the interval between two successive calls of 
clock_check_sample()), and a SYNCHRONIZATION FAULT occurs with the default 
value of sanity_freq_limit (200000000). Setting --sanity_freq_limit=0 does 
prevent the fault, but it does not address the root cause: the BMCA is 
triggered continuously even though successive announce messages on the port 
are identical. We therefore consider --sanity_freq_limit=0 a workaround 
rather than a fix. The changes we introduced instead ensure that the BMCA is 
not triggered at all when successive announce messages are unchanged.
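For reference, the failure mode can be illustrated with a simplified version 
of the frequency sanity check (a sketch only, not the actual clockcheck code 
in linuxptp; the struct and function names here are illustrative):

struct check_state {
	long long freq_limit;   /* sanity_freq_limit in ppb, e.g. 200000000 */
	long long last_ts;      /* previous synchronized-clock timestamp (ns) */
	long long last_mono_ts; /* previous CLOCK_MONOTONIC timestamp (ns) */
};

/* Returns -1 on a synchronization fault, 0 otherwise. */
static int check_sample(struct check_state *s, long long ts, long long mono_ts)
{
	long long interval = ts - s->last_ts;
	long long mono_interval = mono_ts - s->last_mono_ts;
	/* Apparent frequency offset between the two clocks, in ppb. */
	double foffset = 1e9 * (interval - mono_interval) / (double)mono_interval;

	s->last_ts = ts;
	s->last_mono_ts = mono_ts;

	if (foffset > s->freq_limit || foffset < -s->freq_limit)
		return -1; /* SYNCHRONIZATION FAULT */
	return 0;
}

As described above, the BMCA churn stretches mono_interval between samples, 
and the computed offset ends up outside the default limit even though the 
clock itself is healthy.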

> That's the only fault I see in my test and in your original report.
> Your patch doesn't seem to prevent that fault in my test, so I'm confused.

With the changes in our patch, the BMCA is not triggered for a LISTENING port 
unless successive announce messages differ. As a result, mono_interval does 
not suddenly grow to a large value, and ptp4l functions properly even with 
the default value of sanity_freq_limit (200000000).

>> Noted. Please find the updated description and function name below, we will 
>> send out the modified patch after full review.
>>
>> + * Get port SLAVE state for client only mode.
>> + * @param c  The clock instance.
>> + * @return   Return 0 if any port is in SLAVE state, 1 otherwise.
>> + */
>> +int clock_get_port_slave_state(struct clock *c);

> There might be a better name for this function. Maybe something related to 
> its purpose rather than what it does.

Is the name "clock_get_port_client_state" fine? Or could you propose a 
better one?
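To make the intent concrete, here is a minimal sketch of the helper 
(assuming the usual linuxptp idioms of LIST_FOREACH over c->ports and 
port_state(); the final patch may differ):

/* Sketch only: returns 0 if any port is in the SLAVE or UNCALIBRATED
 * state, 1 otherwise. */
int clock_get_port_client_state(struct clock *c)
{
	struct port *p;

	LIST_FOREACH(p, &c->ports, list) {
		enum port_state ps = port_state(p);

		if (ps == PS_SLAVE || ps == PS_UNCALIBRATED)
			return 0;
	}
	return 1;
}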

>> Hence, we are clearing the announce receipt timer for port 2 (the 
>> LISTENING port) in the function bc_event() upon announce receipt timeout, 
>> only when boundary_clock_jbod=1 and clientOnly=1 are configured and at 
>> least one port (port 1 here) is in the SLAVE/UNCALIBRATED state.

> Ok, but if this optimization is useful in the jbod mode, it should be useful 
> even in the non-jbod mode, right? Most of the port code shouldn't care about 
> jbod.

Yes, as you suggest, this change is useful in both jbod and non-jbod mode to 
avoid unnecessary triggering of the BMCA. However, there is no visible 
impact in the non-jbod case because there is only one port, whereas the 
impact on the SLAVE port in the jbod, clientOnly case is clear, as explained 
earlier. Since we did not want to introduce new variables into the non-jbod 
path, we restricted the change to the jbod mode alone. A rough sketch of the 
guarded path follows.
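To illustrate (a hypothetical sketch, not the final patch; 
clock_get_port_client_state() is the helper sketched above, and the 
jbod/client_only flags stand in for however the options are read):

/* Hypothetical sketch: decide whether an announce receipt timeout on a
 * LISTENING port should skip the BMCA. */
static int announce_timeout_skips_bmca(struct clock *c, int jbod,
				       int client_only)
{
	return jbod && client_only && !clock_get_port_client_state(c);
}

/* In bc_event(), for EV_ANNOUNCE_RECEIPT_TIMEOUT_EXPIRES on a LISTENING
 * port, the idea is roughly:
 *
 *	if (announce_timeout_skips_bmca(c, jbod, client_only)) {
 *		clear the announce receipt timer;
 *		return;    and do not trigger the BMCA
 *	}
 */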

--
Miroslav Lichvar


