Hi Tony,

Thank you for clarifying your view on this. Please find my notes in-line below under the GIM>> tag.

Regards,


Greg Mirsky

Sr. Standardization Expert
预研标准部/有线研究院/有线产品经营部 Standard Preresearch Dept./Wireline Product R&D 
Institute/Wireline Product Operation Division

E: gregory.mir...@ztetx.com 
www.zte.com.cn

Original Mail



Sender: TonyLi
To: gregory mirsky10211915;
CC: lsr;
Date: 2021/05/25 09:52
Subject: Re: [Lsr] LSR WG Adoption Poll for "Flexible Algorithms: Bandwidth, 
Delay, Metrics and Constraints" - draft-hegde-lsr-flex-algo-bw-con-02

Hi Greg,

Firstly, I am not suggesting it be changed to a nanosecond, but, perhaps, to 10 nanoseconds or 100 nanoseconds.

Ok.  The specific precision isn’t particularly relevant to me.  The real 
questions are whether microseconds are the right base or not, and whether we 
should shift to floating point for additional range or add more bits.



To Tony's question, the delay is usually calculated from the timestamps collected at measurement points (MPs). There are several timestamp formats, but most protocols I'm familiar with, e.g., NTP or PTP, use a 64-bit format in which 32 bits represent seconds and 32 bits represent the fraction of a second. As you can see, nanosecond-level resolution is well within the capability of protocols like OWAMP/TWAMP/STAMP. As for use cases that may benefit from a higher resolution of the packet delay metric, I can think of URLLC in the MEC environment. I was told that some applications have an RTT budget in the tens of microseconds range.
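
For illustration only, here is a minimal Python sketch (not from the draft; function and variable names are mine) of how a one-way delay could be derived from two such 64-bit timestamps with a 32-bit seconds field and a 32-bit fraction field:

# Minimal sketch: one-way delay from two 64-bit NTP-style timestamps
# (32-bit seconds + 32-bit fraction), as carried by OWAMP/TWAMP/STAMP.
# Names are illustrative, not taken from any RFC.

FRACTION_UNIT = 1.0 / (1 << 32)   # one fractional tick = 2^-32 s (~233 ps)

def ts_to_seconds(seconds: int, fraction: int) -> float:
    """Convert a 32-bit seconds / 32-bit fraction timestamp to seconds."""
    return seconds + fraction * FRACTION_UNIT

def one_way_delay_ns(tx_sec, tx_frac, rx_sec, rx_frac) -> float:
    """Delay between transmit (tx) and receive (rx) timestamps, in nanoseconds."""
    return (ts_to_seconds(rx_sec, rx_frac) - ts_to_seconds(tx_sec, tx_frac)) * 1e9

# A delay of ~12.5 microseconds is easily resolved by the 32-bit fraction:
print(one_way_delay_ns(1000, 0, 1000, 53687))   # ~12500 ns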

It’s very true that folks have carried around nanosecond timestamps for a long time now. No question there. My question is whether it is actually useful. While NTP has that precision in its timestamps, the actual precision of NTP’s synchronization algorithms isn’t quite that strong. In effect, many of those low order bits are wasted.

GIM>> What I see from deployments of active measurement protocols, e.g., TWAMP and STAMP, is a strong interest in using the PTP, i.e., IEEE 1588v2, timestamp format in the data plane. And the requirement (actually, there are different profiles) for the quality of clock synchronization for 5G is, as I understand it, achievable with PTP. I have no information on whether that is the case with NTP.


That’s not a big deal, but when we make the base more precise, we lose range.  
If we go with 32 bits of nanoseconds, we limit ourselves to a link delay of ~4 
seconds. Tolerable, but it will certainly disappoint Vint and his 
inter-planetary Internet. :-)

GIM>> Agree. I would propose to consider 100 nsec as the unit, which brings the maximum representable delay close to 7 minutes.
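
As a sanity check on these figures, a small sketch (illustrative only) of the range that a 32-bit unsigned delay field gives for the candidate units mentioned in this thread:

# Back-of-the-envelope range check for a 32-bit unsigned delay field
# with different candidate units (illustrative only).

units_ns = {"1 ns": 1, "10 ns": 10, "100 ns": 100, "1 us": 1_000}

for name, unit_ns in units_ns.items():
    max_delay_s = (2**32 - 1) * unit_ns / 1e9
    print(f"{name:>6} unit -> max delay {max_delay_s:,.1f} s ({max_delay_s / 60:,.2f} min)")

# 1 ns   -> ~4.29 s    (Tony's "~4 seconds")
# 100 ns -> ~429.5 s   (~7.16 min, the "close to 7 minutes" figure)
# 1 us   -> ~4,295 s   (~71.6 min, the current microsecond base)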


We could go with 64 bits of nanoseconds, but then we’ll probably only rarely 
use the high order bits, so that seems wasteful of bandwidth.

Or we can go to floating point. This will greatly increase the range, at the 
expense of having fewer significant bits in the mantissa.
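
To put a rough number on that trade-off, a small sketch assuming an IEEE 754 binary32 encoding of a delay expressed in nanoseconds (purely illustrative):

# binary32 keeps only 24 significant bits, so large nanosecond values are
# quantized, while a 32-bit integer field (up to ~4.29 s of nanoseconds)
# would still be exact.

import struct

def round_trip_binary32(delay_ns: int) -> int:
    """Encode a delay (ns) as an IEEE 754 binary32 value and decode it back."""
    return int(struct.unpack("!f", struct.pack("!f", float(delay_ns)))[0])

for delay_ns in (1_000, 123_456_789, 4_123_456_789):
    recovered = round_trip_binary32(delay_ns)
    print(f"{delay_ns:>13,} ns -> {recovered:>13,} ns (error {recovered - delay_ns:+} ns)")

# Small delays survive exactly; near 4e9 ns (~4 s) the binary32 quantization
# step is already 256 ns.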

Personally, I would prefer to stay with 32 bits, but I’m flexible after that.

GIM>> I think that we can stay with a 32-bit field and get better resolution at the same time.


Tony
_______________________________________________
Lsr mailing list
Lsr@ietf.org
https://www.ietf.org/mailman/listinfo/lsr
