Guy Harris wrote:
> On Aug 6, 2018, at 2:19 PM, Guy Harris <[email protected]> wrote:
> 
>> On Aug 6, 2018, at 4:20 AM, Martin Burnicki <[email protected]> 
>> wrote:
>>
>>> As far as I have seen, proto_tree_add_double() seems to add a double
>>> value to the output tree,
>>
>> *All* doubles use the same format string.  The part of the format string for 
>> the number is

This doesn't seem to be quite correct. At least the peer clock
precision, which is an 8-bit integer in the packet, is displayed in a
fixed floating-point format with 6 fractional digits, e.g. (from
different packets):

Peer Clock Precision: 0.000977 sec
Peer Clock Precision: 0.000004 sec

This is very clear, IMO.
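For reference, a minimal standalone sketch of how I understand this
field is decoded (per RFC 5905 the peer clock precision is a signed
8-bit integer giving log2 seconds; the -10 and -18 below are example
values matching the output above, not taken from the dissector source):

  #include <math.h>
  #include <stdio.h>

  int main(void) {
      /* signed 8-bit exponents, decoded as 2^precision seconds */
      signed char precision[] = { -10, -18 };
      for (int i = 0; i < 2; i++)
          printf("Peer Clock Precision: %f sec\n", pow(2.0, precision[i]));
      /* "%f" defaults to 6 fractional digits, so this prints:
       *   Peer Clock Precision: 0.000977 sec
       *   Peer Clock Precision: 0.000004 sec */
      return 0;
  }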

>>      "%." G_STRINGIFY(DBL_DIG) "g"
>>
>> DBL_DIG is 15 with macOS/Xcode, so that should be %.15g, which should show 
>> more than 4 digits.
> 
> At least on macOS, a root delay the octets of which are 0x00 0x00 0x00 0x01 
> displays, in the dissected NTP packet, as
> 
>       Root Delay: 1.52587890625e-05 seconds
> 
> Does it not do that on your machine?

I've investigated a little bit more, and it turned out that I had been
using an old version of Wireshark, which displayed the root dispersion
in a fixed format as 0.0000 s.

The same seems to be the case for a customer with whom I've discussed a
root dispersion issue: he told me the root dispersion he saw in the
Wireshark capture was 0, even though it was not really 0, just very small.

After updating Wireshark to the current version, I can confirm that
both the root delay and the root dispersion are now shown in an
"automatic" floating-point format, e.g. (from different packets):

  Root Delay: 0 seconds
  Root Delay: 0.0009765625 seconds
  Root Delay: 0.0035552978515625 seconds

  Root Dispersion: 7.62939453125e-005 seconds
  Root Dispersion: 0.0001983642578125 seconds
  Root Dispersion: 0.944107055664063 seconds
  Root Dispersion: 0.0693359375 seconds
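As far as I understand it, these fields are 32-bit fixed-point numbers
with a 16-bit fraction (the NTP "short format"), so the dissector
divides the raw integer by 65536 and prints the resulting double with
the "%." G_STRINGIFY(DBL_DIG) "g" format Guy quoted, i.e. "%.15g"
(the "e-005" above is just the Windows C runtime printing three
exponent digits). A minimal sketch, with raw values assumed so that
they reproduce some of the output above:

  #include <float.h>
  #include <stdio.h>

  int main(void) {
      /* assumed raw 32-bit field values; divided by 2^16 they give
       * 0, 0.0009765625, 0.0035552978515625 and 7.62939453125e-05 */
      unsigned int raw[] = { 0, 64, 233, 5 };
      for (int i = 0; i < 4; i++)
          printf("%.*g seconds\n", DBL_DIG, raw[i] / 65536.0);
      return 0;
  }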

This avoids displaying a value as 0 when it is in fact very small but
not 0. On the other hand:

 - it's harder to compare values when quickly inspecting different packets

 - it may suggest a much higher resolution (10 or even more fractional
digits of a second) than the protocol actually provides. E.g., 1 LSB of
the root dispersion is ~15 microseconds, so it doesn't make much sense
to display femtoseconds that are just random artifacts of the scaling
(see the sketch after this list).
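A quick check of that ~15 microsecond figure, as a sketch:

  #include <stdio.h>

  int main(void) {
      /* 1 LSB of a 16-bit fraction is 2^-16 s = 1/65536 s */
      printf("%.10f s = %.4f us\n", 1.0 / 65536.0, 1e6 / 65536.0);
      /* prints: 0.0000152588 s = 15.2588 us */
      return 0;
  }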

So IMO it would make more sense to display such values in a fixed
floating-point format, similar to the Peer Clock Precision field, e.g.

0.000000 instead of 0
0.944107 instead of 0.944107055664063
0.000076 instead of 7.62939453125e-005
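For what it's worth, a sketch of one way a dissector could do that:
proto_tree_add_double_format_value() from epan/proto.h lets the caller
supply its own printf-style format instead of the default. The field
handle, tvb offset and length below are placeholders, not the actual
packet-ntp.c code:

  /* hypothetical snippet inside the NTP dissector; ntp_tree, tvb and
   * hf_ntp_rootdispersion are assumed to be set up as usual */
  double rootdispersion = tvb_get_ntohl(tvb, 8) / 65536.0;
  proto_tree_add_double_format_value(ntp_tree, hf_ntp_rootdispersion,
                                     tvb, 8, 4, rootdispersion,
                                     "%.6f seconds", rootdispersion);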

Just my thoughts, though.

Martin
-- 
Martin Burnicki

Senior Software Engineer

MEINBERG Funkuhren GmbH & Co. KG
Email: [email protected]
Phone: +49 5281 9309-414
Linkedin: https://www.linkedin.com/in/martinburnicki/

Lange Wand 9, 31812 Bad Pyrmont, Germany
Amtsgericht Hannover 17HRA 100322
Geschäftsführer/Managing Directors: Günter Meinberg, Werner Meinberg,
Andre Hartmann, Heiko Gerstung
Websites: https://www.meinberg.de  https://www.meinbergglobal.com
Training: https://www.meinberg.academy

___________________________________________________________________________
Sent via:    Wireshark-dev mailing list <[email protected]>
Archives:    https://www.wireshark.org/lists/wireshark-dev
Unsubscribe: https://www.wireshark.org/mailman/options/wireshark-dev
             mailto:[email protected]?subject=unsubscribe
