On Feb 23, 2022, at 1:08 PM, Carsten Bormann <[email protected]> wrote:
This discussion was more about lat/lon.
cti (label 7) is a byte string, so there is no such problem there.
Lat/lon needs double-precision float to be useful to beings the size of humans
on a planet of Earth's size. Half-precision has so little precision at that
scale that it is useless for location.
That is not what this is about.
Saying “You need 16 bits to express the unsigned integer weight of a street
vehicle in kg” does not mean that I can’t encode the weight of my bike in 8
bits.
What the current text in EAT says is equivalent to “you can’t ever use 8 bits
because some vehicles need 16”. That is not smart.
I mis-remembered “iat” as “cti”. “iat” is a time stamp. A uint64 can represent
everything needed for a time stamp: 1-second precision and +/- 500 million
years. Float adds no value.
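A quick sanity check on that range claim (a sketch; the 365.25-day year is my
assumption, the mail does not specify one):

```python
# Assumption: a Julian year of 365.25 days; "span" is the +/- 500 million
# years around an epoch that the mail mentions.
SECONDS_PER_YEAR = int(365.25 * 24 * 3600)   # 31_557_600
span = 2 * 500_000_000 * SECONDS_PER_YEAR    # ~3.2e16 seconds

print(span < 2**64)               # True: uint64 covers it with room to spare
print(2**64 // SECONDS_PER_YEAR)  # total range at 1 s: ~5.8e11 years
```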
Right, but that is OK: It is a restriction at the data model level. You are
asking the application not to feed floats into the generic encoder.
(Note that we strengthened the little wall between integer and floating point
for RFC 8949, after seeing that RFC 7049 was confusing people a lot.)
So EAT
- disallows half-precision in location, but allows doubles to relieve a decoder
from implementing half-precision
Bad. Half-precision (binary16) is not a different type at the data model
level; it is just an efficient way to represent certain numbers in your data
model.
Binary16 has a significand precision of 11 bits; binary32 has 24 bits.
So if you feed random binary32 lat/lon (~ meter precision) to your generic
encoder, roughly every 8192nd value will be encoded in half-precision (*).
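That ratio can be checked empirically with Python’s struct module, which
supports IEEE 754 binary16 via the 'e' format (a sketch; the uniform sampling
over [-180, 180] degrees is my assumption):

```python
import random
import struct

def fits_binary16(x: float) -> bool:
    """True if x survives a round trip through IEEE 754 binary16."""
    try:
        return struct.unpack('<e', struct.pack('<e', x))[0] == x
    except OverflowError:        # magnitude beyond binary16's range (> 65504)
        return False

def random_binary32() -> float:
    """A random longitude, rounded to binary32 precision."""
    x = random.uniform(-180.0, 180.0)
    return struct.unpack('<f', struct.pack('<f', x))[0]

random.seed(0)
n = 1_000_000
hits = sum(fits_binary16(random_binary32()) for _ in range(n))
print(hits)   # close to n / 8192 ~= 122: the 13 extra significand
              # bits of binary32 must all happen to be zero
```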
Since lat/lon values are probably approximately evenly distributed, that by
itself is not a reason to implement half-precision.
The fact that CBOR preferred encoding does employ half-precision is.
Deviating from that by disallowing binary16 is expensive: you can no longer
employ generic encoders.
Don’t do that.
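The preferred-serialization rule that makes binary16 appear can be sketched as:
try binary16, then binary32, then binary64, and keep the shortest form that
round-trips exactly (a sketch of RFC 8949 Section 4.2.1, ignoring NaN-payload
subtleties):

```python
import struct

def preferred_float(x: float) -> bytes:
    """Encode x as a CBOR float in preferred serialization:
    the shortest of binary16/32/64 that represents x exactly."""
    for fmt, initial in (('>e', b'\xf9'), ('>f', b'\xfa'), ('>d', b'\xfb')):
        try:
            packed = struct.pack(fmt, x)
        except OverflowError:    # too large for this width, try the next
            continue
        if struct.unpack(fmt, packed)[0] == x:
            return initial + packed
    return b'\xfb' + struct.pack('>d', x)  # unreachable for finite doubles

print(preferred_float(1.5).hex())       # f93e00             (binary16 suffices)
print(preferred_float(100000.0).hex())  # fa47c35000         (needs binary32)
print(preferred_float(0.1).hex())       # fb3fb999999999999a (full double)
```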
- disallows float for iat to relieve a decoder from implementing float,
assuming it is not implementing location (many EAT implementations will not
support location)
Good (**). That selection of integer-second timestamps is a part of your data
model.
You can do that, and no generic encoder will suddenly turn your integer iat
number into a float.
(Again, with RFC 7049, it possibly could have, but we fixed that.)
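That separation is visible on the wire: CBOR integers use major types 0/1,
while floats live under major type 7, so an integer iat cannot silently become
a float. A minimal sketch of the unsigned-integer side (shortest-form argument
encoding per RFC 8949):

```python
import struct

def encode_uint(n: int) -> bytes:
    """CBOR major type 0 (unsigned integer), shortest-form argument."""
    if n < 24:
        return bytes([n])
    for fmt, initial in (('>B', 0x18), ('>H', 0x19), ('>I', 0x1a), ('>Q', 0x1b)):
        try:
            return bytes([initial]) + struct.pack(fmt, n)
        except struct.error:     # does not fit in this width, try the next
            continue
    raise ValueError("does not fit in a uint64")

print(encode_uint(10).hex())             # 0a
print(encode_uint(1_600_000_000).hex())  # 1a5f5e1000 -- a plausible iat
# A double encodes under major type 7 and looks completely different:
print((b'\xfb' + struct.pack('>d', 10.0)).hex())  # fb4024000000000000
```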
Grüße, Carsten
(*) Note that a similar thing happens in JSON: numbers that happen to be whole
numbers will be encoded without fractional parts (10.0 becomes 10).
In certain decoder implementations, that makes them a different “type” than
10.0 or 1e1 or 0.1e2 etc., causing breakage.
But the JSON data model only has numbers, not integers as a separate,
incompatible type, so these decoders break applications that simply
expect numbers.
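Python’s json module is one such decoder with distinct host types, which makes
the footnote easy to demonstrate (a sketch; note that Python’s own encoder
keeps the “.0”, while e.g. JavaScript’s JSON.stringify drops it):

```python
import json

# Mathematically equal JSON numbers land in different host types:
print(type(json.loads("10")))    # <class 'int'>
print(type(json.loads("10.0")))  # <class 'float'>
print(type(json.loads("1e1")))   # <class 'float'>

# Equal as numbers, unequal as types:
print(json.loads("10") == json.loads("10.0"))              # True
print(type(json.loads("10")) is type(json.loads("10.0")))  # False
```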
(See ISO 6093:1985 for some fascinating history in this space :-)
(**) ((I’m not making any representation on whether applications need to know
with a precision of smaller than a second when your EAT was issued.
I really can’t imagine that, but that is proof by lack of imagination; I do
assess that the cost of enabling that is higher than the benefits.
But that is true until someone has a convincing counterexample…
Consolation is that these people can introduce a float iat = “fiat” :-)))