On 2018-03-18 16:47, Mike Samuel wrote:
Interop with systems that use 64-bit ints is not a 0.001% issue.

Certainly not, but using "Number" for dealing with such data would never be
considered by, for example, the IETF.

This discussion (at least from my point of view) is about creating stuff that
fits into standards.

Anders


On Sun, Mar 18, 2018, 11:40 AM Anders Rundgren <[email protected]> wrote:

    On 2018-03-18 15:47, Michał Wadas wrote:
     > JSON supports arbitrary-precision numbers that can't be properly
     > represented as 64-bit floats. This includes numbers like 1e9999
     > or 1/1e9999.

    RFC 7159:
         Since software that implements
         IEEE 754-2008 binary64 (double precision) numbers [IEEE754] is
         generally available and widely used, good interoperability can be
         achieved by implementations that expect no more precision or range
         than these provide, in the sense that implementations will
         approximate JSON numbers within the expected precision.
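
    For illustration, a minimal sketch of what binary64 approximation means
    in practice (plain ECMAScript; nothing beyond JSON.parse/JSON.stringify
    assumed):

        JSON.parse("1e9999");   // Infinity: beyond binary64 range
        JSON.parse("1e-9999");  // 0: underflows to zero
        JSON.parse("0.10000000000000000555"); // 0.1: rounded to nearest double
        JSON.stringify(JSON.parse("1e9999")); // "null": Infinity is not a JSON number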

    If interoperability is not an issue you are free to do whatever you feel
    is useful. Catering to a 0.001% customer base with standards is something
    I gladly leave to others.

    The de facto standard, featured in any number of applications, is putting
    unusual/binary/whatever data in text strings.
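
    A minimal sketch of that convention (field name hypothetical):

        // Carry a 64-bit integer as a JSON string to avoid binary64 loss:
        const msg = { id: "9223372036854775807" };
        JSON.stringify(msg);       // '{"id":"9223372036854775807"}'
        // Consumers needing integer semantics convert explicitly:
        const id = BigInt(msg.id); // 9223372036854775807n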

    Anders

     >
     >
     > On Sun, 18 Mar 2018, 15:30 Anders Rundgren, <[email protected]> wrote:
     >
     >     On 2018-03-18 15:08, Richard Gibson wrote:
     >>     On Sunday, March 18, 2018, Anders Rundgren <[email protected]> wrote:
     >>
     >>         On 2018-03-16 20:24, Richard Gibson wrote:
     >>>         Though ECMAScript JSON.stringify may suffice for certain
     >>> JavaScript-centric use cases or otherwise restricted subsets thereof
     >>> as addressed by JOSE, it is not suitable for producing
     >>> canonical/hashable/etc. JSON, which requires a fully general solution
     >>> such as [1]. Both its number serialization [2] and string
     >>> serialization [3] specify aspects that harm compatibility (the former
     >>> having arbitrary branches dependent upon the value of numbers, the
     >>> latter being capable of producing invalid UTF-8 octet sequences that
     >>> represent unpaired surrogate code points, which is unacceptable for
     >>> exchange outside of a closed ecosystem [4]). JSON is a general
     >>> /language-agnostic/ interchange format, and ECMAScript JSON.stringify
     >>> is *not* a JSON canonicalization solution.
     >>>
     >>>         [1]: http://gibson042.github.io/canonicaljson-spec/
     >>>         [2]: http://ecma-international.org/ecma-262/7.0/#sec-tostring-applied-to-the-number-type
     >>>         [3]: http://ecma-international.org/ecma-262/7.0/#sec-quotejsonstring
     >>>         [4]: https://tools.ietf.org/html/rfc8259#section-8.1
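     >>>
     >>>         A minimal sketch of the unpaired-surrogate hazard (engines at
     >>> the time of writing emit the lone surrogate verbatim; TextEncoder assumed):
     >>>
     >>>             const json = JSON.stringify("\uDEAD"); // lone surrogate in output
     >>>             new TextEncoder().encode(json); // U+DEAD is not encodable in
     >>>                                             // UTF-8; it becomes U+FFFD and
     >>>                                             // the original data is lost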
     >>
     >>         Richard, I may be wrong but AFAICT, our respective
     >> canonicalization schemes are in fact principally IDENTICAL.
     >>
     >>
     >>     In that they have the same goal, yes. In that they both achieve
     >> that goal, no. I'm not married to choices like exponential notation and
     >> uppercase escapes, but a JSON canonicalization scheme MUST cover all of
     >> JSON.
     >
     >     Here it gets interesting... What in JSON cannot be expressed
     > through JS and JSON.stringify()?
     >
     >>         That the number serialization provided by JSON.stringify() is
     >> unacceptable is not generally taken as a fact. I also think it looks a
     >> bit weird, but that's just a matter of aesthetics. Compatibility is an
     >> entirely different issue.
     >>
     >>
     >>     I concede this point. The modified algorithm is sufficient, but
     >> note that a canonicalization scheme will remain static even if
     >> ECMAScript changes.
     >
     >     Agreed.
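     >
     >     For reference, the value-dependent branches in question, a minimal
     > sketch (plain JSON.stringify, nothing else assumed):
     >
     >         JSON.stringify(1e20); // "100000000000000000000" (positional)
     >         JSON.stringify(1e21); // "1e+21" (switches to exponential)
     >         JSON.stringify(5e-7); // "5e-7" (small magnitudes switch too)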
     >
     >>
     >>         Sorting on Unicode code points is of course "technically 100%
     >> right" but, strictly speaking, not necessary.
     >>
     >>
     >>     Certain scenarios call for different systems to _independently_
     >> generate equivalent data structures, and it is a necessary property of
     >> canonical serialization that it yields identical results for equivalent
     >> data structures. JSON does not specify significance of object member
     >> ordering, so member ordering does not distinguish otherwise equivalent
     >> objects, and canonicalization therefore MUST specify member ordering
     >> that is deterministic with respect to all valid data.
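     >>
     >>     A minimal sketch of such an ordering (helper name hypothetical;
     >> Array.prototype.sort compares strings by UTF-16 code units, the very
     >> point discussed below):
     >>
     >>         function canonicalize(value) {
     >>           if (Array.isArray(value))
     >>             return "[" + value.map(canonicalize).join(",") + "]";
     >>           if (value && typeof value === "object")
     >>             return "{" + Object.keys(value).sort().map(
     >>               k => JSON.stringify(k) + ":" + canonicalize(value[k])
     >>             ).join(",") + "}";
     >>           return JSON.stringify(value); // primitives: number/string/boolean/null
     >>         }
     >>
     >>         canonicalize({ b: 1, a: [2, true] }); // '{"a":[2,true],"b":1}'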
     >
     >     Violently agree, but I do not understand (I guess I'm just dumb...)
     > why (for example) sorting on UCS-2/UTF-16 code units would not achieve
     > the same goal (although the result would differ).
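     >
     >     A minimal sketch of where the two orderings diverge (plain string
     > comparison assumed):
     >
     >         const a = "\uFF61";    // U+FF61, a single code unit
     >         const b = "\u{10000}"; // U+10000, surrogate pair D800 DC00
     >         a < b;                               // false: code-unit order
     >         a.codePointAt(0) < b.codePointAt(0); // true: code-point order
     >
     >     Either ordering is deterministic; they simply disagree for strings
     > containing characters outside the BMP.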
     >
     >>
     >>         Your claim about uppercase Unicode escapes is incorrect; there
     >> is no such requirement:
     >>
     >> https://tools.ietf.org/html/rfc8259#section-7
     >>
     >>     I don't recall ever making a claim about uppercase Unicode escapes,
     >> other than observing that it is the preferred form for examples in the
     >> JSON RFCs... what are you talking about?
     >
     >     You're right, I found it in
     > https://gibson042.github.io/canonicaljson-spec/#changelog
     >
     >     Thanx,
     >     Anders
     >



_______________________________________________
es-discuss mailing list
[email protected]
https://mail.mozilla.org/listinfo/es-discuss
