Re: [dev] Re: json

2019-06-15 Thread Mattias Andrée
On Sat, 15 Jun 2019 22:11:13 +0200
Wolf  wrote:

> Hello,
> 
> On , Mattias Andrée wrote:
> > Wouldn't it just complicate matters if you needed to specify whether a
> > number is an integer or a real value;  
> 
> Could you not just consider a sequence of [0-9]+ to be an integer and
> anything with other characters either invalid or a float? Not sure, I'm
> by no means a parser expert, so I might be missing something fundamental
> here.

A bit: floats can be significantly larger than ints (and there are
negative numbers, of course). However, the syntax could be simple:
\+?[0-9]+ could be unsigned int, -[0-9]+ signed int, and
[+-]?[0-9]*\.[0-9]+ or [+-]?[0-9]+\.[0-9]* float. But still, only
using `long double` or arbitrary-precision floating-point numbers
is simpler in my mind: in usage, in specification, and in
implementation.
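That grammar is small enough to check with a hand-rolled scanner; a
minimal sketch in C (the `classify` and `numkind` names are made up
for illustration, and exponents are deliberately not part of the
sketched grammar):

```c
#include <ctype.h>

enum numkind { NUM_INVALID, NUM_UINT, NUM_INT, NUM_FLOAT };

/* Classify a number token per the sketched grammar:
 *   \+?[0-9]+            -> unsigned int
 *   -[0-9]+              -> signed int
 *   [+-]?[0-9]*\.[0-9]+  -> float
 *   [+-]?[0-9]+\.[0-9]*  -> float
 */
static enum numkind
classify(const char *s)
{
	int neg = 0, digits = 0, dot = 0;

	if (*s == '+' || *s == '-')
		neg = (*s++ == '-');
	for (; *s; s++) {
		if (isdigit((unsigned char)*s))
			digits++;
		else if (*s == '.' && !dot)
			dot = 1;	/* at most one decimal point */
		else
			return NUM_INVALID;
	}
	if (!digits)
		return NUM_INVALID;	/* ".", "+", "" etc. */
	if (dot)
		return NUM_FLOAT;
	return neg ? NUM_INT : NUM_UINT;
}
```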

I would also prefer it if hexadecimal encoding of floats were
supported (to eliminate loss of precision).
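C99 already has such an encoding: printf's `%a` conversion writes the
exact bit pattern of a double as a hexadecimal float, and strtod()
reads it back losslessly. A small sketch of the round trip (the
`hex_roundtrip_exact` name is illustrative):

```c
#include <stdio.h>
#include <stdlib.h>

/* Serialize a double as a C99 hexadecimal float ("%a") and parse it
 * back with strtod(); the bit pattern survives exactly, so no
 * precision is lost in the text round trip. */
static int
hex_roundtrip_exact(double x)
{
	char buf[64];

	snprintf(buf, sizeof(buf), "%a", x);
	return strtod(buf, NULL) == x;
}
```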

> 
> > Additionally, I think you would want support for arbitrary
> > precision. Again, the software can just check if arbitrary precision
> > is required, no need to complicate the syntax.  
> 
> Agreed, arbitrary precision would be nice, and currently it is
> probably done via strings when needed. But I am not sure it is
> something you want to have in the standard as a separate type.
> Passing them as strings is probably good enough for the specialized
> applications that do need them.
> 
> > What should a library do with parsed numbers that are too large or too
> > precise?  
> 
> Report an error, and provide a flag for best-effort parsing if the
> user wants the number anyway, knowing it will not be precise. Do not
> make a silent guesstimate.
> 
> 
> 
> Basically, I think the fact that the following returns false is stupid:
> 
>   $ ruby -rjson -e 'puts({ num: (?9 * 100).to_i }.to_json)' \
>     | node -p 'var fs = require("fs");
>       JSON.stringify(JSON.parse(fs.readFileSync(0, "utf-8")));' \
>     | ruby -rjson -e 'puts (?9 * 100).to_i == JSON.parse(STDIN.read)["num"]'
>   false
> 
> That means that even with every library in the chain fully
> implementing the JSON standard, a lossless round trip is not
> guaranteed: the data can be silently corrupted.
> 
> W.




Re: [dev] Re: json

2019-06-15 Thread Wolf
Hello,

On , Mattias Andrée wrote:
> Wouldn't it just complicate matters if you needed to specify whether a
> number is an integer or a real value;

Could you not just consider a sequence of [0-9]+ to be an integer and
anything with other characters either invalid or a float? Not sure, I'm
by no means a parser expert, so I might be missing something fundamental
here.

> Additionally, I think you would want support for arbitrary
> precision. Again, the software can just check if arbitrary precision
> is required, no need to complicate the syntax.

Agreed, arbitrary precision would be nice, and currently it is
probably done via strings when needed. But I am not sure it is
something you want to have in the standard as a separate type.
Passing them as strings is probably good enough for the specialized
applications that do need them.

> What should a library do with parsed numbers that are too large or too
> precise?

Report an error, and provide a flag for best-effort parsing if the
user wants the number anyway, knowing it will not be precise. Do not
make a silent guesstimate.
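That policy maps directly onto strtoll()'s ERANGE reporting: fail
loudly by default, approximate only on request. A sketch of such an
interface (the `parse_i64` name and `best_effort` flag are
illustrative, not from any real library):

```c
#include <errno.h>
#include <limits.h>
#include <stdlib.h>

/* Parse a decimal integer, failing on overflow unless the caller
 * passes best_effort, in which case the value clamped to
 * LLONG_MIN/LLONG_MAX is returned instead of an error. */
static int
parse_i64(const char *s, int best_effort, long long *out)
{
	char *end;
	long long v;

	errno = 0;
	v = strtoll(s, &end, 10);
	if (end == s || *end != '\0')
		return -1;	/* not a (pure) decimal number */
	if (errno == ERANGE && !best_effort)
		return -1;	/* too large, and the caller said no guessing */
	*out = v;		/* exact, or clamped under best_effort */
	return 0;
}
```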



Basically, I think the fact that the following returns false is stupid:

  $ ruby -rjson -e 'puts({ num: (?9 * 100).to_i }.to_json)' \
    | node -p 'var fs = require("fs");
      JSON.stringify(JSON.parse(fs.readFileSync(0, "utf-8")));' \
    | ruby -rjson -e 'puts (?9 * 100).to_i == JSON.parse(STDIN.read)["num"]'
  false

That means that even with every library in the chain fully
implementing the JSON standard, a lossless round trip is not
guaranteed: the data can be silently corrupted.
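The underlying cause is that a double has a 53-bit significand, so
decoders that store every JSON number as a double (as JavaScript's
JSON.parse does) silently round larger integers. A quick C check of
exactly where that starts (the `survives_double` name is
illustrative):

```c
#include <stdint.h>

/* An integer survives a trip through double only if it fits in the
 * 53-bit significand; 2^53 + 1 is the first one that does not. */
static int
survives_double(int64_t n)
{
	return (int64_t)(double)n == n;
}
```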

W.
-- 
There are only two hard things in Computer Science:
cache invalidation, naming things and off-by-one errors.




Re: [dev] Re: json

2019-06-15 Thread Mattias Andrée
`long double` can exactly represent every value representable in
`uint64_t`, `int64_t`, and `double` (big floats can be used in other
languages). Wouldn't it just complicate matters if you needed to
specify whether a number is an integer or a real value? If there is
any need for it, the software can just check which it best fits.
Additionally, I think you would want support for arbitrary precision.
Again, the software can just check whether arbitrary precision is
required; no need to complicate the syntax. What should a library do
with parsed numbers that are too large or too precise? In most cases,
the program knows what size and precision are required.
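On x86 the claim holds because long double carries a 64-bit
significand (LDBL_MANT_DIG == 64), wide enough for every int64_t and
uint64_t; a small C check (on platforms where long double is plain
double, the full-width check would honestly fail):

```c
#include <float.h>
#include <stdint.h>

/* True if n survives a round trip through long double unchanged;
 * with LDBL_MANT_DIG >= 64 this holds for all of uint64_t. */
static int
longdouble_holds(uint64_t n)
{
	return (uint64_t)(long double)n == n;
}
```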


Regards,
Mattias Andrée

On Sat, 15 Jun 2019 20:37:34 +0200
Wolf  wrote:

> On , sylvain.bertr...@gmail.com wrote:
> > json almost deserves a promotion to suckless format.  
> 
> Except for not putting any limits on the sizes of integers. I think
> it would be better to specify a size that implementations must
> support in order to be JSON compliant. And also to have separate int
> and float types. Because let's compare what happens in ruby:
> 
>   JSON.parse(?9 * 100)
>   => 
> 
> and in firefox (JavaScript):
> 
>   var x = ''; for (var i = 0; i < 100; ++i) { x += '9'; }; JSON.parse(x);
>   => 1e+100  
> 
> So, yay interoperability I guess?
> 
> W.




Re: [dev] Re: json

2019-06-15 Thread Wolf
On , sylvain.bertr...@gmail.com wrote:
> json almost deserves a promotion to suckless format.

Except for not putting any limits on the sizes of integers. I think
it would be better to specify a size that implementations must
support in order to be JSON compliant. And also to have separate int
and float types. Because let's compare what happens in ruby:

JSON.parse(?9 * 100)
=> 


and in firefox (JavaScript):

var x = ''; for (var i = 0; i < 100; ++i) { x += '9'; }; JSON.parse(x);
=> 1e+100

So, yay interoperability I guess?
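The same rounding can be reproduced in C: strtod() maps the
100-nines literal to the nearest representable double, which is the
value JavaScript prints as 1e+100 (the `parse_hundred_nines` name is
illustrative):

```c
#include <stdlib.h>
#include <string.h>

/* Build the 100-nines literal from the examples above and parse it
 * as a double; the difference of 1 between 10^100 - 1 and 10^100 is
 * far below a double's spacing at that magnitude, so both map to
 * the same value. */
static double
parse_hundred_nines(void)
{
	char buf[101];

	memset(buf, '9', 100);
	buf[100] = '\0';
	return strtod(buf, NULL);
}
```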

W.