On Mar 9, 2009, at 2:12 PM, Chris Anderson wrote:
On Mon, Mar 9, 2009 at 10:22 AM, Jens Alfke <[email protected]> wrote:
On Mar 8, 2009, at 4:51 PM, Antony Blakey wrote:
OLPC has a start on canonical JSON:
http://wiki.laptop.org/go/Canonical_JSON.
Thanks for the link! I'd done a bit of searching for prior art, but hadn't run across that.
It's pretty close to my description. It looks good, except that:
• They say that arbitrary byte sequences are allowed in strings. This is really problematic (it makes it impossible to reliably convert JSON to an in-memory JavaScript string object!) and contradicts the JSON spec, whose third paragraph says that "a string is a sequence of zero or more Unicode characters".
• As I did, they say keys should be sorted in "lexicographic order". Like me, they probably meant "code-point order".
The ban on floating-point numbers is sensible. I'd draw the line at banning integers, though, as that tends to really complicate round-trip transformations with in-memory objects. (Every int field has to be shadowed as a string.)
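As a rough illustration of the rules above (keys sorted by code point, integers allowed, floats rejected), here is a minimal sketch of a canonicalizing serializer. This is hypothetical example code, not any serializer actually used by the projects discussed:

```python
import json

def canonical_json(obj):
    """Serialize obj deterministically: keys in code-point order,
    no insignificant whitespace, floats rejected, ints allowed."""
    if isinstance(obj, float):
        raise TypeError("floats are not allowed in canonical JSON")
    if isinstance(obj, dict):
        # sorted() on Python strings compares by Unicode code point,
        # i.e. the "code-point order" meant above.
        return "{" + ",".join(
            json.dumps(k, ensure_ascii=False) + ":" + canonical_json(v)
            for k, v in sorted(obj.items())) + "}"
    if isinstance(obj, list):
        return "[" + ",".join(canonical_json(v) for v in obj) + "]"
    return json.dumps(obj, ensure_ascii=False)

print(canonical_json({"b": 2, "a": [1, "x"]}))  # {"a":[1,"x"],"b":2}
```

The point is that two independent implementations following the same rules produce byte-identical output for the same document, which is what a signature needs.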
Quoting myself from the old thread: is a printf-style formatter a reasonable compromise for floats?
The problem is that floating-point/text conversion is a really difficult problem. I've butchered FP text conversion in Lotus Notes myself; it's way more complicated and subtle than I ever thought it could be. While there are standardized algorithms, the reality is there are so many languages and libraries with slight variations that it's really hard to know when you've got it right. You may have written a ton of tests and everything passes, but did you try it with "0.13421143112e-12"? Because the bias tables in algorithm X are slightly different from those in algorithm Y, and none of your tests detect it. Yada yada.
If the numbers were sent as the raw hexadecimal bits of the double, then there is no fidelity problem. However, you can't look at that as a human and tell what the FP value is, and no JSON parser will understand it either. I understand why they punted and said "none allowed".
-Damien
The requirement not to use JSON numbers might be too stringent. The other option is to have the signature state which form of number it's working with. Basically this would be a function from the number to a string.

E.g.: all numbers in this signature were converted to strings using the printf format "%4.2f" or somesuch.
This would allow signers to specify the precision that must be
maintained for them to consider the document representative of what
they chose to sign. So you might end up signing a document that would
be valid if transport lost some precision, but invalid if it lost too
much precision.
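A minimal sketch of that proposal, assuming the signature metadata carries the printf format and both signer and verifier normalize numbers with it before hashing (the function names and use of SHA-256 here are illustrative assumptions, not part of any proposal in this thread):

```python
import hashlib
import json

def normalize_numbers(obj, fmt="%4.2f"):
    # Replace every float with its fixed-format string, recursively,
    # so only the declared precision contributes to the digest.
    if isinstance(obj, float):
        return fmt % obj  # e.g. 3.3333333333 -> "3.33"
    if isinstance(obj, dict):
        return {k: normalize_numbers(v, fmt) for k, v in obj.items()}
    if isinstance(obj, list):
        return [normalize_numbers(v, fmt) for v in obj]
    return obj

def digest(doc, fmt="%4.2f"):
    norm = json.dumps(normalize_numbers(doc, fmt), sort_keys=True)
    return hashlib.sha256(norm.encode()).hexdigest()

# Precision lost in transit doesn't break the signature, as long as
# the value still rounds to the same formatted string:
assert digest({"x": 3.3333333333}) == digest({"x": 3.33})
```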
As long as it rounds to "3.33" I'm good. If the JSON makes it all the way to the other end and still contains 3.3333333333, that may be better, but it doesn't affect the signature.
--
Chris Anderson
http://jchris.mfdz.com