Re: Revisiting Decimal (generic algorithms)

2009-01-31 Thread Sam Ruby

Brendan Eich wrote:


This variation preserves wrappers, so a Decimal converter function (when 
invoked) and constructor (via new, and to hold a .prototype home for 
methods). The committee plunks for more of this primitive/wrapper 
business, since we have wrappers and primitives for numbers and other 
types, and backward compatibility requires keeping them. Operators work 
mostly as implemented already by Sam (results here 
http://intertwingly.net/blog/2008/08/27/ES-Decimal-Updates, with some 
out-of-date results; notably typeof 1.1m should be "decimal", not 
"object" -- and not "number").


More up to date results can be found here:

http://intertwingly.net/stories/2008/09/20/estest.html

Which was discussed here:

https://mail.mozilla.org/pipermail/es-discuss/2008-December/008316.html

Sam and I are going to work on adapting Sam's SpiderMonkey 
implementation, along with our existing ES3.1-based JSON codec and 
trace-JITting code, to try this out. More details as we get into the work.


Since the bug is about usability, we have to prototype and test on real 
users, ideally a significant number of users. We crave comments and 
ideas from es-discuss too, of course.


I'd like to highlight one thing: Mike and I agreed to "no visible 
cohorts" with the full knowledge that it would be a significant 
usability issue.  We did so in order to get decimal in 3.1. In the 
context of Harmony, I feel that it is important that we fully factor in 
usability concerns.  Prototyping and testing on real users, ideally with 
a significant number of users, is an excellent way to proceed.



/be


- Sam Ruby



Re: Revisiting Decimal (generic algorithms)

2009-01-30 Thread Brendan Eich

On Jan 18, 2009, at 4:48 PM, Brendan Eich wrote:


In any case, I think we first need to decide what the semantics
would be *after* any desugaring of multimethods.


The goal is DWIM, which is why we've circled around these implicit  
or low-cost-if-explicit approaches.


Of course DWIM is ill-defined, but bug 5856 and dups suggest much of  
the problem comes from the language supporting numeric literals  
written in base 10 with certain precision or significance, but then  
mistreating them via conversion to binary and inevitable operation  
using only binary operators.




1. changing the number type to decimal by fiat;
2. adding a "use decimal" pragma;
3. trying to keep literals generic.

The high-cost explicit alternative is to tell 'em "use the m 
suffix!" That probably will not work out well in the real world. 
It's a syntax tax hike: it will require all user agents to be 
upgraded (unlike "use decimal"), and yet people will still forget to 
use the suffix.


I'm still interested in better "use decimal" design ideas.


Allen made another proposal, which Waldemar mentioned in his notes  
from the TC39 meeting:


4. All literals lex as decimal, string to number likewise converts to  
decimal; but contagion is to binary, Math.sin/PI/etc. remain binary.  
JSON would parse to decimal in this proposal.


This variation may require opt-in as Waldemar pointed out: people  
write 1e400 to mean Infinity.
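
To make the hazard concrete (the first line is checkable in any 
engine today; the decimal half is only the proposal):

  1e400 === Infinity   // true today: binary64 overflows near 1.8e308
  // under proposal 4 this literal would lex as decimal, and
  // decimal128's exponent range reaches 6144, so 1e400 would be a
  // large finite value rather than Infinity
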


This variation preserves wrappers, so a Decimal converter function  
(when invoked) and constructor (via new, and to hold a .prototype home  
for methods). The committee plunks for more of this primitive/wrapper  
business, since we have wrappers and primitives for numbers and other  
types, and backward compatibility requires keeping them. Operators  
work mostly as implemented already by Sam (results here, with some out- 
of-date results; notably typeof 1.1m should be "decimal", not 
"object" -- and not "number").
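
Sketchily, mirroring the Number pattern (decimal literals and a 
Decimal global are proposed here, not shipping anywhere):

  typeof Decimal("1.1")      // "decimal" -- converter invoked as a function
  typeof new Decimal("1.1")  // "object"  -- wrapper object, whose
                             // .prototype is the home for methods
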


Sam and I are going to work on adapting Sam's SpiderMonkey  
implementation, along with our existing ES3.1-based JSON codec and  
trace-JITting code, to try this out. More details as we get into the  
work.


Since the bug is about usability, we have to prototype and test on  
real users, ideally a significant number of users. We crave comments  
and ideas from es-discuss too, of course.


/be


Re: Revisiting Decimal (generic algorithms)

2009-01-30 Thread Brendan Eich

On Jan 30, 2009, at 6:28 PM, Brendan Eich wrote:

According to http://en.wikipedia.org/wiki/Polymorphism_(computer_science) 
 (hey, it's referenced):


There are two fundamentally different kinds of polymorphism,  
originally informally described by Christopher Strachey in 1967. If  
the range of actual types that can be used is finite and the  
combinations must be specified individually prior to use, it is  
called Ad-hoc polymorphism. If all code is written without mention  
of any specific type and thus can be used transparently with any  
number of new types, it is called parametric polymorphism. John C.  
Reynolds (and later Jean-Yves Girard) formally developed this notion  
of polymorphism as an extension to the lambda calculus (called the  
polymorphic lambda calculus, or System F).


So multimethods use parametric polymorphism.


Correction: multimethods are ad-hoc too, since you have to write each 
particular type combination. For dyadic operators, the multiple-
argument dispatch differs from single (left, receiver) dispatch, but 
the type combinations are still finite and specified.


Not sure this matters. The real argument is about single vs. multiple  
dispatch.



Lars's point about future-proofing, when he wrote "ad-hoc 
overloading", seems to me to be about adding extensible dyadic 
operators via double-dispatch now, then adding multiple dispatch in  
some form later and being prevented by compatibility considerations  
from changing operators. Best to ask him directly, though -- I'll do  
that.


Lars meant exactly that -- in any conversation tending toward a future  
version of the language where multimethods or something that addresses  
the bugs (or features from the other point of view) of single- 
dispatch operators might come along, standardizing single dispatch and  
requiring double(-single)-dispatch from left to right, with  
reverse_add and so on, would be future-hostile.
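
A minimal sketch of the double(-single)-dispatch shape being 
discussed; the add/reverse_add names follow Lars's description above 
and are not from any draft spec:

  function ADD(x, y) {
    // dispatch left to right: ask the left operand first...
    if (x != null && typeof x.add === "function") return x.add(y);
    // ...then let the right operand reverse the operation
    if (y != null && typeof y.reverse_add === "function")
      return y.reverse_add(x);
    return x + y;  // fall back to primitive addition
  }
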


/be


Re: Revisiting Decimal (generic algorithms)

2009-01-18 Thread Brendan Eich

On Jan 16, 2009, at 7:38 PM, David-Sarah Hopwood wrote:


It could be argued that most ES3.x programs are probably not relying
on the exact errors introduced by double-precision IEEE 754, but that
seems risky to me.


Emphatically agreed. People file dups of bug 5856 but they also  
knowingly and unknowingly depend on IEEE 754 behavior in detail.




By that argument, ignoring performance, you could
unconditionally implement all numbers as decimals, and I don't think
many people here would accept that as being compatible.


This was the path favored by Mike Cowlishaw and (sometimes, IIRC) by  
Doug Crockford. It was rejected by at least me (for Mozilla) and  
Maciej (for Apple).




To address the problem raised by Allen, you would probably want to
implicitly define implementations that used different types for
constants, depending on the argument types to a given function
(and it is not clear how that would work for mixed-type arguments).


Another idea for constants that seems strictly more usable than any 
suffix requirement or complicated constant-parameter-based dispatch: 
"use decimal". The idea is to change the meaning of literals and 
operators. Again the problem of built-ins, or really of interfacing 
with the rest of the world not scoped by the lexical pragma, remains.




In any case, I think we first need to decide what the semantics
would be *after* any desugaring of multimethods.


The goal is DWIM, which is why we've circled around these implicit  
or low-cost-if-explicit approaches.


* changing the number type to decimal by fiat;
* adding a "use decimal" pragma;
* trying to keep literals generic.

The high-cost explicit alternative is to tell 'em "use the m suffix!" 
That probably will not work out well in the real world. It's a syntax 
tax hike: it will require all user agents to be upgraded (unlike "use 
decimal"), and yet people will still forget to use the suffix.
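
For concreteness, one possible surface for the pragma -- the syntax 
here is purely illustrative, which is exactly the design space still 
open:

  function total(price, rate) {
    use decimal;                 // hypothetical lexically scoped opt-in
    return price * (1 + rate);   // literals lex as decimal, ops are decimal
  }
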


I'm still interested in better "use decimal" design ideas.

/be


RE: Revisiting Decimal (generic algorithms)

2009-01-16 Thread Allen Wirfs-Brock
Returning to  Brendan's original question...

The problem I brought up at the Kona meeting was that the Decimal proposal did 
not consistently enable the writing of functions that implement generic numeric 
algorithms.  By this I mean algorithms that can be applied to either Number or 
Decimal arguments and produce a result that is of the same type as its inputs 
for example:

function add(a,b) { return a+b};
// ok in Kona proposal, add(1m,1m) == 2m; add(1.0,1.0) == 2.0  (binary)

function max3(a,b,c) {return Math.max(a,b,c)}
// not generic in Kona proposal, max3(1m,2m,3m) == 3 (not 3m)
// no generic Math functions, so user must explicitly code either Math.max 
// or Decimal.max

function fuzz(a) { return a + 0.1}
// not generic in Kona draft, fuzz(1m) === 1.10000000000000008881784197001...
// (but see below)
// Kona spec. uses binary floating point for all mixed mode operations

The second case is fixable with some work by making the Math functions all be 
generic.

Sam says the third case is a bug in the Kona spec. whose fix had already been 
agreed upon at the Redmond meeting.  The fix, as I understand it, is that mixed 
mode arithmetic should be performed using decimal operations. However, that 
does not address my concern.  With that fix in place, the results of fuzz(1m) 
would be something like 1.1000000000000000888178419700125232338905334472656250m 
(-- note m).  That is because the literal 0.1 would be lexed as a Number 
(ie, binary floating point) literal, stored as a binary approximation, and that 
binary approximation would be dynamically converted to the decimal floating 
point equivalent of the binary approximation by the add operations.
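
That stored approximation is observable in any engine today:

  (0.1).toFixed(20)   // "0.10000000000000000555" -- the double nearest
                      // to 0.1, printed past its usual decimal facade
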

This problem cannot be fixed simply by tweaking the coercion rules.  It 
probably requires that numeric literals be treated as generic values that are 
only interpreted situationally as either binary or decimal values in the 
context of a particular operations.

The design details of the integrations of multiple numeric data types 
(potentially not just Number and Decimal) and questions such as whether and how 
a dynamically typed language like ECMAScript should support such generic 
algorithms will have long lasting impact on the usability of the language.  My 
perspective in Kona, when we talked about Decimal, was that these are Harmony 
scale issues that must be carefully thought through and that they should not be 
prematurely and irrevocably resolved as a consequence of an accelerated effort 
to include Decimal in ES3.1.

Allen

-----Original Message-----
From: es-discuss-boun...@mozilla.org [mailto:es-discuss-
boun...@mozilla.org] On Behalf Of Brendan Eich
Sent: Friday, January 09, 2009 2:34 PM
To: Waldemar Horwat; David-Sarah Hopwood; Sam Ruby
Cc: es-discuss
Subject: Re: Revisiting Decimal

Sam's mail cited below has gone without a reply for over a month.
Decimal is surely not a high priority, but this message deserves some
kind of response or we'll have to reconstruct the state of the
argument later, at probably higher cost.

I was not at the Redmond meeting, but I would like to take Sam's word
that the cohort/toString issue was settled there. I heard from Rob
Sayre something to this effect.

But in case we don't have consensus, could any of you guys state the
problem for the benefit of everyone on this list? Sorry if this seems
redundant. It will help, I'm convinced (compared to no responses and
likely differing views of what the problem is, or what the consensus
was, followed months later by even more painful reconstruction of the
state of the argument).

The wrapper vs. primitive issue remains, I believe everyone agrees.

/be

On Dec 4, 2008, at 2:22 PM, Sam Ruby wrote:

 2008/12/4 Brendan Eich bren...@mozilla.com:

 Sam pointed that out too, and directed everyone to his test-
 implementation
 results page:
 http://intertwingly.net/stories/2008/09/20/estest.html
 Indeed we still have an open issue there ignoring the wrapper one:

 [Sam wrote:] I think the only major outstanding semantic issue was
 wrapper
 objects; apart from that, the devil was in the detail of spec
 wording.[End Sam]

 No, the cohort/toString issue remains too (at least).

 With a longer schedule, I would like to revisit that; but as of
 Redmond, we had consensus on what that would look like in the context
 of a 3.1 edition.

 From where I sit, I find myself in the frankly surreal position that
 we are in early December, and there are no known issues of consensus,
 though I respect that David-Sarah claims that there is one on
 wrappers, and I await his providing of more detail.

 /be

 - Sam Ruby




Re: Revisiting Decimal (generic algorithms)

2009-01-16 Thread Brendan Eich

On Jan 16, 2009, at 2:25 PM, Allen Wirfs-Brock wrote:

I think that carrying dual encodings (both binary and decimal) for 
each numeric literal might be a reasonable approach as long as we 
only have two types.


Excluding small integer literals, most numeric literals in my  
experience are small enough that carrying 8 + 16 = 24 bytes loses, but  
you're right that this is all implementation detail. Still, the spec  
is informed by implementor feedback, to the point that it can't be  
developed in a vacuum or it might be ignored.



 However, choosing that over maintaining the source form sounds 
like an implementation rather than a specification decision.


Speaking for Mozilla, we probably can't tolerate anything like  
carrying around two representations, or source forms, for number  
literals. I'd have to measure non-int literals to say for sure, but  
gut check says no.


I'm not saying multimethods are the only way forward. I'm genuinely  
interested in new thinking about numbers and decimal, because of that  
most-frequently-dup'ed bug:


https://bugzilla.mozilla.org/show_bug.cgi?id=5856
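
The shape of nearly every dup, runnable anywhere:

  0.1 + 0.2 == 0.3   // false: the binary64 sum is 0.30000000000000004
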

But I do not see a solution for it yet, and your point that we need to  
solve this just to get decimal+double into the language is right on.


/be


Re: Revisiting Decimal (generic algorithms)

2009-01-16 Thread David-Sarah Hopwood
Allen Wirfs-Brock wrote:
[...]
 function fuzz(a) { return a + 0.1}
 //not generic in Kona draft, fuzz(1m) === 1.10000000000000008881784197001... 
 (but see below)
 //Kona spec. uses binary floating point for all mixed mode operations
 
[...]
 This problem cannot be fixed simply by tweaking the coercion rules.
 It probably requires that numeric literals be treated as generic values
 that are only interpreted situationally as either binary or decimal values
 in the context of a particular operations.

I am not aware of any precedent for this approach in other languages, and
I'm very skeptical about whether it can be made to work in ECMAScript.
Consider

  function id(x) { return x; }

What is the result and type of id(0.1) in this approach, and why?

-- 
David-Sarah Hopwood


Re: Revisiting Decimal (generic algorithms)

2009-01-16 Thread David-Sarah Hopwood
David-Sarah Hopwood wrote:
 Allen Wirfs-Brock wrote:
 [...]
 function fuzz(a) { return a + 0.1}
 //not generic in Kona draft, fuzz(1m) === 
 1.10000000000000008881784197001... (but see below)
 //Kona spec. uses binary floating point for all mixed mode operations

 [...]
 This problem cannot be fixed simply by tweaking the coercion rules.
 It probably requires that numeric literals be treated as generic values
 that are only interpreted situationally as either binary or decimal values
 in the context of a particular operations.
 
 I am not aware of any precedent for this approach in other languages, and
 I'm very skeptical about whether it can be made to work in ECMAScript.
 Consider
 
   function id(x) { return x; }
 
 What is the result and type of id(0.1) in this approach, and why?

 - if binary 0.1, then we would have

 1m + 0.1 !== 1m + id(0.1)

   which breaks referential transparency (in the absence of side-effects)

 - if decimal 0.1m, then we break compatibility with ES3.

 - if the value remains generic, then such values must be supported at
   run-time as a third numeric type besides number and decimal, which
   seems unsupportably complex to me.

-- 
David-Sarah Hopwood ⚥



Re: Revisiting Decimal (generic algorithms)

2009-01-16 Thread Brendan Eich

On Jan 16, 2009, at 4:30 PM, Sam Ruby wrote:


Indeed.  This is the first time I understood (at a high level) the
request.  I'm not saying it wasn't explained before, or even that it
wasn't explained well, but this is the first time I understood it
(again at a high level, questions on details below).


It's good to get this understood widely -- it probably did not come  
through in the notes from Kona (useful as they were). Sorry on my part  
for that, kudos again to Allen.




Like Allen says later, most small integers (i.e., the ones that fit
exactly in a double precision binary value) can simply be retained as
binary64.


Or machine ints -- ALU > FPU still.



I suspect that covers the majority of constants in deployed
javascript.  Now let's consider the rest.

First, Allen's example:

function fuzz(a) { return a + 0.1}

Where fuzz(0.1)===0.2 and fuzz(0.1m)===0.2m

The only way I can see that working is if the constant is initially in
a form that either is readily convertible to source, or stores both
values.  I don't understand how multimethods (on +?) affect this.
If I'm missing something, please let me know (or simply provide a
pointer to where I can educate myself).


I did, see followup links to reading-lists, from which I'll pick a  
specific link:


http://www.artima.com/weblogs/viewpost.jsp?thread=101605
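
To give the flavor in ES3-ish code -- a dispatch-table sketch only, 
with a stand-in Decimal, not a proposal:

  function Decimal(s) { this.s = s; }  // stand-in for the proposed type
  var addTable = {};                   // "leftType,rightType" -> method
  function typeName(x) { return x instanceof Decimal ? "decimal" : typeof x; }
  function defAdd(l, r, fn) { addTable[l + "," + r] = fn; }
  function ADD(a, b) {                 // dispatch on *both* operand types
    var fn = addTable[typeName(a) + "," + typeName(b)];
    if (!fn) throw new TypeError("no applicable + method");
    return fn(a, b);
  }
  defAdd("number", "number", function (a, b) { return a + b; });
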



Continuing on, let's tweak this a bit.

function fuzz(a) {var b=0.1; return a+b}

I would suggest that if the expectation would be that this function
behaves the same as the previous one.


It had better!



My interpretation is that this means that internally there are three
data types, one that is double, one that is decimal, and one that
somehow manages to be both.  Internally in that this implementation
detail ideally should not be visible to the application programmer.
Again, I could be wrong (in the need for three data types, not on the
opinion that this should not be visible), but pressing on...


No, Allen allowed for that, but of course this generic type has to  
propagate at runtime through variable and function abstraction.




function is_point_one(a) {var b=0.1; return b===a}

Is the expectation that this would return true for *both* 0.1 and
0.1m?


I don't see how this could work.



 This leads to a rather odd place where it would be possible for
triple equals to not be transitive, i.e. a===b and b===c but not
a!===c.


Er, a!==c ;-).



 That alone is enough to give me pause and question this
approach.


Me too.



Continuing trip down this looking glass, what should typeof(0.1)
return?  You might come to a different conclusion, and again I might
be missing something obvious, but if these Schrödinger's catstants
(sorry for the bad pun) can be assigned to variable, then I would
assert that typeof(0.1) and typeof(0.1m) should both be 'number'.


It should be clear that I won't go this far. My reply to Allen was  
gently suggesting that his suggestion would not fly on implementation  
efficiency grounds, but I think you've poked bigger holes. I'm still  
interested in multimethods, including for operators.




Finally, this has bearing on the previous json discussion.  If it is
possible to defer the binding of a literal value to a particular
variant of floating point (i.e., binary vs decimal), then there no
longer is a need for a JSON parse to prematurely make this
determination.

I suspect that these last two paragraphs will make Kris happy.


The previous paragraphs should induce unhappiness that trumps that  
illusory joy, though.




But I'll stop here.  I may very well be out in the weeds at this
point.  But my initial take is that this approach produces a different
(and somehow more fundamental) set of surprises than the approach we
had previously agreed on, and furthermore it isn't clear to me that
this approach can be implemented in a way that has negligible
performance impact for applications that never make use of decimal.

But I hope that one or both of you (or anybody else) can point out
something that I'm missing.


Not me, and I see David-Sarah has observed that dual representation  
cannot be confined to literals.


But I'd still like to encourage thinking outside of the narrow ES3-ish 
box in which Decimal has been cast. If not multimethods, some other 
approach novel to ES (if not to well-researched language design) is needed.


/be


Re: Revisiting Decimal (generic algorithms)

2009-01-16 Thread Brendan Eich

On Jan 16, 2009, at 5:30 PM, David-Sarah Hopwood wrote:


David-Sarah Hopwood wrote:

function id(x) { return x; }

What is the result and type of id(0.1) in this approach, and why?


- if binary 0.1, then we would have

1m + 0.1 !== 1m + id(0.1)

  which breaks referential transparency (in the absence of side- 
effects)


- if decimal 0.1m, then we break compatibility with ES3.

- if the value remains generic, then such values must be supported at
  run-time as a third numeric type besides number and decimal, which
  seems unsupportably complex to me.


Agreed on all points.

Have you looked at multimethods in Cecil?

http://www.cs.washington.edu/research/projects/cecil/pubs/cecil-oo-mm.html
http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.48.8502

Good discussion, let's keep it going.

/be


Re: Revisiting Decimal (generic algorithms)

2009-01-16 Thread Sam Ruby
On Fri, Jan 16, 2009 at 8:30 PM, Brendan Eich bren...@mozilla.com wrote:

 Like Allen says later, most small integers (i.e., the ones that fit
 exactly in a double precision binary value) can simply be retained as
 binary64.

 Or machine ints -- ALU > FPU still.

Agreed.  Those values that could fit in int32 before could continue to do so.

 I suspect that covers the majority of constants in deployed
 javascript.  Now let's consider the rest.

 First, Allen's example:

 function fuzz(a) { return a + 0.1}

 Where fuzz(0.1)===0.2 and fuzz(0.1m)===0.2m

 The only way I can see that working is if the constant is initially in
 a form that either is readily convertible to source, or stores both
 values.  I don't understand how multimethods (on +?) affect this.
 If I'm missing something, please let me know (or simply provide a
 pointer to where I can educate myself).

 I did, see followup links to reading-lists, from which I'll pick a specific
 link:

 http://www.artima.com/weblogs/viewpost.jsp?thread=101605

I must be dense.  My previous understanding of multimethods was that
it depends on the assumption that the type of each argument can be
determined.  That article doesn't change that for me.

 Continuing on, let's tweak this a bit.

 function fuzz(a) {var b=0.1; return a+b}

 I would suggest that if the expectation would be that this function
 behaves the same as the previous one.

 It had better!

So, here's the problem.  At the point of the ';' in the above, what is
the result of typeof(b)?

The problem gets worse rapidly.  The above may seem to be appealing at
first, but it degenerates rapidly.  Consider:

function fuzz(a) {var b=0.05; var c=0.05; var d=b+c; return a+d}

Should this return the same results as the previous fuzz functions?
What is the value of typeof(d)?

 My interpretation is that this means that internally there are three
 data types, one that is double, one that is decimal, and one that
 somehow manages to be both.  Internally in that this implementation
 detail ideally should not be visible to the application programmer.
 Again, I could be wrong (in the need for three data types, not on the
 opinion that this should not be visible), but pressing on...

 No, Allen allowed for that, but of course this generic type has to propagate
 at runtime through variable and function abstraction.

I don't follow.

 function is_point_one(a) {var b=0.1; return b===a}

 Is the expectation that this would return true for *both* 0.1 and
 0.1m?

 I don't see how this could work.

Before proceeding, let me simplify that:

function is_point_one(a) {return a===0.1}

The point of fuzz was that 0.1 as a literal would be interpreted as
a binary64 or as a decimal128 based on what it was combined with.  Why
would this example be any different?

  This leads to a rather odd place where it would be possible for
 triple equals to not be transitive, i.e. a===b and b===c but not
 a!===c.

 Er, a!==c ;-).

  That alone is enough to give me pause and question this
 approach.

 Me too.

 Continuing trip down this looking glass, what should typeof(0.1)
 return?  You might come to a different conclusion, and again I might
 be missing something obvious, but if these Schrödinger's catstants
 (sorry for the bad pun) can be assigned to variable, then I would
 assert that typeof(0.1) and typeof(0.1m) should both be 'number'.

 It should be clear that I won't go this far. My reply to Allen was gently
 suggesting that his suggestion would not fly on implementation efficiency
 grounds, but I think you've poked bigger holes. I'm still interested in
 multimethods, including for operators.

I don't see how this reasonably can be done half way.

And while multimethods are appealing for other reasons, I don't think
they relate to what Allen is suggesting.

- Sam Ruby


Re: Revisiting Decimal (generic algorithms)

2009-01-16 Thread Brendan Eich

On Jan 16, 2009, at 5:54 PM, Sam Ruby wrote:


http://www.artima.com/weblogs/viewpost.jsp?thread=101605


I must be dense.  My previous understanding of multimethods was that
it depends on the assumption that the type of each argument can be
determined.  That article doesn't change that for me.


Good! :-P

Not static typing, mind you; but typing nonetheless.



My interpretation is that this means that internally there are three
data types, one that is double, one that is decimal, and one that
somehow manages to be both.  Internally in that this implementation
detail ideally should not be visible to the application programmer.
Again, I could be wrong (in the need for three data types, not on  
the

opinion that this should not be visible), but pressing on...


No, Allen allowed for that, but of course this generic type has to  
propagate

at runtime through variable and function abstraction.


I don't follow.


My reading of Allen's message was that the generic type was for 
literals only, and would collapse (as in a superposed wave function) 
into decimal or double on first operational use, but use can be 
delayed through variable or parameter assignment. So the generic or 
both-double-and-decimal type must be used more widely than just for 
literal terms at runtime.




function is_point_one(a) {var b=0.1; return b===a}

Is the expectation that this would return true for *both* 0.1 and
0.1m?


I don't see how this could work.


Before proceeding, let me simplify that:

function is_point_one(a) {return a===0.1}

The point of fuzz was that 0.1 as a literal would be interpreted as
a binary64 or as a decimal128 based on what it was combined with.  Why
would this example be any different?


It wouldn't, but that breaks one of three important properties 
(referential transparency, compatibility, or implementation 
efficiency), as DSH has pointed out.



It should be clear that I won't go this far. My reply to Allen was  
gently
suggesting that his suggestion would not fly on implementation  
efficiency
grounds, but I think you've poked bigger holes. I'm still  
interested in

multimethods, including for operators.


I don't see how this reasonably can be done half way.


Right.



And while multimethods are appealing for other reasons, I don't think
they relate to what Allen is suggesting.


They do not -- they are the only sane alternative that I know of.

/be


Re: JSON numbers (was: Revisiting Decimal)

2009-01-16 Thread Brendan Eich

On Jan 15, 2009, at 7:28 PM, Sam Ruby wrote:

On Thu, Jan 15, 2009 at 9:24 PM, Brendan Eich bren...@mozilla.com  
wrote:


JSON's intended semantics may be arbitrary precision decimal (the  
RFC is

neither explicit nor specific enough in my opinion; it mentions only
range, not precision), but not all real-world JSON codecs use  
arbitrary
precision decimal, and in particular today's JS codecs use IEEE  
double
binary floating point. This approximates by default and creates a  
de-facto

standard that can't be compatibly extended without opt-in.


You might find the next link enlightening or perhaps even a pleasant  
diversion:


http://www.intertwingly.net/stories/2002/02/01/toInfinityAndBeyondTheQuestForSoapInteroperability.html

Quick summary as it applies to this discussion: perfection is
unattainable (duh!) and an implementation which implements JSON
numbers as quad decimal will retain more precision than one that
implements JSON numbers as double binary (duh!).


DuhT^2 ;-).

But more than that: discounting the plain fact that on the web at  
least, SOAP lost to JSON (Google dropped its SOAP APIs a while ago),  
do you draw any conclusions?


My conclusion, crustier and ornerier as I age, is that mixed-mode 
arithmetic with implicit conversions and "best effort" approximation 
is a botch and a blight. That's why I won't have it in JSON, encoding 
*and* decoding.


/be



Re: Revisiting Decimal (generic algorithms)

2009-01-16 Thread David-Sarah Hopwood
Brendan Eich wrote:
 On Jan 16, 2009, at 5:30 PM, David-Sarah Hopwood wrote:
 David-Sarah Hopwood wrote:
 function id(x) { return x; }

 What is the result and type of id(0.1) in this approach, and why?

 - if binary 0.1, then we would have

 1m + 0.1 !== 1m + id(0.1)

   which breaks referential transparency (in the absence of side-effects)

 - if decimal 0.1m, then we break compatibility with ES3.

 - if the value remains generic, then such values must be supported at
   run-time as a third numeric type besides number and decimal, which
   seems unsupportably complex to me.
 
 Agreed on all points.

A final nail in the coffin for the last (three-type) option above:

In ES3, the expression Number(0.1 + 0.1 + 0.1) would give
  Number(0.1) + Number(0.1) + Number(0.1) ==
0.3000000000000000444089209850062616169452667236328125

In the three-type option, it would give
  Number(0.3) ==
0.299999999999999988897769753748434595763683319091796875

(Decimal expansions are computed using SpiderMonkey's implementation
of toFixed. The point is simply that they are different.)

So the three-type option does not maintain compatibility, at least
if we are concerned with exact values.
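
The ES3 side of that claim is easy to check in any engine:

  (0.1 + 0.1 + 0.1).toFixed(20)   // "0.30000000000000004441", not 0.3
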

It could be argued that most ES3.x programs are probably not relying
on the exact errors introduced by double-precision IEEE 754, but that
seems risky to me. By that argument, ignoring performance, you could
unconditionally implement all numbers as decimals, and I don't think
many people here would accept that as being compatible.

Compatibility could, in principle, be maintained by adding a third
kind of literal for generic values, with a different suffix. However,
I think it is likely that unless generic values used the suffix-free
numeric literal form, they would remain too rarely used to make any
difference to the issue that Allen is concerned about.

 Have you looked at multimethods in Cecil?

I've previously studied Cecil's multimethods and type system in
detail (it's very nicely designed IMHO), but I'm not sure that it
is what we need here. Multimethods address the problem of how to
concisely define type-dependent functions, but the implementations
of those functions still have to be given explicitly for each type
combination on which the behaviour differs (ignoring inheritance
and subtyping, which I don't think are relevant here).

To address the problem raised by Allen, you would probably want to
implicitly define implementations that used different types for
constants, depending on the argument types to a given function
(and it is not clear how that would work for mixed-type arguments).

In any case, I think we first need to decide what the semantics
would be *after* any desugaring of multimethods.

-- 
David-Sarah Hopwood


Re: JSON numbers (was: Revisiting Decimal)

2009-01-16 Thread Sam Ruby

Brendan Eich wrote:

On Jan 15, 2009, at 7:28 PM, Sam Ruby wrote:

On Thu, Jan 15, 2009 at 9:24 PM, Brendan Eich bren...@mozilla.com 
wrote:


JSON's intended semantics may be arbitrary precision decimal (the RFC is
neither explicit nor specific enough in my opinion; it mentions only
range, not precision), but not all real-world JSON codecs use 
arbitrary

precision decimal, and in particular today's JS codecs use IEEE double
binary floating point. This approximates by default and creates a 
de-facto

standard that can't be compatibly extended without opt-in.


You might find the next link enlightening or perhaps even a pleasant 
diversion:


http://www.intertwingly.net/stories/2002/02/01/toInfinityAndBeyondTheQuestForSoapInteroperability.html 



Quick summary as it applies to this discussion: perfection is
unattainable (duh!) and an implementation which implements JSON
numbers as quad decimal will retain more precision than one that
implements JSON numbers as double binary (duh!).


DuhT^2 ;-).

But more than that: discounting the plain fact that on the web at least, 
SOAP lost to JSON (Google dropped its SOAP APIs a while ago), do you 
draw any conclusions?


My conclusion, crustier and ornerier as I age, is that mixed-mode 
arithmetic with implicit conversions and "best effort" approximation 
is a botch and a blight. That's why I won't have it in JSON, encoding 
*and* decoding.


My age differs from yours by a mere few months.

My point was not SOAP specific, but dealt with interop of such things as 
dates and dollars in a cross-platform setting.


My conclusion is that precision is perceived as a quality of 
implementation issue.  The implementations that preserve the most 
precision are perceived to be of higher quality than those that don't.


I view any choice which views binary64 as preferable to decimal128 as 
choosing *both* botch and blight.


Put another way, if somebody sends you a quantity and you send back the 
same quantity (i.e., merely round-trip the data), the originator will 
see it as being unchanged if their (the originator's) precision is less 
than or equal to that of the partner in this exchange.  This leads to a 
natural ordering of implementations from most-compatible to least.


A tangible analogy that might make sense to you, and might not.  Ever 
try rsync'ing *to* a Windows box?  Rsync from Windows to Windows works 
just fine.  Unix to Unix also.  As does Windows-Unix-Windows.  But 
Unix-Windows-Unix needs fudge parameters.  Do you really want to be 
the Windows in this equation?  :-)


- Sam Ruby

P.S.  You asked my opinion, and I've given it.  This is something I have
  an opinion on, but not something I view as an egregious error if the
  decision goes the other way.


Re: Revisiting Decimal (generic algorithms)

2009-01-16 Thread Mark S. Miller
On Fri, Jan 16, 2009 at 5:34 PM, Brendan Eich bren...@mozilla.com wrote:

 Have you looked at multimethods in Cecil?

 http://www.cs.washington.edu/research/projects/cecil/pubs/cecil-oo-mm.html
 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.48.8502


On your recommendation, I have. I really wanted to like it. I really tried
to like it. In the end I was repelled in horror at its complexity.



 Good discussion, let's keep it going.


Indeed. After I made a simple proposal 
https://mail.mozilla.org/pipermail/es-discuss/2009-January/008535.html,
Michael Daumling pointed out that Adobe had made a similar proposal that had
been rejected:

On Fri, Jan 9, 2009 at 7:56 AM, Michael Daumling mdaeu...@adobe.com wrote:

 The discussion about operator overloading quickly went away from the
 JavaScript'ish approach that ExtendScript and your proposal used towards
 generic functions. At some time, the discussion stranded in areas too exotic
 for me. There is a rationale here:
 http://wiki.ecmascript.org/doku.php?id=discussion:operators

The objections listed there are

I think this feature is too weak to be included. Here are some reasons why I
 think that:

- Uncontrollable subtleties in dispatch: Adding e.g. a "==" operator to one
class and then comparing an instance x of that class to a value y of
another type means that the result can easily differ depending on whether
the programmer writes x == y or y == x. (If y has an operator == too
then its operator will be preferred in the latter case.) The most the 
 author
of the == operator can do about this is to add types to the operator's
signature so that strict mode catches the bug or the program fails
predictably at run-time.

 I'd argue that this is a feature, not a bug. Whether an operator is
commutative depends on the meaning of that operator on that data type. x *
y should mean the same as y * x if they are scalar numbers but not if
they are matrices.



- No inheritance: in almost all cases we would wish that if instances
of A and B are comparable with a certain semantics then instances of their
respective subclasses C and D are too.

 That objection doesn't apply to my proposal. (I'm not sure it does to
Adobe's either.)


- No compositionality: As the operators are tied to classes, a program
that wishes to use two separately authored classes A and B cannot define
their relations in terms of operators, the classes must be altered because
they do not know about each other.

 Again, I'd argue that this is a feature, not a bug. Likewise, if I see the
expression x.foo(y) and the meaning of the foo operation does not treat
its operands opaquely, if neither x nor y know about each other's interface,
then I'd expect the operation to fail. If some party outside of x and y
could define a generic foo that could make this operation succeed anyway,
I'd consider that a bug.



 Including operators as currently proposed would probably give us a headache
 if we wish to introduce a more powerful feature (probably based on some sort
 of ad-hoc overloading) in the future.

Functional folks often refer to oo polymorphism (or late binding) as
ad-hoc polymorphism, to distinguish it from their parametric polymorphism.
If this is what is meant, then my proposal and Adobe's both provide ad-hoc
polymorphism. If, as I suspect, something else is meant, I await hearing
what it might be.


-- 
   Cheers,
   --MarkM


Re: Revisiting Decimal

2009-01-15 Thread Sam Ruby

Kris Zyp wrote:
 



Only if never compared to a double. How do you prevent this?

We already agree that the decimal-double comparison will always be
false.


Not strictly true.

(1m == 1) = true
(1m === 1) = false

It is only fractions which have denominators which are not a pure power 
of two within the precision of double precision floating point that will 
compare unequal.  In particular,


(1.75m == 1.75) = true
(1.76m == 1.76) = false
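
The dividing line is visible today without any decimal support: 
1.75 = 7/4 is a dyadic fraction with an exact double, 1.76 is not:

  (1.75).toFixed(20)  // "1.75000000000000000000" -- exact in binary
  (1.76).toFixed(20)  // "1.76000000000000000888" -- nearest double only
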

For most people, what that works out to mean is that integers compare 
equal, but fractions almost never do.  It is worth noting that comparing 
fractions that are the result of computations with double precision 
floating point for strict equality rarely works out in practice, one 
typically needs to take into account an epsilon.
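
The usual shape of such a test (the tolerance is application-specific):

  function nearlyEqual(a, b, eps) {
    return Math.abs(a - b) <= eps * Math.max(Math.abs(a), Math.abs(b));
  }
  0.1 + 0.2 === 0.3                    // false under binary64
  nearlyEqual(0.1 + 0.2, 0.3, 1e-15)   // true
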


 The point is that this is representative of real world code
 that benefits more from the treatment of decimals as numbers.

I agree with your overall argument that the real point of JSON is 
inter-language interoperability, and that when viewed from that 
perspective, any JSON support that goes into ECMAScript should 
interpret literals which contain a decimal point as decimal.  But that's 
just an opinion.


At the moment, the present state is that we have discussed at length 
what typeof(1m) and JSON.parse('[1.1]') should 
return.  And now we are revisiting both without any new evidence.


In the past, I have provided a working implementation, either as a 
standalone JSON interpreter, as a web service, or integrated into 
Firefox.  I could do so again, and provide multiple versions that differ 
only in how they deal with typeof and JSON.parse.


But first, we need to collectively decide what empirical tests would 
help us reach a different conclusion.


- Sam Ruby


Re: Revisiting Decimal

2009-01-15 Thread Kris Zyp
Unfortunately, I don't have enough time to continue the point by point
discussion. If the group feels typeof 1.1m -> "decimal", then so be
it, we can certainly handle that. My point was to show empirical
evidence that could hopefully be considered in the decision process.

As far as JSON goes, Dojo will encode decimals to numbers, there is
really no coherent alternative (encoding to strings would be even more
bizarre, and I can't think of any other option).

Kris

Brendan Eich wrote:
 On Jan 14, 2009, at 9:32 PM, Kris Zyp wrote:

 Of course, there is no decimal support in ES3, there is no other
 option.

 This is not strictly true:

 http://code.google.com/p/gwt-math/source/browse/trunk/gwt-math/js_originals/bigdecimal.js



 The point is that JSON peers that do math on numbers, to
 interoperate in general, need to parse and stringify to the same
 number type. It may be ok if only ints that fit in a double are
 used by a particular application or widget, but the syntax allows
 for fraction and exponent, which begs representation-type precision
 and radix questions.

 One of the major incentives for JSON is that it is
 interoperability between languages. If other implementations in
 other languages treat JSON's number as decimal than the assertion
 that I understood you were making that JSON number's are being
 universally expected to be treated as binary is not true.

 It's probably a mix, with application-dependent restrictions on
 domain and/or computation so that using either double or decimal
 works, or else buggy lack of such restrictions.


 JSON's numbers are decimal, languages that support decimals
 agree. Dojo _will_ convert JS decimal's to JSON numbers
 regardless of what path ES-Harmony takes with typeof, whether
 it requires a code change or not.

 That will break interoperatability between current
 implementations that use doubles not decimals.

 How so? And how did all the implementations that use decimals to
 interpret JSON numbers not break interoperability?

 Not necessarily. But correctness is not a matter of hopes or
 percentages. It may be fine for JSON to leave it to the app to
 choose number type and/or operations done on the data. But  some
 layer has to care. Some apps probably already depend on json2.js
 and json.js and the like (ES3.1's JSON built-in) using double, not
 decimal. Changing a future JSON codec to use decimal instead of
 double is not a backward-compatible change.


 So you are suggesting that we shouldn't let users pass mix of
 decimals and numbers even if they explicitly attempt to do so?

 No, I'm suggesting unintended mixed-mode bugs will be common if we
 make typeof 1.1m == "number".


 It's not beside my point. If significantly more real world code
 will break due to violating the expected invariant of a constant
 finite set of typeof results (and the expectation that numbers
 regardless of precision will be typeof -> "number") than those
 that break due to violating the expected invariant of
 typeof x == typeof y => (x == y <=> x === y)

 We can't measure this, realistically, but again: the breakage from
 a new typeof result is not dependent on the numeric value of the
 operand, and entails either a missing case, or a possibly
 insufficient default case, while the breakage from your proposal is
  subtly data-dependent.
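
 The shape of that breakage, assuming typeof 1.1m were "decimal" (a
 sketch; process is a placeholder):

   switch (typeof x) {
   case "number": process(x); break;        // decimal values never get here
   case "string": process(Number(x)); break;
   default: throw new Error("unexpected");  // ...they land here instead
   }
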

 Plus, the invariant (while not holy writ) is an important property
 of JS to conserve, all else equal.


 than I think we would be negligent as language designers to
 ignore that consideration.

 It's not a consideration if it can't be quantified, and if it
 introduces value-dependent numeric bugs. Decimal and double are
 different enough that typeof should tell the truth. 1.1m != 1.1,
 1.2m != 1.2, but 1.5m == 1.5.


 I understand the logical concerns, but I would love to see real
 empirical evidence that contradicts my suspicions.

 I gave some already, you didn't reply. Here's one, about
 dojotoolkit/dojo/parser.js:

 But if typeof 1.1m == "number", then str2obj around line 52 might
 incorrectly call Number on a decimal string literal that does not
 convert to double (which Number must do, for backward
 compatibility), 

 It won't do to assume your proposal saves effort and demand me to
 prove you wrong. First, no one has access to all the extant typeof
 x == "number" code to do the analysis and prove the majority of
 such code would work with your proposal. This is akin to proving
 a negative. Second, I've given evidence based on Dojo that shows
 incompatibility if typeof 1.1m == "number".

 How about we talk about an alternative: "use decimal" as a way to
 make all literals, operators, and built-ins decimal, never double?

 The problem with this big red switch is that it requires
 conversion from outside the lexical scope in which the pragma is
 enabled, since code outside could easily pass double data into
 functions or variables in the pragma's scope. It requires a
 decimal-based suite of Math, etc., 

Re: Revisiting Decimal

2009-01-15 Thread Bob Ippolito
On Thu, Jan 15, 2009 at 5:49 AM, Kris Zyp k...@sitepen.com wrote:



 Bob Ippolito wrote:
 On Wed, Jan 14, 2009 at 9:32 PM, Kris Zyp k...@sitepen.com wrote:




 Brendan Eich wrote:
 On Jan 14, 2009, at 7:38 PM, Kris Zyp wrote:
 You need to change this in any case, since even though
 the JSON
 RFC allows arbitrary precision decimal literals,
 real-world decoders only decode into IEEE doubles. You'd
 have to encode decimals as strings and decode them using
 domain-specific (JSON schema based) type knowledge.
 No, every Java JSON library I have seen

 You've seen http://www.json.org/json2.js It and the json.js
 alternative JS implementation are popular. json2.js contains
 String.prototype.toJSON = Number.prototype.toJSON =
 Boolean.prototype.toJSON = function (key) { return
 this.valueOf(); };
 Of course, there is no decimal support in ES3, there is no other
 option.
 parses (at least some, if not all) numbers to Java's
 BigDecimal.

 JSON has nothing to do wth Java, and most implementations do
 not have Java BigDecimal, so I don't know how it can be
 relevant.
 One of the major incentives for JSON is that it is
 interoperability between languages. If other implementations in
 other languages treat JSON's number as decimal than the assertion
 that I understood you were making that JSON number's are being
 universally expected to be treated as binary is not true.
 JSON's numbers are decimal, languages that support decimals
 agree. Dojo _will_ convert JS decimal's to JSON numbers
 regardless of what path ES-Harmony takes with typeof, whether
 it requires a code change or not.

 That will break interoperatability between current
 implementations that use doubles not decimals.
 How so? And how did all the implementations that use decimals to
 interpret JSON numbers not break interoperability?

 I'm pretty sure that interoperability is broken when they do this,
 it's just very subtle and hard to debug. I have the same stance as
 Brendan here, I've even refused to implement the capability to
 directly encode decimal as JSON numbers in my simplejson package
 (the de facto json for Python). If a user of the library controls
 both ends of the wire, they can just as easily use strings to
 represent decimals and work with them exactly how they expect on
 both ends of the wire regardless of what their JSON implementation
 happens to do.

 Imagine the person at the other end of the wire is using something
 like JavaScript or PHP. If the message contains decimals as JSON
 numbers they can not accurately encode or decode those messages
 unless they write their own custom JSON implementation. How do they
 even KNOW if the document is supposed to have decimal precision?
 What if the other end passes too many digits (often the case if one
 side is actually using doubles)? If they are passed around as
 strings then everyone can use the document just fine without any
 compatibility issues. The lack of a de jure number precision and
 the lack of a date/datetime type are definitely my biggest
 grievances with the JSON spec.
 Specifying number representations would be far more grievous in terms
 of creating tight-couplings with JSON data. It is essential that
 implementations are free to use whatever number representation they
 desire in order to facilitate a loose coupled interchange.


For decimals, I definitely disagree here. In languages that support
both float and decimal, it's confusing at best. You can only decode as
one or the other, and if you try and do any math afterwards with the
wrong type it will explode. In Python's case anyway, you can't even
convert a float directly to a decimal without explicitly going through
string first. simplejson raises an exception when you try and encode a
decimal unless you tell it differently, it makes you decide how they
should get represented.

In simplejson it's trivial to transcode decimal to float (or string or
anything else) during encoding, or to get all numbers back as
decimal... but you have to do it explicitly. Loosely coupled doesn't
have to mean lossy.
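
In ES terms, the string variant survives any peer's number type:

  JSON.parse('{"price":"1.10"}').price  // "1.10": digits intact, app decides
  JSON.parse('{"price":1.10}').price    // 1.1 as binary64 in today's engines
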

-bob


Re: Revisiting Decimal

2009-01-15 Thread Kris Zyp


Bob Ippolito wrote:
 On Thu, Jan 15, 2009 at 5:49 AM, Kris Zyp k...@sitepen.com wrote:




 Bob Ippolito wrote:
 On Wed, Jan 14, 2009 at 9:32 PM, Kris Zyp k...@sitepen.com
 wrote:




 Brendan Eich wrote:
 On Jan 14, 2009, at 7:38 PM, Kris Zyp wrote:
 You need to change this in any case, since even
 though the JSON
 RFC allows arbitrary precision decimal literals,
 real-world decoders only decode into IEEE doubles.
 You'd have to encode decimals as strings and decode
 them using domain-specific (JSON schema based) type
 knowledge.
 No, every Java JSON library I have seen

 You've seen http://www.json.org/json2.js It and the
 json.js alternative JS implementation are popular.
 json2.js contains String.prototype.toJSON =
 Number.prototype.toJSON = Boolean.prototype.toJSON =
 function (key) { return this.valueOf(); };
 Of course, there is no decimal support in ES3, there is no
 other option.
 parses (at least some, if not all) numbers to Java's
 BigDecimal.

 JSON has nothing to do wth Java, and most implementations
 do not have Java BigDecimal, so I don't know how it can
 be relevant.
 One of the major incentives for JSON is that it is
 interoperability between languages. If other implementations
 in other languages treat JSON's number as decimal than the
 assertion that I understood you were making that JSON
 number's are being universally expected to be treated as
 binary is not true.
 JSON's numbers are decimal, languages that support decimals
  agree. Dojo _will_ convert JS decimal's to JSON numbers
 regardless of what path ES-Harmony takes with typeof,
 whether it requires a code change or not.

 That will break interoperatability between current
 implementations that use doubles not decimals.
 How so? And how did all the implementations that use decimals
 to interpret JSON numbers not break interoperability?
 I'm pretty sure that interoperability is broken when they do
 this, it's just very subtle and hard to debug. I have the same
 stance as Brendan here, I've even refused to implement the
 capability to directly encode decimal as JSON numbers in my
 simplejson package (the de facto json for Python). If a user of
 the library controls both ends of the wire, they can just as
 easily use strings to represent decimals and work with them
 exactly how they expect on both ends of the wire regardless of
 what their JSON implementation happens to do.

 Imagine the person at the other end of the wire is using
 something like JavaScript or PHP. If the message contains
 decimals as JSON numbers they can not accurately encode or
 decode those messages unless they write their own custom JSON
 implementation. How do they even KNOW if the document is
 supposed to have decimal precision? What if the other end
 passes too many digits (often the case if one side is actually
 using doubles)? If they are passed around as strings then
 everyone can use the document just fine without any
 compatibility issues. The lack of a de jure number precision
 and the lack of a date/datetime type are definitely my biggest
 grievances with the JSON spec.
 Specifying number representations would be far more grievous in
 terms of creating tight-couplings with JSON data. It is essential
 that implementations are free to use whatever number
 representation they desire in order to facilitate a loose coupled
 interchange.


 For decimals, I definitely disagree here. In languages that support
  both float and decimal, it's confusing at best. You can only
 decode as one or the other, and if you try and do any math
 afterwards with the wrong type it will explode. In Python's case
 anyway, you can't even convert a float directly to a decimal
 without explicitly going through string first. simplejson raises an
 exception when you try and encode a decimal unless you tell it
 differently, it makes you decide how they should get represented.

 In simplejson it's trivial to transcode decimal to float (or string
 or anything else) during encoding, or to get all numbers back as
 decimal... but you have to do it explicitly. Loosely coupled
 doesn't have to mean lossy.

 -bob

Where is the loss coming from? JSON isn't doing any computations or
coercions, and ES would only be experiencing a loss when serializing
binary floats to JSON, but not with decimals. Decoders should be
allowed to be explicit and have control over how they choose to
internally represent the numbers they receive from JSON. Decimals in
string format doesn't change that fact, and is far more confusing.
Kris


--
Kris Zyp
SitePen
(503) 806-1841
http://sitepen.com


Re: Revisiting Decimal

2009-01-15 Thread Brendan Eich

On Jan 15, 2009, at 10:46 AM, Kris Zyp wrote:



Brendan Eich wrote:

On Jan 15, 2009, at 9:47 AM, Kris Zyp wrote:


Where is the loss coming from?


Decimal-using peer S1 encodes

{p:1.1, q:2.2}

Double-using peer C1 decodes, adds, and returns

{p:1.1, q:2.2, r:3.3000000000000003}

The sender then checks the result using decimal and finds an error.
Meanwhile the same exchange between S1 and decimal-using peer C2
succeeds without error.
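
A sketch of that exchange, assuming today's double-based JSON built-ins
(JSON.parse/JSON.stringify as specified in ES3.1):

var msg = '{"p":1.1,"q":2.2}';  // encoded by decimal-using S1
var obj = JSON.parse(msg);      // C1 decodes into IEEE doubles
obj.r = obj.p + obj.q;          // binary addition: 3.3000000000000003
JSON.stringify(obj);            // '{"p":1.1,"q":2.2,"r":3.3000000000000003}'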

/be


Exactly, C1 introduces the error by its use of binary.


This is not about blame allocation. The system has a problem because,  
even though JSON leaves numeric representation unspecified, higher  
layers fail to agree. That could be viewed as a JSON shortcoming, or  
it could be the fault of the higher layers. I don't want to debate  
which is to blame here right now (more below).


The point is that all the JS self-hosted JSON implementations I've  
seen, and (crucially) the ES3.1 native JSON implementation, use  
double, not decimal. This constitutes an interoperation hazard and it  
constrains compatible future changes to ES-Harmony -- specifically, we  
can't switch JSON from double to decimal by default, when decoding  
*or* encoding.




The JSON
encoding didn't introduce the error. JSON exactly represented the data
it was given,


JSON the RFC is about syntax. It doesn't say more than "An
implementation may set limits on the range of numbers" regarding
semantics of implementations.


Actual implementations that use double and decimal will not  
interoperate for all values. That means not interoperate.




and the decimal decoding and encoding peer refrains from
introducing errors as well.


Assuming no errors is nice. (What color is the sky in your world? :-P)  
Meanwhile, back on this planet we were debating the best way to reduce  
the likelihood of errors when adding decimal to JS. Back to that debate:




Are you arguing that all JSON interaction relies on other peers
introducing errors from binary floating-point computations? I already
disproved that idea by pointing out there are existing implementations
that use decimal and thus don't add such errors.


You didn't prove or disprove anything, you hand-waved. For all you or  
I know, and Bob Ippolito agrees, there are latent, hard-to-find bugs  
already out there.


Our bone of contention specifically was making (typeof 1.1m ==  
number), and I am certain that this increases the likelihood of such  
bugs. It cannot reduce the likelihood of such bugs. Whether it means  
more work for Dojo and other frameworks that would need to adapt to  
the addition of decimal to a future ES spec is secondary, or really  
just TANSTAAFL.




If you are arguing
that certain client-server couples have become dependent on
these errors, there are a couple of faults in this logic. First, the error
dependencies experienced in languages are almost always going to be
internalized by devs. The fact that there may be a number of internal error
expectations in existence does not imply a significant number of
inter-machine rounding error expectation dependencies.


To the extent I understand what you mean here, I can only disagree --  
completely!


You're arguing by assertion that rounding errors due to double's  
finite binary precision, which are the most reported JS bug at https://bugzilla.mozilla.org 
, are somehow insignificant when JSON transcoding is in the mix.  
That's a bold assertion.




Second, I am not
asserting that JSON decoding should automatically convert JSON numbers
to binary, only that JSON encoding should serialize decimals to
numbers.


What should JSON.parse use then, if not double (binary)? JSON.parse is  
in ES3.1, and decimal is not.




In your example, if a server is sending these JSON values to
C1 ES-harmony based peer for computation, the computations will still
take place in binary,


Ok, hold that thought.



unless we were to add some mechanism for
explicitly specifying how numbers should be decoded.


Let's say we don't. I showed interoperation failure if two peers, C1  
and C2, fail to decode to the same numeric type. You now seem to be  
agreeing that binary should be the one type JSON number syntax decodes  
into. Great, except you want encoders to stringify decimal literals as  
JSON numbers.


This means 1.1m and 1.1 both encode as '1.1' but do not compare as ==  
or === (whatever happens with typeof and the typeof/==/===   
invariant). It also means


123345678901234567890.1234567890123m

encodes as

'123345678901234567890.1234567890123'

but decodes as binary

123345678901234570000

which breaks round-tripping, which breaks interoperation.
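
Concretely (the decimal literal is illustrative; only the double path is
runnable today):

// a decimal-using encoder emits the digits as a JSON number:
// '{"x": 123345678901234567890.1234567890123}'
// a double-based peer decodes and re-encodes:
Number('123345678901234567890.1234567890123').toString();
// '123345678901234570000' -- the fraction is gone, the integer rounded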

Am I flogging a dead horse yet?

Long ago in the '80s there was an RPC competition between Sun and  
Apollo (defunct Massachusetts-based company, but the RPC approach  
ended up in DCE), with both sides attempting to use open specs and  
even open source to build alliances. Bob Lyon of Sun argued  

Re: Revisiting Decimal

2009-01-15 Thread Bob Ippolito
On Thu, Jan 15, 2009 at 12:32 PM, Kris Zyp k...@sitepen.com wrote:



 Brendan Eich wrote:
 On Jan 15, 2009, at 10:46 AM, Kris Zyp wrote:


 Brendan Eich wrote:
 On Jan 15, 2009, at 9:47 AM, Kris Zyp wrote:

 Where is the loss coming from?

 Decimal-using peer S1 encodes

 {p:1.1, q:2.2}

 Double-using peer C1 decodes, adds, and returns

 {p:1.1, q:2.2, r:3.3000000000000003}

 The sender then checks the result using decimal and finds an
 error. Meanwhile the same exchange between S1 and decimal-using
 peer C2 succeeds without error.

 /be

 Exactly, C1 introduces the error by its use of binary.

 This is not about blame allocation. The system has a problem
 because, even though JSON leaves numeric representation
 unspecified, higher layers fail to agree. That could be viewed as a
 JSON shortcoming, or it could be the fault of the higher layers. I
 don't want to debate which is to blame here right now (more below).


 The point is that all the JS self-hosted JSON implementations I've
 seen, and (crucially) the ES3.1 native JSON implementation, use
 double, not decimal. This constitutes an interoperation hazard and
 it constrains compatible future changes to ES-Harmony --
 specifically, we can't switch JSON from double to decimal by
 default, when decoding *or* encoding.
 How do you switch to double or decimal by default on encoding? The
 input defines it, not any default setting.


 The JSON encoding didn't introduce the error. JSON exactly
 represented the data it was given,

 JSON the RFC is about syntax. It doesn't say more than "An
 implementation may set limits on the range of numbers" regarding
 semantics of implementations.

 Actual implementations that use double and decimal will not
 interoperate for all values. That means not interoperate.


 and the decimal decoding and encoding peer refrains from
 introducing errors as well.

 Assuming no errors is nice. (What color is the sky in your world?
 :-P) Meanwhile, back on this planet we were debating the best way
 to reduce the likelihood of errors when adding decimal to JS. Back
 to that debate:
 3.3 is exactly the sum of 1.1 and 2.2 without errors as decimal math
 produces here in the blue sky world (I was going off your example).
 Utah may have unusually blue skies though, it is the desert :).

Depending on the algorithm that a double-using client side uses to
print floats as decimal, they may not even be able to retain a decimal
number even without doing any math operations.

>>> simplejson.dumps(simplejson.loads('{"num": 3.3}'))
'{"num": 3.2999999999999998}'

simplejson uses the repr() representation for encoding floats. I
forget the exact inputs for which it is wrong, but Python's str()
representation for float does not round-trip properly all of the time
on all platforms.

-bob


Re: Revisiting Decimal

2009-01-15 Thread Bob Ippolito
On Thu, Jan 15, 2009 at 12:44 PM, Bob Ippolito b...@redivi.com wrote:
 [earlier quoted text trimmed -- same text as the previous message]

I was able to dig up a specific example that is reproducible here on
Mac OS X with Python 2.5.2:

>>> float(str(1617161771.7650001)) == 1617161771.7650001
False

Firefox 3.0.5 does a better job here:

>>> parseFloat((1617161771.7650001).toString()) === 1617161771.7650001
true

-bob


Re: Revisiting Decimal

2009-01-15 Thread Brendan Eich

On Jan 15, 2009, at 12:32 PM, Kris Zyp wrote:


we can't switch JSON from double to decimal by
default, when decoding *or* encoding.

How do you switch to double or decimal by default on encoding? The
input defines it, not any default setting.


A JSON encoder in a current self-hosted or native ES3.1 implementation  
sees a number (binary double precision) and encodes it using JSON  
number syntax. If we add decimal, you want the encoder to stringify a  
decimal value as a JSON number too. That is a choice -- a design  
decision (and a mistake :-/).


The alternatives are to stringify or to throw, requiring a custom
(non-default) hook to be used as Bob's simplejson package allows.




3.3 is exactly the sum of 1.1 and 2.2 without errors as decimal math
produces


None of these numbers is exactly representable using binary finite  
precision. Depending on the JSON codec implementation, they may not  
even round trip to string and back -- see Bob's reply.




You are saying there are latent hard-to-find bugs because people believe
that JSON somehow implies that the sum of {p:1.1, q:2.2} must be
3.3000000000000003?


I never wrote any such thing.

Please look at the previous messages again. If C1 uses double to  
decode JSON from S1 but C2 uses decimal, then results can differ  
unexpectedly (from S1's point of view). Likewise, encoding decimal  
using JSON's number syntax also breaks interoperation with  
implementations using double to decode and (re-)encode.




If people are returning 3.3, then the argument
that JSON numbers are universally treated computed as binary is not
valid. Is there a less hand-wavy way of stating that?


I don't know what treated computed as binary means, even if I delete  
one of treated and computed. JSON number syntax may be encoded  
from decimals by some implementations, and decoded into decimals too.  
This is not interoperable with implementations that transcode using  
doubles. Period, full stop.




I thought JSON serialization and typeof results could be considered
separate issues.


You brought up Dojo code examples including Dojo's JSON codec as  
evidence that defining typeof 1.1m == number would relieve you of  
having to change that codec while at the same time preserving  
correctness. I replied showing that the preserving correctness claim  
in that case is false, and the relief from having to evolve the codec  
was an obligation.


We then talked more about JSON than typeof, but the two are related:  
in both JSON *implementations* and your proposed typeof 1.1m ==
"number" && typeof 1.1 == "number" world, incompatible number formats
are conflated. This is a mistake.




You're arguing by assertion that rounding errors due to double's
finite binary precision, which are the most reported JS bug at
https://bugzilla.mozilla.org, are somehow insignificant when JSON
transcoding is in the mix. That's a bold assertion.

The issue here is relying on another machine to do a computation. I
have trouble believing that all these people that are experiencing
rounding errors are then using these client-side computations for
their server.


Please. No one wrote "all these people". We're talking about subtle
latent and future bugs, likelihoods of such bugs (vs. ruling them out  
by not conflating incompatible number types). Correctness is not a  
matter of wishful thinking or alleged good enough current-code  
behavior.




The compensation for rounding errors that we are
concerned about is usually going to be kept as close to the error as
possible. Why would you build a client-server infrastructure around
it?


People do financial stuff in JS. No medical equipment or rocket  
control yet, AFAIK (I could be wrong). I'm told Google Finance uses  
integral double values to count pennies. It would not be surprising if  
JSON transcoding were already interposed between parts of such a  
system. And it should be possible to do so, of course -- one can  
always encode bignums or bigdecimals in strings.
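
The integral-pennies trick works because integers up to 2^53 are exact in
a double and survive JSON round trips; exact decimal values can likewise
ride along as strings (field names are illustrative):

var order = {cents: 1999};                        // $19.99 in pennies
JSON.parse(JSON.stringify(order)).cents === 1999; // true on any double peer
var exact = JSON.stringify({amount: "19.99"});    // string-encoded decimal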


What's at issue between us is whether the default encoding of decimal  
should use JSON's number syntax. If someone builds client-server  
infrastructure, uses JSON in the middle, and switches from double  
today to decimal tomorrow, what can go wrong if we follow your  
proposal and encode decimals using JSON number syntax? Assume the JSON  
is not in the middle of a closed network where one entity controls the  
version and quality of all peer software. We can't assume otherwise in  
the standard.




What should JSON.parse use then, if not double (binary)? JSON.parse
is in ES3.1, and decimal is not.

It should use double. I presume that if a use decimal pragma or a
switch was available, it might parse to decimal, but the default would
be double, I would think.


Good, we agreed on decoding to double already but it's great to  
confirm this.




which breaks round-tripping, which breaks interoperation.

JSON doesn't round-trip JS, and it never will.


That's a complete straw man. Yes, 

Re: Revisiting Decimal

2009-01-15 Thread Brendan Eich

On Jan 15, 2009, at 3:05 PM, Kris Zyp wrote:


I am
more ambivalent on typeof 1.1m than on what seems to me to be a
more obvious mistake of throwing on JSON serialization of decimals.


Good to hear. Ambivalence should not be a stable state, though. If we  
can get to typeof agreement, let's do so. It seems to me ambivalence  
should mean no, not yes.



[hundreds of cited lines deleted -- please trim when replying. /be]




 If you really endorse "receiver
makes it right", give a spirited and explicit defense.

JSON already encodes in decimal.


No, you are assuming your conclusion again. "JSON already encodes in
decimal" meaning "JSON encoding writes base ten floating point with
optional exponent numbers" does not employ the same meaning of the
word "decimal" as the decimal type contemplated for Harmony. The
latter has semantics in addition to syntax. JSON's RFC says almost  
nothing about semantics.




Do you want a defense of how a
receiver should make right what is already right? We can't argue about
whether JSON should have a more explicit type system. JSON is frozen.


Sorry, seems to me you are ducking again. Please address the  
incompatible change from JSON in self-hosted codecs and the native
ES3.1 codec encoding only doubles using JSON number syntax, vs. your  
proposal where with decimal added to the language we encode decimal  
and double as JSON number syntax.




All decimal use is
opt-in anyway, there is no breakage for existing code when the VM is
upgraded.


What do you mean by opt-in?

If JSON peer Alice starts using decimal and encoding financial data  
sent to existing, double-based peer Bob, does Alice lose money? If no,  
show how. If yes, then show why such a bug is not a fatal flaw in your  
proposal.


/be


JSON numbers (was: Revisiting Decimal)

2009-01-15 Thread David-Sarah Hopwood
Brendan Eich wrote:
 On Jan 14, 2009, at 7:38 PM, Kris Zyp wrote:
 
 You need to change this in any case, since even though the JSON
 RFC allows arbitrary precision decimal literals, real-world
 decoders only decode into IEEE doubles. You'd have to encode
 decimals as strings and decode them using domain-specific (JSON
 schema based) type knowledge.
 No, every Java JSON library I have seen
 
 You've seen
 
 http://www.json.org/json2.js
 
 It and the json.js alternative JS implementation are popular. json2.js
 contains
 
 String.prototype.toJSON =
 Number.prototype.toJSON =
 Boolean.prototype.toJSON = function (key) {
 return this.valueOf();
 };
 
 parses (at least some, if not
 all) numbers to Java's BigDecimal.
 
 JSON has nothing to do wth Java, and most implementations do not have
 Java BigDecimal, so I don't know how it can be relevant.

There seems to be a misconception here.

JSON is a general-purpose data description language. It happens to have
a syntax that is *almost* (see below) a subset of ECMAScript's syntax,
but it was explicitly designed to be useful across languages, and for
data formats that are specified independently of programming language.
The de facto bindings of JSON to other languages are therefore just
as relevant as its bindings to ECMAScript, in determining what its
de facto semantics are.

One of the ways in which JSON syntax is not a subset of ECMAScript
syntax, is its definition of numbers. JSON numbers are effectively
arbitrary-precision decimals: if you change a JSON number in a given
document at any decimal place, then you are changing the meaning of
the document, even if the number would round to the same IEEE double
value.
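
For instance, these two documents differ as JSON even though both number
literals round to the same IEEE double:

JSON.parse('{"x": 0.30000000000000004}').x ===
  JSON.parse('{"x": 0.3000000000000000411}').x;  // true for a double decoder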

The fact that some language bindings may (with specified options)
implicitly round numbers to the language's IEEE double type when
parsing a JSON document, does not contradict this. In doing so, the
binding is giving an approximation of the meaning of the document.
That approximation might be good enough for a particular use of JSON;
if it isn't, then the application programmer should use a different
binding, or different options to the binding.

None of this dictates a specific answer to what 'typeof aDecimal'
should return, but it needs to be clarified lest we do violence to
JSON's intended semantics by making incorrect assumptions about it.

-- 
David-Sarah Hopwood


Re: JSON numbers (was: Revisiting Decimal)

2009-01-15 Thread Brendan Eich

On Jan 15, 2009, at 6:07 PM, David-Sarah Hopwood wrote:


That approximation might be good enough for a particular use of JSON;
if it isn't, then the application programmer should use a different
binding, or different options to the binding.


To cut past all the preaching to the choir: the argument we were  
having, apart from any confusions, is about what the default options  
to the ES Harmony binding should be.




None of this dictates a specific answer to what 'typeof aDecimal'
should return, but it needs to be clarified lest we do violence to
JSON's intended semantics by making incorrect assumptions about it.


JSON's intended semantics may be arbitrary precision decimal (the RFC  
is neither explicit nor specific enough in my opinion; it mentions  
only range, not precision), but not all real-world JSON codecs use  
arbitrary precision decimal, and in particular today's JS codecs use  
IEEE double binary floating point. This approximates by default and  
creates a de-facto standard that can't be compatibly extended without  
opt-in.


/be


JSON numbers (was: Revisiting Decimal)

2009-01-15 Thread David-Sarah Hopwood
Brendan Eich wrote:
 Long ago in the '80s there was an RPC competition between Sun and Apollo
 (defunct Massachusetts-based company, but the RPC approach ended up in
 DCE), with both sides attempting to use open specs and even open source
 to build alliances. Bob Lyon of Sun argued eloquently for one standard
 type system for senders and receivers. Paul (I forget his last name) of
 Apollo argued for receiver makes it right to allow the funky
 non-IEEE-754 floating point formats of the day to be used at the
 convenience of the sender. E.g. Cray and DEC VAX senders would not have
 to transcode to the IEEE-754 lingua franca, they could just blat out the
 bytes in a number in some standard byte order.
 
 Bob Lyon's rebuttal as I recall it was two-fold: 1. receiver makes it
 right is really receiver makes it wrong, because the many types and
 conversions are a fertile field for bugs and versionitis problems among
 peers. 2. There should indeed be a lingua franca -- you should be
 able to put serialized data in a data vault for fifty years and hope to
 read it, so long as we're still around, without having to know which
 sender sent it, what floating point format that obsolete sender used, etc.
 
 The two points are distinct. Complexity makes for versionitis and bug
 habitat, but lack of a single on-the-wire *semantic* standard makes for
 a Tower of Babel scenario, which means data loss. Fat, buggy, and lossy
 are no way to go through college!
 
 See http://www.kohala.com/start/papers.others/rpc.comments.txt for more.

This analogy fails to reflect the design of JSON.

JSON does specify a common encoding and type of numbers:
arbitrary-precision decimal, encoded as a decimal string. This is very
different from blat[ting] out the bytes in a number in some standard byte
order. (The difference is not text vs binary; it's using a standardized
encoding rather than the sender's encoding.)

You can argue, if you want, that the common encoding and type shouldn't
have been arbitrary-precision decimal. But the JSON RFC says what it says,
and for better or worse, it does not mention any binary floating point
types, or have any way to explicitly specify multiple number
representations. ES-Harmony must support JSON as it is designed;
changing JSON is not in scope for TC39, and is quite unlikely to happen
independently.

It is true that if the designer of a JSON-based format wants to simplify
its use across languages, then it would be helpful to implementors of the
format, all else being equal, if it did not assume that numbers will
be preserved with greater precision or range than an IEEE double.

However, if a JSON-based format *needs* numbers that are preserved
to greater precision or range than IEEE double, then JSON numbers
are still a perfectly suitable type to encode them (the only vaguely
reasonable alternative would be to encode the numbers as strings,
which doesn't solve anything; it just makes for a more obscure use
of JSON).

In that case, implementors of the format must find a JSON language
binding that supports a type with the required precision and range --
but such bindings are widely available for languages that have such
types (decimal or otherwise) built-in or supported by a standard library.
Presumably, ES-Harmony will be such a language, and its JSON library
will be extended to have an option to decode JSON numbers to decimals.

(ES-Harmony's proposed decimal type still isn't arbitrary-precision,
of course, but it will have sufficient precision and range to support
a larger set of uses than IEEE double.)
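
The shape such an option might take, purely as illustration (no such API
is specified anywhere):

// JSON.parse(text)                        // default: numbers become doubles
// JSON.parse(text, {numbers: "decimal"})  // opt-in: numbers become decimals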

 We are not condemned to repeat history if we pay attention to what went
 before. JSON implementations in future ES specs cannot by default switch
 either encoding or decoding to use decimal instead of number.

Of course not, but they can easily provide a non-default switch to do
so. They can also encode both ES numbers and ES decimals to JSON numbers,
as Kris has already indicated that Dojo plans to do. (This encoding is
lossy in general, but not necessarily lossy for a given use of JSON.)

 JSON does not follow the path of other formats that attempt to dictate
 tight language-type couplings. In all cases, peers can ultimately
 choose how they want to internally handle the data provided by JSON.
 JSON is pure data format, not computation prescription and won't
 dictate how computations are performed.
 
 Yeah, yeah -- I know that and said as much. The argument is not about
 JSON as a syntax spec, it's about what we do in the implementation in
 ES3.1, where we have to make semantic choices. Including types and
 typeof, including future proofing.

AFAICS the most helpful thing in that respect would be to give
programmers some expectation about how decimal is likely to be supported,
so that they will have the maximum amount of time to future-proof their
code, and it will be more likely that it Just Works when decimal starts
to be supported by implementations. (Obviously they will still need to
test on 

Re: JSON numbers (was: Revisiting Decimal)

2009-01-15 Thread Sam Ruby
On Thu, Jan 15, 2009 at 9:24 PM, Brendan Eich bren...@mozilla.com wrote:

 JSON's intended semantics may be arbitrary precision decimal (the RFC is
 neither explicit nor specific enough in my opinion; it mentions only
 range, not precision), but not all real-world JSON codecs use arbitrary
 precision decimal, and in particular today's JS codecs use IEEE double
 binary floating point. This approximates by default and creates a de-facto
 standard that can't be compatibly extended without opt-in.

You might find the next link enlightening or perhaps even a pleasant diversion:

http://www.intertwingly.net/stories/2002/02/01/toInfinityAndBeyondTheQuestForSoapInteroperability.html

Quick summary as it applies to this discussion: perfection is
unattainable (duh!) and an implementation which implements JSON
numbers as quad decimal will retain more precision than one that
implements JSON numbers as double binary (duh!).

- Sam Ruby


Re: JSON numbers (was: Revisiting Decimal)

2009-01-15 Thread David-Sarah Hopwood
David-Sarah Hopwood wrote:
 Brendan Eich wrote:
 We are not condemned to repeat history if we pay attention to what went
 before. JSON implementations in future ES specs cannot by default switch
 either encoding or decoding to use decimal instead of number.
 
 Of course not, but they can easily provide a non-default switch to do
 so.

I meant the preceding sentence to apply to decoding. It is simply
incorrect to say that JSON implementations in future ES specs could
not *encode* ES decimals as JSON numbers.

 They can also encode both ES numbers and ES decimals to JSON numbers,
 as Kris has already indicated that Dojo plans to do. (This encoding is
 lossy in general, but not necessarily lossy for a given use of JSON.)

-- 
David-Sarah Hopwood ⚥



Re: JSON numbers (was: Revisiting Decimal)

2009-01-15 Thread Brendan Eich

On Jan 15, 2009, at 7:46 PM, David-Sarah Hopwood wrote:


David-Sarah Hopwood wrote:

Brendan Eich wrote:
We are not condemned to repeat history if we pay attention to what  
went
before. JSON implementations in future ES specs cannot by default  
switch

either encoding or decoding to use decimal instead of number.


Of course not, but they can easily provide a non-default switch to do
so.


I meant the preceding sentence to apply to decoding. It is simply
incorrect to say that JSON implementations in future ES specs could
not *encode* ES decimals as JSON numbers.


It is simply bad manners to assert without proof.

What happens when future codecs send decimals that round badly or lose  
precision fatally to older codecs? You can say those are just bugs to  
be fixed by someone, but that avoids responsibility for avoiding the  
situation in the first place.


You are assuming that approximating decimals encoded in JSON but  
decoded into doubles is acceptable. I don't think that's a safe  
assumption.


/be


Re: Revisiting Decimal

2009-01-14 Thread Kris Zyp


Brendan Eich wrote:
 On Jan 9, 2009, at 3:08 PM, Kris Zyp wrote:

 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1


 The counter-argument is strong:

 typeof x == typeof y => (x == y <=> x === y)

 but 1.1 != 1.1m for fundamental reasons.
 I understand the counter-argument, but with such an overwhelming
 number of typeof uses having far easier migration with number,

 Migration how? You'll have to change something to use decimal or
 s/1.1/1.1m/. Only once you do that can you be sure about all
 operands being decimal.

And I am sure our users will do that and pass decimals into our
library functions.
 I'm assuming it would be bad in the Dojo code you've looked at if
 1.1 came in from some standard library that returns doubles, and was
 tested against 1.1m via == or === with false result, where previous
 to decimal being added, the result would be true.
I am not aware of any situations in the Dojo codebase where this would
cause a problem. I can't think of any place where we use an
equivalence test and users would expect that decimal behave in the
same way as a double. Do you have any expected pitfalls that I could
look for in Dojo?



 I can't possibly see how the desire to preserve this property is more
 important than better usability for the majority use cases.

 You really need to show some of these use cases from Dojo. I have a
 hard time believing you've ruled out mixed-mode accidents.
Ok, sounds good, I will be glad to be corrected if I am misunderstanding
this. Here are some of the places where I believe we would probably
add extra code to handle the case of typeof checks where decimal
values may have been passed in by users, and we would want the
behavior to be the same as a number:
As I have mentioned before, we would need to change our JSON
serializer to handle decimal:
http://archive.dojotoolkit.org/nightly/dojotoolkit/dojo/_base/json.js
(line 118)
Our parser function would need to add support for decimal
http://archive.dojotoolkit.org/nightly/dojotoolkit/dojo/parser.js
(line 32)
Matrix math handling for our graphics module:
http://archive.dojotoolkit.org/nightly/dojotoolkit/dojox/gfx/matrix.js
(line 88 is one example)
Actually there are numerous situations in the graphics packages where
a decimal should be acceptable for defining coordinates, scaling, etc.:
http://archive.dojotoolkit.org/nightly/dojotoolkit/dojox/gfx/
Charting also has a number of places where decimals should be an
acceptable form of a number:
http://archive.dojotoolkit.org/nightly/dojotoolkit/dojox/charting/
For example:
http://archive.dojotoolkit.org/nightly/dojotoolkit/dojox/charting/action2d/Magnify.js
(line 22)

Again, I understand there are difficulties with typeof 1.1m returning
number, but in practice it seems we would experience far more pain
with decimal.

--
Kris Zyp
SitePen
(503) 806-1841
http://sitepen.com



Re: Revisiting Decimal

2009-01-14 Thread Brendan Eich

On Jan 14, 2009, at 4:44 PM, Kris Zyp wrote:


Brendan Eich wrote:

On Jan 9, 2009, at 3:08 PM, Kris Zyp wrote:

The counter-argument is strong:

typeof x == typeof y => (x == y <=> x === y)

but 1.1 != 1.1m for fundamental reasons.

I understand the counter-argument, but with such an overwhelming
number of typeof uses having far easier migration with number,


Migration how? You'll have to change something to use decimal or
s/1.1/1.1m/. Only once you do that can you be sure about all
operands being decimal.


And I am sure our users will do that and pass decimals into our
library functions.


I'm not disputing that. Straw man?



I'm assuming it would be bad in the Dojo code you've looked at if
1.1 came in from some standard library that returns doubles, and was
tested against 1.1m via == or === with false result, where previous
to decimal being added, the result would be true.

I am not aware of any situations in the Dojo codebase where this would
cause a problem. I can't think of any place where we use an
equivalence test and users would expect that decimal behave in the
same way as a double. Do you have any expected pitfalls that I could
look for in Dojo?


Sure, starting with JSON (see below).


I can't possibly see how the desire to preserve this property is  
more

important than better usability for the majority use cases.


You really need to show some of these use cases from Dojo. I have a
hard time believing you've ruled out mixed-mode accidents.

Ok, sounds good, I will be glad to be corrected if I am misunderstanding
this. Here are some of the places where I believe we would probably
add extra code to handle the case of typeof checks where decimal
values may have been passed in by users, and we would want the
behavior to be the same as a number:
As I have mentioned before, we would need to change our JSON
serializer to handle decimal:
http://archive.dojotoolkit.org/nightly/dojotoolkit/dojo/_base/json.js
(line 118)


You need to change this in any case, since even though the JSON RFC  
allows arbitrary precision decimal literals, real-world decoders only  
decode into IEEE doubles. You'd have to encode decimals as strings and  
decode them using domain-specific (JSON schema based) type knowledge.




Our parser function would need to add support for decimal
http://archive.dojotoolkit.org/nightly/dojotoolkit/dojo/parser.js
(line 32)


You're right, this parser would need to be extended. But if typeof  
1.1m == number, then str2ob around line 52 might incorrectly call  
Number on a decimal string literal that does not convert to double  
(which Number must do, for backward compatibility), or else return a  
double NaN (not the same as a decimal NaN, although it's hard to tell  
-- maybe impossible?).


It seems to me you are assuming that decimal and double convert to and  
from string equivalently. This is false.
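
A sketch of that hazard, assuming a decimal value reaches str2ob in string
form with the proposed m suffix:

Number("1.1");   // 1.1, a double -- fine for today's code
Number("1.1m");  // NaN -- a decimal-aware parser must special-case this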




Matrix math handling for our graphics module:
http://archive.dojotoolkit.org/nightly/dojotoolkit/dojox/gfx/matrix.js
(line 88 is one example)


I couldn't be sure, but by grepping for \.xx and \.yy I think I saw  
only generic arithmetic operators and no mode mixing, so you're  
probably right that this code would work if typeof 1.1m == number.  
Someone should take a closer look:


:g/\.xx/p
this.xx = this.yy = arg;
matrix.xx = l.xx * r.xx + l.xy * r.yx;
matrix.xy = l.xx * r.xy + l.xy * r.yy;
matrix.yx = l.yx * r.xx + l.yy * r.yx;
matrix.dx = l.xx * r.dx + l.xy * r.dy + l.dx;
D = M.xx * M.yy - M.xy * M.yx,
yx: -M.yx/D, yy: M.xx/D,
dy: (M.yx * M.dx - M.xx * M.dy) / D
return {x: matrix.xx * x + matrix.xy * y + matrix.dx, y: matrix.yx * x + matrix.yy * y + matrix.dy}; // dojox.gfx.Point
M.xx = l.xx * r.xx + l.xy * r.yx;
M.xy = l.xx * r.xy + l.xy * r.yy;
M.yx = l.yx * r.xx + l.yy * r.yx;
M.dx = l.xx * r.dx + l.xy * r.dy + l.dx;


:g/\.yy/p
this.xx = this.yy = arg;
matrix.xy = l.xx * r.xy + l.xy * r.yy;
matrix.yx = l.yx * r.xx + l.yy * r.yx;
matrix.yy = l.yx * r.xy + l.yy * r.yy;
matrix.dy = l.yx * r.dx + l.yy * r.dy + l.dy;
D = M.xx * M.yy - M.xy * M.yx,
xx: M.yy/D, xy: -M.xy/D,
dx: (M.xy * M.dy - M.yy * M.dx) / D,
return {x: matrix.xx * x + matrix.xy * y + matrix.dx, y: matrix.yx * x + matrix.yy * y + matrix.dy}; // dojox.gfx.Point
M.xy = l.xx * r.xy + l.xy * r.yy;
M.yx = l.yx * r.xx + l.yy * r.yx;
M.yy = l.yx * r.xy + l.yy * r.yy;
M.dy = l.yx * r.dx + l.yy * r.dy + l.dy;


I did not look at other files.



Actually there are numerous situations in the 

Re: Revisiting Decimal

2009-01-14 Thread Kris Zyp


 You need to change this in any case, since even though the JSON
 RFC allows arbitrary precision decimal literals, real-world
 decoders only decode into IEEE doubles. You'd have to encode
 decimals as strings and decode them using domain-specific (JSON
 schema based) type knowledge.
No, every Java JSON library I have seen parses (at least some, if not
all) numbers to Java's BigDecimal. JSON's numbers are decimal,
languages that support decimals agree. Dojo _will_ convert JS
decimals to JSON numbers regardless of what path ES-Harmony takes
with typeof, whether it requires a code change or not.

 Our parser function would need to add support for decimal
 http://archive.dojotoolkit.org/nightly/dojotoolkit/dojo/parser.js
 (line 32)

 You're right, this parser would need to be extended. But if
 typeof
 1.1m == number, then str2ob around line 52 might incorrectly call
  Number on a decimal string literal that does not convert to double
  (which Number must do, for backward compatibility), or else return
 a double NaN (not the same as a decimal NaN, although it's hard to
 tell -- maybe impossible?).

 It seems to me you are assuming that decimal and double convert
 to
 and from string equivalently. This is false.


 Actually there are numerous situations in the graphics packages
 where a decimal should be acceptable for defining coordinates,
 scaling, etc.:
 http://archive.dojotoolkit.org/nightly/dojotoolkit/dojox/gfx/

 Only if never compared to a double. How do you prevent this?
We already agree that the decimal-double comparison will always be
false. The point is that this is representative of real world code
that benefits more from the treatment of decimals as numbers.



 Charting also has a number of places where decimals should be an
 acceptable form of a number:
 http://archive.dojotoolkit.org/nightly/dojotoolkit/dojox/charting/
 For example:
 http://archive.dojotoolkit.org/nightly/dojotoolkit/dojox/charting/action2d/Magnify.js
  (line 22)

 I will look at these later as time allows, pending replies on
 above points.


 Again, I understand there are difficulties with typeof 1.1m
 returning number, but in practice it seems we would experience
 far more pain with decimal.

 Trouble for you Dojo maintainers but savings for users. You may
 have to do a bit more work to avoid imposing bugs on your users.
 That's life in the big Dojo city.
If that's true, that's fine, I have no problem with Dojo feeling the
pain for the sake of others, but I still find it very surprising that
Dojo code would be so misrepresentative of real code out there today.
Dojo covers a very broad swath of topics. Do you really think real
world JS is that much different than Dojo's?
Kris



 /be





--
Kris Zyp
SitePen
(503) 806-1841
http://sitepen.com



Re: Revisiting Decimal

2009-01-14 Thread Brendan Eich

On Jan 14, 2009, at 7:38 PM, Kris Zyp wrote:




You need to change this in any case, since even though the JSON

RFC allows arbitrary precision decimal literals, real-world
decoders only decode into IEEE doubles. You'd have to encode
decimals as strings and decode them using domain-specific (JSON
schema based) type knowledge.

No, every Java JSON library I have seen


You've seen

http://www.json.org/json2.js

It and the json.js alternative JS implementation are popular. json2.js  
contains


String.prototype.toJSON =
Number.prototype.toJSON =
Boolean.prototype.toJSON = function (key) {
return this.valueOf();
};



parses (at least some, if not
all) numbers to Java's BigDecimal.


JSON has nothing to do wth Java, and most implementations do not have  
Java BigDecimal, so I don't know how it can be relevant.




JSON's numbers are decimal,
languages that support decimals agree. Dojo _will_ convert JS
decimals to JSON numbers regardless of what path ES-Harmony takes
with typeof, whether it requires a code change or not.


That will break interoperability between current implementations  
that use doubles not decimals.




We already agree that the decimal-double comparison will always be
false.


No, only some for ==. See http://intertwingly.net/stories/2008/08/27/estest.html 
:


1.5m == 1.5 true



The point is that this is representative of real world code
that benefits more from the treatment of decimals as numbers.


It's not a question of more or less. If you let decimals and numbers  
mix, you'll get data-dependent, hard to diagnose bugs. If you do not,  
then you won't (and Dojo maintainers will have to work a bit to extend  
their code to handle decimals -- which is the right trade. Recall Mr.  
Spock's dying words from STII:TWoK :-).




If that's true, that's fine, I have no problem with Dojo feeling the
pain for the sake of others, but I still find it very surprising that
Dojo code would be so misrepresentative of real code out there today.


It's not necessarily representative. It's not necessarily
misrepresentative. But we need to agree on how decimal as proposed  
compares to number (double) first, since from what you wrote above I  
see misunderstanding.




Dojo covers a very broad swath of topics. Do you really think real
world JS is that much different than Dojo's?


I have no idea, but this is completely beside the point. Breaking  
typeof x == typeof y => (x == y <=> x === y) for decimal will break  
existing code in data-dependent, hard to diagnose ways.


Adding a new typeof code will not depend on the value of a given  
decimal: any decimal will cause control to fall into an else, default,  
or unhandled case, which is strictly easier to debug and fix. Plus,  
any future JS standard with Decimal will be a big enough deal that  
porting will be obligatory and understood, by the time browsers adopt  
decimal.
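
A sketch of that failure mode, assuming a future typeof 1.1m == "decimal":

switch (typeof x) {
case "number":
  // existing double path
  break;
case "string":
  // existing string path
  break;
default:
  // every decimal value lands here -- one missed case, found once,
  // rather than a data-dependent numeric bug
  throw new TypeError("unhandled type: " + typeof x);
}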


/be



Kris





/be








--
Kris Zyp
SitePen
(503) 806-1841
http://sitepen.com





Re: Revisiting Decimal

2009-01-14 Thread Kris Zyp


Brendan Eich wrote:
 On Jan 14, 2009, at 7:38 PM, Kris Zyp wrote:


 You need to change this in any case, since even though the
 JSON
 RFC allows arbitrary precision decimal literals, real-world
 decoders only decode into IEEE doubles. You'd have to encode
 decimals as strings and decode them using domain-specific
 (JSON schema based) type knowledge.
 No, every Java JSON library I have seen

 You've seen

 http://www.json.org/json2.js

 It and the json.js alternative JS implementation are popular.
 json2.js contains

 String.prototype.toJSON = Number.prototype.toJSON =
 Boolean.prototype.toJSON = function (key) { return
 this.valueOf(); };


Of course, there is no decimal support in ES3, there is no other option.
 parses (at least some, if not all) numbers to Java's BigDecimal.

 JSON has nothing to do wth Java, and most implementations do not
 have Java BigDecimal, so I don't know how it can be relevant.

One of the major incentives for JSON is that it is interoperability
between languages. If other implementations in other languages treat
JSON's number as decimal, then the assertion that I understood you were
making, that JSON numbers are being universally expected to be treated
as binary, is not true.
 JSON's numbers are decimal, languages that support decimals agree.
 Dojo _will_ convert JS decimals to JSON numbers regardless of what
 path ES-Harmony takes with typeof, whether it requires a code
 change or not.

 That will break interoperability between current
 implementations that use doubles not decimals.

How so? And how did all the implementations that use decimals to
interpret JSON numbers not break interoperability?

 It's not a question of more or less. If you let decimals and
 numbers mix, you'll get data-dependent, hard to diagnose bugs. If
 you do not, then you won't (and Dojo maintainers will have to
 work a bit to extend their code to handle decimals -- which is
 the right trade. Recall Mr. Spock's dying words from STII:TWoK
 :-).

So you are suggesting that we shouldn't let users pass a mix of decimals
and numbers even if they explicitly attempt to do so?


 If that's true, that's fine, I have no problem with Dojo feeling
 the pain for the sake of others, but I still find it very
 surprising that Dojo code would be so misrepresentative of real
 code out there today.

 It's not necessarily representative. It's not necessarily
 mis-representative. But we need to agree on how decimal as
 proposed compares to number (double) first, since from what you
 wrote above I see misunderstanding.


 Dojo covers a very broad swath of topics. Do you really think real
 world JS is that much different than Dojo's?

 I have no idea, but this is completely beside the point. Breaking
 typeof x == typeof y => (x == y <=> x === y) for decimal will
 break existing code in data-dependent, hard to diagnose ways.

 Adding a new typeof code will not depend on the value of a given
 decimal: any decimal will cause control to fall into an else,
 default, or unhandled case, which is strictly easier to debug and
  fix. Plus, any future JS standard with Decimal will be a big
 enough deal that porting will be obligatory and understood, by
 the time browsers adopt decimal.
It's not beside my point. If significantly more real world code will
break due to violating the expected invariant of a constant finite set
of typeof results (and the expectation that numbers regardless of
precision will be typeof -> "number") than those that break due to
violating the expected invariant of typeof x == typeof y => (x == y
<=> x === y), then I think we would be negligent as language designers
to ignore that consideration. I understand the logical concerns, but I
would love to see real empirical evidence that contradicts my suspicions.

Kris



Re: Revisiting Decimal

2009-01-14 Thread Brendan Eich

On Jan 14, 2009, at 9:32 PM, Kris Zyp wrote:

Of course, there is no decimal support in ES3, there is no other  
option.


This is not strictly true:

http://code.google.com/p/gwt-math/source/browse/trunk/gwt-math/js_originals/bigdecimal.js

The point is that JSON peers that do math on numbers, to interoperate  
in general, need to parse and stringify to the same number type. It  
may be ok if only ints that fit in a double are used by a particular  
application or widget, but the syntax allows for fraction and  
exponent, which begs representation-type precision and radix questions.
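
One such boundary, runnable today, where double-based peers silently lose
integer precision:

9007199254740992 === 9007199254740993;  // true: both literals are 2^53 as doubles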



One of the major incentives for JSON is that it is interoperability
between languages. If other implementations in other languages treat
JSON's number as decimal, then the assertion that I understood you were
making that JSON numbers are being universally expected to be treated
as binary is not true.


It's probably a mix, with application-dependent restrictions on domain  
and/or computation so that using either double or decimal works, or  
else buggy lack of such restrictions.




JSON's numbers are decimal, languages that support decimals agree.
Dojo _will_ convert JS decimals to JSON numbers regardless of what
path ES-Harmony takes with typeof, whether it requires a code
change or not.


That will break interoperatability between current
implementations that use doubles not decimals.



How so? And how did all the implementations that use decimals to
interpret JSON numbers not break interoperability?


Not necessarily. But correctness is not a matter of hopes or  
percentages. It may be fine for JSON to leave it to the app to choose  
number type and/or operations done on the data. But  some layer has to  
care. Some apps probably already depend on json2.js and json.js and  
the like (ES3.1's JSON built-in) using double, not decimal. Changing a  
future JSON codec to use decimal instead of double is not a backward- 
compatible change.




So you are suggesting that we shouldn't let users pass a mix of decimals
and numbers even if they explicitly attempt to do so?


No, I'm suggesting unintended mixed-mode bugs will be common if we  
make typeof 1.1m == number.




It's not beside my point. If significantly more real world code will
break due to violating the expected invariant of a constant finite set
of typeof results (and the expectation that numbers regardless of
precision will be typeof -> "number") than those that break due to
violating the expected invariant of typeof x == typeof y => (x == y
<=> x === y)


We can't measure this, realistically, but again: the breakage from a  
new typeof result is not dependent on the numeric value of the  
operand, and entails either a missing case, or a possibly insufficient  
default case, while the breakage from your proposal is subtly
data-dependent.


Plus, the invariant (while not holy writ) is an important property of  
JS to conserve, all else equal.




then I think we would be negligent as language designers
to ignore that consideration.


It's not a consideration if it can't be quantified, and if it  
introduces value-dependent numeric bugs. Decimal and double are  
different enough that typeof should tell the truth. 1.1m != 1.1, 1.2m !=
1.2, but 1.5m == 1.5 (1.5 is exactly representable in binary floating
point; 1.1 and 1.2 are not).




I understand the logical concerns, but I
would love to see real empirical evidence that contradicts my  
suspicions.


I gave some already, you didn't reply. Here's one, about dojotoolkit/ 
dojo/parser.js:


But if typeof 1.1m == number, then str2obj around line 52 might  
incorrectly call Number on a decimal string literal that does not  
convert to double (which Number must do, for backward  
compatibility), 


It won't do to assume your proposal saves effort and demand me to  
prove you wrong. First, no one has access to all the extant typeof x  
== number code to do the analysis and prove the majority of such  
code would work with your proposal. This is akin to proving a  
negative. Second, I've given evidence based on Dojo that shows  
incompatibility if typeof 1.1m == number.


How about we talk about an alternative: use decimal as a way to make  
all literals, operators, and built-ins decimal, never double?


The problem with this big red switch is that it requires conversion  
from outside the lexical scope in which the pragma is enabled, since  
code outside could easily pass double data into functions or variables  
in the pragma's scope. It requires a decimal-based suite of Math,  
etc., built-ins too, but that may be ok (it was contemplated for ES4).


The problem with this old idea is really the challenge of ensuring  
conversion when cross the pragma's lexical scope boundary. Presumably  
double numbers going in would convert to decimal, while decimals  
flowing out would remain decimal. Even this is questionable: what if  
the callee was compiled without use decimal and it's another window  
object's function that expects a double-number?
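
A sketch of that boundary problem (the pragma syntax is illustrative only):

// use decimal;        // hypothetical lexically scoped opt-in
// var x = 0.1 + 0.2;  // decimal arithmetic: exactly 0.3m
// otherWindow.f(x);   // f was compiled without the pragma --
//                     // does x convert to double at this boundary?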


/be

Re: Revisiting Decimal

2009-01-14 Thread Bob Ippolito
On Wed, Jan 14, 2009 at 9:32 PM, Kris Zyp k...@sitepen.com wrote:



 [earlier quoted text trimmed]

 That will break interoperability between current
 implementations that use doubles not decimals.

 How so? And how did all the implementations that use decimals to
 interpret JSON numbers not break interoperability?

I'm pretty sure that interoperability is broken when they do this,
it's just very subtle and hard to debug. I have the same stance as
Brendan here, I've even refused to implement the capability to
directly encode decimal as JSON numbers in my simplejson package (the
de facto json for Python). If a user of the library controls both ends
of the wire, they can just as easily use strings to represent decimals
and work with them exactly how they expect on both ends of the wire
regardless of what their JSON implementation happens to do.

Imagine the person at the other end of the wire is using something
like JavaScript or PHP. If the message contains decimals as JSON
numbers, they cannot accurately encode or decode those messages unless
they write their own custom JSON implementation. How do they even KNOW
if the document is supposed to have decimal precision? What if the
other end passes too many digits (often the case if one side is
actually using doubles)? If they are passed around as strings then
everyone can use the document just fine without any compatibility
issues. The lack of a de jure number precision and the lack of a
date/datetime type are definitely my biggest grievances with the JSON
spec.

-bob


Re: Revisiting Decimal

2009-01-09 Thread Brendan Eich
Sam's mail cited below has gone without a reply for over a month.  
Decimal is surely not a high priority, but this message deserves some  
kind of response or we'll have to reconstruct the state of the  
argument later, at probably higher cost.


I was not at the Redmond meeting, but I would like to take Sam's word  
that the cohort/toString issue was settled there. I heard from Rob  
Sayre something to this effect.


But in case we don't have consensus, could any of you guys state the  
problem for the benefit of everyone on this list? Sorry if this seems  
redundant. It will help, I'm convinced (compared to no responses and  
likely differing views of what the problem is, or what the consensus  
was, followed months later by even more painful reconstruction of the  
state of the argument).


The wrapper vs. primitive issue remains, I believe everyone agrees.

/be

On Dec 4, 2008, at 2:22 PM, Sam Ruby wrote:


2008/12/4 Brendan Eich bren...@mozilla.com:


Sam pointed that out too, and directed everyone to his
test-implementation results page:
http://intertwingly.net/stories/2008/09/20/estest.html
Indeed we still have an open issue there ignoring the wrapper one:

[Sam wrote:] I think the only major outstanding semantic issue was  
wrapper
objects; apart from that, the devil was in the detail of spec  
wording.[End Sam]


No, the cohort/toString issue remains too (at least).


With a longer schedule, I would like to revisit that; but as of
Redmond, we had consensus on what that would look like in the context
of a 3.1 edition.

From where I sit, I find myself in the frankly surreal position that
we are in early December, and there are no known issues of consensus,
though I respect that David-Sarah claims that there is one on
wrappers, and I await his providing of more detail.


/be


- Sam Ruby


___
Es-discuss mailing list
Es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Revisiting Decimal

2009-01-09 Thread Kris Zyp
 
What is the current state of the result of typeof on decimals; was
there consensus on this? I hope we will be using typeof 1.1m ==
"number". For a little bit of empirical evidence, I went through
Dojo's codebase and there are numerous places where we would probably
want to alter our code to include additional checks for decimal if
typeof 1.1m == "decimal", whereas if it is "number" we would probably
leave virtually everything intact in regards to number handling, with
consideration for decimals.
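
For instance, a typical feature test of the kind described above
(Dojo-style, purely illustrative) would silently stop matching decimal
values:

function isNumeric(x) {
  return typeof x == "number";  // misses decimals if typeof 1.1m == "decimal"
}
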
Thanks,
Kris

Brendan Eich wrote:
 Sam's mail cited below has gone without a reply for over a month.
 Decimal is surely not a high priority, but this message deserves
 some kind of response or we'll have to reconstruct the state of the
  argument later, at probably higher cost.

 I was not at the Redmond meeting, but I would like to take Sam's
 word that the cohort/toString issue was settled there. I heard
 from Rob Sayre something to this effect.

 But in case we don't have consensus, could any of you guys state
 the problem for the benefit of everyone on this list? Sorry if this
  seems redundant. It will help, I'm convinced (compared to no
 responses and likely differing views of what the problem is, or
 what the consensus was, followed months later by even more painful
 reconstruction of the state of the argument).

 The wrapper vs. primitive issue remains, I believe everyone agrees.


 /be

 On Dec 4, 2008, at 2:22 PM, Sam Ruby wrote:

 2008/12/4 Brendan Eich bren...@mozilla.com:

 Sam pointed that out too, and directed everyone to his
 test-implementation results page:
 http://intertwingly.net/stories/2008/09/20/estest.html Indeed
 we still have an open issue there ignoring the wrapper one:

 [Sam wrote:] I think the only major outstanding semantic issue
 was wrapper objects; apart from that, the devil was in the
 detail of spec wording.[End Sam]

 No, the cohort/toString issue remains too (at least).

 With a longer schedule, I would like to revisit that; but as of
 Redmond, we had consensus on what that would look like in the
 context of a 3.1 edition.

 From where I sit, I find myself in the frankly surreal position
 that we are in early December, and there are no known issues of
 consensus, though I respect that David-Sarah claims that there is
 one on wrappers, and I await his providing of more detail.

 /be

 - Sam Ruby

 ___ Es-discuss mailing
 list Es-discuss@mozilla.org
 https://mail.mozilla.org/listinfo/es-discuss


--
Kris Zyp
SitePen
(503) 806-1841
http://sitepen.com

___
Es-discuss mailing list
Es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Revisiting Decimal

2009-01-09 Thread Brendan Eich

On Jan 9, 2009, at 2:54 PM, Kris Zyp wrote:



What is the current state of the result of typeof on decimals; was
there consensus on this?


Did you follow Sam's link?

http://intertwingly.net/stories/2008/09/20/estest.html



I hope we will be using typeof 1.1m ==
"number". For a little bit of empirical evidence, I went through
Dojo's codebase and there are numerous places where we would probably
want to alter our code to include additional checks for decimal if
typeof 1.1m == "decimal", whereas if it is "number" we would probably
leave virtually everything intact in regards to number handling, with
consideration for decimals.
Thanks,


The counter-argument is strong:

typeof x == typeof y => (x == y => x === y)

but 1.1 != 1.1m for fundamental reasons.
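
Concretely, the double nearest to 1.1 carries binary rounding error that
the decimal value would not, so == between them cannot sensibly hold
(only the m literal below is hypothetical):

(1.1).toFixed(20)   // "1.10000000000000008882" -- the double nearest to 1.1
// 1.1m would be exactly 1.1, so 1.1 == 1.1m would have to be false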

/be

___
Es-discuss mailing list
Es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Revisiting Decimal

2009-01-09 Thread Kris Zyp
 


 The counter-argument is strong:

 typeof x == typeof y => (x == y => x === y)

 but 1.1 != 1.1m for fundamental reasons.
I understand the counter-argument, but with such an overwhelming
number of typeof uses migrating far more easily under "number", I
can't see how the desire to preserve this property is more important
than better usability for the majority of use cases. Do you think
other libraries and JS code are that vastly different from Dojo?
Thanks,
Kris


___
Es-discuss mailing list
Es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Revisiting Decimal

2008-12-04 Thread David-Sarah Hopwood
Sam Ruby wrote:
 So now the question is: where are we now?

For the time being, concentrating on other things that will be in ES3.1.
That was the main point of removing Decimal, no?

-- 
David-Sarah Hopwood
___
Es-discuss mailing list
Es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Revisiting Decimal

2008-12-04 Thread David-Sarah Hopwood
Brendan Eich wrote:
 On Dec 4, 2008, at 10:22 AM, David-Sarah Hopwood wrote:
 Sam Ruby wrote:
 So now the question is: where are we now?

 For the time being, concentrating on other things that will be in ES3.1.
 That was the main point of removing Decimal, no?
 
 es-discuss@mozilla.org has more bandwidth than singular focus on 3.1.

OK, but there is no longer any current detailed spec for Decimal to
comment on. I think the only major outstanding semantic issue was wrapper
objects; apart from that, the devil was in the detail of spec wording.

-- 
David-Sarah Hopwood
___
Es-discuss mailing list
Es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Revisiting Decimal

2008-12-04 Thread Brendan Eich

On Dec 4, 2008, at 12:39 PM, David-Sarah Hopwood wrote:


Brendan Eich wrote:

On Dec 4, 2008, at 10:22 AM, David-Sarah Hopwood wrote:

Sam Ruby wrote:

So now the question is: where are we now?


For the time being, concentrating on other things that will be in  
ES3.1.

That was the main point of removing Decimal, no?


es-discuss@mozilla.org has more bandwidth than singular focus on 3.1.


OK,


Good, since we have harmonious energy for lambda syntax on this list  
(you do, at any rate -- so do I, don't get me wrong -- but let's play  
fair with Decimal for Harmony as well as Lambda for Harmony).




but there is no longer any current detailed spec for Decimal to
comment on.


Sam pointed that out too, and directed everyone to his
test-implementation results page:


http://intertwingly.net/stories/2008/09/20/estest.html

Indeed we still have an open issue there ignoring the wrapper one:


I think the only major outstanding semantic issue was wrapper
objects; apart from that, the devil was in the detail of spec wording.


No, the cohort/toString issue remains too (at least).

/be

___
Es-discuss mailing list
Es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Revisiting Decimal

2008-12-04 Thread Sam Ruby
2008/12/4 Brendan Eich [EMAIL PROTECTED]:

 Sam pointed that out too, and directed everyone to his test-implementation
 results page:
 http://intertwingly.net/stories/2008/09/20/estest.html
 Indeed we still have an open issue there ignoring the wrapper one:

 I think the only major outstanding semantic issue was wrapper
 objects; apart from that, the devil was in the detail of spec wording.

 No, the cohort/toString issue remains too (at least).

With a longer schedule, I would like to revisit that; but as of
Redmond, we had consensus on what that would look like in the context
of a 3.1 edition.

From where I sit, I find myself in the frankly surreal position that
we are in early December, and there are no known issues of consensus,
though I respect that David-Sarah claims that there is one on
wrappers, and I await his providing of more detail.

 /be

- Sam Ruby
___
Es-discuss mailing list
Es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Revisiting Decimal (was: Prioritized list of Decimal method additions)

2008-12-03 Thread Sam Ruby
I saw the meeting minutes, and got a debrief from Allen yesterday.
I'm still unclear on how to proceed with Decimal, even if the new
target is Harmony.

Waldemar's issues were raised and responded to prior to Kona:

https://mail.mozilla.org/pipermail/es-discuss/2008-November/008074.html

Quick summary: there are at least eight sections with typos and
transcription errors.  By transcription errors, I mean places where
the prose doesn't match the output of the code that I posted
previously.  Those are embarrassing, but at this point moot.  Pratap
has already excised Decimal from the spec.

What are we left with relative to the following output from the
code that I wrote?

http://intertwingly.net/stories/2008/09/20/estest.html

Relative to that output, I've heard two issues.

The first was no user visible cohorts.  The issue is Waldemar's
insistence that ES is irretrievably broken if array lookup for
x[1.10m] respects the trailing zero.  IIRC, Brendan's position was a
more pragmatic one, namely that small integers (like, say, up to
10**20th) are the only values for which toString must avoid both
exponential notation and trailing zeros, other values shouldn't get in
the way of doing the right thing.  That would have been fine, but
unfortunately he couldn't make the meeting (something I definitely
understand).  Mike and I weren't then, and still aren't happy about
conceding to Waldemar's position on this one, but at Redmond we did
with the understanding that with that concession, Decimal was in.
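
For the record, the contested behavior looks like this (decimal syntax
hypothetical; the comments state Waldemar's objection, not agreed
semantics):

var a = {};
a[1.10m] = "x";   // if toString respects the trailing zero, the key is "1.10"
a[1.1m];          // undefined -- equal values reaching different properties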

The second was the duplication between Math.min and Decimal.min.
I was operating under the "if it ain't broken, don't fix it"
guideline.  To date, Math.min *always* returns a Number, never an
Object.  Waldemar apparently feels that people will call the wrong
function.  To me, this is a "you say N-EEE-THER, I say N-EYE-THER"
issue.  If the consensus is that Math.min should be changed and
Decimal.min should be removed, that's a pretty quick fix.
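
To make the duplication concrete (Decimal.min as proposed, decimal
literals hypothetical):

Math.min(1.05m, 1.1m)     // a Number -- Math.min, to date, never returns anything else
Decimal.min(1.05m, 1.1m)  // a decimal -- habit could easily reach for the wrong one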

So now the question is: where are we now?

- Sam Ruby

On Sat, Sep 20, 2008 at 8:57 PM, Sam Ruby [EMAIL PROTECTED] wrote:
 Sam Ruby wrote:
 Previous discussions focused on operators and type/membership related
 builtin functions (i.e., typeof and instanceof).  Here's a prioritized
 list of functions provided by IEEE 754-2008 and/or the decNumber
 implementation.

 The actual number of 'a' and 'a-' methods is fairly small, particularly
 once you remove ones that are available in ECMAScript via other means.

 Updated test results including these methods can be found here:

 http://intertwingly.net/stories/2008/09/20/estest.html

 - Sam Ruby

 - - - - -

 Absolute requirement, and must be implemented as an 'instance' method
 (for most of the others, the difference between a 'static' and
 'instance' method is negotiable):

*  a    toString

 Available as prefix or infix operators, or as builtin functions, may not
 need to be duplicated as named Decimal methods:
*  a    add
*  a    compare
*  a    copy
*  a    copyNegate
*  a    divide
*  a    isFinite
*  a    isNaN
*  a    multiply
*  a    remainder
*  a    subtract

 Essential 754, not available as infix operator, so must be made
 available as a named method.  For consistency with Math, abs, max,
 and min should be 'static' methods:

*  a    quantize
*  a    copyAbs [called abs]
*  a    max
*  a    min

 Very useful functions which are not in 754 for various reasons;
 strongly recommend include:

*  a-   divideInteger  [extremely handy]
*  a-   digits [= significant digits]
*  a-   reduce [often asked for]
*  a-   toEngString[really handy in practice]
*  a-   getExponent[esp. if no compareTotal]

 Other 754 operations that are less essential but that we would probably add
 later anyway.  'b+' are a subset that are especially useful in
 practice:

*   b   FMA
*   b   canonical
*   b   compareSignal
*   b+  compareTotal
*   b   compareTotalMag
*   b   copySign
*   b   isCanonical
*   b+  isInfinite
*   b+  isInteger
*   b   isNormal
*   b+  isSignaling [if sNaNs supported]
*   b+  isSignalling[  ]
*   b+  isSigned
*   b   isSubnormal
*   b+  isZero
*   b   logB
*   b   maxMag
*   b   minMag
*   b   nextMinus
*   b   nextPlus
*   b   radix
*   b   remainderNear
*   b+  sameQuantum
*   b   scaleB
*   b+  setExponent
*   b   toInt32
*   b   toInt32Exact
*   b+  toIntegralExact [perhaps only one of these]
*   b+  toIntegralValue []
*   b   toUInt32
*   b   toUInt32Exact

 Probably drop because they conflict with ES bitwise logical ops:

*  c    and (as digitAnd)
*  c    invert (as digitInvert)
*  c    or (as digitOr)
*  c    rotate
*  c    shift
*  c    xor (as digitXor)

 And, finally, not needed:

 (The first two of these are 754 but don't fit with ES)
 

Re: Revisiting Decimal (was: Prioritized list of Decimal method additions)

2008-12-03 Thread Brendan Eich

On Dec 3, 2008, at 1:04 PM, Sam Ruby wrote:


I saw the meeting minutes, and got a debrief from Allen yesterday.
I'm still unclear on how to proceed with Decimal, even if the new
target is Harmony.

Waldemar's issues were raised and responded to prior to Kona:

https://mail.mozilla.org/pipermail/es-discuss/2008-November/ 
008074.html


Did this address Waldemar's other message?

https://mail.mozilla.org/pipermail/es-discuss/2008-September/007631.html

I also don't see a reply to David-Sarah Hopwood's message:

https://mail.mozilla.org/pipermail/es-discuss/2008-November/008078.html



What are we left with relative to the following output from the
code that I wrote?

http://intertwingly.net/stories/2008/09/20/estest.html


Looks like we may need Waldemar to comment or elaborate on his last  
post (first link above).




Relative to that output, I've heard two issues.

The first was no user visible cohorts.  The issue is Waldemar's
insistence that ES is irretrievably broken if array lookup for
x[1.10m] respects the trailing zero.  IIRC, Brendan's position was a
more pragmatic one, namely that small integers (like, say, up to
10**20th) are the only values for which toString must avoid both
exponential notation and trailing zeros, other values shouldn't get in
the way of doing the right thing.  That would have been fine, but
unfortunately he couldn't make the meeting (something I definitely
understand).  Mike and I weren't then, and still aren't happy about
conceding to Waldemar's position on this one, but at Redmond we did
with the understanding that with that concession, Decimal was in.


This Redmond-meeting result did sound like a breakthrough in any  
event. Was it memorialized with spec changes?




The second was the duplication between Math.min and Decimal.min.
I was operating under the "if it ain't broken, don't fix it"
guideline.  To date, Math.min *always* returns a Number, never an
Object.  Waldemar apparently feels that people will call the wrong
function.  To me, this is a "you say N-EEE-THER, I say N-EYE-THER"
issue.  If the consensus is that Math.min should be changed and
Decimal.min should be removed, that's a pretty quick fix.


This doesn't seem like a big problem, by itself.



So now the question is: where are we now?


The two general kinds of problems from the Kona meeting were:

1. Spec bugs, not just typos but material ones that couldn't be fixed  
by that meeting, which was the deadline for major additions to ES3.1  
not already in the spec.


2. Future-proofing arguments including: do we need Decimal wrappers  
for decimal primitives. I know we've been over this before, but it  
still is an open issue in TC39.


I'd appreciate Waldemar's comments; and those of other TC39ers too, of  
course.


/be
___
Es-discuss mailing list
Es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Revisiting Decimal (was: Prioritized list of Decimal method additions)

2008-12-03 Thread Sam Ruby

Brendan Eich wrote:

On Dec 3, 2008, at 1:04 PM, Sam Ruby wrote:


I saw the meeting minutes, and got a debrief from Allen yesterday.
I'm still unclear on how to proceed with Decimal, even if the new
target is Harmony.

Waldemar's issues were raised and responded to prior to Kona:

https://mail.mozilla.org/pipermail/es-discuss/2008-November/008074.html


Did this address Waldemar's other message?

https://mail.mozilla.org/pipermail/es-discuss/2008-September/007631.html


The "no user visible cohorts" concession addressed that particular concern.


I also don't see a reply to David-Sarah Hopwood's message:

https://mail.mozilla.org/pipermail/es-discuss/2008-November/008078.html


Given that the spec text has been removed, the way I would like to 
proceed is to first come to an agreement on the semantics we desire, and 
for that I would like to solicit comments on the output of the 
implementation I produced.


While I agree that Decimal wrappers are useless, I think that 
consistency argues that they need to be there (in fact, I was talked 
into putting them there); again I refer back to the output produced and 
solicit comments.



What are we left with relative to the following output from the
code that I wrote?

http://intertwingly.net/stories/2008/09/20/estest.html


Looks like we may need Waldemar to comment or elaborate on his last post 
(first link above).



Relative to that output, I've heard two issues.

The first was no user visible cohorts.  The issue is Waldemar's
insistence that ES is irretrievably broken if array lookup for
x[1.10m] respects the trailing zero.  IIRC, Brendan's position was a
more pragmatic one, namely that small integers (like, say, up to
10**20th) are the only values for which toString must avoid both
exponential notation and trailing zeros, other values shouldn't get in
the way of doing the right thing.  That would have been fine, but
unfortunately he couldn't make the meeting (something I definitely
understand).  Mike and I weren't then, and still aren't happy about
conceding to Waldemar's position on this one, but at Redmond we did
with the understanding that with that concession, Decimal was in.


This Redmond-meeting result did sound like a breakthrough in any event. 
Was it memorialized with spec changes?


There were spec changes that went in as a result of the Redmond meeting, 
yes.  At least one was identified before the Kona meeting by Waldemar 
(and fessed up to by me) as having been botched by myself (an and that
should have been an or).



The second was the duplication between Math.min and Decimal.min.
I was operating under the "if it ain't broken, don't fix it"
guideline.  To date, Math.min *always* returns a Number, never an
Object.  Waldemar apparently feels that people will call the wrong
function.  To me, this is a "you say N-EEE-THER, I say N-EYE-THER"
issue.  If the consensus is that Math.min should be changed and
Decimal.min should be removed, that's a pretty quick fix.


This doesn't seem like a big problem, by itself.


Agreed, and in any case, one that I would eagerly adopt.


So now the question is: where are we now?


The two general kinds of problems from the Kona meeting were:

1. Spec bugs, not just typos but material ones that couldn't be fixed by 
that meeting, which was the deadline for major additions to ES3.1 not 
already in the spec.


For the moment, I would like to split that list into two categories: 
areas where there isn't yet agreement within the committee on how to 
proceed (the best way I know to make progress there is to come to 
agreement on the behavior desired, hence my suggestion that we look at 
concrete test cases), and a list of places where I erred in converting 
my understanding into prose.


No matter how we proceed, the first list needs to be captured and 
addressed eventually anyway.


2. Future-proofing arguments including: do we need Decimal wrappers for 
decimal primitives. I know we've been over this before, but it still is 
an open issue in TC39.


That does sound like the type of issue that I would like to see us 
identify and work to resolve.  Two questions come to mind: (1) can 
anybody identify a specific expression which behaves differently than 
one would desire, and (2) if we've been over this before, what does it 
take to actually close this issue this time for good?
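
One candidate answer to (1), mirroring the long-standing Number wrapper
quirk (decimal syntax and the Decimal constructor as proposed, so
illustrative only):

Boolean(0m)               // false -- a primitive decimal zero is falsy
Boolean(new Decimal(0m))  // true  -- every object is truthy, wrappers included

The same trap exists today with new Number(0), so consistency cuts both
ways.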


I'd appreciate Waldemar's comments; and those of other TC39ers too, of 
course.


/be


- Sam Ruby

___
Es-discuss mailing list
Es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss