Re: Revisiting Decimal (generic algorithms)

2009-01-31 Thread Sam Ruby

Brendan Eich wrote:


This variation preserves wrappers, so there is a Decimal converter function 
(when invoked) and a constructor (via new, and to hold a .prototype home for 
methods). The committee plunks for more of this primitive/wrapper 
business, since we have wrappers and primitives for numbers and other 
types, and backward compatibility requires keeping them. Operators work 
mostly as implemented already by Sam (results here: 
http://intertwingly.net/blog/2008/08/27/ES-Decimal-Updates, with some 
out-of-date results; notably typeof 1.1m should be "decimal", not 
"object" -- and not "number").


More up to date results can be found here:

http://intertwingly.net/stories/2008/09/20/estest.html

Which was discussed here:

https://mail.mozilla.org/pipermail/es-discuss/2008-December/008316.html

Sam and I are going to work on adapting Sam's SpiderMonkey 
implementation, along with our existing ES3.1-based JSON codec and 
trace-JITting code, to try this out. More details as we get into the work.


Since the bug is about usability, we have to prototype and test on real 
users, ideally a significant number of users. We crave comments and 
ideas from es-discuss too, of course.


I'd like to highlight one thing: Mike and I agreed to "no visible 
cohorts" with the full knowledge that it would be a significant 
usability issue.  We did so in order to get decimal in 3.1. In the 
context of Harmony, I feel that it is important that we fully factor in 
usability concerns.  Prototyping and testing on real users, ideally with 
a significant number of users, is an excellent way to proceed.



/be


- Sam Ruby

___
Es-discuss mailing list
Es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Revisiting Decimal (generic algorithms)

2009-01-30 Thread Brendan Eich

On Jan 18, 2009, at 4:48 PM, Brendan Eich wrote:


In any case, I think we first need to decide what the semantics
would be *after* any desugaring of multimethods.


The goal is DWIM, which is why we've circled around these implicit  
or low-cost-if-explicit approaches.


Of course DWIM is ill-defined, but bug 5856 and dups suggest much of  
the problem comes from the language supporting numeric literals  
written in base 10 with certain precision or significance, but then  
mistreating them via conversion to binary and inevitable operation  
using only binary operators.




1. changing the number type to decimal by fiat;
2. adding a "use decimal" pragma;
3. trying to keep literals generic.

The high-cost explicit alternative is to tell 'em "use the m 
suffix!" That probably will not work out well in the real world. 
It's a syntax tax hike: it will require all user agents to be 
upgraded (unlike "use decimal"), and yet people will still forget to 
use the suffix.


I'm still interested in better "use decimal" design ideas.


Allen made another proposal, which Waldemar mentioned in his notes  
from the TC39 meeting:


4. All literals lex as decimal, string to number likewise converts to  
decimal; but contagion is to binary, Math.sin/PI/etc. remain binary.  
JSON would parse to decimal in this proposal.


This variation may require opt-in as Waldemar pointed out: people  
write 1e400 to mean Infinity.


This variation preserves wrappers, so there is a Decimal converter function 
(when invoked) and a constructor (via new, and to hold a .prototype home 
for methods). The committee plunks for more of this primitive/wrapper 
business, since we have wrappers and primitives for numbers and other 
types, and backward compatibility requires keeping them. Operators 
work mostly as implemented already by Sam (results here, with some 
out-of-date results; notably typeof 1.1m should be "decimal", not "object" 
-- and not "number").


Sam and I are going to work on adapting Sam's SpiderMonkey  
implementation, along with our existing ES3.1-based JSON codec and  
trace-JITting code, to try this out. More details as we get into the  
work.


Since the bug is about usability, we have to prototype and test on  
real users, ideally a significant number of users. We crave comments  
and ideas from es-discuss too, of course.


/be


Re: Revisiting Decimal (generic algorithms)

2009-01-30 Thread Brendan Eich

On Jan 30, 2009, at 6:28 PM, Brendan Eich wrote:

According to http://en.wikipedia.org/wiki/Polymorphism_(computer_science) 
 (hey, it's referenced):


There are two fundamentally different kinds of polymorphism,  
originally informally described by Christopher Strachey in 1967. If  
the range of actual types that can be used is finite and the  
combinations must be specified individually prior to use, it is  
called Ad-hoc polymorphism. If all code is written without mention  
of any specific type and thus can be used transparently with any  
number of new types, it is called parametric polymorphism. John C.  
Reynolds (and later Jean-Yves Girard) formally developed this notion  
of polymorphism as an extension to the lambda calculus (called the  
polymorphic lambda calculus, or System F).


So multimethods use parametric polymorphism.


Correction: multimethods are ad-hoc too, since you have to write each 
particular type combination. For dyadic operators, the multiple-argument 
dispatch differs from single (left, receiver) dispatch, but 
the type combinations are still finite and specified.
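
The "finite and specified" character of multiple dispatch can be sketched as a table keyed on both operand types (plain JavaScript; the `tag`/`dec` representation and the helper names are invented purely for this illustration, not part of any proposal):

```javascript
// Multiple-dispatch "+" as a finite table of (leftType, rightType)
// combinations -- ad-hoc in Strachey's sense, since every pair a
// program uses must be registered explicitly.
const addTable = new Map();
const key = (a, b) => a + "," + b;

// Hypothetical type tag: a "dec" marker stands in for decimal values.
function typeTag(v) { return (v && v.tag) ?? typeof v; }
function defAdd(tA, tB, fn) { addTable.set(key(tA, tB), fn); }

function plus(a, b) {
  const fn = addTable.get(key(typeTag(a), typeTag(b)));
  if (!fn) throw new TypeError("no + for " + key(typeTag(a), typeTag(b)));
  return fn(a, b); // dispatch on BOTH operands, not just the left one
}

// Register the combinations we care about:
defAdd("number", "number", (a, b) => a + b);
defAdd("number", "dec", (a, b) => ({ tag: "dec", v: a + b.v }));
defAdd("dec", "number", (a, b) => ({ tag: "dec", v: a.v + b }));
defAdd("dec", "dec", (a, b) => ({ tag: "dec", v: a.v + b.v }));

plus(1, 2);                    // 3
plus({ tag: "dec", v: 1 }, 2); // { tag: "dec", v: 3 }
```

Any combination not registered fails loudly, which is exactly the sense in which this remains ad-hoc rather than parametric polymorphism.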


Not sure this matters. The real argument is about single vs. multiple  
dispatch.



Lars's point about future-proofing, when he wrote "ad-hoc 
overloading", seems to me to be about adding extensible dyadic 
operators via double-dispatch now, then adding multiple dispatch in 
some form later and being prevented by compatibility considerations 
from changing operators. Best to ask him directly, though -- I'll do 
that.


Lars meant exactly that -- in any conversation tending toward a future 
version of the language where multimethods, or something else that addresses 
the bugs (or features, from the other point of view) of single-dispatch 
operators, might come along, standardizing single dispatch and 
requiring double(-single)-dispatch from left to right, with 
reverse_add and so on, would be future-hostile.
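
The left-to-right, reverse_add-style protocol being weighed here can be sketched in a few lines (plain JavaScript; `dispatchAdd` and the object shape are invented for illustration -- only the `add`/`reverse_add` names echo the thread):

```javascript
// Left-first double dispatch for a dyadic "+": ask the left operand,
// then fall back to the right operand's reversed method.
function dispatchAdd(x, y) {
  if (x !== null && typeof x === "object" && typeof x.add === "function") {
    return x.add(y);            // left operand gets first crack
  }
  if (y !== null && typeof y === "object" && typeof y.reverse_add === "function") {
    return y.reverse_add(x);    // right operand's reversed fallback
  }
  return x + y;                 // both primitives: default behavior
}

// A stand-in "decimal-ish" value participating in the protocol:
const dec = {
  v: 1,
  add(other) { return { v: this.v + other }; },
  reverse_add(other) { return { v: other + this.v }; },
};

dispatchAdd(dec, 2).v; // 3, via dec.add
dispatchAdd(2, dec).v; // 3, via dec.reverse_add
dispatchAdd(2, 3);     // 5
```

Baking this asymmetric left-then-right order into the standard is precisely what would constrain a later move to symmetric multiple dispatch.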


/be


Re: Revisiting Decimal (generic algorithms)

2009-01-18 Thread Brendan Eich

On Jan 16, 2009, at 7:38 PM, David-Sarah Hopwood wrote:


It could be argued that most ES3.x programs are probably not relying
on the exact errors introduced by double-precision IEEE 754, but that
seems risky to me.


Emphatically agreed. People file dups of bug 5856 but they also  
knowingly and unknowingly depend on IEEE 754 behavior in detail.




By that argument, ignoring performance, you could
unconditionally implement all numbers as decimals, and I don't think
many people here would accept that as being compatible.


This was the path favored by Mike Cowlishaw and (sometimes, IIRC) by  
Doug Crockford. It was rejected by at least me (for Mozilla) and  
Maciej (for Apple).




To address the problem raised by Allen, you would probably want to
implicitly define implementations that used different types for
constants, depending on the argument types to a given function
(and it is not clear how that would work for mixed-type arguments).


Another idea for constants that seems strictly more usable than any 
suffix requirement or complicated constant-parameter-based dispatch: 
"use decimal". The idea is to change the meaning of literals and 
operators. Again the problem of built-ins, or really of interfacing 
with the rest of the world not scoped by the lexical pragma, remains.




In any case, I think we first need to decide what the semantics
would be *after* any desugaring of multimethods.


The goal is DWIM, which is why we've circled around these implicit  
or low-cost-if-explicit approaches.


* changing the number type to decimal by fiat;
* adding a "use decimal" pragma;
* trying to keep literals generic.

The high-cost explicit alternative is to tell 'em "use the m suffix!" 
That probably will not work out well in the real world. It's a syntax 
tax hike: it will require all user agents to be upgraded (unlike "use 
decimal"), and yet people will still forget to use the suffix.


I'm still interested in better "use decimal" design ideas.

/be


RE: Revisiting Decimal (generic algorithms)

2009-01-16 Thread Allen Wirfs-Brock
Returning to Brendan's original question...

The problem I brought up at the Kona meeting was that the Decimal proposal did 
not consistently enable the writing of functions that implement generic numeric 
algorithms.  By this I mean algorithms that can be applied to either Number or 
Decimal arguments and produce a result that is of the same type as their inputs. 
For example:

function add(a, b) { return a + b; }
// ok in Kona proposal: add(1m, 1m) === 2m; add(1.0, 1.0) === 2.0 (binary)

function max3(a, b, c) { return Math.max(a, b, c); }
// not generic in Kona proposal: max3(1m, 2m, 3m) === 3 (not 3m) --
// there are no generic Math functions, so the user must explicitly code
// either Math.max or Decimal.max

function fuzz(a) { return a + 0.1; }
// not generic in Kona draft: fuzz(1m) === 1.10008881784197001... 
// (but see below)
// the Kona spec uses binary floating point for all mixed-mode operations

The second case is fixable with some work by making the Math functions all be 
generic.
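
What "making the Math functions generic" could mean, mechanically, is a type-dispatching wrapper along these lines (a sketch only; `Decimal.max` is the explicit function the Kona draft offers, but no shipping engine implements it, so the decimal branch here is illustrative rather than runnable against a real API):

```javascript
// Hypothetical generic max: dispatch on operand types rather than
// coercing everything to binary double.
function genericMax(...args) {
  if (args.every(a => typeof a === "number")) {
    return Math.max(...args);      // all binary: today's behavior
  }
  if (typeof Decimal !== "undefined" && typeof Decimal.max === "function") {
    return Decimal.max(...args);   // decimal in, decimal out
  }
  throw new TypeError("no decimal support in this engine");
}

genericMax(1, 2, 3); // 3 -- and genericMax(1m, 2m, 3m) would give 3m
```

The point of Allen's second example is that every Math function would need such a type-preserving shim for max3 to be generic.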

Sam says the third case is a bug in the Kona spec whose fix had already been 
agreed upon at the Redmond meeting.  The fix, as I understand it, is that mixed-mode 
arithmetic should be performed using decimal operations. However, that 
does not address my concern.  With that fix in place, the result of fuzz(1m) 
would be something like 1.1000888178419700125232338905334472656250m 
(note the trailing m).  That is because the literal 0.1 would be lexed as a Number 
(i.e., binary floating point) literal, stored as a binary approximation, and that 
binary approximation would be dynamically converted to the decimal floating 
point equivalent of the binary approximation by the add operation.

This problem cannot be fixed simply by tweaking the coercion rules.  It 
probably requires that numeric literals be treated as generic values that are 
only interpreted situationally, as either binary or decimal values, in the 
context of a particular operation.
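
The conversion chain Allen describes starts from a fact visible in any current engine: the literal 0.1 is already a binary approximation before any decimal conversion happens (plain JavaScript, no decimal support required):

```javascript
// 0.1 is lexed straight to binary64; the stored value is the nearest
// double, not one tenth:
(0.1).toFixed(20);  // "0.10000000000000000555"

// Any later widening of that value to decimal (as the Kona mixed-mode
// fix would do) faithfully preserves this binary error -- the same
// effect that makes the classic comparison below fail:
0.1 + 0.2 === 0.3;  // false
```

This is why no coercion rule applied at operation time can recover the "0.1" the programmer wrote: the information is lost at lexing.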

The design details of the integrations of multiple numeric data types 
(potentially not just Number and Decimal) and questions such as whether and how 
a dynamically typed language like ECMAScript should support such generic 
algorithms will have long lasting impact on the usability of the language.  My 
perspective in Kona, when we talked about Decimal, was that these are Harmony 
scale issues that must be carefully thought through and that they should not be 
prematurely and irrevocably resolved as a consequence of an accelerated effort 
to include Decimal in ES3.1.

Allen

-Original Message-
From: es-discuss-boun...@mozilla.org [mailto:es-discuss-
boun...@mozilla.org] On Behalf Of Brendan Eich
Sent: Friday, January 09, 2009 2:34 PM
To: Waldemar Horwat; David-Sarah Hopwood; Sam Ruby
Cc: es-discuss
Subject: Re: Revisiting Decimal

Sam's mail cited below has gone without a reply for over a month.
Decimal is surely not a high priority, but this message deserves some
kind of response or we'll have to reconstruct the state of the
argument later, at probably higher cost.

I was not at the Redmond meeting, but I would like to take Sam's word
that the cohort/toString issue was settled there. I heard from Rob
Sayre something to this effect.

But in case we don't have consensus, could any of you guys state the
problem for the benefit of everyone on this list? Sorry if this seems
redundant. It will help, I'm convinced (compared to no responses and
likely differing views of what the problem is, or what the consensus
was, followed months later by even more painful reconstruction of the
state of the argument).

The wrapper vs. primitive issue remains, I believe everyone agrees.

/be

On Dec 4, 2008, at 2:22 PM, Sam Ruby wrote:

 2008/12/4 Brendan Eich bren...@mozilla.com:

 Sam pointed that out too, and directed everyone to his test-
 implementation
 results page:
 http://intertwingly.net/stories/2008/09/20/estest.html
 Indeed we still have an open issue there ignoring the wrapper one:

 [Sam wrote:] I think the only major outstanding semantic issue was
 wrapper
 objects; apart from that, the devil was in the detail of spec
 wording.[End Sam]

 No, the cohort/toString issue remains too (at least).

 With a longer schedule, I would like to revisit that; but as of
 Redmond, we had consensus on what that would look like in the context
 of a 3.1 edition.

 From where I sit, I find myself in the frankly surreal position that
 we are in early December, and there are no known issues of consensus,
 though I respect that David-Sarah claims that there is one on
 wrappers, and I await his providing of more detail.

 /be

 - Sam Ruby



Re: Revisiting Decimal (generic algorithms)

2009-01-16 Thread Brendan Eich

On Jan 16, 2009, at 2:25 PM, Allen Wirfs-Brock wrote:

I think that carry dual encodings (both binary and decimal) for  
each numeric literal might be a reasonable approach as long as we  
only have two types.


Excluding small integer literals, most numeric literals in my  
experience are small enough that carrying 8 + 16 = 24 bytes loses, but  
you're right that this is all implementation detail. Still, the spec  
is informed by implementor feedback, to the point that it can't be  
developed in a vacuum or it might be ignored.



However, choosing that over maintaining the source form sounds 
like an implementation rather than a specification decision.


Speaking for Mozilla, we probably can't tolerate anything like  
carrying around two representations, or source forms, for number  
literals. I'd have to measure non-int literals to say for sure, but  
gut check says no.


I'm not saying multimethods are the only way forward. I'm genuinely  
interested in new thinking about numbers and decimal, because of that  
most-frequently-dup'ed bug:


https://bugzilla.mozilla.org/show_bug.cgi?id=5856

But I do not see a solution for it yet, and your point that we need to  
solve this just to get decimal+double into the language is right on.


/be


Re: Revisiting Decimal (generic algorithms)

2009-01-16 Thread David-Sarah Hopwood
Allen Wirfs-Brock wrote:
[...]
 function fuzz(a) { return a + 0.1}
 //not generic in Kona draft, fuzz(1m) === 1.10008881784197001... 
 (but see below)
 //Kona spec. uses binary floating point for all mixed mode operations
 
[...]
 This problem cannot be fixed simply by tweaking the coercion rules.
 It probably requires that numeric literals be treated as generic values
 that are only interpreted situationally as either binary or decimal values
 in the context of a particular operation.

I am not aware of any precedent for this approach in other languages, and
I'm very skeptical about whether it can be made to work in ECMAScript.
Consider

  function id(x) { return x; }

What is the result and type of id(0.1) in this approach, and why?

-- 
David-Sarah Hopwood


Re: Revisiting Decimal (generic algorithms)

2009-01-16 Thread David-Sarah Hopwood
David-Sarah Hopwood wrote:
 Allen Wirfs-Brock wrote:
 [...]
 function fuzz(a) { return a + 0.1}
 //not generic in Kona draft, fuzz(1m) === 
 1.10008881784197001... (but see below)
 //Kona spec. uses binary floating point for all mixed mode operations

 [...]
 This problem cannot be fixed simply by tweaking the coercion rules.
 It probably requires that numeric literals be treated as generic values
 that are only interpreted situationally as either binary or decimal values
 in the context of a particular operation.
 
 I am not aware of any precedent for this approach in other languages, and
 I'm very skeptical about whether it can be made to work in ECMAScript.
 Consider
 
   function id(x) { return x; }
 
 What is the result and type of id(0.1) in this approach, and why?

 - if binary 0.1, then we would have

 1m + 0.1 !== 1m + id(0.1)

   which breaks referential transparency (in the absence of side-effects)

 - if decimal 0.1m, then we break compatibility with ES3.

 - if the value remains generic, then such values must be supported at
   run-time as a third numeric type besides number and decimal, which
   seems unsupportably complex to me.

-- 
David-Sarah Hopwood ⚥



Re: Revisiting Decimal (generic algorithms)

2009-01-16 Thread Brendan Eich

On Jan 16, 2009, at 4:30 PM, Sam Ruby wrote:


Indeed.  This is the first time I understood (at a high level) the
request.  I'm not saying it wasn't explained before, or even that it
wasn't explained well, but this is the first time I understood it
(again at a high level, questions on details below).


It's good to get this understood widely -- it probably did not come  
through in the notes from Kona (useful as they were). Sorry on my part  
for that, kudos again to Allen.




Like Allen says later, most small integers (i.e., the ones that fit
exactly in a double precision binary value) can simply be retained as
binary64.


Or machine ints -- ALU & FPU still.



I suspect that covers the majority of constants in deployed
javascript.  Now let's consider the rest.

First, Allen's example:

function fuzz(a) { return a + 0.1}

Where fuzz(0.1)===0.2 and fuzz(0.1m)===0.2m

The only way I can see that working is if the constant is initially in
a form that either is readily convertible to source, or stores both
values.  I don't understand how multimethods (on +?) affect this.
If I'm missing something, please let me know (or simply provide a
pointer to where I can educate myself).


I did, see followup links to reading-lists, from which I'll pick a  
specific link:


http://www.artima.com/weblogs/viewpost.jsp?thread=101605



Continuing on, let's tweak this a bit.

function fuzz(a) {var b=0.1; return a+b}

I would suggest that the expectation would be that this function
behaves the same as the previous one.


It had better!



My interpretation is that this means that internally there are three
data types: one that is double, one that is decimal, and one that
somehow manages to be both.  "Internally" in the sense that this implementation
detail ideally should not be visible to the application programmer.
Again, I could be wrong (on the need for three data types, not on the
opinion that this should not be visible), but pressing on...


No, Allen allowed for that, but of course this generic type has to  
propagate at runtime through variable and function abstraction.




function is_point_one(a) {var b=0.1; return b===a}

Is the expectation that this would return true for *both* 0.1 and
0.1m?


I don't see how this could work.



 This leads to a rather odd place where it would be possible for
triple equals to not be transitive, i.e. a===b and b===c but not
a!===c.


Er, a!==c ;-).



 That alone is enough to give me pause and question this
approach.


Me too.



Continuing the trip down this looking glass, what should typeof(0.1)
return?  You might come to a different conclusion, and again I might
be missing something obvious, but if these Schrödinger's catstants
(sorry for the bad pun) can be assigned to variables, then I would
assert that typeof(0.1) and typeof(0.1m) should both be 'number'.


It should be clear that I won't go this far. My reply to Allen was  
gently suggesting that his suggestion would not fly on implementation  
efficiency grounds, but I think you've poked bigger holes. I'm still  
interested in multimethods, including for operators.




Finally, this has bearing on the previous JSON discussion.  If it is
possible to defer the binding of a literal value to a particular
variant of floating point (i.e., binary vs. decimal), then there no
longer is a need for a JSON parse to prematurely make this
determination.

I suspect that these last two paragraphs will make Kris happy.


The previous paragraphs should induce unhappiness that trumps that  
illusory joy, though.




But I'll stop here.  I may very well be out in the weeds at this
point.  But my initial take is that this approach produces a different
(and somehow more fundamental) set of surprises than the approach
we had previously agreed on, and furthermore it isn't clear to me that
this approach can be implemented in a way that has negligible
performance impact for applications that never make use of decimal.

But I hope that one or both of you (or anybody else) can point out
something that I'm missing.


Not me, and I see David-Sarah has observed that dual representation  
cannot be confined to literals.


But I'd still like to encourage thinking outside of the narrow ES3-ish 
box in which Decimal has been cast. If not multimethods, then some other 
approach that is novel (to ES, not to well-researched language design) is needed.


/be


Re: Revisiting Decimal (generic algorithms)

2009-01-16 Thread Brendan Eich

On Jan 16, 2009, at 5:30 PM, David-Sarah Hopwood wrote:


David-Sarah Hopwood wrote:

function id(x) { return x; }

What is the result and type of id(0.1) in this approach, and why?


- if binary 0.1, then we would have

1m + 0.1 !== 1m + id(0.1)

  which breaks referential transparency (in the absence of side-effects)


- if decimal 0.1m, then we break compatibility with ES3.

- if the value remains generic, then such values must be supported at
  run-time as a third numeric type besides number and decimal, which
  seems unsupportably complex to me.


Agreed on all points.

Have you looked at multimethods in Cecil?

http://www.cs.washington.edu/research/projects/cecil/pubs/cecil-oo-mm.html
http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.48.8502

Good discussion, let's keep it going.

/be


Re: Revisiting Decimal (generic algorithms)

2009-01-16 Thread Sam Ruby
On Fri, Jan 16, 2009 at 8:30 PM, Brendan Eich bren...@mozilla.com wrote:

 Like Allen says later, most small integers (i.e., the ones that fit
 exactly in a double precision binary value) can simply be retained as
 binary64.

 Or machine ints -- ALU & FPU still.

Agreed.  Those values that could fit in int32 before could continue to do so.

 I suspect that covers the majority of constants in deployed
 javascript.  Now let's consider the rest.

 First, Allen's example:

 function fuzz(a) { return a + 0.1}

 Where fuzz(0.1)===0.2 and fuzz(0.1m)===0.2m

 The only way I can see that working is if the constant is initially in
 a form that either is readily convertible to source, or stores both
 values.  I don't understand how multimethods (on +?) affect this.
 If I'm missing something, please let me know (or simply provide a
 pointer to where I can educate myself).

 I did, see followup links to reading-lists, from which I'll pick a specific
 link:

 http://www.artima.com/weblogs/viewpost.jsp?thread=101605

I must be dense.  My previous understanding of multimethods was that
it depends on the assumption that the type of each argument can be
determined.  That article doesn't change that for me.

 Continuing on, let's tweak this a bit.

 function fuzz(a) {var b=0.1; return a+b}

 I would suggest that the expectation would be that this function
 behaves the same as the previous one.

 It had better!

So, here's the problem.  At the point of the ';' in the above, what is
the result of typeof(b)?

The problem gets worse rapidly.  The above may seem to be appealing at
first, but it degenerates rapidly.  Consider:

function fuzz(a) {var b=0.05; var c=0.05; var d=b+c; return a+d}

Should this return the same results as the previous fuzz functions?
What is the value of typeof(d)?

 My interpretation is that this means that internally there are three
 data types, one that is double, one that is decimal, and one that
 somehow manages to be both.  Internally in that this implementation
 detail ideally should not be visible to the application programmer.
 Again, I could be wrong (in the need for three data types, not on the
 opinion that this should not be visible), but pressing on...

 No, Allen allowed for that, but of course this generic type has to propagate
 at runtime through variable and function abstraction.

I don't follow.

 function is_point_one(a) {var b=0.1; return b===a}

 Is the expectation that this would return true for *both* 0.1 and
 0.1m?

 I don't see how this could work.

Before proceeding, let me simplify that:

function is_point_one(a) {return a===0.1}

The point of fuzz was that 0.1 as a literal would be interpreted as
a binary64 or as a decimal128 based on what it was combined with.  Why
would this example be any different?

  This leads to a rather odd place where it would be possible for
 triple equals to not be transitive, i.e. a===b and b===c but not
 a!===c.

 Er, a!==c ;-).

  That alone is enough to give me pause and question this
 approach.

 Me too.

 Continuing trip down this looking glass, what should typeof(0.1)
 return?  You might come to a different conclusion, and again I might
 be missing something obvious, but if these Schrödinger's catstants
 (sorry for the bad pun) can be assigned to variable, then I would
 assert that typeof(0.1) and typeof(0.1m) should both be 'number'.

 It should be clear that I won't go this far. My reply to Allen was gently
 suggesting that his suggestion would not fly on implementation efficiency
 grounds, but I think you've poked bigger holes. I'm still interested in
 multimethods, including for operators.

I don't see how this reasonably can be done half way.

And while multimethods are appealing for other reasons, I don't think
they relate to what Allen is suggesting.

- Sam Ruby


Re: Revisiting Decimal (generic algorithms)

2009-01-16 Thread Brendan Eich

On Jan 16, 2009, at 5:54 PM, Sam Ruby wrote:


http://www.artima.com/weblogs/viewpost.jsp?thread=101605


I must be dense.  My previous understanding of multimethods was that
it depends on the assumption that the type of each argument can be
determined.  That article doesn't change that for me.


Good! :-P

Not static typing, mind you; but typing nonetheless.



My interpretation is that this means that internally there are three
data types, one that is double, one that is decimal, and one that
somehow manages to be both.  Internally in that this implementation
detail ideally should not be visible to the application programmer.
Again, I could be wrong (in the need for three data types, not on the
opinion that this should not be visible), but pressing on...


No, Allen allowed for that, but of course this generic type has to
propagate at runtime through variable and function abstraction.


I don't follow.


My reading of Allen's message was that the generic type was for 
literals only, and would collapse (as in a superposed wave function) 
into decimal or double on first operational use. But use can be 
delayed through variable or parameter assignment. So the generic, or 
both-double-and-decimal, type must be used more widely than just for 
literal terms at runtime.




function is_point_one(a) {var b=0.1; return b===a}

Is the expectation that this would return true for *both* 0.1 and
0.1m?


I don't see how this could work.


Before proceeding, let me simplify that:

function is_point_one(a) {return a===0.1}

The point of fuzz was that 0.1 as a literal would be interpreted as
a binary64 or as a decimal128 based on what it was combined with.  Why
would this example be any different?


It wouldn't, but that breaks one of three important properties 
(referential transparency, compatibility, or implementation 
efficiency), as DSH has pointed out.



It should be clear that I won't go this far. My reply to Allen was
gently suggesting that his suggestion would not fly on implementation
efficiency grounds, but I think you've poked bigger holes. I'm still
interested in multimethods, including for operators.


I don't see how this reasonably can be done half way.


Right.



And while multimethods are appealing for other reasons, I don't think
they relate to what Allen is suggesting.


They do not -- they are the only sane alternative that I know of.

/be


Re: Revisiting Decimal (generic algorithms)

2009-01-16 Thread David-Sarah Hopwood
Brendan Eich wrote:
 On Jan 16, 2009, at 5:30 PM, David-Sarah Hopwood wrote:
 David-Sarah Hopwood wrote:
 function id(x) { return x; }

 What is the result and type of id(0.1) in this approach, and why?

 - if binary 0.1, then we would have

 1m + 0.1 !== 1m + id(0.1)

   which breaks referential transparency (in the absence of side-effects)

 - if decimal 0.1m, then we break compatibility with ES3.

 - if the value remains generic, then such values must be supported at
   run-time as a third numeric type besides number and decimal, which
   seems unsupportably complex to me.
 
 Agreed on all points.

A final nail in the coffin for the last (three-type) option above:

In ES3, the expression Number(0.1 + 0.1 + 0.1) would give
  Number(0.1) + Number(0.1) + Number(0.1) ==
0.3000444089209850062616169452667236328125

In the three-type option, it would give
  Number(0.3) ==
0.299988897769753748434595763683319091796875

(Decimal expansions are computed using SpiderMonkey's implementation
of toFixed. The point is simply that they are different.)

So the three-type option does not maintain compatibility, at least
if we are concerned with exact values.
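
The two expansions above can be reproduced in any current engine (plain JavaScript; toFixed supplies enough digits to expose the difference):

```javascript
// ES3 semantics: each 0.1 is a double, and the addition errors
// accumulate upward:
const summed = 0.1 + 0.1 + 0.1;

// The three-type option would instead collapse the whole constant
// expression to the double nearest 0.3, which errs downward:
const literal = 0.3;

summed.toFixed(20);  // "0.30000000000000004441"
literal.toFixed(20); // "0.29999999999999998890"
summed === literal;  // false -- the exact values differ
```

So a program that compares an accumulated sum against a literal would see its behavior change under the three-type option, which is the incompatibility being claimed.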

It could be argued that most ES3.x programs are probably not relying
on the exact errors introduced by double-precision IEEE 754, but that
seems risky to me. By that argument, ignoring performance, you could
unconditionally implement all numbers as decimals, and I don't think
many people here would accept that as being compatible.

Compatibility could, in principle, be maintained by adding a third
kind of literal for generic values, with a different suffix. However,
I think it is likely that unless generic values used the suffix-free
numeric literal form, they would remain too rarely used to make any
difference to the issue that Allen is concerned about.

 Have you looked at multimethods in Cecil?

I've previously studied Cecil's multimethods and type system in
detail (it's very nicely designed IMHO), but I'm not sure that it
is what we need here. Multimethods address the problem of how to
concisely define type-dependent functions, but the implementations
of those functions still have to be given explicitly for each type
combination on which the behaviour differs (ignoring inheritance
and subtyping, which I don't think are relevant here).

To address the problem raised by Allen, you would probably want to
implicitly define implementations that used different types for
constants, depending on the argument types to a given function
(and it is not clear how that would work for mixed-type arguments).

In any case, I think we first need to decide what the semantics
would be *after* any desugaring of multimethods.

-- 
David-Sarah Hopwood


Re: Revisiting Decimal (generic algorithms)

2009-01-16 Thread Mark S. Miller
On Fri, Jan 16, 2009 at 5:34 PM, Brendan Eich bren...@mozilla.com wrote:

 Have you looked at multimethods in Cecil?

 http://www.cs.washington.edu/research/projects/cecil/pubs/cecil-oo-mm.html
 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.48.8502


On your recommendation, I have. I really wanted to like it. I really tried
to like it. In the end I was repelled in horror at its complexity.



 Good discussion, let's keep it going.


Indeed. After I made a simple proposal 
https://mail.mozilla.org/pipermail/es-discuss/2009-January/008535.html,
Michael Daumling pointed out that Adobe had made a similar proposal that had
been rejected:

On Fri, Jan 9, 2009 at 7:56 AM, Michael Daumling mdaeu...@adobe.com wrote:

 The discussion about operator overloading quickly went away from the
 JavaScript'ish approach that ExtendScript and your proposal used towards
 generic functions. At some time, the discussion stranded in areas too exotic
 for me. There is a rationale here:
 http://wiki.ecmascript.org/doku.php?id=discussion:operators

The objections listed there are

I think this feature is too weak to be included. Here are some reasons why I
 think that:

- Uncontrollable subtleties in dispatch: Adding e.g. a == operator to one
class and then comparing an instance x of that class to a value y of
another type means that the result can easily differ depending on whether
the programmer writes x == y or y == x. (If y has an operator == too,
then its operator will be preferred in the latter case.) The most the author
of the == operator can do about this is to add types to the operator's
signature, so that strict mode catches the bug or the program fails
predictably at run-time.

 I'd argue that this is a feature, not a bug. Whether an operator is
commutative depends on the meaning of that operator on that data type. x *
y should mean the same as y * x if they are scalar numbers, but not if
they are matrices.
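
The matrix case is easy to check concretely (a minimal 2×2 multiply in plain JavaScript, using nested arrays purely for illustration):

```javascript
// 2x2 matrix product: rows of x against columns of y.
function matMul(x, y) {
  return [
    [x[0][0]*y[0][0] + x[0][1]*y[1][0], x[0][0]*y[0][1] + x[0][1]*y[1][1]],
    [x[1][0]*y[0][0] + x[1][1]*y[1][0], x[1][0]*y[0][1] + x[1][1]*y[1][1]],
  ];
}

const A = [[1, 2], [3, 4]];
const B = [[0, 1], [1, 0]]; // column-swap permutation matrix

matMul(A, B); // [[2, 1], [4, 3]] -- swaps A's columns
matMul(B, A); // [[3, 4], [1, 2]] -- swaps A's rows
```

Since A*B and B*A differ, an operator-dispatch design that silently picks whichever operand's == or * it finds first must let the operator's author, not the language, decide whether order matters.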



-
- No inheritance: in almost all cases we would wish that if instances
of A and B are comparable with a certain semantics then instances of their
respective subclasses C and D are too.

 That objection doesn't apply to my proposal. (I'm not sure it does to
Adobe's either.)


-
- No compositionality: As the operators are tied to classes, a program
that wishes to use two separately authored classes A and B cannot define
their relations in terms of operators, the classes must be altered because
they do not know about each other.

 Again, I'd argue that this is a feature, not a bug. Likewise, if I see the
expression x.foo(y) and the meaning of the foo operation does not treat
its operands opaquely, if neither x nor y know about each other's interface,
then I'd expect the operation to fail. If some party outside of x and y
could define a generic foo that could make this operation succeed anyway,
I'd consider that a bug.


-

 Including operators as currently proposed would probably give us a headache
 if we wish to introduce a more powerful feature (probably based on some sort
 of ad-hoc overloading) in the future.

Functional folks often refer to OO polymorphism (or late binding) as
"ad-hoc polymorphism", to distinguish it from their "parametric polymorphism".
If this is what is meant, then my proposal and Adobe's both provide ad-hoc
polymorphism. If, as I suspect, something else is meant, I await hearing
what it might be.


-- 
   Cheers,
   --MarkM