Re: Proposal: Math.add, Math.sub

2019-05-11 Thread Sam Ruby
On Sat, May 11, 2019 at 7:38 PM Ates Goral wrote:
>
> (Percolating a comment I made in a thread earlier.)
>
> # Math.add
>
> ```
> Math.add = function (a, b) {
>   return a + b;
> };
> ```

What should Math.add('1', '2') produce?
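
(For context, the coercion trap behind that question, in today's JS:)

  '1' + '2'                   // => '12' (+ on strings concatenates)
  Number('1') + Number('2')   // => 3    (if Math.add were specified to coerce)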

- Sam Ruby
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Optional assignment operator

2018-07-04 Thread Sam Ruby
On Wed, Jul 4, 2018 at 7:12 PM, Jacob Pratt wrote:
> I've been having this thought recently, after running across a potential use
> case in practice. There will likely be conditional accessors at some point
> in the future (with optional chaining), but there will not be conditional
> assignment.
>
> My thought was to have the following:
> this.foo ?= params?.foo;
> which can be desugared to
> if (($ref = params?.foo) !== undefined) { this.foo = $ref; }
>
> I would strictly check for undefined, rather than nullish, as anything other
> than undefined would indicate that a value is present that can be set. If no
> value is present (such as a missing key on an object), nothing would be set.
> A reference must be used for the general case, as the object being assigned
> (the RHS) could be a function or getter with side-effects.
>
> Not sure if it should be ?= or =?, as it would look somewhat odd (IMO) for
> things like ?+= or +=?.
>
> Initial thoughts?

Perl and Ruby have "||=" and "&&=" operators.  They don't strictly
check for undefined in either language.

These operators are frequently used (in particular, "||=").  I do miss them.

Looking at the node.js source:

$ find . -name '*.js' -type f | xargs egrep ' (\S+) = \1 \|\| ' | wc -l
   1416

$ find . -name '*.js' -type f | xargs egrep ' if \(!(\S+)\) \1 = ' | wc -l
 497

Nearly 2K occurrences in one code base.
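
A sketch of the two idioms those greps match, plus the proposed spelling
(the object and property names here are illustrative):

  opts.timeout = opts.timeout || 120;      // pattern behind the first grep
  if (!opts.timeout) opts.timeout = 120;   // pattern behind the second grep
  // proposed in this thread: this.foo ?= params?.foo;

(Logical assignment operators ||=, &&=, and ??= were eventually
standardized in ES2021.)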

> Jacob Pratt

- Sam Ruby
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Are there any 64-bit number proposals under consideration?

2015-11-12 Thread Sam Ruby
On Fri, Nov 13, 2015 at 1:00 AM, John Lenz <concavel...@gmail.com> wrote:
> By this I mean, a type usable by mere mortals, not generated C++ code.
>
> Maybe something like:
>
> var v64 = LongNumber.from('2e58');
> v64 = LongNumber.add(v64, 2);
>
> It doesn't have to be pretty, just reasonable, and it should perform close
> to native speed.

Word of advice, whatever you do, do NOT mention IEEE 754.  Trust me.  :-)
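
(For later readers: arbitrary-precision integers eventually shipped as
BigInt in ES2020; a minimal sketch:)

  let v = 2n ** 58n;   // BigInt literal and exponentiation
  v = v + 2n;          // BigInt-only arithmetic; mixing BigInt and Number throws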

- Sam Ruby
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Existential Operator / Null Propagation Operator

2015-06-02 Thread Sam Ruby
On Tue, Jun 2, 2015 at 1:31 PM, Sander Deryckere sander...@gmail.com wrote:


 2015-06-02 18:57 GMT+02:00 Brendan Eich bren...@mozilla.org:

 You might hope for that, but as we both noted, `?[` is not going to fly.
 Don't break the (minified) Web.


 Which is why my proposal was about `??`. I believe there's currently no
 valid way to use a double question mark in  JS, so even `??[` should be easy
 to figure out what it means.


 The prefix idea generalizes:

 ?obj[key]
 obj[?key]
 obj[key1][?key2]

 and if you are not using computed property names, rather literal ones:

 obj.?prop1
 etc.

 I found this syntax to conflict with itself. As Andreas Rossberg says, what
 does `orders[?client.key].price` mean? Does it mean check if the client
 exists, and if not, return the price of the null order, or does it mean
 check if the order for this client exists, and return null if it doesn't?
 I don't see a way how both meanings can be made possible with this form of
 prefix notation.

Um, if I'm reading Brendan correctly, neither?

check if the client exists, and if not, return the price of the null order

===  orders[client.?key].price

check if the order for this client exists, and return null if it doesn't

=== orders[client.key].?price

I would suggest a third interpretation for `orders[?client.key].price`:

=== (orders ? orders[client.key] : null).price

I think that the problem here isn't that it is ambiguous, it is that
it isn't obvious.  Something that might be more obvious but requires
an additional character: `orders.?[client.key].price`.

More precisely, the suggestion is to standardize on .? and allow it to
be followed by either a simple name, a square bracket, or a left
paren.
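
For concreteness, the three suggested forms (sketch only; `.?` was never
valid syntax, so a rough desugaring is shown separately):

  obj.?prop      // followed by a simple name
  obj.?[key]     // followed by a square bracket
  fn.?(args)     // followed by a left paren

  // rough null-guarded desugaring of the first form:
  (obj == null) ? undefined : obj.prop

ES2020's optional chaining ultimately standardized these three positions,
spelled ?. , ?.[ and ?.( .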

 ___
 es-discuss mailing list
 es-discuss@mozilla.org
 https://mail.mozilla.org/listinfo/es-discuss

- Sam Ruby
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Why do JSON, Date, and Math exist?

2012-02-18 Thread Sam Ruby
On Fri, Feb 17, 2012 at 8:22 PM, Allen Wirfs-Brock
al...@wirfs-brock.com wrote:

 And my recollection is that the w3c had already declined to specify JSON
 encoding/decoding functions as part of html5.

I don't believe that this was ever brought before the HTML WG, and I
don't see any recorded decision on the matter.  That being said, I
personally believe that ECMA TC39 is a fine place to have standardized
this function.

- Sam Ruby
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: A directive to solve the JavaScript arithmetic precision issue

2011-08-15 Thread Sam Ruby
On Mon, Aug 15, 2011 at 1:33 PM, David Bruant david.bru...@labri.fr wrote:

 Thoughts?

+1

 David

- Sam Ruby
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: HTML5 spec. seems to unnecessarily ban strict mode event handlers

2011-02-03 Thread Sam Ruby

On 02/03/2011 05:00 PM, Allen Wirfs-Brock wrote:

I was browsing Kangax's strict mode test result page
(http://kangax.github.com/es5-compat-table/strict-mode/) and I
noticed that he listed the recognition of a "use strict" directive in an
event handler as a non-standard feature that he tests for.  This
piqued my curiosity, as my recollection was that the HTML5 event handler
content attribute was specified to be text that is an ECMAScript
FunctionBody.  FunctionBody may contain a Directive Prologue that
includes a "use strict" directive so, from an ECMAScript perspective,
there shouldn't be anything non-standard about a strict mode event
handler.

To be sure, I checked the event handler section of the HTML5 spec
(http://dev.w3.org/html5/spec/Overview.html#event-handler-attributes)
and to my surprise I discovered that it specifies the creation of the
handler function in a manner that, at first glance, seems to
explicitly cause the presence of a "use strict" directive to be
ignored.  Essentially it seems to specify that event handlers
specified using the event handler attribute are never executed in
ECMAScript 5 strict mode.  I don't know whether or not this was
intentional, but it certainly seems wrong.  The strictness of an
ECMAScript function is an internal and local characteristic of the
function.  For an ECMAScript host to say that a "use strict" directive
is ignored is really no different from saying that IfStatements or
any other syntactically valid element of a FunctionBody will be
ignored.

The HTML5 spec. gets into this trouble because of the way it uses the
abstract operation for creating function objects defined by section
13.2 of the ES5 specification
(http://www.ecma-international.org/publications/standards/Ecma-262.htm).
In step 2 of the algorithm in HTML5 6.1.6.1 it unconditionally uses
False as the Strict parameter to the ES5 13.2 algorithm.  That might
seem to exclude the function from strict mode; however, that isn't
actually the case.  All the Strict parameter to 13.2 controls is
whether or not poison-pill properties for 'caller' and 'arguments'
are created for the function object.  The semantics of strict mode
are specified throughout the ES5 specification and are controlled by the
actual lexical occurrence of a "use strict" directive.  The Strict
parameter to 13.2 does not alter those semantics.

The HTML5 spec. also contains another related bug.  Step three says
"If the previous steps failed to compile the script, then ..." where
"the previous steps" pretty clearly references the use of ES5 13.2 in
the immediately preceding step 2.  However, there is nothing in ES5
13.2 that concerns the compilation of ECMAScript source text.
Instead, 13.2 expects to be passed a valid FunctionBody.  That
validation (compilation) must occur somewhere else.

It appears to me that these problems are probably the result of the
HTML5 algorithm being patterned after the wrong parts of the ES5
spec.  The appropriate part of the ES5 spec. to use as a model is
steps 8-11 of ES5 15.3.2.1.  This is the definition of the Function
constructor.  These steps correctly take care of parsing the
FunctionBody and handling any resulting syntax errors.  It also calls
13.2 with a correct Strict parameter.  Replacing HTML5 6.1.6.1 steps
2-3 with steps modeled after ES5 15.3.2.1 steps 8, 9, and 11 (step
10 is not applicable) should correct these issues.

Finally, Kangax also lists as a non-standard feature the
recognition of strict code as the string argument to setTimeout.  I
couldn't find anything in the HTML5 spec. that could be interpreted as
excluding strict ECMAScript code in this context.
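
A minimal sketch of the point that strictness is lexical, using the
Function constructor that ES5 15.3.2.1 defines (the handler body here is
illustrative):

  // The directive prologue inside the body, not any host-supplied
  // Strict flag, is what makes the resulting function strict:
  var handler = new Function("event", "'use strict'; return this;");
  handler.call(undefined);   // undefined: the strict body keeps `this` as passed
  // A body without the directive would see the global object instead.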


Any chance you could open one or more bugs on this?

http://w3.org/brief/MjA2

Doing it via the bugzilla interface gives you an increased ability to be 
notified and participate, but you can also use the form directly on the 
W3C spec itself:


http://dev.w3.org/html5/spec/#status-of-this-document

Or the form at the bottom of every page on the WHATWG spec:

http://www.whatwg.org/specs/web-apps/current-work/multipage/

Whichever way you enter it, it ends up in the same place.

- Sam Ruby
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: value_types + struct+types = decimal

2010-10-18 Thread Sam Ruby
On Mon, Oct 18, 2010 at 6:57 PM, Brendan Eich bren...@mozilla.com wrote:

 You could use frozen binary data to implement the representation of a value 
 type, whose operators and literal syntax would come from its object-like 
 clothing (whether declarative via new syntax or using some Proxy-like API, 
 details TBD).

Any reason to believe that would necessarily incur significant
performance overhead?  If not, that's fine with me.

- Sam Ruby
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Object.eq is ready for discussion

2010-09-05 Thread Sam Ruby
On Sun, Sep 5, 2010 at 3:28 PM, Brendan Eich bren...@mozilla.com wrote:

 The eq name is freakishly short, which might up the odds of it not 
 colliding with existing user-level extensions to Object

http://api.jquery.com/eq/

- Sam Ruby
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: simple shorter function syntax

2010-07-25 Thread Sam Ruby
On Sun, Jul 25, 2010 at 7:57 PM, Maciej Stachowiak m...@apple.com wrote:

 Good point about the escaping hazard. I think # may look funny to people
 because it is a noisy symbol and also because it is a comment delimiter in
 many languages. Two other characters totally disallowed in the syntax are @
 and `, I wonder if either of those would be more visually pleasing:
 [0, 1, 2, 3].map( #(x) {x * x} )
 [0, 1, 2, 3].map( `(x) {x * x} )
 [0, 1, 2, 3].map( @(x) {x * x} )
 I also wonder if using a strictly binary operator might be workable without
 creating syntax ambiguities:
 [0, 1, 2, 3].map( ^(x) {x * x} )
 [0, 1, 2, 3].map( *(x) {x * x} )
 [0, 1, 2, 3].map( %(x) {x * x} )

The Ruby syntax for the above is as follows:

[0,1,2,3].map {|x| x*x}

(try it in 'irb' to see what I mean)

While I don't believe that would fly here, perhaps adding parens
around the function would:

[0,1,2,3].map({|x| x*x})
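
For reference, the arrow syntax that ES2015 eventually standardized
expresses the same thing:

[0, 1, 2, 3].map( x => x * x )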

- Sam Ruby
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Structs

2010-06-02 Thread Sam Ruby

On 06/02/2010 03:52 AM, Jason Orendorff wrote:



I'll still maintain that the choice that ECMA 334 takes, namely
that the assignment to b in the example above makes a mutable
copy, is a valid choice.


I would expect
   a[0].x = 3;
to modify a[0], not a temporary copy of a[0]. How do you propose to
make that work in ES?


I'll note that that is not the way strings work today:

a = 'abc';
a[0] = 'x';

That being said, I'll agree that a[0].x = 3 would be a handy thing to 
have.  (The clumsy alternative would be to require users to do a[0] = 
new TA(...);).



-j
js-ctypes: https://wiki.mozilla.org/Jsctypes/api


Thanks for the pointer to js-ctypes! I was unaware of this previously.

- Sam Ruby



___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Structs

2010-06-02 Thread Sam Ruby

On 06/02/2010 02:03 PM, Brendan Eich wrote:

On Jun 2, 2010, at 7:50 AM, Brendan Eich wrote:


There's no issue if we separate value types from structs-for-WebGL,
but perhaps that is still premature. What I'd like to get back to is
value types are shallowly frozen, though. Otherwise we are
introducing a new optimization *and* user-programming hazard to the
language, beyond what objects as reference types created.


Sam pointed out in private mail that (my interpretation here) regardless
of value types being frozen, the structs for WebGL idea has aspects of
value types -- the structs in a typed array are allocated in-line,
that's the whole point -- and of reference types via element extraction
reifying a view object by which you can mutate the packed data.

So either we lose this refactoring equivalence:

b = a[i];
b.x = 42;
assert(a[i].x === 42);

This assertion botches with Sam's proposed semantics.


"Proposed" is a bit more than I had intended.  My intent was merely to 
inquire if the usage of the word "struct" in this context matches the 
usage of that term in another ECMA standard that I was familiar with.  I 
seem to have implied much more than that, for which I apologize.



Or else we lose the other equivalence, if we reify a struct-view-object
on element extraction (rvalue):

a[j] = a[i];
a[j].x = 42;
assert(a[i].x === 42);

This example is just like the one above, but uses a[j] for some valid
index j, instead of a b temporary. Note that with objects (reference
types), specifically a plain old Array of object instances, the
assertion holds. But with the sketchy struct semantics I've been
proposing, typed-array-of-struct elements behave like value types when
assigned to (lvalues), so this assertion botches.


FWIW, I believe that in C#, both assertions would fail.


Structs can't be fully (mutable) value types without breaking one
equivalence. Yet they can't be reference types or we lose the critical
in-line allocation and packing that WebGL needs, but this leaves them in
limbo, breaking another equivalence.

/be


- Sam Ruby

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Structs

2010-05-28 Thread Sam Ruby

On 05/28/2010 01:20 PM, Brendan Eich wrote:



That said, I'm not sure I understand why this should gate anything in
this thread. Value types should be frozen (shallowly immutable)
regardless, or other things break, e.g., they could no longer be
transparently passed by copy. C# got this wrong, and paid a semantic
complexity price we must avoid. Non-frozen structs should not be value
types. Frozen structs could be value types, or could be wrapped in
value types or something.


Agreed. Sam?


There are so many undefined terms in that paragraph that I don't know 
what I would be agreeing to.


For example, I don't know why the word "shallowly" was inserted there. 
Was that just reflex, or is there an actual requirement to allow object 
references inside a struct?  Looking at the syntax that Brendan put out 
for discussion purposes, it isn't clear to me how one would do that.



const TA =
Array.newTypedArray(fixed_length,
Object.newStructType({x:u32, y:u32, z:u32,
  r:u8, g:u8, b:u8, a:u8}));
let a = new TA(...);


Mark mentions "passed by copy".  What happens if I pass a[1] as a 
parameter on a method call?  Does something semantically different 
happen if the struct is frozen vs non-frozen?  Is that complexity worth it?


Putting that aside for the moment, my more specific question is: under 
what conditions would it ever be possible for a[1]===a[2] to be true?


There is much wrapped in that simple question.  I'm inferring a lot from 
the discussion: typed arrays have a fixed number of elements, each of 
which has a fixed length.  It should be possible for implementations to 
store entire arrays in contiguous storage.


As such, a[1] and a[2] in a typed array can never be the same object. 
Which means that they can never be ===, much less egal.  By contrast, 
they could conceivably be the same object in a classic Array, 
depending on how they were constructed.


To facilitate discussion, I toss out the following:

  var a = new Array();
  a[0] = 'a';
  a[1] = 'ab';
  a[0] += 'b';

  if (a[0] === a[1]) { ... }

To the casual developer, I will assert that the fact that these strings 
are treated as being equal is an indication that they have the same 
value (i.e., sequence of bits) and not an indication that they occupy 
the same storage location (which could conceivably be true, but that's 
not generally something the implementation directly exposes).


The real question here: is what is currently being called a "struct" more 
like a bit string with convenient methods of accessing slices, or is it 
more like an object, where no matter how close the sequences of bits 
are, two objects in different locations are never the same?


It might very well be that the requirements are such that the final 
conclusion will reluctantly be that it isn't worth trying to make === 
have a sane definition for these structs.  That just isn't something I 
would expect as a starting position.


- Sam Ruby
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Structs (was: Day 2 meeting notes)

2010-05-27 Thread Sam Ruby

On 05/25/2010 07:09 PM, Waldemar Horwat wrote:


Khronos group joint meeting:  The issues are visible endianness and
aliasing.  Khronos found that the operations that they needed for file
i/o were somewhat distinct from those they needed for graphics work.

Possible alternatives that don't expose endianness:  Lightweight
structs.  Khronos:  it's less efficient to index into those.  Why?

Arrays of structs are preferable to sets of homogeneous element arrays
due to cache and convenience effects.

Allen:  Trade off lots of different views of scalar values vs. a
richer set of SSE-like operations that might be a better semantic
match.  Example:  smalltalk bitblt allowed a rich set of
customizations.

Don't currently have scatter/gather operations.  Would like to have them.

Well-supported arrays of structs (as schemas) might be a satisfactory
compromise.  Consensus emerging around that solution.


Structs in ECMA-334 are value types, and consist of members that also 
are value types.  Would structs in future revisions of ECMA-262 share 
these characteristics?


Is it fair to assume that there would end up being a richer set of 
primitives to select from when composing a struct than simply object, 
boolean, number, string and the like?  Again, ECMA-334 defines the 
following:


http://en.csharp-online.net/ECMA-334:_11.1.5_Integral_types

Would something similar be envisioned for ECMA-262?


Khronos would like to continue developing their API as WebGL host
objects in the meantime.  This may lead to forking of APIs in the
developers' minds.  The possible danger is failure to reuse common
abstractions.

Need a champion.  Waldemar offered to work with Khronos to drive a proposal.


If structs are anything like value types, then I am interested in 
participating.  I'm particularly interested in working through the 
details of how arithmetic of integer like quantities would work.



___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


- Sam Ruby

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Structs

2010-05-27 Thread Sam Ruby

On 05/27/2010 03:05 PM, Brendan Eich wrote:

On May 27, 2010, at 11:38 AM, Sam Ruby wrote:


Well-supported arrays of structs (as schemas) might be a satisfactory
compromise. Consensus emerging around that solution.


Structs in ECMA-334 are value types, and consist of members that also
are value types. Would structs in future revisions of ECMA-262 share
these characteristics?

Is it fair to assume that there would end up being a richer set
of primitives to select from when composing a struct than simply
object, boolean, number, string and the like? Again, ECMA-334
defines the following:

http://en.csharp-online.net/ECMA-334:_11.1.5_Integral_types

Would something similar be envisioned for ECMA-262?


No, we did not envision adding more primitive types, type annotations,
conversion rules, and first-class struct declarations.

Something more like

const TA =
Array.newTypedArray(fixed_length,
Object.newStructType({x:u32, y:u32, z:u32,
r:u8, g:u8, b:u8, a:u8}));
let a = new TA(...);

... a[i].x ...


I'll note that with newStructType one could define a 128 bit quantity 
for representing a complex number.  That coupled with something like the 
following proposal would enable a complete set of complex arithmetic 
operators to be supported:


https://mail.mozilla.org/pipermail/es-discuss/2009-January/008535.html
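
For readers unfamiliar with the linked proposal, a minimal double-dispatch
sketch (every name here is hypothetical, not the proposal's actual API):

  function add(a, b) {
    if (a && typeof a.add === 'function') return a.add(b);    // ask the left operand
    if (b && typeof b.radd === 'function') return b.radd(a);  // then the right one
    return a + b;                                             // primitive fallback
  }

A complex or decimal struct type would supply add/radd methods, so mixed
operand types can still find a handler.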

- Sam Ruby
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Structs

2010-05-27 Thread Sam Ruby

On 05/27/2010 05:29 PM, Brendan Eich wrote:

On May 27, 2010, at 2:18 PM, Jonas Sicking wrote:


On Thu, May 27, 2010 at 12:05 PM, Brendan Eich bren...@mozilla.com
wrote:

If structs are anything like value types, then I am interested in
participating. I'm particularly interested in working through the
details
of how arithmetic of integer like quantities would work.


The struct-array idea for WebGL avoids the typed array aliasing
design, and
consequent byte-order leakage. But we are not envisioning new operator
overloading or literal syntax. At most a[i].x would be optimized to a
very
fast, packed member read or write, after scaling i and adding the
offset of
x. Any read would use ES's number type, I believe.


The other thing that Khronos really wanted to avoid was for a[i] in
the above expression not to create an object which is to be
initialized and GCed. Though maybe you're including that in the
scaling+offset algorithm.


Yes, I wrote a[i].x on purpose. Just a[i] with no .x or .y, etc. after
would reify a JS object.


What would the results of the following be:

  a[i] === a[i]

Or the following(*):

  b=a[i]; ...; b == a[i]

Or the following(*):

  a[0]=a[1]; ...; a[0] === a[1]

I hope that the answer is the same in all three cases.  If the answer is 
true in all three cases, then this is a value type in the ECMA 334 
sense of the word.  If the answer is false in all three cases, then I 
assert that many will find that to be surprising, particularly in the 
first expression.


- Sam Ruby

(*) where ... is a series of statements that do not affect the value 
of b or a.

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Structs

2010-05-27 Thread Sam Ruby

On 05/27/2010 08:36 PM, Brendan Eich wrote:

On May 27, 2010, at 5:21 PM, Sam Ruby wrote:


a[0] = a[1] could also be made to work with sealed objects reflecting
the struct elements of the arrays. The right-hand side reifies or finds
the memoized object reflecting a[1], the packed struct. The assignment
to a[0] then reads property values from the object (fat ugly numbers!)
and writes into the packed struct at a[0] according to the schema.


... and the value of a[0] === a[1] is?


With any kind of mutable object reflecting the packed struct, the answer
will be false unless we have value types. See

http://wiki.ecmascript.org/doku.php?id=strawman:value_types#hard_cases

search for ===.


And I previously argued that false would be unexpected; that's how I 
came to the conclusion that structs in ECMA 262, just like in ECMA 334, 
should be value types.



Value types were conceived of as shallowly frozen, but we could try to
relax that.


So it still seems to me one could avoid equating value types and these
schema structs. They are distinct proposals, but you could tie them
together. I think that would be a mistake at this point, especially
since WebGL folks have no need for operators and literals (as far as I
know).


It is not a matter of whether they need === operators to be defined,
but rather a matter of defining what the value of === operators is to be.

Nor, am I suggesting that === be overloadable.

I am suggesting that if a[0]===a[1] is to be true, and one can assume
that a[n] is a series of octets, then we are talking about strncmp
like behavior. And if we can assume strncmp, then I see no reason to
preclude implementations from using strncpy.


If you want this, whether === is overloadable, and WebGL (lots of folks,
let's say) want structs in the array to be mutable, then we run into the
conflict recorded in the wiki.

''Jason: === must be overloadable. Mark: no, Value Types mean === can do
a recursive, cycle-tolerant structural comparison. Jason: not convinced
that complexity is warranted.
...

Decision logic:

if (=== is overloadable) {
The need for egal goes up;
if (egal is added (it is not overloadable))
{0, -0} and NaN issues go away;
} else {
The need for egal goes down,
because === is well-defined as structural recursive etc. comparison;
}
Value Type means “shallow frozen”.''

Say we add egal, so one can tell the difference between a[0] and a[1]
even if they happen to contain the same bits, because that difference
might matter due to one of them being mutated to have different bits.


I don't follow that.

If a[0] and a[1] have the same bits, why would a[0] === a[1] ever be false?


Then do the objections to unfrozen value types go away? Cc'ing Mark.


I am not assuming that === be overloadable.  Nor am I presuming that 
value types are mutable.


Some people believe that the {0, -0} and NaN behaviors are a historical 
wart, to be avoided in all future work.  Others believe that consistency 
here is important.  I see merit in both sides, and in any case believe 
that the issue can be made to go away without requiring an egal method. 
 At most, all that is requires is the addition of methods to determine 
NaN-ness and Zero-ness.


Pseudo-code:

function ===(a,b) {
  if (strcmp(a,b)) {                    // bit patterns differ:
    return a.isZero() && b.isZero();    //   equal only for 0 vs -0
  } else {                              // bit patterns match:
    return !a.isNaN();                  //   equal unless NaN
  }
}
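
(Aside: the egal operation discussed above later shipped as Object.is in
ES2015, and it distinguishes exactly the two cases the pseudo-code
special-cases:)

  Object.is(NaN, NaN)   // true,  though NaN === NaN is false
  Object.is(0, -0)      // false, though 0 === -0 is true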

My assumption is that overloading the operators for value types, at 
some point in time, and based on a double-dispatch approach, will be seen 
as a good candidate for inclusion in some version of 
ECMAScript.  Furthermore, I am assuming that whatever is designed for 
value types should apply to structs -- unless there is a compelling 
reason not to.  As you put it: 'having a value types proposal that 
composes well with any struct and struct array schema proposal would 
be great.'


What I see in structs is the potential to define a 128 bit quantity that 
could be efficiently schlepped around.  I don't see a need to modify one 
-- instead, if you want a new value, you create one.  But I don't believe 
that the end result of 'a=b' should result in a situation where 'a !== 
b' for structs any more than it does today for either Numerics or 
for Objects (with the one notable exception being NaN...  meh).


Further, I see structs as a data type with no legacy.  And one where 
neither arithmetic nor comparison functions are super-duper-ultra 
performance critical.  By that I mean that the isZero and isNaN calls 
above are tolerable.


A combination of double-dispatch and existing ES objects could enable 
the creation of an infinite precision integer library -- written in pure ES.


A combination of double-dispatch and structs could enable the creation 
of 128 bit decimal objects.  Again, this could be written in pure ES. 
Should it prove popular, vendors may choose to optimize this, and perhaps 
even standardize the library.


Or not.  In fact, there need not be only one true decimal library.  If 
others have

Meeting Schedule?

2009-12-17 Thread Sam Ruby
It looks like http://wiki.ecmascript.org/doku.php?id=meetings:meetings
is in desperate need of a little TLC...

Based on the meeting minutes, I gather that the next meeting is
January 27-28, 2010 @ Mozilla/Mountain View.  Do we have soft plans
for the remainder of the year?  I'm looking for something more precise
than "near the end of alternating months" as I want to proactively
block my calendar.  In fact, what prompted me is a request that I
speak at a conference at the end of May that I may turn down if it
conflicts...

Thanks!

- Sam Ruby
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: ECMA TC 39 / W3C HTML and WebApps WG coordination

2009-09-25 Thread Sam Ruby

Maciej Stachowiak wrote:


On Sep 24, 2009, at 5:44 PM, Yehuda Katz wrote:

That sounds reasonable. There are really two issues. One is that there 
are parts of WebIDL that are unused. Another is that the parts of the 
spec themselves are fairly arcane and very implementor-specific. Consider:


interface UndoManager {
  readonly attribute unsigned long length;
  getter any item(in unsigned long index);
  readonly attribute unsigned long position;
  unsigned long add(in any data, in DOMString title);
  void remove(in unsigned long index);
  void clearUndo();
  void clearRedo();
};

I almost forget that I'm looking at something most widely implemented 
in a dynamic language when I look at that. Since this is most likely 
to be implemented in terms of ECMAScript, why not provide an 
ECMAScript reference implementation?


These methods do things that can't actually be implemented in pure 
ECMAScript, since they need to tie into the browser implementation and 
system APIs. So a reference implementation in ECMAScript is not possible.


I'll accept that it is a true statement that a pure ECMAScript 
implementation of these interfaces in Safari on Mac OSX wouldn't be 
possible.


Alternate perspective, one that I believe more closely matches the view 
of TC39: one could imagine an operating system and browser implemented 
either in ECMAScript or in a secure subset thereof.  In such an 
environment it would be highly unfortunate if the WebIDL for 
something as important as HTML5 and WebApps were written in such a way 
as to preclude the creation of a conforming ECMAScript implementation.


At this point, I'm not personally interested in discussions as to 
whether WebIDL is or is not the right way forward.  Anybody who wishes 
to invest their time in producing more useful documentation and/or 
reference implementations is not only welcome to do so, they are 
positively encouraged to do so.


Meanwhile, what we need is concrete bug reports of specific instances 
where the existing WebIDL description of key interfaces is done in a way 
that precludes a pure ECMAScript implementation of the function.


- Sam Ruby
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: ECMA TC 39 / W3C HTML and WebApps WG coordination

2009-09-25 Thread Sam Ruby
On Fri, Sep 25, 2009 at 5:57 AM, Anne van Kesteren ann...@opera.com wrote:
 On Fri, 25 Sep 2009 11:38:08 +0200, Sam Ruby ru...@intertwingly.net wrote:

 Meanwhile, what we need is concrete bug reports of specific instances
 where the existing WebIDL description of key interfaces is done in a way
 that precludes a pure ECMAScript implementation of the function.

 Is there even agreement that is a goal?

This was expressed by ECMA TC39 as a goal.  There is no agreement as
of yet to this goal by the HTML WG.

I'm simply suggesting that the way forward at this time is via
specifics, ideally in the form of bug reports.

 I personally think the catch-all pattern which Brendan mentioned is quite
 convenient and I do not think it would make sense to suddenly stop using it.
 Also, the idea of removing the feature from Web IDL so that future
 specifications cannot use it is something I disagree with since having it in
 Web IDL simplifies writing specifications for the (legacy) platform and
 removes room for error.

 Having Web IDL is a huge help since it clarifies how a bunch of things map
 to ECMAScript. E.g. how the XMLHttpRequest constructor object is exposed,
 how you can prototype XMLHttpRequest, that objects implementing
 XMLHttpRequest also have all the members from EventTarget, etc. I'm fine
 with fiddling with the details, but rewriting everything from scratch seems
 like a non-starter. Especially when there is not even a proposal on the
 table.

I agree that either getting a proposal on the table or bug reports is
the right next step.  I further agree that removal of function and/or
a wholesale switch away from Web IDL is likely to be a non-starter.

 Anne van Kesteren
 http://annevankesteren.nl/

- Sam Ruby
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


ECMA TC 39 / W3C HTML and WebApps WG coordination

2009-09-24 Thread Sam Ruby
At the upcoming TPAC, there is an opportunity for F2F coordination 
between these two groups, and the time slot between 10 O'Clock and Noon 
on Friday has been suggested for this.


To help prime the pump, here are four topics suggested by ECMA TC39 for 
discussion.  On these and other topics, there is no need to wait for the 
TPAC, discussion can begin now on the es-discuss mailing list.


 - - -

The current WebIDL binding to ECMAScript is based on ES3... this needs 
to more closely track the evolution of ES; in particular it needs to 
be updated to ES5 w.r.t. the Meta Object Protocol.  In the process, we 
should discuss whether this work continues in the W3C, is done as a 
joint effort with ECMA, or moves to ECMA entirely.


 - - -

A concern specific to HTML5: it uses WebIDL in a way that precludes 
implementation of these objects in ECMAScript (i.e., they can only be 
implemented as host objects), and an explicit goal of ECMA TC39 has been 
to reduce such cases.  Ideally ECMA TC39 and the W3C HTML WG would jointly 
develop guidance on developing web APIs, and the W3C HTML WG would apply 
that guidance in HTML5.


Meanwhile, I would encourage members of ECMA TC 39 who are aware of 
specific issues to open bug reports:


  http://www.w3.org/Bugs/Public/

And I would encourage members of the HTML WG who are interested in this 
topic to read up on the following emails (suggested by Brendan Eich):


https://mail.mozilla.org/pipermail/es5-discuss/2009-September/003312.html
  and the rest of that thread

https://mail.mozilla.org/pipermail/es5-discuss/2009-September/003343.html
  (not the transactional behavior, which is out -- just the
  interaction with Array's custom [[Put]]).

https://mail.mozilla.org/pipermail/es-discuss/2009-May/009300.html
   on an ArrayLike interface with references to DOM docs at the bottom

https://mail.mozilla.org/pipermail/es5-discuss/2009-June/002865.html
   about a WebIDL float terminal value issue.

 - - -

There are larger (and, at this time, less precise) concerns about 
execution scope (e.g., presumptions of locking behavior, particularly by 
HTML5 features such as local storage).  The two groups need to work 
together to convert these concerns into actionable suggestions for 
improvement.


 - - -

We should take steps to address the following willful violation:

  If the script's global object is a Window object, then in JavaScript,
  the this keyword in the global scope must return the Window object's
  WindowProxy object.

  This is a willful violation of the JavaScript specification current at
  the time of writing (ECMAScript edition 3). The JavaScript
  specification requires that the this keyword in the global scope
  return the global object, but this is not compatible with the security
  design prevalent in implementations as specified herein. [ECMA262]

- Sam Ruby
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: ECMA TC 39 / W3C HTML and WebApps WG coordination

2009-09-24 Thread Sam Ruby

Maciej Stachowiak wrote:


On Sep 24, 2009, at 2:16 PM, Sam Ruby wrote:


On Sep 24, 2009, at 11:53 AM, Maciej Stachowiak wrote:


Any TC39 members whose employers can't join could perhaps become Invited
Experts to the W3C Web Applications Working Group, if that facilitates
review.


Unfortunately, no.  See #2 and #3 below:

 http://www.w3.org/2004/08/invexp.html


It depends on the nature of the employer, and the reason they are unable 
to join. Historically there have been Invited Experts in W3C Working 
Groups who are employed by such organizations as universities or small 
start-ups. We even have some in the HTML Working Group. So it would 
probably be more accurate to say it depends and that it may be subject 
to the judgment of the W3C Team.


I've discussed the specific case with the W3C, and it is the case that 
in the judgment of the W3C Team, the answer in this specific case is no.


You, of course, are welcome to try again in the hopes of getting a 
different answer.



Regards,
Maciej


- Sam Ruby


___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Why decimal?

2009-06-24 Thread Sam Ruby

Erik Corry wrote:


2009/6/23 Brendan Eich bren...@mozilla.com

On Jun 23, 2009, at 12:18 AM, Christian Plesner Hansen wrote:


I've been looking around on the web for reasons why decimal arithmetic
should be added to ES.  The most extensive page I could find was
http://speleotrove.com/decimal/decifaq.html.  If anyone know other
good sources of information about decimal and its inclusion in ES
please follow up.


Mike Cowlishaw's pages on decimal have lots of arguments for it:

http://www2.hursley.ibm.com/decimal/decifaq.html
http://www2.hursley.ibm.com/decimal/

I'm afraid both these links seem to have broken.


The content can now be found on Mike's site:

http://speleotrove.com/decimal/decifaq.html
http://speleotrove.com/decimal/

- Sam Ruby
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Revisiting Decimal (generic algorithms)

2009-01-31 Thread Sam Ruby

Brendan Eich wrote:


This variation preserves wrappers, so a Decimal converter function (when 
invoked) and constructor (via new, and to hold a .prototype home for 
methods). The committee plunks for more of this primitive/wrapper 
business, since we have wrappers and primitives for numbers and other 
types, and backward compatibility requires keeping them. Operators work 
mostly as implemented already by Sam (results here 
http://intertwingly.net/blog/2008/08/27/ES-Decimal-Updates, with some 
out-of-date results; notably typeof 1.1m should be decimal not 
object -- and not number).


More up to date results can be found here:

http://intertwingly.net/stories/2008/09/20/estest.html

Which was discussed here:

https://mail.mozilla.org/pipermail/es-discuss/2008-December/008316.html

Sam and I are going to work on adapting Sam's SpiderMonkey 
implementation, along with our existing ES3.1-based JSON codec and 
trace-JITting code, to try this out. More details as we get into the work.


Since the bug is about usability, we have to prototype and test on real 
users, ideally a significant number of users. We crave comments and 
ideas from es-discuss too, of course.


I'd like to highlight one thing: Mike and I agreed to "no visible 
cohorts" with the full knowledge that it would be a significant 
usability issue.  We did so in order to get decimal in 3.1.  In the 
context of Harmony, I feel that it is important that we fully factor in 
usability concerns.  Prototyping and testing on real users, ideally with 
a significant number of users, is an excellent way to proceed.



/be


- Sam Ruby

___
Es-discuss mailing list
Es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Revisiting Decimal (generic algorithms)

2009-01-16 Thread Sam Ruby
On Fri, Jan 16, 2009 at 8:30 PM, Brendan Eich bren...@mozilla.com wrote:

 Like Allen says later, most small integers (i.e., the ones that fit
 exactly in a double precision binary value) can simply be retained as
 binary64.

 Or machine ints -- ALU & FPU still.

Agreed.  Those values that could fit in int32 before could continue to do so.

 I suspect that covers the majority of constants in deployed
 javascript.  Now let's consider the rest.

 First, Allen's example:

 function fuzz(a) { return a + 0.1}

 Where fuzz(0.1)===0.2 and fuzz(0.1m)===0.2m

 The only way I can see that working is if the constant is initially in
 a form that either is readily convertible to source, or stores both
 values.  I don't understand how multimethods (on +?) affect this.
 If I'm missing something, please let me know (or simply provide a
 pointer to where I can educate myself).

 I did, see followup links to reading-lists, from which I'll pick a specific
 link:

 http://www.artima.com/weblogs/viewpost.jsp?thread=101605

I must be dense.  My previous understanding of multimethods was that
it depends on the assumption that the type of each argument can be
determined.  That article doesn't change that for me.

 Continuing on, let's tweak this a bit.

 function fuzz(a) {var b=0.1; return a+b}

 I would suggest that if the expectation would be that this function
 behaves the same as the previous one.

 It had better!

So, here's the problem.  At the point of the ';' in the above, what is
the result of typeof(b)?

The problem gets worse rapidly.  The above may seem appealing at
first, but it quickly degenerates.  Consider:

function fuzz(a) {var b=0.05; var c=0.05; var d=b+c; return a+d}

Should this return the same results as the previous fuzz functions?
What is the value of typeof(d)?

 My interpretation is that this means that internally there are three
 data types, one that is double, one that is decimal, and one that
 somehow manages to be both.  "Internally" in that this implementation
 detail ideally should not be visible to the application programmer.
 Again, I could be wrong (in the need for three data types, not on the
 opinion that this should not be visible), but pressing on...

 No, Allen allowed for that, but of course this generic type has to propagate
 at runtime through variable and function abstraction.

I don't follow.

 function is_point_one(a) {var b=0.1; return b===a}

 Is the expectation that this would return true for *both* 0.1 and
 0.1m?

 I don't see how this could work.

Before proceeding, let me simplify that:

function is_point_one(a) {return a===0.1}

The point of fuzz was that 0.1 as a literal would be interpreted as
a binary64 or as a decimal128 based on what it was combined with.  Why
would this example be any different?

  This leads to a rather odd place where it would be possible for
 triple equals to not be transitive, i.e. a===b and b===c but not
 a!===c.

 Er, a!==c ;-).

  That alone is enough to give me pause and question this
 approach.

 Me too.

 Continuing trip down this looking glass, what should typeof(0.1)
 return?  You might come to a different conclusion, and again I might
 be missing something obvious, but if these Schrödinger's catstants
 (sorry for the bad pun) can be assigned to variable, then I would
 assert that typeof(0.1) and typeof(0.1m) should both be 'number'.

 It should be clear that I won't go this far. My reply to Allen was gently
 suggesting that his suggestion would not fly on implementation efficiency
 grounds, but I think you've poked bigger holes. I'm still interested in
 multimethods, including for operators.

I don't see how this reasonably can be done half way.

And while multimethods are appealing for other reasons, I don't think
they relate to what Allen is suggesting.

- Sam Ruby
___
Es-discuss mailing list
Es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: JSON numbers (was: Revisiting Decimal)

2009-01-16 Thread Sam Ruby

Brendan Eich wrote:

On Jan 15, 2009, at 7:28 PM, Sam Ruby wrote:

On Thu, Jan 15, 2009 at 9:24 PM, Brendan Eich bren...@mozilla.com 
wrote:


JSON's intended semantics may be arbitrary precision decimal (the RFC is
neither explicit nor specific enough in my opinion; it mentions only
range, not precision), but not all real-world JSON codecs use 
arbitrary

precision decimal, and in particular today's JS codecs use IEEE double
binary floating point. This approximates by default and creates a 
de-facto

standard that can't be compatibly extended without opt-in.


You might find the next link enlightening or perhaps even a pleasant 
diversion:


http://www.intertwingly.net/stories/2002/02/01/toInfinityAndBeyondTheQuestForSoapInteroperability.html 



Quick summary as it applies to this discussion: perfection is
unattainable (duh!) and an implementation which implements JSON
numbers as quad decimal will retain more precision than one that
implements JSON numbers as double binary (duh!).


DuhT^2 ;-).

But more than that: discounting the plain fact that on the web at least, 
SOAP lost to JSON (Google dropped its SOAP APIs a while ago), do you 
draw any conclusions?


My conclusion, crustier and ornier as I age, is that mixed-mode 
arithmetic with implicit conversions and best effort approximation 
is a botch and a blight. That's why I won't have it in JSON, encoding 
*and* decoding.


My age differs from yours by a mere few months.

My point was not SOAP specific, but dealt with interop of such things as 
dates and dollars in a cross-platform setting.


My conclusion is that precision is perceived as a quality-of-
implementation issue.  The implementations that preserve the most 
precision are perceived to be of higher quality than those that don't.


I view any choice which views binary64 as preferable to decimal128 as 
choosing *both* botch and blight.


Put another way, if somebody sends you a quantity and you send back the 
same quantity (i.e., merely round-trip the data), the originator will 
see it as being unchanged if their (the originator's) precision is less 
than or equal to that of the partner in this exchange.  This leads to a natural 
ordering of implementations from most-compatible to least.


A tangible analogy that might make sense to you, and might not.  Ever 
try rsync'ing *to* a Windows box?  Rsync from windows to windows works 
just fine.  Unix to unix also.  As does Windows-Unix-Windows.  But 
Unix-Windows-Unix needs fudge parameters.  Do you really want to be 
the Windows in this equation?  :-)


- Sam Ruby

P.S.  You asked my opinion, and I've given it.  This is something I have
  an opinion on, but not something I view as an egregious error if the
  decision goes the other way.
___
Es-discuss mailing list
Es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Revisiting Decimal

2009-01-15 Thread Sam Ruby

Kris Zyp wrote:
 



Only if never compared to a double. How do you prevent this?

We already agree that the decimal-double comparison will always be
false.


Not strictly true.

(1m == 1) = true
(1m === 1) = false

It is only fractions whose denominators are not a pure power of two 
(within the precision of double precision floating point) that will 
compare unequal.  In particular,


(1.75m == 1.75) = true
(1.76m == 1.76) = false

For most people, what that works out to mean is that integers compare 
equal, but fractions almost never do.  It is worth noting that comparing 
fractions that are the result of computations with double precision 
floating point for strict equality rarely works out in practice; one 
typically needs to take an epsilon into account.
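
(A sketch of that epsilon idiom; the tolerance value is illustrative:)

  var EPS = 1e-9;
  function nearlyEqual(a, b) { return Math.abs(a - b) < EPS; }
  nearlyEqual(0.1 + 0.2, 0.3)   // true, while (0.1 + 0.2 === 0.3) is false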


 The point is that this is representative of real world code
 that benefits more from the treatment of decimals as numbers.

I agree with your overall argument that the real point of JSON is 
inter-language interoperability, and that, when viewed from that 
perspective, any JSON support that goes into ECMAScript should 
interpret literals which contain a decimal point as decimal.  But that's 
just an opinion.


At the moment, the present state is that we have discussed at length 
what typeof(1m) and JSON.parse('[1.1]') should return.  And now we are 
revisiting both without any new evidence.


In the past, I have provided a working implementation, either as a 
standalone JSON interpreter, as a web service, or integrated into 
Firefox.  I could do so again, and provide multiple versions that differ 
only in how they deal with typeof and JSON.parse.


But first, we need to collectively decide what empirical tests would 
help us reach a different conclusion.


- Sam Ruby
___
Es-discuss mailing list
Es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: JSON numbers (was: Revisiting Decimal)

2009-01-15 Thread Sam Ruby
On Thu, Jan 15, 2009 at 9:24 PM, Brendan Eich bren...@mozilla.com wrote:

 JSON's intended semantics may be arbitrary precision decimal (the RFC is
 neither explicit nor specific enough in my opinion; it mentions only
 range, not precision), but not all real-world JSON codecs use arbitrary
 precision decimal, and in particular today's JS codecs use IEEE double
 binary floating point. This approximates by default and creates a de-facto
 standard that can't be compatibly extended without opt-in.

You might find the next link enlightening or perhaps even a pleasant diversion:

http://www.intertwingly.net/stories/2002/02/01/toInfinityAndBeyondTheQuestForSoapInteroperability.html

Quick summary as it applies to this discussion: perfection is
unattainable (duh!) and an implementation which implements JSON
numbers as quad decimal will retain more precision than one that
implements JSON numbers as double binary (duh!).
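
(A concrete illustration, using a double-based codec such as today's
JSON.parse; the literal is only for illustration:)

  JSON.stringify(JSON.parse('[3.141592653589793238462643383279]'))
  // => '[3.141592653589793]' -- digits beyond double precision are lost,
  // where a decimal128 codec would keep up to 34 significant digits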

- Sam Ruby
___
Es-discuss mailing list
Es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Revisiting Decimal

2008-12-04 Thread Sam Ruby
2008/12/4 Brendan Eich [EMAIL PROTECTED]:

 Sam pointed that out too, and directed everyone to his test-implementation
 results page:
 http://intertwingly.net/stories/2008/09/20/estest.html
 Indeed we still have an open issue there ignoring the wrapper one:

 I think the only major outstanding semantic issue was wrapper
 objects; apart from that, the devil was in the detail of spec wording.

 No, the cohort/toString issue remains too (at least).

With a longer schedule, I would like to revisit that; but as of
Redmond, we had consensus on what that would look like in the context
of a 3.1 edition.

From where I sit, I find myself in the frankly surreal position that
we are in early December, and there are no known issues of consensus,
though I respect that David-Sarah claims that there is one on
wrappers, and I await his providing of more detail.

 /be

- Sam Ruby
___
Es-discuss mailing list
Es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Revisiting Decimal (was: Prioritized list of Decimal method additions)

2008-12-03 Thread Sam Ruby
I saw the meeting minutes, and got a debrief from Allen yesterday.
I'm still unclear on how to proceed with Decimal, even if the new
target is Harmony.

Waldemar's issues were raised and responded to prior to Kona:

https://mail.mozilla.org/pipermail/es-discuss/2008-November/008074.html

Quick summary: there are at least eight sections with typos and
transcription errors.  By transcription errors, I mean places where
the prose doesn't match the output of the code that I posted
previously.  Those are embarrassing, but at this point moot.  Pratap
has already excised Decimal from the spec.

What are we left with relative to the following output from the
code that I wrote?

http://intertwingly.net/stories/2008/09/20/estest.html

Relative to that output, I've heard two issues.

The first was "no user visible cohorts".  The issue is Waldemar's
insistence that ES is irretrievably broken if array lookup for
x[1.10m] respects the trailing zero.  IIRC, Brendan's position was a
more pragmatic one, namely that small integers (like, say, up to
10**20th) are the only values for which toString must avoid both
exponential notation and trailing zeros, other values shouldn't get in
the way of doing the right thing.  That would have been fine, but
unfortunately he couldn't make the meeting (something I definitely
understand).  Mike and I weren't then, and still aren't happy about
conceding to Waldemar's position on this one, but at Redmond we did
with the understanding that with that concession, Decimal was in.

The second was the duplication between Math.min and Decimal.min.
I was operating under the "if it ain't broken, don't fix it"
guideline.  To date, Math.min *always* returns a Number, never an
Object.  Waldemar apparently feels that people will call the wrong
function.  To me, this is a "you say N-EEE-THER, I say N-EYE-THER"
issue.  If the consensus is that Math.min should be changed and
Decimal.min should be removed, that's a pretty quick fix.

So now the question is: where are we now?

- Sam Ruby

On Sat, Sep 20, 2008 at 8:57 PM, Sam Ruby [EMAIL PROTECTED] wrote:
 Sam Ruby wrote:
 Previous discussions focused on operators and type/membership related
 builtin functions (i.e., typeof and instanceof).  Here's a prioritized
 list of functions provided by IEEE 754-2008 and/or the decNumber
 implementation.

 The actual number of a and a- methods is fairly small, particularly
 once you remove ones that are available in ECMAScript via other means.

 Updated test results including these methods can be found here:

 http://intertwingly.net/stories/2008/09/20/estest.html

 - Sam Ruby

 - - - - -

 Absolute requirement, and must be implemented as an 'instance' method
 (for most of the others, the difference between a 'static' and
 'instance' method is negotiable):

*  a   toString

 Available as prefix or infix operators, or as builtin functions, may not
 need to be duplicated as named Decimal methods:
*  a   add
*  a   compare
*  a   copy
*  a   copyNegate
*  a   divide
*  a   isFinite
*  a   isNaN
*  a   multiply
*  a   remainder
*  a   subtract

 Essential 754, not available as infix operator, so must be made
 available as a named method.  For consistency with Math, abs, max,
 and min should be 'static' methods:

*  a   quantize
*  a   copyAbs [called abs]
*  a   max
*  a   min

 Very useful functions which are not in 754 for various reasons;
 strongly recommend include:

*  a-   divideInteger  [extremely handy]
*  a-   digits [= significant digits]
*  a-   reduce [often asked for]
*  a-   toEngString [really handy in practice]
*  a-   getExponent [esp. if no compareTotal]

 Other 754 operations that are less essential but would probably add
 later anyway.  'b+' are a subset that are especially useful in
 practice:

*   b   FMA
*   b   canonical
*   b   compareSignal
*   b+  compareTotal
*   b   compareTotalMag
*   b   copySign
*   b   isCanonical
*   b+  isInfinite
*   b+  isInteger
*   b   isNormal
*   b+  isSignaling [if sNaNs supported]
*   b+  isSignalling[  ]
*   b+  isSigned
*   b   isSubnormal
*   b+  isZero
*   b   logB
*   b   maxMag
*   b   minMag
*   b   nextMinus
*   b   nextPlus
*   b   radix
*   b   remainderNear
*   b+  sameQuantum
*   b   scaleB
*   b+  setExponent
*   b   toInt32
*   b   toInt32Exact
*   b+  toIntegralExact [perhaps only one of these]
*   b+  toIntegralValue []
*   b   toUInt32
*   b   toUInt32Exact

 Probably drop because conflict with ES bitwise logical ops:

*c  and (as digitAnd)
*c  invert (as digitInvert)
*c  or (as digitOr)
*c  rotate
*c  shift
*c  xor (as digitXor)

 And, finally, not needed:

 (The first two of these are 754 but don't fit with ES

Re: Revisiting Decimal (was: Prioritized list of Decimal method additions)

2008-12-03 Thread Sam Ruby

Brendan Eich wrote:

On Dec 3, 2008, at 1:04 PM, Sam Ruby wrote:


I saw the meeting minutes, and got a debrief from Allen yesterday.
I'm still unclear on how to proceed with Decimal, even if the new
target is Harmony.

Waldemar's issues were raised and responded to prior to Kona:

https://mail.mozilla.org/pipermail/es-discuss/2008-November/008074.html


Did this address Waldemar's other message?

https://mail.mozilla.org/pipermail/es-discuss/2008-September/007631.html


The "no user visible cohorts" decision addressed that particular concern.


I also don't see a reply to David-Sarah Hopwood's message:

https://mail.mozilla.org/pipermail/es-discuss/2008-November/008078.html


Given that the spec text has been removed, the way I would like to 
proceed is to first come to an agreement on the semantics we desire, and 
for that I would like to solicit comments on the output produced by the 
implementation I produced.


While I agree that Decimal wrappers are useless; but I think that 
consistency argues that they need to be there (in fact, I was talked 
into putting them there); again I refer back to the output produced and 
solicit comments.



What are we left with relative to the the following output from the
code that I wrote?

http://intertwingly.net/stories/2008/09/20/estest.html


Looks like we may need Waldemar to comment or elaborate on his last post 
(first link above).



Relative to that output, I've heard two issues.

 The first was "no user visible cohorts".  The issue is Waldemar's
insistence that ES is irretrievably broken if array lookup for
x[1.10m] respects the trailing zero.  IIRC, Brendan's position was a
more pragmatic one, namely that small integers (like, say, up to
10**20th) are the only values for which toString must avoid both
exponential notation and trailing zeros, other values shouldn't get in
the way of doing the right thing.  That would have been fine, but
unfortunately he couldn't make the meeting (something I definitely
understand).  Mike and I weren't then, and still aren't happy about
conceding to Waldemar's position on this one, but at Redmond we did
with the understanding that with that concession, Decimal was in.


This Redmond-meeting result did sound like a breakthrough in any event. 
Was it memorialized with spec changes?


There were spec changes that went in as a result of the Redmond meeting, 
yes.  At least one was identified before the Kona meeting by Waldemar 
 (and fessed up to by me) as having been botched by myself ("and" => "or").



The second was the duplication between Math.min and Decimal.min.
I was operating under the if it ain't broken, don't fix it
guidelines.  To date, Math.min *always* returns a Number, never an
Object.  Waldemar apparently feels that people will call the wrong
 function.  To me, this is a "you say N-EEE-THER, I say N-EYE-THER"
issue.  If the consensus is that Math.min should be changed and
Decimal.min should be removed, that's a pretty quick fix.


This doesn't seem like a big problem, by itself.


Agreed, and in any case, one that I would eagerly adopt.


So now the question is: where are we now?


The two general kinds of problems from the Kona meeting were:

1. Spec bugs, not just typos but material ones that couldn't be fixed by 
that meeting, which was the deadline for major additions to ES3.1 not 
already in the spec.


For the moment, I would like to split that list into two categories: 
areas where there isn't yet agreement within the committee on how to 
proceed, and the best way I know how to make progress on that is to come 
to agreement on the behavior desired, hence my suggestion that we look 
at concrete test cases; and a list of places where I erred in my 
converting my understanding into prose.


No matter how we proceed, the first list needs to be captured and 
addressed eventually anyway.


2. Future-proofing arguments including: do we need Decimal wrappers for 
decimal primitives. I know we've been over this before, but it still is 
an open issue in TC39.


That does sound like the type of issue that I would like to see us 
identify and work to resolve.  Two questions come to mind: (1) can 
anybody identify a specific expression which behaves differently that 
one would desire, and (2) if we've been over this before, what does it 
take to actually close this issue this time for good?


I'd appreciate Waldemar's comments; and those of other TC39ers too, of 
course.


/be


- Sam Ruby

___
Es-discuss mailing list
Es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Initial comments on Kona draft

2008-11-14 Thread Sam Ruby

Waldemar Horwat wrote:


7.8.3:  This states that decimal literals may be rounded to 20
significant digits.  Is that what we want?


That's a bug.


8.5:  The Decimal type has exactly 10^34*12288+3 values.  I don't
think this is correct.  How did you arrive at this figure?


Mantissa has 34 digits
Exponent can range from -6144 to 6143
NaN, +Infinity, -Infinity

Oversight: sign would double the number of values.


There are not ten times as many denormalized Decimal values as there
are normalized Decimal values.  All of the Decimal number counts in
this section appear suspect.


There should be approximately 9 times as many values with a mantissa 
that ends in a non-zero digit as there are non-zero mantissas which 
end in the digit 0.



Why do we need to distinguish Decimal denorms anyway?  The concept is
not used anywhere in the document.


Do we need to distinguish binary floating point denorms?  Dropping both 
would make me happy.



Fix grammar and spelling errors.

9:  Decimal support broken in most of the tables.


Bug.


9.3:  ToDecimal on a Number gives the Number unchanged?

ToNumber on a Decimal is not defined.


Bug.


9.8:  ToString on a Decimal is not defined in the table.

The algorithm only works on Number values.  +0, -0, etc. are Number
values, not Decimal values.  Also, it internally references
conversions to Numbers.


Bugs.


9.3.1:  ToDecimal on a string results in a Number.  Also, it
optionally drops significant digits after the 20th.


Bug.

11.3.1, 11.3.2:  All four of the return statements are wrong in different 
ways.  Some return the preincremented value.  Some return an lvalue 
instead of an rvalue.


Others have already noted this.  David-Sarah has proposed new wording.

11.5:  What's the corresponding Decimal operation?  There are a bunch of 
different remainder options.


Should specify roundTowardZero.  Good catch.

11.8.5: Status:  
You're treating Unicode character codes as Decimal numbers.  Which 
characters have Unicode numbers that are Numbers, and which ones have 
Unicode numbers that are Decimals?


I don't follow.  If either value is of type Decimal, both are converted 
to Decimal, and the results are then compared.


If you fix this and apply the same contagion rules as for +, -, *, etc., 
then you'll have the issue that 1e-400m > 0m but 1e-400m > 0 is false.  
The contagion rules need rethinking.


Again, I don't follow.  The intent is that in the latter case, the 
binary 0 is first converted to decimal prior to the comparison.


11.9.3:  The contagion here is from Number to Decimal.  This is 
inconsistent with +, -, *, etc., where the contagion is from Decimal to 
Number.  It should be the same for all arithmetic operations.


The contagion should consistently be from Number to Decimal.  I 
distinctly remember making this change this time, but double checking 
what I sent to Pratap, it seems like the version of the document that I 
sent him did not include this change.  *sigh*


To be clear, step 5 in the binary operators should read "5. If 
Type(Result(2)) is Decimal or Type(Result(4)) is Decimal, then", 
substituting "or" for "and" in the expression.



11.9.6:  Don't need to call ToDecimal on values that are already Decimals.


Agreed.

- Sam Ruby

___
Es-discuss mailing list
Es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Encodings

2008-09-28 Thread Sam Ruby
liorean wrote:
 Hello!
 
 Just wondering if anybody has any real world data lying around
 covering what character encodings are necessary to support real world
 script content. UTF-8, UTF-16 and ISO-8859-1 are a given guess. What
 else?

My data relates to feeds, so it may not apply here, but in general 
UTF-16, while used internally in many places, is not widely supported as 
an interchange format.  Here are the encodings that the feed validator 
does *not* mark as obscure:

'US-ASCII', 'ISO-8859-1', 'UTF-8', 'EUC-JP', 'ISO-8859-2', 
'ISO-8859-15', 'ISO-8859-7', 'KOI8-R', 'SHIFT_JIS', 'WINDOWS-1250', 
'WINDOWS-1251', 'WINDOWS-1252', 'WINDOWS-1254', 'WINDOWS-1255', 
'WINDOWS-1256'

One other deserves special mention: 'GB18030'.  Doesn't seem to be 
popular, but is the Chinese government's mandatory standard.

- Sam Ruby


___
Es-discuss mailing list
Es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: ES Decimal status

2008-09-25 Thread Sam Ruby
On Thu, Sep 25, 2008 at 10:24 AM, Mike Cowlishaw [EMAIL PROTECTED] wrote:

 OK, and also liorean says:
 I'm of the opinion that decimal128 and binary64 should behave
 identically in as many areas as possible.

 That's a valid model.  I suppose I see strings and decimals as being
 'closer' in concept, and in both what you see is what you get.  But for
 arrays, I see the problem.  In that case 'reduce to shortest form, that is
 strip trailing zeros, might be the right thing to do for decimals used as
 array indices.  That function is in Sam's implementation (it's called
 'reduce').

Reduce is subtly different.  Decimal.reduce(1000m) produces 1e+3m.  I
believe that what is desired is that foo[1e+3m] be the same slot as
foo[1000].  But as Brendan clarified yesterday, I think that's only
necessary for decimal values which happen to be integers which contain
16 digits or less (16-digit integers being the upper bound for
integers which can be exactly stored using a binary64 floating point
representation).
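
A minimal sketch of the distinction, assuming a toy decimal kept as a
{coefficient, exponent} pair (hypothetical names here, not the proposed
API):

```
// Toy model only: reduce() strips trailing zeros, which can raise the
// exponent (1000 -> 1e+3), while the property-key rule wants plain
// digits for integers of 16 digits or fewer.
function toKey(coeff, exp) {
  const n = coeff * 10 ** exp;
  // Integers in the exact binary64 range print without an exponent,
  // so 1000m, 1e+3m and the Number 1000 all share the key "1000".
  if (Number.isInteger(n) && Math.abs(n) < 1e16) return String(n);
  return coeff + "e" + (exp >= 0 ? "+" + exp : exp); // sci form otherwise
}
console.log(toKey(1, 3));    // "1000"  (1e+3m)
console.log(toKey(1000, 0)); // "1000"  (1000m)
```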

 Brendan summed up:

 Ignoring === as faux eq, the only issue here is d.toString() for decimal
 d: should it preserve scale and stringify trailing zeroes and funny
 exponents?

 Are there any other cases like array indices where toString of a number is
 used in a way such that 1.000 is materially different than 1?
 Certainly toString could reduce, and there could be a differently-spelled
 operation to produce the 'nice' string, but the principle that toString
 shows you exactly what you have got is a good one.  (And it would be
 goodness for ES to behave in the same way as other languages' toString for
 decimals, too.)

 In particular, when dealing with currency values, the number of decimal
 places does correspond to the quantum (e.g., whether the value is cents,
 mils, etc., and similarly for positive exponents, it indicates that one is
 dealing in (say) $millions).

 If the 'business logic' calculates a value rounded to the nearest cent
 then the default toString will display that correctly without any
 formatting being necessary (and if formatting were applied then if the
 business logic were later changed to round to three places, the display
 logic would still round to two places and hence give an incorrect result).
  In short: the act of converting to a string, for display, inclusion in a
 web page, etc., should not obscure the underlying data.  If there's some
 path in the logic that forgot to quantize, for example, one wants to see
 that ASAP, not have it hidden by display formatting.

The issue is that ToString is the basis for both toString (the method)
and the way that operations such as array indexing work.  My intuition is
that business logic also rarely requires scientific notation for small
integers.

I believe that we have a solution that everybody might not find ideal,
but hopefully can live with, which I will outline below with examples:

typeof(1m) === "decimal"
1.1m === 1.10m
(1.10m).toString() === "1.10"
(1e+3m).toString() === "1000"

Additionally, there will be another method exposed, say toSciString,
which will produce a value which will round trip correctly, using
scientific notation when necessary.
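
A rough sketch of the split between the two conversions, again over a
toy {coefficient, exponent} pair (toSciString's exact formatting rules
here are my assumptions, not settled text):

```
// Toy model: toString keeps trailing zeros but avoids exponents for
// integral values; toSciString always round-trips via an exponent.
function toSciString(coeff, exp) {
  return coeff + "E" + (exp >= 0 ? "+" + exp : exp);   // e.g. "1E+3"
}
function toString(coeff, exp) {
  if (exp >= 0) return String(coeff * 10 ** exp);      // "1000", not "1E+3"
  const s = String(Math.abs(coeff)).padStart(-exp + 1, "0");
  const body = s.slice(0, exp) + "." + s.slice(exp);   // reinsert the point
  return (coeff < 0 ? "-" : "") + body;                // "1.10" keeps its zero
}
console.log(toString(110, -2), toSciString(110, -2)); // "1.10" "110E-2"
console.log(toString(1, 3), toSciString(1, 3));       // "1000" "1E+3"
```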

 Mike

- Sam Ruby
___
Es-discuss mailing list
Es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: ES Decimal status

2008-09-24 Thread Sam Ruby
Maciej Stachowiak wrote:
 
 You probably meant to send this to the list.

Oops.  Resending.  Thanks!

  - Maciej
 
 On Sep 24, 2008, at 8:17 AM, Sam Ruby wrote:
 
 On Wed, Sep 24, 2008 at 10:45 AM, Maciej Stachowiak [EMAIL PROTECTED] 
 wrote:

 On Sep 24, 2008, at 7:31 AM, Sam Ruby wrote:

 Maciej Stachowiak wrote:
 On Sep 24, 2008, at 3:33 AM, Mike Cowlishaw wrote:
 and in particular they don't call it on the index in an array
 indexing
 operation.
 This is true.  But that in itself is not the problem.  Currently,
 should a
 programmer write:

 a[1]="first"
 a[1.000]="second"

 it's assumed that the second case was an accidental typo and they
 really
 did not mean to type the extra '.000'.  The problem occurs at
 that  point,
 on the conversion from a decimal (ASCII/Unicode/whatever) string
 in  the
 program to an internal representation.  When the internal
 representation
 cannot preserve the distinction (as with binary doubles) there's
 not  much
 that can be done about it.  But a decimal internal representation
 can
 preserve the distinction, and so it should - 1m and 1.000m differ
 in  the
 same was a 1 and 1.000.  They are distinguishable, but when
 interpreted as a number, they are considered equal.
 I'm not sure what you are getting at. a[1] and a[1.000] refer to
 the  same property in ECMAScript, but a[1m] and a[1.000m] would
 not. Are  you saying this isn't a problem?
 I would agree with Waldermar that it is a serious problem. Not so
 much  for literals as for values that end up with varying numbers
 of  trailing zeroes depending on how they were computed, even
 though they  are numerically the same. Certainly it seems it would
 make arrays  unusable for someone trying to use decimal numbers only.

 "broken", "unusable".  Given superlatives such as these, one would
 think that code which would change in behavior would be abundant,
 and readily identified.

 I would not expect there to be a wide body of existing code using the
 decimal extension to ECMAScript, let alone trying to use it for all
 browsers. Such code would not work at all in today's browsers, and has
 probably been written by specialist experts, so I am not sure studying
 it would show anything interesting.

 My apologies.  That wasn't the question I was intending.

 Can you identify code that today depends on numeric binary 64 floating
 point which makes use of operations such as unrounded division and
 depends on trailing zeros being truncated to compute array indexes?

 I would think that such code would be more affected by factors such as
 the increased precision and the fact that 1.2-1.1 produces
 0.09999999999999987 than on the presence or absence of any trailing
 zeros.

 But given the continued use of words such as "broken" and "unusable",
 I'm wondering if I'm missing something obvious.

 Regards,
 Maciej

 - Sam Ruby
 
 



___
Es-discuss mailing list
Es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: ES Decimal status

2008-09-24 Thread Sam Ruby
Brendan Eich wrote:
 
 This a === b => o[a] is o[b] invariant (ignore the tiny number  
 exceptions; I agree they're undesirable spec bugs) is what folks on  
 the list are concerned about breaking, for integral values.  
 Fractional index values and strings consisting of numeric literals  
 with and without trailing zeroes are different use-cases, not of  
 concern.

This is most helpful.  It would suggest that 1.20 is not a significant 
concern, but 1e+2 is a potential concern.  (I'd also suggest that values 
with an absolute value less than 2**53 are not a concern)

Short of numeric literals explicitly expressed in such a manner, 
multiplication won't tend to produce such values, but division by values 
such as 0.1 may.

My intuition continues to be that such occurrences would be exceedingly 
rare, particularly given the use case of array indexes.
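
For what it's worth, a toy illustration of how exact division lands on
a "funny" exponent under IEEE 754-2008's preferred-exponent rule (the
{coeff, exp} shape is mine, not the proposal's):

```
// Exact quotients take the preferred exponent e(lhs) - e(rhs);
// no rounding path is modeled here.
function divideExact(a, b) {
  const ideal = a.exp - b.exp;
  const q = (a.coeff * 10 ** a.exp) / (b.coeff * 10 ** b.exp);
  return { coeff: q / 10 ** ideal, exp: ideal };
}
// 10m / 0.1m is exactly 100 but comes back as 10 * 10**1, i.e.
// "1.0E+2" -- an integer-valued result with a non-zero exponent.
console.log(divideExact({ coeff: 10, exp: 0 }, { coeff: 1, exp: -1 }));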

 /be

- Sam Ruby

___
Es-discuss mailing list
Es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Decimal comparisons

2008-09-19 Thread Sam Ruby
On Fri, Sep 19, 2008 at 9:21 AM, Brendan Eich [EMAIL PROTECTED] wrote:
 On Sep 19, 2008, at 8:45 AM, Sam Ruby wrote:

 Does the committee feel that it can ever add new values to typeof
 under any circumstances?

 Certainly not if there is opt-in version selection.

The "opt-in version selection" techniques I recall being discussed
were scoped to compilation units.  Are you suggesting that the
typeof of a value could produce different results based on which
compilation unit evaluated the expression?

**shudder**

- Sam Ruby
___
Es-discuss mailing list
Es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: use decimal

2008-09-18 Thread Sam Ruby
On Wed, Sep 17, 2008 at 10:52 PM, Brendan Eich [EMAIL PROTECTED] wrote:
 On Sep 17, 2008, at 7:48 PM, Sam Ruby wrote:

 Anybody care to mark up what they would like to see the following
 look like?

http://intertwingly.net/stories/2008/09/12/estest.html

 Ship it!

:-)

 (Not in ES3.1, certainly in Firefox 3.1 if we can... :-)

We need to discuss the former further next week.  *Some* support for
decimal was always in the plan for 3.1 (ever since I have been
involved at least) and I've been working tirelessly to refine what
that support is.

In Redmond, we should be in a position to determine what the remaining
work items are for any of the features that anybody has ever proposed
for ES3.1.

 /be

- Sam Ruby
___
Es-discuss mailing list
Es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Out-of-range decimal literals

2008-09-18 Thread Sam Ruby
On Thu, Sep 18, 2008 at 9:10 AM, Igor Bukanov [EMAIL PROTECTED] wrote:
 Should EcmaScript allow decimal literals that cannot be represented as
 128-bit decimal? I.e should the following literals give a syntax
 error:

 1.001m
 1e10m ?

 IMO allowing such literals would just give another source of errors.

Languages have personalities, and people build up expectations based
on these characteristics.  As much as possible, I'd like to suggest
that ECMAScript be internally consistent, and not have one independent
choice (binary vs decimal) have unexpected implications over another
(signaling vs quiet operations).

As a tangent, both binary 64 and decimal 128 floating point provide
exact results for a number of operations, they simply do so for
different domains of numbers.  2**-52 can be represented exactly, for
example, in binary 64 floating point, but not in decimal 128 floating
point.  It is only the prevalence of things like decimal literals,
which naturally are in decimal, which tend to produce inexact but
correctly rounded values in binary 64 and exact values in decimal 128,
without a need for rounding.
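
Both points are observable with plain Numbers in today's engines
(runnable as-is):

```
// 2**-52 has an exact binary64 representation; what is printed is the
// shortest string that round-trips, not a rounded approximation.
console.log(2 ** -52);  // 2.220446049250313e-16
// 0.1 has no exact binary64 form, so arithmetic exposes the rounding.
console.log(0.1 + 0.2); // 0.30000000000000004
```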

As to your specific question, here's a few results from my branch of
SpiderMonkey:

js> 1.001
1
js> 1e10
Infinity
js> 1.001m
1.0
js> 1e10m
Infinity

 Regards, Igor

- Sam Ruby
___
Es-discuss mailing list
Es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Prioritized list of Decimal method additions

2008-09-18 Thread Sam Ruby
Previous discussions focused on operators and type/membership related 
builtin functions (i.e., typeof and instanceof).  Here's a prioritized 
list of functions provided by IEEE 754-2008 and/or the decNumber 
implementation.

The actual number of "a" and "a-" methods is fairly small, particularly 
once you remove ones that are available in ECMAScript via other means.

- - - - -

Absolute requirement, and must be implemented as an 'instance' method
(for most of the others, the difference between a 'static' and 
'instance' method is negotiable):

   *  a   toString

Available as prefix or infix operators, or as builtin functions, may not
need to be duplicated as named Decimal methods:
   *  a   add
   *  a   compare
   *  a   copy
   *  a   copyNegate
   *  a   divide
   *  a   isFinite
   *  a   isNaN
   *  a   multiply
   *  a   remainder
   *  a   subtract

Essential 754, not available as infix operator, so must be made
available as a named method.  For consistency with Math, abs, max,
and min should be 'static' methods:

   *  a   quantize
   *  a   copyAbs [called abs]
   *  a   max
   *  a   min

Very useful functions which are not in 754 for various reasons;
strongly recommend include:

   *  a-   divideInteger  [extremely handy]
   *  a-   digits [= significant digits]
   *  a-   reduce [often asked for]
   *  a-   toEngString[really handy in practice]
   *  a-   getExponent[esp. if no compareTotal]

Other 754 operations that are less essential but would probably add
later anyway.  'b+' are a subset that are especially useful in
practice:

   *   b   FMA
   *   b   canonical
   *   b   compareSignal
   *   b+  compareTotal
   *   b   compareTotalMag
   *   b   copySign
   *   b   isCanonical
   *   b+  isInfinite
   *   b+  isInteger
   *   b   isNormal
   *   b+  isSignaling [if sNaNs supported]
   *   b+  isSignalling[  ]
   *   b+  isSigned
   *   b   isSubnormal
   *   b+  isZero
   *   b   logB
   *   b   maxMag
   *   b   minMag
   *   b   nextMinus
   *   b   nextPlus
   *   b   radix
   *   b   remainderNear
   *   b+  sameQuantum
   *   b   scaleB
   *   b+  setExponent
   *   b   toInt32
   *   b   toInt32Exact
   *   b+  toIntegralExact [perhaps only one of these]
   *   b+  toIntegralValue []
   *   b   toUInt32
   *   b   toUInt32Exact

Probably drop because conflict with ES bitwise logical ops:

   *  c   and (as digitAnd)
   *  c   invert (as digitInvert)
   *  c   or (as digitOr)
   *  c   rotate
   *  c   shift
   *  c   xor (as digitXor)

And, finally, not needed:

(The first two of these are 754 but don't fit with ES)
   * class
   * classString
   * fromBCD
   * fromInt32
   * fromNumber
   * fromPacked
   * fromPackedChecked
   * fromString
   * fromUInt32
   * fromWide
   * getCoefficient
   * setCoefficient
   *   d nextToward
   * show
   * toBCD
   * toNumber
   * toPacked
   * toWider
   * version
   * zero

- Sam Ruby

___
Es-discuss mailing list
Es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: use decimal

2008-09-18 Thread Sam Ruby
2008/9/17 Mark S. Miller [EMAIL PROTECTED]:

 If that is the case then 1.5m / 10.0 != 1.5 / 10.0, and thus it seems
 wrong for 1.5m and 1.5 to be '==='.

 0/-0 != 0/0. Does it thus seem wrong that -0 === 0?

Just so that I'm clear what your point is, it is worth noting that 42/0
!= 42/0, yet hopefully we all agree that 42 === 42.

Perhaps the example that you were looking for is 1/0 != 1/-0 ?

 Well, yes, actually it does seem wrong to me, but we all accept that
 particular wrongness. This is just more of the same.

just is a powerful word.  Use sparingly.

- Sam Ruby
___
Es-discuss mailing list
Es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Out-of-range decimal literals

2008-09-18 Thread Sam Ruby
On Thu, Sep 18, 2008 at 9:49 AM, Sam Ruby [EMAIL PROTECTED] wrote:
 On Thu, Sep 18, 2008 at 9:10 AM, Igor Bukanov [EMAIL PROTECTED] wrote:
 Should EcmaScript allow decimal literals that cannot be represented as
 128-bit decimal? I.e should the following literals give a syntax
 error:

 1.001m
 1e10m ?

 IMO allowing such literals would just give another source of errors.

 Languages have personalities, and people build up expectations based
 on these characteristics.  As much as possible, I'd like to suggest
 that ECMAScript be internally consistent, and not have one independent
 choice (binary vs decimal) have unexpected implications over another
 (signaling vs quiet operations).

Thinking about it further, rejecting literals with an expressed
precision larger than the underlying data type can support might be
something that could be considered with "use strict", particularly if
applied to both binary and decimal floating point quantities.
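
Today's behavior, for comparison (runnable in any engine): over-precise
binary literals are silently rounded, which is what a strict mode could
instead reject.

```
// The 18th significant digit is quietly dropped by binary64.
console.log(Number("1.00000000000000001") === 1); // true
console.log(1.00000000000000001 === 1);           // true
```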

 As a tangent, both binary 64 and decimal 128 floating point provide
 exact results for a number of operations, they simply do so for
 different domains of numbers.  2**-52 can be represented exactly, for
 example, in binary 64 floating point, but not in decimal 128 floating
 point.  It is only the prevalence of things like decimal literals,
 which naturally are in decimal, which tend to produce inexact but
 correctly rounded values in binary 64 and exact values in decimal 128,
 without a need for rounding.

 As to your specific question, here's a few results from my branch of
 SpiderMonkey:

 js> 1.001
 1
 js> 1e10
 Infinity
 js> 1.001m
 1.0
 js> 1e10m
 Infinity

 Regards, Igor

 - Sam Ruby

- Sam Ruby
___
Es-discuss mailing list
Es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


ES Decimal status

2008-09-12 Thread Sam Ruby
 that duplicate infix and prefix operators, such as "add"
  and "plus".  I've heard arguments on both sides, and don't have a
 strong opinion on this subject myself.  After we verify
 consensus on the broader approach described in this email, I can
 present a list of potential candidate methods, grouped into
 categories.  We should be able to quickly sort through this
 list.  This effort could be done either on the mailing list or
 in the F2F meeting in Redmond.

   * Whether the named methods are to be static or instance methods.
 I've heard arguments both ways, and could go either way on this.
 Frankly, the infix operators capture the 80% use case.  Instance
 methods feel more OO, and some (like toString and valueOf) are
 required anyway.  Static methods may be more consistent with
 Math.abs, and are asserted to be more suited to code generators
 and optimization, though I must admit that I never could quite
 follow this argument.

   * Should there be a “use decimal”?  To me, this feels like
 something that could be added later.  The fact that all 15 digit
 integers convert exactly, as well as a few common fractions such
  as .5 and .25, greatly reduces the need (see the snippet just
  below).  The strategy of doing precise conversions also will tend
  to highlight when mixed operations occur, and will do so in a
  non-fatal way.
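
Both exactness claims can be checked with plain Numbers today
(runnable as-is):

```
// Powers of two and their sums are exact in binary64...
console.log(0.5 + 0.25 === 0.75);                     // true
// ...and every integer of 15 digits or fewer is exact as well.
console.log(999999999999999 === 999999999999998 + 1); // true
```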

Approaches not selected:

   * Decimal as a library only.  Reason: usability concerns and
 evolution of the language concerns.  More specifically, the
 definition of the behaviors of operators like === and + need to
 be specified.  Throwing exceptions would not merely be developer
 unfriendly, it would likely be perceived as causing existing
 libraries to break.  And whatever was standardized would make
 latter support for such operators to be a breaking change – at
 the very least it would require an opt-in.

   * Attempting to round binary 64 values to the nearest decimal 128
  value.  Such approaches are fragile (e.g., 1.2-1.1; see the snippet
  after this list) and tend to hide rather than reveal errors.

   * Decimal being either a “subclass” or “form” of number.  Turned
 out to be too confusing, potentially breaking, and in general
 larger in scope than simply providing a separate type and
 wrapper class.

   * Type(3m) being “object”.  Reason: the only false values for
 objects should be null, and it is highly desirable that 0.0m be
 considered false.

   * Methods naming based on Java's BigDecimal class.  This was my
 original approach as it was initially felt that IEEE 754 was too
 low of level.  This turned out not to be the case, and there are
 some conflicts (e.g. valueOf is a static method with a single
 argument on BigDecimal).

   * Having a separate context class or storing application state in
 Decimal class.  The former is unnecessary namespace pollution in
 a language with a simple syntax for object literals, and the
 latter is against the policy of this working group.
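
The 1.2-1.1 fragility mentioned above is runnable today:

```
// The correctly rounded binary64 result is visibly off from 0.1;
// re-rounding it to the nearest short decimal would hide that signal.
console.log(1.2 - 1.1);         // 0.09999999999999987
console.log(1.2 - 1.1 === 0.1); // false
```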

- Sam Ruby

___
Es-discuss mailing list
Es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Coercing 'this' (was: Topic list - pending changes and issues for the ES3.1 spec)

2008-09-10 Thread Sam Ruby
On Tue, Sep 9, 2008 at 2:11 PM, Mark S. Miller [EMAIL PROTECTED] wrote:
 On Tue, Sep 9, 2008 at 9:21 AM, Mark S. Miller [EMAIL PROTECTED] wrote:
 What should be the rules for coercing non-object 'this' values?

 In a previous thread we've already settled that ES3.1-strict will not
 coerce null or undefined 'this' values. In order to do this, we're
 going to migrate the spec language for coercing a null-or-undefined
 'this' from the caller-side that scattered all over the spec (where
 it's hard to make it conditional on strictness), to the callee side in
 section 11.1.1. For non-strict code, this should make no observable
 difference.

 Some open questions:

 The ES3 spec language uses null as the implicit 'this' value
 associated with calling a function as a function. However, since the
 current spec language also coerces both null and undefined to the
 global object, it is unobservable whether null or undefined is used as
 the implicit 'this' value. In ES3.1 strict this difference becomes
 observable. In the interests of explaining 'this'-binding as being
 more like parameter-binding, I would like to see this changed to
 undefined. Calling a function as a function is like calling it without
 an argument for its 'this' parameter. I think this is more intuitive.

 When a primitive non-object type (number, boolean, string, presumably
 decimal) is passed as a 'this' value, the ES3 spec coerces it to a new
 corresponding wrapper object each time. This fresh allocation is
 observable and therefore expensive. For example, let's say someone
 adds a new method to String.prototype:

 String.prototype.capture = function() { return this; };
 var foo = "foo";
 foo.capture() === foo.capture()
   false

 The expense of this allocation imposes a cost on all programs in order
 to provide a semantics that's strictly worse than not providing it,
 and which hardly any programs will ever care about anyway. To avoid
 the extra expense within the current semantics, an implementation
 would have to do escape analysis. IIRC, the ES4 spec avoids requiring
 this allocation, as that was felt to be an incompatibility everyone
 could live with. I agree. I propose that primitive 'this' values not
 be coerced at all.

Let's posit for the sake of discussion that a primitive decimal type
is in and we proceed as you describe above.  Furthermore, and just for
the sake of simplicity as it isn't really pivotal, let's assume that
all named Decimal methods are 'static', so there are no 'instance'
methods of interest.

In such a scenario, is there value in providing a Decimal wrapper type
at all?  Could new Decimal('5') simply throw an exception and a
footnote be placed someplace in the spec that ECMA TC39 reserves the
right to change this behavior in future revs of the spec?

I'm not asking because I consider a Decimal wrapper is hard to
implement, but because I don't understand what value such a wrapper
would provide other than to be available for the situation you
describe above.

 If primitive 'this' values are no longer coerced, we can still explain
 the semantics of property lookup of an expression like 'foo.capture()'
 by saying that the property is looked up in the wrapper's prototype.
 Or we could say that the property is looked up after wrapping, but a
 method call proceeds with the original unwrapped value. In other words

foo.capture()

 should be equivalent to

Object(foo).capture.call(foo)

 given the original bindings for Object and Function.call.

It occurs to me that the latter case (looking up after wrapping)
trades a single wrapping at function entry against potentially
multiple wrappings that may occur inside the function.
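
A sketch of the trade-off (the boxing count is an implementation
detail; this exact shape is mine, not the spec's):

```
// With ES3-style coercion, `this` is boxed once at function entry.
// With an unwrapped primitive `this`, each property access on it may
// box again instead:
String.prototype.twice = function () {
  return this.length + this.length; // two lookups on `this`
};
console.log("abc".twice()); // 6 either way; only the boxing count differs
```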

 --
Cheers,
--MarkM

- Sam Ruby
___
Es-discuss mailing list
Es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Consistent decimal semantics

2008-09-03 Thread Sam Ruby
Brendan Eich wrote:
 On Aug 25, 2008, at 5:25 PM, Waldemar Horwat wrote:
 
 Brendan Eich wrote:
 - Should decimal values behave as objects (pure library
 implementation) or as primitives?

 If they behave as objects, then we'd get into situations such as  
 3m !=
 3m in some cases and 3m == 3m in other cases.  Also, -0m != 0m would
 be necessary.  This is clearly unworkable.
 What should be the result of (typeof 3m)?
 "decimal".  It should not be "object" because it doesn't behave  
 like other objects!
 
 Clearly, you are correct (but you knew that ;-).
 
 Specifically,
 
 1. typeof x == "object" && !x => x == null.
 
 2. typeof x == typeof y => (x == y => x === y)
 
 3. 1.1m != 1.1 && 1.1m !== 1.1 (IEEE P754 mandates that binary floats  
 be convertible to decimal floats and that the result of the  
 conversion of 1.1 to decimal be 1.100000000000000088817841970012523m)
 
 Therefore typeof 1.1m != "object" by 1, or else 0m could be mistaken  
 for null by existing code.
 
 And typeof 1.1m != "number" by 3, given 2.
 
 This leaves no choice but to add "decimal".

:-)

You act as if logic applies here.  (And, yes, I'm being facetious).  If 
that were true, then a === b => 1/a === 1/b.  But that's not the case 
with ES3, when a=0 and b=-0.
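
The Number counterexample is runnable today:

```
console.log(0 === -0);         // true
console.log(1 / 0 === 1 / -0); // false: Infinity vs -Infinity
```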

1.1 and 1.1m, despite appearing quite similar, actually identify two 
(marginally) different points on the real number line.  This does not 
cause an issue with #2 above, any more than substituting the full 
equivalents for these constants causes an issue.

Similarly, the only apparent issue with #1 is the assumption that !(new 
Decimal(0)) is true.  But !(new Number(0)) is false, so there is no 
reason that the same logic couldn't apply to Decimal.
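
Checkable today with Number (runnable as-is):

```
console.log(!0);               // true  -- the primitive is falsy
console.log(!(new Number(0))); // false -- wrapper objects are always truthy
```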

  - - -

From where I sit, there are at least three potentially logically 
consistent systems we can pick from.

We clearly want 0 == 0m to be true.  Let's get that out of the way.

Do we want 0 === 0m to be true?  If so, typeof(0m) should be "number". 
Otherwise, it should be something else, probably "object" or "decimal".

If typeof(0m) is "number", then !0m should be true.
If typeof(0m) is "object", then !0m should be false.
If typeof(0m) is "decimal", then we are free to decide what !0m should be.

My preference is for typeof(0m) to be "decimal" and for !0m to be true. 
  But that is only a preference.  I could live with typeof(0m) being 
"object" and !0m being false.  I'm somewhat less comfortable with 
typeof(0m) being "number", only because it implies that methods made 
available to Decimal need to be made available to Number, and at some 
point some change to an existing class, no matter how apparently 
innocuous, will end up breaking something important.

Yes, in theory code today could be depending on typeof returning a 
closed set of values.  Is such an assumption warranted?  Actually that 
question may not be as important as whether anything of value depends on 
  this assumption.

If we are timid (or responsible, either way, it works out to be the 
same), then we should say no new values for typeof should ever be 
minted, and that means that all new data types are of type object, and 
none of them can ever have a value that is treated as false.

If we are bold (or foolhardy), then we should create new potential 
results for the typeof operator early and often.

- Sam Ruby

___
Es-discuss mailing list
Es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Decimal operator behaviors

2008-08-27 Thread Sam Ruby
A few days ago, we had an extensive discussion, primarily about strict
equality, mixed binary double / decimal quad operations and the typeof
operator.  Perhaps there are other issues that we haven't explored
just yet: the behavior of instanceof or constructors, perhaps?

I've updated my SpiderMonkey branch based on my understanding of the
outcome of the past few days of discussion, and would appreciate any
input that people may have on any other operators.  To facilitate this
discussion, I've produced the following sets of tables:

  http://intertwingly.net/stories/2008/08/27/estest.html

The parts above the line are simply the output from the test tool,
which shows that the results in the tables below actually do represent
running code.  The tables below the line are organized by
specification section, and demonstrate a number of permutations of
data types and ES operators.

Once this discussion is complete, I'll produce similar tables for
named methods (both instance and static).  I imagine that this
latter exercise will generate considerably less discussion.

- Sam Ruby
___
Es-discuss mailing list
Es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Decimal operator behaviors

2008-08-27 Thread Sam Ruby
liorean wrote:
 2008/8/27 Sam Ruby [EMAIL PROTECTED]:
 I've updated my SpiderMonkey branch based on my understanding of the
 outcome of the past few days of discussion, and would appreciate any
 input that people may have on any other operators.  To facilitate this
 discussion, I've produced the following sets of tables:

  http://intertwingly.net/stories/2008/08/27/estest.html
 
 I would be interested in seeing some tests covering the behaviour of
 negative infinity and negative zero as well (for comparison with the
 binary double equivalents's behaviour).

I can certainly add some tests, but if you could be a little more 
specific of the actual expressions you would like to see evaluated, I 
will be more likely to produce what you want. :-)

Meanwhile, here's a few examples:

js> -1/0 === 1/0
false
js> -1m/0m === 1m/0m
false

js> -0 === 0
true
js> -0m === 0m
true

I'll gladly add the lines above to my test suite as well as any others 
you might suggest.  When we get to the named methods, things might get a 
little more interesting:

js> Decimal.compareTotal(-0m,0m)
-1

This has implications for the fabled Object.eq^h^hidentical method.
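
A sketch of such an identity test over today's Numbers (for decimal it
would presumably defer to something like Decimal.compareTotal; the name
identical is just a placeholder):

```
function identical(x, y) {
  if (x === y) {
    // distinguish +0 from -0, which === conflates
    return x !== 0 || 1 / x === 1 / y;
  }
  // treat NaN as identical to itself, which === refuses
  return x !== x && y !== y;
}
console.log(identical(0, -0));    // false
console.log(identical(NaN, NaN)); // true
```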

- Sam Ruby

___
Es-discuss mailing list
Es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


online es-decimal demo

2008-08-26 Thread Sam Ruby
http://code.intertwingly.net/demo/es-decimal/

user: brendan, password: cowlishaw

Instructions should be relatively self evident.  Enter a bunch of 
expressions, one per line.  Click submit.  See results.  Alter 
expressions.  Repeat.

Implementation notes:

This simply is a CGI that shells out to a version of SpiderMonkey with 
decimal support added.  This code is at an alpha level at best.  It may 
produce incorrect results.  It may trap.

I've tried SpiderMonkey with the -i parameter and capturing the results, 
but it seems to batch up the echoing of the input followed by all of the 
output.  So, for now, I'm simply wrapping each line in print(...); and 
passing the whole stream to the shell.  Yes, this makes testing of 
things like for loops difficult.  If anybody on this list has a 
suggestion, let me know.

Security notes:

Yes, I'm aware that many of you who follow this list know that what I'm 
providing is a dumb idea.  You know it.  I know it.  There is no need to 
show off.  But offline suggestions are welcome.

Availability:

From time to time I will be updating this to the latest code.  If I get 
hacked (this host is not the one that serves my blog), the service will 
be taken down.  Eventually, I will be taking this down anyway.  But for 
now, feel free to play.

Testing cases:

What I'm really hoping for is test cases.  Find some output that is 
wrong and let me know.  I'm starting to build SpiderMonkey test cases, 
which ultimately are a simple matter of pairs of expressions and 
expected results.

http://code.intertwingly.net/public/hg/js-decimal/file/d65d970dd2ea/js/tests/decimal/ops/

- Sam Ruby


___
Es-discuss mailing list
Es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Es-discuss - several decimal discussions

2008-08-25 Thread Sam Ruby
On Mon, Aug 25, 2008 at 12:47 AM, Mark S. Miller [EMAIL PROTECTED] wrote:
 On Sun, Aug 24, 2008 at 8:09 PM, Sam Ruby [EMAIL PROTECTED] wrote:
 As to what the what the value of 1.0m == 1.00m should be, the amount
 of code and the amount of spec writing effort is the same either way.
 I can see arguments both ways.  But if it were up to me, the
 tiebreaker would be what the value of typeof(1.1m) is.  If number,
 the scale tips slightly towards the answer being false.  If object,
 then then scale is firmly on the side of the answer being true.

 All things considered, I would argue for false.

Typo in the above: I meant ===

 On Sun, Aug 24, 2008 at 8:40 PM, Sam Ruby [EMAIL PROTECTED] wrote:
 On Sun, Aug 24, 2008 at 11:15 PM, Mark S. Miller [EMAIL PROTECTED] wrote:
 On Sun, Aug 24, 2008 at 8:09 PM, Sam Ruby [EMAIL PROTECTED] wrote:
 All things considered, I would argue for false.

 I'm curious. If 1.0m == 1.00m were false, what about 1.0m < 1.00m and
 1.0m > 1.00m?

 1.0m == 1.00m should be true.

 I'm confused. All things considered, what do you think 1.0m == 1.00m should 
 be?

true

 --
Cheers,
--MarkM

- Sam Ruby
___
Es-discuss mailing list
Es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Es-discuss - several decimal discussions

2008-08-25 Thread Sam Ruby
On Mon, Aug 25, 2008 at 1:44 AM, Brendan Eich [EMAIL PROTECTED] wrote:
 On Aug 24, 2008, at 8:09 PM, Sam Ruby wrote:

 If there were an Object.eq method, then 1.1m and 1.10m should be
 considered different by such a function.

 I don't believe that decimal, by itself, justifies the addition of an
 Object.eq method.  Even if we were to go with 1.0m == 1.00m.

 Good, that's my position too!

 Whew. Sorry for the confusion, too much after-hours replying instead
 of sleeping on my part.

 As to what the what the value of 1.0m == 1.00m should be, the amount
 of code and the amount of spec writing effort is the same either way.
 I can see arguments both ways.  But if it were up to me, the
 tiebreaker would be what the value of typeof(1.1m) is.  If number,
 the scale tips slightly towards the answer being false.  If object,
 then then scale is firmly on the side of the answer being true.

 Ah, typeof. No good can come of making typeof 1.0m == "number", but
 there is more room for non-singleton equivalence classes in that
 choice. We have -0 == 0 already. Making cohorts equivalence classes
 under == seems both more usable and more compatible if the operands
 have number type(of). If they're "object" then we have no
 precedent: o == p for two object references o and p is an identity test.

 In spite of this lack of precedent, I believe we are free to make
 1.0m == 1.00m if typeof 1.0m == "object".

Note: that was a typo on my part.  We agree here.

 But what should typeof 1.0m evaluate to, anyway? I don't believe
 "number" is right, since 0.1 == 0.1m won't be true. Is anyone
 seriously proposing typeof 1.0m == "number"?

My working assumption coming out of the SF/Adobe meeting was that in
ES4 there would be both a primitive decimal, which could be wrappered
by the Number class, which is much in line with what you mentioned
below.  Based on your recent input, I now question the need to provide
a decimal primitive ever.

Note: here I'm talking about the conceptual model that the language
exposes.  Doubles are GC'ed in SpiderMonkey, as would be decimals.

 If Decimal is an object type, then typeof 1.0m == "object" is good
 for a couple of reasons:

 * Future-proof in case we do add a primitive decimal type, as ES4
 proposed -- a peer of double that shares Number.prototype; typeof on
 a decimal would return "number". See below for the possibly-bogus
 flip side.

What would be the upside to such an approach?  I can see the
next-edition-of-ES-that-provides-decimal (my working assumption still
is 3.1 whatever that may be called, others may be understandably
skeptical) only providing a Decimal object, and with that addition the
language with respect to decimal being considered a steady state that
not need to be revisited in subsequent editions.

 * Analogous to RegExp, which has literal syntax but is an object
 (RegExp is worse because of mutable state; Decimal presumably would
 have immutable instances -- please confirm!).

I'd prefer if Decimal instances in ES were considered immutable and
automatically interned.  By the latter, I simply mean that new
Decimal(1.0) === new Decimal(1.0).
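
A minimal interning sketch, assuming the string form (with its trailing
zeros) is the identity key -- names and shapes are illustrative only:

```
const pool = new Map();
function makeDecimal(repr) {
  let d = pool.get(repr);
  if (!d) {
    d = Object.freeze({ repr }); // immutable instance
    pool.set(repr, d);
  }
  return d;
}
console.log(makeDecimal("1.0") === makeDecimal("1.0"));  // true
console.log(makeDecimal("1.0") === makeDecimal("1.00")); // false: distinct cohort members
```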

 Making typeof 1.0m == "object" could be future-hostile if we ever
 wanted to add a decimal primitive type, though. We're stuck if we
 treat literals as objects and tell that truth via typeof. We can't
 make them be numbers some day if they are objects in a nearer
 edition, with mutable prototype distinct from Number.prototype, etc.
 At least not in the ES4 model, which has had some non-trivial thought
 and RI work put into it.

My assumption prior to the few days of discussion was that 1.0m was a
primitive.  Based on these discussions, making it be an object makes
sense to me.

 Probably at this point, any future number types will have to be
 distinct object typeof-types, with magic (built-in, hardcoded) or
 (generic/multimethod) non-magic operator support. That may be ok
 after all. We never quite got the ES4 model whereby several
 primtiives (at one point, byte, int, uint, double, and decimal) could
 all be peer Object subtypes (non-nullable value types, final classes)
 that shared Number.prototype. We cut byte, int, and uint soon enough,
 but problems remained.

Agreed, that may be ok after all.

 I can't find any changes to 11.4.3 The typeof Operator in ES3.1
 drafts. Am I right to conclude that typeof 1.0m == "object"? Sorry if 
 I'm beating a dead horse. Just want it to stay dead, if it is dead ;-).

That was previously still on my list of things to do.  Now it appears
that it is one less thing to do. :-)

 All things considered, I would argue for false.  I just wouldn't dig
 in my heels while doing so.

 I was playing down the importance of this design decision to
 highlight the separable and questionable addition of Object.eq, but I
 do think it's important to get == right for the case of both operands
 of Decimal type. I'm still sticking to my 1.0m == 1.00m story, while
 acknowledging the trade-offs. No free lunch.

Again, sorry for the typo.  I

Re: Es-discuss - several decimal discussions

2008-08-25 Thread Sam Ruby
On Mon, Aug 25, 2008 at 9:45 AM, Mark S. Miller [EMAIL PROTECTED] wrote:
 On Sun, Aug 24, 2008 at 11:20 PM, Brendan Eich [EMAIL PROTECTED] wrote:
 Yes, this is gross. I'm in favor of Object.identical and Object.hashcode,

 I don't care if Object.eq is named Object.identical. Other than
 spelling, does your Object.identical differ from Object.eq? If not,
 then I think we're in agreement.

 maybe even in ES3.1 (I should get my act together and help spec 'em). Just
 not particularly on account of Decimal, even with equated cohort members. I
 still agree with Sam. And as always,hard cases make bad law.

 What is it you and Sam are agreeing about? I lost track.

 I like the point about bad law. Numerics are definitely a hard case.

Give me a day or so, and I'll post a typo-free transcript based on
running code, and people can identify specific results that they take
issue with, and/or more expressions that they would like to see in the
results.

- Sam Ruby
___
Es-discuss mailing list
Es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Es-discuss - several decimal discussions

2008-08-25 Thread Sam Ruby
On Mon, Aug 25, 2008 at 9:45 AM, Mark S. Miller [EMAIL PROTECTED] wrote:
 On Sun, Aug 24, 2008 at 11:20 PM, Brendan Eich [EMAIL PROTECTED] wrote:
 Yes, this is gross. I'm in favor of Object.identical and Object.hashcode,

 I don't care if Object.eq is named Object.identical. Other than
 spelling, does your Object.identical differ from Object.eq? If not,
 then I think we're in agreement.

 maybe even in ES3.1 (I should get my act together and help spec 'em). Just
 not particularly on account of Decimal, even with equated cohort members. I
 still agree with Sam. And as always,hard cases make bad law.

 What is it you and Sam are agreeing about? I lost track.

 I like the point about bad law. Numerics are definitely a hard case.

Here's the output from my current branch of SpiderMonkey:

js> /* Hopefully, we all agree on these */
js> 1.0m == 1.0m
true
js> 1.0m == new Decimal(1.0)
true
js> 1.0m == Decimal(1.0)
true
js> 1.0m == 1.0.toDecimal()
true
js>
js> /* And these too... */
js> 1.0m == 1.00m
true
js> 1.0m == 1.0
true
js> 1.0m == "1.0"
true
js>
js> /* Conversion is exact, up to the number of digits you specify */
js> 1.1.toDecimal()
1.100000000000000088817841970012523
js> 1.1.toDecimal(2)
1.10
js> 1.1m - 1.1
-8.8817841970012523E-17
js> 1.1m == 1.1
false
js>
js> /* Non-strict equals doesn't care about precision */
js> 1.0m == 1.0m
true
js> 1.0m == 1.00m
true
js> 1.0m == "1.0"
true
js> 1.0m == "1.000"
true
js>
js> /* You can mix things up */
js> Decimal.add(1.1,1.1m)
2.200000000000000088817841970012523
js> 1.2m - 1.1m
0.1
js> 1.2m - 1.1
0.099999999999999911182158029987477
js> 1.2 - 1.1
0.09999999999999987
js> 1.1 > 1.1m
true
js>
js> /* Things we agree on for strictly equals */
js> 1.0m === 1.0m
true
js> 1.0m === 1.0
false
js>
js> /* Something that a case could be made for either way */
js> 1.00m === 1.0m
true
js>
js> /* In any case, there always is this to fall back on */
js> Decimal.compareTotal(1.0m, 1.0m) == 0
true
js> Decimal.compareTotal(1.00m, 1.0m) == 0
false
js>
js> /* Still open for discussion  */
js> typeof 1m
object

- Sam Ruby
___
Es-discuss mailing list
Es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Consistent decimal semantics

2008-08-25 Thread Sam Ruby
 should we have?
 
 Edition 3 says there's only one, but IEEE P754's totalOrder now
 distinguishes between NaN and -NaN as well as different NaNs
 depending on how they got created.  Depending on the implementation,
 this is a potential compatibility problem and an undesirable way for
 implementations to diverge.

There are multiple layers to this.

At the physical layer, even binary64 provides the ability to have 2**52 
different positive NaNs, and an equal number of negative NaNs.

At a conceptual layer, ES is described as only having one NaN. 
This layer is of the least consequence, and most easily changed.

At an operational layer, the operations defined in ES4 are (largely?) 
unable to detect these differences.  For backwards compatibility 
reasons, it would be undesirable to change any of these existing 
interfaces, unless there were a really, really, really compelling reason 
to do so.

Overall, as long as we don't violate the constraints presented by the 
physical and existing operational layers, we may be able to introduce 
new interfaces (such as Object.identity) that is able to distinguish 
things that were not previously distinguishable.

 - How many decimal NaNs should we have?
 
 Presumably as many as we have double NaNs

Actually, there are about 10**46-2**52 more.

 Waldemar

- Sam Ruby

[1] https://mail.mozilla.org/pipermail/es-discuss/2008-August/007231.html
[2] http://code.intertwingly.net/public/hg/js-decimal/
[3] http://speleotrove.com/decimal/decifaq6.html#binapprox
[4] http://speleotrove.com/decimal/decifaq6.html#bindigits

___
Es-discuss mailing list
Es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Es-discuss - several decimal discussions

2008-08-24 Thread Sam Ruby
Brendan Eich wrote:
 On Aug 24, 2008, at 9:34 AM, Sam Ruby wrote:
 
 What should the result of the following expressions be:

   1.5m == 1.5

   1.5m === 1.5

typeof(1.5m)

 A case could be made that both binary64 and decimal128 are both
 numbers, and that a conversion is required.
 
 For ==, yes. For ===, never!
 
 A case could be made that
 while they are both numbers, the spec needs to retain the behavior
 that users have come to expect that conversions are not performed for
 strict equality.
 
 We must not make strict equality intransitive or potentially 
 side-effecting. Please tell me the case /should/ be made, not just could 
 be made.
 
  And finally, a case could be made that the result of
 typeof(1.5m) should be something different, probably decimal, and
 that decimal128 values are never strictly equal to binary64 values,
 even when we are talking about simple integers.
 
 I'm sympathetic to the decimal128 values are never strictly equal to 
 binary64 values part of this case, but that's largely because I am not 
 an advocate of === over == simply because == is not an equivalence 
 relation. == is useful, in spite of its dirtiness.
 
 Advocates of === for all use-cases, even those written by casual or 
 naive JS programmers, are just setting those programmers up for 
 confusion when === is too strict for the use-case at hand -- and they're 
 setting up the TC39 committee to add three more equality-like operators 
 so we catch up to Common Lisp :-(.
 
 The place to hold this DWIM fuzzy line is at ==. Do not degrade ==='s 
 strictness.
 
 The typeof question should be separated. You could have typeof return 
 number for a double or a decimal, but still keep === strict. I believe 
 that would be strictly (heh) more likely to break existing code than 
 changing typeof d to return object for Decimal d.

If we go with "object", a Decimal.parse method won't be strictly 
(heh-backatcha) necessary, a Decimal constructor would do.  In fact, 
Decimal could also be called as a function.  I like it when things 
become simpler.

My only remaining comment is that this might tip the scales as to whether or 
not 1.10m === 1.1m.  They certainly are not the same object.  But we 
previously agreed that this one could go either way.

 I don't see why we would add a "decimal" result for typeof in the 
 absence of a primitive decimal type. That too could break code that 
 tries to handle all cases, where the object case would do fine with a 
 Decimal instance. IIRC no one is proposing primitive decimal for ES3.1. 
 All we have (please correct me if I'm wrong) is the capital-D Decimal 
 object wrapper.
 
 Guy Steele, during ES1 standardization, pointed out that some Lisps
 have five equality-like operators. This helped us swallow === and !==
 (and keep the == operator, which is not an equivalence relation).

 Must we go to this well again, and with Object.eq (not an operator),
 all just to distinguish the significance carried along for toString
 purposes? Would it not be enough to let those who care force a string
 comparison?

 I think that a static Decimal method (decNumber calls this
 compareTotal) would suffice.  Others believe this needs to be
 generalized.  I don't feel strongly on this, and am willing to go with
 the consensus.
 
 Premature generalization without implementation and user experience is 
 unwarranted. What would Object.eq(NaN, NaN) do, return true? Never! 
 Would Object.eq(-0, 0) return false? There's no purpose for this in the 
 absence of evidence. I agree with you, compareTotal or something like it 
 (stringization followed by ===) is enough.
 
 /be

- Sam Ruby

___
Es-discuss mailing list
Es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Es-discuss - several decimal discussions

2008-08-24 Thread Sam Ruby
On Sun, Aug 24, 2008 at 2:43 PM, Mark S. Miller [EMAIL PROTECTED] wrote:

 In any case, I'm glad we seem to be in all around agreement to pull
 decimal completely from 3.1.

I believe that's a bit of an overstatement.

- Sam Ruby
___
Es-discuss mailing list
Es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Es-discuss - several decimal discussions

2008-08-24 Thread Sam Ruby
On Sun, Aug 24, 2008 at 11:15 PM, Mark S. Miller [EMAIL PROTECTED] wrote:
 On Sun, Aug 24, 2008 at 8:09 PM, Sam Ruby [EMAIL PROTECTED] wrote:
 All things considered, I would argue for false.

 I'm curious. If 1.0m == 1.00m were false, what about 1.0m < 1.00m and
 1.0m > 1.00m?

1.0m == 1.00m should be true.

But to answer your question, IEEE 754 does define a total ordering.
One can even test it using the implementation I posted a few weeks
back.

js> Decimal.compareTotal(1.0m, 1.00m)
1
js> Decimal.compareTotal(1.00m, 1.0m)
-1

This function compares two numbers using the IEEE 754 total ordering.
If the lhs is less than the rhs in the total order then the number
will be set to the value -1. If they are equal, then number is set to
0. If the lhs is greater than the rhs then the number will be set to
the value 1.

The total order differs from the numerical comparison in that: –NaN < 
–sNaN < –Infinity < –finites < –0 < +0 < +finites < +Infinity < +sNaN 
< +NaN. Also, 1.000 < 1.0 (etc.) and NaNs are ordered by payload.

 --
Cheers,
--MarkM

- Sam Ruby
___
Es-discuss mailing list
Es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Es-discuss - several decimal discussions

2008-08-23 Thread Sam Ruby
On Sat, Aug 23, 2008 at 11:44 AM,  [EMAIL PROTECTED] wrote:
 On Sat, Aug 23, 2008 at 1:49 AM, Mike Cowlishaw [EMAIL PROTECTED] wrote:
 Finally, I'd like to take a poll: Other than people working on decimal
 at IBM and people on the EcmaScript committee, is there anyone on this
 list who thinks that decimal adds significant value to EcmaScript? If
 so, please speak up. Thanks.

 Decimal arithmetic is sufficiently important that it is already
 available in all the 'Really Important' languages except ES
 (including C, Java, Python, C#, COBOL, and many more).  EcmaScript is
 the 'odd one out' here, and not having decimal support makes it
 terribly difficult to move commercial calculations to the browser for
 'cloud computing' and the like.

 Decimals in Java are implemented at the library level, as
 java.math.BigDecimal. There is no expectation that intrinsic math
 operators work on them. Is this approach valid for ES; if not, then
 why not?

Decimal implemented as a library would be sufficient for a 3.1
release.  The problem is an interoperable definition for what infix
operators is required for completeness.  Taking no other action, the
default behavior for the result of a + operator given a Number and a
library provided Decimal would be to convert both to string
representations and concatenate the results.
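
The unusable default is easy to reproduce today with any operator-less
library type (runnable as-is):

```
// Without operator support, + falls back to ToPrimitive and
// concatenates strings instead of adding numbers.
function Decimal(s) { this.s = String(s); }
Decimal.prototype.toString = function () { return this.s; };
console.log(1 + new Decimal("2.5")); // "12.5", not 3.5
```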

This was discussed at the last ECMA TC39 meeting in Oslo, and was
found to be unusable and would create a backwards compatibility issue
for Harmony.  An assertion was made (reportedly by Waldemar and
Brendan -- I wasn't present) that spec'ing the operators would not be
all that difficult.

To be sure, I then proceeded to implement such functionality using the
then current SpiderMonkey (i.e. pre TraceMonkey, meaning I'm facing a
significant merge -- the joys of DVCS), and found that it wasn't all
that difficult.  Based on the results, I've updated the 3.1 spec, and
again found it wasn't all that difficult -- precisely as Waldemar and
Brendan thought.

At this point, I don't think the issue is infix operators.  A few
don't seem to like the IEEE standard (Waldemar in particular tends to
use rather colorful language when referring to that spec), some have
expressed vague size concerns when at this point it seems to me that
we should be able to express such in measurable terms, and finally
there are some usability concerns relating to mixed mode operations
that we need to work through.  More about this in a separate email.

 Implementing decimals at the library level has the advantage that they
 can be deployed today, as functional (if slower) ES code, and
 optimized later on by a native implementation with no loss of
 compatibility. After all, it will be several years before the next ES
 version becomes reliably available on consumers' browsers. Does this
 manner of easing migration inform the approach being taken?

 Conversely, if one is to add support for the intrinsic math operators
 on decimals, does the required work generalize easily to arithmetic on
 complex numbers and matrices? Will the addition of complex numbers and
 matrices require more difficult work about how they interoperate with
 existing number representations (including, at that point, decimal
 numbers)? How, if at all, does this inform the present discussion?

Judging by other programming languages, the next form for Number that
is likely to be required is arbitrary precision integers.  While we
can never be sure until we do the work, I do believe that Decimal will
pave a path for such additions, if there is a desire by the committee
to address such requirements.

 Ihab

 --
 Ihab A.B. Awad, Palo Alto, CA

- Sam Ruby
___
Es-discuss mailing list
Es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss