Re: var declarations shadowing properties from Window.prototype

2012-08-15 Thread Axel Rauschmayer
 That's basically what the Global Scope Polluter does.  That's certainly how I 
 plan to implement it in Gecko in the WebIDL bindings: 
 Window.prototype.__proto__ will be a proxy which will do all the weird stuff 
 the GSP has to do.


In FF 14, I’m getting the following results (if there is an element whose ID is 
foo):

$ "foo" in window
false
$ foo
ReferenceError: foo is not defined
$ window.foo  // (*)
[object HTMLDivElement]
Element referenced by ID/NAME in the global scope. Use W3C standard 
document.getElementById() instead.
$ foo  // (**)
undefined
Element referenced by ID/NAME in the global scope. Use W3C standard 
document.getElementById() instead. @ Web Console:1
$ "foo" in window
true

That looks like what you described: The Global Scope Polluter auto-creates foo 
after the read access (*). However, (**) puzzles me: A getter for foo seems to 
be called (as a warning is displayed), but it returns `undefined`. How come?

Also interesting: window instanceof EventTarget holds, but 
EventTarget.prototype is not in protos(window), an array created by:
function protos(obj) {
    var chain = [];
    while (obj) {
        chain.push(obj);
        obj = Object.getPrototypeOf(obj);
    }
    return chain;
}
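
For what it's worth, both observations can be poked at from the same
console (a sketch, assuming the FF 14 session above):

    window instanceof EventTarget;                          // true
    protos(window).indexOf(EventTarget.prototype) !== -1;   // false -- the puzzle above
    // and to see where the GSP ended up putting foo after the read access (*):
    protos(window).filter(function (o) {
        return Object.prototype.hasOwnProperty.call(o, 'foo');
    });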

Axel

-- 
Dr. Axel Rauschmayer
a...@rauschma.de

home: rauschma.de
twitter: twitter.com/rauschma
blog: 2ality.com



Re: var declarations shadowing properties from Window.prototype

2012-08-15 Thread Brendan Eich

Travis Leithead wrote:

From: Cameron McCormack [mailto:c...@mcc.id.au]

Brendan Eich:

As noted, they started out that way 17 years ago. I think WebIDL and
interface-based method definition made onload, e.g., predefined on
window objects, or more recently on Window.prototype. Was this useful?
Was it intended specifically (for window, not just intended generally
due to WebIDL's uniform rules for binding its definitions in JS)?

I don't think it provides any benefit.  Uniformity is the only reason the spec
says they should be there, currently.


It does provide the monkey-patch benefit for shared interfaces (e.g., those 
shared by inheritance). At the present time, the only one I can think of that [will] act 
like this is EventTarget (IE10 hasn't yet implemented this hierarchy change).


Do you think it's worth another exception (to the exception whereby 
inherited properties are flattened to be own on the global) that adds 
EventTarget.prototype to window objects' prototype chains? Presumably in 
front of (closer to the head of the global object itself) the GSP.


/be


Batch assignment functions

2012-08-15 Thread Rick Waldron
 Recently, Allen produced a strawman proposal[0][1] for the object define
properties operator, which was designed to provide syntax that
differentiated semantics of define vs. assign. Towards the end of the
thread, IIRC, Brendan suggested some new Object functions: Object.define()
and Object.assign().[2]

I spent time over the weekend preparing common use cases for batch
assignment based on real world examples from jQuery[3], Dojo[4], Node.js[5]
and Lodash (an improved Underscore)[6][7] (With all due respect, Prototype
is no longer considered a relevant library for use in modern day web
development). Initially, I assumed that the jQuery deep extend was the
common case and drafted an attempt at handling nested assignment or deep
extending. Off-list, Dave Herman reminded me that there is no way to know
what the user _actually_ wants with regard to nested object properties:


1. Target property does not exist: define source value
2. Target property exists, its value is anything _except_ a nested object:
Assign source value
3. Target property exists, its value is a nested object, the source
property value is anything _except_ a nested object: Assign source value

The ambiguity:

4. Target property exists, its value is a nested object, the source
property is a nested object: _?


In the trenches we use an icky boolean trap[8] flag to tell the jQuery API
what we want; jQuery.extend( true, a, b ) will assume you want to deep
merge/extend b into a and will do so by using a pure evil function called
jQuery.isPlainObject to determine if an object was created from Object or
{}. Regardless, this is not an arguable use case, it's a reality.
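
To make the ambiguity concrete (a sketch using jQuery's existing API):

    var a = { ajax: { cache: true } };
    var b = { ajax: { timeout: 500 } };

    jQuery.extend(a, b);        // shallow: a.ajax is replaced by b.ajax; cache is gone
    // vs., starting again from a fresh copy of a:
    jQuery.extend(true, a, b);  // deep: a.ajax becomes { cache: true, timeout: 500 }

Same inputs, two defensible outcomes -- which is why the flag (or a separate
Object.merge) exists at all.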


As a result, I re-drafted Object.assign based on the real-world use cases,
but specifically does not attempt nested object property assignment
recursion. At this point I still believe that the deep nested assignment
case is strong enough to consider, but I'm not sure how to approach it. It
might warrant its own implicit merge object properties whenever
possible... Object.merge?


In summary, based on findings so far, I'd like to propose the following:

In all cases, target refers to an object in the dictionary of values or
bag of properties sense. source can be any kind of object that has own
properties.

Object.define( target, source ): defineProperties w/ sensible defaults (w,
e, c: true).

Object.put( target, source ): is... put! (https://gist.github.com/3350283)

Object.merge( target, source ): deep put, ideally this would drill down
into nested objects that line up on the left and right, [[Put]]ing values
from the right onto the left. I want to make it perfectly clear that this is
desired, but the semantics are not yet clear.
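
In ES5 terms, a loose sketch of the intent of the first two (the gist above
is the actual draft; this glosses over how functions and accessors should be
treated):

    Object.define = function (target, source) {
      Object.getOwnPropertyNames(source).forEach(function (key) {
        var desc = Object.getOwnPropertyDescriptor(source, key);
        // sensible defaults: everything ends up enumerable/configurable
        // (and writable, for data properties)
        desc.enumerable = true;
        desc.configurable = true;
        if ('value' in desc) { desc.writable = true; }
        Object.defineProperty(target, key, desc);
      });
      return target;
    };

    Object.put = function (target, source) {
      Object.keys(source).forEach(function (key) {
        target[key] = source[key];  // [[Put]]: runs setters, respects the chain
      });
      return target;
    };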




-Rick



[0] https://mail.mozilla.org/pipermail/es-discuss/2012-August/024402.html

[1]
http://wiki.ecmascript.org/doku.php?id=strawman:define_properties_operator

[2] https://mail.mozilla.org/pipermail/es-discuss/2012-August/024477.html

[3] https://github.com/jquery/jquery/blob/master/test/unit/core.js#L866-973

[4] http://dojotoolkit.org/reference-guide/1.7/dojo/mixin.html#dojo-mixin

[5]
https://github.com/joyent/node/blob/master/test/simple/test-util.js#L73-80
(note: I included this, but primarily, users will require underscore or
lodash for such utilities)

[6] https://github.com/bestiejs/lodash/blob/master/test/test.js#L375-399

[7]
https://github.com/documentcloud/underscore/blob/master/test/objects.js#L30-41

[8] http://ariya.ofilabs.com/2011/08/hall-of-api-shame-boolean-trap.html


Re: var declarations shadowing properties from Window.prototype

2012-08-15 Thread Brendan Eich

Travis Leithead wrote:

From: Boris Zbarsky [mailto:bzbar...@mit.edu]

On 8/12/12 5:29 PM, Brendan Eich wrote:

Boris Zbarsky wrote:

Note that data in
http://lists.w3.org/Archives/Public/public-script-coord/2012JanMar/0033.html
suggests that IE also implements the erratum to 5.1 we were talking about
up-thread.  Oh what a tangled web we weave.

Yes, current thinking is that we should take the erratum that major JS
engines already fixed, and include it in ES6. But this means we must
do something different in WebIDL, probably make own global properties
for window-implements-interface-inherited attributes and even

operations.

And then (for strict mode) be careful about get-only accessors. This
reminds me of [Replaceable], which was for non-writable but
configurable data properties that var and function must be able to
replace. It's kind of the opposite and only for accessors:
var-captures-own-accessor-via-detection, or some such.

Just recapping, tell me if I'm missing something.

The above sounds like a reasonable summary to me.  Certainly hits all the
high points of the discussion, with the addition that the GSP as currently
specced depends on the erratum sticking around.


Allow me to also recap to ensure I understand this thread:

The problem: a few [high profile] sites are using a coding practice that uses 
feature detection of the following pattern:
var [standardized property name] = window.[standardized property name] || 
window.[implementation-specific property name] || [etc.]
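
Concretely, something like this (the exact prefixes and their order vary
from site to site):

    var indexedDB = window.indexedDB ||
                    window.mozIndexedDB ||
                    window.webkitIndexedDB ||
                    window.msIndexedDB;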


Usually window.[standardized property name] is last -- this matters below.


Firefox is affected by this problem (e.g., the result of the var declaration is 
undefined)
Chrome is not affected by this problem because their var creation algorithm 
checks the prototype chain for an existing property name


No, Chrome is not affected because their IDL bindings put inherited 
attributes on the global object as own accessors.



IE10 is not affected by this problem because they define both indexedDb and 
msIndexedDb and the latter wins--otherwise they _would_ be affected by this problem.


That's true, but see above point about order of || terms.


The reaction by this group is:
* Don't change anything about ES5.1 var declaration and initialization (because 
we like the behavior, it works well with global scope polluters in the 
prototype chain)


No. Rather, the erratum at 
https://bugs.ecmascript.org/show_bug.cgi?id=78 was already fixed by 
Mozilla, Google, Opera, and I believe IE and Apple. We think it's 
important for user-defined non-configurable properties on prototypes of 
the global object that var ignore such properties.
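
A tiny illustration of what the erratum buys, on my reading (illustrative
only, not spec text):

    // a user-defined, non-configurable property on a prototype of the global:
    Object.defineProperty(Object.prototype, 'answer', {
      value: 42, writable: false, configurable: false
    });

    var answer = 1;
    // ES5.1 as published: the global already "has" answer via inheritance, so
    // no own binding is created and the assignment silently fails against the
    // read-only inherited property.
    // With the erratum (what engines ship): var creates an own, writable
    // binding on the global, so answer === 1.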


Instead, we think WebIDL must make a special case for inherited 
properties of the global object: make them own.


Also, WebIDL-reliant spec authors must be careful not to define get-only 
accessors (readonly attributes) on the global object or interfaces from 
which it inherits. This leads to the idea that the global object 
defaults to [Replaceable] (which may be removed from WebIDL as explicit 
qualifier syntax) and [Unforgeable] must be used selectively for some 
few historic exceptions.



* Change WebIDL so that any properties that would mixin to Window (or any 
ancestors that Window would inherit from) would instead be created directly on 
the global object (instead of on a prototype of the global).


Yes, but note this is sufficient to keep the fix for erratum 
https://bugs.ecmascript.org/show_bug.cgi?id=78 and not revert to ES5.1 spec.



  Additionally, ensure that all of these properties are [Replaceable] meaning 
that if they are readonly (no setter), a [[Put]] request would instead create a 
data property of the same name in place of the pre-existing property.


Right.
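
In other words, something like this sketch (screenX is just a plausible
example of a readonly-but-replaceable attribute; illustrative, not spec text):

    window.screenX;           // some number, via a getter-only accessor
    window.screenX = 'mine';  // [[Put]] succeeds: the accessor is replaced by
                              // an own data property of the same name
    window.screenX;           // 'mine'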


One side effect, as I understand this, would be that:
var indexedDb = "test";
alert(indexedDb);
would result in "test", and the original indexedDb property would be lost. Is 
this your understanding as well?


You bet. Who knows whether indexedDB (such a lovely name! snark) was not 
used as a var name (undeclared, even) in 2003 by some important site in 
Slovenia?



Another side effect of this proposed change is that:
var onload = function(e) { ...}
would actually assign the event handler (it does in Chrome today, not in 
Gecko/IE9/10).


Note that if you take away the 'var ' then Gecko at least assigns the 
event handler.


Per http://www.w3.org/TR/html5/browsers.html#the-window-object the 
onload, etc. attributes are read/write, so not (implicitly in a new 
world, and certainly not explicitly in the HTML5 spec) [Replaceable]. So 
given the change to put inherited attributes on the global as own 
properties, 'var' can't matter. 'var' does not replace an existing own 
binding in ES5.1 or ES5.1 + the erratum fix.



I was curious just how bad the currently reported bug actually is. I ran a query 
looking for use of var indexedDb and var requestAnimationFrame across our web data index (which is 
unfortunately 

Re: Batch assignment functions

2012-08-15 Thread Brendan Eich

Rick Waldron wrote:
As a result, I re-drafted Object.assign based on the real-world use 
cases, but specifically does not attempt nested object property 
assignment recursion. At this point I still believe that the deep 
 nested assignment case is strong enough to consider, but I'm not sure 
how to approach it. It might warrant its own implicit merge object 
properties whenever possible... Object.merge?


Object.merge, maybe -- but definitely do not want this muddled into 
assign/define or put/define or whatever those shallow-only names should be.



In summary, based on findings so far, I'd like to propose the following:

In all cases, target refers to an object in the dictionary of 
values or bag of properties sense. source can be any kind of 
object that has own properties.


Object.define( target, source ): defineProperties w/ sensible defaults 
(w, e, c: true).


Object.put( target, source ): is... put! 
(https://gist.github.com/3350283)


Some questions:

1. Why implicitly bind any function value?

2. Ditto for get and set accessor functions?

3. Why Object.defineProperty if !(key in target)? Why not just always 
assign, since this is Object.put and not Object.define?
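
For concreteness, the case where the answer matters (tiny sketch):

    var proto  = { set x(v) { this.backing = v; } };
    var target = Object.create(proto);

    target.x = 1;                         // [[Put]]: runs the inherited setter
    Object.defineProperty(target, 'x', {  // define: ignores the setter and creates
      value: 1, writable: true,           // an own data property that shadows it
      enumerable: true, configurable: true
    });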


/be


Re: Batch assignment functions

2012-08-15 Thread Rick Waldron
On Wed, Aug 15, 2012 at 5:30 PM, Brendan Eich bren...@mozilla.org wrote:

 Rick Waldron wrote:

 As a result, I re-drafted Object.assign based on the real-world use
 cases, but specifically does not attempt nested object property assignment
 recursion. At this point I still believe that the deep nested assignment
 case is strong enough to consider, but I'm not sure how to approach it. It
 might warrant its own implicit merge object properties whenever
 possible... Object.merge?


 Object.merge, maybe -- but definitely do not want this muddled into
 assign/define or put/define or whatever those shallow-only names should be.


Agreed, which was the motivation for the three distinct APIs below




  In summary, based on findings so far, I'd like to propose the following:

 In all cases, target refers to an object in the dictionary of values
 or bag of properties sense. source can be any kind of object that has
 own properties.

 Object.define( target, source ): defineProperties w/ sensible defaults
 (w, e, c: true).

 Object.put( target, source ): is... put! (https://gist.github.com/3350283)


 Some questions:

 1. Why implicitly bind any function value?


The code in the gist should be regarded as a loose approximation of what I
think we should try to achieve — that said, it was actually just a modified
version of my API-ified implementation of Allen's define properties
operator :)



 2. Ditto for get and set accessor functions?


Same answer.


 3. Why Object.defineProperty if !(key in target)? Why not just always
 assign, since this is Object.put and not Object.define?


This part I was actually unsure of and it shows here; I made an assumption
that doesn't hold now that you've pointed it out and I've had to re-think
my motivation. So really, this can and should just be [[Put]]

I'll update the gist.

Rick




 /be



Re: Batch assignment functions

2012-08-15 Thread David Bruant
On 15/08/2012 21:42, Rick Waldron wrote:
 Recently, Allen produced a strawman proposal[0][1] for the object
 define properties operator, which was designed to provide syntax that
 differentiated semantics of define vs. assign. Towards the end of the
 thread, IIRC, Brendan suggested some new Object functions:
 Object.define() and Object.assign().[2]

 I spent time over the weekend preparing common use cases for batch
 assignment based on real world examples from jQuery[3], Dojo[4],
 Node.js[5] and Lodash (an improved Underscore)[6][7] (With all due
 respect, Prototype is no longer considered a relevant library for use
 in modern day web development). Initially, I assumed that the jQuery
 deep extend was the common case and drafted an attempt at handling
 nested assignment or deep extending. Off-list, Dave Herman
 reminded me that there is no way to know what the user _actually_
 wants with regard to nested object properties:


 1. Target property does not exist: define source value
 2. Target property exists, its value is anything _except_ a nested
 object: Assign source value
 3. Target property exists, its value is a nested object, the source
 property value is anything _except_ a nested object: Assign source value

 The ambiguity:

 4. Target property exists, its value is a nested object, the source
 property is a nested object: _?
This seems close to the issue of copying an object. What exactly is
expected is just as ambiguous.

I would add prototypal inheritance to the ambiguity. All properties of
WebIDL objects are inherited getters. If you want to extend an object
with the values of a WebIDL object, do you loop through inherited
properties? (Actually, if you don't, you get no properties at all.)
To the ambiguity, we can also add whether non-enumerable properties
should be assigned or not.
Facing a function, do we want another function (callable with the same
body) or just its properties?
And finally, facing accessors, do we want to call the getter or copy it?
Bound to the original object it was extracted from, or working with the
new object?
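
To make the WebIDL point concrete (a sketch; results vary by engine and
assume bindings that keep DOM attributes as accessors on the prototype,
as in Gecko):

    Object.keys(document.body);   // [] -- nothing own and enumerable to copy
    document.body.tagName;        // "BODY", but it comes from an inherited getter
    "tagName" in document.body;   // true -- visible only if you walk the chain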

I'm of course not asking for answers, just enumerating other sources of
ambiguity.

David


Consistency in The Negative Result Values Through Expansion of null's Role

2012-08-15 Thread Erik Reppen
This topic has probably been beaten to death years before I was even aware
of es-discuss, but it continues to get mentioned occasionally as a point of
pain, so I thought I'd see if I couldn't hatch a conversation and maybe
understand the design concerns better than I likely do now.


Consistent Type Return for Pass and Fail?

The principle of consistent type-return has occasionally skewered me as
somebody who came to non-amateur levels of understanding code primarily
through JavaScript. I can see the value in maintaining consistent types for
positive results but not so much for indicators that you didn't get
anything useful. For instance:

* [0,1].indexOf('wombat'); // returns an index on success or -1 to indicate
failure. -1, passed on to a lot of other array methods, of course indicates
the last element (see the sketch after this list). If you'd asked me the day
I made that mistake I could have told you indexOf probably returns -1 on
failure to find something, but it didn't occur to me in the moment.

* 'wombat'.charAt(20); //returns an empty string, but that's a concrete
value whereas 'wombat'[20] returns undefined
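
The sketch promised above -- how indexOf's -1 slips through unnoticed:

    var arr = [0, 1];
    arr.indexOf('wombat');             // -1: not found
    arr.slice(arr.indexOf('wombat'));  // [1] -- the -1 silently means "last element"
    'wombat'.charAt(20);               // '' (a concrete, but empty, string)
    'wombat'[20];                      // undefined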

Is consistent type return a heuristic carried over from more strictly-typed
paradigms or would it murder performance of the native methods to do the
logic required to return something like null in these cases? In a dynamic
language, why not focus on more consistent return types across the board
for an indicator that you won't be getting particularly handy results?


Generic Fail Values

I suspect I'm in the minority but I actually like the variety in the more
generic negative-result/failure values like undefined, null and NaN since
they can help you understand the nature of a problem when they show up in
unexpected places but more consistency of implementation and clarity in
terms of what they mean would definitely be valuable.


Here's my assumptions about the intent of the following values. Please
correct me if I'm wrong:

* undefined - Makes sense to me as-typically implemented (possibly 100%
consistently as I can't think of exceptions). You tried to access something
that wasn't there. Only happens when a function actually returns a
reference to something holding that value or doesn't define something to
return in the first place, or via any property access attempt that doesn't
resolve for the indicated property name/label.

* NaN - Something is expected to evaluate as a number but that's not really
possible due to the rules of arithmetic or a type clash. In some cases it
seems as if the idea is to return NaN any time a number return was expected
but for some reason couldn't be achieved, which as a heuristic doesn't seem
like such a hot idea to me.

* null - Indicates an absence of value. There were no regEx matches in a
string, for instance.


How I'd prefer to see them:

* undefined - as is. It seems like the most consistently implemented of the
lot and when I spot an undefined somewhere unexpected it only takes 1-2
guesses to sort out what's going wrong typically.

* NaN - It can tell you a lot about what kind of thing went wrong, but given
its not-equal-to-itself nature it can be a nasty return value when
unexpected. For instance, 'wombat'.charCodeAt(20) returns NaN. How does
this make sense in the context of JavaScript? Yes, I'm trying to get a
number, but from what I would assume (in complete ignorance of Unicode
evaluation at the lower level) is some sort of look-up table. I'm not
trying to divide 'a' by 2, parseInt('a') or get the square root of a
negative number. It's as counter-intuitive as indexOf returning NaN on a
failure to find a matching element value. A highly specific return value
like NaN only seems ideal to me when the user-placed value responsible is
an operand or a single argument to a simpler method that is one step
away from evaluating the arg as a number or failing to do so.

* null - As typically implemented, but more universally and broadly. I'd
like to see null in core methods acting more as a catch-all when dealing
with something like a NaN that resulted from operations that don't directly
hit a single obvious argument.  Essentially a message from core methods
telling you, "There's no error, but I can't do anything useful with these
arguments." Examples: There is no index for a value that can't be found in
an array. No matches were possible with that regEx. A more complicated
method that could be attempting to access something in its instance that's
not there, or that has trouble with a number of args, runs into trouble and
returns null on the principle that it's better to be general than misdirect.


An overly explicitly named method to make my point:

someImaginaryCoreMethodThatGetsAnArrayValueViaSomeArrayKeyAndDividesByTwo(someArrayKey)

So basically when the method takes that array key, gets an undefined value
with it, tries to divide undefined by 2, and gets NaN, what's the most
helpful return value from a less experienced user's perspective? Is the
array key undefined or not a 

Re: Consistency in The Negative Result Values Through Expansion of null's Role

2012-08-15 Thread David Bruant
On 16/08/2012 00:35, Rick Waldron wrote:
 On Wed, Aug 15, 2012 at 6:02 PM, Erik Reppen erik.rep...@gmail.com wrote:


 Is consistent type return a heuristic carried over from more
 strictly-typed paradigms or would it murder performance of the
 native methods to do the logic required to return something like
 null in these cases? In a dynamic language, why not focus on more
 consistent return types across the board for an indicator that you
 won't be getting particularly handy results?


 It would break the web.
I agree and would like to encourage you (Erik) to read the foreword of
my ECMAScript regrets project
https://github.com/DavidBruant/ECMAScript-regrets#foreword

The only way forward to fix broken parts of JavaScript (like making it
more consistent) is to give up on JavaScript and create a new language
that compiles down to JavaScript. That's my opinion at least.

David


Re: Feedback on hypot and hypot2

2012-08-15 Thread David Bruant
On 14/08/2012 04:16, Allen Wirfs-Brock wrote:
 check out the current ES6 spec. draft.  Based upon discussions at the
 March TC39 meeting hypot2 was eliminated and an optional third
 argument was added to hypot.
Quoting relevant part of the March meeting notes [1]:
 Discussion of hypot, hypot2.
 hypot is the square root of the sum of squares and takes either two or
 three arguments.
 hypot2 is the sum of squares and takes either two or three arguments.
 Waldemar: How is hypot2 better than just doing x*x + y*y?
 Luke: It's just ergonomics.
 General reluctance about the hypot2 name because it looks like the 2
 means two arguments (as in atan2).  Some debate about other function
 names (hypotSq? sumOfSquares?).
 MarkM: How is hypot better than sqrt(x*x + y*y)?
 It's potentially more efficient and more accurate.  It is widespread
 in numeric libraries.
 Consensus:  hypot will support just two or three arguments.  hypot2 dropped.
Consensus here indeed.

 Waldemar, MarkM:  Why not one or zero arguments?  It would be 0 for
 zero arguments and abs for one argument.
 Allen, DaveH:  If you pass one argument to hypot, you'll get NaN.
 Luke:  It's not variadic.
 Waldemar:  Why isn't it variadic?
 Luke:  2 or 3 is the 99% use case.
 Waldemar:  2 or 3 arguments is the 99% use case for max.
 Waldemar:  If it's not variadic and takes only 2 or 3 arguments,
 you'll get silent mistakes.  If you pass in four arguments, you'll get
 the hypot of the first three, and the last one will be silently
 ignored.  That's bad.
I agree, and it seems to be unaddressed by the decision in the consensus,
doesn't it?
If it's decided that hypot should only accept at most 3 arguments, then,
passing 4 or more args should return NaN (instead of making people hate
JavaScript more because of error hiding).
I however still believe having hypot variadic is more interesting, though.
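
Just to make the variadic option concrete, a naive sketch (a real
implementation would also rescale to avoid intermediate overflow/underflow,
which is part of why hypot can beat a hand-written sqrt of squares):

    function hypot() {
      var sum = 0;
      for (var i = 0; i < arguments.length; i++) {
        sum += arguments[i] * arguments[i];
      }
      return Math.sqrt(sum);  // hypot() === 0 and hypot(x) === abs(x), as noted above
    }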

 Luke:  Will go back to the experts to explore implementing variadic hypot.
Who/what does "the experts" refer to here?


 Note that the new function names such as hypot are generally selected
 to match the names from widely used c
 libraries: 
 https://docs.google.com/spreadsheet/ccc?key=0Ak51JfLL8QLYdDBVcFZaMXhlY2d2RnM0TDVxLWlua3chl=en#gid=0

I certainly missed the discussion where this choice was made, but does
C-matching matter?
For Math functions, I would tend to favor function names math folks are
used to, like the ones in Matlab (which I don't know at all, maybe they
map the C ones too).

Is the proportion of people who come to JavaScript from C using math
libraries (as opposed to people coming to JavaScript with any other
background) big enough to consider familiarity with C function names a
decisive argument for JavaScript function naming?

David

[1] https://mail.mozilla.org/pipermail/es-discuss/2012-March/021919.html


Re: Test intl402/ch12/12.1/12.1.1_18.js problem

2012-08-15 Thread Norbert Lindenberg
For those puzzled by the subject line: It's a reference to a test in the 
growing conformance test suite for the ECMAScript Internationalization API, 
currently available here:
http://lindenbergsoftware.com/ecmascript/test262/tests1013.patch

For this specific test, an implementation could simply save the value of the 
hour12 property provided by the caller and return it through resolvedOptions(). 
But if the caller doesn't provide an hour12 property, the implementation is 
supposed to look up whether the locale of the DateTimeFormat uses 12-hour time 
or 24-hour time, and that indeed doesn't make much sense if the format doesn't 
include an hour field.

So yes, I think we can change the spec to set the [[hour12]] internal property 
only if the [[hour]] internal property is present after step 28 of 
InitializeDateTimeFormat. That's similar to how [[currency]] depends on 
[[style]] in InitializeNumberFormat.

A bigger problem actually is that FormatDateTime currently ignores hour12 
entirely. I'll have to fix that too: If hour12 is true, the algorithm should 
set hour = hour % 12, allow locales to display hour 0 as 12, and insert a 
localized am/pm indicator.
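
Roughly, in pseudo-JS (localizedAM/localizedPM and localeShowsZeroAsTwelve
are made-up placeholders for what the locale data would supply):

    if (hour12) {
      var amPm = (hour < 12) ? localizedAM : localizedPM;  // inserted into the pattern
      hour = hour % 12;
      if (hour === 0 && localeShowsZeroAsTwelve) {
        hour = 12;
      }
    }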

Any objections to these changes?

Norbert


On Aug 15, 2012, at 17:50 , Nebojša Ćirić wrote:

 Error message:
 
 Option value true for property hour12 was not accepted; got undefined instead.
 
 If you create date formatter with:
 var df = Intl.DateTimeFormat([], {hour12: true})
 
 you get only day/month/year by default, no hour data. ICU will return M/d/y 
 pattern back, and you don't have enough data about hour12 value. As soon as 
 you request hour data you'll get some. I wonder if we should change the spec 
 to say that hour12 field makes sense only if hour field is present?
 
 -- 
 Nebojša Ćirić



Re: Feedback on hypot and hypot2

2012-08-15 Thread Allen Wirfs-Brock

On Aug 15, 2012, at 5:09 PM, David Bruant wrote:

 On 14/08/2012 04:16, Allen Wirfs-Brock wrote:
 check out the current ES6 spec. draft.  Based upon discussions at the
 March TC39 meeting hypot2 was eliminated and an optional third
 argument was added to hypot.
 Quoting relevant part of the March meeting notes [1]:
 ...
 Waldemar, MarkM:  Why not one or zero arguments?  It would be 0 for
 zero arguments and abs for one argument.
 Allen, DaveH:  If you pass one argument to hypot, you'll get NaN.
 Luke:  It's not variadic.
 Waldemar:  Why isn't it variadic?
 Luke:  2 or 3 is the 99% use case.
 Waldemar:  2 or 3 arguments is the 99% use case for max.
 Waldemar:  If it's not variadic and takes only 2 or 3 arguments,
 you'll get silent mistakes.  If you pass in four arguments, you'll get
 the hypot of the first three, and the last one will be silently
 ignored.  That's bad.
 I agree, and it seems to be unaddressed by the decision in the consensus,
 doesn't it?
 If it's decided that hypot should only accept at most 3 arguments, then,
 passing 4 or more args should return NaN (instead of making people hate
 JavaScript more because of error hiding).
 I however still believe having hypot variadic is more interesting, though.

All JavaScript built-ins accept and ignore unspecified extra arguments, so 
there is nothing out of the ordinary about ignoring a 4th argument.  There is a 
significant difference between specifying that a function takes exactly 2 or 3 
arguments and saying that it takes an arbitrary number of arguments.

 
 Luke:  Will go back to the experts to explore implementing variadic hypot.
 Who/what does the experts refer to here?

presumably Microsoft keeps warehouses full of experts.
 
 
 Note that the new function names such as hypot are generally selected
 to match the names from widely used c
 libraries: 
 https://docs.google.com/spreadsheet/ccc?key=0Ak51JfLL8QLYdDBVcFZaMXhlY2d2RnM0TDVxLWlua3chl=en#gid=0
 
 I certainly missed the discussion where this choice was made, but does
 C-matching matter?

matching precedent.  


 For Math functions, I would tend to favor function names math folks are
 used to, like the ones in Matlab (which I don't know at all, maybe they
 map the C ones too).
 
 Is the proportion of people who come to JavaScript from C using math
 libraries (as opposed to people coming to JavaScript with any other
 background) big enough to consider familiarity with C function names a
 decisive argument for JavaScript function naming?

I believe that the thinking was that people may be translating Math libraries 
or numeric code currently implemented in C/C++ into JS and that name 
familiarity would help.

Allen


 
 David
 
 [1] https://mail.mozilla.org/pipermail/es-discuss/2012-March/021919.html
 



Re: Consistency in The Negative Result Values Through Expansion of null's Role

2012-08-15 Thread Norbert Lindenberg

On Aug 15, 2012, at 15:35 , Rick Waldron wrote:

 On Wed, Aug 15, 2012 at 6:02 PM, Erik Reppen erik.rep...@gmail.com wrote:
 
 * 'wombat'.charAt(20); //returns an empty string, but that's a concrete 
 value whereas 'wombat'[20] returns undefined
 
 For the same reason indexOf always returns a number, charAt always returns a 
 string.
 
 'wombat'[20] will dereference the string at an index that doesn't exist, 
 which means it's undefined.

I think undefined would have been a fine return value for 'wombat'.charAt(20) - 
you asked for something that's not there. The expected return value in the 
"it's there" case is a one-code-unit string, so an empty string doesn't meet 
expectations anyway.

 * NaN - It can tell you a lot about what kind of thing went wrong but given 
 its not-equal-to-itself nature it can be a nasty return value when 
 unexpected. For instance, 'wombat'.charCodeAt(20) returns NaN. How does this 
 make sense in the context of JavaScript? Yes, I'm trying to get a number 
 but from what I would assume (in complete ignorance of unicode evaluation at 
 the lower level) is some sort of look-up table. I'm not trying to divide 'a' 
 by 2, parseInt('a') or get the square root of a negative number.

In this case NaN is clearly wrong, because what the caller expects is a code 
unit, an integer between 0 and 0xFFFF. You have to check for NaN just like you 
have to check for undefined, but undefined would have been the normal 
JavaScript result for "nothing there".

It's too late to fix charCodeAt, but for the new codePointAt I'm proposing 
undefined as the "nothing there" result.
http://norbertlindenberg.com/2012/05/ecmascript-supplementary-characters/index.html#String
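
Side by side (the last line is the proposal above, not shipping behavior):

    'wombat'.charAt(20);       // ""   -- a string, but not the expected one-code-unit string
    'wombat'[20];              // undefined
    'wombat'.charCodeAt(20);   // NaN  -- the case discussed above
    'wombat'.codePointAt(20);  // undefined, under the proposal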

Norbert
