Re: revive let blocks

2015-06-20 Thread Kyle Simpson
Just to wrap this thread up, quoting myself from another thread:

In any case, I won't push my proposal anymore.



But for posterity's sake, I wanted to make one last comment as to why the various 
suggestions for IIFEs and arrow expressions are inappropriate for the task: 
they change (hijack) the behavior of `return`, `break`, and `continue`. A 
standalone block like `{ let x = 2; .. }` or `let (x = 2) { .. }` can be placed 
anywhere, inside a function, loop, etc, and not hijack these types of 
statements.

I'll be sticking with:

```js
{ let x = 42;

console.log("The meaning of JS:", x);

}
```

Appreciate the various thoughtful responses.
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: The Tragedy of the Common Lisp, or, Why Large Languages Explode (was: revive let blocks)

2015-06-20 Thread Kyle Simpson
  I agree completely, and I fully apologize. Starting the thread this way was 
  inappropriate, at least without some mitigating text which I did not think 
  to add. I like the fact that we are all civil to each other here and try to 
  keep the environment welcoming and friendly. Please no one take my message 
  as precedent in the other direction.
 
 I think what that HN thread is missing is that you and Kyle know each other 
 and have interacted before, and you knew he would not take personal offense 
 or think you don't appreciate his contribution efforts to the standard.

"Kill" seemed too directed at first and was off-putting, especially since there 
are a lot of post-ES6 proposals floating about and none of them received the 
same sort of strong pushback. It's not as if there was evidence that the 
proposal was intentionally bad-faith, toxic or trolling. I appreciate Mark's 
reflective sentiments, and I also echo his general concerns of growing a 
language beyond its appropriate scope.



I believe small variations to existing features should have far less burden of 
proof than large features. It's a minor affordance for a particular style of 
coding with no value proposition other than to assist in avoiding 
mistakes (the TDZ footgun specifically). I think that carries its own weight 
and then some. And further, I don't think small polishes are necessarily the 
subject of death by a thousand papercuts, though I recognize they can lead to 
that if we're not judicious.

In any case, I won't push my proposal anymore. I just wanted to assert it *was* 
carefully considered beforehand.



In the spirit of a retrospective on ES6, my own concerns with ES6 are not in 
the small things (or even in the raw count of features added), but actually in 
some large things. There were major features (such as `class`) added in ES6 
that I continue to have strong reservations about, not just in themselves but 
in how we're seeing them already in post-ES6 act as magnets for several other 
feature requests. I'm not trying to re-litigate `class` by any means -- it's a 
done deal -- but simply pointing out that a large feature like that is, I 
think, more likely to lead to feature bloat than tiny syntax tweaks.

I respect and appreciate the difficult work it takes to make these decisions. I 
hope the positive spirit of this thread carries over into careful consideration 
of the other post-ES6 proposals, even and especially the ones that have 
garnered lots of excitement and are already being talked about as if they're 
done, but which may not in the long run be best for JS.


Re: revive let blocks

2015-06-18 Thread Kyle Simpson
 (function (a, b, c) {
 
 }(2))

The main disadvantage of that style over the one I'm advocating for is that it 
visually separates the variable declaration (`a`) from its value initialization 
(`2`). If there's 5, 10, or more lines of code in between them, it makes it 
much harder to figure out the initial state of variables as they enter a 
block.

Also, obviously, a function call is more heavyweight (performance-wise) than a 
block with scoped declarations.
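To make the contrast concrete, here's an illustrative side-by-side (the names and values are my own, not from the quoted code):

```js
// IIFE form: `a` is declared here...
var iifeResult = (function (a) {
  // ...potentially many lines of code in between...
  return "IIFE a: " + a;   // ...while its value (2) comes from the call site below
}(2));

// Standalone-block form: name and initializer sit together at the top.
var blockResult;
{ let a = 2;
  blockResult = "block a: " + a;
}

console.log(iifeResult);   // "IIFE a: 2"
console.log(blockResult);  // "block a: 2"
```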


Re: revive let blocks

2015-06-18 Thread Kyle Simpson
 Be aware that the way you utilize 'let' will be a breaking change. In ES5 and 
 ES6

In addition to the fact that this feature has long existed in FF and doesn't 
seem to have broken the web, IIUC there was already a breaking change in ES6, 
with `let` and destructuring:

```js
let[x] = foo();
```

I believe it was deemed that the chances of that breakage being widespread were 
low enough to warrant the `let` feature anyway. I would postulate (though I 
don't have the facility to test it) that the exact function-call-and-block 
pattern `let(x) { .. }` would be as much or less likely to occur in legacy code 
than `let[x] = foo()`.
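A sketch of the two readings of that statement (using a stand-in object name, since `let` as an identifier is a SyntaxError in strict mode):

```js
// ES5 reading of `let[x] = foo()`: `let` is an ordinary identifier, so the
// statement is a computed-member assignment. Simulated with a stand-in name:
var legacyLet = {};
var x = "key";
legacyLet[x] = "old-style";   // what pre-ES6 code meant by `let[x] = ...`

// ES6 reading: a `let` declaration with array destructuring.
let [first] = [42];

console.log(legacyLet.key);   // "old-style"
console.log(first);           // 42
```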

 If you consider use of a block without context ugly, use if(true) or 
 do-while(false)

Those options are much uglier than the standalone `{ .. }` block.



Re: revive let blocks

2015-06-18 Thread Kyle Simpson
 Apart from complicating the engine and the grammar

I've tried to figure out what complications it introduces. In my imperfect 
analysis, it hasn't seemed like much. I've written a transpiler tool[1] that 
finds `let (x..) { .. }` occurrences and changes them to `{ let x.. .. }`. It 
was pretty easy to do. I would imagine a similar technique could work for other 
transpilers and even the engines themselves (simple AST transformation).
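A minimal sketch of that transformation, assuming a naive regex rewrite (the actual let-er tool uses a real parser, so treat this as illustration only):

```js
// Naive rewrite: `let (a = 2) { ... }`  -->  `{ let a = 2; ... }`
function rewriteLetBlock(src) {
  return src.replace(/let\s*\(([^)]*)\)\s*\{/g, function (_, decls) {
    return "{ let " + decls + ";";
  });
}

console.log(rewriteLetBlock("let (x = 2) { console.log(x); }"));
// → "{ let x = 2; console.log(x); }"
```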

I'm sure there are other/obscure issues I'm missing, but the fact that this 
still works in FF leads me to believe it is a tenable feature at least.

 what advantage does the second version have over the first one?

The primary advantage is that it's an explicit form that syntactically forces 
the main `let` declarations to the top of the block, eliminating the TDZ hazard 
(at least for those).

Ofc that's not to say that you can't also do other `let` declarations inside 
the block and shoot yourself in the foot. But at least for the main ones you're 
declaring for the block, it's clear and obvious what variables will exist for 
that block's scope by looking at the top of the block.

That notion is similar to the advantages many devs feel/felt from putting all 
the `var` declarations at the top of a function declaration, or locating the 
formal function parameters explicitly in the function declaration instead of 
implicitly pulling in local variable declarations from `arguments`:

```js
function foo() {
   var [a,b,c] = arguments;
   // ..
}

// vs:

function bar(a,b,c) {
   // ..
}
```
  
 Why do you prefer it to the first one?

I prefer creating explicit blocks for scope rather than implicitly hijacking 
existing blocks for scope. I prefer to be able to reason about my `if` block 
separately from a localized block of scope that may appear inside it. That's 
why I create the explicit `{ .. }` block to put my `let` declarations in. And 
that's why the next logical step is to move the `let` declarations to a 
syntactic form that forcibly attaches them to the block, making the 
purpose/intent of the block all that much clearer.


  [1] https://github.com/getify/let-er



revive let blocks

2015-06-17 Thread Kyle Simpson
I'd like to ask if there's anyone on TC39 that would be willing to champion a 
proposal to add the let-block (let-statement) syntax?

I currently write my block-scoped declarations as:

```js
{ let a = 2, b, c;
// ..
}
```

I do this because I want to be in the habit of always putting my `let` 
declarations at the top of blocks to avoid TDZ hazards. However, Firefox has 
long had the alternate let-block/statement syntax, which I prefer:

```js
let (a = 2, b, c) {
// ..
}
```

Would there be support to consider such a proposal?

Side note: I'd also be in favor of a `const (a = 2) { .. }` form, if the 
symmetry was appealing.


Re: super() on class that extends

2015-04-10 Thread Kyle Simpson
Neither the base (parent) nor derived (child) class requires a constructor, nor 
does the child class require a `super()` call. If you omit either constructor, 
an assumed one is present. However, if you *do* declare a constructor in a 
derived class, you'll need to call `super()` in it.

So, to the point of your original question, this is totally valid:

```js
class A {
  foo() { console.log("A:foo"); }
}

class B extends A {
  bar() { super.foo(); }
}

var x = new B();

x.bar(); // "A:foo"
```

See it in action:

http://babeljs.io/repl/#?experimental=false&evaluate=true&loose=false&spec=false&playground=false&code=class%20A%20%7B%0A%20%20foo()%20%7B%20console.log(%22A%3Afoo%22)%3B%20%7D%0A%7D%0A%0Aclass%20B%20extends%20A%20%7B%0A%20%20bar()%20%7B%20super.foo()%3B%20%7D%0A%7D%0A%0Avar%20x%20%3D%20new%20B()%3B%0A%0Ax.bar()%3B





Re: Supporting feature tests directly

2015-03-29 Thread Kyle Simpson
Without the direct feature test API I'm suggesting (or something like it), how 
will someone feature test the two new (proposed for ES7) `export` forms, for 
example?

https://github.com/leebyron/ecmascript-more-export-from

I'm not strongly opposed to going the `Reflect.parse(..)` route for 
feature-testing (certainly more preferable than `eval` / `Function`), except 
I'm concerned that:

1. it will offer no reasonable path in the future for answering the hard 
tests, like TCO would have been. Would `Reflect.parse( Symbol.TCO )` be too 
janky of a hack for such things?
2. engines won't be able to tell (static analysis?) that the parse tree isn't 
needed, leaving wasted memory for GC to clean up.

The advantage of an API that returns nothing but `true` / `false` is that the 
engine knows it doesn't need to keep the tree around or send it into JS-land. 
I don't know if there are any internal processing benefits, but there certainly 
seem to be memory benefits.


 I don't see a real need for high performance in these tests

High performance? No.

But, if these feature tests slow down an app in the most critical of its 
critical paths (the initial load) to the point where people can't use the 
feature tests in the way I've proposed, then the solution is moot.

I *could* load up an entire parser written in JS and use it to parse syntax 
strings. That's *a* solution. But it's not a *viable* solution because it's way 
too slow for the purpose of feature tests during a split load.

So it should be noted that the proposal does imply that whatever solution we 
come up with, it has to be reasonable in performance (certainly much better 
than `eval` / `Function` or a full JS parser loaded separately).


Re: short-circuiting Array.prototype.reduce

2015-03-27 Thread Kyle Simpson
 I think you could write that like this:
 
 outer = outer.filter(arr =>
   !arr.some((e, i) =>
     i > 0 && arr[i-1] === e));

Yes, you are of course correct. What I was doing in the originally cited code 
was illustrating how `reduce(..)`, by its nature, supports the adjacency 
check, instead of using indexes and manual `i-1` type logic.

IOW, I initially wanted to avoid the ugly `i-1`, and I traded that for the 
unfortunate lack of early exit necessitating the equally ugly `prev === false`. 
:/





Re: short-circuiting Array.prototype.reduce

2015-03-26 Thread Kyle Simpson
 Um, that's not exactly what reduction is meant for.

There's lots of different ways `reduce(..)` gets used in the wild; I can list 
several entirely distinct but common idioms right off the top of my head. Just 
because it's not the original intent doesn't mean it's invalid to do so.

To the point of the earlier question I was addressing, I was just giving *an* 
example of real code, in a very specific circumstance, where I would have liked 
early-exit. It was not a broad endorsement of the presented idiom as a general 
pattern.


 The reduce method is designed so that the return values and the accumulator 
 argument do have the same type.

In my observation, there's nothing at all that requires that. This is certainly 
not the only time that I've made effective use of mixing/toggling types during 
reduction.


 In your example, you have somehow mixed an expected boolean result with the 
 item type of the array.

If by "expected boolean result" you mean what `filter(..)` expects to receive, 
actually it doesn't require a strict boolean. It expects a truthy/falsy value 
(check the spec). I like coercion. I use it liberally. The values in my `inner` 
arrays were all truthy values (see below) and `filter(..)` works perfectly fine 
receiving such.


 This leads to several bugs in your implementation, which doesn't work

None of those are bugs in my implementation, because none of those can happen 
within the constraints of the problem. If you re-read the stated setup for the 
problem I was solving, you'll see the constraints I'm referring to.

BTW, since you brought it up, for the empty `inner` array case to be supported 
(I didn't need it, but...), all I would need to do is `inner.reduce( 
function.., undefined )` (or `false` if you prefer) if I wanted empty arrays 
filtered out, or `inner.reduce( function.., true )` if I wanted empty arrays 
preserved. Easy.
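For concreteness, here's what those two initial-value choices do on an empty array (illustrative, using the same reducer shape as the earlier filter example):

```js
// The same reducer shape as the earlier filter example:
var reducer = function (prev, current) {
  if (prev === false || prev === current) return false;
  return current;
};

// With an initial value supplied, reduce() on an empty array returns it as-is
// (without one, an empty array would throw a TypeError):
var kept    = [].reduce(reducer, true);       // true (truthy) -> filter() keeps it
var dropped = [].reduce(reducer, undefined);  // undefined (falsy) -> filtered out

console.log(kept);             // true
console.log(Boolean(dropped)); // false
```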


 all operations that return an absorbing element...would benefit

My `false` value trigger on finding an `inner` that should be filtered out is 
conceptually that. From then on in the reduction, all other values are 
absorbed (aka ignored, aka overridden) by the `false`. :)


Re: Converting strings to template strings

2015-03-26 Thread Kyle Simpson
 What have you been calling the MemberExpression TemplateLiteral and 
 CallExpression TemplateLiteral forms?

Those are two variations of the Tagged String Literals form.


Re: short-circuiting Array.prototype.reduce

2015-03-26 Thread Kyle Simpson
 The example code isn't very compelling either; something more real-world 
 would be good

I recently ran across a usage of `reduce(..)` that could have benefitted from 
an early return. Figured I'd just drop it here for posterity's sake, in case 
anything ever comes of this idea.

I had an array of arrays (2-dimensional array), where each inner array had a 
list of simple string/number elements (none of them falsy) that could have some 
repetition within the inner array. I wanted to filter the outer array (the list 
of the inner arrays) based on whether an inner array had any adjacent 
duplicates. That is, `[1,4,2,4]` is fine to keep but `[2,4,4,5]` should be 
filtered out.

Since `reduce(..)` conveniently can compare two adjacent elements if you always 
return the current value, I decided to model the inner check as a `reduce(..)` 
that reduces from the original array value to either a `false` or a truthy 
value (the last element of the inner array element). This reduction result then 
is how `filter(..)` decides to keep or discard. The reason an early exit would 
be nice is that as soon as you run across an adjacency-duplication, no more 
reduction is necessary -- you can immediately reduce to `false`.

Here's how I did it, which worked but which was slightly less appealing:

```js
var outer = [
  // [1,2,1,3,4,2]
  // ["foo","bar","bar",10,"foo"]
  // ..
];

outer = outer.filter(function filterer(inner){
  return inner.reduce(function reducer(prev,current){
if (prev === false || prev === current) return false;
return current;
  });
});
```

The reduction initial-value is omitted, so it's `undefined`, which never 
matches any of the `inner` contents.

The `prev === false` check is the way that I "fake" the early exit, by which 
once the reduction value is tripped to `false`, that's always the result for 
the rest of the reduction.

There's lots of other ways to slice that problem, I know. But reduction was 
attractive except for its lack of early exit.


Re: Supporting feature tests directly

2015-03-26 Thread Kyle Simpson
 doesn't yet solve my use cases, although I can't speak for Kyle.

It would not support my use-case. At least, in the sense that it's an 
all-or-nothing which is counter to what I'm looking for. It's also going to be 
way more processing intensive than just doing an `eval` / `Function` test, 
which defeats the entire point of the proposal.


 a feature that was specifically designed to enable non-conforming 
 implementations

That's not at all the intent of this feature. More below.


 This sort of feature testing is inherently a short term need. Within a few 
 years, all implementations will support all major features

Within a few years, all implementations will be ES6 compliant, sure. But 
they'll never all be entirely up to date on ES2016, ES2017, ES2018, … as they 
roll out.

This feature testing mechanism is intended to be a rolling window of FT's for 
the gap between when something is standardized (to the point that developers 
could rely on polyfills/transpiles for it) and when it's fully implemented in 
all browsers that your app is running on. This gap could be as short as 6-12 
months and (considering mobile) as long as several years.

On an app-by-app, need-by-need basis, there will *always* be such a gap, and 
FT's let you know what you have available at that moment in that specific 
browser.

This is directly analogous to all other classes of FT's, such as Modernizr 
(focused more on HTML/CSS, with JS only as it relates to one of those).


 For example, I’m sure nobody today has a need to test 
 Reflect.supports(Symbol.functionExpression) or 
 Reflect.supports(Symbol.tryCatch).

No, they don't. Exactly my point with the rolling window. And exactly why I 
stated that the intent of this feature is *not* about ES6 (or ES5) features, 
but rather about new stuff in ES2016+. It would be my hope that the feature 
testing API proposed could be one of the first things browsers could land 
post-ES6, which would mean devs could soon'ish start using those tests to 
track/cope with the gap between the ES2016 stamp of approval and when all those 
ES2016 features land. And of course the same for ES2017 and beyond.

And since what I'm asking for is stuff that, largely, can already be tested, 
just less efficiently, we could very quickly polyfill `Reflect.supports` to let 
devs use it even earlier.
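A hedged sketch of what such a polyfill could look like, approximating the parse-only test with the `Function` constructor (the `supports` name and semantics come from the proposal, not a shipping API):

```js
// `Function(src)` parses (and compiles) the source without executing it,
// throwing a SyntaxError on unsupported syntax -- enough for a yes/no test.
var R = typeof Reflect !== "undefined" ? Reflect : {};
if (!R.supports) {
  R.supports = function supports(src) {
    try {
      Function(src);
      return true;
    } catch (e) {
      return false;
    }
  };
}

console.log(R.supports("var x = 1;"));  // true
console.log(R.supports("let (x) {}"));  // false: let-blocks aren't standard syntax
```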


 would be throw-away work that within a few years would just be legacy baggage

My design intent with my proposal, supporting the string syntax form, is to not 
have a huge table of lookup values that are legacy baggage and thrown away, but 
a general feature that is flexible and continues to be useful going forward.

The few exception cases, if any, like for example a `Symbol.TCO` test or 
whatever, would be very small, and their burden of legacy would be quite low 
once we're past the window of them being useful.


 a feature such as Reflect.parse which has other uses 

As I mentioned near the beginning of this thread, `Reflect.parse(..)` would 
generally suit the proposed use-case, except it does a lot of extra work 
(creating and returning a tree -- a value that then I'd be throwing away 
creating unnecessary GC) that feature testing itself doesn't need. It's unclear 
that `Reflect.parse(..)` would provide any additional performance gains over 
the current `eval` / `Function` approach, and could even be potentially worse.

It's also unclear that `Reflect.parse(..)` would ever have any reasonable 
answer for the hard tests we've briefly touched on, such as exposing 
semantics like TCO or any other sorts of things we invent which can't 
reasonably be tested by syntax checks or pragmatically tested via runtime code. 
At least `Reflect.supports(..)` *could* have an answer for that.





Re: `import` and hoisting

2015-03-25 Thread Kyle Simpson
bump.


Re: Supporting feature tests directly

2015-03-25 Thread Kyle Simpson
What this sub-discussion of CSS `supports(..)` is reinforcing is what I said 
earlier: a capability to do feature tests in a direct, efficient, and non-hacky 
manner is valuable to some/many uses and use-cases, even with the recognition 
that it doesn't have to *perfectly* support all conceivable 
uses/use-cases/tests.

We should avoid a mindset that anything short of perfect isn't worth doing at 
all. Thankfully JS doesn't have such a design principle.

A `Reflect.supports( Symbol.TCO )` test isn't perfect. It could accidentally or 
intentionally lie. But it *could* be better to some audiences than having no 
information. I personally would prefer to use it, even with its risks, than 
trying a long recursive loop in a `try..catch` to imply if TCO was in effect.
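For reference, the kind of recursive-loop probe being described might look like this (a sketch; the `probeTCO` name is mine):

```js
function probeTCO() {
  "use strict";               // proper tail calls are a strict-mode-only ES6 feature
  function f(n) {
    if (n === 0) return true;
    return f(n - 1);          // a proper tail call, per the ES6 semantics
  }
  try {
    return f(1e6);            // deep enough to overflow any non-TCO stack
  } catch (e) {
    return false;             // stack exhausted => TCO not in effect
  }
}

console.log(probeTCO());      // false in engines without TCO
```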

Nevertheless, it's the least important kind of test being advocated for here, 
even though it seems to be getting all the attention. If that kind of test is a 
bone of contention, it should be the easiest to drop/ignore.

Moreover, to reduce the risk of bitrot on feature lookup tables (that 
`Symbol.TCO` would suffer), the `Reflect.supports( "(() => {})" )` test seems 
like it would be preferable to a `Reflect.supports( Symbol.arrowFunction )` 
type of test.


Re: Supporting feature tests directly

2015-03-25 Thread Kyle Simpson
 It's not that it's imperfect. It's that it's useless in the real world.

It's clear it's useless to you. It's not clear that it's useless to everyone. 
In fact, I for one definitely find it useful. No sense in continuing to argue 
over subjective opinion.


 We can already do shallow testing of APIs. Reflect.support doesn't help 
 there, and in some ways (that I've outlined before) it is a regression.
 
 ```
 if (!Array.prototype.includes) { ... }
 if (!Reflect.supports(Array.prototype.includes)) { ... }
 ```

As I've repeatedly said, this proposed feature is not for those sorts of tests. 
It's for all the syntax tests that require `try..catch` + `Function` / `eval`. 
Please (re)read the rest of the thread.


 You also wouldn't do testing of syntax support at runtime

I already do. I fully intend to keep doing so.


 as you would effectively be duplicating the code.

Nope, not duplicating code. Maintaining code in original ES6+ authored form as 
well as transpiled form. They're both files that can be loaded by a browser. So 
my intent is to decide at runtime which one is appropriate, and only load one 
or the other.


 ...send down a file that tests for support and then sends it back to the 
 server

Yep, absolutely. Bootstrapping.


 and then build the appropriate assets for that browser?

Of course not. It picks one of two already existing files.
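The decision being described could be sketched like this (the file names are assumptions, not from the thread):

```js
// Feature-test the needed syntax, then pick one of two pre-built files.
function pickBundle() {
  try {
    Function("(() => {})");        // parse-test a piece of needed ES6 syntax
    return "app.es6.js";           // natively authored build
  } catch (e) {
    return "app.transpiled.js";    // pre-built transpiled fallback
  }
}

console.log(pickBundle());  // "app.es6.js" in any ES6-capable engine
// a bootstrapper would then inject a <script> tag for the chosen file
```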



Re: Supporting feature tests directly

2015-03-24 Thread Kyle Simpson
I should stress that while my original proposal (linked earlier in thread) 
mentions some of the hard ES6 cases (like TCO), my focus is not on creating 
feature tests for ES6. ES6 has sailed. Any feature we could possibly conceive 
here is quite unlikely to land in a browser before that browser gets all (or at 
least most) of the ES6 stuff that one might be wanting to test for.

My goal is for us to stop adding features to JS that aren't practically feature 
testable. I would strenuously desire to have something like 
`Reflect.supports(..)` (of whatever bikeshedded form) in ES2016 along with any 
new conceived features. That goes a thousand times more if we invent new syntax 
(we likely are) or new untestable semantics (like TCO).

Of course, if we had `Reflect.supports(..)` now, it'd be amazingly helpful in 
detecting TCO, which I would dearly love. But that's not the goal. I don't 
think we need to muddy the waters about what the ES6 feature tests would be. At 
least not for now.


Re: Supporting feature tests directly

2015-03-24 Thread Kyle Simpson
 That sounds like a horrible future to me.

 IMO, this is the only remotely sensible go-forward plan to deal with the new 
 transpiler-reality we're in.

 I for one hope that we're using the actual ES6+ code browser makers are 
 implementing rather than transpiling around it forever.

Ugh. Apologies for the hyperbole. Got carried away. But that *is* how strongly 
I feel about it.



Re: Supporting feature tests directly

2015-03-24 Thread Kyle Simpson
 A lot of feature detection relies on shallow tests:

 However, others need to test that features are properly supported by the 
 engine. This is because shallow testing does not cover engine quirks. 


Of course, shallow tests are often totally sufficient, and I'm trying to 
provide the most efficient method for the places where there is no API 
identifier to check for.

That doesn't mean that you wouldn't also conduct some targeted deeper semantics 
conformance tests in places you needed to. It just means that as a first pass, 
a lot of FT's that otherwise require `Function(..)` or `eval(..)` can have a 
shorter, more optimal path supported by the engine.

It's not intended to be an exclusive replacement for any test you could ever 
conceive.


 relying on something like `Reflect.supports(...)` isn't any more useful than 
 shallow feature detection

Of course not. Nothing in my proposal is supposed to indicate as such.


 (the engine might be lying to you).

Good grief, why would we add a feature to ES2016+ that is intended to lie to 
developers or mislead them? :)

But in all seriousness, why would an engine do something like that? The bad 
cases in the past where this kind of thing happened are all hold-over vestiges 
of a bad web (a locked-in IE ecosystem, a 
still-too-painfully-slow-to-update-and-siloed-mobile ecosystem, etc).

Just because browsers have committed those sins in the past doesn't mean we 
have to assume they'll keep doing them.


 TCO is one of the places where it is difficult to test for. However, it's 
 pretty rare that you would want to.

Totally disagree here. Anyone that's following the (Crockford) advice of not 
using loops anymore and writing all recursion absolutely cares if such code can 
be directly loaded into a browser or not.


 In this case you would just write the second. This is also true for most 
 syntax features: you wouldn't use feature detection, you would simply 
 transpile your code down to the lowest level of support you need it to have.

Again, totally disagree. At least, that's not even remotely my intention. 
That's locking us in to always running transpiled code forever, which basically 
makes the engines' implementations of features completely pointless. That sounds 
like a horrible future to me.

My intention is to feature test for the features/syntax that I need in my 
natively written code, and if tests pass, load my native code so it uses the 
native features. If any tests fail, I fall back to loading the transpiled code. 
IMO, this is the only remotely sensible go-forward plan to deal with the new 
transpiler-reality we're in.

I'm even building a whole feature-detects-as-a-service thing to support exactly 
that kind of pattern. Will anyone else follow? I have no idea. But I sure hope 
so. I for one hope that we're using the actual ES6+ code browser makers are 
implementing rather than transpiling around it forever.


Re: Supporting feature tests directly

2015-03-22 Thread Kyle Simpson
 ...using eval or Function is not even an option in CSP constrained 
 environments

 ...that's exactly what we'd like to know, if a generic syntax will break or 
 not.

Furthermore, there are things which are valid syntax which cannot be directly 
`eval`'d or `Function`'d, such as `import` and `export`.


Re: Supporting feature tests directly

2015-03-22 Thread Kyle Simpson
 likely to be engine variances in the future

I hope you just mean like changes that ES7 might make to an ES6 feature. And I 
hope those aren't syntactic as much as semantic. :)

If there was a change on syntax, I would assert that should be considered a 
"new feature" with its own new test, even if it was just a variation on an 
existing one. Like `Symbol.arrowLiteral` and `Symbol.conciseArrow`, where the 
second test might check specifically places where the grammar for arrows was 
relaxed to allow omission of `( )` or whatever.


 knowing that the syntax is supported doesn't mean that ES6's semantics apply

That's true. But I think semantics are more a run-time concern, and thus should 
be checked with actually executed code (`Function(..)`, etc).

Off the top of my head, things which are statically verifiable, like duplicate 
param names, could be checked (if that's the kind of thing a parser even 
checks), but things like if we relax and allow implicit symbol coercion are 
much more clearly run-time errors.


 If that's the sole goal - detecting SyntaxErrors efficiently without using 
 eval

Yep, that's it.

Consider it a first-pass quick feature test for syntax… if more extensive 
deeper run-time semantics checks are necessary, that would more be the realm of 
`Function(..)` or other similar (future) features. At least in those 
deeper-check cases, you wouldn't have to worry about catching `SyntaxError`s, 
since you could know in advance before trying the more performance-expensive 
tests.


Supporting feature tests directly

2015-03-21 Thread Kyle Simpson
Has there been any consideration or discussion for direct support of feature 
tests for ES7+ features/syntax? I'm thinking specifically of things which are 
difficult or impossible to simply test for, such as by checking the existence 
of some identifier.

I have an idea of what that could look like, and am happy to discuss further 
here if appropriate. But I was just checking to see if there's any prior art 
around related specifically to JS to consider before I do?


Re: ES6 module syntax – done?

2015-03-21 Thread Kyle Simpson
Just for posterity's sake, since I got tripped up here…

`import .. from this module` did not make it into ES6. It may come in later, in 
that form or some other.


Re: Supporting feature tests directly

2015-03-21 Thread Kyle Simpson
 I think you're referring to the `eval` function?

Actually, I'm referring to proposing something new that would substitute for 
having to hack feature tests with `eval`.

These are the initial details of my idea, a `Reflect.supports(..)` method: 
https://gist.github.com/getify/1aac6cacec9cb6861706

Summary: `Reflect.supports( "(() => {})" )` or `Reflect.supports( "let x" )` 
could test **just** for the ability to parse, as opposed to the 
compilation/execution that `eval(..)` does. It'd be much closer to `new 
Function(..)` except without the overhead of needing to actually produce the 
function object (and then have it be thrown away for GC).

This is inspired by 
https://developer.mozilla.org/en-US/docs/Mozilla/Projects/SpiderMonkey/Parser_API,
 where FF has a `Reflect.parse(..)` method that is somewhat like what I'm 
suggesting, except that for feature tests we don't need the parse tree, just a 
true/false of if it succeeded.

An alternate form would be `Reflect.supports( Symbol.arrowFunction )`, where 
the engine is just specifically saying "yes I support that feature" by 
recognizing it by its unique built-in symbol name.
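For comparison, the closest approximation available today still has to go through the `Function` constructor. This is a hedged sketch: `Reflect.supports(..)` itself is only the proposal above, and `supportsSyntax` is a made-up helper name.

```javascript
// Hypothetical helper approximating the proposed Reflect.supports(..).
// new Function(..) forces a parse of the source text; a SyntaxError at
// construction time means the engine can't parse that syntax. Unlike the
// proposal, this also allocates a throwaway function object for GC.
function supportsSyntax(src) {
  try {
    new Function(src); // parses (and compiles) but never executes `src`
    return true;
  } catch (e) {
    return false; // SyntaxError: the engine couldn't parse it
  }
}

supportsSyntax("var f = (() => {});"); // true in any ES6+ engine
supportsSyntax("var f = (() = {});");  // false: malformed
```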


Re: Module import/export bindings

2015-03-18 Thread Kyle Simpson
If you assign another variable from an imported binding, is that assignment 
done as a reference to the binding (aka creating another binding) or via 
normal reference-copy/value-copy assignment behavior?

```js
export var a = 42;
export function b() { console.log("orig"); };
export function change() {
   a = 100;
   b = function b() { console.log("new"); };
};
```

And then:

```js
import { a, b, change } from "...";

var x = a, y = b;

x;// 42
y();  // "orig"

change();

a;// 100
b();  // "new"

x;// ??
y();  // ??
```

Will the final `x` and `y()` result in:

* `42` / `"orig"`
* `42` / `"new"`
* `100` / `"orig"`
* `100` / `"new"`



Re: Module import/export bindings

2015-03-18 Thread Kyle Simpson
 we are NOT changing the semantic of the assignment expression.

So the result is going to be `42` / `"orig"`, right? :)

The reason I asked is not because I thought we were changing the semantic of 
the assignment expression, but because I wasn't sure if this top-level const 
or whatever binding was some sort of special thing that you can only hold a 
reference to. Clearly not.

Also, since `y` keeps a reference to the original function reference 
imported, even if the module updates itself, this very well may affect those 
who (in places other than this thread, and for different reasons) have often 
suggested they plan to do stuff like:

```js
import { y } from "..";

let x = y;
..
// use x now
```

In those cases, I was trying to find out if `x` could be updated by the module 
itself, like `y` can, because that matters (either good or bad) to the desire 
to use such a pattern. Since `x` can't get updated here, it's now clear to me 
that I wouldn't want to use such pattern, for fear that I am not using the 
latest API binding.

Thanks for clarifications!


Re: Module import/export bindings

2015-03-15 Thread Kyle Simpson
 Regarding the internal server error

Ahh, thanks. Yeah, not only is it confusing to see the edit button, but 
especially since clicking it asks you to login to the site as if to verify your 
authorization to do such. :)


 I guess you'd intended to write export {foo as default} instead of export 
 default foo...

Well, no, until your reply I didn't realize there was a difference in the two! 
Yikes. That's… surprising.

Just to make sure I'm crystal clear… in my original example:

```js
var foo = 42;
export default foo;
foo = 10;
```

That exports a binding only to `42`, not `10`. But:

```js
var foo = 42;
export {foo as default};
foo = 10;
```

That exports a binding to `foo` so the importer sees `10`. Correct?


 All three assignments throw a TypeError exception

Thanks for the clarifications!

Would it then be appropriate to explain that conceptually the binding would 
otherwise indeed be 2-way, but that the immutable/read-only nature of the 
bindings is what prevents an outside mutation of a module's internals? That is, 
without such bindings (and errors), a module could be changed from the outside?

Also, are these errors runtime or static? I'm guessing from the spec text you 
quoted they're runtime. If so, was there a reason they couldn't be static?

-

I have some modules where the intentional design of the API has been for a 
consumer to be able to change property value(s) on the public API, and the 
module would behave differently corresponding to those values, almost like if 
they were configuration.

I take it there's no way to do this directly on the module namespace (even with 
the namespace import) with ES6 modules? So basically to do that I'd have to 
export an object (called like `config`) that holds such intentionally mutable 
properties?
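For what it's worth, the "export a mutable `config` object" workaround can be sketched like this. Assumption: a closure stands in for the module so the snippet runs as one file; in a real ES6 module you would `export` the `config` object instead.

```javascript
// Standalone sketch of the mutable-config-object pattern described above.
var mod = (function () {
  var config = { verbose: false }; // the intentionally mutable surface
  function log(msg) {
    // module behavior depends on the current config values
    return config.verbose ? "LOG: " + msg : "";
  }
  return { config: config, log: log };
})();

mod.log("hi");             // "" -- verbose is off
mod.config.verbose = true; // consumer flips a "configuration" value
mod.log("hi");             // "LOG: hi" -- module behavior changed
```

Because the export is the (stable) object reference, property mutations on it are visible to the module, sidestepping the read-only nature of the bindings themselves.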

Similarly, the (much maligned but still somewhat popular) pattern of having 
module plugins which attach themselves on top of an existing API… sounds like 
this pattern is also not supported?


Module import/export bindings

2015-03-15 Thread Kyle Simpson
From my current understanding (based on other threads), this module:

```js
var foo = 42;
export default foo;
export foo;
foo = 10;
```

When imported:

```js
import foo, * as FOO from "coolmodule";
foo; // 10
FOO.default; // 10
FOO.foo; // 10
```

However, I am curious if this binding is 2-way or only 1-way. What happens if I 
do:

```js
import foo, * as FOO from "coolmodule";
foo = 100;
FOO.default = 200;
FOO.foo = 300;
```

Have I changed the local `foo` variable inside the module? If so, to which 
value?

Moreover, what are now the values of these three in my imported context:

```js
foo; // ??
FOO.default; // ??
FOO.foo; // ??
```

Have they all become 300? or are they 100, 200, and 300 separately?


Re: Module import/export bindings

2015-03-15 Thread Kyle Simpson
Of course in my exports, I meant `export {x}` instead of `export x`, but I 
tried to edit my OP and I get internal server error. :)


Re: Module import/export bindings

2015-03-15 Thread Kyle Simpson
Thanks, all answers super helpful!

One last clarification:

```js
import "foo";
```

This doesn't do any binding does it? AFAICT, it just downloads and runs the 
module (if it hasn't already)?

If that's true, what's the use-case here besides preloading a module 
performance wise?


Re: Array.prototype change (Was: @@toStringTag spoofing for null and undefined)

2015-02-19 Thread Kyle Simpson
 are there any other builtins that anybody (Kyle, or otherwise) sees as 
 problematic to continue with the breaking change

As that book chapter mentions, the only other one I've ever used is 
RegExp.prototype (being the default empty match /(?:)/ regular expression). I 
have used that only once in my recollection, though I've certainly taught it so 
I don't know if others ever did. I would *like* it to keep working, but it's 
not a sword I'd die on. AWB has suggested on twitter a patch to test() and 
exec() that could hack around that case while letting the ES6 change go through.
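As a concrete illustration of that default (hedged: `matches` is a made-up helper, and with the ES6 change a literal `/(?:)/` is the safe stand-in rather than `RegExp.prototype` itself):

```javascript
// /(?:)/ is the "empty match" regular expression: it matches (an empty
// string) at every position, so it's a harmless default pattern.
function matches(str, re) {
  re = re || /(?:)/; // default empty-match regex instead of RegExp.prototype
  return re.test(str);
}

matches("abc");      // true: the empty pattern matches any string
matches("abc", /x/); // false
```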

 Kyle, if there was Array.empty and Function.empty, which would both be 
 polyfillable, would you find those sufficient replacements for your current 
 usages of Function.prototype and Array.prototype?

Yes, a long time back I proposed (not on here, but informally) that there 
should be just such a thing Function.empty, but basically just settled back 
into using `Function.prototype` since it was already there (and I didn't 
conceive of it ever changing).

The points in favor of either the prototype exotic or an empty stand-in are: 
1) convenience, and 2) possible performance aide to engines.

 can we provide for this use case

I certainly wasn't coming to this list to propose new features for ES6, as late 
as it is. I only just late last nite found out about this change, and was just 
hoping it wasn't too late to abort the change. But if the fix/compromise is 
empty stand-ins that give a polyfill path to migration, I'd be OK with that.


Re: Array.prototype change (Was: @@toStringTag spoofing for null and undefined)

2015-02-19 Thread Kyle Simpson
 you want to freeze everything *.empty

I don't think most of those *need* to be frozen, per se, since they're already 
immutable: `Function`, `String`, `Number`, `Boolean`, `RegExp`, … all immutable 
themselves. `Array.prototype` is however mutable 
(`Array.prototype.push(1,2,3)`), so freezing it from mutation is an extra step 
of caution you might want to take.

FWIW, I've used `Array.prototype` as an empty array in quite a few cases, and 
never actually run across one where it got mutated. *That* part is currently 
just theory, I think.

 you can’t freeze Array.prototype.

I think what he meant was, freezing Array.prototype would both prevent it from 
being mutated, but also prevent it from being extended 
(Array.prototype.superCool = ..). That seems, to me anyway, as a negative. So 
in support of Axel's argument, an `Array.empty` *could* definitely be frozen, 
if it were separate, without affecting `Array.prototype` extensibility.
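A minimal sketch of that separate, frozen stand-in (assumption: `Array.empty` was never standardized; this just illustrates the polyfillable shape being discussed):

```javascript
// Hypothetical Array.empty polyfill: a shared, frozen, always-empty array
// that leaves Array.prototype untouched and fully extensible.
if (!Array.empty) {
  Array.empty = Object.freeze([]);
}

Array.empty.length;           // 0
Object.isFrozen(Array.empty); // true -- mutation attempts fail
```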





Re: Array.prototype change (Was: @@toStringTag spoofing for null and undefined)

2015-02-19 Thread Kyle Simpson
I just remembered that I also do a sort of `Object.empty` in my own code 
somewhat frequently, as can be seen here for example:

https://github.com/getify/asynquence/blob/master/asq.src.js#L826

Declaring an empty object: `var ø = Object.create(null)`, and then using that 
`ø` as a sort of global DMZ object that I use for any place where I need a 
throw-away `this` binding, like `apply(..)`, `bind(..)`, etc.
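A small sketch of the DMZ-object technique as described:

```javascript
// A prototype-less object used as a throwaway `this` for bind/apply/call.
var ø = Object.create(null); // no [[Prototype]], so no accidental delegation

function greet(greeting) {
  return greeting + "!";
}

var safeGreet = greet.bind(ø); // any stray `this` usage hits the empty DMZ
safeGreet("hello");            // "hello!"
```

Because `ø` has no prototype chain at all, even `toString` or `hasOwnProperty` lookups against it fail fast rather than silently delegating to `Object.prototype`.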

I also wrote about that technique in YDKJS: this & Object Prototypes, here:

https://github.com/getify/You-Dont-Know-JS/blob/master/this%20%20object%20prototypes/ch2.md#safer-this

So, having an `Object.empty` might be nice in place of `ø`.






Re: Array.prototype change (Was: @@toStringTag spoofing for null and undefined)

2015-02-19 Thread Kyle Simpson
I'm not writing to start or join a debate on the merits of using 
`Function.prototype` and `Array.prototype` in the aforementioned ways. I'm 
writing to confirm that they are in fact used, not just theoretically made up.

I have been writing about and teaching for several years usage of 
`Function.prototype` as a (convenience) no-op empty function and 
`Array.prototype` as a (convenience) default empty array (not to be mutated, 
obviously). The most recent case of me publicly talking about these techniques 
is in my recently published book YDKJS: Types & Grammar:

https://github.com/getify/You-Dont-Know-JS/blob/master/types%20%20grammar/ch3.md#prototypes-as-defaults

While I can't go back now and get at all those old code bases that I either 
consulted on or taught on in workshops, and though that code may unfortunately 
not show up in GitHub searches, I assure you that such code exists. Moreover, I 
have right now a local (non-GH, for certain reasons) fork of Esprima that I've 
been hacking on for about a year, and atm I have 34 occurrences in it of using 
`Array.prototype` as a shared empty default array for starting iterations, etc.
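The pattern in question looks roughly like this (hedged sketch of the style described, not code from that fork):

```javascript
// Using Array.prototype as a shared, always-empty default array, so the
// loop below needs no separate null/undefined check. This relies on
// Array.prototype staying array-like with length 0 -- the behavior the
// proposed change would have broken.
function sum(list) {
  list = list || Array.prototype;
  var total = 0;
  for (var i = 0; i < list.length; i++) {
    total += list[i];
  }
  return total;
}

sum([1, 2, 3]); // 6
sum();          // 0 -- the default "empty array" contributes nothing
```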

There is no debate of if it will break code, but rather if it's ok to break 
code since the numbers are sufficiently low. Please don't pretend this is just 
academic contrarianism at play.

Furthermore, I would posit that whatever evidence was used to roll back the 
`Function.prototype` change -- that is, people who use it as an empty no-op 
function -- would be the symmetric evidence, and the nearly identical mindset, 
to using `Array.prototype` as a default empty array. That is, I think there's 
at least a decent amount of correlation/overlap there.

However, I'm not seeing or finding the contra-argument of why it's so much 
better to justify making this breaking change, nor why it makes more sense to 
break `Array.prototype` usage but not `Function.prototype` usage.



--Kyle









Re: Array.prototype change (Was: @@toStringTag spoofing for null and undefined)

2015-02-19 Thread Kyle Simpson
Just curious… for RegExp, Date, String and the others that *are* changing to 
plain objects… does that mean `Object.prototype.toString.call( .. )` will 
return "[object Object]" on them?

Sorry, I've kinda gotten lost on what the default @@toStringTag behavior is 
going to be here.


Re: On I got 99 problems and JavaScript syntax ain't one (was: OnIncremental Updates)

2011-10-04 Thread Kyle Simpson
I'm sorry David, I just have to express a dissenting opinion here. While I 
could see that "better tooling!" would be a positive side-effect of some 
syntax suggestions, I think it's an overreaching idea to consider such a main 
argument for adding new syntax.


You make a compelling argument of how tooling *could* benefit from the new 
syntax, sure. But it misses a few things:


1. The readability of the language in environments where such tooling 
doesn't or can't really exist.
2. Syntax highlighting of 1-2 char operators is far less visually helpful 
than syntax highlighting of method names, etc.
3. Not all developers use a unified toolset for JavaScript (compared to say 
.NET where the vast majority use VS)


For #1, I personally think `<|` looks awful. I hate it. I wouldn't use it. And 
I'm less than entranced by the idea that I might have to read others' code 
with such confusing looking operators in it. Only if all the tools I was 
using to read & write JS were capable of somehow making that syntax 
useful/beautiful instead of ugly would your argument hold much water for me. 
But that's just opinion and preference.


For #2, which kinda goes with #1, I'm concerned that the readability (even 
with syntax highlighting) of the language as a whole will take a dip for 
several years as developers re-adjust to the new syntax. Syntax highlighting 
is often seen as a way to help the readability of a language, but syntax 
highlighting for new weird unfamiliar operators isn't going to help much at 
all.


Will it eventually get better? Sure, if we don't keep adding new syntax 
every edition, it'll eventually stabilize. But I don't look forward to the 
dip in readability for the short-term, for my own code and for everyone 
else's code that I read.


For #3, I don't use fancy IDE's at all. I use text editors at best. I use 
notepad, PSPad, and Sublime on windows. And on linux I use vi. I doubt any 
of those text editors are ever going to care to pick up on the syntactic 
nuances you suggest. Which is my bigger point. Just because tooling *CAN* 
benefit from new syntax, doesn't mean all (or even most) tooling *WILL* 
benefit. What we'll end up with is a broad range of support from none all 
the way up to super-awesome-happy-unicorns. So both the good and bad side of 
this is YMMV.


Factoring all those things in, I can't see how "new syntax == better tooling" 
is anything more than an auxiliary supporting argument.


And yeah, I concur with the 99 problems... statement. Working with 
JavaScript every day for the better part of a decade, I can't say that 
JavaScript's syntax issues have ever really tripped me up. Poor API's trip 
me up all the time. Poor handling of async (which can be considered a syntax 
issue!) definitely trips me up regularly. But raw operator syntax for common 
tasks is rarely something that shows up on my radar. There are SO MANY other 
things I wish JavaScript would address first.


Just my 2 cents.

--Kyle






Re: {Weak|}{Map|Set}

2011-09-15 Thread Kyle Simpson

If I was a programmer
looking for something like weak referencing in JS for the first time,
"weak" is what I'd be searching for.


But if you're actually aware of weakrefs (as I am), and you're searching for 
them in JS (as I was), and you see "WeakMap" (as I did), and you make the 
conclusion that "Weak" in the name means in fact weak references (as I did), 
then you probably also (as I did) assume that *all* the refs are weak. 
That's a failed conclusion, because only the keyrefs are weak.
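A small sketch of the asymmetry being described (hedged: garbage collection can't be observed synchronously from script, so the GC part lives in the comments):

```javascript
// A WeakMap holds its KEYS weakly but its VALUES strongly (per entry).
var wm = new WeakMap();
var key = { id: 1 };
var value = { big: "payload" };

wm.set(key, value);
wm.get(key) === value; // true

// If `key` becomes unreachable, the whole entry may be collected.
// But while `key` is alive, `value` is kept alive BY the map -- dropping
// every other reference to `value` frees nothing. That's the "weak keyrefs
// but not weak valuerefs" behavior the name alone doesn't convey.
```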


The name doesn't do anything to enlighten you that it only offers weak 
keyrefs and not weak valuerefs -- in fact, by your discovery line of 
reasoning, the name is almost a landmine that traps/misleads someone who 
does in fact know about weakrefs -- someone who didn't know about weakrefs 
wouldn't necessarily make the same deductive assumption by seeing weak in 
the name.


Misleading/confusing with an API name is, IMHO, worse than less 
implementation-self-descriptive naming.


--Kyle




Re: {Weak|}{Map|Set}

2011-09-15 Thread Kyle Simpson
You say "and you're searching for them in JS (as I was)". Had the 
abstraction been called "ObjectMap" or "ObjectRegistry", would you have found 
it?


I don't think the API name is the only way someone can discover what they're 
looking for. Proper documentation for "ObjectMap" which said "keyrefs are 
held weakly" or something to that respect would probably have ended up on my 
search radar.


Of course, if the WeakMap really was both weak-key and weak-value, then I'd 
absolutely expect the name to be "WeakMap" (I think "weak" in this case is a 
useful and non-trivial part of the behavior). So my complaint is not "Weak", 
but that "Weak" implies something more comprehensive than is actually the 
case. It over-promises and under-delivers, so to speak.


I should also clarify my process for how I found this API (I over-simplified 
in my previous email). I actually started out with both the weakref need AND 
the object-as-key (map) need. For the object-as-key need, I had already 
constructed a dual-numerically-indexed-array solution to associating an 
object with a value. But then, realizing that this was not only clunky, but 
also was going to make manual GC (that is, cleaning up the entries if the 
object is removed) also more awkward/less performant, I started looking for 
an API that would do both: Map (object-as-key) and automatically clean up 
the entries in the Map if either the key-object or the value-object (in my 
case, both DOM objects) were GC'd.


In that context (and in support of Allen's early assertions), Map in the 
name was most important to focus me into a (theoretical) class of APIs (if 
there were indeed several different kinds of maps). Then I would have 
searched through the documentation to see if any of the Maps had the weak 
behavior I was looking for.


OTOH, had I *only* been looking for pure Weak References, and not a Map 
structure, then I'd have been looking for some API like WeakRef, and 
actually Map probably would have been confusing or ignorable noise.



As it was, you found the right thing to look at and think about, but you 
needed to read more before you understood whether it serves your actual 
purpose.


I found it because a fellow Mozilla dev said "hey, that sounds like 
WeakMaps" and I thought "awesome, ask and ye shall find". Of course, the 
devil was in the details, because it wasn't actually what I needed 
completely. This was compounded by the fact that the MDN documentation (at 
least at the time) was ambiguous and didn't make it clear that only keys 
were weak. So a well-experienced co-worker and the documentation BOTH were 
confused (as were several others through various IRC chats) as to exactly 
what was and was not weak in the WeakMap.


How did I figure it out? By writing it into my code, and then seeing 
mem-leak tests fail. Thankfully, I eventually found some IRC people who 
clarified that what I was seeing was not a bug but was in fact by-design. 
But, that's a hard way to learn the lesson.


Would a more accurate name have helped? Perhaps. "WeakKeyMap" certainly 
would have made it obvious that the Map was not fully weak. Would more 
accurate documentation have helped? Absolutely. Would naming *and* 
documentation have helped other co-workers not be misled and consequently 
point me in the wrong path? I hope so.



That's why I like WeakMap best -- it is the mapping that is weak, not 
the keys or the values.


I understand what you're saying here. But as I mentioned before, the way my 
(far less informed) brain thinks about it, the map or link between two 
objects should in fact be weak and ephemeral enough that either side going 
away (being GC'd) should be enough to cause the link between the two to be 
cleaned up. I think it's because I tend to think of Map as more 2-way than 
one-way, though I understand it's technically only 1-way.


Saying it a different way... if the focus is on the map or link itself, and 
the RHS thing the map/link is pointing to is no longer valid/defined, then 
what use is there keeping a link that points to something now undefined?


It just seems a little unfortunate/misleading to me that from an 
implementation perspective, creating the map/link is sufficient to prevent 
the RHS value in question from ever getting to that undefined state. When 
I create a reference using variables/properties, I *expect* a hard reference 
that behaves like that. But when I use a specialized API with Weak in the 
name, I definitely expect the opposite.



--Kyle




Re: {Weak|}{Map|Set}

2011-09-14 Thread Kyle Simpson
I too have been confused by the name "weakmap"... partially because the name 
is misleading, and partially because documentation on it is 
ambiguous/misleading. Specifically, "weakmap" really means "weakkeymap", 
because only the key is weak, not the value. But then again, "weakkeymap" 
would be even more implementation-instead-of-semantics naming.

To me, the name "weakmap" means weakkey *and* weakref (weak value), and that 
is what I'd like: to be able to create a link (map) between two arbitrary 
objects, where the link only exists if both sides are still valid, and is 
killed (GC'd) if either or both go away. This type of functionality is useful 
for a system like a generalized event triggering mechanism, for instance. 
It's also useful for a devtool like an HTML inspector which needs to link a 
main-page DOM object to a representation DOM object in the tool. When I 
brought up this idea in IRC awhile back I suggested the name "ReallyWeakMap". 
:)

Yes, I'm aware of the reasons why weakrefs aren't in JS. I just still think 
it's a shame that we can't figure out a way for GC of weakrefs not to "leak" 
info to separate security sandboxes.

Anyway, why couldn't we just call it "Map" or "KeyMap" and drop any mention 
of "weak"?

--Kyle

On Sep 14, 2011 6:20 PM, Mark S. Miller erig...@google.com wrote:

 On Wed, Sep 14, 2011 at 6:04 PM, Juan Ignacio Dopazo 
 dopazo.j...@gmail.com wrote:

  On Wednesday, September 14, 2011, David Bruant david.bru...@labri.fr 
  wrote:

   Also, I would like to talk a little bit about terminology. WeakMaps 
   have their name inspired by the idea of "weak" references which have 
   particular garbage-collection properties. From the developer 
   perspective, this seems to be some sort of implementation detail they 
   should not be aware of. As far as I know, current 
   functions/constructors have their name inspired by the contract they 
   fulfill rather than implementation considerations. The difference 
   between current WeakMaps and Maps is their contract. In the latter, 
   keys can be enumerated, in the former not. I think that this is the 
   difference that should inspire different names rather than the 
   implementation optimisation that is induced by this contract 
   difference.

  In the last few days I had to write a piece of code that would strongly 
  benefit from WeakMaps. I needed to store information about DOM nodes and 
  retrieve it later, and these nodes aren't in my control so they can be 
  detached at any time by someone else. If the references I kept were 
  weak, I'd be sure that I wouldn't be causing a memory leak. And that's 
  important in this case because the nodes are very likely Flash objects 
  which can easily mean 20-50mb in memory. So knowing that a reference is 
  weak is important information.

 I agree. Normally I strongly take the same position David does: emphasize 
 semantics over implementation. But why? It is good when we can label a 
 tool according to its purpose, rather than how it accomplishes that 
 purpose. Associating the tool with its purpose helps us remember the right 
 tool for the right job. Few would reach for the WeakMap tool thinking "I 
 need a non-enumerable table". Granted, there are cases when the 
 non-enumerability is the desired feature, but those cases are rare. The 
 common purpose of a WeakMap is rooted in our understanding, at a high 
 level, of certain implementation costs, and our desire to avoid certain 
 avoidable implementation costs. Generally, that is what a WeakMap is 
 *for*.

 --
 Cheers,
 --MarkM


Re: A directive to solve the JavaScript arithmetic precision issue

2011-08-15 Thread Kyle Simpson

A directive would have the same benefits than "use strict" which is to not
break existing code in platform that do not support this directive.


It would also have the same limitation that `"use strict";` does, which is 
that it doesn't play well with (the quite common pattern of) concat'ing 
minified scripts together in build environment deployments, because your 
"use strict" declaration in one of your files bleeds over (probably 
unintentionally) to affecting other files when they are all combined into 
one file.
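The bleed-over can be simulated in a single script by treating two strings as the "files" a build tool would concatenate (hedged: `file1`/`file2` are of course made-up stand-ins, and `eval` stands in for loading the combined file):

```javascript
// file1 opts into strict mode; file2 is legacy code relying on an
// implicit global. Concatenated, file1's directive governs the whole
// script and breaks file2.
var file1 = '"use strict"; var a = 1;';
var file2 = 'implicitGlobal = 2;'; // fine on its own, illegal under strict

var threw = false;
try {
  eval(file1 + "\n" + file2); // one combined "file"
} catch (e) {
  threw = e instanceof ReferenceError;
}
threw; // true: the directive bled into file2's code
```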


Maybe something like:

use sensible arithmetic {
// ..
};

Perhaps? In any case, I'm definitely not a fan of continuing the frustration 
that "use strict" gives us in concat-js environments.


--Kyle





Re: A directive to solve the JavaScript arithmetic precision issue

2011-08-15 Thread Kyle Simpson

I intuit that the consequences are less harmful. Strict mode can trigger
some syntax errors or throw runtime errors that wouldn't happen in
non-strict code. Different arithmetic is less likely to cause this kind
of problem.


Sure, it might not cause syntax errors, but it would cause subtle arithmetic 
bugs, that would be nigh impossible to find. Part of the good thing about 
"use strict" is that syntax errors fail early. What you're talking about 
is a "fail really late and subtly" type of error introduction, which makes 
me nervous.




Do you have an idea of a current running program that would behave
significantly differently with an accidental bleed of arithmetic mode?


Seems like any JS which is doing math like for animations, bounds checking, 
etc, may be affected by such things. But I don't have any current code I 
could point to that I know for sure would die.




Are you refering to the pragma proposal [1]? I am not used to it, but if
it's applicable, that's an idea too.
[1] http://wiki.ecmascript.org/doku.php?id=harmony:pragmas


I wasn't specifically aware of/referencing that proposal. I was more getting 
at the idea that some kind of pragma/control command that could be scoped 
with { .. } would be useful, as opposed to the sort of "flag it on" type 
behavior of the "use strict" command. What makes "use strict" specifically 
frustrating is that there's no counter-part to say "use lazy" or "use 
legacy", so once it's encountered, there's no way to tell the interpreter to 
switch back out of strict mode, other than to get to a new file, which makes 
it harder to use concat for build performance optimizations.



--Kyle






how to create strawman proposals?

2011-06-02 Thread Kyle Simpson
Is it available for general public members to register for an account to 
create strawman proposals for ES?


In particular, I'd like to create two proposals for some future discussion:

1. an "n" (or "c") flag for regexp's, that reverses the default capturing 
behavior of ( ) to be non-capturing by default, and thus (?: ) to be 
capturing.


2. a @ (or something like it) operator for what I call 
statement-localized continuations



I couldn't find an obvious place on the wiki for creating an account, so I 
was wondering what the process is for that?



--Kyle





Re: how to create strawman proposals?

2011-06-02 Thread Kyle Simpson
Is it available for general public members to register for an account to 
create strawman proposals for ES?


No, it's an Ecma TC39 resource. Ecma needs IPR handoff per its patent 
covenant so this can't be a free-for-all, for better or worse.


So if a non-TC39 member wants to create suggestions and proposals and ideas 
for the community to discuss, am I to understand that not only this list, 
but also the wiki that this list so frequently references, are not really 
intended for that?


There've been several times on this list in various discussion threads where 
it was clear that only official-wiki-based strawmans were things people 
wanted to seriously discuss. And now to find out that such strawmans cannot 
even be authored by any of us non-committee members, it seems like it 
further reinforces the desire that there be some other forum for the Rest 
of us™ to use to talk about *our* ideas, not just the ideas of the 
committee members.


What's the best way for such informal and non-committee-sponsored 
discussions to proceed?



In particular, I'd like to create two proposals for some future 
discussion:


1. a n (or c) flag for regexp's, that reverses the default capturing 
behavior of ( ) to be non-capturing by default, and thus (?: ) to be 
capturing.


Is there any precedent for this in other perl-based regexp packages?


Perl6 regular expressions have introduced [ ] as a non-capturing grouping 
operator (instead of (?: ) operator). They moved character classes to <[ ]>. 
I'm not saying I like that decision (or dislike it), but it's definitely 
nice (and prettier code) to have a single character (on either side, of 
course) operator for the common task (highly common to me anyway) of 
non-capturing grouping.


But more to the point of my intended proposal, .NET has the /n flag for 
turning off capturing for ( ) -- I'm not sure if it then turns on capturing 
for (?: ) or not, someone more familiar with .NET would have to inform here.


In any case, I write far more (like probably 98% vs. 2%) regular expressions 
where I want non-capturing grouping, and it's onerous (and ugly) to always 
add the ?: to those grouping operators, when I could instead just use a single 
flag to tell the entire regex that by default I don't care about capturing 
(unless I explicitly opt-in to it with ?: or something like that).


There was a time when I felt like the default should reverse itself, and 
that ( ) should default to non-capturing. I explored those ideas in 
http://blog.getify.com/2010/11/to-capture-or-not/


But it's quite obvious that this is an impossible proposition, as it would 
break probably ~100% of existing regex content. So, it seems (and was the 
main conclusion of that article and comment thread) like a simple next-best 
solution is to make it opt-in with a flag.


Doing so shouldn't break any backwards compatibility, and should, for the 
most part*, only cause unnecessary (and thus performance-suboptimal) 
capturing in older regex engines that ignore the /n flag.


*There are two cases I can think of where an older regex engine ignoring the 
/n flag and still capturing would cause unexpected results (besides the 
performance slowdown of unnecessary capturing):


1. \1-style back-references would be incorrectly numbered
2. str.split() behaves a bit differently (includes more results) if there 
are capturing groups in the regex it splits on


Both those incompatibilities seem manageable and not terribly risky, given 
that such a feature would of course have to be introduced into some opt-in 
version of ES like ES.Harmony, and authors just simply shouldn't author such 
regexes if they intend for that code to run in ES5 and below.
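The proposed /n flag does not exist in any JavaScript engine, but its semantics can be emulated by rewriting the pattern source before compiling it. The sketch below is illustrative only: `swapCapturing()` is a hypothetical helper that ignores character classes for brevity, and the second half demonstrates the `split()` incompatibility from the list above.

```javascript
// Sketch of the proposed (non-existent) /n flag's semantics, emulated
// by rewriting the pattern: bare "(" becomes non-capturing and "(?:"
// becomes capturing. Escaped characters are skipped; characters inside
// [ ] classes are NOT handled, for brevity. swapCapturing() is a
// hypothetical name.
function swapCapturing(source) {
  return source.replace(/\\[\s\S]|\(\?:|\(/g, function (m) {
    if (m === "(") return "(?:";
    if (m === "(?:") return "(";
    return m; // escaped character, leave alone
  });
}

// Under the proposal, /(\d+)-(?:\d+)/n would capture only the second group.
var emulated = new RegExp(swapCapturing("(\\d+)-(?:\\d+)"));
var match = "12-34".match(emulated);
// match[1] is "34": the (?: ) group captured, the ( ) group did not

// The split() incompatibility listed above: capturing groups splice
// their matches into the result, so an engine that ignored the flag
// would return extra elements.
"a1b2c".split(/\d/);   // ["a", "b", "c"]
"a1b2c".split(/(\d)/); // ["a", "1", "b", "2", "c"]
```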



2. a @ (or something like it) operator for what I call 
statement-localized continuations


We've been over continuations at length on this list. There is a 
harmony:generators proposal already in ES.next, and a deferred functions 
strawman as well. Dave Herman's "shallow continuations" strawman was 
intentionally deferred.


I am well aware of the discussions on this list about continuations, as well 
as the existing strawmans. I am also well aware that my views on that topic 
aren't particularly popular. But nevertheless, I still have issues with the 
existing proposals/ideas, and I'd like a chance to advance some discussion 
about my ideas as an alternative. I thought that creating a more 
structured/formalized proposal would be a decent next-step. I'm also working 
on some way to prototype my idea in some JS engine.




--Kyle





Re: how to create strawman proposals?

2011-06-02 Thread Kyle Simpson
So if a non-TC39 member wants to create suggestions and proposals and 
ideas for the community to discuss, am I to understand that not only this 
list, but also the wiki that this list so frequently references, are not 
really intended for that?


I didn't say anything about this list. Discussion on all kinds of ideas 
goes on here. (Yes, people including me get grumpy about overciting, 
filibustering, padding with rehashed motivation, etc. -- so what?)


With all due respect, it's a somewhat common occurrence on this list when 
someone brings up an idea that bears even a remote resemblance to an 
existing official strawman, the instinctual and rapid triage is to shoehorn 
the discussion into the strawman and quickly move on.


That is of course perfectly fine to run *this* list that way, but I was 
asking if there was a place (or could there be a place?) where ideas could 
be discussed in a less formal setting without the need for formalized 
strawman proposals, and certainly without the urge to immediately bucket 
each idea or question into an existing proposal, or dismiss a topic as 
having already been hashed out long ago.




So, dismount that high horse, please.


I don't see any high horse around here. I am coming at this from quite the 
opposite perspective, of being just the common folk outsider looking to get 
a leg up. I'd really prefer it if there could be a bit more benefit of the 
doubt around here. Do these types of discussions really have to so regularly 
devolve into this combative tone?



There've been several times on this list in various discussion threads 
where it was clear that only official-wiki-based strawmans were things 
people wanted to seriously discuss. And now to find out that such 
strawmans cannot even be authored by any of us non-committee members, it 
seems like it further reinforces the desire that there be some other 
forum for the Rest of us™ to use to talk about *our* ideas, not just 
the ideas of the committee members.


Yeah, pay-to-play standards bodies suck, news at eleven. This applies to 
just about all the big ones. Ecma is far better than some, and TC39 is 
healthier by many measures than most standards bodies I've seen and 
participated in.


Your sarcasm suggests that you have taken my observations about the tone and 
demeanor of *this list* as personal (or organizational) attacks -- that 
definitely wasn't intended at all. I'm quite aware of how this list works, 
and I have my own opinions about the pluses and minuses. But I think it's 
fine that this list and TC39 operate as they do, and I'm sure it's for good 
reason.


I wasn't suggesting that this list or TC39 should change how they operate. 
I'm merely pointing out that a regular source of frustration on this list 
comes from people like me wanting to discuss ES related topics in a forum 
that has more relaxed rules and standards than this one has.


Why is it that asking for a separate and less formal place to discuss ES 
issues and ideas comes off as threatening or criticizing? It seems to me 
like it'd be useful to have such an informal forum where ideas can percolate 
until they solidify and are ready to be elevated to a more formal discussion 
process such as the ones that most regulars on this list seem to prefer.




Talk here. What is stopping you?


For one, the perception that if I'm not discussing an already accepted 
strawman sponsored by an existing TC39 member, then the tolerance level 
*here* for such discussion and informality is somewhat low. Again, such 
discussions don't often seem to belong (or really be wanted) on *this* list, 
and so I'd rather stop generating the same friction every time I have an 
idea I want to discuss or get feedback on.




Is there any precedent for this in other perl-based regexp packages?

Perl6
But more to the point of my intended proposal, .NET has the /n flag for 
turning off capturing for ( ) -- I'm not sure if it then turns on 
capturing for (?: ) or not, someone more familiar with .NET would have to 
inform here.


That's interesting. I found

http://msdn.microsoft.com/en-us/library/yd1hzczs.aspx
http://msdn.microsoft.com/en-us/library/yd1hzczs.aspx#Explicit

There is no sign of non-capturing syntax (?:...) here at all. This n flag 
seems a bit different from what you propose.


http://msdn.microsoft.com/en-us/library/bs2twtah.aspx#noncapturing_group

Again, I'm not sure if .NET swaps the default behavior as I'm proposing, 
when /n is present. But it seems quite natural to me that /n would do so, 
rather than having a strange asymmetry where without the flag, both 
capturing and non-capturing are possible, but with the flag present *only* 
non-capturing is possible.




As with all things RegExp, I wonder what Steve thinks.


Do you mean Steven Levithan (aka Mr Regex)? If so, he already commented at 
length on that blog post I mentioned. I guess he implies the discussion is 
worth having by saying "...and which can be explored in future ECMAScript specs."

Default non-capturing regex flag [WAS: how to create strawman proposals?]

2011-06-02 Thread Kyle Simpson
I propose a /n flag for regular expressions, which would swap the default 
capturing/non-capturing behavior between ( ) and (?: ) operators (that is, 
( ) would not capture, and (?: ) would capture).


The /n property would reflect on the RegExp object as `Noncapturing == 
true`.




Is there any precedent for this in other perl-based regexp packages?

Perl6
But more to the point of my intended proposal, .NET has the /n flag for 
turning off capturing for ( ) -- I'm not sure if it then turns on 
capturing for (?: ) or not, someone more familiar with .NET would have 
to inform here.


That's interesting. I found

http://msdn.microsoft.com/en-us/library/yd1hzczs.aspx
http://msdn.microsoft.com/en-us/library/yd1hzczs.aspx#Explicit

There is no sign of non-capturing syntax (?:...) here at all. This n flag 
seems a bit different from what you propose.


http://msdn.microsoft.com/en-us/library/bs2twtah.aspx#noncapturing_group

Again, I'm not sure if .NET swaps the default behavior as I'm proposing, 
when /n is present. But it seems quite natural to me that /n would do so, 
rather than having a strange asymmetry where without the flag, both 
capturing and non-capturing are possible, but with the flag present *only* 
non-capturing is possible.




As with all things RegExp, I wonder what Steve thinks.


Do you mean Steven Levithan (aka Mr Regex)? If so, he already commented 
at length on that blog post I mentioned. I guess he implies the discussion 
is worth having by saying "...and which can be explored in future 
ECMAScript specs."



--Kyle





Re: Default non-capturing regex flag [WAS: how to create strawman proposals?]

2011-06-02 Thread Kyle Simpson
The /n property would reflect on the RegExp object as `Noncapturing == 
true`.


Lowercase noncapturing, right?


Yeah.


--Kyle





Re: prototype for operator proposal for review

2011-05-18 Thread Kyle Simpson

I'm definitely in favor of this <| proposal, btw.

That sort of pattern certainly can be repeated if push comes to shove. 
But I believe doing so is far inferior to dedicated, first-class 
syntactical support to make the semantics absolutely unambiguous and 
un-confusable with anything else.


This makes sense.  I just want to make sure that the fundamental 
capability to subclass built-in objects is available via libraries for 
text/javascript, with the new syntax offering the more performant option 
for text/harmony.


I'm pretty sure current `text/javascript` can sub-class safely, as FuseBox 
(from @fusejs) does with sandboxing natives. It's hacky, and relies on the 
non-standard __proto__ in most browsers (iframe in others), but it IS 
possible. Perhaps we should still formalize it, if we think 
`text/javascript` is gonna be around for a long time in parallel with 
ES.Next that has something like <|.
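The pre-ES6 hack alluded to here can be sketched concretely. The names below (`SubArray`, `last`) are hypothetical; the technique is the one FuseBox-style libraries used: point an array's non-standard `__proto__` at an intermediate prototype that itself inherits from `Array.prototype`.

```javascript
// Sketch of sub-classing a built-in via the (then non-standard)
// __proto__ property; SubArray and last() are hypothetical names.
function SubArray() {
  var arr = [];
  arr.push.apply(arr, arguments);
  // Non-standard in 2011; standardized later as Object.setPrototypeOf.
  arr.__proto__ = SubArray.prototype;
  return arr;
}
SubArray.prototype = Object.create(Array.prototype);
SubArray.prototype.last = function () {
  return this[this.length - 1];
};

var xs = SubArray(1, 2, 3);
xs.push(4);
// xs.last() is 4, and xs still behaves like a real array (length 4)
```

It works, but it is exactly the kind of hack that first-class syntax was meant to replace: mutating `__proto__` per instance is slow and was never guaranteed by the spec at the time.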



--Kyle





Re: Non-method functions and this

2011-05-16 Thread Kyle Simpson
d) At runtime, a function invoked as a non-method gets a dynamic 
ReferenceError if it tries to refer to |this|. This would just be kind of 
obnoxious, I think, since if the function wants to test whether it's been 
called as a non-method it has to do something like


   let nonMethod = false;
   try { eval("this") } catch (e) { nonMethod = true }

That seems unfortunate.


Couldn't the error only be issued if `this` was used either in an assignment 
fashion (either as an lvalue or rvalue), or de-referenced with the `.` or 
`[]` operators (which should even be statically determinable, right)?


That way, `typeof this`, `this == undefined`, etc would be safe, as in your 
example above, but `this.foo`, `var self = this`, etc would throw a 
ReferenceError (most would anyway, if `this` was truly `undefined`).


OTOH, it seems like this is more a place for JSLint type assertions rather 
than something to be enforced by the language engine (either at compile-time 
or run-time).
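For what it's worth, ES5 strict mode already gives a usable approximation of the detection being discussed, without the eval/try dance quoted above: a strict function invoked bare receives `this === undefined`, so inspecting `this` is safe while dereferencing it would throw. The `how()` function below is a hypothetical name for illustration.

```javascript
// Detecting a non-method call with ES5 strict mode semantics: a
// strict function's `this` is undefined for a bare call (no boxing
// to the global object), so a simple comparison suffices.
function how() {
  "use strict";
  return this === undefined ? "non-method" : "method";
}

var obj = { how: how };
// how() returns "non-method"; obj.how() returns "method"
```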



- Function.prototype.call() and Function.prototype.apply() would have one 
parameter less.


Yeah, I agree with others that while this might be nice, it would break the 
web (either with compile-time or run-time checking). I always just pass 
`null` for the first param, as I think most people do when it's known that 
`this` isn't going to be used.


It's definitely annoying that in ES3 and ES5 non-strict, such usage will 
still result in `this` defaulting to `window` instead of truly `undefined`.


On a side note, this is also a case where it would be nice if a comma-list 
of parameters to a function call (or array-initialization) could have 
empty locations that default to `undefined`, like 
`Function.prototype.apply( ,...)`, but I doubt that'd ever fly. :)



--Kyle





Re: arrow syntax unnecessary and the idea that function is too long

2011-05-09 Thread Kyle Simpson
Do I understand you that the idea here is 'function' without the 
'function' keyword? I think this has a pretty bad 
backwards-incompatibility with ASI:


x = (x)

{ return x }


Which way should this parse?


My reading of Rick's gist was:

(x = (x)
{return x})

The outer ( ) removes the ASI ambiguity. FWIW, I'm not terribly excited by 
this syntax, but I like it better than -.


One thing that troubles me about the goal/movement to have a shorter 
function syntax... It seems like all the examples we exchange for it are, 
on principle, single-line functions. From a readability standpoint, I think 
it's a little deceptive to judge a syntax like that, without considering how 
it will look for a longer, non-trivial function. How easy will it be to scan 
for a function's start/end if the biggest visual signal for a function start 
(aka, function) is gone and is replaced by that rather non-descript -> 
which, as was said earlier in a thread, looks a lot like existing operators.


Since in the real world, functions are usually a lot more than a single 
return statement or a single assignment, I think we should at least keep in 
mind the readability (or lack thereof) of how these proposals look when 
there's 10, 20, 100 lines in a function... By the same token, how easy is 
the readability when there's 2-4 levels of nested functions (the module 
pattern, etc)?



--Kyle



Re: arrow syntax unnecessary and the idea that function is too long

2011-05-09 Thread Kyle Simpson
Let's ignore popularity level for the moment, no other proposal has analog 
of `=>` which is a solution for a real problem:


var self = this;
function callback()  {
   self
}


Maybe I missed something, but didn't Brendan's #-function proposal specify 
lexical `this` binding, so that:


function foo() {
  this.bar = baz;
  return #(x){ this.bar = x; };
}

Isn't that the spirit of what => would give us?
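The comparison being made can be sketched with ES6 arrow syntax, which is what the fat-arrow proposal eventually became (the `#`-function form quoted above was never standardized). `CounterOld`/`CounterNew` are hypothetical names for illustration.

```javascript
// The manual `var self = this` workaround versus a lexically bound
// `this` (ES6 arrow). Both callbacks ignore the call-site `this`.
function CounterOld() {
  var self = this;                       // capture `this` by hand
  self.n = 0;
  this.tick = function () { self.n++; }; // closes over `self`
}

function CounterNew() {
  this.n = 0;
  this.tick = () => { this.n++; };       // `this` is bound lexically
}

var a = new CounterOld();
a.tick.call(null);  // increments anyway: `self` ignores the call-site this
var b = new CounterNew();
b.tick.call(null);  // increments anyway: the arrow ignores the call-site this
// a.n and b.n are both 1
```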

--Kyle




Re: Object.prototype.* writable?

2011-05-08 Thread Kyle Simpson

From: Dean Landolt
Sent: Sunday, May 08, 2011 10:17 AM

Unfortunately, we're back to the chicken-and-the-egg... if I could 
guarantee that my code was the first to ever run on any page, almost none 
of the problems I'm complaining about would be an issue, because I could 
just make sandboxed copies of what I needed, and store them privately 
inside a closure. Being able to run-first is the key component that 
isn't true, and if it were true (which is required of initSES.js), then 
I wouldn't need initSES.js.


Forgive me if this has come up already and I missed it but wouldn't it be 
enough if there were some mechanism to validate the integrity of 
Object.prototype by asking the host env for a fresh copy and comparing 
identities? Even if the frozen ship has sunk ISTM it ought to be enough to 
be able to reliably detect the hijacking. This would probably be best left 
to a web platform standards body but wouldn't that be a good place to 
inject that kind of unforgeable factory for Object.prototype?


I would definitely support or appreciate a mechanism by which a clean/fresh 
copy of Object.prototype could be arrived at, without the hackiness of 
either launching an iframe or something like that. That's what my 
Object.__prototype__ was kind of getting at, a few messages ago.


I don't think it's enough to just detect that it's bad, if there's no way to 
undo the badness and get at the native functionality. But giving us another 
parallel interface which IS read-only would be, in my mind, a pretty simple 
solution to this problem. Of course, this would need to be true not just for 
Object but all the natives, like String, as well.


I'd be in favor of this as a shorter term solution than SES.

--Kyle




Re: arrow syntax unnecessary and the idea that function is too long

2011-05-07 Thread Kyle Simpson
Many people, including me, would disagree. On matters of taste, I'd want 
the committee to listen to all interested parties and try to pick the 
solution that pleases the most people. That appears to be what's happening 
here.


Based on what evidence are we concluding that the majority of the 
javascript developers want - syntax for functions? The fact that 
coffeescript is the hot buzzword? Was there some developer-community wide 
voting or poll that I missed? Or is it that a few vocal people on these 
lists like it, and that's being substituted as what the majority is in 
favor of?


I'm not just being snarky, I'm genuinely curious, on this and a variety of 
other matters related to what's being added to ES-next/harmony... It's clear 
Brendan (and other language cohorts) likes these new syntax sugars, but 
where is the evidence that suggests that all this new syntax sugar is the 
exact sugar that javascript developers want? Is it just enough that everyone 
at JSConf likes it, and thus that means that the whole community is assumed 
to be on board?


There's LOTS of examples where writing less JavaScript is more awesomer, but 
there's also plenty of examples of where writing less is much more uglier. I 
am troubled by the implication that just because we've found a shorter 
syntax sugar for functions, this unequivocally means it's better.


-> syntax being shorter is a clear and objective question. No doubt it's 
shorter. But is it prettier or more readable? According to whose opinion do 
we conclude that? That seems pretty subjective.


--Kyle





Re: Object.prototype.* writable?

2011-05-07 Thread Kyle Simpson
It's a well known fact that overwriting anything in Object.prototype 
(like

Object.prototype.toString, for instance) is a very bad idea, because it
breaks for-in looping.


Properties 'properly' added/updated using Object.defineProperty
{enumerable: false} do not break for-in afaik.


I wasn't aware you could use Object.defineProperty() on `Object.prototype` 
itself. But, see below, because this part of the conversation is really 
outside the spirit of what I'm asking anyway. (I'm not talking about if my 
responsible code can do it, I'm talking about if other untrusted code does 
it first, before my code runs.)




2. Would it be possible for Object.prototype.* to be read-only for
ES-Harmony (or even just strict mode)?
3. By read-only, I mean that changes to it would just silently be 
discarded.
Alternatively (especially for strict mode), warnings/errors could be 
thrown

if attempting to override them?


Doesn't Object.freeze(Object.prototype) provide exactly this behavior 
already?


It does (I suppose), if you're positive that your code is the first code to 
run on the page. I'm more talking about code out in the wild, where 
malicious/hijacked scripts on your page could alter how the page acts before 
your more trustworthy code is able to run. Yes, I know that the concept of 
code security is a whole can o' worms unto itself, but I am just implying that 
this small thing would be helpful in protecting against some of the effects 
of such behavior.
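The run-first defense under discussion can be sketched with `Object.freeze`. To keep the example self-contained it freezes a stand-in prototype object rather than the real `Object.prototype`, so running it doesn't lock down the host environment; the behavior is the same.

```javascript
// Sketch of the first-to-run defense: freeze a prototype so a later
// overwrite attempt is discarded. A stand-in object is frozen here
// instead of the real Object.prototype.
var proto = { toString: function () { return "[object Safe]"; } };
Object.freeze(proto);

try {
  // A later (possibly malicious) script's overwrite attempt: silently
  // ignored in non-strict code, a TypeError in strict code.
  proto.toString = function () { return "[object Evil]"; };
} catch (e) { /* strict mode lands here */ }

var derived = Object.create(proto);
// derived.toString() is still "[object Safe]"
```

The silent-failure behavior in non-strict code is exactly why the original request asked for changes to be "silently discarded" rather than always throwing.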



I think that being able to override something like 
Object.prototype.toString

to lie about objects/values is a security hole we should consider
plugging. For instance, you can lie to
`document.location.href.toString()`... or a call like
`Object.prototype.toString.call(window.opera) == "[object Opera]"` (a 
common

browser inference for Opera) is easily fake'able.


Doesn't this imply the application deliberately 'lies' to itself? Not
sure to understand how would this be an issue?
It might even be sort of useful for mocking.


(see above)


--Kyle





Re: Object.prototype.* writable?

2011-05-07 Thread Kyle Simpson

The malicious script could schedule patching newly loaded code
directly without even overwriting Object.prototype (eg. to reuse your
example, it could replace document.location.href occurences with a
string constant in the 'trustworthy' function source directly).


Not if the code in question is inside a self-executing function closure, 
which is a pretty common pattern. In that case, the only vulnerability to 
trusting what you see from `Object.prototype.toString` (or 
`location.href.toString`) is if it's possible that someone paved over them 
earlier, either intentionally or accidentally.




This means forbidding overwriting properties of Object.prototype would
be 'security by obscurity' at best imho.


I already acknowledged that it's only one tiny piece of the overall code 
security discussion... but it would certainly help a few of those use-cases 
to be more secure.


Maybe the more appropriate term instead of "secure" is "reliable". For 
example, the case of testing for the validity of `window.opera`... that 
special object could be more trusted/reliable (not faked or accidentally 
collided with) if it was impossible to fake the output of 
`Object.prototype.toString.call(window.opera)`.
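The fakeability in question is trivial to demonstrate. The sketch below patches `Object.prototype.toString` the way earlier-running code could, then restores it so the rest of the environment is left intact; `fakeOpera` is a hypothetical name.

```javascript
// Any code that runs first can replace Object.prototype.toString and
// defeat an "[object Opera]"-style brand check.
var originalToString = Object.prototype.toString;
Object.prototype.toString = function () {
  return "[object Opera]";
};

var fakeOpera = {};
var looksLikeOpera =
  Object.prototype.toString.call(fakeOpera) === "[object Opera]";
// looksLikeOpera is true: the browser inference is trivially spoofed

Object.prototype.toString = originalToString; // undo the patch
```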




On 1:44 PM, Douglas Crockford wrote:
I agree that the primordials should have been frozen, but as Brendan
says, that ship has sunk. But a smart library can now do that job, so a
page can elect to have the benefit of frozen primordials if it wishes.


Again, a smart library can only do that if it's guaranteed to be the first 
code to run on the page. If not (which is usually the case), then all bets 
are off, unless the language offers some protections.


And ships being sunk not withstanding, I think it's a valid question if 
something like this could be a future candidate for a more strict (opt-in) 
JavaScript, like Harmony/Harmony.Next or strict mode itself.



--Kyle





Re: Object.prototype.* writable?

2011-05-07 Thread Kyle Simpson
Again, a smart library can only do that if it's guaranteed to be the 
first code to run on the page. If not (which is usually the case), then 
all bets are off, unless the language offers some protections.


All bets are probably still off. The malicious code that's first can load 
the latter virtuous code as data using cross-origin XHR, or, if the script 
isn't served with an Access-Control-Allow-Origin: *, via server-side 
proxying. Then the malicious code can rewrite the virtuous code as it 
wishes before evaling it.


My first reaction to this assertion is to say: so? It's a rather moot 
argument to suggest that code which can be altered before it's run isn't 
trustable... of course it isn't. The malicious coder in that scenario 
wouldn't even need to go to the trouble of overwriting Object.prototype.* in 
the frame, he could just remove the offending if-statement altogether. In 
fact, he wouldn't even need to modify my code at all, he could just serve 
his own copy of the .js file.  ... ...


So what are you suggesting? That regardless of the JS engine, no page's JS 
functionality is actually reliable, if any of the page's JS resource authors 
are dumb and don't configure CORS headers correctly,  because any malicious 
script (if it's first on the page) can completely hijack another part of the 
page? Yup, I agree.


This is a rabbit trail that I'm wary to go down, but I'll just indulge it 
for one quick moment: if you are enabling CORS on your server, and not 
protecting your JavaScript code, you're asking for someone to exploit your 
code in some XSS type of attack. The whole original purpose of SOP (same 
origin policy) was to prevent (or cut down significantly) on such things, 
especially as they relate to being able to trick the browser into sending 
along cookies/sessions to locations that allow a server to act as 
man-in-the-middle. If CORS basically completely eliminates any of the 
protections that SOP gave us, then CORS is a failed system.


But CORS is only failed if you do it wrong. I suspect that's part of the 
reason CORS is slow to wide-spread adoption (despite plenty of browser 
support, except Opera), because it's harder to get it right without throwing 
the barn door wide open. FWIW, I see most implementations of CORS only being 
on limited URL locations (sub-domains) which are purely web service/REST 
APIs, not general web server roots. That's not to say that no one is doing 
it wrong, but it is to say that them doing it wrong is irrelevant to this 
discussion, because it moots the whole premise.



All this is a moot discussion though, because malicious takeovers of a 
page are nothing but an exotic edge case, and only enabled if people do it 
wrong. The original request stemmed NOT from the malicious hacker scenario, 
nor from a page doing it wrong (per se), but from the oops, some other 
piece of dumb code earlier on the page accidentally screwed up and collided 
with something I need to be inviolate.



I've been at this for a while, as has Crock. I doubt there's any realistic 
scenario where code loaded later into an already corrupted frame can 
usefully defend its integrity. If you know of a way to defend against this 
rewriting attack, please explain it. Thanks.


Off the top of my head, it would seem at first glance that creating a new 
iframe for yourself might be the only such way (that is, of course, if you 
even *are* yourself, and haven't been transparently modified or replaced --  
see above).


I'm sure both of you are way more experienced at this than me (after my 12 
year web dev career so far). But I think you're trying to derail the narrow 
spirit of my original question by deflecting to much bigger questions. The 
appropriate forum for that type of discussion was when CORS was being 
conceived and brought about. As people love to say on this list: that ship 
has sailed.


None of this exotic what-if scenario indulgence invalidates my original 
request, that a clearly known bad-practice (changing *some*, not all, 
particular behaviors of natives) leads to code that is less than reliable, 
and can we make it a little less so by having the engine protect certain key 
pieces.


---
And btw, contrary to some people on this list who seem to operate almost 
exclusively on theoretical principle, security through deterrence (not the 
same as obscurity) is a long-established and perfectly valid approach. No 
computer system (SSL included) is completely immune to attack... we live 
with somewhat less than ideal theoretical utopia because we construct 
systems which are pretty good at deterrence, and with that we sleep 
peacefully at night.


What I'm suggesting should be viewed as another peg in the system of 
deterrence, and nothing more.


--Kyle




Re: arrow syntax unnecessary and the idea that function is too long

2011-05-07 Thread Kyle Simpson
Based on what evidence are we concluding that the majority of the 
javascript developers want -> syntax for functions? The fact that 
coffeescript is the hot buzzword? Was there some developer-community wide 
voting or poll that I missed? Or is it that a few vocal people on these 
lists like it, and that's being substituted as what the majority is in 
favor of?


IIRC there were cheers at JSConf this week.


Yeah, I unfortunately wasn't able to attend. I was quite sad about missing 
JSConf for the first time.


But, JSConf has just 150-200 JavaScript developers in attendance. While they 
are certainly some of the most passionate (and intelligent) developers of 
the community, no doubt, they are definitely not a representative sampling 
of the overall community. Making language decisions based on the vocal 
support of JSConf alone is not sufficient. I was certain there had to be 
more behind the claim than just that. So that's what I was asking for.



But you're looking for something that doesn't exist: a way to make 
scientifically sound decisions about language design.


I am not looking for any such thing. I was looking for more detail behind 
Brendan's (and Andrew's) assertions that -> is definitively better because 
it's shorter (and for no other stated reason).



There is *no* way to resolve syntax questions perfectly. We welcome 
community input, all community input.


I don't claim that any such perfect system could be devised. I was merely 
responding to Andrew's insinuation that the majority of the community 
(including him) had already voiced support for ->. If someone makes an 
implication, I think it's fair game on here to ask for the supporting 
reasoning.


I think I could easily come up with a dozen examples of patterns in 
JavaScript coding which are shorter, but which most of the community would 
say is *not* more readable. So I take issue with the assertion that 
shorter==better unequivocally.



But we are going to have to make a decision, and it simply won't be 
perfect. We're going to listen to everyone, consider the technical issues, 
and at the end of the day, make the best decision we can with imperfect 
information.


From the tone of this thread, and from many other recent postings regarding 
reactions from JSConf this week, it sounded like all of a sudden we'd gone 
from "yeah, CoffeeScript has some interesting short-hand syntax" to "the 
community has spoken, and CoffeeScript will be adopted into ES.Harmony/Next 
as-is."


I was, and am now, still wondering how we so quickly made the leap from 
Brendan's "harmony of my dreams" a couple of months ago, where the idea of # 
sounded good, and plausible for inclusion, all the way to Brendan declaring 
that it's basically a done deal that we'll be including a variety of 
function and other shorthands from coffeescript post haste?




--Kyle





Re: Object.prototype.* writable?

2011-05-07 Thread Kyle Simpson
My first reaction to this assertion is to say: so? It's a rather moot 
argument to suggest that code which can be altered before it's run isn't 
trustable... of course it isn't. [...] because any malicious script (if 
it's first on the page) can completely hijack another part of the page? 
Yup, I agree.


That was my point. Since you agree, I don't understand your point:


My point was that your argument was deflecting from the premise of the 
original request in a way that makes the conversation impossible to proceed. 
If you re-define the premise/assumptions to an unreasonable level, you make 
logical reasoning impossible. For instance, arguing that a virus could 
infect the JS engine of the browser is a *possible* scenario, but not a 
useful one to discuss in this context, because it fundamentally violates the 
premise. Similarly, arguing that a network router could be hacked to change 
code in-transit is conceivable, but moot and pointless to the spirit of my 
question.


The premise of the question is, whatever the state of the code on the page 
is (or how it got there), is there a way to prevent some piece of code from 
intentionally or unintentionally altering some important behaviors of native 
objects such that subsequent code could be tricked or tripped up? 
Furthermore, the spirit of the question was, granted there is no 100% system 
for that, but can we move the ball a little closer to the goal line with a 
very narrow idea of restricting a few things, in opt-in modes of JavaScript?




What protections do you have in mind?


Specifically, I think:
1. All changes to Object.prototype.* should be ignored/error'd, such that 
the predefined members of Object.prototype.* (the native built-ins) are 
immutable. This would not necessarily prevent extensions to 
Object.prototype, simply prevent redefinitions of existing native 
functionality.


2. OR; Let Object.prototype.* members continue to be mutable, but let those 
effects ONLY affect user objects, and not built-in native objects (like 
String, Array, etc). However, a few of Object.prototype.*'s members should 
still be considered for immutability, like for instance `toString()`, which 
is commonly used in the scenario of borrowing its functionality via 
`.apply()`, as described earlier with `window.opera`.


3. OR; There are other more exotic ideas I could advance, such as keeping 
Object.prototype.* mutable, but then exposing an additional interface like 
Object.__prototype__ (which is the original prototype, and is immutable).
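For what it's worth, ES5's property descriptors can already approximate option #1 in user space. Here's a rough sketch (the function name is made up, and unlike the engine-enforced opt-in being proposed, this only helps if it runs before any hostile code):

```javascript
// Sketch of approximating option #1 with ES5: make the existing data
// properties on Object.prototype non-writable/non-configurable, while
// still permitting *new* extensions. NOTE: unlike the engine-enforced
// opt-in being proposed, this only helps if it happens to run first.
function lockDownObjectPrototype() {
  Object.getOwnPropertyNames(Object.prototype).forEach(function (name) {
    var desc = Object.getOwnPropertyDescriptor(Object.prototype, name);
    // skip accessors (e.g. __proto__) and anything already locked
    if (desc && desc.configurable && "value" in desc) {
      Object.defineProperty(Object.prototype, name, {
        writable: false,
        configurable: false
      });
    }
  });
}

lockDownObjectPrototype();

// A later redefinition attempt now fails (silently in sloppy mode,
// throwing in strict mode):
try {
  Object.prototype.toString = function () { return "hijacked"; };
} catch (e) {}
console.log(({}).toString()); // still "[object Object]"
```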




And how, were your protections to be adopted


I was specifically suggesting they be part of one of the future opt-in 
modes, either for Harmony/Next, or more appropriately even, perhaps a future 
strict mode... or even call it safe mode. `use safe;` has a nice ring 
to it.



would all bets no longer be off when loading a virtuous library into a 
frame in which malicious code had already run first?


in the narrow context of what I'm suggesting, the protections afforded by 
the JavaScript engine would limit the ability of prior-malicious (or stupid) 
code from influencing some of the natives which my code might need to rely 
on.



As for accident, it depends how dumb that earlier piece of code was. If 
it modifies primordial methods so that they no longer satisfy their 
contracts in way that later code is ignorant of and unprepared for


Not all modifications are easily feature-testable such that later code would 
have any hope of preventing its own ignorance.



OTOH, if these earlier dumb scripts are not that dumb, then the later 
smart library should still be able to succeed.


To me, the notion of a smart library (even for the purposes of safety) is 
rather absurd. A fundamental assumption here is that there are in fact a 
certain subset of things which no JavaScript code should be allowed to do, 
so as to play nicely with other code. Extending Object.prototype has 
almost universally been taken as one such thing. Circumventing 
`location.href` inspection would be another good candidate.


We can't then say "well, we'll make an exception for this one smart library, 
and let it muck with that stuff"... because then the "smart library" 
becomes the single point of failure (aka attack).


That's why I approached this narrow question as specifically opt-in 
protections from the engine (non-circumventable), rather than in the 
user-space.



--Kyle



Re: arrow syntax unnecessary and the idea that function is too long

2011-05-07 Thread Kyle Simpson

But, JSConf has just 150-200 JavaScript developers in attendance.


Right. The JS community has no borders, no government, no constitution, no 
membership cards, no census... We welcome everyone. So we have no way of 
instituting democratic institutions.


they are definitely not a representative sampling of the overall 
community. Making language decisions based on the vocal support of JSConf 
alone is not sufficient.


I can only repeat what I said before. There's no magic way to figure out 
accurately what most people want. The best we can do is publicize, solicit 
feedback, discuss, and make a decision. As we have always done.


OK, it's fair to point out that an attempt is being made. I'm asking though 
for the resulting evidence from that attempt. As far as I can tell (and 
please correct me if I'm wrong), there's been a few discussions on some 
rather-esoteric lists/threads, like es-discuss, a few strawmans, and some ad 
hoc presentations at JSConf. If there's a significant medium of exchange of 
opinions and ideas about topics that I'm NOT listing there, please do tell 
me. I like to think I keep on the pulse of the JS community (generally 
speaking), and so I'm anxious to hear if and how I'm missing out.


If OTOH those few mediums do constitute the breadth of community opinion 
solicitation thus far regarding specifically the matters of these 
coffeescript shorthands, as I was previously inclined to believe, then my 
original assertion stands, that this doesn't constitute, in my opinion, 
enough of the broader perspectives on what is and is not useful and readable 
JavaScript. With all due respect, Brendan's personal tastes on what kind of 
code he likes to write is not enough. It has to be something that is likely 
to find wide spread support among the JavaScript masses.


And if we're looking for any kind of guide as to what they might like 
(because we cannot scientifically poll all of them, obviously), then might I 
suggest that the community that's grown up around jQuery (and its associated 
projects, plugin ecosystem, etc) is a place to start. I am not in any way 
suggesting jQuery is the only style of code out there, by any means. But it 
clearly represents a wide swath of how JavaScript developers are currently 
using the language. And jQuery is unique enough in its syntactic 
eccentricities (its chaining, etc) that it may offer some insights.


To the extent that jQuery syntax encourages people to take shortcuts, it 
could be seen as support for shorter syntax. And to the extent that jQuery 
uses lots of anonymous functions, it could be seen as an opportunity to 
shorten all that function soup.


But, by the same token, jQuery preserves verbosity in some places for 
readability sake. For instance, event names are known by their full 
canonical names, rather than by some shorthand (similarly with attribute and 
property names). I can say as many times as I write `click(...)` or 
`bind("click"...)`, I could see where `c` or `clk` might be nice to have, to 
save on all that typing. But, it would probably be for a loss of readability 
and semantics of the code.


So there has to be a careful balance struck between shortness and 
readability. I think at least a few of us are saying that we're skeptical 
that -> is sufficiently readable and semantic, compared to function(){} or 
#(){}. The same goes for the loss of `return`... Having `return` in there is 
a good thing, I think, because it makes it clear what's being returned. I 
often write complex functions with internal branching logic, where there's 
more than one `return` statement, and so it scares me to think how I could 
inspect and understand such code as readily/easily if the `return` was 
implicit, for instance, only the last evaluated expression, etc.



I was merely responding to Andrew's insinuation that the majority of the 
community (including him) had already voiced support for ->.


You have no way of knowing Andrew was insinuating that. I saw only the 
eminently reasonable point that we will never be able to please everyone, 
and will have to *try* to please as many people as possible.


Andrew's original message (in part):
I'd want the committee to listen to all interested parties and try to pick 
the solution that pleases the most people. That appears to be what's 
happening here.


The phrase "That appears to be what's happening here", following after the 
"committee... listen... pick" sentence before it, led me to believe that Andrew 
was indicating that the movement to adopt CoffeeScript-like shorthand was a 
result of the committee *having already listened* and *having already 
picked* a solution that most people agreed with. It was the implication of 
"this has already been happening" that I reacted to. If I misread it, I 
apologize. But my interpretation was fueled strongly by half a dozen blog 
posts and dozens of tweets from JSConf and post-JSConf which seemed to 
suggest that this stuff was already basically a done deal.





I take 

Re: Object.prototype.* writable?

2011-05-07 Thread Kyle Simpson
Good, we're making progress. Previously I was not responding to your 
original request, I was responding to your response to Crock's message. 
Hence our confusion about premises. Thanks for making yours clearer. As 
for your original request, now that I better understand what you're 
looking for, I think SES is again a good answer. Should SES become a 
directly supported standard, there would be some opt-in as you suggest. 
For now, for SES as implemented on ES5 
http://codereview.appspot.com/4249052/, the interim opt-in is to run 
initSES.js first in the JavaScript context in question. In the case of a 
browser frame, the interim opt-in is to place


<script src="initSES.js"></script>


Unfortunately, we're back to the chicken-and-the-egg... if I could guarantee 
that my code was the first to ever run on any page, almost none of the 
problems I'm complaining about would be an issue, because I could just make 
sandboxed copies of what I needed, and store them privately inside a 
closure. Being able to run-first is the key component that isn't true, and 
if it were true (which is required of initSES.js), then I wouldn't need 
initSES.js.
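To illustrate why run-first is the key component: if your code really does get to run first, a closure over the primordials is all it takes (a sketch; `Safe` is an illustrative name, not any real library):

```javascript
// If your code genuinely runs first, it can snapshot the primordials
// it depends on; later tampering can't touch the closed-over copies.
// ("Safe" is an illustrative name, not any real library.)
var Safe = (function () {
  var toString = Object.prototype.toString;
  var hasOwn = Object.prototype.hasOwnProperty;

  return {
    classOf: function (v) { return toString.call(v); },
    hasOwn: function (o, k) { return hasOwn.call(o, k); }
  };
})();

// Malicious (or just dumb) code that runs later...
Object.prototype.toString = function () { return "hijacked"; };

console.log(Safe.classOf([])); // "[object Array]" -- snapshot unaffected
console.log(String({}));       // "hijacked" -- live lookups are not
```

The chicken-and-egg problem is exactly that nothing in user space can guarantee the snapshot happens before the tampering.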


So, while I agree that the direction of SES seems to be along the lines I am 
asking for, the interim story doesn't really hold much water. The good news 
is that some movement is happening toward it. I hope that continues.


Is the thought that you would have a similar opt-in to SES as you do 
strict mode, eventually? Or something else?


--Kyle



Re: Escaping of / in JSON

2011-04-13 Thread Kyle Simpson
Many JSON serializer implementations escape the / character, including for 
instance PHP's json_encode(). However, JavaScript's own JSON.stringify() 
does not. If you look at the grammar on json.org, as I read it, the escaping 
of / is **optional**, since it is a valid Unicode character, and it's not 
", \, or a control character.


I personally find this annoying as I never embed JSON into script tags like 
that, and even if I do, my data never looks like /tag. I wish that JSON 
serializers, including JSON.stringify, had an option to control if you want 
/ to be escaped. It could of course default to whatever each 
implementations current default behavior is, but I think it should be a 
configurable behavior rather than baked in, one way or the other.
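A user-space approximation of that option is straightforward, since `\/` is always a legal JSON escape and the result remains valid JSON (the function name here is illustrative):

```javascript
// JSON.stringify leaves "/" alone, which is valid JSON but unsafe to
// embed verbatim inside a <script> element:
console.log(JSON.stringify({ html: "</script>" }));
// {"html":"</script>"}

// A user-space version of the configurable escaping being asked for.
// JSON.stringify never itself emits "\/", and "/" can only occur inside
// strings, so a blanket replace is safe and stays valid JSON:
function jsonForScriptTag(value) {
  return JSON.stringify(value).replace(/\//g, "\\/");
}

console.log(jsonForScriptTag({ html: "</script>" }));
// {"html":"<\/script>"}
```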


--Kyle



--
From: Lasse Reichstein reichsteinatw...@gmail.com
Sent: Wednesday, April 13, 2011 4:26 AM
To: EcmaScript Steen es-discuss@mozilla.org; es5-discuss 
es5-disc...@mozilla.org; Oliver Hunt oli...@apple.com

Subject: Re: Escaping of / in JSON


On Wed, 13 Apr 2011 07:30:58 +0200, Oliver Hunt oli...@apple.com wrote:

It has recently been brought to my attention that a particular use case 
of JSON serialisation is to include JSON serialised content directly 
into an HTML file (inside a script tag).  In this case in addition to 
the threat of strings being terminated by a double quote there's also 
the potential for the string /script to terminate the JS source.


The request i received was to escape the slash character, which is 
allowed as input but per ES5 spec we aren't allowed to emit.


I will say that I don't really like this idea as it leads to why not 
escape #?, etc but I thought I should bring this up on the list and see 
what others think.


My personal opinion is that if you want to embed any string into any
formatted context, you need to be aware of the environment you are 
plugging

things into.

If you put something into HTML, you need to know where in the HTML it is.
If it's an intrinsic event handler, the requirements are different than if 
it's a script tag. In a script tag, it's not just / that's a problem, but 
also, e.g., <![CDATA[ and <!-- if the HTML is actually XHTML or HTML5.

I don't want to start adding exceptions to JSON just to help one usecase.
I'd rather create a function for people to use that can convert a JSON 
string
to valid HTML script element content (but not as part of the language, 
it's too

HTML specific). It would fit better into HTML5, so that it can follow any
changes to the specification.

(On the other hand, RegExp.quotePattern and RegExp.quoteReplacement like 
the Java

versions would make sense to have in ES).
/L
--
Lasse Reichstein - reichsteinatw...@gmail.com




Re: Existential operator

2011-04-13 Thread Kyle Simpson
See http://wiki.ecmascript.org/doku.php?id=strawman:default_operator --  
the proposal there is ?? and ??= since single ? is ambiguous after an 
expression due to conditional expressions (?:).


The default operator doesn't address a significant part of what Dmitry is 
asking for -- the . in the ?. usage -- which allows the property access to 
be expressed only once and used for both the test and assignment.




let street = user.address?.street

which desugars e.g. into:

street = (typeof user.address != "undefined" && user.address != null)
   ? user.address.street
   : undefined;


Part of what Dmitry asked for, I'd like to see in the plain ?: operator, and 
it seems like it would be possible to disambiguate from a top-down parser 
perspective. I would like to see the `:` (else condition) portion of a ?: 
expression be optional. For instance:


var a = b ? c;  // aka, `var a = b ? c : undefined`

The other (more awkward/obscure looking) way to do this is:

var a;
b && (a = c);

The difference between the syntax sugar I'm asking for and the default 
operator in the strawman is that ?: (or &&) allows for separate expressions 
for the test (`b`) and the success_value (`c`), whereas ?? requires that the 
test expression and success_value be the same expression.


For instance:

var a = (b < 5) ? b : undefined;

In this case, the ?? default operator is of no use. But being able to drop 
the `: undefined` part, and also avoid using the more awkward-looking && 
syntax, would certainly be a useful sugar.
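A concrete contrast may help (noting that `??` has since been standardized, in ES2020, as the nullish-coalescing operator):

```javascript
// ?? (the strawman's default operator, standardized in ES2020 as
// nullish coalescing) only works when the test and the result are the
// same expression:
var picked = null ?? "fallback";
console.log(picked); // "fallback"

// The pattern described here needs *separate* test and result
// expressions, so the `: undefined` must still be spelled out today:
var b = 7;
var a = (b < 5) ? b : undefined; // the proposal: `var a = (b < 5) ? b;`
console.log(a); // undefined
```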



--Kyle




Re: Existential operator

2011-04-13 Thread Kyle Simpson


The other (more awkward/obscure looking) way to do this is:

var a;
b && (a = c);



a = b && c;


That is not the same thing. Your code assigns `b` to `a` if `b` is falsy. 
The other code either leaves `a` as undefined (strictly doesn't assign) if 
the test fails, or assigns it the value of `c` (no matter what the type of 
`c` is) if the test succeeds.



Your suggestion to change the ternary operator is interesting but
creates incompatibility. It is not feasible.


I'm curious what incompatibility you mean? If we're talking about 
backwards compatibility... of course. But a lot of the ES-Harmony (and 
later) stuff is of that same persuasion. I'm simply saying if we're talking 
about adding sugar to these operators for future versions of ES, this is one 
pattern I end up typing a LOT and it would be helpful.


Or is there some ambiguity of top-down parsing that I'm missing?


--Kyle





Re: Existential operator

2011-04-13 Thread Kyle Simpson
See http://wiki.ecmascript.org/doku.php?id=strawman:default_operator --  
the proposal there is ?? and ??= since single ? is ambiguous after an 
expression due to conditional expressions (?:).


The default operator doesn't address a significant part of what Dmitry 
is asking for -- the . in the ?. usage -- which allows the property 
access to be expressed only once and used for both the test and 
assignment.


This was a one-line FYI, and on-topic in reply to Dmitry's post since he 
brought up ?= (spelled ??= in the strawman). Why are you objecting to it?


I apologize, I thought you were citing the ??/??= strawman in reference to 
Dmitry's original first question, about the ?. operator. I didn't realize 
you were instead referring to his second question, about ?=.



I don't see any ?. use case here, so I'm still not sure what this has to 
do with Dmitry's post or my reply. The topic in the Subject: line is 
CoffeeScript's existential operator, not anything that might be spelled 
with a ?


Yes, I apologize for slightly hijacking the thread. I was really just making 
an aside note that part of what Dmitry was asking for, which the ?. in 
Coffeescript does -- allowing the trailing `: undefined` part to be 
omitted -- is something that I indeed find useful in and of itself, and in 
fact would like to see it on the basic ?: operator, if possible.




First, making : optional introduces a dangling-else ambiguity:

 x = a ? b ? c : d;

This could be (x = a ? (b ? c : d)) or (x = a ? (b ? c) : d).

True, if-else already has this (traditional in C-based languages) 
ambiguity, resolved by associating : with the inner ? and so requiring 
the programmer to use braces if the other association is wanted, or in 
general just to avoid trouble. But why should we add more of the same kind 
of ambiguity, given the benefit of hindsight?


I'm not sure I see how this is really introducing an additional ambiguity? 
As you rightly assert, this ambiguity is already present when you chain a 
series of nested ?: usages together, and you already sometimes have to use 
() to dis-ambiguate, something which is long since familiar to those who 
dare to brave the ?: nested chaining. It seems like it's the same ambiguity, 
not additional. But I suppose that's just a matter of perspective.


For `x = a ? b ? c : d`, it seems pretty reasonable, based on existing 
precedent with ?: operator precedence, that it'd be taken as `x = a ? (b ? c 
: d) {: undefined}`. If that's what you wanted, then it's not ambiguous, 
and you don't need (). If you want the other way, you simply use () and make 
it so. Not sure why this would be bad additional precedent?


I personally tend to avoid that pattern of coding, as I find the potential 
for mishaps greater than the benefit. But in the times where I do use such a 
pattern, I'm cautious to always use (), even when not strictly necessary, so 
in that respect, there'd be no ambiguity to using ?: with optional :, at 
least in the way I code things. And I don't think adding () to some chains 
where you want to override operator precedence is an undue hardship on 
anyone, as you already (sometimes) have to do that with ?:.




I think the dangling else problem is enough to nix this,


That's a shame. I hope not. But I suppose I'm not surprised if it turns out 
to be so.



I'm also not sure b is falsy but not undefined in practice. It seems 
contrived to want a numeric value sometimes, and undefined others. IOW,


 var a = (b < 5) ? b : undefined;

looks like a bug that will result in undefined or NaN values propagating 
via a, some of the time, into other numeric or number to string 
expressions. It looks like a mis-coded min against 5.


In my code, one example where I often use a pattern of something either 
being undefined or having a real value, is in options object 
configurations, like when you pass an options hash to a function. For 
instance, if a property is omitted, or it's present but is `undefined`, then 
it's taken to have not been set at all, and thus is either ignored, or in 
some cases is defaulted to some other value. OTOH, if it's set to an actual 
numeric value, then the numeric value is of course used.


The value `null` is another common value used for the purpose of indicating 
"not set" or "ignore this property". I tend not to like that quite as much, 
since `typeof null == "object"` (which can be confused with other types if 
you're not careful), whereas `typeof undefined == "undefined"` 
unambiguously.
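The options-hash convention being described can be sketched like this (option and function names are illustrative): an omitted or `undefined` property means "not set, use the default", while an explicit `null` means "deliberately disabled":

```javascript
// Sketch of the options-hash convention (names are illustrative):
// omitted/undefined => "not set, use the default"; null => "disabled".
var DEFAULT_RETRIES = 3;

function normalizeRetries(opts) {
  if (typeof opts.retries == "undefined") {
    return DEFAULT_RETRIES;   // not set at all: fall back to default
  }
  if (opts.retries === null) {
    return null;              // explicitly disabled
  }
  return opts.retries;        // real value provided
}

console.log(normalizeRetries({}));                // 3
console.log(normalizeRetries({ retries: null })); // null
console.log(normalizeRetries({ retries: 5 }));    // 5
```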


There's also been a few cases where I've distinguished between a value being 
undefined (aka, not set) and the value being null (aka, set 
deliberately to empty). For instance, if you pass in an option with the 
value as `undefined`, that means not set and it's ok to grab and use a 
default value for that option. But if you explicitly pass in an option with 
value `null`, that means disable or ignore me and don't use the default 
value. I don't use false in this case, as it's easy to 

Optional : in ?: operator [was: Existential operator]

2011-04-13 Thread Kyle Simpson
I'm not sure I see how this is really introducing an additional 
ambiguity?


It is obviously introducing an ambiguity where none exists today. ?: is 
indivisible, unlike if vs. if else.


I was referring to no additional visual ambiguity inside the ?: with respect 
to operator precedence and how to interpret a chain of nested ?:, as 
compared to doing the same task if `:` is optional.


a ? b ? c : d ? e ? f : g : h ? i : j : k
==
a ? (b ? c : (d ? (e ? f : g) : (h ? i : j))) : k

Firstly, that code has no *real* ambiguity, because operator precedence 
tells us how those implied () sets disambiguate it.


Secondly, in my opinion, the visual ambiguity of it is not made any more 
burdensome (and is even perhaps SLIGHTLY *clearer*) by adding () sets to 
disambiguate where you want optional `:`, such as this example (same as 
above, but where `g` and `j` are omitted as placeholders for implied 
`undefined`):


a ? b ? c : d ? (e ? f) : (h ? i) : k
==
a ? (b ? c : (d ? (e ? f : undefined) : (h ? i : undefined))) : k

Moreover, this code wouldn't have any actual ambiguity even if you omitted 
those two sets of () in the original. It would still be a valid expression, 
with a different (but still unambiguous) interpretation:


a ? b ? c : d ? e ? f : h ? i : k
==
a ? (b ? c : (d ? (e ? f : (h ? i : k)) : undefined)) : undefined

Developers who chain/nest ?: together are already familiar with how and when 
they have to use () to disambiguate, and the rules for what happens when 
they don't, and it seems like exactly the same effort for them to do so if 
implied `: undefined` were something they wanted to/could leverage.



In sum, this sounds like an argument against ? as infix operator (implied 
: undefined).


I'm sorry, I'm lost by this statement. I don't understand on what basis you 
conclude that I just argued against the very thing I'm asking for. Can you 
elaborate at all?





var opts = {
 doX: (someX > 0 && someX < 10) ? someX,   // leaving off the `: undefined` (or `: null` if you prefer)
 doY: (someY > 0 && someY < 1) ? someY     // ditto
};


Sorry, I think this is a hazardous pattern. doX suggests a boolean 
value,


How is

doX = (someX > 0 && someX < 10) ? someX

more suggestive of a boolean value than is

doX = (someX > 0 && someX < 10) ? someX : undefined

Or, put another way, how are either of them suggestive of a boolean value 
result? Unless you mean simply the name doX being suggestive of do or 
do not (probably a poor name choice for my example), I don't see how 
either code snippet itself implies a boolean value result.


As far as I am concerned, they both clearly indicate selecting a value (type 
irrelevant) based on a boolean test. The implied part hides one of the 
variable choices, yes, but I don't see how that suggests an entirely 
different type of operation/result?




but you want (number | undefined), a type union.


I'm not sure if you're arguing against:
 a) the pattern of having an `undefined` value in a variable when unset, 
that otherwise stores numbers when it's set; OR

 b) the implied `: undefined`

If (a), this was a long held pattern long before I ever wrote my first line 
of JavaScript. Moreover, it's far more common today on the greater web than 
any code I've ever written. I'd argue it's pretty close to a de facto 
pattern at this point, so I'm not sure what the point is in using that 
argument to contradict my request for an implied `: undefined`. Moreover, 
this code sample I gave is only one such example. If it irks you that much, 
I'm sure I could find other examples to illustrate. But, I'm not really sure 
how that helps or hinders the process.


If (b), I'm not sure what the type union for set vs. unset has to do 
with the request for implied `: undefined`? These seem like different 
discussions altogether.


It sounds like you're basically saying either that my idea is very niche and 
thus irrelevant to consider, or that the supporting reasoning behind the 
idea is flawed/bad code, and thus the idea is necessarily flawed/bad. Is it 
either, both, or neither of those that you are suggesting?



If any consumer fails to discriminate using typeof, they'll get undefined 
which coerces to NaN as number (to 0 as integer). Bad times.


First of all, I'm talking about code where I am the consumer, so I take care 
to make sure the proper checks are done.


Secondly, criticizing the pattern as a substitute for criticizing the 
idea/proposal is a non sequitur, or orthogonal at best. Completely ignoring 
the implied `: undefined` for a moment, there's plenty of other ways that a 
variable might end up with a proper (numeric) value in the set state, and 
an `undefined` value in the unset state. It is therefore a granted that, 
regardless of whether implied `: undefined` were ever to be considered valid 
or not, safely checking variables with proper understanding of type coercion 
is a must.
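The coercion hazard in question, and the discriminating check that guards against it, in miniature (`coordX` is an illustrative name):

```javascript
// The hazard described above: an unset variable silently coerces.
var coordX; // unset

console.log(coordX + 1); // NaN -- undefined coerces to NaN as a number
console.log(~~coordX);   // 0   -- undefined coerces to 0 as an integer

// ...and the typeof check that guards against it:
var x = (typeof coordX != "undefined") ? coordX : 0;
console.log(x + 1); // 1
```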



--Kyle




Re: Optional : in ?: operator [was: Existential operator]

2011-04-13 Thread Kyle Simpson
In an LR(1) grammar, if vs. if-else or ? vs. ?: is a shift-reduce conflict 
(to use yacc terms). It is an ambiguity. It can be disambiguated, but 
please do not confuse disambiguation via shifting with no *real* 
ambiguity.


My point is it IS disambiguated by the definition of operator precedence. 
The adding of an optional nature to the : portion of ?: wouldn't change 
that, as the expression would still hinge on the same definition of operator 
precedence. So it doesn't really introduce additional ambiguity (either 
visual or processing-wise), that isn't already there for ?: usage. It just 
piggybacks on that same operator-precedence-disambiguity which is long-held 
in JavaScript. In fact, based on the definition I'm putting forth, I don't 
think it requires any (or hardly any) additional rules for operator 
precedence or disambiguation.



I'm sorry, I'm lost by this statement. I don't understand on what basis 
you conclude that I just argued against the very thing I'm asking for. 
Can you elaborate at all?


You made an ad-hoc argument for a ? b : undefined but want to write that 
as a ? b. The latter is not obviously the same (Dmitry pointed out the 
potential for confusion with ??, which has a very different result).


OK, fair enough, I can see how some confusion is possible, IF the ?? 
operator is also added. But I am not convinced that possible future 
confusion is, alone, enough for invalidation of an idea. I can name a number 
of things coming down the pike for ES next's that are, at first, a little 
confusing, and take some getting used to. Their value to the language, and 
to syntactic sugaring, withstands the initial confusion.




It's also not obviously a good non-hard case to burn into the grammar.


What is non-hard about saying that, in the processing of any ?: expression, 
if the ? is present but the : is not found where it is expected, then the : 
is implied and is always `: undefined` (or `: void 0`, if you prefer)? 
Perhaps I'm missing it, but I really don't see how it's any harder than that 
alone.




How is

doX = (someX > 0 && someX < 10) ? someX

more suggestive of a boolean value than is

doX = (someX > 0 && someX < 10) ? someX : undefined


The name doX connotes boolean.


OK, so your complaint is about the bad name I gave that variable. Fine. 
Agreed. do was a bad word for a non-boolean. s/do(X|Y)/coord$1. Variables 
names in examples aside, the point of the example (the operator usage) 
should be clear. Not sure why it isn't?




I'm not sure if you're arguing against:
a) the pattern of having an `undefined` value in a variable when unset, 
that otherwise stores numbers when it's set; OR

b) the implied `: undefined`


Both, (b) because of (a). I was explicit about this, there's no confusion.


There is still confusion on my part, because I honestly don't see how (b) 
necessarily and unequivocally follows from (a). I can clearly see that you 
want to connect them, but I don't see why they have to be. (see below).



No, people do not risk converting undefined to NaN (to 0). It's true that 
var hoisting means undefined inhabits the type of all vars, but unless you 
use an uninitialized var, that's not an issue.


There's many other ways that variables get to `undefined` besides them not 
yet being initialized. For instance:


1. `delete opts.coordX`
2. unsetting variables so the value is GC'd.
3.  setting variables to `undefined` (or `void 0`) to prevent a memory leak, 
for instance on the `onreadystatechange` and `onload` handlers of XHR 
objects, in IE.

4. etc

The pattern of explicitly assigning `undefined` to a variable that otherwise 
holds some real value, and then making logic decisions based on if the 
variable is in a "defined" or "undefined" state, is clear, works, and is 
well known (if distasteful to some, including obviously you). I'm surprised 
you're insisting that it's purely niche (it's not, I can find lots of 
examples, even in major js libs) or it's bad/wrong (just because you say 
so?).


But, whether it's right or wrong, it's clearly possible in the language, and 
even if you dislike it, used by a lot of developers besides myself, making 
it (at least a candidate for) a de facto coding pattern.
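To make that concrete, here's a minimal sketch of the pattern (the variable and property names are just illustrative, not from any particular library):

```js
var opts = { coordX: 10, coordY: 20 };

function getCoords(opts) {
  // the logic keys off whether the property is in a "defined" state
  if (opts.coordX !== undefined) {
    return [opts.coordX, opts.coordY];
  }
  return null;
}

var before = getCoords(opts); // [10, 20]
delete opts.coordX;           // one of the ways a value "goes away"
var after = getCoords(opts);  // null
```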



First of all, I'm talking about code where I am the consumer, so I take 
care to make sure the proper checks are done.


Why don't you set doX (or just x) to the minimized value you want?


Encapsulation. Inside a class, method, module, etc, you define the base 
default values for the configuration params. Then the outside user of the 
API simply passes or doesn't pass what they want in, and the configuration 
mixes in what they pass with what the defaults are. This is a hugely 
standard pattern across tons of different public libs, including major ones 
like jQuery UI, etc.


So, from the outside, simply not passing in a `coordX` property on the 
`opts` config object is the most common pattern, and in general, preferable.
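A minimal sketch of that defaults-mixing shape (the option names here are hypothetical, not from any particular library):

```js
var DEFAULTS = { coordX: 0, coordY: 0, color: "red" };

function configure(opts) {
  opts = opts || {};
  var config = {};
  for (var key in DEFAULTS) {
    // an absent (or explicitly `undefined`) option falls back to the default
    config[key] = (opts[key] !== undefined) ? opts[key] : DEFAULTS[key];
  }
  return config;
}

var cfg = configure({ coordX: 5 });
// cfg.coordX === 5; cfg.coordY === 0; cfg.color === "red"
```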


But, as I showed, when defining the config 

Re: Optional : in ?: operator [was: Existential operator]

2011-04-13 Thread Kyle Simpson
Brendan, you've asked for other coding examples where I use the pattern of 
some variable being `undefined` or not to trigger different behavior (that 
is, to use the variable or not). Here's two more:



1. I have a templating engine DSL (called HandlebarJS) I wrote (in JS), 
which includes a very strict minimal subset of a JavaScript-like syntax 
for declaring template variables in the definition-header of a template 
section. In fact, the variable declaration stuff is massaged just slightly 
(for variable namespacing, mostly), and then executed as straight 
JavaScript.


In a template section header, I'm able to define a local variable for the 
section which has the ID reference of another sub-template section to 
include. If you ask to include a sub-template, and the local variable you 
give as the name of the template to include is an empty string "" or is `undefined` 
or otherwise falsy, then the templating engine simply skips over that 
template section inclusion. Example:


{$: #main | content = data.content ? "#content" : "" $}
   <h1>Hello World</h1>
   {$= @content $}
{$}

{$: #content}
   <p>
   {$= data.content $}
   </p>
{$}

So, if the `data.content` variable is truthy, then the local variable 
`content` gets the value "#content" (a string ID reference to the sub-template 
section of that name), and if not, then it ends up as the empty string "". 
Then, after the <h1> tag, a template-include is called, with {$= @content 
$}, which basically says: get the value out of that local variable, and if 
it's a reference (by ID) to a template section, then include it. If the 
variable referenced is empty, or undefined, or null, or whatever falsy, then 
simply silently don't include anything.


Of course, the `: ""` part of the variable declaration is what's relevant to 
our discussion. I could obviously specify another template ID in that 
string... but in this example I'm giving, if the `data.content` is empty, I 
don't want to include the markup (the <p>...</p>) from the #content template 
at all, so I just set the local variable to "", which results in no 
sub-template being included.


Here's where this gets to the point: in my template DSL, you're allowed to 
drop the `: ""` from ?: usage, so the declaration for #main can look 
slightly cleaner, like this:


{$: #main | content = data.content ? "#content" $}

Purely syntax sugar I'm giving there, to help the template not have so much 
code in it. If the "" or `undefined` or whatever falsy value in the "or" 
case can be assumed, it makes the writing of that variable declaration a 
little bit nicer. When I parse the data declarations from my template 
syntax, and then hand it off to JavaScript, I simply substitute in the `: 
undefined` if it's not present. I even take care of nested ?: with the 
optional `:` parts.


To put a finer point on it, it would be nicer/easier if I didn't have to add 
those implied clauses in, because the JavaScript language just supported 
that syntactic sugar directly.


-

2. I've got cases where I have a set of code in an embeddable widget, that 
someone can take the code and embed into their site. My code relies on 
jQuery, but at least jQuery 1.4. So, I first check to see if jQuery is 
already present, and if it's 1.4+, and then if so, I just use the page's 
copy of jQuery. Otherwise, I go ahead and load jQuery dynamically for my 
code to use.


So, a drastically simplified version (yes, I know the version matching logic 
is faulty from the over-simplification) of this code looks like this:


(function(){
  var jq = ($ && $.fn.jquery.match(/^1\.[4-9]/)) ? $ : undefined;

  ...

  if (typeof jq == "undefined") {
     // go ahead and load jQuery dynamically
  }
})();

So, how can I write this code differently? Of course there's other ways to 
write it.


var jq;
($ && $.fn.jquery.match(/^1\.[4-9]/)) && (jq = $); // jq is either jQuery or 
it's still `undefined`, which I prefer


OR

var jq = ($ && $.fn.jquery.match(/^1\.[4-9]/)) ? $ : null; // jq is either 
jQuery or it's `null`, which I like less


OR

var jq = ($ && $.fn.jquery.match(/^1\.[4-9]/)) && $; // jq is either jQuery 
or it's `false`, which I like less


OR ...

And there's probably half a dozen other patterns too. I could use `null` as 
the sentinel value, I could use `false`, heck I could even use an empty "" 
string.


The point is, I prefer to use `undefined` in such cases, because 
semantically, I'm saying that my `jq` is in fact undefined (or rather, 
not yet defined) if either jQuery is not present, or the version match 
fails. I don't prefer to use `false` or `null` or `0` or `` as the 
alternate value, I prefer to use `undefined`.


And so, it'd be nice if my `var` statement could be simpler, like:

var jq = ($ && $.fn.jquery.match(/^1\.[4-9]/)) ? $;

Why? because it keeps the declaration and initialization all in one 
statement, which is cleaner, and because it preserves that `jq` is only ever 
a valid reference to jQuery, or it's strictly `undefined`... there's no 
other 

Re: Bringing setTimeout to ECMAScript

2011-03-20 Thread Kyle Simpson

I don't see why you can't verify your expectation.

If you think you can verify your expectation, please write ECMAScript
interoperable test cases that show how to test whether an ECMAScript
engine is conform to your scheduling policy or not. It will be enough to
convince me.
Testing one timer (When you do a setTimeout( f, ms ) you know you are 
saying "fire this @ t = Date.now() + ms") will not be difficult.
Testing your scheduling policy is a different story.


Forgive my naivety, but would it not be possible to test such a scheduling 
policy with something like this:


var test = 1;

function a() {
  test *= -1;
}
function b() {
  assertEquals(test,-1);
}

setTimeout(a,0);
for(i=0;i<1E10;i++){i=i;}  // or any other sufficiently long-running 
algorithm

setTimeout(b,0);


--Kyle 


___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Re: Bringing setTimeout to ECMAScript

2011-03-20 Thread Kyle Simpson
Nowadays the clamp is there because sites use |setTimeout(f, 0)| when 
they really mean run this at 10Hz and if you run it with no delay then 
they swamp your event loop and possible render wrong (e.g. the text 
disappears before the user has a chance to read it).
I'm not convinced that this is the meaning. I use |setTimeout(f, 0)| to 
mean "schedule f after this event completes" or "push |f| onto the event 
queue"; I think that is the common meaning. A delay value of zero is 
perfectly sensible, but we mean zero seconds after other queued events, 
not zero seconds after this event.  We assume (without real evidence I 
suppose) that UI and IO events can be queued while our JS (the caller of 
|setTimeout(f, 0)|) runs.


I'd actually say that the most common meaning for `setTimeout(f,0)` is: do 
`f` as soon as possible after the current code finishes. There's a bunch of 
places I do things like that. For instance, in IE (older versions), it's 
prudent to delay any code with a setTimeout(..., 0); that is adding script 
elements to the DOM, to avoid certain crashing race conditions. And there's 
of course dozens of other cases where similar things are a reality on the 
web.
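That "as soon as possible after the current code finishes" reading is easy to demonstrate with a small ordering sketch:

```js
var order = [];

order.push("before");
setTimeout(function () {
  // only runs once the currently-executing code has returned
  order.push("deferred");
}, 0);
order.push("after");

// synchronously, order is ["before", "after"]; "deferred" is
// appended on a later turn of the event loop
```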


If:

setTimeout(a,0);
setTimeout(b,0);
// some non-trivial computation

... is really to be interpreted that `a` and `b` have a non-deterministic 
ordering, that's quite counter-intuitive and in fact will definitely have 
potential to break some code across the web. On the contrary, I've never 
seen that pattern be unreliable, so I would suspect all the browsers are 
already guaranteeing that order, using some queue (not a stack, 
obviously, as Jorge has been saying) for each target time.


Furthermore, this pattern should also be true here, right?

setTimeout(a,5);
var then = (new Date()).getTime(), now;
do { now = (new Date()).getTime(); } while (now < (then + 5));
setTimeout(b,0);

If in this case `b` can go before `a`, that seems like that violates the 
principle of least-surprise, because as a programmer I've basically tried to 
ensure with that snippet that both timers are set to fire at essentially the 
same time target, and if they are firing at ~ the same time, I can't see any 
expectation that makes sense except first-come-first-served (in other 
words, the time-target queueing).


There are cases where non-determinism is an unfortunate reality, like the 
iteration order of objects :), but I don't think that timer/event ordering 
should be one of them, if it can be avoided.




Multiple repeated calls to |setTimeout(f,0)| are bugs


I don't agree with that assertion at all. Two different functions might 
queue up two different snippets to happen as soon as possible, later, 
each of them using their own setTimeout(..., 0).




and setInterval of zero would be a bug.


setInterval(...,0) may be silly, but that doesn't mean it's a bug. It means 
"make this happen as fast as possible," just like above where 
setTimeout(...,0) means "make this happen as soon as possible."


The swamping would occur if setInterval(f,0) was actually going to spin at 
sub-millisecond speeds. It could also occur if you fake setInterval with:


function f() {
  // ...
  setTimeout(f,0);
}
f();

But in either of those cases, I don't see why there'd be any reason for 
clamping at anything higher than 1ms (being the smallest unit of time I 
can address with the API anyway)?




--Kyle




___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: About private names

2011-03-20 Thread Kyle Simpson
BTW, if you know that a property name is "foo", why would you ever code 
obj["foo"] instead of obj.foo?


The most obvious reason is if the name of the property contains a character 
which cannot be an identifier character in the property name... like a 
unicode character, for instance.



Without private names there is no particular reason to say obj['foo'] 
rather than obj.foo but there is a very important distinction between 
obj[foo] and obj.foo.


Are we talking about the difference between obj['foo'] and obj[foo]? I think 
perhaps that was a subtle shift in this conversation that I missed until 
just now?


Without private names, is there (and what is it?) an important distinction 
between:

1. obj[foo] and obj.foo; AND
2. obj['foo'] and obj.foo
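For concreteness, setting private names aside, the mechanical difference between the bracketed forms is just string-vs-computed lookup:

```js
var obj = { foo: 1, bar: 2 };
var foo = "bar"; // a variable that happens to share a property's name

var a = obj.foo;    // literal member access: 1
var b = obj["foo"]; // the same property, named by a string literal: 1
var c = obj[foo];   // computed: uses the *value* of `foo`, i.e. obj["bar"]: 2
```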


--Kyle


___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Bringing setTimeout to ECMAScript

2011-03-19 Thread Kyle Simpson
What I was saying is, if I run this program through V8 right now (with 
its theoretical future support of setTimeout() included), then what will 
happen:


function fn() {
 print("hello");
}
for (var i=0; i<10; i++) {
 setTimeout(fn,i*1000);
}

That for-loop will finish very quickly (probably < 1 ms). Would V8 (or any 
other JS engine) "finish" in the sense that the calling embedding code 
thinks this program is completely finished, and it returns back control 
to the C/C++ embedding layer when:


a) right after the for-loop finishes; OR
b) after only the first call to `fn`, since its timeout was effectively 
0, and so would have been immediately after the main program finished; OR
c) after all of the queued up calls to `fn` have finished, about 9 
seconds later?


(a) is the right answer because the event loop is run by the embedding, 
not v8. IMnsHO.


OK, so what I was asserting in my earlier email is, I'd prefer that if 
setTimeout() were going to be added to the official language spec, and in 
fact directly implemented in the engines, that instead of making the 
embedder's job harder (ie, me), that the engine entirely encapsulate and 
manage the event loop.


I realize I'm certainly in the vast minority with that opinion, but I'm 
simply saying that not everyone who embeds JavaScript deals with life in an 
asynchronous/isosynchronous mindset. We're not all clones of Node.js. My 
embedding is entirely synchronous, and always will be, for its use case. 
Which means that if we introduce asynchronicity/isosynchronicity at some 
level, like in the actual JavaScript layer (with setTimeout(), etc), then at 
some higher level, like the engine (my preference), or my embedding (not my 
preference), blocking behavior has to be written so that I can maintain 
the external synchronicity that my embedding provides. Namely, I have to be 
able to ensure that program A is totally done before program B runs.


Could I figure out event-loop semantics in my C/C++ layer, assuming the V8 
API exposed some API way to determine if events were still unfulfilled, and 
basically provide the event-loop functionality *per-program* in my 
embedding? Sure, I guess I could always learn how to code that stuff. Or, I 
could simply go to extra work to disable any such native functions so that 
code running through my embedding cannot create isosynchronous conditions.


Neither of those two are very preferable scenarios to me. What I like about 
my embedding of V8 right now is, I don't have to worry about those details, 
so my embedding is straight-forward and simple.



In other words, to put it simply, if program A can call setTimeout(), and 
I want to run program A and then program B, I have to be able to make 
sure that I don't try to run program B until everything is fully finished 
in A. As V8 stands now, there's no way to do anything non-synchronous, so 
when A finishes, I know it's totally finished. I'm concerned that there'd 
be some new way with setTimeout()'s that this wouldn't be true.


This is not true today with V8 in Chrome, precisely due to setTimeout!

It's not true in Node either.

I'm not sure where you think it's true. V8 embedding in something without 
setTimeout or anything like it, sure. But then you can't write simple 
time-based programs, set alarms, etc., so back in the door comes 
setTimeout or a workalike...


If I embed V8 in a simple C/C++ program, and I try to run a snippet of 
JavaScript that calls `setTimeout(...)`, the V8 engine complains and says 
that setTimeout is undefined. Ergo, my assertion that setTimeout() is *not* 
in core V8, but must be added to V8 by the Chrome/Chromium embedding.


As I said above, I don't currently go to that extra trouble, because for my 
embedding use-case, such functionality is more trouble than it's worth.


What I'd simply prefer not to be forced into is some day the core engine (by 
virtue of the core JS spec requiring it) of something like V8 having 
setTimeout() defined, and *forcing* the embedding to have to deal with the 
event-loop. Not everyone who does JS embedding needs such constructs.


Perhaps my complaint is more an engine/embedding complaint than a JS spec 
complaint. Perhaps what I'm saying is, I'd want for there to be an easy way 
for embedding with V8 (or any other engine) to be able to turn off/disable 
isosynchronous APIs so that the embedding wasn't forced to deal with the 
event-loop.


--Kyle


___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Bringing setTimeout to ECMAScript

2011-03-19 Thread Kyle Simpson

* No clamping. Time runs as fast as the platform lets it run.
* The return value is not an integer but a unique unforgeable object for 
canceling the event. No one without

that object can cancel that event.


This last point is something I was about to raise when starting to think 
about extending the Q API.
setTimeout returns a number, so a malicious script could loop through numbers 
and call clearTimeout on them. An
unforgeable object sounds like the correct response to this issue. Have 
you considered wrapping
setTimeout & friends in SES with this solution? It wouldn't break code 
which do not rely on the returned value

being a number (but would break code which does, of course).


Caja does exactly this. So far we haven't found any code that this 
actually breaks. All uses we've encountered[1]
treat the return value of setTimeout/setInterval as simply something they 
can remember and later pass to

clearTimeout/clearInterval.

[1] That I'm aware of. Caja users should speak up if they've hit a 
counter-example. Or anyone else that has seen

code in the wild that actually counts on these values being numbers.


I have a number of different projects where I've used timers (mostly 
intervals rather than timeouts), where the logic that I used relied on being 
able to tell if a value was falsy or not, to know if there's a timer 
attached to some variable (so that you only set the interval once, and not 
multiple times).


So, for instance, a variable starts as `foo = false`; when an interval is set, 
`foo = setInterval(...)`, and when that interval is cleared, I not only call 
`clearInterval(foo)` but I also call `foo = false` again to signal that the 
interval has been cleared (there is no `checkInterval(foo)` API to check 
this). Then, the next time I want to set the interval, I first check to see 
if `foo` is truthy or falsy, and only set if it's indeed falsy. So, all this 
is to say, I rely in those tests on whether the value is truthy or falsy. 
Not exactly relying on its data type, but it's important not to assume that 
no one ever checks those values -- in many cases, I do.
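The guard pattern described above, as a minimal sketch (the names are illustrative, not from any real project):

```js
var pollTimer = false; // falsy sentinel: no interval currently set

function startPolling(fn, ms) {
  // only set the interval once; a truthy value means it's already running
  if (!pollTimer) {
    pollTimer = setInterval(fn, ms);
  }
}

function stopPolling() {
  if (pollTimer) {
    clearInterval(pollTimer);
    pollTimer = false; // reset the sentinel so a later start works again
  }
}
```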




--Kyle


___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Bringing setTimeout to ECMAScript

2011-03-19 Thread Kyle Simpson
Kyle: If there was a way to determine which timers are currently queued, 
would that solve your problem? That's probably the only thing I'm missing 
right now from the timer api: Some array with all queued 
timeouts/intervals. Maybe that's to prevent the clear attack mentioned 
before, looping all numbers and clearing them. I don't know. But if you 
had such an array (either with just ints, or even an object with 
{int,time/interval,callback}) would that suffice for you? You could check 
the list and block while there are timers left.


Well, my point was, I'd like the embedding layer not to have to implement 
that kind of logic if there's a way to avoid it. But yes, in terms of 
least-effort, that sounds like the simplest approach to essentially blocking 
on the event loop queue to wait for it to complete. However, we'd have to 
consider all such possible events, not just timers... XHR comes to mind. 
server-sent events comes to mind (aka, server-to-server events). etc.



--Kyle 


___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Bringing setTimeout to ECMAScript

2011-03-18 Thread Kyle Simpson

As I understand it, this type of thing was kept out of the language
proper intentionally, because of its strong dependency on host
environment. Some host environments may require tight and overriding
control of any event handling system, and exactly which types of
events (such as timeouts) are suitable to an environment may vary. A
server side host might not want to have to deal with asynchronous
activity at all, for instance.


Speaking as someone who has written and currently maintains a *synchronous* 
server-side JavaScript environment (based on V8), I attest to the statement 
that I would *not* like it if V8 had `setTimeout()` (...etc) in it, because 
unless V8 were going to take care of that completely black-box for me, then 
I'd have to either disable such interfaces, or figure out some more 
complicated functionality in my environment to handle the concurrency.


I prefer they stay out of the engine, unless the engine is going to 
completely take care of it. The most important part of the engine taking 
care of it would be blocking the end of the program to wait for any 
outstanding event loop turns that had not yet fired, etc. Seems like that 
could get really messy.


--Kyle



___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Standardizing __proto__

2011-03-18 Thread Kyle Simpson
There's LOTS of sites out there that still (unfortunately) do unsafe 
overwriting/overloading of the native's prototypes. For instance, just a few 
months ago, I ran across a site that was creating a Array.prototype.push() 
implementation that was incompatible with the standard implementation. When 
I injected jQuery onto that page, jQuery failed to work because Sizzle 
relies on being able to call push() with multiple parameters (something the 
page's .push() didn't handle). And there are many, many other examples, like 
adding String.prototype.trim(), etc.


The point? If everyone were in the habit of using sandboxable natives, like 
FuseBox provides, then that page could override its version of Array all it 
wanted (even the native one), and my code, using Fuse.Array, would be 
perfectly safe.


Sandboxing a native-like object is just as much about preventing my changes 
from affecting others as it is about protecting myself from what others do.


Now, *can* I achieve the same thing without sandboxed natives? Of course. I 
can make fake data structure wrappers for every data type I care about. But 
I lose a lot of the semantics, operators, syntax-sugar of the actual 
natives. For instance, it's REALLY nice that a sandbox'd Array still lets me 
use the [] operator to access indices, etc. Is it perfect? No. But it's a 
LOT better than just choosing some custom namespace for my app and creating 
all new data structure wrappers. And in many cases, it's more 
efficient/performant, too.


To reiterate what John said earlier: The spirit of what FuseBox does doesn't 
require the mutability of the __proto__, but since at the moment there is no 
way to set the [[Prototype]]/[[Class]] of an object at creation time, 
__proto__ is the only option in some browsers (where iframe is buggy). If we 
can agree on something that allows such behavior at creation of an object, 
*including* Function objects (because I personally use a variation of 
FuseBox techniques to sandbox my functions), then __proto__ becomes 
unnecessary.


--Kyle



___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Bringing setTimeout to ECMAScript

2011-03-18 Thread Kyle Simpson
Speaking as someone who has written and currently maintains a 
*synchronous* server-side JavaScript environment (based on V8), I attest 
to the statement that I would *not* like it if V8 had `setTimeout()` 
(...etc) in it, because unless V8 were going to take care of that 
completely black-box for me, then I'd have to either disable such 
interfaces, or figure out some more complicated functionality in my 
environment to handle the concurrency.


First, as a matter of principle, if it's in ES6 then V8 will implement. So 
I'm told by people I trust.


Second, what about your embedding is hostile to setTimeout, which is more 
accurately weakly-isochronous than asynchronous?


I prefer they stay out of the engine, unless the engine is going to 
completely take care of it. The most important part of the engine taking 
care of it would be blocking the end of the program to wait for any 
outstanding event loop turns that had not yet fired, etc. Seems like 
that could get really messy.


Read one way (the worst-case interpretation), this shows great confusion 
about threads suck, i.e., setTimeout requires no multi-threading. In no 
scenario would there ever be multi-threaded blocking with races over 
shared-mutable state. What gave you this idea?


Read another way, if you mean pseudo-threads implemented with setTimeout 
never preempt one another but must all end before some larger notion of 
the program ends, then what is the problem, exactly?



I understand that JavaScript doesn't have threads. I also understand that 
JavaScript doesn't have true concurrency. I furthermore understand that when 
I call `setTimeout(fn,1000)`, it queues up `fn` to run after at least 
1000ms, or later, at the next earliest break where there's a free turn 
for it to run.


What I was saying is, if I run this program through V8 right now (with its 
theoretical future support of setTimeout() included), then what will happen:


function fn() {
  print("hello");
}
for (var i=0; i<10; i++) {
  setTimeout(fn,i*1000);
}

That for-loop will finish very quickly (probably < 1 ms). Would V8 (or any 
other JS engine) "finish" in the sense that the calling embedding code 
thinks this program is completely finished, and it returns back control to 
the C/C++ embedding layer when:


a) right after the for-loop finishes; OR
b) after only the first call to `fn`, since its timeout was effectively 0, 
and so would have been immediately after the main program finished; OR
c) after all of the queued up calls to `fn` have finished, about 9 seconds 
later?


I have a C/C++ program that embeds the V8 API, and it loads up a bit of 
JavaScript from a file, and it tells V8 to execute that bit of code, then it 
loads up another file, and tells it to run that code, etc. What's 
problematic in my mind is if my hosting environment would be signaled that 
the first program finished, if there were still `fn`s that were queued up 
to be called. I would need the V8 execution API from my C/C++ code to be 
blocked and to wait for all of the turns of that code to be exhausted, 
and all the isochronous queued `fn` calls to finish, before letting me go 
on to run my next program.


In other words, to put it simply, if program A can call setTimeout(), and I 
want to run program A and then program B, I have to be able to make sure 
that I don't try to run program B until everything is fully finished in A. 
As V8 stands now, there's no way to do anything non-synchronous, so when A 
finishes, I know it's totally finished. I'm concerned that there'd be some 
new way with setTimeout()'s that this wouldn't be true.


--Kyle



___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: iteration order for Object

2011-03-14 Thread Kyle Simpson
Aside from the JSON example of populating a dropdown list given (which I 
will agree is a real if contrived use case), there has been a lot of talk 
of thousands of web developers depending on preserving insertion order, 
but not one concrete example -- do you have one?


Two examples I've seen recently in projects, both relying primarily on the 
for-in iteration order of an object:


1. exporting objects (like JSON, etc) to log files (server-side 
javascript)... needing a reliable order for the keys to be printed to the 
log file output, like the datetime field first, etc. A variation on this 
is using JSON.stringify(obj) and wanting the JSON output to have a reliable 
output order, also for log files.


2. Using an object literal as a UI/form configuration where each field 
of the object represents a form element in a form-builder UI. If the 
iteration order of the object is different in different engines/browsers, 
the UI ends up being displayed in different orders.
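For example, the log-file case in (1) leans on for-in visiting keys in insertion order -- which, as this thread establishes, is de facto engine behavior rather than anything the spec guarantees:

```js
var logEntry = {
  datetime: "2011-03-14T12:00:00Z",
  level: "info",
  msg: "started"
};

var fields = [];
for (var k in logEntry) {
  fields.push(k + "=" + logEntry[k]);
}
var line = fields.join(" ");
// in engines that preserve insertion order for string keys:
// "datetime=2011-03-14T12:00:00Z level=info msg=started"
```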



--Kyle 


___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: [whatwg] Cryptographically strong random numbers

2011-02-22 Thread Kyle Simpson
I didn't see a way to set the seed of Math.random(), so the 
ECMAScript/Javascript version lacks this useful property.


Butting in, for a moment... I thought JavaScript seeded the PRNG with the 
timestamp by default? Perhaps I'm totally off base, but that's long been my 
assumption, and the reason why I never questioned not having a `seed()` 
function to set it, since the timestamp was fine for most non-crypto needs. 
I also recall that being one of the main complaints about Math.random for 
crypto needs.



But, having both a repeatable random function and a secure random function 
in a language is certainly reasonable.


If it's a choice between repeatable-random and actual random... I vote 
actual random with all my fingers and toes. Test harnesses can override 
Math.random() with fake repeatable sequences for that purpose. In the real 
world (non-testing), repeatable-random is far less desirable, at least I 
would think.
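For instance, a test harness could swap in a tiny seeded LCG (the multiplier/increment below are the common Numerical Recipes constants -- an illustrative choice, nothing spec'd):

```js
function seededRandom(seed) {
  var state = seed >>> 0;
  return function () {
    state = (state * 1664525 + 1013904223) >>> 0;
    return state / 4294967296; // in [0, 1), like Math.random()
  };
}

var realRandom = Math.random;
Math.random = seededRandom(42);
var a = Math.random();
Math.random = seededRandom(42); // re-seed: the sequence repeats
var b = Math.random();
Math.random = realRandom; // restore the engine's PRNG
// a === b
```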


(butting back out)


--Kyle



___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


idea: try/catch and rethrow...?

2011-02-01 Thread Kyle Simpson
I have something that annoys me about how JavaScript try/catch error 
handling currently works. Don't get me wrong, I totally understand why it 
works that way, and it makes sense. But having the option to get around that 
behavior would be really nice.


My idea/proposal is illustrated with this gist:

https://gist.github.com/802625

Essentially, what I'm running into is several different use-cases for the 
need to wrap a try/catch around some call, and observe if it error'ed or 
not. Notice in the code snippet that I'm modifying the error's message 
string before re-throwing it. However, that's only one use-case. I might 
very well not need to modify the error, but simply passively observe (ie, 
not interfere with the bubbling up of that error) and for instance do some 
sort of cleanup or other graceful handling, and then pass the error along to 
continue its bubbling.


The reason for not wanting to interfere is in the sense of wanting the 
original error to maintain its original execution context (almost like an 
error propagation stack if you will), so that when the browser/engine 
reports the uncaught error, it reports it from the correct origination point 
(source file, line-number, etc).
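A sketch of that observe-and-rethrow shape (the helper name here is hypothetical, not the gist's actual API) -- re-throwing the *same* error object, rather than constructing a new one, is what would ideally preserve the original context:

```js
function withContext(fn, label) {
  try {
    return fn();
  } catch (err) {
    err.message = label + ": " + err.message; // annotate the message...
    throw err; // ...but rethrow the original object, keeping its origin
  }
}
```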


If you try/catch such an error, and then re-throw it, that context is 
lost. I'm not positive, but I'm guessing that perhaps this is intentional 
by-design, and not just a particular implementation detail. If it *is* 
standardized, I'm proposing an alternative/addition. If not, I'm proposing 
we standardize a way to both preserve and explicitly not-preserve (aka, 
override) the original context. For compat reasons, the default would 
certainly stay as it currently is, with my idea being opt-in if the author 
needs it.


Of course, JavaScript doesn't expose the internals like the original context 
source file/line-number/etc (although I kinda wish it would), so for purely 
JavaScript sake it really doesn't matter. But I'm running into this in the 
server-side JavaScript world numerous times, and wishing the engine could 
keep that context. It's also useful for debugging purposes when looking at 
JavaScript errors reported in the browser's error console, for instance.


I'm interested in thoughts on either snippet's approach, and on the 
feasibility of something like this.



--Kyle Simpson




___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: idea: try/catch and rethrow...?

2011-02-01 Thread Kyle Simpson

Brendan/all--

I just tested, and the first snippet (just `throw`ing the same error object) 
indeed worked as I wanted (preserved the original source/line-number context) 
in FF3.6/4, IE9, Saf5, and Op11. It only fails to preserve context in Chr8 
(V8).
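A sketch of the behavior being tested (the function names here are illustrative, not from the original snippet): re-throwing the same `Error` object should leave its original stack/line info untouched, so the recorded stack still points at where the error was created, not where it was re-thrown:

```js
// The error's original context is recorded here, inside inner().
function inner() {
  throw new Error("boom");
}

let caught;
try {
  try {
    inner();
  } catch (err) {
    // Re-throw the SAME object, not `new Error(err.message)`.
    throw err;
  }
} catch (err) {
  caught = err;
}
// In engines that preserve context, caught.stack still names inner()
// as the origination point.
```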


So, it would seem that my idea is valid but that I'm way late to the game: 
everyone's already done it (except for V8), and I just need to file a V8 
ticket. Sorry for the premature post without doing proper checking. That's 
what I get for assuming too much about V8 and its standards-compliance.


I would like to know, is this something that is indeed spec'd for 
JavaScript, or just an implementation detail? Should it be spec'd? Could it 
even be spec'd?


--Kyle




Re: idea: try/catch and rethrow...?

2011-02-01 Thread Kyle Simpson
FYI: There was already a similar bug filed with V8. I updated it to 
indicate that I'm still seeing this happen with ReferenceErrors.


http://code.google.com/p/v8/issues/detail?id=764

--Kyle




Re: New private names proposal

2010-12-22 Thread Kyle Simpson

  What about adding an attribute to properties that somehow
  identify which classes (in the prototype chain for protected)
  have access to the object? I'll leave the somehow up in the
  air, but you could introduce a [[Private]] attribute which, if not
  undefined, says which context must be set (and for protected,
  either directly or through the prototypal chain of the current
  context) to gain access to this property. And if that context is
  not found, some error is thrown. Maybe it would be
  [[EncapsulationType]] :: {private, protected, public} and
  [[EncapsulationContext]] :: ?. You could also add a simple api
  to check for these (isPrivate, isProtected, isPublic,
  hasEncapsulatedProperty, etc) depending on how it would affect
  in and enumeration.


I’m assuming (perhaps incorrectly) that this suggestion is to model the 
private vs. non-private flag as a “property descriptor” attribute that can 
be set by `Object.defineProperty()`. Am I correct?


If so, I think that makes a lot of sense. I would like `private` to work 
that way.


Of course, the setting of `private` would probably have to be one-way, like 
`configurable` is, so that such a property could not be made un-private by 
another context.
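To be clear about the analogy: no `private` property-descriptor attribute exists in JavaScript, so the sketch below only demonstrates the existing one-way behavior of `configurable` that a hypothetical `private` flag would mirror:

```js
const obj = {};
Object.defineProperty(obj, "secret", {
  value: 42,
  configurable: false, // one-way: cannot be flipped back to true
});

let flipFailed = false;
try {
  // Attempting to re-configure a non-configurable property throws.
  Object.defineProperty(obj, "secret", { configurable: true });
} catch (e) {
  flipFailed = true; // TypeError: cannot redefine property
}
```

A one-way `private` flag would presumably behave the same way: once set, no other context could redefine the property to remove it.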


BTW, pardon (and ignore) me if I just stepped on an ant-bed and confused the 
whole topic. I’ve been following this thread silently and mostly felt like 
it was much more complicated than I could understand. Peter’s post was the 
first one that seemed to make sense. :)


--Kyle

