Re: Proposal: Property Accessor Function Shorthand

2019-12-02 Thread Waldemar Horwat

On 11/24/19 9:17 PM, Bob Myers wrote:

FWIW, the syntax `.propName` does appear to be syntactically unambiguous.


It conflicts with contextual keywords such as `new . target`.
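
For example, `new . target` with whitespace is already valid syntax today, which is what a leading-dot shorthand would collide with (a small runnable sketch; the shorthand semantics themselves are hypothetical):

```js
function F() {
  // `new . target` denotes the new.target meta-property even with spaces
  // around the dot:
  console.log(new . target === F);   // true when F is called via `new`
}
new F();
// Under the proposal, `.target` would itself be an expression (roughly
// x => x.target), so the tokens after `new` become ambiguous.
```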

Waldemar
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Modulo Operator %%

2019-08-14 Thread Waldemar Horwat

On 8/13/19 8:32 PM, Michael Haufe wrote:

On 8/13/19 7:27 AM, Michael Haufe wrote:

I would prefer the syntax be ‘a mod b’ consistent with my wishlist item:


On 8/13/19 9:12 PM, Waldemar Horwat wrote:

This can bring up various syntactic troubles.  What does the following do?

let mod
+3

Is it calling the mod operator on the variable named "let" and +3?  Or is it defining a 
variable named "mod" with no initializer, followed by an expression?


I can't declare 'let' or 'var' as variable names, but even if I could (say, 
non-strict mode or ES3) that form would be a VariableDeclaration followed by an 
ExpressionStatement.

The proposed grammar extension is:

MultiplicativeOperator: one of
 * / % div mod


And I'm saying that's potentially problematic because it changes the meaning of existing 
programs that happen to use "mod" as a variable name.  The above is one example 
that would turn a let statement into a mod expression.  Here's another example:

x = 4
mod(foo)
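
Spelled out as a runnable sketch (the body of `mod` is made up for illustration):

```js
// Valid today: `mod` is an ordinary identifier, so this is two statements,
// an assignment followed by a call expression.
var foo = 6;
function mod(n) { return n % 5; }

var x;
x = 4
mod(foo)   // today: calls mod(6); with `mod` as a multiplicative operator,
           // the whole thing would instead parse as `x = 4 mod (foo)`
```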

Waldemar
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Modulo Operator %%

2019-08-13 Thread Waldemar Horwat

On 8/13/19 7:27 AM, Michael Haufe wrote:

I would prefer the syntax be ‘a mod b’ consistent with my wishlist item:


This can bring up various syntactic troubles.  What does the following do?

let mod
+3

Is it calling the mod operator on the variable named "let" and +3?  Or is it defining a 
variable named "mod" with no initializer, followed by an expression?

Waldemar
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Removing the space in `a+ +b`?

2019-06-28 Thread Waldemar Horwat

On 6/28/19 8:41 AM, Isiah Meadows wrote:

Currently, the expression `a+ +b` requires a space to disambiguate it from the 
increment operator. However, `a++b` is not itself valid, as the postfix 
increment cannot be immediately followed by a bare identifier on the same line, 
nor can a prefix operator be preceded by one on the same line. Could the 
grammar be amended to include the following production and make it evaluate 
equivalently to `a+ +b`?

AdditiveExpression : AdditiveExpression `++` [no LineTerminator here] UnaryExpression


Maybe it could, but the extra complexity really doesn't seem worth it, 
particularly since you'd need the unusual line terminator restriction to avoid 
breaking existing code.  Other uses of + don't have line terminator 
restrictions.
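
A sketch of the existing code that the restriction would protect (the proposed production is not part of the language; the comments describe both readings):

```js
let a = 1, b = 2;
a
++b
// Today, automatic semicolon insertion turns this into `a; ++b`, so b
// becomes 3 and a stays 1. Without a [no LineTerminator here] restriction,
// an `AdditiveExpression ++ UnaryExpression` production could parse the
// same text as `a + (+b)`, silently changing its meaning.
console.log(a, b);   // 1 3
```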

Waldemar
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Proposal: Static Typing

2019-04-03 Thread Waldemar Horwat

On 3/23/19 1:34 PM, IdkGoodName Vilius wrote:

This is a proposal for static typing. Here is the github repository link: 
https://github.com/CreatorVilius/Proposal-Static-Typing
I think it would be great thing in JS.


We intentionally reserved syntax so that something like that would be possible 
in the future.

I've also spent a lot of time working on past proposals to do such things.  A 
few interesting issues would invariably arise that make both static and runtime 
typing unsound:

- If you allow user-defined types, objects can spontaneously change their type 
by mutating their prototype.  Thus, you can declare a variable x to have type 
Foo and check that you're assigning an instance of Foo to it, but the value x 
can become a Bar (different from a Foo) spontaneously without any operations on 
x (see the sketches following this list).

- Something similar happens with trying to type arrays.  You wrote:

let c: auto = ['a', 'b']  // c is now string[]

but that raises the question of what the type of just ['a', 'b'] is.

- Is it string[]?  No, it can't be that because you can replace its second 
element with a number.
- Is it any[]?  Well, in that case c should have type any[], not string[]
- Is it object?  In that case c should have type Object.
and so on.
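
Two small sketches of the points above, written in today's syntax (the type annotations mentioned are hypothetical):

```js
// 1. Spontaneous type change via prototype mutation:
class Foo {}
class Bar {}
let x = new Foo();                     // suppose x were declared with type Foo
Object.setPrototypeOf(x, Bar.prototype);
console.log(x instanceof Foo);         // false: x "became" a Bar with no
                                       // assignment to x at all

// 2. No sound element type for an array literal:
let c = ['a', 'b'];                    // looks like string[] ...
c[1] = 3;                              // ... but this is legal today
console.log(c);                        // ['a', 3], so string[] would be unsound
```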

If you follow this to its logical conclusion and think about what the types of 
various methods that work on arrays should be, you end up with an enormous and 
confusing variety of array and object types, which is something we explored 
years ago.  In some cases you'd want structural types, in some cases you'd want 
'like' types (an array of anything which just happens to hold numbers at the 
moment), and so on.

Waldemar
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Proposal: switch statement multiple

2019-02-20 Thread Waldemar Horwat

On 02/15/2019 08:02 PM, Juan Pablo Garcia wrote:

I think it would be great if the switch statement allowed multiple arguments

Example
Switch(a,b)
Case: 1,true
Case: 1,false
Case: 2,true



You need braces for the switch statement, and the colon goes after the expression; I 
assume you meant "case 1, true:"?

The syntax wouldn't work for the simple reason that the language already 
defines that syntax and it does something else.  switch(a,b) evaluates a for 
its side effect and then switches on b.  The same goes for the case expressions.
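
A runnable sketch of that existing meaning:

```js
let a = 1, b = true;
// Valid today: the comma operator evaluates `a` for its side effects and
// then switches on `b`; the case expression likewise evaluates 1 and then
// compares b against `true`.
switch (a, b) {
  case 1, true:
    console.log("matched");   // logs, because b === true
    break;
}
```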

Waldemar
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: NumberFormat maxSignificantDigits Limit

2019-02-04 Thread Waldemar Horwat

There is precedent for using numbers around 20 for significant digit cutoffs 
in the spec.  For example, if you look at how number tokens are parsed in the 
spec (§11.8.3.1), the implementation has the option to ignore significant 
digits after the 20th.  That's not a bug; we did that intentionally in the 
early days of ECMAScript as a compromise between always requiring exact 
precision and cutting off after the 17th digit, which would get a significant 
fraction of values wrong.
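
A small illustration (the specific literal is mine, not from the original message):

```js
// The Number value 0.1 is exactly
// 0.1000000000000000055511151231257827021181583404541015625,
// so a literal carrying well over 20 significant digits maps to the same
// value whether or not an implementation ignores the digits past the 20th:
console.log(Number("0.1000000000000000055511151231257827") === 0.1);   // true
```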

Waldemar
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: New Proposal: Placeholder syntax

2018-12-07 Thread Waldemar Horwat

On 11/28/2018 10:17 AM, Andrew Kaiser wrote:

Hi all,

I have created a short proposal to introduce syntactic sugar for anonymous 
functions in a 'scala-like' manner, linked here 
https://github.com/andykais/proposal-placeholder-syntax.

I am hoping to hear feedback on whether or not this is interesting to people, 
as well as feedback on the proposal itself (e.g. is there a better operator to 
use than ` * `)


This is error-prone:

  const sum = numbers.reduce(? + ?)
transforms into
  const sum = numbers.reduce((x, y) => x + y)

but then:

  const identity = numbers.reduce(?)
transforms into
  const identity = (x) => numbers.reduce(x)
instead of the analogous
  const identity = numbers.reduce((x) => x)

And what would
  const z = numbers.reduce(? + ? > 0)
or
  const z = numbers.reduce(2*(? + ?))
or
  const z = numbers.reduce(foo(? + ?))
transform into?

Waldemar
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: arrow function syntax simplified

2018-10-26 Thread Waldemar Horwat

On 10/25/2018 04:04 PM, manuelbarzi wrote:

The committee has been swamped with numerous such syntax proposals.  While 
any single one may be reasonable in isolation, each one adds significant 
complexity to the language, and the sum of them is too great (plus multiple 
proposals try to grab the same syntax for different purposes).

AFAIS this proposal does not collide with any other one pointing to the same syntax.


I've seen informal proposals that do something else with ->.


Given the existing two syntaxes for defining functions (function and =>), 
creating a third one would just add complexity.

would not that "complexity" be worth it, in favor of less bureaucracy and code 
compression?


No.


why would just adding a shorthand syntax for functions with thin arrows be such a 
"complexity drama"? Because people would mix functions and thin arrows? That's 
already a reality with code using functions and fat arrows, and AFAIK nobody complains about 
it. The only difference here is being aware of when to auto-bind (`=>`) or not to auto-bind 
(`->`). I don't see any drama with that.


That's actually the major source of the drama.  The difference between => and 
-> would be subtle enough that it would be hard for casual users to keep track of 
which is which.

Waldemar
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Expectations around line ending behavior for U+2028 and U+2029

2018-10-25 Thread Waldemar Horwat

On 10/25/2018 09:24 AM, Logan Smyth wrote:

Yeah, /LineTerminatorSequence/ is definitely the canonical definition of line 
numbers in JS at the moment. As we explore 
https://github.com/tc39/proposal-error-stacks, it would be good to clearly 
specify how a line number is computed from the original source. As currently 
specified, a line number in a stack trace takes U+2028/29 into account, and 
thus requires any consumer of this source code and line number value needs to 
have a special case for JS code. It seems unrealistic to expect every piece of 
tooling that works with source code would have a special case for JS code to 
take these 2 characters into account. Given that, the choices are

1. Every tool that manipulates source code needs to know what kind of code it 
is so it can special-case JS in order to process line-related information
2. Every tool should consider U+2028/29 newlines, causing line numbers to be 
off in other programming languages
3. Accept that tooling and the spec will never correspond and the use of these 
two characters in source code will continue to cause issues
4. Diverge the definition of current source-code line from the current 
/LineTerminatorSequence/ lexical grammar such that a source line break is always 
/\r?\n/, which is what the user is realistically going to see in their editor


The Unicode standard is the more relevant one here.  Choice 2 is the correct 
one per the Unicode standard.  Tools that do not consider U+2028/29 to be line 
breaks are not behaving as they should according to the latest Unicode standard.
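
For reference, the JS lexical grammar already treats U+2028 as a line terminator (illustrative):

```js
// U+2028 LINE SEPARATOR ends a source line, so the evaluated string below
// is a three-line program and automatic semicolon insertion applies:
const src = "let a = 1\u2028let b = 2\u2028a + b";
console.log(eval(src));   // 3: parsed as three statements on three "lines"
```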

Waldemar
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: arrow function syntax simplified

2018-10-25 Thread Waldemar Horwat

On 10/25/2018 07:55 AM, manuelbarzi wrote:

not focusing on aesthetics, but on reducing bureaucracy, which not by 
coincidence is something fat-arrow functions already provide.


The committee has been swamped with numerous such syntax proposals.  While any 
single one may be reasonable in isolation, each one adds significant complexity 
to the language, and the sum of them is too great (plus multiple proposals try 
to grab the same syntax for different purposes).

You will not be able to deprecate `function`, as that's just not web-compatible.  
Given the existing two syntaxes for defining functions (function and =>), 
creating a third one would just add complexity.

Waldemar
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: return =

2018-09-05 Thread Waldemar Horwat

On 09/03/2018 11:32 AM, Isiah Meadows wrote:

There is literally only one language I've seen that has anything like
this, and it's Verilog, a hardware description language. (It's also of
questionable utility, and it's restricted to just simulator-only
constructs.) That's not an endorsement, more like the opposite of one.


Pascal works that way too.  You use an assignment statement to assign to the 
name of the function to set a function's return value.

Waldemar
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Class data member declarations proposal idea

2018-08-07 Thread Waldemar Horwat

See this proposal, currently at stage 3:  
https://github.com/tc39/proposal-class-fields

Waldemar
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: !Re: proposal: Object Members

2018-08-07 Thread Waldemar Horwat

On 08/03/2018 08:30 PM, Bob Myers wrote:

 > `private`, `protected`, `class`, and a few other such keywords have all been 
part of ES since long before the TC39 board got their hands on it. They hadn't 
been implemented, but they were all a very real part of ES.

Whoa. Is that just misinformed or intentionally misleading? They have never been "part of ES" in any meaningful sense. 
It was not that they had not been implemented; it was that they had not even been defined. To say they are a "real part of 
ES" is a strange interpretation of the meaning of the word "real". The notion that we would choose features to 
work on based on some designation of certain keywords as reserved long ago, and that they are now "languishing", is 
odd. Why not focus on "implementing" enum, or final, or throws, or any other of the dozens of reserved words?

Having said that, I think it is a valid general principle that as language 
designers we should be very reluctant to use magic characters. `**` is fine, of 
course, as is `=>`, or even `@` for decorators. Personally, I don't think the 
problem of access modifiers rises to the level of commonality and need for 
conciseness that would justify eating up another magic character. We also don't 
want JS to start looking like Perl or APL.

Speaking as a self-appointed representative of Plain Old Programmers, I do feel 
a need for private fields, although it was probably starting to program more in 
TS that got me thinking that way. However, to me it feels odd to tie this 
directly to `class` syntax. Why can't I have a private field in a plain old 
object like `{a: 1}` (i.e., one that would only be accessible via a method on that 
object)? We already have properties which are enumerable and writable, for 
example, independent of the class mechanism. Why not have properties which are 
private in the same way?

The problem, of course, is that even assuming the engines implemented the `private` 
property on descriptors, I obviously don't want to have to write `Object.create({}, 
{a: {value: 22, private: true}})`. So the problem can be restated as trying to find 
some nice sugar for writing the above.  You know, something like `{a: 
22}`. That's obviously a completely random syntax suggestion, just to show the idea. 
Perhaps we'd prefer to have the access modifiers be specifiable under program control 
as an object itself, to allow something like

```
const PRIVATE = {private: true};

const myObject = { a(PRIVATE): 2 };
```

But what would the precise meaning of such a `private` descriptor property be? 
In operational terms, it could suffice to imagine (as a behavior, not as an 
implementation strategy) that objects would have a flag that would skip over 
private properties when doing property lookups. I think the right 
implementation is to have a private property look like it's not there at all 
when access is attempted from outside the object (in other words, is 
undefined), rather than some kind of `PrivatePropertyAccessError`.

The above approach ought to be extensible to class notation:

```
class Foo {
   bar() { return 22; }
}
```

which would end up being something like `Object.defineProperty(Foo.prototype, 
"bar", {value() {return 22; }, private: true})`.

Or when classes get instance variables:

```
class Foo {
   bar = 22;
}
```

Was anything along these lines already brought up in this discussion?


Yes.  There are a couple answers:

- If you have the private marker on both definitions and accesses of the 
property, then you get a proposal that's essentially isomorphic to the committee's 
current private proposal.

- If you have the private marker on definitions but not on accesses of the 
property, then the proposal leaks private state like a sieve:

For example:

class Foo {
  private x;

  SetX() {this.x = ...;}
}

To discover what's written into the supposedly private x, just call SetX on an 
object that's not an instance of Foo.
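
A sketch of that leak, written in today's syntax since the marker-on-definitions-only semantics are hypothetical (makeSecret stands in for any internal computation):

```js
function makeSecret() { return "internal secret"; }   // stand-in value

class Foo {
  // x would be declared private in the hypothetical proposal, but the
  // access site below looks like an ordinary property write:
  SetX() { this.x = makeSecret(); }
}

const probe = {};                 // not an instance of Foo
Foo.prototype.SetX.call(probe);
console.log(probe.x);             // "internal secret": the value leaks
```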

There are analogous examples for reading instead of writing.

Waldemar
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: !Re: proposal: Object Members

2018-08-03 Thread Waldemar Horwat

On 08/03/2018 02:37 PM, Tab Atkins Jr. wrote:

Yes, they were reserved because they were the Java reserved keywords,
with the intention that we might add more Java features later in the
langauge's evolution. That has no bearing on their use today.


That's exactly what we did.  In the early days of ECMAScript we had no plans to 
use those but reserved them just in case.  Some of the ones we reserved later 
accidentally became unreserved.

Introducing a new keyword-based feature when that keyword isn't reserved leads to severe 
complications and arbitrary [no linebreak here] rules.  The nastiest cover grammars are 
some of the consequences of things like being able to use "async" as a function 
name.
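
A small example of the `async` situation (illustrative):

```js
// `async` is not a reserved word, so after seeing `async (x` the parser
// cannot yet tell which construct it has; a cover grammar defers the
// decision until after the parameter list.
var async = f => f, x = 41;
console.log(async (x));            // 41: a plain call to the function named async
const g = async (x) => x + 1;      // an async arrow function
g(1).then(v => console.log(v));    // 2
```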

Waldemar
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: !Re: proposal: Object Members

2018-07-30 Thread Waldemar Horwat

On 07/29/2018 04:37 PM, Isiah Meadows wrote:

BTW, I came up with an alternate proposal for privacy altogether:
https://github.com/tc39/proposal-class-fields/issues/115

TL;DR: private symbols that proxies can't see and that can't be enumerated.


Aside from syntax, the main semantic difference I see between this alternative 
and the main one is that this alternative defines private fields as expandos, 
creating opportunities for mischief by attaching them to unexpected objects.  
Aside from privacy, one of the things the private fields proposal gives you is 
consistency among multiple private fields on the same object.  In the rare 
cases where you don't want that, you could use weak maps.
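
A minimal sketch of the weak-map alternative mentioned above (my example, not from the thread):

```js
// Per-instance private state held outside the object, so nothing can be
// attached to unexpected objects:
const counts = new WeakMap();

class Counter {
  constructor() { counts.set(this, 0); }
  increment()   { counts.set(this, counts.get(this) + 1); }
  get value()   { return counts.get(this); }
}

const c = new Counter();
c.increment();
console.log(c.value);   // 1
```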

Waldemar
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: proposal: Object Members

2018-07-27 Thread Waldemar Horwat

On 07/26/2018 01:55 PM, Ranando King wrote:

I've just finished updating my proposal with an [Existing proposals](https://github.com/rdking/proposal-object-members/blob/master/README.md#existing-proposals) section that lists the major differences.


Reading the proposal, I'm not yet sure what it's supposed to do.  Some things 
I've noticed:

- The proposal commingles static and instance private properties.  In the first 
example, you could read or write this#.field2, and it would work.  How would 
you encode the generic expression a#.[b]?

- Worse, the proposal commingles all private properties across all classes.  
There's nothing in the proposed code stopping you from reading and writing 
private properties defined by class A on instances of an unrelated class B.

Waldemar
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: JSON support for BigInt in Chrome/V8

2018-07-17 Thread Waldemar Horwat

On 07/17/2018 04:27 AM, Andrea Giammarchi wrote:

actually, never mind ... but I find it hilarious that 
BigInt('55501') works but 
BigInt('55501n') doesn't ^_^;;


That's no different from how other built-in types work.  String('"foo"') doesn't give you 
the same string as the string literal "foo".
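
Concretely (a sketch of the analogy):

```js
console.log(BigInt("123"));    // 123n: the string holds only the digits
// BigInt("123n") throws a SyntaxError; the `n` suffix belongs to BigInt
// literals in source code, not to the string form of the value.
console.log(String('"foo"'));  // a five-character string including the quotes:
                               // the argument is not re-parsed as a literal,
                               // just as "123n" is not re-parsed as a BigInt literal
```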

Waldemar
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Proposal: Pilot-Wave -based execution flow

2018-05-16 Thread Waldemar Horwat

Looks like the process of picking random keywords from one of the 
interpretations of quantum mechanics and stringing them into nonsensical 
phrases.  See also Cuil Theory.

Waldemar


On 05/12/2018 10:52 AM, Michael Luder-Rosefield wrote:

can I be the first to say: what

On Sat, 12 May 2018, 16:31 Abdul Shabazz wrote:

A velocity vector can also be added to detect the presence of malware, 
which in turn can effect the mass. If the mass at any point is changed, then 
the pipeline is dropped.

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: EcmaScript Proposal – Private methods and fields proposals.

2018-04-27 Thread Waldemar Horwat

On 04/17/2018 05:31 PM, Sultan wrote:

Do you limit classes to creating only the private fields declared in the class, 
or can they create arbitrarily named ones?


Yes, just as you could write arbitrary named fields with the mentioned WeakMap 
approach, for example –

[...] private[key] = value
[...] private(this)[key] = value
[...] registry.get(this)[key] = value

and retrieve arbitrary fields

[...]private[key]
[...]private(this)[key]
[...]registry.get(this)[key]


The full form is expr.#foo, where expr can be `this` or some other expression 
appropriate in front of a dot. The #foo binds to the innermost enclosing class 
that has a private field called foo. If expr doesn't evaluate to an instance of 
that class, you fail and throw.


Is this what you meant?

class A {
   #foo = 1
   constructor() {
     let self = this

     this.B = class B {
       constructor() {
         self.#foo = 2
       }
     }
   }
   get() {
     return this.#foo
   }
   run(program) {
     return eval(program)
   }
}

let a = new A()
let b = new instance.B()


What's instance?  I assume you meant new a.B().


Would this return 1 or 2 or would the previous statement throw?

console.log(a.get())


This would produce 2.


Additionally would this work

a.run('this.#foo = 3')


I have no idea.  It depends on how the details of the scope rules for private 
would work.

Waldemar
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: EcmaScript Proposal – Private methods and fields proposals.

2018-04-17 Thread Waldemar Horwat

On 04/17/2018 02:26 PM, Sultan wrote:

In the transpilation you created the field using "registry.set(this, {id: 0})"
in the constructor.  If you then claim that any write to the field can also 
create it, then you get the hijacking behavior which you wrote doesn't happen.


The difference between

class A {
   private id = 0
}

and

class A {
   constructor() {
     private.id = 0
   }
}

is likened to the difference between

(function (){
   var registry = new WeakMap()

   function A () {
     registry.set(this, {id: 0})
   }

   return A
})()

and

(function () {
   var registry = new WeakMap()

   function A () {
     registry.set(this, {})
     registry.get(this)["id"] = 0
   }

   return A
})()

I don't see how this permits the hijacking behavior previously mentioned, that 
is –

(new A()).write.call({}, 'pawned');

Would still fail in the same way for both of these variants.


OK; you split creation into two phases.  That's fine.  Do you limit classes to 
creating only the private fields declared in the class, or can they create 
arbitrarily named ones?


They just lexically scope the private names in their own separate namespace.  
#foo refers to the innermost enclosing class that has a private field called 
foo.


I'm not sure I understand. Does #foo refer to this.#foo? Can you post a fleshed 
out example of this?


The full form is expr.#foo, where expr can be `this` or some other expression 
appropriate in front of a dot.  The #foo binds to the innermost enclosing class 
that has a private field called foo.  If expr doesn't evaluate to an instance 
of that class, you fail and throw.

Read the proposals.

Waldemar
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: EcmaScript Proposal – Private methods and fields proposals.

2018-04-17 Thread Waldemar Horwat

On 04/17/2018 01:50 PM, Sultan wrote:

 >That would contradict your previous answer to the hijacking question.

Can you point out the contradiction? The private field is still being written 
to by the providing class.


In the transpilation you created the field using

  registry.set(this, {id: 0})

in the constructor.  If you then claim that any write to the field can also 
create it, then you get the hijacking behavior which you wrote doesn't happen.


Class B is lexically nested inside class A. You want to refer to one of A's 
privates from within B's body.


Can you provide an example of what this looks like with the current 
public/private fields proposals?


They just lexically scope the private names in their own separate namespace.  
#foo refers to the innermost enclosing class that has a private field called 
foo.

Waldemar
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: EcmaScript Proposal – Private methods and fields proposals.

2018-04-17 Thread Waldemar Horwat

On 04/16/2018 05:47 PM, Sultan wrote:

 >An instance has a fixed set of private fields which get created at object 
creation time.

The implications of this alternative do not necessarily limit the creation of 
private fields to creation time; for example, writing to a private field in the 
constructor or at any arbitrary time within the lifecycle of the instance.


That would contradict your previous answer to the hijacking question.


 >How do you deal with inner nested classes wanting to refer to outer classes' 
private fields?

Not sure i understood what you mean by this?


Class B is lexically nested inside class A.  You want to refer to one of A's 
privates from within B's body.

Waldemar
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: EcmaScript Proposal – Private methods and fields proposals.

2018-04-16 Thread Waldemar Horwat

On 04/13/2018 09:41 PM, Sultan wrote:

 >Writing your private field to an object that's not an instance of your class.
 >and then invoking the above write method with a this value that's not an 
instance of A, such as a proxy.

Given:

class A {
   private id = 0;
   private method(value) {
     return value;
   }
   write(value) {
     private(this)["id"] = private["method"](value);
   }
}

I imagine this means trying to do something along the lines of:

(new A()).write.call({}, 'pawned');

This would fail. The private syntax call site would be scoped to the provider 
class. For example imagine the current possible transpilation of this:

;(function (){
   var registry = new WeakMap();

   function A () {
     registry.set(this, {id: 0})
   }
   A.prototype.write = function (value) {
     registry.get(this)["id"] =
       registry.get(this.constructor)["method"].call(this, value);
   }

   // shared (i.e. private methods)
   registry.set(A, {
     method: function (value) {
       return value;
     }
   });

   return A
})();

Trying to do the afore-mentioned forge here would currently fail along the lines of cannot read 
property "id" of undefined.


OK, so that aspect of the proposal looks the same as the existing private 
proposals — an instance has a fixed set of private fields which get created at 
object creation time.  There are tricky additional wrinkles when it comes to 
inheritance, but you can look them up in the existing proposals.

Are the only significant changes the different property naming syntax and that 
you provide a way to map strings to private slots?  How do you deal with inner 
nested classes wanting to refer to outer classes' private fields?

Waldemar
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: EcmaScript Proposal – Private methods and fields proposals.

2018-04-13 Thread Waldemar Horwat

On 04/13/2018 01:38 AM, Sultan wrote:

The proposal is an explainer with regards to an alternative sigil-less syntax 
to back private fields/methods.


What does private(this)[property] do?


"private(this)[property]" and alternatively "private[property]" or "private.property" all invoke access of a private 
"property" on the "this" instance of the class, symmetrical to thesyntax/function nature of both the "super" 
and"import" keywords.


How do private fields come into existence?


Unless I've misunderstood what is meant by "come into existence", the proposal makes use of the 
reserved "private" keyword to define private fields, i.e. "private id = 1".


I was asking about what creates those fields.


What's private about private fields?


Outside of a private fields provider class, private fields/methods would not be 
accessible.


How do you prevent them from being forged or stuck onto unrelated objects?


What do you mean by this?


Writing your private field to an object that's not an instance of your class.

class A {
  private id = ...;
  private foo = ...;
  write(value) {
private(this)["id"] = value;
private(this)["foo"] = ... my private secret that anyone outside the class 
must not learn ...;
  }
}

and then invoking the above write method with a this value that's not an 
instance of A, such as a proxy.

Waldemar
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: EcmaScript Proposal – Private methods and fields proposals.

2018-04-12 Thread Waldemar Horwat

I read that proposal but don't understand what the proposal actually is.  At 
this point it's a bit of syntax with no semantics behind it.  What does 
private(this)[property] do?  How do private fields come into existence?  How do 
you prevent them from being forged or stuck onto unrelated objects?  What's 
private about private fields?

Waldemar
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Proposal: if variable initialization

2018-04-06 Thread Waldemar Horwat

On 03/22/2018 11:21 PM, Naveen Chawla wrote:

I'm still not seeing a compelling case for not allowing `const` / `let` 
declarations to be evaluated as expressions. Or I've missed it.


Yes, I think there is a compelling case for not allowing `const` / `let` 
declarations to be evaluated as expressions.


As was noted,

`if(x = 5)` is already allowed.

Is `if(const x = 5)` really that much of a stretch?


That's fine in this case, and I happily use this construct in C++.

But that's *very* different from allowing `const` / `let` declarations anyplace 
you allow an expression.


To answer a concern about a function call like `myFunction(const x = 7)`, of 
course the scope of `x` would be where it is declared.


And here we come to the problem: the scope.


It can't be anywhere else (like inside myFunction or something).


We wouldn't want to repeat the var hoisting problems, so the scope of a general 
subexpression declaration would need to be the subexpression in which it's 
contained and not some outer context.  If you don't do that, you'll eventually 
run into the same problems as with var.

However, that's not what the current uses of such declarations do.  For example,

for (let i = expr; ...) {...}

scopes i to the body of the loop, and you get a new i binding for each 
iteration, which is important for lambda capture, even though expr is evaluated 
only once.  Subexpression scoping would be incompatible with that.
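
A short example of the per-iteration binding being referred to (mine, for illustration):

```js
const fs = [];
for (let i = 0; i < 3; i++) {
  fs.push(() => i);              // each iteration captures its own i
}
console.log(fs.map(f => f()));   // [0, 1, 2]
// A declaration scoped only to the subexpression containing it could not
// provide this per-iteration behavior.
```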

As such, we can reasonably allow `const` / `let` declarations in other specific 
contexts such as at the top level of if statement condition expressions, but 
not in subexpressions in general.

Waldemar
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: try/catch/else

2018-02-12 Thread Waldemar Horwat

On 02/08/2018 06:50, Alan Plum wrote:

I realise there is some ambiguity in using the else keyword for this (though I can't 
think of a meaningful opposite of "catch" either).


Indeed.  You can't use 'else' without breaking existing behavior.  For example:

if (foo) try {...} catch (e) {...} else {...}

Here the `else` already binds to the `if`, so letting `try` claim an `else` clause
would change, or at least make ambiguous, the meaning of code like this.

Waldemar
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Arrow function followed by divide or syntax error?

2017-05-24 Thread Waldemar Horwat
x=x=>{}/alert(1)/+alert(2)// is a syntax error. We deliberately specified
it that way in the standard for the precedence reasons stated earlier in
the thread.

Brian Terlson is filing a bug as we speak.

Waldemar
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: LR(1) grammar/parser and lookahead-restrictions

2017-02-07 Thread Waldemar Horwat

On 02/07/2017 06:39, Michael Dyck wrote:

On 17-02-06 07:32 PM, Waldemar Horwat wrote:

On 02/04/2017 07:20, Michael Dyck wrote:

On 17-02-03 05:32 PM, Waldemar Horwat wrote:

On 02/03/2017 08:17, Michael Dyck wrote:

On 17-02-02 06:23 PM, Waldemar Horwat wrote:


Lookahead restrictions fit very well into an LR(1) engine


Again: Great, but how? E.g., do you pre-process the grammar, modify the
construction of the automaton, and/or modify the operation of the parser?


For each state × token combination, the automaton describes what happens
when you're in state S and see a token T.  The lookahead restrictions remove
possible transitions; without them there would be ambiguities where a given
state × token combination would want to do two incompatible things.


Okay, so do you generate the automaton (ignoring lookahead restrictions)
and then remove transitions (using lookahead restrictions)? Or do you
integrate the lookahead-restrictions into the generation of the automaton?


It's integrated.  You can't generate a valid automaton without the lookahead
restrictions.


So, when you're making LR items for a production with a lookahead-restriction 
(l-r), do you:

-- treat the l-r as a (nullable) symbol, so that you get an item with the dot 
before the l-r, and one with the dot after, or

-- treat the l-r as occurring between two adjacent symbols, so that you get an item where 
the dot is "at" the l-r, or

-- something else?


It's something else — it's directly tied into the generation of the automaton 
states.  Each automaton state contains a collection of possible places in the 
expansions of grammar rules that the state can represent.  Following a terminal 
symbol T from a state A leads to a state B.  A lookahead restriction prevents 
state B's collection from including expansions that would have been prohibited 
by the lookahead restriction on T.  If that generates an inconsistency (for 
example, if there are two ways to get to an identical state, one with a 
lookahead restriction and one without), the grammar validation fails.

Waldemar

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: LR(1) grammar/parser and lookahead-restrictions

2017-02-06 Thread Waldemar Horwat

On 02/04/2017 07:20, Michael Dyck wrote:

On 17-02-03 05:32 PM, Waldemar Horwat wrote:

On 02/03/2017 08:17, Michael Dyck wrote:

On 17-02-02 06:23 PM, Waldemar Horwat wrote:


Lookahead restrictions fit very well into an LR(1) engine


Again: Great, but how? E.g., do you pre-process the grammar, modify the
construction of the automaton, and/or modify the operation of the parser?


For each state × token combination, the automaton describes what happens
when you're in state S and see a token T.  The lookahead restrictions remove
possible transitions; without them there would be ambiguities where a given
state × token combination would want to do two incompatible things.


Okay, so do you generate the automaton (ignoring lookahead restrictions) and 
then remove transitions (using lookahead restrictions)? Or do you integrate the 
lookahead-restrictions into the generation of the automaton?


It's integrated.  You can't generate a valid automaton without the lookahead 
restrictions.


That's different from parametrized rules, which simply macro-expand into
lots of individual rules.


Yup.



But the context-dependentness of lexing is a parse-time problem, not a
validation-time problem, right?


No.


The syntactic level can just assume a stream of (correctly lexed) input
elements.


(I should have said "*Validation* of the syntactic level [etc]")


No!  It's impossible to create a stream of correctly lexed input elements
without doing syntactic level parsing.


I quite agree. I didn't mean to suggest otherwise. What I mean is that, once 
you've generated the automaton for the syntactic grammar, you can just look at 
each state's set of expected terminals, and from that deduce the goal symbol 
that the lexer will need to use to get the next token when the parser is in 
that state. The point being that you can do that *after* generating the 
syntactic automaton. So the context-dependentness of lexing doesn't have to 
affect the process of generating the syntactic automaton.


That's correct.


(That's assuming an LR(1)-ish parser, and an approach where you don't try to 
combine the syntactic and lexical grammars to generate a single [scannerless] 
automaton. Which may not be your approach.)


The parser and lexer stay separate, other than the lexer providing tokens to 
the parser and the parser selecting one of several top-level lexer goal symbols 
for lexing the next token.  I do not use any kind of unified parser-lexer 
grammar; that could run into issues such as the syntactic grammar making lexing 
non-greedy: a+++++b lexing as a ++ + ++ b instead of the correct a ++ ++ + b 
(which would then generate a syntax error at the syntactic level).


The validator looks for problems such as the syntactic grammar giving the
lexer contradictory instructions. An example would be any syntactic
grammar automaton state where one outgoing syntactic transition would
swallow a regexp token and another outgoing syntactic transition from the
same state would swallow a / token. If any such state exists, the grammar
is broken.


Yup, the spec asserts the non-existence of such states ("syntactic grammar 
contexts").

-Michael


Waldemar


___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: LR(1) grammar/parser and lookahead-restrictions

2017-02-03 Thread Waldemar Horwat

On 02/03/2017 08:17, Michael Dyck wrote:

On 17-02-02 06:23 PM, Waldemar Horwat wrote:


Lookahead restrictions fit very well into an LR(1) engine


Again: Great, but how? E.g., do you pre-process the grammar, modify the 
construction of the automaton, and/or modify the operation of the parser?


For each state × token combination, the automaton describes what happens when 
you're in state S and see a token T.  The lookahead restrictions remove 
possible transitions; without them there would be ambiguities where a given 
state × token combination would want to do two incompatible things.

That's different from parametrized rules, which simply macro-expand into lots 
of individual rules.


as long as they're limited to only one token, and that's what I've
implemented in the validator.


So when the grammar has a prohibited lookahead-sequence with more than one 
token (in ExpressionStatement, IterationStatement, and ExportDeclaration), does 
the validator just use the first token?


My LR(1) validator can't actually handle the case of multiple tokens of 
lookahead, and I didn't feel like doing an LR(2) grammar.  I had to check these 
by hand.


You need to be very careful with them if looking more than one token
ahead because lexing of the tokens can vary based on context. For
example, if the next few characters in front of the cursor are )/abc/i+,
then what is the second token? What is the third token? It's actually
context-dependent.


But the context-dependentness of lexing is a parse-time problem, not a 
validation-time problem, right?


No.


The syntactic level can just assume a stream of (correctly lexed) input 
elements.


No!  It's impossible to create a stream of correctly lexed input elements 
without doing syntactic level parsing.  See the example I gave above.


Or do you validate deeper than the syntactic grammar?


Yes.  The syntactic grammar controls the lexer.  The validator looks for 
problems such as the syntactic grammar giving the lexer contradictory 
instructions.  An example would be any syntactic grammar automaton state where 
one outgoing syntactic transition would swallow a regexp token and another 
outgoing syntactic transition from the same state would swallow a / token.  If 
any such state exists, the grammar is broken.


(And it seems to me that multi-token lookahead-restrictions would be hard for a 
validator to handle even if lexing *wasn't* context-dependent, but maybe that 
depends on how you handle them.)

-Michael



___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: LR(1) grammar/parser and lookahead-restrictions

2017-02-03 Thread Waldemar Horwat

On 02/02/2017 15:56, Dmitry Soshnikov wrote:

On Thu, Feb 2, 2017 at 3:23 PM, Waldemar Horwat <walde...@google.com> wrote:

On 02/01/2017 10:25, Dmitry Soshnikov wrote:

As mentioned, Cover grammar is usually the process of the grammar 
design itself (as in ES6 spec itself). I'm not aware of automatic 
transformations for this (if you find any please let me know).


Cover grammars are an ugly hack that we had to add when there was no other 
practical choice.  Avoid them as much as possible.  They are only used in 
situations where lookahead restrictions and parametrized grammar rules do not 
work in any practical way.


Yeah, absolutely agreed. The reason why I used cover grammar in that example to 
parse `{` as a `Block` vs. `ObjectLiteral`, and handle `ExpressionStatement` is 
to make the resulting grammar short, and not introduce a bunch of `NoCurly`, 
etc. extra productions for this. That said, this ugly hack also forces 
you to do post-processing overhead -- either validation of the nodes, or even 
re-interpretation (rewriting) to other nodes -- in my case I have to actually 
check that all nodes between `{` and `}` are properties, or vice-versa, 
statements, based on the expression/statement position.


Don't use a cover grammar to unify blocks with object literals.  That's a 
really bad idea and you'll likely get it wrong.  If the `{` starts a block, 
then if the next token is a keyword such as `if` then it's parsed as a keyword. 
 If the `{` starts an object literal, then if the next token is a keyword then 
it's parsed as an IdentifierName.  As we extend the language, the expansions of 
these can later diverge to the point where you won't know whether a `/` starts 
a regular expression or is a division symbol.
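
For instance (illustrative):

```js
var x = true;
{ if (x) {} }          // a block: `if` must be parsed as the keyword
var o = { if: 1 };     // an object literal: `if` is just an IdentifierName here
console.log(o.if);     // 1
```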

Waldemar

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: LR(1) grammar/parser and lookahead-restrictions

2017-02-02 Thread Waldemar Horwat

On 02/01/2017 10:25, Dmitry Soshnikov wrote:

As mentioned, Cover grammar is usually the process of the grammar design itself 
(as in ES6 spec itself). I'm not aware of automatic transformations for this 
(if you find any please let me know).


Cover grammars are an ugly hack that we had to add when there was no other 
practical choice.  Avoid them as much as possible.  They are only used in 
situations where lookahead restrictions and parametrized grammar rules do not 
work in any practical way.

When designing the grammar, the preferences are:

- Use standard LR(1) productions
- Use parametrized productions
- Use lookahead restrictions if parametrized productions would introduce too 
many parameters into rules
- Use a cover grammar if the grammar can't be reasonably expressed in any other 
way.  They're a last resort with lots of problems.

Lookahead restrictions fit very well into an LR(1) engine as long as they're 
limited to only one token, and that's what I've implemented in the validator.  
You need to be very careful with them if looking more than one token ahead 
because lexing of the tokens can vary based on context.  For example, if the 
next few characters in front of the cursor are )/abc/i+, then what is the 
second token?  What is the third token?  It's actually context-dependent.
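
A concrete rendering of that example (the surrounding code is assumed; the point is only how the same characters tokenize):

```js
var x = 10, abc = 2, i = 5, y;
// After `)`, the `/` is the division operator:
console.log((x) /abc/i + 1);   // (x / abc / i) + 1 === 2
// After `=`, the `/` starts a regular expression literal:
y = /abc/i + 1;
console.log(y);                // "/abc/i1": the RegExp coerces to a string
                               // and the 1 is appended
```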

The same problem is even worse for cover grammars.

Waldemar

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: LR(1) grammar/parser and lookahead-restrictions

2017-01-23 Thread Waldemar Horwat

On 01/11/2017 10:28, Michael Dyck wrote:

In the past, it has been said (usually by Brendan Eich) that TC39 intends that the ECMAScript 
grammar be LR(1). Is that still the case? (I'm not so much asking about the "1", but more 
about the "LR".)

If so, I'm wondering how lookahead-restrictions (e.g., [lookahead 

I'm the source of the LR(1) condition.  I've been doing that ever since 
ECMAScript Edition 3, and in fact am using the LR parser to help design and 
debug the spec.  I have an implementation of the parser with a few extensions 
to the LR grammar, including support for parametrized productions, lookahead 
restrictions, and lexer-parser feedback used to disambiguate things such as 
what token / will start.  I validate various things such as making sure that 
there is no place where both an identifier and a regular expression can occur.

Waldemar

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: stable sort proposal

2016-03-18 Thread Waldemar Horwat

On 03/18/2016 11:10, Tab Atkins Jr. wrote:

If you're planning on pessimistically assuming that legacy
implementations use an unstable sort for Array#sort(), then testing
for the presence of Array#fastSort() (and assuming that when it
appears the Array#sort is stable) is exactly as useful as testing for
the presence of Array#stableSort (and assuming that when it appears
the Array#sort is unstable/fast).  So, polyfilling isn't a reason to
prefer one default vs the other.


That makes no sense.  The presence of fastSort does not indicate that sort is 
stable.

The approach of sometimes using "sort" for unstable sort and sometimes for 
stable sort would cause too much confusion.  Which sort are you getting when you call 
sort?  If you want a sort that's guaranteed stable, call stableSort.

The argument that stable sort should have the shorter name doesn't hold much 
water either.  C++ defines sorts named sort and stable_sort (as well as a few 
others) just fine.  sort is by far the more popular one (by a factor of 20!) 
because most applications don't actually care about stability.

Waldemar

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Optional Chaining (aka Existential Operator, Null Propagation)

2016-02-03 Thread Waldemar Horwat

On 02/03/2016 11:56, John Lenz wrote:

Can you reference something as to why the more obvious operators are 
problematic?

?.
?[]
?()
?:


Some of these have problems.  For example, a?[]:b is already valid ECMAScript 
syntax and does something else.  There is an analogous but subtler problem for 
a?(.
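
Concretely (illustrative):

```js
var a = true, b = 1, c = [2];
console.log(a?[]:c);    // []: `?[` begins a conditional expression with an
                        // array literal, not an optional index
console.log(a?(b):c);   // 1: `?(` likewise begins a conditional, which is
                        // the subtler problem for an optional-call syntax
```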

Waldemar

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: JavaScript Language feature Idea

2016-01-25 Thread Waldemar Horwat

On 01/25/2016 12:00, Andrea Giammarchi wrote:

`Array.prototype.nth(n=0)` looks great indeed, +1 here

About the Symbol ... ugly as hell also we need to write even more and it 
doesn't scale as utility compared to .nth

```js
a[Symbol.last]
a[a.length-1]
```


I fail to see the point of this, other than trying to increase the complexity 
of the language by adding even more cases which do the same things but work 
somewhat differently from existing cases.

We'd have done a lot of things differently if we were starting from scratch.  
But arrays have a large amount of legacy behavior we can't realistically change 
and, given that, this doesn't improve things much.

Waldemar

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Backward running version look-behinds

2015-12-30 Thread Waldemar Horwat

On 12/11/2015 13:16, Nozomu Katō wrote:

I wonder if the person who wrote the spec for RegExp is on this list. I
would like to ask one question: Was there any reason why the following
steps were defined in the present order:

21.2.2.4 Alternative
   The production Alternative :: Alternative Term evaluates as follows:
   1. Evaluate Alternative to obtain a Matcher m1.
   2. Evaluate Term to obtain a Matcher m2.

instead of:

   The production Alternative :: Term Alternative evaluates as follows:
   1. Evaluate Term to obtain a Matcher m1.
   2. Evaluate Alternative to obtain a Matcher m2.

or, was it a matter of preference? If any side effect I am missing
exists in the latter order, I need to reconsider or abandon my compact
version.


Those appear to have equivalent behavior.  I just picked one when writing the 
RegExp spec.

Waldemar

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Are there any 64-bit number proposals under consideration?

2015-11-16 Thread Waldemar Horwat

There have been proposals for 64-bit integers in TC39 for the last 15 years.  
All of them so far have gotten bogged down by one of:

- extending the scope of the proposal to include value types, general operator 
overloading, etc., after which someone eventually objects to something or it 
becomes deferred to the next version of the standard
- arguments over implicit/explicit coercions (if f is a regular ECMAScript 
Number and n an int64, what should be the result of n + f?  Think carefully.)

This is somewhat unfortunate.

Waldemar
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Existential Operator / Null Propagation Operator

2015-10-29 Thread Waldemar Horwat

On 10/29/2015 12:19, Laurentiu Macovei wrote:

  `foo?.bar` and `foo?['bar']` syntax would work too.


No.  It would break existing code:

  x = foo?.3:.5;

  x = foo?[a]:[b];

On the other hand, turning .. into a token should be fine.

Waldemar

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Existential Operator / Null Propagation Operator

2015-10-29 Thread Waldemar Horwat

On 10/29/2015 14:20, Claude Pache wrote:



On 29 Oct 2015, at 19:32, Eli Perelman wrote:

2 dots may be problematic when parsing numbers (yeah, I know it's probably not 
common, but it's still valid):

3..toString()

Eli Perelman


Treating `..` as one token would be a breaking change,


Exactly what existing code would it break?


but I don't think it is a problem in practice, as `3..toString()` would 
continue to work.


It would continue to work for the trivial reason that `3..toString()` doesn't 
contain a .. token.  It's the number 3. followed by a . and then an identifier 
and parentheses.

This is no different from 3.e+2 not containing a + token.

 In some cases – as in `3..toString()` – `undefined` will be produced where an 
error was thrown.

No, this would continue to throw an error.

Waldemar

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Why is no line break is needed in ArrowFunction?

2015-10-21 Thread Waldemar Horwat

On 10/21/2015 00:27, Eric Suen wrote:

Trying to write a parser for ES6; is this the right place for questions about 
syntax?

There is no expression or statement that starts with =>; does the same go for 
yield [no LineTerminator here] * AssignmentExpression?


  yield [no LineTerminator here] * AssignmentExpression

has the line terminator restriction because yield without the * has the same 
restriction:

  yield [no LineTerminator here] AssignmentExpression

And that one has that restriction because there exists a third form of yield 
that doesn't take an expression:

  yield

This is the same situation as with return statements, which also take a [no 
LineTerminator here] restriction.  It's a convenience to prevent return from 
capturing the expression from the next line for the users (I'm not among them) 
who like to rely on semicolon insertion:

if (error) return
x = y


Waldemar

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Curried functions

2015-10-15 Thread Waldemar Horwat

On 10/15/2015 12:58, Yongxu Ren wrote:

Sorry I actually didn’t mean to use this for currying
```
const add = a => b => a + b;
```
This was directly copied from Mark's example, I was thinking about making the 
non-nested arrow functional.
My idea is if you define
```
const add = (a,b) => a + b;
```
you will be able to use either ```add(a,b)``` or ```add(a)(b)```


Alexander's point still stands.  This would break compatibility, which makes it 
a non-starter.  It also becomes dubious with variable numbers of parameters.
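
A small illustration of the compatibility problem (mine):

```js
const add = (a, b) => a + b;
console.log(add(1));   // NaN today, because b is undefined
// Existing code may rely on exactly this behavior, so silently making
// add(1) return a partially applied function instead would change the
// meaning of programs that already run.
```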

Waldemar

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Look-behind proposal in trouble

2015-10-13 Thread Waldemar Horwat

On 10/13/2015 02:18, Erik Corry wrote:

Yes, that makes sense.

This could be fixed by removing {n} loops from positive lookbehinds.  Or by 
doing the .NET-style back-references immediately.


I think it would be cleanest to do the full reverse-order matching (what I 
think you're calling .NET-style) from the start.

Waldemar

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Swift style syntax

2015-10-13 Thread Waldemar Horwat

On 10/13/2015 10:27, Isiah Meadows wrote:

Steve, I have little problem with whatever ends up the case, as long as it's shorter 
than `(x, y) => x + y`. The current idea was inspired by Swift's `list.sort(>)` 
and `list.reduce(0, +)`.


I second the concern with this being far too full of hazards to carry its 
weight.  Let's say you allow something like the list.reduce(0, +) syntax for 
the various arithmetic operators.  Then we get to the fun ones:

list.reduce(0, /) /x

Oops, you've just started a regular expression.

Waldemar

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Look-behind proposal in trouble

2015-10-12 Thread Waldemar Horwat

On 10/10/2015 03:48, Erik Corry wrote:



On Sat, Oct 10, 2015 at 12:47 AM, Waldemar Horwat <walde...@google.com> wrote:

It's not a superset.  Captures would match differently.


Can you elaborate?  How would they be different?


If you have a capture inside a loop (controlled, say, by {n}), one of the 
proposals would capture the first instance, while the other proposal would 
capture the last instance.

Waldemar

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Additional Math functions

2015-10-02 Thread Waldemar Horwat

On 10/01/2015 23:10, Sebastian Zartner wrote:

While Math.sum() and Math.mean() currently don't exist, they can easily be 
polyfilled:
See 
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/Reduce#Sum_all_the_values_of_an_array
 for summarizing the values of an array and the following code for building the 
average of the array values:

let arr = [0, 1, 2, 3];
let total = arr.reduce(function(a, b) {
   return a + b;
});
let mean = total / arr.length;

Calculating the variance and standard deviation would require a bit more code, 
though are also easy to polyfill.
Nonetheless, I can see the need for having standard functions for this.

Sebastian


Yes, Math.sum can be polyfilled, but doing so if you want accurate answers 
takes a fair amount of care.  The code above is pretty bad as a polyfill 
because it sometimes produces highly inaccurate answers.  For example, if arr = 
[1, 1e18, -1e18], then this polyfill will return the incorrect value 0 for 
total, while a more careful implementation would return 1.

Waldemar

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Additional Math functions

2015-10-02 Thread Waldemar Horwat

On 10/02/2015 13:30, Alexander Jones wrote:

I really don't think I'd want a basic `Math.sum(a, b, c)` meaning anything 
other than `a + b + c`, i.e. `(a + b) + c`. We should all just come to terms 
with the fact that floating point addition is not associative.

Or is there really some simple, O(n) algorithm to do a better (more "careful") 
job?


Kahan summation is simple and O(n).
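
A minimal Kahan summation sketch (mine):

```js
function kahanSum(xs) {
  let sum = 0, c = 0;            // c carries the running compensation
  for (const x of xs) {
    const y = x - c;
    const t = sum + y;
    c = (t - sum) - y;           // recover what rounding dropped from y
    sum = t;
  }
  return sum;
}
```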

There exist efficient algorithms to get the exact sum as well.  See, for 
example, http://www.ti3.tuhh.de/paper/rump/RuOgOi07I.pdf

Waldemar


___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Additional Math functions

2015-10-02 Thread Waldemar Horwat

On 10/02/2015 16:37, Alexander Jones wrote:

Interesting. I still feel that these algorithms should be given their proper 
names in a math library, because I would feel quite troubled if `Math.sum(a, b, 
c) !== a + b + c`. Maybe I'm alone in this view, though. What do other 
languages do?

On Friday, 2 October 2015, Waldemar Horwat <walde...@google.com 
<mailto:walde...@google.com>> wrote:

On 10/02/2015 13:30, Alexander Jones wrote:

I really don't think I'd want a basic `Math.sum(a, b, c)` meaning 
anything other than `a + b + c`, i.e. `(a + b) + c`. We should all just come to 
terms with the fact that floating point addition is not associative.

Or is there really some simple, O(n) algorithm to do a better (more 
"careful") job?


Kahan summation is simple and O(n).

There exist efficient algorithms to get the exact sum as well.  See, for 
example, http://www.ti3.tuhh.de/paper/rump/RuOgOi07I.pdf

 Waldemar


In the cases where the algorithms produce something other than the best 
practical results, giving them descriptive names would be useful.  However, 
compound naming can then get out of hand if you also include functions such as 
average, standard deviation, etc., which include computing the sum as a 
subroutine.

Waldemar

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Danger of cache timing attacks

2015-09-28 Thread Waldemar Horwat

I was asked to share my concerns about how bad this can be.  Here's a paper 
demonstrating how one AWS virtual machine has been able to practically break 
2048-bit RSA by snooping into a different virtual machine using the same kind 
of shared cache timing attack.  These were both running on unmodified public 
AWS, and much of the challenge was figuring out when the attacker was 
co-located with the victim since AWS runs a lot of other users' stuff.  This 
attack would be far easier in shared-memory ECMAScript, where you have a much 
better idea of what else is running on the browser and the machine (at least in 
part because you can trigger it via other APIs).

https://eprint.iacr.org/2015/898.pdf

Chrome currently mitigates this by limiting the resolution of timers to 1µs.  
With any kind of shared memory multicore you can run busy-loops to increase the 
attack timing surface by 3½ orders of magnitude to about 0.3ns, making these 
attacks eminently practical.
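As a sketch of what such a busy-loop looks like (API names per the shared-memory proposal; purely illustrative):

  // One worker does nothing but bump a shared counter; reading that counter
  // then acts as a clock far finer than a 1µs-clamped timer.
  const sab = new SharedArrayBuffer(4);
  const ticks = new Int32Array(sab);

  // clock-worker.js (hypothetical file) would contain:
  //   onmessage = e => {
  //     const t = new Int32Array(e.data);
  //     for (;;) Atomics.add(t, 0, 1);   // spin as fast as possible
  //   };
  new Worker('clock-worker.js').postMessage(sab);

  function measure(fn) {
    const start = Atomics.load(ticks, 0);
    fn();
    return Atomics.load(ticks, 0) - start;   // elapsed ticks, not seconds
  }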

Waldemar
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Exponentiation operator precedence

2015-09-24 Thread Waldemar Horwat

My preference is for 2, but I don't have objections to 4.  Either works.

Waldemar
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Will not make it to this week's meeting

2015-09-22 Thread Waldemar Horwat
I had a medical emergency (broken bones) soon after arriving in Portland
and am flying back to the bay area today for surgery and treatment. I will
unfortunately have to miss this week's TC39 meeting.

Waldemar
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: What do you think about a C# 6 like nameof() expression for JavaScript.

2015-09-09 Thread Waldemar Horwat

This would have interesting consequences if you run your code via a minifier.  
Should the minifier return a string with the old name or the new name?

Waldemar
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Exponentiation operator precedence

2015-08-27 Thread Waldemar Horwat

On 08/27/2015 11:58, Alexander Jones wrote:

Ethan is making my point far better than I did, and I agree completely about 
the issue of unary operators visually appearing more tightly bound than binary 
operators.

At this point it seems fair to at least acknowledge the prospect of significant 
whitespace.

```
-x**2 === -(x ** 2)
-x ** 2 === (-x) ** 2
```


Take a look at the Fortress language ☺.  But that one benefits from operators 
and syntax not limited to ASCII.

Waldemar

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Directed rounding

2015-08-26 Thread Waldemar Horwat

On 08/26/2015 13:12, C. Scott Ananian wrote:

I think the better idea would be related to value types 
(http://www.slideshare.net/BrendanEich/value-objects2) which brendan is working on for 
ES7.
I fuzzily recall rounding modes being used as an example in one of these slide 
decks, perhaps I am misremembering.

At any rate, one option would be for round down float and round up float to be separate value 
types.  It seems broadly similar to your proposal, except that you could use actual operators.  The lower range in an 
interval would be a round down float and the upper range a round up float.


It doesn't make sense to encode a rounding mode such as round down into a 
value type.  If you multiply it by -1, half of the time it will start doing the wrong 
thing.

Instead, you want functions that perform the primitive operations (+, -, *, /, 
sqrt, sin, etc.) in a rounding mode specified by the name of the function or 
perhaps a third parameter.  This must be stateless, not carrying a rounding 
mode with a value or (even worse) in some global environment.
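A sketch of that shape (the addRound function and the mode constants below are purely hypothetical; the stub just uses ordinary round-to-nearest so the example runs):

  const RoundDown = Symbol('toward -Infinity');
  const RoundUp = Symbol('toward +Infinity');

  function addRound(x, y, mode) {
    // A real primitive would round the exact value of x + y in the
    // direction given by mode; this placeholder rounds to nearest.
    return x + y;
  }

  // Interval arithmetic then picks the mode per bound, with no hidden state:
  function intervalAdd([lo1, hi1], [lo2, hi2]) {
    return [addRound(lo1, lo2, RoundDown), addRound(hi1, hi2, RoundUp)];
  }

  intervalAdd([1, 2], [0.1, 0.2]);   // ≈ [1.1, 2.2], with outward-rounded bounds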

Waldemar

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Exponentiation operator precedence

2015-08-26 Thread Waldemar Horwat

On 08/26/2015 09:09, Mark S. Miller wrote:

I don't get it. The conflict between

* the history of ** in other languages,
* the general pattern that unary binds tighter than binary

seems unresolvable. By the first bullet, -2 ** 2 would be -4. By the second, it 
would be 4. Either answer will surprise too many programmers. By contrast, no 
one is confused by either -Math.pow(2, 2) or Math.pow(-2, 2).


The grammar concerns have been resolved nicely upthread, so I'm not sure what 
your objection is.  The costs are no more significant than in the original 
proposal.  ** now has the same precedence as unary operators and weaker than 
the increment operators, which matches what most other languages that support 
exponentiation do.

There is precedence for unary operators not always binding tighter than binary. 
 yield 3+4 is yield(3+4), not (yield 3)+4.

Waldemar

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Exponentiation operator precedence

2015-08-26 Thread Waldemar Horwat

On 08/26/2015 15:08, Mark S. Miller wrote:

The force of that precedent is indeed what my objection is. The yield counter-example 
is interesting, but yield is an identifier not an operator symbol, and so does not as 
clearly fall within or shape operator expectations.

If someone explains a compelling need for ** I would find that interesting. But 
until then...


** is a convenience, and that's the wrong criterion to apply here.  If it were, 
then we wouldn't have useful conveniences like Math.cosh or arrow functions.

I'd rather read

  a*x**3 + b*x**2 + c*x + d

than

  a*Math.pow(x, 3) + b*Math.pow(x, 2) + c*x + d

Waldemar

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Exponentiation operator precedence

2015-08-25 Thread Waldemar Horwat

On 08/25/2015 09:38, Claude Pache wrote:


I think the following grammar could work.
Replace the current (ES2015) PostfixExpression production with:

```
IncrementExpression:
 LeftHandSideExpression
 LeftHandSideExpression [no LineTerminator here] ++
 LeftHandSideExpression [no LineTerminator here] --
 ++ LeftHandSideExpression
 -- LeftHandSideExpression
```

And define UnaryExpression as:

```
UnaryExpression:
 IncrementExpression
 delete UnaryExpression
 void UnaryExpression
 typeof UnaryExpression
 ++ UnaryExpression
 + UnaryExpression
 -- UnaryExpression
 - UnaryExpression
 ~ UnaryExpression
 ! UnaryExpression
 IncrementExpression ** UnaryExpression
```


The above is not a valid grammar.  For example, parsing ++x leads to a 
reduce-reduce conflict, where the ++ can come from either a UnaryExpression or 
an IncrementExpression.


where the following production (which exists only to avoid to confusingly 
interpret, e.g., `++x++` as `+ +x++`):


That makes no sense.  ++ is a lexical token.  The lexer always greedily bites 
off the largest token it can find, even if that leads to a parser error later.  
The parser does not backtrack into the lexer to look for alternate lexings.  
For example,

  x +++++ y;

is a syntax error because it's greedily lexed as:

  x ++ ++ + y;

The parser does not backtrack into the lexer to look for other possible lexings 
such as:

  x ++ + ++ y;



```
UnaryExpression:
 ++ UnaryExpression
 -- UnaryExpression
```

yields a static SyntaxError (or a static ReferenceError if we want to be 100% 
compatible ES2015).


This is a problem.  It makes ++x into a static SyntaxError because x is a 
UnaryExpression.

If you got rid of these two ++ and -- productions in UnaryExpression, that 
would solve that problem (and I think make the grammar valid).

Waldemar

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Exponentiation operator precedence

2015-08-24 Thread Waldemar Horwat

On 08/24/2015 17:24, Jason Orendorff wrote:

On Mon, Aug 24, 2015 at 5:45 PM, Waldemar Horwat walde...@google.com wrote:

Let's not.  As I said at the last meeting, making ** bind tighter than unary
operators would break x**-2.  And making it sometimes tighter and sometimes
looser would be too confusing and lead to other opportunities for precedence
inversion.


Don't you think having `-x**2` mean the same thing as `x**2` is more
confusing? It seems like it will cause problems for the exact
programmers we are trying to help with this feature.

What you're describing as sometimes tighter and sometimes looser I
would call the same precedence. It's even easier to specify than the
current proposal:

 UnaryExpression : PostfixExpression ** UnaryExpression

An expression using both `**` and unary `-` is then parsed right-associatively:

 -a ** -b ** -c ** -d
 means -(a ** (-(b ** (-(c ** (-d))))))


That has different right and left precedence and is probably the closest to the 
mathematical intent.  However, it does carry other surprises.  What does each 
of the following do?

++x ** y;
x++ ** y;
x ** ++y;
x ** y++;

Waldemar

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Generalize do-expressions to statements in general?

2015-07-17 Thread Waldemar Horwat

On 07/16/2015 13:35, Herby Vojčík wrote:



Mark S. Miller wrote:

I echo this. E is a dynamic language with many similarities with JS,
including a similarly C-like syntax. In E I use
everything-is-a-pattern-or-expression all the time. When I first moved
to JS I missed it. Now that I am used to the JS
statements-are-not-expressions restrictions, I no longer do, with one
exception:

When simply generating simple JS code from something else, this
restriction is a perpetual but minor annoyance. By itself, I would agree
that this annoyance is not important enough to add a new feature.
However, if rather than adding a feature, we can explain the change as
removing a restriction, then JS would get both simpler and more
powerful at the same time. Ideally, the test would be whether, when
explaining the less restrictive JS to a new programmer not familiar with
statement languages, this change results in one less thing to explain
rather than one more.


I like the idea though it seems a bit dense and strange on the first look. One 
breaking change is, though, that before the change, semicolon inside 
parentheses is an error, which often catches the missing parenthesis; after the 
change it is not (and manifests itself only at the end of the file; or even two 
errors can cancel each other and make conforming JS but with different 
semantics).


That's my concern as well.  We'd be significantly complicating the syntax (and 
not in a clean way because the rules are not orthogonal), and densifying the 
space of valid but bizarre syntaxes.  More cases that used to be a simple 
syntax error can now turn into something grammatically correct but wrong.

This can also have adverse implications for lexing (the old / 
start-of-regexp-vs-division tokenization issue) and the potential for 
experimenting with macro systems, which are strongly negatively affected by 
anything that complicates the / issue in lexing.

Waldemar

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: typed objects and value types

2014-04-07 Thread Waldemar Horwat

On 04/02/2014 07:32 AM, Niko Matsakis wrote:

I just wanted to let people on es-discuss know about two of my recent
blog posts concerning typed objects. The first is a kind of status
report:

http://smallcultfollowing.com/babysteps/blog/2014/04/01/typed-objects-status-report/

and the second details some (preliminary) thoughts on how one could
build on typed objects to support user-defined value types:

http://smallcultfollowing.com/babysteps/blog/2014/04/01/value-types-in-javascript/


We've been tossing things like this around for a while.  I'd personally love to 
have some notion of value types along with int64 and friends (after having the 
proposals out for 15 years).  Note that these generally got bogged down on one 
of four issues:

- Handling of NaN's and ±0
- Direction of binary operator coercion (if x is an int64, is x+1 an int64 or a 
regular double Number?  What about x+0.5?)
- Built-in int64 vs. providing a library that allows folks to implement int64 
(whichever one was proposed, folks wanted the other one)
- Cross-realm behavior

Waldemar

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: if-scoped let

2013-12-03 Thread Waldemar Horwat

On 12/03/2013 05:30 PM, Mark S. Miller wrote:

What's ^^ ?


a^^b would essentially be the same as !a!=!b except that it would return the 
actual truthy value if it returns true.
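As a plain function (the operator itself is hypothetical, and what it would return in the false case is left unspecified here):

  function xor(a, b) {
    if (!a !== !b) return a || b;   // exactly one operand is truthy: return it
    return false;                   // both truthy or both falsy
  }

  xor(0, 'hi');    // 'hi'
  xor('a', 'b');   // false
  xor(0, '');      // false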

Waldemar

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Generators Grammar and Yield

2013-11-26 Thread Waldemar Horwat

On 11/25/2013 04:48 PM, Brendan Eich wrote:

Brendan Eich wrote:

Kevin Smith wrote:

This makes for wtfjs additions, but they all seem non-wtf on
reflection (or did to us when Waldemar threw them up on a
whiteboard last week). By non-wtf, I mean anyone who groks that
yield is reserved only in function* can work them out.

The star after function really helps. ES5's use strict directive
prologue in the body applying to its left (even in ES5 --
duplicate formals are a strict error) is goofy.


Agree on all counts, but not quite understanding yet.

Say I'm parsing this, and the token stream is paused at the #:

function(a = # yield

I assume that we're not un-reserving yield in strict mode.  That means that I 
don't know whether to treat `yield` as an identifier or reserved word until I 
get to that goofy prologue.


Ouch, you're right. We can't handle this without backtracking. Waldemar should 
weigh in.


Well, we can handle it. We know due to lack of * after function that yield, whether reserved (due 
to use strict; in body prologue) or not, can't be yield-the-operator. So it's either an 
identifier (no use strict;) or a reserved word (and an error due to lack of * after 
function).

So we parse it as an identifier, just as we parse duplicate formal parameters. Then if we 
see use strict, we must post-process the parse tree and throw an error. Kind 
of a shame, but there it is.

At least reserving 'let' in ES5 strict did some good!

/be


For another example of why keying off generator/non-generator instead of strict 
mode for the parsing of yield is the right thing to do:

function*(a = yield/b/g) {
  a = yield/b/g;
}

One of these is a regexp.  The other is a couple divisions.

Get this wrong and you can introduce security problems.

Waldemar

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Generator Arrow Functions

2013-11-26 Thread Waldemar Horwat

On 11/26/2013 02:28 PM, Claude Pache wrote:

 From the thread [1], I guess that parsing correctly the following thing would 
be obnoxious (at best)?

(a = yield/b/g) =>* {}

—Claude


Indeed.

And you can make even more obnoxious parses of the hypothetical combination of 
=>*, default parameters, and retroactive yield-scope:

(a = yield//) =>* (//g)

Are the two //'s regexps or is /) =>* (/ a string token?

Waldemar

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Generator Arrow Functions

2013-11-26 Thread Waldemar Horwat

On 11/26/2013 03:00 PM, André Bargull wrote:

On 11/26/2013 02:28 PM, Claude Pache wrote:
/   From the thread [1], I guess that parsing correctly the following thing 
would be obnoxious (at best)?
//
//  (a = yield/b/g) =>* {}
//
//  —Claude
/
Indeed.

And you can make even more obnoxious parses of the hypothetical combination of 
=>*, default parameters, and retroactive yield-scope:

(a = yield//) =>* (//g)

Are the two //'s regexps or is /) =>* (/ a string token?

  Waldemar


Are you sure? The `(a = yield/b/g)` part is parsed at first as a parenthesised 
expression and only later (after the `=>` token) reinterpreted as an 
ArrowFormalParameters grammar production.

- André


Fine, so do this one instead:

(a = yield//g, b = yield//g) =>* {}

Does this generator have one or two parameters?

Waldemar

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Generators Grammar and Yield

2013-11-25 Thread Waldemar Horwat

On 11/25/2013 02:03 PM, Kevin Smith wrote:

Apologies for this remedial question, but I find the new grammar parameterized 
around [Yield] to be confusing.  If I understand correctly, this is to allow 
yield within non-strict generators.  If that's correct, then why are non-strict 
generators a good idea?

Thanks!


It's easier to grammatically distinguish between being inside and outside a 
generator than it is to distinguish strict vs. non-strict.

To tell whether you're a strict function or not you need to lex the input.  To 
lex you need to parse it.  To parse it you need to figure out how to parse 
yield.  Hence you get an obnoxious circularity.

Think of cases such as the following non-strict code snippet:

function(a = yield+b) {
  "use strict";
}

Waldemar

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Add regular expressions lookbehind

2013-09-30 Thread Waldemar Horwat

No one has yet submitted a well-defined proposal for lookbehinds on the table.  
Lookbehinds are difficult to translate into the language used by the spec and 
get quite fuzzy when the order of evaluation of parts of the regexp matters, 
which is what happens if capturing parentheses are involved.  Where do you 
start looking for the lookbehind?  Shortest first, longest first, or reverse 
string match?  Greedy or not?  Backtrack into capturing results?

Waldemar


On 09/28/2013 01:54 PM, Sebastian Zartner wrote:

I wonder if the discussion about lookbehinds[1] and Marc Harter's proposal for 
them[2] in the past led to anything.
I'd really like to see these implemented in ECMAScript specification and it 
seems I am not the only one.[3][4][5] This even caused people to try to mimic 
them.[6]
So I wanted to pick up the discussion again and ask, what info was missing that 
they didn't get specified?

Sebastian

[1] https://mail.mozilla.org/pipermail/es-discuss/2010-November/012164.html
[2] 
https://docs.google.com/document/pub?id=1EUHvr1SC72g6OPo5fJjelVESpd4nI0D5NQpF3oUO5UM
[3] 
http://stackoverflow.com/questions/12273112/will-js-regex-ever-get-lookbehind
[4] 
http://stackoverflow.com/questions/13993793/error-using-both-lookahead-and-look-behind-regex
[5] http://regexadvice.com/forums/thread/85210.aspx
[6] http://blog.stevenlevithan.com/archives/mimic-lookbehind-javascript


___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss



___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Comments on Sept Meeting Notes

2013-09-27 Thread Waldemar Horwat

On 09/26/2013 04:22 PM, Yehuda Katz wrote:

On Thu, Sep 26, 2013 at 4:20 PM, Allen Wirfs-Brock <al...@wirfs-brock.com> wrote:


On Sep 26, 2013, at 4:12 PM, Brandon Benvie wrote:

  On 9/26/2013 4:09 PM, Allen Wirfs-Brock wrote:
  The newness was using string literals + concise methods to write 
such meta-level methods.
 
  What it brings to the table is that it addresses the meta stratification 
issue in a reasonable manner without having to add anything (other than the use of 
the hooks) to the ES5 level language.  (and I'm ignoring the enumerability issues).
 
  I don't see how any of the string key proposals so far are different 
from __proto__, which we agree is not an adequate level of stratification (none 
basically).

It moves the stratified names out of the syntactic space that JS programmers 
typically use for their own names.  The Dunder names don't have that characteristic, plus 
various sorts of _ prefixing are already used by many programmers at the 
application level.


Agreed, but this problem will come right back in ES7. Private names don't solve 
this issue because of where they trap, so we don't need a temporary patch, but 
a permanent solution.


It's even back in ES6 with then.  I find it truly weird that we're trying to use two 
different mechanisms to get the then and iterate metaproperties.  We should be 
cutting down the complexity, not adding to it in a manner which the unknowns bouncing around this 
thread indicate is reckless.

Waldemar

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Sept 17, 2013 TC39 Meeting Notes

2013-09-25 Thread Waldemar Horwat

On 09/24/2013 06:41 PM, Rick Waldron wrote:




On Tue, Sep 24, 2013 at 7:12 PM, Brendan Eich <bren...@mozilla.com> wrote:

Rick Waldron wrote:

- Normalize CR, LF, and CRLF to LF
[WH: Is this consensus recorded correctly? I understood the consensus 
to be normalizing all lexical grammar LineTerminatorSequences to LF.]


TC39 members (including me) tend to forget, or turn a blind eye, toward 
LINE_SEPARATOR and PARA_SEPARATOR :-P.


Regardless, this consensus is correctly recorded (confirmed word-for-word). If 
we should address LINE_SEPARATOR and PARA_SEPARATOR as well, then it's an 
agenda item for the future.


Confirmed word-for-word with what?  It's clear that we had different ideas 
about what the alleged consensus was.  We agreed to normalize line terminators 
to LF.  Nothing in the discussion led me to believe that we wouldn't be 
normalizing all line terminators.

Waldemar

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Sept 17, 2013 TC39 Meeting Notes

2013-09-24 Thread Waldemar Horwat

On 09/24/2013 05:32 PM, Allen Wirfs-Brock wrote:


On Sep 24, 2013, at 4:12 PM, Brendan Eich wrote:


Rick Waldron wrote:

- Normalize CR, LF, and CRLF to LF
[WH: Is this consensus recorded correctly? I understood the consensus to be 
normalizing all lexical grammar LineTerminatorSequences to LF.]


TC39 members (including me) tend to forget, or turn a blind eye, toward 
LINE_SEPARATOR and PARA_SEPARATOR :-P.


Does it make sense to also normalize other line separators to LF.  Are there 
any known platforms that use anything other than CR, LF, CRLF as their default 
line separator? I tend to think that anybody who explicitly puts one of those 
other separators literally into a template string has a reason for doing so and 
we should just leave it alone.


I'm mainly wary of complicating things by adding yet another exception to a 
rule.  Now we'd have two kinds of LineTerminatorSequences: ones that get 
normalized and ones that don't.  This would make for yet another piece of 
esoteric trivia for users to remember or get surprised by if they missed it.

Waldemar

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: [[Invoke]] and implicit method calls

2013-09-11 Thread Waldemar Horwat

On 09/11/2013 03:38 PM, Jason Orendorff wrote:

On Wed, Sep 11, 2013 at 9:08 AM, Tom Van Cutsem tomvc...@gmail.com wrote:

Currently the pattern for this is [[Get]]+[[Call]]. We cannot refactor to
[[Has]] + [[Invoke]] in general, because [[Has]] will return true also for
non-callable values.

If we believe these are call-sites where it is worth avoiding the allocation
of a function, then having an additional internal method like [[GetMethod]]
or [[InvokeConditional]] makes sense, but I doubt it's worth the added
complexity.


But as Allen said, [[Invoke]] is not a performance hack. It's a
correctness hack.

It permits proxies to customize their behavior around `this`, and even
naive .invoke trap users would definitely want those customizations to
apply for implicit .toString() and .then().


Except that [[Invoke]] doesn't solve the correctness problem either.  As we 
discussed at a prior meeting, it fails in the case of passing 'this' as one of 
the arguments.

Waldemar

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Questions/issues regarding generators

2013-03-15 Thread Waldemar Horwat

On 03/14/2013 04:14 PM, Brendan Eich wrote:

Andreas Rossberg wrote:

On 14 March 2013 23:38, Brendan Eich <bren...@mozilla.com> wrote:

Andreas Rossberg wrote:

That leaves my original proposal not to have generator application
return an iterator, but only an iterable. Under that proposal, your
example requires disambiguation by inserting the intended call(s) to
.iterator in the right place(s).

That's horribly inconvenient. It takes Dmitry's example:

  function* enum(from, to) { for (let i = from; i <= to; ++i) yield i }

  let rangeAsGenerator = enum(1, 4)
  let dup = zip(rangeAsGenerator, rangeAsGenerator)  // Oops!

which contains a bug under the Harmony proposal, to this:

  function* enum(from, to) { for (let i = from; i <= to; ++i) yield i }

  let rangeAsGenerator = enum(1, 4)
  let dup = zip(rangeAsGenerator[@iterator](), rangeAsGenerator[@iterator]())


No, why? The zip function invokes the iterator method for you.


Sure, but only if you know that. I thought you were advocating explicit 
iterator calls.

A call expression cannot be assumed to return a result that can be consumed by 
some mutating protocol twice, in general. Why should generator functions be 
special?

I agree they could be special-cased, but doing so requires an extra allocation 
(the generator-iterable that's returned).

Meanwhile the Pythonic pattern is well-understood, works fine, and (contra 
Dmitry's speculation) does not depend on class-y OOP in Python.

I guess it's the season of extra allocations, but still: in general when I 
consume foo() via something that mutates its return value, I do not expect to 
be able to treat foo() as referentially transparent. Not in JS!


Does for-of accept only iterables, only iterators, or both?  Presumably a function like 
zip would make a similar decision.  The problem is if the answer is both.

Waldemar

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Pure functions in EcmaScript

2012-11-28 Thread Waldemar Horwat
On Wed, Nov 28, 2012 at 5:39 AM, Marius Gundersen <gunder...@gmail.com> wrote:

 On Wed, Nov 28, 2012 at 1:20 PM, Andreas Rossberg <rossb...@google.com> wrote:

 Second, due to the extremely impure nature of JavaScript, there aren't
 many useful pure functions you could even write. For example, your
 'sum' function is not pure, because the implicit conversions required
 by + can cause arbitrary side effects.


 Functions passed to the array methods map, reduce, filter, etc would be
 good candidates for pure/side-effect-free functions. These functions
 shouldn't alter any state; they should only return a new value based on the
 parameter they were sent.


You haven't addressed Andreas's point: Almost any function you write is
nonpure, including your sum example. As a fun exercise, go ahead and write
a pure version of your sum example.
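To see why, recall that + itself can run arbitrary code via the implicit conversions (this snippet is an illustration, not taken from the earlier mail):

  var log = [];
  var sneaky = {
    valueOf: function () { log.push('side effect!'); return 1; }  // runs on every +
  };

  function sum(a, b) { return a + b; }   // looks pure...
  sum(sneaky, 2);                        // ...but this call mutates log
  console.log(log);                      // ['side effect!']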

Waldemar
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: new function syntax and poison pill methods

2012-10-26 Thread Waldemar Horwat

On 10/26/2012 03:23 PM, Kevin Reid wrote:

On Fri, Oct 26, 2012 at 3:13 PM, David Bruant <bruan...@gmail.com> wrote:

I think the oddity I note is a consequence of the too loose paragraph in 
section 2:
A conforming implementation of ECMAScript is permitted to provide additional 
types, values, objects, properties, and functions beyond those described in this 
specification. In particular, a conforming implementation of ECMAScript is permitted to 
provide properties not described in this specification, and values for those properties, 
for objects that are described in this specification.

Instead of having a "there is no 'caller' nor 'arguments' property at all" 
rule, maybe it would be a good idea to refine this paragraph to say what's permitted and 
what is not.
For instance, mention that for function objects, there cannot be a property 
(regardless of its name!) providing access to the caller function during 
runtime, etc.
With this kind of refinement (potentially reminded as a note in the 
relevant subsections), it may be easier to share and document the intent of 
what is acceptable to provide as authority and more importantly what is not.


How about: there must be no /nonstandard non-configurable properties/ of 
standard objects.


Wouldn't that just preclude us from ever adding new standard non-configurable 
properties to standard objects in the future?

Waldemar

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: JS syntax future-proofing, Macros, the Reader (was: Performance concern with let/const)

2012-09-24 Thread Waldemar Horwat

On 09/18/2012 09:47 AM, Brendan Eich wrote:

2. Tim Disney with help from Paul Stansifer (Mozilla grad student interns) have 
figured out how to implement a Reader (Scheme sense) for JS, which does not 
fully parse JS but nevertheless correctly disambiguates /-as-division-operator 
from /-as-regexp-delimiter. See

https://github.com/mozilla/sweet.js

This Reader handles bracketed forms: () {} [] and /re/. Presumably it could 
handle quasis too. Since these bracketed forms can nest, the Reader is a PDA 
and so more powerful than the Lexer (a DFA or equivalent), but it is much 
simpler than a full JS parser -- and you need a Reader for macros.


That's not possible.  See, for example, the case I showed during the meeting:

boom = yield/area/
height;

Is /area/ a regexp or two divisions and a variable?  You can't tell if you're 
using a purported universal parser based on ES5.1 and are unaware that yield is 
a contextual keyword which will be introduced into the language in ES6.  And 
yes, you can then get arbitrarily out of sync:

boom = yield/a+3/ string?  ...

Waldemar

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Whitelist WeakSet

2012-09-24 Thread Waldemar Horwat

On 09/24/2012 01:24 AM, David Bruant wrote:

On 24/09/2012 10:04, Tom Van Cutsem wrote:

2012/9/24 David Bruant <bruan...@gmail.com>

On 23/09/2012 22:04, Herby Vojčík wrote:
 Hello,

 maybe I missed something, but how will you secure the whitelist
 itself? Malicious proxy knowing righteous one can steal its whitelist,
 afaict.
I'm sorry, I don't understand what you're saying here. Can you be more
specific and provide an example of an attack?

As far as I'm concerned, I consider the design secure, because it's
possible to easily write code so that only a proxy (or it's handler to
be more accurate) has access to its whitelist and nothing else.


Right. Perhaps what Herby meant is that the proxy might provide a malicious 
whitelist to steal the names being looked up in them. This will be prevented by 
requiring the whitelist to be a genuine, built-in WeakSet. The proxy will use 
the built-in WeakSet.prototype.get method to lookup a name in that whitelist, 
so a proxy can't monkey-patch that method to steal the name either.

True. I think a lot of that part depends on how WeakSet/Set are spec'ed. It 
might be possible to accept proxies wrapping WeakSets (which is likely to be 
helpful with membranes) and perform the check on the target directly, bypassing 
the proxy traps. Or maybe consider the built-in WeakSet.prototype.get method as 
a private named method on the weakset instance and only call the 
unknownPrivateName trap.


Yes.  This was bothering me during the meeting and (as far as I know) didn't 
get resolved.  What if someone passes in a proxy to a WeakSet instead of an 
actual WeakSet?

A.  Allowed.  Then the security protocol is utterly broken.

B.  Disallowed.  Now we have a class that can't be proxied.

Neither sounds good.  What am I missing?

Waldemar

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


The joys of United flight 242

2012-09-17 Thread Waldemar Horwat
I went to SFO today to fly to Boston for tomorrow's ECMA TC39 meeting. The
flight was listed as about a half hour late — no big deal. Got on the plane
uneventfully. Noticed that one of the passengers had a service dog.

Crew goes on the intercom and states that the flight will be delayed
because the plane has the wrong kind of safety instruction cards in the
seat back. Crew collects all pink safety cards and exits with them. After a
long time they come back with grey ones and hand them out to everyone.

We taxi out.

Crew reports that we must taxi back to the gate because of a minor
mechanical problem in the cockpit. We do and they begin to work on it.

Crew reports that we're ready to go but we must wait while they find a
passenger who went back to the gate to walk her service dog.

Nothing much happens for the next hour.

Crew turns off the seat belt sign and several of them leave the plane in a
huff. Not a word is said.

Nothing much happens for the next half hour or so, except that the elite
passengers one by one get up, take their bags, and leave the plane. Not a
word is said.

Someone goes on the intercom and tells everyone to grab their bags and exit
the plane. No comment on the status of the flight (although it was obvious
what was happening when the elite passengers started leaving). What's left
of the crew is not interested in talking to the passengers.

Huge line appears in the rebooking center. Spend another couple hours in
that line and later retrieving my checked bag.

When I speak with the United representative over the phone, I get very long
periods of hold. They're asking me to go (drive?) to LA for a flight that
leaves LAX at midnight and arrives in Boston at about 5pm (yes, pm, not am)
on Tuesday. What?

Current status is that I'm rebooked on another SFO-BOS flight that arrives
Tuesday evening.

Wonder if I'll get my baggage fee back, since my bag never left SFO.

Waldemar
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: minutes, TC39 meeting Tues 5/22/2012

2012-05-23 Thread Waldemar Horwat

Since you're speculating on my position, here it is:

- Classes don't hang together unless we have agreement on some declarative way to specify 
properties, referred to as const classes in the meeting notes.
- It's fine for that to not be the default, but we must have agreement on how 
to do it.

I'll be out of the office for a few days, but we can discuss it when I'm back.

Waldemar
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: March 29 meeting notes

2012-03-30 Thread Waldemar Horwat

On 03/29/2012 08:30 PM, Allen Wirfs-Brock wrote:

I don't think the report on maximally minimal classes fully reflects the 
discussion:


Maximally minimal classes:


Alex and Allen initiated the discussion as a status update to TC-39.  We 
pointed out that this proposal had recently been extensively discussed on 
es-discuss and that it appear to have considerable support from most of the 
participants in that discussion.


Luke:  These aren't good enough to be a net win.


I'm not sure whether this is an exact quote.


It is.


Luke certainly did raise the issue of whether classes, as defined by this 
proposal, added enough functionality to ES to justify the inherent complexity 
of a new feature.

Allen and Alex reiterated that this proposal is only trying to address the most 
common class definition use cases but in a way that allows for future 
extensions to address a broader range of use cases. There is significant value 
in what the proposal provides even if it doesn't do everything anyone might want.

dherman stated he has some minor design issues he wants to further discuss, but 
that overall the level of functionality in this proposal was useful and would 
be a positive addition. He supports it.


Waldemar:  These don't address the hard problems we need to solve.
Concerned about both future-hostility (making it cumbersome for future
classes to declare, say, object layout without cumbersome syntax by
taking over, say, const syntax) and putting developers into a quandry


We discussed this concern quite a bit and did not identify any specific ways in which the 
current proposal would block future extensions.  Waldemar was asked to provide specific 
examples if he comes up with any.   Allen pointed out that future syntactic additions can 
also enforce new semantics.  For example, addition of per-instance state declarations 
and a const keyword to the constructor declaration could cause ad hoc 
this.property assignments to be statically rejected, if that was a desired semantics.


-- if they want to do anything more sophisticated, they'll need to
refactor their code base away from these classes.  Unless one choice
is clearly superior, having two choices (traditional and extended
object literals) is better than having three (traditional, extended
object literals, and minimal classes).  Minimal classes currently
don't carry their weight over extended object literals.  Perhaps they
can evolve into something that carries its weight, but it would be
more than just bikeshedding.


The above is a statement of Waldemar's opinion. Other opinions expressed in the 
discussion aren't recorded in the original notes.


Yes, I marked it as such.   Most of the other opinions had already been 
eloquently expressed on es-discuss.


Alex:  We need to do something.


Allen and Alex also expressed that it is unlikely that any class proposal that 
significantly goes beyond will be accepted for ES6.


I don't remember that.  Probably just missed it, but there wasn't much 
discussion of that option.


Debated without resolution.


In summary:

Waldemar should identify any specific ways that the syntax or semantics of this 
proposal would be future hostile.

Waldemar, Luke, and MarkM expressed varying levels of concern as to whether the 
user benefit of the proposal was sufficient to justify its inclusion. In order 
to resolve this question, both sides of the issue really need to provide better 
supporting evidence for the next meeting.

Allen


Thank you for the corrections.  It's sometimes hard for me to participate and 
take notes at the same time, and I welcome corrections.

Waldemar
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: March 28 meeting notes

2012-03-29 Thread Waldemar Horwat
On Wed, Mar 28, 2012 at 11:47 PM, Erik Corry erik.co...@gmail.com wrote:
 2012/3/29 Waldemar Horwat walde...@google.com:
 Wild debate.
 Poll of who objects to which problem in the proposal:
 1.  AWB, MM, AR, AR, LH, DC, WH, DH
 2.  AWB, MM, AR, AR, LH, BE, DC, WH, DH
 3.  AR, AR, LH, DC, BE
 5.  AWB, MM, AR, AR, LH, BE, DC, WH, DH

 Given that there were two ARs it is good that Andreas always agreed
 with the other one!  :-)

AR = Andreas Rossberg
AR = Alex Russell

Waldemar
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: March 28 meeting notes

2012-03-29 Thread Waldemar Horwat
For me it was a tradeoff. I prefer consistency. The treatment of
'this' was done TCP-style, so I'd have preferred for the other
language constructs to also behave TCP-style. However, practical
gotchas begin to form:

- yield cannot be done TCP-style within the framework of what we're doing.
- what should break, continue, and return do if they're done
TCP-style? They'd throw some exception across to unwind the stack to
the destination of the break, continue, or return. That exception
could be intercepted by try/finally blocks inside functions
dynamically on the stack in between the source and target function,
which then brings up the questions of what it would reflect as, etc.
(If you just make it bypass catch and finally clauses, you create even
worse problems.)

Waldemar
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


March 28 meeting notes

2012-03-28 Thread Waldemar Horwat
Here are my rough notes from today's meeting.

    Waldemar

-
IPR discussion
Intel changed their ECMAScript patent declaration to RANDZ.
Now they wonder why no one else made a RANDZ declaration.
Istvan described the history.
Mozilla is also unhappy with the current state of affairs.  Even
though this instance turned out well, it shows the potential for
problems.
Lots more IPR discussion

Rick Hudson, Stephan Herhut:  River Trail proposal
Proposal will appear on wiki shortly.
Deterministic, except for things such as array reduction order if the
reduction operation is nonassociative.
Parallel arrays are immutable.
Various parallel methods take kernel functions to operate on subcomponents.
MarkM: Are the requirements on the kernel functions to allow them to
be optimized well-defined?
Rick: Yes

var a = new ParallelArray(...)
var b = a.map(function(val) {return val+1;});
Allen: This can be done today sequentially by replacing ParallelArray
with Array.

var b = a.combine(function(i) {return this.get(i)+1;});
var sum = a.reduce(function(a, b) {return a+b;});

Competitors:
OpenCL: GPUs do poor job of context-switching.
WebCL: Too many things undefined.
Browser-provided webworkers (task parallelism).

Waldemar: Can the kernels do nested parallel operations, such as
what's needed for a matrix multiply?
Rick: Yes
Waldemar: What data types can you use?
Rick: This builds on the typed array work.

Some desire for not having too many array types.  Can we unify
ParallelArray with Array?
DaveH: No. Holes alone cause too much of a semantic dissonance.
Waldemar: Would like to unify ParallelArray with typed arrays.
DaveH: No, because ParallelArrays can hold objects and typed arrays can't.
Waldemar: Why not?

Discussion back to knowing which kernels can be optimized.
DaveH, MarkM: Nondeterministic performance degradation is preferable
to nondenterministic refusal to run the code.  This leaves
implementations space to grow.
What about throwing exceptions only for functions that can never be
optimized because they mutate state?

Waldemar:  Is this optimized?  (Note that there are several different
issues here.)
let y = ...;
function f(x) {return x+y;}
a.map(f)

Note that merely reading mutable state is not necessarily a cause for
deoptimization because parallel functions don't run in parallel with
other code, so that state stays fixed for the duration of the parallel
operation.

Allen: Concerned about every closure carrying along sufficient
information to do the kind of abstract interpretation needed to
optimize it as a ParallelArray kernel.
Allen's issue summary:
- Do we want to do this?
- If so, how do we structure the activity (separate standard or part
of ESnext or ESnextnext)?
- Data parallelism or parallelism in general?
Rick: Our goal is to get it into browsers.
Debate about whether to do this for ES6 or ES7.
Brendan, DaveH: Let's just do the work.  The browsers can ship it when
it's ready, regardless of when the standard becomes official.  Need to
get new features user-tested anyway.
Structurally this will part of the mainline ECMAScript work
(es-discuss + meetings), not in separate meetings as was done with
internationalization.

Allen's spec status.

Olivier:  Concerns about latency issues related to module fetches
blocking.  Multiple script tags can fetch their scripts concurrently;
modules have latency problems such as:
<script src=A.js>
<script src=B.js>
<script src=C.js>
vs.
<script src=C.js>
where C.js is:
module A at A.js
module B at B.js
// use A and B

Alex:  Have modules block document.write.

Long debate about asynchronous loading approaches.

Olivier:  To get better latency, you can make your first module load
simple, but after that you'll need to use the AMD syntax again.
DaveH: Change the HTML semantics.
Alex: Evaluate the outer module asynchronously in script blocks such
as the one below, because there is no non-module code after the
module:
<script>
module M {
  module A at A.js
  ...
}
</script>

Olivier: This will error out if followed by the otherwise correct:
<script>
M.foo();
</script>

DaveH proposal: Bifurcate grammar so that scripts (whether in script
tags or in files included from those) cannot contain static module
load statements, not even in syntactically nested modules.  Modules
can include such statements but can only be parsed via dynamic loads.
<script>
System.load(interp.js, function(m) {...});
</script>

interp.js contains:
module stack = stack.js;
export function evaluate(...) {...}

stack.js contains:
export class Stack {...}

The body of an eval would be treated like a script body, not a module
body.  This avoids the tarpit of dealing with synchronous i/o in eval.


For-of design variants:
Variant 1:
import keys from @iter
for let k of keys(o)

Variant 2:
for own (let k in o)

Current way:
Object.keys(o).forEach(function(...){...});

Picked variant 1.

DaveH: The people want Array.of and Array.from and don't want to wait
for the ... syntax.
Array.of(a, b, c) ≡ [a, b, c]
Array.from(a) ≡ [... 

Re: Nested Quasis

2012-02-07 Thread Waldemar Horwat

On 02/06/2012 06:49 PM, Mark S. Miller wrote:

On Mon, Feb 6, 2012 at 3:26 PM, Waldemar Horwat <walde...@google.com> wrote:

On 02/03/2012 08:07 PM, Mark S. Miller wrote:

On Fri, Feb 3, 2012 at 12:58 PM, Waldemar Horwat <walde...@google.com> wrote:

On 02/02/2012 06:27 PM, Waldemar Horwat wrote:

[...]

Note that this is more complex than just having the parser switch 
modes for the treatment of / as division vs. regexp.  Here comments and white 
space are also affected, which in turn can turn the structure of the lexer upside 
down.  The kinds of cases I'm thinking of are:

`abc$/*comment*/identifier//
`
(here we have a /**/ comment and a // comment)


There is no valid quasiHole above, so the whole thing matches a 
QuasiOnly. The QuasiOnly includes all characters between the backticks. Nothing 
is taken to be a comment, just like it wouldn't be if it appeared within a 
string.


According to which lexical grammar?  According to the one you provided 
earlier in this thread, `abc$ is a QuasiOpen token:

  QuasiOpen ::
` QuasiChar* $


Parsing further, /*comment*/identifier is a single identifier token as far 
as the syntactic grammar is concerned.


I was imprecise. I'll try again, using only lexical grammar concepts and making 
explicit where whitespace, comments, etc may appear.

 Token ::
 IdentifierName
 Punctuator
 NumericLiteral
 StringLiteral
 Quasi

 Quasi ::
 QuasiOnly
 QuasiOpen QuasiHole (QuasiMiddle QuasiHole)* QuasiClose

 QuasiOnly ::
 ` QuasiChar* `

 QuasiOpen ::
 ` QuasiChar* $

 QuasiMiddle ::
 QuasiChar* $

 QuasiEnd ::
 QuasiChar* `

 QuasiChar ::
 SourceCharacter *but not one of $ or `*
 $ $
 $ `
 $ \ EscapeSequence

 QuasiHole ::
 Identifier
 { Spacing* (BalancedCurlySequence Spacing*)* }

 BalancedCurlySequence ::
 Token *but not one of { or }*
 { Spacing* (BalancedCurlySequence Spacing*)* }

 Spacing ::
 WhiteSpace
 LineTerminator
 Comment

Within a Quasi, no character sequences are interpreted as whitespace or 
comments except where indicated by Spacing above.


That's going back to the previous approach of treating the whole quasi as a 
single token.  This doesn't work because it's not possible to specify the 
BalancedCurlySequence production as a lexical grammar.  You're confusing the 
lexical with the syntactic grammars here.

Examples of why BalancedCurlySequence doesn't work:

{/[{]/}
(interior parses as five single-character tokens but no matching closing 
bracket)

{ainb}
(interior parses as three tokens: a in b)

{3.toString()}
(interior parses as 3 . toString ( ))

Waldemar
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Nested Quasis

2012-02-07 Thread Waldemar Horwat

On 02/07/2012 02:51 PM, Mark S. Miller wrote:

On Tue, Feb 7, 2012 at 1:52 PM, Waldemar Horwat walde...@google.com 
mailto:walde...@google.com wrote:
[...]

That's going back to the previous approach of treating the whole quasi as a 
single token.  This doesn't work because it's not possible to specify the 
BalancedCurlySequence production as a lexical grammar.  You're confusing the 
lexical with the syntactic grammars here.


Hi Waldemar, I am first of all trying to make clear what we're actually 
proposing, and to resolve any genuine ambiguity. As for how we phrase this 
proposal so that it fits with the rest of our spec language, what do you 
suggest?


Examples of why BalancedCurlySequence doesn't work:

{/[{]/}
(interior parses as five single-character tokens but no matching closing 
bracket)


Yes, and therefore a program consisting of

 `{/[{]/}`

fails to lex and fails to parse. That seems like the correct outcome.


Why?  It's just a regexp.


{ainb}
(interior parses as three tokens: a in b)

Why doesn't it parse as one token: ainb ?


The point is that a in b is one valid parse.  I don't need to show that there 
are no other valid parses.  In fact, there are lots of other valid parses 
because the grammar is very ambiguous.


{3.toString()}
(interior parses as 3 . toString ( ))

Why? That's not what the JS lexer does anywhere else?


That's the problem with the rule you gave.


I don't at all see how you arrived at your conclusions. Is it actually unclear 
what I am trying to say, or are you simply taking issue with how I'm saying it? 
If you find Erik's way of specifying ok, let's just use that. As I just said in 
reply to him, it does capture my actual intent more directly.


The bug is in what you're trying to say, not in how you're saying it.  You're 
confusing the lexical and syntactic grammars.  Due to this confusion you're 
trying lexical productions such as

BalancedCurlySequence ::
Token *but not one of { or }*
{ Spacing* (BalancedCurlySequence Spacing*)* }

To illustrate the problem, consider a simpler lexer rule:

TokenSequence ::
  Token*

This will lex ainb as many things, including for example a in b.  The existing 
lexer resolves it by always chomping the largest sequence of characters to bite 
off as the next lexical token.  Once it accepts a token, it doesn't backtrack 
if it later finds an alternative parse for that token that would have made 
future tokens work better.  On the other hand, if you allow productions such as 
a TokenSequence inside a lexical token, then you get full backtracking and 
ambiguity across the Tokens that make up the TokenSequence because they are all 
part of one lexical token.

I was favorable to splitting up a quasi into multiple tokens, where this 
problem for the most part doesn't arise.  If you want to make the whole quasi 
into one token, then you'll need to solve this problem.

Waldemar
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Nested Quasis

2012-02-06 Thread Waldemar Horwat

On 02/03/2012 08:07 PM, Mark S. Miller wrote:

On Fri, Feb 3, 2012 at 12:58 PM, Waldemar Horwat <walde...@google.com> wrote:

On 02/02/2012 06:27 PM, Waldemar Horwat wrote:

[...]

Note that this is more complex than just having the parser switch modes for 
the treatment of / as division vs. regexp.  Here comments and white space are 
also affected, which in turn can turn the structure of the lexer upside down.  The 
kinds of cases I'm thinking of are:

`abc$/*comment*/identifier//
`
(here we have a /**/ comment and a // comment)


There is no valid quasiHole above, so the whole thing matches a QuasiOnly. The 
QuasiOnly includes all characters between the backticks. Nothing is taken to be 
a comment, just like it wouldn't be if it appeared within a string.


According to which lexical grammar?  According to the one you provided earlier 
in this thread, `abc$ is a QuasiOpen token:

  QuasiOpen ::
` QuasiChar* $


Parsing further, /*comment*/identifier is a single identifier token as far as 
the syntactic grammar is concerned.

Waldemar
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Nested Quasis

2012-02-03 Thread Waldemar Horwat

On 02/02/2012 06:27 PM, Waldemar Horwat wrote:

On 02/02/2012 04:15 PM, Mark S. Miller wrote:



On Thu, Feb 2, 2012 at 2:00 PM, Waldemar Horwat <walde...@google.com> wrote:

OK. This introduces yet another lexing context, in which all productions 
*except* QuasiMiddle and QuasiEnd are disallowed, and white space and comment 
handling is funny. That works if the expressions must be one of the two forms:

$id
${expr}

Is that the exhaustive list, or are we looking at other forms such as $$, $id.id, $id[expr], etc.?



I'll let Mike speak for the details of what he really wants to propose. But 
here are the answers from E:

escapes with the quasi literal text are taken care of by the QuasiChar 
production, much like the existing definition of DoubleStringCharacter:

QuasiChar ::
SourceCharacter but not one of $ or `
$ $
$ `
$ \ EscapeSequence

So that `$$` === $, `$`` === `, and `$\n` === \n, respectively.

Regarding `...$id.id...` and `...$id[expr]...`, only the first id in each case 
in in the quasiHole. All the text afterwards is part of the QuasiClose.


Good. I'll have to think about this a bit more, but there's a chance you 
converted me.


Note that this is more complex than just having the parser switch modes for the 
treatment of / as division vs. regexp.  Here comments and white space are also 
affected, which in turn can turn the structure of the lexer upside down.  The kinds 
of cases I'm thinking of are:

`abc$/*comment*/identifier//
`
(here we have a /**/ comment and a // comment)

`abc$/**/{/**//re//**/}/**/def`
vs:
`abc$/**/{/**//re//**/}/*def`
(in the former all four /**/'s are comments.  Not sure what the latter would 
do.)

`abc$id def`
`abc$ id def`
(the lexer removes spaces before all tokens, so the quasi would not contain a space 
before the def)

Waldemar
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Nested Quasis

2012-02-02 Thread Waldemar Horwat

On 02/02/2012 11:03 AM, Mark S. Miller wrote:

On Thu, Feb 2, 2012 at 5:09 AM, Douglas Crockford <doug...@crockford.com> wrote:

On 11:59 AM, Waldemar Horwat wrote:

On 02/01/2012 11:35 AM, Allen Wirfs-Brock wrote:
Here's one which I couldn't express in a lexer grammar: How to restart
the quasi after an included expression is over.


If quasis are not nested, then the lexical rule is really simple: Just 
match the `s, and within the literal, match the {}s.

I would prefer to keep it simple, unless there is a compelling requirement 
to provide nesting. If we do the simple version now, we could allow the nested 
case in the future.


When we came up with this simplification, I thought I could live with it. 
Now, having tried to write some examples within these restrictions, I find it unusable.

I think we're overestimating the parsing difficulty. I'll let Mike speak for 
the real plan. But I'd like to explain what I do in E, so that we can see that 
none of this need be complicated. It does involve an interaction between the 
parsing and lexing levels, but much less complex than you may expect, and 
comparable (IMO less) than the existing unclean interaction that JS already has:

Lexing grammar has four new token types.

 QuasiOnly ::

 ` QuasiChar* `

 QuasiOpen ::

 ` QuasiChar* $

 QuasiMiddle ::

 QuasiChar*

 QuasiEnd ::

 QuasiChar `

(presumably you forgot a * in QuasiEnd?)

That's not a valid lexer grammar.  The input

  if

is now ambiguous -- it can lex as either a keyword or a QuasiMiddle.  The input

  3+`

will now lex as QuasiEnd, which may or may not be what you want.
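
For concreteness, a minimal sketch (mine, not part of the proposal) of the extra
mode bit such a lexer would need: the same input tokenizes differently depending
on whether we are currently inside a quasi.

function lexFragment(src, insideQuasi) {
  if (insideQuasi) {
    // Everything up to an unescaped backtick is quasi text.
    var m = /^[^`$]*`/.exec(src);
    if (m) return { type: "QuasiEnd", text: m[0] };
    return { type: "QuasiMiddle", text: src };
  }
  // Ordinary token rules: `if` is a keyword, `3` a number, `+` punctuation, etc.
  if (/^if\b/.test(src)) return { type: "Keyword", text: "if" };
  return { type: "Other", text: src };
}

lexFragment("if", true);    // { type: "QuasiMiddle", text: "if" }
lexFragment("if", false);   // { type: "Keyword", text: "if" }
lexFragment("3+`", true);   // { type: "QuasiEnd", text: "3+`" }
lexFragment("3+`", false);  // { type: "Other", text: "3+`" }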

Waldemar
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Nested Quasis

2012-02-02 Thread Waldemar Horwat

OK.  This introduces yet another lexing context, in which all productions 
*except* QuasiMiddle and QuasiEnd are disallowed, and white space and comment 
handling is funny.  That works if the expressions must be one of the two forms:

$id
${expr}

Is that the exhaustive list, or are we looking at other forms such as $$, 
$id.id, $id[expr], etc.?

Waldemar
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Nested Quasis

2012-02-02 Thread Waldemar Horwat

On 02/02/2012 04:15 PM, Mark S. Miller wrote:



On Thu, Feb 2, 2012 at 2:00 PM, Waldemar Horwat walde...@google.com wrote:

OK.  This introduces yet another lexing context, in which all productions 
*except* QuasiMiddle and QuasiEnd are disallowed, and white space and comment 
handling is funny.  That works if the expressions must be one of the two forms:

$id
${expr}

Is that the exhaustive list, or are we looking at other forms such as $$, $id.id, $id[expr], etc.?



I'll let Mike speak for the details of what he really wants to propose. But 
here are the answers from E:

escapes with the quasi literal text are taken care of by the QuasiChar 
production, much like the existing definition of DoubleStringCharacter:

 QuasiChar ::
 SourceCharacter but not one of $ or `
 $ $
 $ `
 $ \ EscapeSequence

So that `$$` === $, `$`` === `, and `$\n` === \n, respectively.

Regarding `...$id.id...` and `...$id[expr]...`, only the first id in each case 
is in the quasiHole. All the text afterwards is part of the QuasiClose.


Good.  I'll have to think about this a bit more, but there's a chance you 
converted me.

Waldemar
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Nested Quasis

2012-02-01 Thread Waldemar Horwat

On 01/31/2012 03:04 PM, Allen Wirfs-Brock wrote:


On Jan 31, 2012, at 2:36 PM, Waldemar Horwat wrote:


On 01/28/2012 02:54 PM, Erik Arvidsson wrote:

Under the open issues for Quasi Literals,
http://wiki.ecmascript.org/doku.php?id=harmony:quasis#nesting , the
topic of nesting is brought up.

After implementing Quasi Literals in Traceur it is clear that
supporting nested quasi literals is easier than not supporting them.
What is the argument for not supporting nesting? Can we resolve this?


This has been hashed out in committee before.  Do you have a solution to the 
grammar problems, such as having a full ECMAScript parser inside the lexer?  
You can't just count parentheses because that breaks regexps.


I would think the solution to this is pretty straightforward.  Basically, a 
Quasi is not a single token.   the grammar in the proposal can almost be read 
that way right now.   It should only take a little cleanup to factor it into a 
pure lexical part and a syntactic part.


I'd love to see this little cleanup.  I thought about it for a while and 
couldn't come up with it myself; I'm not sure it can even be done.

Waldemar
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Nested Quasis

2012-02-01 Thread Waldemar Horwat

On 02/01/2012 11:35 AM, Allen Wirfs-Brock wrote:


On Feb 1, 2012, at 11:28 AM, Waldemar Horwat wrote:


On 01/31/2012 03:04 PM, Allen Wirfs-Brock wrote:


On Jan 31, 2012, at 2:36 PM, Waldemar Horwat wrote:


On 01/28/2012 02:54 PM, Erik Arvidsson wrote:

Under the open issues for Quasi Literals,
http://wiki.ecmascript.org/doku.php?id=harmony:quasis#nesting , the
topic of nesting is brought up.

After implementing Quasi Literals in Traceur it is clear that
supporting nested quasi literals is easier than not supporting them.
What is the argument for not supporting nesting? Can we resolve this?


This has been hashed out in committee before.  Do you have a solution to the 
grammar problems, such as having a full ECMAScript parser inside the lexer?  
You can't just count parentheses because that breaks regexps.


I would think the solution to this is pretty straightforward.  Basically, a 
Quasi is not a single token.   the grammar in the proposal can almost be read 
that way right now.   It should only take a little cleanup to factor it into a 
pure lexical part and a syntactic part.


I'd love to see this little cleanup.  I thought about it for a while and 
couldn't come up with it myself; I'm not sure it can even be done.


Was there some particular issue you were running into?


Here's one which I couldn't express in a lexer grammar: How to restart the 
quasi after an included expression is over.

Waldemar
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Nested Quasis

2012-01-31 Thread Waldemar Horwat

On 01/28/2012 02:54 PM, Erik Arvidsson wrote:

Under the open issues for Quasi Literals,
http://wiki.ecmascript.org/doku.php?id=harmony:quasis#nesting , the
topic of nesting is brought up.

After implementing Quasi Literals in Traceur it is clear that
supporting nested quasi literals is easier than not supporting them.
What is the argument for not supporting nesting? Can we resolve this?


This has been hashed out in committee before.  Do you have a solution to the 
grammar problems, such as having a full ECMAScript parser inside the lexer?  
You can't just count parentheses because that breaks regexps.
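
A concrete instance of the regexp problem, written with the ${} hole syntax ES6
later settled on (illustrative only): the } inside the regular expression
literal is not the end of the hole, so a lexer that merely counts braces would
close the hole too early.

`x${ /[}]/.test("}") }y`   // evaluates to "xtruey"; a brace counter would stop
                           // at the } inside the regexp literal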

Waldemar
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: January 19 meeting notes

2012-01-20 Thread Waldemar Horwat

On 01/19/2012 10:51 PM, Jon Zeppieri wrote:

On Thu, Jan 19, 2012 at 11:02 PM, Brendan Eich bren...@mozilla.org wrote:


Yes kids, this means we are going with MarkM's lambda desugaring from:

https://mail.mozilla.org/pipermail/es-discuss/2008-October/007819.html


Is there a version of this desugaring that deals with recursive
bindings in the initializer expression of the loop?

In my post 
(https://mail.mozilla.org/pipermail/es-discuss/2008-October/007826.html),
I used an example like:

for (let fn = function() { ... fn(); ...};;)

There are other, related cases, like:

   for (let [i, inc] = [0, function() {i++;}]; i < n; inc()) ...

In that earlier post, I wrote that the modifications [to MarkM's
desugaring] needed to make these work are pretty straightforward,
though I can't recall what I had in mind at the time.

Waldemar's option (above) solves the recursive function case, but not
the local-inc case. Even as the loop rebinds i and inc, the latter
will continue to refer to (and increment) the initial binding of i.


Yeah, I know about that.  If the updater is something like i++, you want the 
closures to capture the value of the i before the updater runs.  I didn't bring 
up that case (and we didn't discuss it during the meeting) because, if we 
choose to do this approach, its resolution is pretty straightforward and we 
were short on time.  I recall seeing Mark's desugaring that already handles it. 
 We'll revisit the details in the future.
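
A minimal sketch of the behavior the desugaring is aiming for (and how
per-iteration let bindings eventually behaved in ES6):

var fns = [];
for (let i = 0; i < 3; i++) {
  fns.push(function () { return i; });   // each iteration closes over its own i
}
fns.map(function (f) { return f(); });   // [0, 1, 2], not [3, 3, 3]
// The subtle case discussed above: a closure created during one iteration sees
// the value i had before the updater (i++) ran for the next iteration.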

Waldemar
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Jan 18 meeting notes

2012-01-18 Thread Waldemar Horwat
My rough notes from today's meeting.

Waldemar

--
DaveH: One JavaScript (Versioning debate)
It's inevitable (and necessary) that ES6 will have some breaking changes
around the edges.  How to opt-in?
DaveH's versioning proposal: A module block or file include is the only
in-language ES6 opt-in.  Modules can appear only at the top level or inside
another module.  This avoids the problem of a use strict nested in a
function inside a with.

Brendan:
var obj = get_random_obj();
var x, prop = 42;
with (obj) {
  x = function() {
    "use strict";
    return prop;
  }();
}

Differences between the de facto (traditional) semantics and ES6 (i.e.
semantic changes instead of mere syntax additions):
- ES5 strict changes
- static scoping (really means static checking of variable existence; see
below)
- block-local functions
- block-local const declarations
- tail calls  (yikes - it's a breaking change due to Function.caller)
- typeof null
- completion reform
- let
DaveH: Thinks we may be able to get away with enabling completion reform
and let for all code.

Allen: Would a class be allowed outside a module?
DaveH: Yes, but it would not support static scoping, block-local functions,
etc.
MarkM: Classes should not be allowed in traditional semantics.  If you want
a class, you need a use strict or be inside a module.

Waldemar: Given that you can't split a module into multiple script blocks,
making modules be the only in-language opt-in is untenable.  Programmers
shouldn't have to be forced to use the broken scope/local function/etc.
semantics to split a script into multiple script blocks.
DaveH: Use out-of-language opt-ins.

MarkM: Wants a two-way fork (non-strict vs. strict) instead of a three-way
fork (non-strict vs. strict vs. ES6-in-module).
MarkM: Does a failed assignment inside a non-strict module throw?

DaveH: Most of the differences between strict and non-strict are code bugs.
Luke, MarkM: No.  Their developer colleague experience shows that there are
plenty of changes to non-buggy code that need to be made to make it work
under strict mode.

Allen, Waldemar: It's important to support the use case of someone writing
global code using the clean new semantics and not having to learn about the
obsolete traditional compatibility semantics.

Can use strict be the ES6 opt-in?

What DaveH meant by static scoping (i.e. static checking):
What happens when there's a free variable in a function?

Nonstrict ES5 code:
- Reference error if variable doesn't exist at the time it is read; creates
a global if doesn't exist at the time it is written.

Strict ES5 code:
- Reference error if variable doesn't exist at the time it is read or
written.

Static checking goal for ES6 modules:
- Compilation error if variable doesn't exist at the time module is
compiled.
- Reference error if variable doesn't exist at the time it is read or
written.
(It's possible to get the latter error and yet have the module compile
successfully if someone deletes a global variable outside the module
between when the module is compiled and when the variable is read or
written at run time.)
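
Concretely (illustrative sketch of the nonstrict/strict difference):

function f() { undeclared = 1; }
f();   // nonstrict: silently creates a global named undeclared
function g() { "use strict"; undeclared2 = 1; }
g();   // strict: throws a ReferenceError instead of creating a global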

Discussion of whether it is important to support non-statically-checked
binding in modules.

MarkM: typeof is used to test for the existence of globals.  If the test
succeeds, they then proceed to use the global directly.  This would then be
rejected by static checks.
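
The idiom in question looks like this (illustrative sketch):

if (typeof JSON !== "undefined") {
  JSON.parse("{}");   // direct use of a global that may not exist at compile time
}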

DaveH: Doesn't see a way to do static checking with strict code (due to,
for example, the with case illustrated by Brendan earlier).

MarkM: The cost of having three modes is higher than the cost of not
supporting static checking early errors.

DaveH's new proposal:  Other than static checking, attach the incompatible
ES6 semantics to the strict mode opt-in.  These semantics are
upwards-compatible with ES5 strict mode (but not ES5 non-strict mode).  The
semantics inside a module would be the strict semantics plus static
checking.

Do we want other new ES6 syntax normatively backported to work in
non-strict mode?
Waldemar, MarkM:  Not really.  This requires everyone to be a language
lawyer because it's introducing a very subtle new mode:  ES6 with nonstrict
scoping/const/local semantics.  If an implementation wants to backport, the
existing Chapter 16 exemption already allows it.
DaveH, Brendan:  Yes.  People won't write use strict.  Don't want to
punish people for not opting in.
Alex:  Split the middle.  Backport new ES6 features to non-strict mode
where it makes sense.

Waldemar, DaveH:  Want to make it as easy as possible to make a strict
opt-in for an entire page instead of littering opt-ins inside each script.

Allen:  Backporting increases spec complexity and users' mental tax.  The
main costs are in making lots of divergent scoping legacy issues possible.

Doug:  Modules are sufficient as an opt-in, without the need for a use
strict opt-in.
Waldemar:  No.  Having multiple scripts on a page would require each one to
create its own module, and then issues arise when they want to 

Re: Nov 17 meeting notes

2011-11-18 Thread Waldemar Horwat

On 11/17/2011 10:03 PM, Dominic Cooney wrote:



On Fri, Nov 18, 2011 at 9:40 AM, Waldemar Horwat walde...@google.com wrote:

Array destructuring and length:
let [a, b, c, d, ... r] = {2: 3} <| [1, 2]
Obvious: a is 1; b is 2.
What are c, d, and r?
c = 2.


Nit: This should be c = 3, because {2: 3} means ({2: x} <| [1, 2])[2] is x, 
right?


Correct.  Sorry for the typo.

Waldemar
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Nov 16 meeting notes

2011-11-17 Thread Waldemar Horwat
On Thu, Nov 17, 2011 at 3:49 AM, Axel Rauschmayer a...@rauschma.de wrote:

 Given that Array already uses `length`, it seems like the obvious choice.

length is my choice as well, for the same reason. It's not writable
in Maps and Sets, so the concerns about the semantics of writing it
don't apply.
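
(The name that eventually shipped is size, defined as a prototype getter with
no setter; for illustration:)

new Map([[1, "a"], [2, "b"]]).size;                                  // 2
typeof Object.getOwnPropertyDescriptor(Map.prototype, "size").get;  // "function"
Object.getOwnPropertyDescriptor(Map.prototype, "size").set;         // undefined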

Waldemar
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Nov 16 meeting notes

2011-11-17 Thread Waldemar Horwat
On Thu, Nov 17, 2011 at 12:12 AM, Gavin Barraclough
barraclo...@apple.com wrote:
 On Nov 16, 2011, at 5:19 PM, Waldemar Horwat wrote:

 Map/Set:
 Size property should be a getter property with no matching setter.  It's
 defined on the property.
 What is its name?  size, count, or length?  Decide on es-discuss.

 Hi Waldemar,
 I'm unclear what It's defined on the property means, should this read
 It's defined on the prototype?
 thanks,
 G.

Yes, that's a typo that should be prototype.

Waldemar
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Nov 17 meeting notes

2011-11-17 Thread Waldemar Horwat
Array destructuring and length:
let [a, b, c, d, ... r] = {2: 3} <| [1, 2]
Obvious: a is 1; b is 2.
What are c, d, and r?
c = 2.
d = undefined.
r = empty.

Fixed property destructuring doesn't rely on length.
Vararg r destructuring uses length.
The semantics of length will match that of slice.

Allen: We may upgrade ToUint32 to ToInteger in various array semantics.

What should the semantics be if we allow fixed properties in the
middle of a destructuring?
[a, ... r, b] = [42]
What are the values of a, r, and b?
a = 42
r = []
b = undefined

Brendan:
[a, ... r, b] = [, 43] <| [42]
What are the values of a, r, and b?
a = 42
r = []
b = 43 or undefined?
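
(For reference, the destructuring ES6 eventually shipped is iterator-based and
only allows a rest element in the final position, so the rest-in-the-middle
form above was never adopted. A quick sketch of the shipped behavior:)

let [a, ...r] = [42];        // a === 42, r is []
// let [a, ...r, b] = [42];  // SyntaxError in standard ES6: rest must come last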


Array.from discussion:  What happens if you subclass Array?
Subarray = Array <| function() {...}
Subarray.from(arraylike)

DaveH:
Array.from = function(x) {
  var result = new this();
  for (var i = 0, n = x.length; i < n; i++)
    result[i] = x[i];
  return result;
}

Array.of = function(... x) {
  var result = new this();
  for (var i = 0, n = x.length; i < n; i++)
    result[i] = x[i];
  return result;
}

The above should skip holes.

MarkM: Now these functions are this-sensitive and will fail if
extracted and called indirectly.
DaveH: Replace 'new this()' with 'new (this || Array)()' above.
MarkM: Of all of the static methods in ES5, not one of them is
this-sensitive.  The simple extraction of a static method fails,
thereby making static methods not be first-class.  If Math.sin did
this, you couldn't map it over an array.  With this, you can't map
Array.of over an array.
Doug: Concerned about the use of the word 'of'; confusion with for-of.
Wild debate over class hierarchies and class-side inheritance.
Deferred Array.from and Array.of due to concerns over this-sensitivity
until we figure out a proper class-side abstraction mechanism.

Array.from(a) is superfluous because it's expressed even simpler as
[... a].  DaveH withdrew it.
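
(What eventually shipped is close to DaveH's second sketch: Array.of and
Array.from consult `this` when it is a constructor and fall back to Array
otherwise, so subclasses work and an extracted reference still does something
sensible:)

class Subarray extends Array {}
Subarray.of(1, 2, 3) instanceof Subarray;   // true
var of = Array.of;
of(1, 2, 3);                                // a plain Array [1, 2, 3]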

Array.pushAll:
Debate over whether this is a workaround for poor implementations of
using Array.push with spread or apply, or whether we should directly
have a second set of methods.
Brendan: Let's implement spread and optimize it first.  Later we can
always add pushAll if it's needed.  This isn't ... paving cowpaths;
this is a mountain goat that went too high.

DaveH: Very opposed to .{ .

Cut 'fine points of for-of' from this meeting due to time.

Batch assignment:
Is this ES6 or ES7?  This is new, not discussed in May.
Can't discuss batch assignment without also discussing .{ .
Was .{ part of the May object literal proposal?
MarkM: Two kinds of .{ collisions to worry about.  The object literal
just statically disallows them.  .{ can have run-time property
collisions.
DaveH: Like the functionality but not the .{ syntax.

Example from .= page:

let element = document.querySelector('...');
element.{
  textContent: 'Hello world'
}.{
  style.{
color: 'red',
backgroundColor: 'pink'
  }
}.{  // back on element
  onclick: alert
};

Waldemar: Can you replace }.{'s with commas?  Brendan: Not in general.
 }.{'s do property redefinitions on property name conflicts, while
commas produce errors on conflicts.
Waldemar: Can you distribute the middle section above into the following?
}.{
  style.{color: 'red'},
  style.{backgroundColor: 'pink'}
}.{  // back on element
Answer: Maybe.
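
(The .{ operator was never adopted; the closest shipped equivalent of the
example above is Object.assign with explicit nesting -- illustrative only:)

var element = document.querySelector('...');
Object.assign(element, { textContent: 'Hello world', onclick: alert });
Object.assign(element.style, { color: 'red', backgroundColor: 'pink' });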

DaveH: Bind operator syntax strawman.
softBind strawman.
[A bunch of different discussions going on simultaneously, which I
couldn't track.]


Direct Proxies slide show.

Discussion about what hidden or implementation properties are passed
from the target through a direct proxy and how a proxy handler would
find out about all of them.  The author of a proxy needs to keep up to
date about picking the correct target as we add hidden properties.
For example, to make an Array-like proxy object, a proxy should start
with an Array instance as the proxy target.  Same with Date, etc.
Allen: There's no way to bootstrap -- can't define an Array-like proxy
if you don't have an Array target to start with.
Discussion about proxying the [[class]] name.

No more fundamental vs. derived traps.  (Almost) all traps default to
the target object's behavior if not overridden.  An exception is the
construct trap, which by default calls the call trap instead of
forwarding to the target object.
Allen: Should just pass through to the target.
Allen worried about other derived traps.
Waldemar: Always defaulting to the target will prevent us from ever
defining new non-leaf traps in the future, as that would break
existing proxies.  For example, if we have a current trap API where
the proxy defines only the trap GET, and we later wish to evolve the
language to refactor the API to call the derived trap HAS followed by
GET, where an object's HAS is defined in terms of GET, then defaulting
to the target will break proxies because HAS will invoke the target's
GET instead of the proxy's GET.
MarkM: This is forwarding vs. delegation.  The issue applies to many
traps, not just call.  All 
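
(For reference, in the Proxy API that eventually shipped, traps omitted from
the handler do forward to the target, and `has` is not derived from `get`; a
small sketch:)

var target = [1, 2, 3];
var p = new Proxy(target, {
  get: function (t, key, receiver) {
    return key in t ? Reflect.get(t, key, receiver) : "missing";
  }
});
p[0];            // 1
p[99];           // "missing"
"length" in p;   // true -- the absent has trap forwards to the target and does
                 // not consult the handler's get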

Re: The class operator: a bridge between object and function exemplers

2011-11-14 Thread Waldemar Horwat

On 11/14/2011 12:16 PM, Allen Wirfs-Brock wrote:

UnaryExpression :
    class UnaryExpression
...

The semantics are:
1. If UnaryExpression is undefined or null, return the value of UnaryExpression.
2. Let obj be ToObject(UnaryExpression).
3. Return the result of calling the [[Get]] internal method of obj with argument 'constructor'.
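
Read literally, those three steps amount to the following function (the
unary-operator syntax is the proposal's; this is just a restatement for
clarity):

function classOperator(value) {
  if (value === undefined || value === null) return value;  // step 1
  var obj = Object(value);                                  // step 2: ToObject
  return obj.constructor;                                   // step 3: [[Get]] 'constructor'
}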

Using the class Operator

Prefixing an object exemplar with the class operator turns it into a class 
exemplar:

let Point = class {
x:0,
y,0,
constructor(x,y} {
this.x=x;
this.y=y;
}
};


I don't understand what you're accomplishing here.  The above appears to be 
equivalent (modulo the typos) to:

let Point = function(x, y) {
  this.x = x;
  this.y = y;
};

The other stuff inside the class is ignored.

Waldemar
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss

