Re: for-in evaluation order

2010-12-27 Thread David-Sarah Hopwood
On 2010-12-27 19:15, David Herman wrote:
 Dave, under the spec for Operation OwnProperties(obj) step 1, you don't 
 explicitly state that these index properties are to be enumerated in numeric 
 order. An oversight?
 
 Oops, yes, thanks for catching that. I've updated the wiki.

The given algorithm seems to order index-properties before non-index
properties for each object on the prototype chain, rather than ordering
all index-properties (for all objects on the chain) before all non-index
properties. I'm not disagreeing with this choice, but was it intentional?

Nitpick: the '[ props, ...' notation could be misinterpreted as saying that
the previous 'props' list is the first element of the new list. It should be
something like 'props ++ [...'.

-- 
David-Sarah Hopwood  ⚥  http://davidsarah.livejournal.com





Apology (was: New private names proposal)

2010-12-23 Thread David-Sarah Hopwood
On 2010-12-23 06:01, David-Sarah Hopwood wrote:
 On 2010-12-23 05:08, Brendan Eich wrote:
 You seem to have a problem owning up to mistakes.
 
 *I* have a problem owning up to mistakes?
 
 https://secure.wikimedia.org/wikipedia/en/wiki/Psychological_projection

I'm sorry, that was uncalled for. I retract any suggestion that Brendan
is engaging in psychological projection. I should not have responded to
his ad hominem, and apologize to the group for doing so.

-- 
David-Sarah Hopwood  ⚥  http://davidsarah.livejournal.com





Re: New private names proposal

2010-12-23 Thread David-Sarah Hopwood
On 2010-12-23 13:53, Kevin Smith wrote:
 If I might ask a side-question:  what's the value in making an object
 non-extensible in ES5?  I understand the value of making properties
 non-configurable or non-writable, but I don't yet see a reason to prevent
 extensions.

Suppose that the object inherits properties from a parent on the prototype
chain. Then extending the object could override those properties, even
if they are non-configurable or non-writable on the parent. So making an
object non-extensible is necessary in order to make inherited properties
effectively non-configurable and/or non-writable.
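
To make that concrete, here is a minimal ES5 sketch (mine, not from either
proposal) of the shadowing that non-extensibility prevents:

var parent = Object.create(Object.prototype, {
    limit: { value: 10, writable: false, configurable: false }
});

var child = Object.create(parent);
Object.defineProperty(child, 'limit', { value: 99 });   // shadows the inherited property
console.log(child.limit);    // 99 -- the "non-writable" inherited value is overridden

var child2 = Object.create(parent);
Object.preventExtensions(child2);
// Object.defineProperty(child2, 'limit', { value: 99 });  // TypeError: child2 is not extensible
console.log(child2.limit);   // 10 -- the inherited value remains effective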

-- 
David-Sarah Hopwood  ⚥  http://davidsarah.livejournal.com





Name syntax

2010-12-23 Thread David-Sarah Hopwood
On 2010-12-23 16:36, thaddee yann tyl wrote:
 The private names proposal has a lot of good ideas, but their use is
 not obvious.
 The reasons I see for that are:
 The private a; declaration:
 * changes meaning of all obj.a in scope
 * looks like a scoped variable, not a private element of an object
 * is not generative-looking
 ... which makes it harder to understand, and use.

I agree with these criticisms.

 I find that David Herman's proposal fixes those issues:
 But your idea suggests yet another alternative worth adding to our growing 
 pantheon. We could allow for the scoping of private names, but always 
 require them to be prefixed by the sigil. This way there's no possibility of 
 mixing up public and private names. So to use an earlier example from this 
 thread (originally suggested to me by Allen):

  function Point(x, y) {
      private #x, #y;
      this.#x = x;
      this.#y = y;
  }
 
 I understand that the number sign gets really heavy and annoying after
 some time. As a result, I suggest a simpler syntax, 'private .secret;':
 
[...]
   private .a;
   k..a = o;

I find this less readable, and I think it would be easy to miss the
difference between . and .. in larger expressions. Also, the .. operator
is used in other languages for ranges.

In any case, let's not bikeshed about this yet. Either .# or @ is fine
for discussion. ('.#' is perhaps more suited to being viewed as a
variant of '.' with a private field selector, and '@' as an operator
distinct from '.')

-- 
David-Sarah Hopwood  ⚥  http://davidsarah.livejournal.com





Name syntax

2010-12-23 Thread David-Sarah Hopwood
On 2010-12-23 20:44, Brendan Eich wrote:
 On Dec 23, 2010, at 11:59 AM, Dmitry A. Soshnikov wrote:
 On 23.12.2010 22:39, Brendan Eich wrote:
 
 The .. is wanted by other extensions, and already used by ECMA-357
 (E4X), FWIW.
 
 JFTR: and also in ECMA-262:
 
 1..toString()
 
 Yes, although if we added any .. as in Ruby or CoffeeScript it would, by
 the maximal munch principle, be tokenized instead of two dots.

You'd actually have to also change the StrUnsignedDecimalLiteral production,
since that munches the first dot before the '..' is tokenized.
Anyway, the use of '..' in E4X and as a range operator in other languages is
sufficient reason not to use it here.
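
A small illustration of why, using today's grammar (my example):

// '1..toString()' is legal today: the scanner munches '1.' as a numeric
// literal, leaving '.toString()' as a property access on the number 1.
1..toString();       // "1"
// 1.toString();     // SyntaxError: '1.' is consumed as a literal, then 'toString' is unexpected

// So even if '..' were added as a token, the numeric literal would still
// grab the first dot here unless the literal grammar were changed too --
// which is the point about having to touch that production as well.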

-- 
David-Sarah Hopwood  ⚥  http://davidsarah.livejournal.com





Re: New private names proposal

2010-12-23 Thread David-Sarah Hopwood
On 2010-12-23 21:02, Brendan Eich wrote:
 On Dec 23, 2010, at 12:11 PM, Mark S. Miller wrote:
 
 You've said this apples to oranges thing many times. I just don't get it.
 
 You've read the recent messages where it became clear only [], not the . 
 operator,
 was ever mooted for soft fields on the wiki.

That's false; the examples at
http://wiki.ecmascript.org/doku.php?id=strawman:names_vs_soft_fields
show otherwise.

 And how [] can't be a local transformation, [...]

Indeed it can't, but I don't see the relevance of that to the
'apples to oranges thing'. We don't know whether [] will be changed
at all. (In the proposal to add a @ or .# operator, it isn't.)

-- 
David-Sarah Hopwood  ⚥  http://davidsarah.livejournal.com





Re: New private names proposal

2010-12-23 Thread David-Sarah Hopwood
On 2010-12-23 23:51, Allen Wirfs-Brock wrote:
 I believe that your  camp wants to think of soft fields, stored in a
 side-table, as extensions of an object.  My camp thinks of such
 side-tables as a means of recording information about an object without
 actually extending the object.

These are obviously alternative views of the same thing -- as MarkM and
I have made clear throughout. It really doesn't matter whether you view
the object as having been extended or not, if that is semantically
unobservable.

(And I don't like people trying to tell me what camp I'm in, thank you.)

-- 
David-Sarah Hopwood  ⚥  http://davidsarah.livejournal.com





Re: New private names proposal

2010-12-23 Thread David-Sarah Hopwood
On 2010-12-23 23:55, David Herman wrote:
 On Dec 23, 2010, at 4:27 PM, David-Sarah Hopwood wrote:
 
 We don't know whether [] will be changed
 at all. (In the proposal to add a @ or .# operator, it isn't.)
 
 Hm, this looks like a pretty serious misunderstanding of the private names 
 proposal.

I was not referring to the private names proposal, but to the more recent
suggestions from various people to add a @ or .# operator instead of
changing []. (I should not have referred to those suggestions as a proposal.
Careless editing, sorry.)

 In every variant of the proposal, the object model is changed so that private 
 name
 values are allowable property keys. This means that in every variant of the
 private names proposal, [] can't be defined via a local transformation. This
 has *nothing* to do with the @ or .# operators.

Changes to [] are not needed if @ or .# is added (or if [# ] is added).

-- 
David-Sarah Hopwood  ⚥  http://davidsarah.livejournal.com





Re: New private names proposal

2010-12-23 Thread David-Sarah Hopwood
On 2010-12-24 00:02, Oliver Hunt wrote:
 As a question how do soft fields/private names interact with an object
 that has had preventExtensions called on it?

For soft fields: there is no interaction, a new soft field can be added
to an object on which preventExtensions has been called.

For private names: new names are prevented from being added.

This is a useful feature of soft fields. There is no loss of security
or encapsulation as a result, for the same reason that there is not
for adding a soft field to a frozen object. (Freezing is equivalent to
preventing extensions, marking all properties as non-configurable, and
marking all data properties as non-writable.)
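
A rough sketch of why nothing is lost, using a WeakMap (from the harmony
strawman) directly as the side table rather than the SoftField wrapper:

// Associating private state with a frozen (or non-extensible) object via a
// side table: nothing observable about the object itself changes.
var secrets = new WeakMap();     // held privately by the abstraction's code

var obj = Object.freeze({ who: 'frozen object' });

secrets.set(obj, { balance: 42 });              // fine: obj itself is untouched
console.log(Object.isFrozen(obj));              // true
console.log(Object.getOwnPropertyNames(obj));   // ["who"] -- no new property appears
console.log(secrets.get(obj).balance);          // 42, but only for code holding 'secrets'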

-- 
David-Sarah Hopwood  ⚥  http://davidsarah.livejournal.com





Re: New private names proposal

2010-12-23 Thread David-Sarah Hopwood
On 2010-12-24 00:11, David Herman wrote:
 On Dec 23, 2010, at 5:03 PM, David-Sarah Hopwood wrote:
 
 On 2010-12-23 23:55, David Herman wrote:
 On Dec 23, 2010, at 4:27 PM, David-Sarah Hopwood wrote:
 
 We don't know whether [] will be changed at all. (In the proposal to
 add a @ or .# operator, it isn't.)
 
 Hm, this looks like a pretty serious misunderstanding of the private
 names proposal.
 
 I was not referring to the private names proposal, but to the more
 recent suggestions from various people to add a @ or .# operator instead
 of changing []. (I should not have referred to those suggestions as a
 proposal. Careless editing, sorry.)
 
 a) I don't recall seeing people suggesting adding a .# operator instead of 
 changing '[]', but rather instead of changing '.'.

Lasse Reichstein did so:

# Mark Miller wrote:
# Currently is JS, x['foo'] and x.foo are precisely identical in all
# contexts. This regularity helps understandability. The terseness
# difference above is not an adequate reason to sacrifice it.
#
# Agree. I would prefer something like x.#foo to make it obvious that it's
# not the same as x.foo (also so you can write both in the same scope), and
# use var bar = #foo /* or just foo */; x[#bar] for computed private name
# lookup. I.e. effectively introducing .#, [# as alternatives to just .
# or [.

MarkM responded with a similar proposal, using a single operator:

# The basic idea is, since we're considering a sigil anyway, and
# since .# and [# would both treat the thing to their right as something
# to be evaluated, why not turn the sigil into an infix operator instead?
# Then it can be used as .-like []-like without extra notation or
# being too closely confused with . or [] themselves. [...]

 b) You're shifting the terms of the debate anyway. You can't decide for
 yourself what you want others to propose so you can argue with your
 favorite strawman.

As shown above, I haven't.

-- 
David-Sarah Hopwood  ⚥  http://davidsarah.livejournal.com





Re: New private names proposal

2010-12-23 Thread David-Sarah Hopwood
On 2010-12-24 00:39, Brendan Eich wrote:
 On Dec 23, 2010, at 3:27 PM, David-Sarah Hopwood wrote:
 On 2010-12-23 21:02, Brendan Eich wrote:
 On Dec 23, 2010, at 12:11 PM, Mark S. Miller wrote:
 
 You've said this apples to oranges thing many times. I just don't
 get it.
 
 You've read the recent messages where it became clear only [], not the
 . operator, was ever mooted for soft fields on the wiki.
 
 That's false; the examples at 
 http://wiki.ecmascript.org/doku.php?id=strawman:names_vs_soft_fields 
 show otherwise.
 
 You're right, I missed that. Thanks for pointing it out, but brace yourself
 for some push-back.
 
 The longstanding wiki page (created 08/14) that I was referring to is:
 
 http://wiki.ecmascript.org/doku.php?id=strawman:inherited_explicit_soft_fields#can_we_subsume_names

  The one you cite is a recent clone (started 12/12) of Allen's examples,
 with translations where possible to soft fields.
 
 Since the new page is a clone of Allen's private_names strawman, of course
 it clones the private x examples and shows . and :-in-literal being
 used.
 
 It's not clear how this new page helps eliminate private_names as a
 proposal.

What it does is adapt the private_names syntax to inherited explicit soft
fields, exactly as it claims to do. That removes a lot (not all, since some
is associated with the syntax) of the specification complexity from that
proposal. Because of the soft field semantics, the resulting mechanism
provides strong rather than weak encapsulation.

 Note in particular the place in this new page where Mark does not create a
 polymorphic property access function:
 
 Enabling the use of [ ] with soft fields requires the kludge explained at
 can we subsume names. If we can agree that this polymorphism does not need
 such direct support, 
 
 We cannot agree, that's the point! Orange, not apple, wanted and proposed
 by private names. Polymorphism wanted. Difference!

It is not "comparing apples and oranges" to suggest that a specific
subfeature might not be worth its complexity. The phrase "comparing apples
and oranges" specifically refers to comparing things that are so different
as to be incomparable.

Note that the polymorphism referred to (being able to look up either a
private name or a string property) is also achieved by the @ or .# operator
approach, but without losing the x["id"] ≡ x.id equivalence, and while being
more explicit that this is a new kind of lookup.

 So even with . as well as [] thanks to this recent page, we still have
 observable let's say encapsulation differences between the proposals.

Of course, that's the main reason why I favour the soft fields semantics,
because it provides strong encapsulation.

 Moreover, since you are citing a recently added page, and (below) also
 adducing mere es-discuss sketching of novelties such as @ as somehow moving
 the proposals forward, even though @ has not yet been proposed in the wiki,
 I argue that fair play requires you to keep current in all respects: we
 proponents of both weak maps etc. *and* private names have argued recently
 that soft fields should *not* have syntax looking anything like property
 access.

Yes, I know. I don't know why you are determined to paint me as having
some kind of ideological dispute with the proponents of private names,
as opposed to merely having strong technical objections to that proposal.

 Shifting the terms of the debate mid-conversation (across recent weeks,
 with new pages alongside older ones, and new messages in the list) cuts
 both ways.
 
 Our rejection of property syntax for soft fields makes this whole map one
 (subset, in the case of private names) syntax to two (subset, in the case
 of private names) semantics argument obsolete, at least when it comes to
 property access syntax. So, can we move past this?

Yes, please! (I barely have any idea what you're talking about when you
refer to shifting the terms of the debate. Isn't that just adapting to
the current context of discussion?)

-- 
David-Sarah Hopwood  ⚥  http://davidsarah.livejournal.com





Re: New private names proposal

2010-12-22 Thread David-Sarah Hopwood
On 2010-12-22 07:57, Brendan Eich wrote:
 On Dec 21, 2010, at 10:22 PM, David-Sarah Hopwood wrote:
 On 2010-12-21 22:12, Brendan Eich wrote:

 It's tiresome to argue by special pleading that one extension or 
 transformation (including generated symbols) is more complex, and
 less explanatory, while another is less so, when the judgment is
 completely subjective. And the absolutism about how it's *always*
 better in every instance to use strong encapsulation is, well,
 absolutist (i.e., wrong).
 
 I gave clear technical arguments in that post. If you want to disagree
 with them, disagree with specific arguments, rather than painting me as
 an absolutist. (I'm not.)
 
 Here's a quote: "As you can probably tell, I'm not much impressed by this
 counterargument. It's a viewpoint that favours short-termism and code that
 works by accident, rather than code that reliably works by design."
 
 How do you expect anyone to respond? By endorsing bugs or programming based
 on partial knowledge and incomplete understanding? Yet the real world
 doesn't leave us the option to be perfect very often. This is what I mean
 by absolutism.

That isn't what absolutism generally means, so you could have been clearer.

What I said, paraphrasing, is that weak encapsulation favours code that
doesn't work reliably in cases where the encapsulation is bypassed. Also,
that if the encapsulation is never bypassed then it didn't need to be weak.
What's wrong with this argument? Calling it absolutist is just throwing
around insults, as far as I'm concerned.

 When prototyping, weak or even no encapsulation is often the
 right thing, but you have to be careful with prototypes that get pressed
 into products too quickly (I should know). JS is used to prototype all the
 time.

OK, let's consider prototyping. In the soft fields proposal, a programmer
could temporarily set a variable that would otherwise have held a soft
field to a string. All accesses via that variable will work, but so will
encapsulation-breaking accesses via the string name. Then before we
release the code, we can put back the soft field (requiring only minimal
code changes) and remove any remaining encapsulation-breaking accesses.
Does this address the issue?
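
Roughly, and assuming the bracket sugar for soft fields sketched on the wiki
(so that obj[key] works whether 'key' holds a string or a SoftField), the
workflow I have in mind is:

var _balance = 'balance';   // during prototyping: an ordinary string key
function makeAccount(start) {
    var acct = {};
    acct[_balance] = start;   // works today, and anything can poke at it
    return acct;
}

// Before release, one line changes:
//     var _balance = SoftField();
// All accesses written as acct[_balance] keep working (via the proposed
// bracket sugar), while stray string-based accesses like acct['balance']
// stop working and get flushed out.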

 So rather than argue for strong encapsulation by setting up a straw man
 counterargument you then are not much impressed by, decrying short-termism,
 etc., I think it would be much more productive to try on others' hats and
 model their concerns, including for usable syntax.

Weak vs strong encapsulation is mostly independent of syntax. At least,
all of the syntaxes that have been proposed so far can provide either
strong or weak encapsulation, depending on the semantics.

 There is a separate discussion to be had about whether the form of 
 executable specification MarkM has used (not to be confused with the 
 semantics) is the best form to use for any final spec. Personally, I like
 this form of specification: I think it is clear, concise (which aids 
 holding the full specification of a feature in short-term memory), easy 
 to reason about relative to other approaches, useful for prototyping, and
 useful for testing.
 
 I don't mind at all that the correspondence with the implementation is
 less direct than it would be in a more operational style; implementors 
 often need to handle less direct mappings than this, and I don't expect a
 language specification to be a literal description of how a language is 
 implemented in general (excluding naive reference implementations).
 
 Once again, you've argued about what you like, with subjective statements
 such as "I don't mind".

Yes, I try very hard not to misrepresent opinions as facts.

 With inherited soft fields, the ability to extend frozen objects
 with private fields is an abstraction leak (and a feature, I agree).
 
 How is it an abstraction leak? The abstraction is designed to allow
 this; it's not an accident (I'm fairly sure, without mind-reading
 MarkM).
 
 If I give you an object but I don't want you adding fields to it, what do
 I do? Freezing works with private names, but it does not with soft fields.

What's your intended goal in preventing adding fields to the object?

If the goal is security or encapsulation, then freezing the object is
sufficient. If I add the field in a side table, that does not affect your
use of the object. I could do the same thing with aWeakMap.set(obj, value).

If the goal is concurrency-safety, then we probably need to have a
concurrency model in mind before discussing this in detail. However,
adding fields in a side table does not affect the concurrency-safety
of your code that does not have access to the table or those fields.
It might affect the concurrency-safety of my code that does have that
access; so I shouldn't add new fields and rely on my view of the object
to be concurrency-safe just because the object is frozen. This doesn't
seem like an onerous or impractical restriction.

 With private names, the inability

Re: New private names proposal

2010-12-22 Thread David-Sarah Hopwood
On 2010-12-22 18:59, Brendan Eich wrote:
 On Dec 21, 2010, at 11:58 PM, Brendan Eich wrote:
 
 ... which is strictly weaker, more complex, and less explanatory.
 
 So is a transposed get from an inherited soft field. Soft fields
 change the way square brackets work in JS, for Pete's sake!
 
 They do not.
 
 Ok, then I'm arguing with someone else on that point.
 
 Many of us were wondering where my (shared) square brackets change for soft
 fields memory came from, and re-reading the numerous wiki pages. Finally we
 found what I had recollected in writing the above:
 
 http://wiki.ecmascript.org/doku.php?id=strawman:inherited_explicit_soft_fields#can_we_subsume_names

 There, after Mark's demurral ("I (MarkM) do not like the sugar proposed
 for Names, ..."), is this:
 
 If we wish to adopt the same sugar for soft fields instead, then
 
 private key; ...  base[key] ...
 
 could expand to
 
 const key = SoftField(); ... key.get(base) 
 
 If there are remaining benefits of Names not addressed above, here would be
 a good place to list them. If we can come to consensus that soft fields do
 subsume Names, then “Name” becomes a possible choice for the name of the
 “SoftField” constructor. -
 
 This is clearly pitting soft fields against names in full, including a
 change to JS's square bracket syntax as sugar.
 
 The hedging via If and the demurral do not remove this from the soft
 fields side of the death match I've been decrying, indeed they add to it.
 This is making a case for dropping names in full in favor of soft fields in
 (mostly -- no dot operator or object literal support) comparable fullness.
 
 For the record, and in case there's a next time: I don't think it's good
 form to chop up proposals made on the wiki (inherited soft fields, explicit
 soft fields, inherited explicit soft fields), put arguments about
 orthogonal syntax issues (the demurral even says orthogonal), and then
 use only some of the pieces to refute someone's argument based on the
 entirety of the wiki'ed work and the clear thrust of that work: to get rid
 of private names with something that is not a complete replacement.
 
 It doesn't really matter what one correspondent among many wants (IOW, it's
 not all about you [or me]). The argument is about a shared resource, the
 wiki at http://wiki.ecmascript.org/, and the strawman proposals on it that
 are advancing a particular idea (soft fields, with variations *and
 syntax*), and by doing so are trying to get rid of a different proposal
 (private names).
 
 In arguing about this, I have this bait-and-switch sense that I'm being
 told A+B, then when I argue in reply against B, I'm told no, no! only A!.
 (Cheat sheet: A is soft fields, B is transposed square bracket syntax for
 them.)

This criticism is baseless and without merit.

In order to compare the two semantic proposals,
http://wiki.ecmascript.org/doku.php?id=strawman:inherited_explicit_soft_fields#can_we_subsume_names
considers what they would look like with the same syntax. In that case,
soft fields are semantically simpler.

This should not in any way preclude also criticising the syntax.

If your criticisms of soft fields plus the change to [] depended on the fact
that the syntax change was layered on soft fields, then you might have a
point. But in fact those criticisms apply to the syntax change regardless
of which proposal it is layered on.

There was and is no bait and switch.

 I'm not saying this was any one person's malicious trick. But it's clear
 now what happened; the wiki and list record and diagram how it played out.
 It leaves a bad taste.

You have willfully assumed bad faith, despite clear explanations. That
certainly does leave a bad taste.

-- 
David-Sarah Hopwood  ⚥  http://davidsarah.livejournal.com





Re: New private names proposal

2010-12-22 Thread David-Sarah Hopwood
On 2010-12-23 00:40, Brendan Eich wrote:
 On Dec 22, 2010, at 2:56 PM, David-Sarah Hopwood wrote:
 
 What I said, paraphrasing, is that weak encapsulation favours code that 
 doesn't work reliably in cases where the encapsulation is bypassed.
 Also, that if the encapsulation is never bypassed then it didn't need to
 be weak. What's wrong with this argument?
 
 The reliability point is fine but not absolute. Sometimes reliability is
 not primary.
 
 You may disagree, but every developer knows this who has had to meet a
 deadline, where missing the deadline meant nothing further would be
 developed at all, while hitting the deadline in a hurry, say without strong
 encapsulation, meant there was time to work on stronger encapsulation and
 other means to achieve the end of greater reliability -- but *later*, after
 the deadline. At the deadline, the demo went off even though reliability
 bugs were lurking. They did not bite.

How precisely would weak encapsulation (specifically, a mechanism that is
weak because of the reflection and proxy trapping loopholes) help them to
meet their deadline?

(I don't find your inspector example compelling, for reasons given below.)

 The second part, asserting that if the encapsulation was never bypassed
 then it didn't need to be weak, as if that implies it might as well have
 been strong, assumes that strong is worth its costs vs. the (not needed, by
 the hypothesis) benefits.
 
 But that's not obviously true, because strong encapsulation does have costs
 as well as benefits.

What costs are you talking about?

 - Not specification complexity, because the proposal that has the simplest
   spec complexity so far (soft fields, either with or without syntax changes)
   provides strong encapsulation.

 - Not runtime performance, because the strength of encapsulation makes no
   difference to that.

 - Not syntactic convenience, because there exist both strong-encapsulation
   and weak-encapsulation proposals with the same syntax.

 - Not implementation complexity, because that's roughly similar.

So, what costs? It is not an axiom that proposals with any given desirable
property have greater cost (in any dimension) than proposals without that
property.

 Yet your argument tries to say strong encapsulation is absolutely always
 worth it, since either it was needed for reliability, or else it wouldn't
 have hurt. This completely avoids the economic trade-offs -- the costs over
 time. Strong can hurt if it is unnecessary.

How precisely can it hurt, relative to using the same mechanism with
loopholes?

 To be utterly concrete in the current debate: I'm prototyping something in
 a browser-based same-origin system that already uses plain old JS objects
 with properties. The system also has an inspector written in JS.

[snip example in which the only problem is that the inspector doesn't show
private fields because it is using getOwnPropertyNames]

Inspectors can bypass encapsulation regardless of the language spec.
Specifically, an inspector that supports Harmony can see that there is a
declaration of a private variable x, and show that field on any objects
that are being inspected. It can also display the side table showing the
value of x for all objects that have that field.

Disadvantages: slightly greater implementation complexity in the inspector,
and lack of compatibility with existing inspectors that don't explicitly
support Harmony.

Note that inspectors for JS existed prior to the addition of
getOwnPropertyNames, so that is merely a convenience and a way to avoid
implementation dependencies in the inspector.

 With soft fields, one has to write strictly more code:

Nope, see above.

 I've also stated clearly *why* I want strong encapsulation, for both 
 security and software engineering reasons. To be honest, I do not know 
 why people want weak encapsulation. They have not told us.
 
 Yes, they have. In the context of this thread, Allen took the trouble to
 write this section:
 
 http://wiki.ecmascript.org/doku.php?id=strawman:private_names#private_name_properties_support_only_weak_encapsulation

 Quoting: "Private names are instead intended as a simple extensions of the
 classic JavaScript object model that enables straight-forward encapsulation
 in non-hostile environments. The design preserves the ability to manipulate
 all properties of an objects at a meta level using reflection and the
 ability to perform “monkey patching” when it is necessary."

Strong encapsulation does not interfere with the ability to add new
monkey-patched properties (actually fields). What it does prevent, by
definition, is the ability to modify or read existing private fields to
which the accessor does not have the relevant field object. What I was
looking for was not mere assertion that this is sometimes necessary to
be able to do that, but an explanation of why.

As for "the ability to manipulate all properties of objects at a meta
level using reflection", strictly speaking that is still

Re: New private names proposal

2010-12-22 Thread David-Sarah Hopwood
On 2010-12-23 01:11, Brendan Eich wrote:
 On Dec 22, 2010, at 3:49 PM, David-Sarah Hopwood wrote:
 
 In arguing about this, I have this bait-and-switch sense that I'm
 being told A+B, then when I argue in reply against B, I'm told no, no!
 only A!. (Cheat sheet: A is soft fields, B is transposed square
 bracket syntax for them.)
 
 This criticism is baseless and without merit.
 
 In order to compare the two semantic proposals, 
 http://wiki.ecmascript.org/doku.php?id=strawman:inherited_explicit_soft_fields#can_we_subsume_names
 considers what they would look like with the same syntax.
 
 Wrong. That section has
 
 private key; ... base[key] ...
 
 and thus assumes private key creates a private-name value bound to key
 that can be used in brackets. That is *not* how private names as proposed
 by Allen works, nor how the earlier names proposal worked.

That section is clear that it is talking about the syntax proposed in
http://wiki.ecmascript.org/doku.php?id=strawman:names.
(Adapting it to the private_names syntax is trivial, though.)

The Name objects as property names section of that page gives an example
in which 'var name = new Name' creates an object that can be used via
'obj[name]'. The Binding private names section says that in the scope of
a 'private x' declaration, x is also bound as a plain variable to the Name
value.

Therefore, 'private key;' binds the plain variable 'key' to a Name value
which can be used as 'base[key]'. Your interpretation of the names proposal
is wrong and Mark's was correct.
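
Concretely, the correspondence at issue is the one sketched on the wiki page
quoted earlier in this thread (the write form with .set is my extrapolation
from the quoted key.get(base)):

// Under strawman:names ('key' is also bound as a plain variable to the Name):
//
//     private key;
//     ...
//     base[key] = v;
//     ... base[key] ...
//
// Under the subsumption sketch on the wiki, this expands to:
//
//     const key = SoftField();
//     ...
//     key.set(base, v);      // write form: my extrapolation from key.get(base)
//     ... key.get(base) ...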

As far as I can see, MarkM has not (at least, not on the wiki) proposed
any new syntax in this discussion that had not already been proposed in
one of Allen's proposals.

 Private names, and names before it, proposes lexical bindings for private
 names which can be used only after dot in a member expression and before
 colon in an object initialiser's property initialiser. A private-declared
 name cannot be used in brackets to get at the property -- not without #. to
 reflect it into runtime.
 
 Clearly, the
 http://wiki.ecmascript.org/doku.php?id=strawman:inherited_explicit_soft_fields#can_we_subsume_names
 gets this wrong.

You apparently missed the statement "x is also bound as a plain variable
to the Name value." in the names proposal, which would explain your
confusion on this point.

 Either the names proposal was misunderstood, or the square-bracket-only
 syntax was a compromise. It simply is not and never was the same syntax.
 
 In that case, soft fields are semantically simpler.
 
 I reject all your premises, so it is pointless to argue about conclusions
 that depend on them.

Do you still reject them after being shown that the syntax in MarkM's
proposal is in fact the same syntax?

 First, the simpler semantics for a different, inferior syntax does not win
 over more complex semantics for a simpler and more usable syntax. Users of
 the language are the audience in most need of simplicity, not implementors
 or spec writers. The spec is not the ultimate good to optimize in this
 way.

This argument clearly fails, because the syntax that you're criticising as
inferior is actually the syntax defined in the names proposal.

There is no obstacle whatsoever to the soft fields semantics being used
with any of the syntaxes that have been proposed so far.

 Second, the soft fields semantic model is not simpler when you count
 everything it depends on, and where it shifts complexity (implementors and
 users).

OK, there's an interesting point here, which is the extent to which
reliance on existing language constructs (existing in the sense of
not added as part of the feature under consideration), should be counted
toward a new feature's complexity, relative to reliance on new constructs
added together with the feature.

I think that use of new constructs ought to be charged more in complexity
cost than use of existing constructs, all else being equal. This is an
opinion, but I would have thought it's a rather uncontroversial one.

In any case, I don't find WeakMap (or other constructs used by the
SoftField executable specification) particularly complex. YMMV.
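
For reference, the flavour of that executable specification is roughly this
(my sketch, ignoring the inheritance-along-the-prototype-chain machinery in
the actual strawman):

function SoftField() {
    const table = new WeakMap();   // the side table backing this field
    return Object.freeze({
        get: function (base)        { return table.get(base); },
        set: function (base, value) { table.set(base, value); },
        has: function (base)        { return table.has(base); }
    });
}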

The soft fields model does not shift complexity onto users because their
perception of complexity depends mainly on the syntax, which is the same.
The differences in semantics are unlikely to be noticed in most situations.

The actual implementation complexity is no greater for soft fields.
The soft fields specification has a less direct correspondence to the
implementation, and we disagree on the significance of that.

 Finally, I disagree that an executable spec, aka self-hosted library code
 as spec, wins.
 
 But see below -- at this point, it's clear we should not be arguing about
 soft fields vs. private names as if they are alternatives or in any way
 substitutable.

We're not going to be able to agree on adding both, so they are alternatives.

 I'm not saying this was any one person's malicious trick. But it's
 clear now what happened

Re: New private names proposal

2010-12-22 Thread David-Sarah Hopwood
On 2010-12-23 02:48, Brendan Eich wrote:
 On Dec 22, 2010, at 6:39 PM, David-Sarah Hopwood wrote:
 
 Inspectors can bypass encapsulation regardless of the language spec.
 
 The Inspector is written in ES5. How does it bypass soft field strong 
 encapsulation?

I meant, obviously, that inspectors in general can bypass encapsulation.

It is not clear to me that a usable inspector can be written purely in
ES5 using the reflection API. Doesn't an inspector have to be able to read
variables in any scope? Or maybe you mean by inspector something less
ambitious than I'm thinking of (but then it's not clear that it needs to
be able to read private fields, since it also can't read closed-over
variables).

 As for the ability to manipulate all properties of objects at a meta
 level using reflection, strictly speaking that is still possible in the
 soft fields proposal because soft fields are not properties. This is not
 mere semantics; these fields are associated with the object, but it is
 quite intentional that the object model views them as being stored on a
 side table.
 
 The side table is in a closure environment only, not available to the
 inspector, which uses getOwnPropertyNames:
 
 function MyConstructor(x, ...) {
    const mySoftField = SoftField();
    mySoftField.set(this, x);
    ...  // closures defined that use mySoftField
 }

OK, you're assuming that the inspector can't read state from closures.
So why does it matter that it can't read private fields, given that the
programmer would probably have used closures if they were not using
private fields?

 Note that other methods of associating private state with an
 object, such as closing over variables, do not allow that state to be
 accessed by reflection on the object either.
 
 That's right, and that is exactly Allen's point in writing the rationale
 for weak encapsulation that he wrote, and my point in using the example
 ReliableFred relies upon: an inspector hosted in the browser written in ES5.

The constraint that the inspector be written in ES5 seems to be a purely
artificial one. All of the commonly used browsers have debugger extensions.

 Please reply in 500 words.

No, I'm not going to play your word-counting game.

-- 
David-Sarah Hopwood  ⚥  http://davidsarah.livejournal.com





Re: New private names proposal

2010-12-22 Thread David-Sarah Hopwood
On 2010-12-23 05:14, Brendan Eich wrote:
 On Dec 22, 2010, at 7:49 PM, David-Sarah Hopwood wrote:
 
 The constraint that the inspector be written in ES5 seems to be a purely
 artificial one. All of the commonly used browsers have debugger extensions.
 
 Nope, our little startup (mine, MonkeyBob's, and ReliableFred's -- plus the
 boss) is writing a cross-browser framework and app. No native code, let
 alone deoptimizing magic VM-ported code for each top JS VM.

You don't need the debugger to be part of your framework and app, in order
to use it for development.

(There, concise enough this time?)

-- 
David-Sarah Hopwood  ⚥  http://davidsarah.livejournal.com





Re: New private names proposal

2010-12-22 Thread David-Sarah Hopwood
On 2010-12-23 05:08, Brendan Eich wrote:
 On Dec 22, 2010, at 7:34 PM, David-Sarah Hopwood wrote:
 
 As far as I can see, MarkM has not (at least, not on the wiki) proposed
 any new syntax in this discussion that had not already been proposed in
 one of Allen's proposals.
 
 Wrong again. Allen did not write the original strawman:names proposal.

Fine, one of Allen or Dave Herman and Sam Tobin-Hochstadt's proposals.
Mea culpa. Does it affect my argument at all? No.

 Follow that link and read 
 http://wiki.ecmascript.org/doku.php?id=strawman:names#binding_private_names
 to see only examples using x.key, etc. -- no square brackets.

What does the lack of an example have to do with anything?

Read what it says: in the scope of 'private x',
"x is also bound as a plain variable to the Name value."

Combined with the previous example:

  var name = new Name;
  ...
  obj[name] = secret;
  print(obj[name]); // secret

it's clear that the square bracket syntax is valid in the scope of a
private declaration. That is what MarkM's desugaring faithfully emulates.

Perhaps that is not what the authors of the names proposal intended.
If so, how was MarkM supposed to know that?

 Mark's example predates private_names and so may have worked in the old
 names proposal,

It explicitly says that it does; there's no "may" here.

 but only via square brackets. Not via dot -- so again *not* the
 same syntax as what even strawman:names proposed.

That page doesn't explicitly spell out the desugaring of '.', but MarkM
did so later. There's clearly no conflict with the soft field semantics,
which is the important thing, anyway.

 Never mind the private names proposal that supersedes names -- not faulting
 Mark for lacking clairvoyance here -- I'm faulting you for twisting "the
 same syntax" from its obvious meaning of all the same syntax to
 the subset that uses square brackets.

Only if you're determined to misinterpret it, can
http://wiki.ecmascript.org/doku.php?id=strawman:inherited_explicit_soft_fields#can_we_subsume_names
be mistaken for a complete proposal of how to desugar the names syntax.
It is obviously a partial outline.

 You seem to have a problem owning up to mistakes.

*I* have a problem owning up to mistakes?

https://secure.wikimedia.org/wikipedia/en/wiki/Psychological_projection

-- 
David-Sarah Hopwood  ⚥  http://davidsarah.livejournal.com





Strong vs weak encapsulation

2010-12-21 Thread David-Sarah Hopwood
In both the soft fields and the private names
proposals, the scope in which a private field can be accessed can
be controlled lexically. (In the soft fields case, this does not
depend on the use of the 'private id' syntax.)
This can be used to simulate visibility mechanisms found in other
languages, such as export lists and interfaces with different visibility
in Eiffel, package access in Java, etc.
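
A rough sketch of what I mean, using a WeakMap directly as the side table
(a SoftField would work the same way):

// Only code in the lexical scope of 'balance' -- here, the two methods --
// can reach the field; importers of makeAccount cannot.
var makeAccount = (function () {
    var balance = new WeakMap();            // never exported

    return function makeAccount(start) {
        var acct = {
            deposit:    function (n) { balance.set(acct, balance.get(acct) + n); },
            getBalance: function ()  { return balance.get(acct); }
        };
        balance.set(acct, start);
        return acct;
    };
})();

// var a = makeAccount(10); a.deposit(5); a.getBalance()  // 15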

Not all visibility mechanisms from other languages can be sensibly
simulated this way. For example, friend declarations in C++ allow
arbitrary code to give itself private access to arbitrary other code.
(I'm your friend! Deal with it! :-) That could only be simulated
by putting the 'private' or SoftField declaration at global scope,
which would be pointless. Note that friend declarations are a widely
criticised misfeature of C++ -- e.g. see
http://www.zechweb.de/Joyners_cpp_critique/index003.htm#s03-26 --
precisely because they are incompatible with strong encapsulation.


The private names and soft field proposals are similar in the
visibility mechanisms they can simulate, but soft fields are slightly
more general. In either proposal, visibility can be restricted to a
particular lexical scope. In the soft fields proposal, because
SoftFields are first-class values, it can also be restricted to any
set of objects that can get access to a given SoftField. I don't
claim this to be a critical benefit, but it is occasionally
useful in object-capability programming. For example, in
http://www.erights.org/elib/capability/ode/ode-capabilities.html#simple-money,
a Purse of a given currency is supposed to be able to access a
private field of other Purses of the same currency, but not other
Purses of different currencies. The implementation at
http://www.eros-os.org/pipermail/cap-talk/2007-June/007885.html
uses WeakMaps to do this, and could just as well use soft fields if
transliterated to ECMAScript.
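
For readers who don't want to follow the links, here is a rough
transliteration of the idea using WeakMaps (a sketch of mine, not the
linked code):

function makeMint() {
    var balance = new WeakMap();   // shared only by purses of this currency

    function makePurse(initial) {
        var purse = {
            getBalance: function () { return balance.get(purse); },
            deposit: function (amount, src) {
                // 'src' must be a purse of the same currency: only then is it in 'balance'.
                if (!balance.has(src)) throw new Error("not a purse of this currency");
                if (amount < 0 || amount > balance.get(src)) throw new Error("bad amount");
                balance.set(src, balance.get(src) - amount);
                balance.set(purse, balance.get(purse) + amount);
            },
            sprout: function () { return makePurse(0); }
        };
        balance.set(purse, initial);
        return purse;
    }
    return makePurse;
}

// var makeDollar = makeMint();
// var alice = makeDollar(100), bob = makeDollar(0);
// bob.deposit(10, alice);   // works: same currency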

-- 
David-Sarah Hopwood  ⚥  http://davidsarah.livejournal.com





Re: Strong vs weak encapsulation [correction]

2010-12-21 Thread David-Sarah Hopwood
On 2010-12-21 08:27, David-Sarah Hopwood wrote:
 The private names and soft field proposals are similar in the
 visibility mechanisms they can simulate, but soft fields are slightly
 more general. In either proposal, visibility can be restricted to a
 particular lexical scope. In the soft fields proposal, because
 SoftFields are first-class values, it can also be restricted to any
 set of objects that can get access to a given SoftField.

Correction: the #.id syntax also allows private names to be treated as
first-class values, so the proposals are equivalent in this respect.

 I don't
 claim this to be a critical benefit, but it is occasionally
 useful in object-capability programming. For example, in
 http://www.erights.org/elib/capability/ode/ode-capabilities.html#simple-money,
 a Purse of a given currency is supposed to be able to access a
 private field of other Purses of the same currency, but not other
 Purses of different currencies. The implementation at
 http://www.eros-os.org/pipermail/cap-talk/2007-June/007885.html
 uses WeakMaps to do this, and could just as well use soft fields

or private names

 if transliterated to ECMAScript.

-- 
David-Sarah Hopwood  ⚥  http://davidsarah.livejournal.com





Re: New private names proposal

2010-12-21 Thread David-Sarah Hopwood
On 2010-12-21 08:49, Lasse Reichstein wrote:
 On Thu, 16 Dec 2010 23:19:12 +0100, Mark S. Miller erig...@google.com wrote:
 On Thu, Dec 16, 2010 at 1:58 PM, Kris Kowal kris.ko...@cixar.com wrote:
 On Thu, Dec 16, 2010 at 1:53 PM, David Herman dher...@mozilla.com wrote:
 
[...]
  than
 
 function Point(x, y) {
     var _x = gensym(), _y = gensym();
     this[_x] = x;
     this[_y] = y;
 }

 I tend to disagree with most developers, so take it with a grain of
 salt that I find the latter form, with all the implied abilities,
 easier to understand.
 
 I do too. While terseness clearly contributes to understandability,
 regularity and simplicity do too. When these conflict, we should be very
 careful about sacrificing regularity.
 
 While I dislike the private syntax just as much, it does have the advantage
 of being statically detectable as using a private name, both this.foo in the
 scope of private foo, and this[#.foo].

That's not correct in general, since '#.foo' is first-class. (The specific
case expr[#.foo] is more easily optimizable without type inference, but
that's a case in which the #. syntax need not have been used.)

 The gensym syntax requires runtime checks to recognize that _x is a 
 non-string
 property name.

Any expr[p] lookup needs a check for whether p is a string, when that cannot
be determined by type inference. The check that it is a private name or
soft field when it is not a string is on the infrequent path, so will
not significantly affect performance.

 Currently in JS, x['foo'] and x.foo are precisely identical in all contexts.
 This regularity helps understandability. The terseness difference above is
 not an adequate reason to sacrifice it.
 
 Agree. I would prefer something like x.#foo to make it obvious that it's not 
 the
 same as x.foo (also so you can write both in the same scope), and use
 var bar = #foo /* or just foo */; x[#bar] for computed private name lookup.
 I.e. effectively introducing
 .#, [# as alternatives to just . or [.

If we're going to add an operator specifically for private lookup, we only
need one, for example:

function Point(x, y) {
    var _x = SoftField(), _y = SoftField();   // field keys, distinct from the x and y parameters
    this.#_x = x;
    this.#_y = y;
}

(i.e. 'MemberExpression .# PrimaryExpression' or alternatively
'MemberExpression [ # Expression ]')

-- 
David-Sarah Hopwood  ⚥  http://davidsarah.livejournal.com





Re: New private names proposal [repost]

2010-12-21 Thread David-Sarah Hopwood
On 2010-12-21 22:12, Brendan Eich wrote:
 On Dec 20, 2010, at 11:05 PM, David-Sarah Hopwood wrote:

Please retain all relevant attribution lines.

 Brendan Eich wrote:
 The new equivalence under private names would be x[#.id] === x.id.

You said "under private names" here, but it should actually be
"under the syntax proposed for private names". It applies to that
syntax with either the soft fields or private names semantics.

 ... which is strictly weaker, more complex, and less explanatory.
 
 So is a transposed get from an inherited soft field. Soft fields change the
 way square brackets work in JS, for Pete's sake!

They do not.

Again you seem to be confusing the inherited soft fields proposal with
the *separate* proposal on desugaring the private name syntax to inherited
soft fields.

The matter at hand is how the proposed syntax changes affect the semantic
equivalences of ECMAScript. I argued against the syntax changes (including
those to the square bracket operator) on that basis. Now you seem to be
arguing as though I supported the syntax changes. To be clear, I do not
support the currently proposed change to how square brackets work in JS,
regardless of whether that change is specified on top of the soft fields
semantics or the private names semantics. I know that some people consider
it to be an improvement in usability, and I disagree that it is sufficient
improvement to justify the increase in language complexity. There may be
alternative syntaxes that obtain a similar or better usability
improvement with a smaller increase in complexity; I hope so.

(One thing is clear to me; driving experts like MarkM away from participating
in syntax discussions is not going to help with that. Please reconsider,
Mark.)

 Talk about more complex and less explanatory. Yes, if you know about weak 
 maps and soft fields, then it follows -- that is a bit too circular, too 
 much assuming the conclusion.

This has absolutely nothing to do with weak maps. We're talking about the
consequences of the syntax changes, on top of either proposal.

[...]
 So, what if we want to understand '_._' in terms of existing constructs?
 Unfortunately, '#.id' must be primitive; there is nothing else that it
 can desugar to because 'private id' does not introduce an ordinary
 variable (unlike 'const id_ = SoftField()', say).
 
 SoftField(), #.id -- something new in either case.

<sarcasm>
Oh, OK, it obviously doesn't matter what we add to the language, it's
all the same. Library abstractions, new syntax, major changes in
semantics, who cares? Something new is something new. Let's just roll
a bunch of dice and pick proposals at random.
</sarcasm>

Sheesh. A library class, specified in terms of existing language constructs,
is not the same as a new primitive construct, and does not have the same
consequences for language complexity.

 And what's this const id_? A gensym?

A possible convention for naming variables holding private names. It doesn't
matter, you're picking on details.

 It's tiresome to argue by special pleading that one extension or
 transformation (including generated symbols) is more complex, and less
 explanatory, while another is less so, when the judgment is completely
 subjective. And the absolutism about how it's *always* better in every
 instance to use strong encapsulation is, well, absolutist (i.e., wrong).

I gave clear technical arguments in that post. If you want to disagree with
them, disagree with specific arguments, rather than painting me as an
absolutist. (I'm not.)

 We should debate strong vs. weak encapsulation, for sure, and in the other 
 thread you started (thanks for that). But without absolutes based on 
 preferences or judgment calls about trade-offs and economics.

Tell you what, I'll debate based on the things I think are important, and
you debate based on the things you think are important. Agreed?

 Rather it introduces an element in an entirely new lexically scoped
 namespace alongside ordinary variables. This is fundamentally more
 complex than "id", which is just a stringification of the identifier.
 
 I agree that private x adds complexity to the spec.

Good, that's a start.

To be clear, it's not the syntax itself, but the parallel namespace
introduced by 'private x' that I find problematic in terms of both
specification complexity, and conceptual complexity for programmers.

 It adds something to solve a use-case not satisfied by the existing
 language. There's (again) a trade-off, since with this new syntax, the
 use-cases for private names become more usably expressible.

It isn't at all clear that there aren't alternative syntaxes that would
achieve the usability benefit while not being subject to the criticisms
that have been made of the current syntax proposal. Lasse Reichstein posted
some possibilities (_.#_ or _[#_]). The syntax design space has been barely
explored in the discussion so far.

 The fact that the proposal is entangled with that syntax, so that it is 
 difficult to see

Re: Private names use cases

2010-12-20 Thread David-Sarah Hopwood
On 2010-12-20 17:21, Allen Wirfs-Brock wrote:
 I've seen mentions in the recent thread that the goal of the Private Names
 proposal was to support private fields for objects.  While that may be a
 goal of some participants in the discussion, it is not what I would state as 
 the goal.
 
 I have two specific use cases in mind for private names:
 1) Allow JavaScript programmers, who choose to do so, to manage the direct 
 accessibly of object properties.  This may mean limiting access to methods of 
 a particular instance, or to methods of the same class, or to various 
 friends or cohorts, etc.

Indeed, selective visibility is an important goal of both the private
names and soft fields proposals.

It's important to note that for strong encapsulation, the code
implementing an abstraction must be able to control which other
code can see which fields/properties, but code outside the
abstraction's scope must not be able to decide this for itself.

I.e. +1 for the ability to simulate visibility mechanisms that meet
this criterion, such as Eiffel's export lists, but -1 for the ability
to simulate visibility mechanisms that don't, such as C++'s friend
declarations.

 2) Allow third-party property extensions to built-in objects or third-party 
 frameworks that are guaranteed to not have naming conflicts  with unrelated 
 extensions to the same objects.
 
 Of these two use cases, the second may be the more important.
 
 Note that I emphasized properties rather than a new concept such as 
 private fields.

I think it is a mistake to emphasize that, since it overspecifies the
mechanism. In the soft fields proposal, the fields are not properties,
but that makes little or no visible difference to their use.

-- 
David-Sarah Hopwood  ⚥  http://davidsarah.livejournal.com





Re: New private names proposal

2010-12-20 Thread David-Sarah Hopwood
On 2010-12-17 06:44, Brendan Eich wrote:
 On Dec 16, 2010, at 9:11 PM, David-Sarah Hopwood wrote:
 On 2010-12-17 01:24, David Herman wrote:
 Mark Miller wrote:
 Ok, I open it for discussion. Given soft fields, why do we need private
 names?

 I believe that the syntax is a big part of the private names proposal. It's
 key to the usability: in my view, the proposal adds 1) a new abstraction to
 the object model for private property keys and 2) a new declarative
 abstraction to the surface language for creating these properties.

 I don't like the private names syntax. I think it obscures more than it
 helps usability, and losing the x["id"] === x.id equivalence is a significant
 loss.
 
 As Chuck Jazdzewski pointed out, this equivalence does not hold for id not
 an IdentifierName.

Of course not, because the syntax 'x.id' is not valid for id not an
IdentifierName (in either ES5 or ES5 + private_names). Whenever we state
semantic equivalences, we mean them to hold only for syntactically valid
terms. So 'x' necessarily has to be something that can precede both '[_]'
and '.', i.e. MemberExpression or CallExpression. Similarly, 'id' necessarily
has to be something that can both occur within quotes and follow '.', i.e.
it must be an IdentifierName.

(This has nothing to do with any difference between soft fields and private
names. Anyway, for future reference, I rarely state the productions that
syntactic variables range over when they can be unambiguously inferred as
above.)

 The new equivalence under private names would be x[#.id] === x.id.

... which is strictly weaker, more complex, and less explanatory.
Let's simplify things by taking the '===' operator out of the picture.
For ES5 we have x["id"] ≡ x.id in any context, and for ES5 + private_names
we have x[#.id] ≡ x.id, where ≡ means substitutability.

x["id"] ≡ x.id relates the meaning of existing basic constructs:
string literals, '_[_]', and '_._'. In particular, it *defines* the
semantics of _._ in terms of string literals and _[_], so that _._
need not be considered as being in the core or kernel of the
language [*].

In the case of x[#.id] ≡ x.id, '#.id' is a new construct that is
being added as part of the private names proposal. Furthermore, the
meaning of '#.id' is context-dependent; it depends on whether a
'private id' declaration is in scope. So, what if we want to
understand '_._' in terms of existing constructs? Unfortunately,
'#.id' must be primitive; there is nothing else that it can desugar
to because 'private id' does not introduce an ordinary variable
(unlike 'const id_ = SoftField()', say). Rather it introduces an
element in an entirely new lexically scoped namespace alongside
ordinary variables. This is fundamentally more complex than id,
which is just a stringification of the identifier.
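
To make the contrast concrete (the 'private id' syntax below is purely
illustrative, following the strawman):

  // ES5: '_._' is sugar over string property keys and '_[_]'.
  obj.colour = 1;       // exactly equivalent to obj["colour"] = 1

  // Private names: with a 'private colour' declaration in scope, the same
  // surface syntax changes meaning.
  private colour;
  obj.colour = 2;       // now means obj[#.colour] = 2, where #.colour denotes
                        // the private name, not the string "colour"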


[*] Yes, I know that the ES5 spec doesn't take a kernel language approach
to defining ECMAScript. That doesn't mean it can't be understood that
way, and it's very useful to be able to do so.

 As Mark points out, though, that syntax can be supported with either
 proposal. The private names proposal is more entangled with syntactic
 changes, but that's a bug, not a feature.
 
 No, that is a usability feature.

You're misunderstanding me.

The syntax could be considered a usability feature. Some people like it,
others don't.

The fact that the proposal is entangled with that syntax, so that it is
difficult to see its semantic consequences separate from the syntax,
cannot possibly be considered a feature of the proposal, at the meta level
of the language design process. At that level it's clearly undesirable --
as the course of the discussion has amply demonstrated!

 The inherited soft fields approach is more entangled with its
 reference implementation, which is not the efficient route VM
 implementors can swallow.

I think you're being rather patronising to VM implementors (including
yourself!) if you think that they're incapable of understanding how a
feature specified in this way is intended to be implemented. Of course
they can.

Specifying it in this way has very concrete benefits to VM implementors:

 - A test suite can directly compare this very simple reference
   implementation with the optimized implementation, to check that they
   give the same results in cases of interest. (It's still necessary to
   identify which cases are needed to give adequate test coverage, but
   that's no more difficult than in any other approach.)

 - Disputes about the validity of any proposed optimization can be
   resolved by asking what the reference implementation would do in
   that case.

 - The specification is concise and localised, without being cryptic.
   This kind of conciseness aids understanding.

Of course this style of specification also has *potential* disadvantages,
the main one being a risk of overspecification. However, to argue against
a particular proposal in this style, you need to say why that proposal
overspecifies, not just handwave.

Re: New private names proposal

2010-12-16 Thread David-Sarah Hopwood
On 2010-12-17 01:24, David Herman wrote:
 Mark Miller wrote:
 Ok, I open it for discussion. Given soft fields, why do we need private
 names?
 
 I believe that the syntax is a big part of the private names proposal. It's
 key to the usability: in my view, the proposal adds 1) a new abstraction to
 the object model for private property keys and 2) a new declarative
 abstraction to the surface language for creating these properties.

I don't like the private names syntax. I think it obscures more than it
helps usability, and losing the x[id] === x.id equivalence is a significant
loss.

As Mark points out, though, that syntax can be supported with either
proposal. The private names proposal is more entangled with syntactic
changes, but that's a bug, not a feature.

 In fairness, I think the apples-to-apples comparison you can make between
 the two proposals is the object model. On that score, I think the private
 names approach is simpler: it just starts where it wants to end up (private
 names are in the object, with an encapsulated key), whereas the soft fields
 approach takes a circuitous route to get there (soft fields are
 semantically a side table, specified via reference implementation, but
 optimizable by storing in the object).

The private names approach is not simpler. It's strictly more complicated for
the same functionality. You can see that just by comparing the two proposals:
in
http://wiki.ecmascript.org/doku.php?id=strawman:inherited_explicit_soft_fields
the specification consists entirely of the given code for the SoftField
abstraction. In practice you'd also add a bit of non-normative rationale
concerning how soft fields can be efficiently implemented, but that's it.
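
(For readers who don't have the wiki page to hand: the reference
implementation is essentially a small wrapper around a weak map, with 'get'
walking the prototype chain. Roughly the following sketch, which assumes the
Harmony WeakMap and is not the normative code:)

  function SoftField() {
    "use strict";
    var map = new WeakMap();
    return Object.freeze({
      set: function (obj, value) { map.set(obj, value); },
      get: function (obj) {
        // 'inherited': a field set on a prototype is visible via objects
        // that inherit from it.
        while (obj !== null) {
          if (map.has(obj)) { return map.get(obj); }
          obj = Object.getPrototypeOf(obj);
        }
        return undefined;
      }
    });
  }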

In http://wiki.ecmascript.org/doku.php?id=strawman:private_names (even
excluding the syntactic changes, to give a fairer comparison), we can see
a very significant amount of additional language mechanism, including:

 - a new primitive type, with behaviour distinct from any other type.
   This requires changes, not just to 'typeof' as the strawman page
   acknowledges, but to every other abstract operation in the spec that
   can take an arbitrary value. (Defining these values to be objects
   would simplify this to some extent, but if you look at how much
   verbiage each [Class] of objects takes to specify in ES5, possibly
   not by much.)

 - quite extensive changes to the behaviour of property lookup and
   EnvironmentRecords. (The strawman is quite naive in suggesting that
   only 11.2.1 step 6 needs to be changed here.)

 - changes to [[Put]] (for arrays and other objects) and to object literal
   initialization; also checking of all uses of [[DefineOwnProperty]] that
   can bypass [[Put]].

 - changes to a large number of APIs on Object.prototype and Object,
   the 'in' operator, JSON.stringify, and probably others.

None of these additional mechanisms and spec changes are needed in the
soft field approach.

In addition, the proposal acknowledges that it only provides weak
encapsulation, because of reflective operations accessing private
properties. It justifies this in terms of the utility of monkey patching,
but this seems like a weak argument; it is not at all clear that monkey
patching of private properties is needed. Scripts that did that would
necessarily be violating abstraction boundaries and depending on
implementation details of the code they are patching, which tends to
create forward-compatibility problems. (This is sometimes true of scripts
that monkey-patch public properties. I'm not a fan of monkey patching in
general, but I think it is particularly problematic for private properties.)

There is some handwaving about the possibility of sandboxing environments
being able to work around this deficiency, but the details have not been
thought through; in practice I suspect this would be difficult and error-
prone.


In general, I disagree with the premise that the best way to *specify* a
language feature is to start where it wants to end up, i.e. to directly
specify the programmer's view of it. Of course the programmer's view needs
to be considered in the design, but as far as specification is concerned,
if a high-level feature cannot be specified by a fairly simple desugaring
to lower-level features, then it's probably not a good feature.

-- 
David-Sarah Hopwood  ⚥  http://davidsarah.livejournal.com





Re: Module isolation

2010-01-11 Thread David-Sarah Hopwood
Mark S. Miller wrote:
 On Mon, Jan 11, 2010 at 3:03 AM, Kevin Curtis 
 kevinc1...@googlemail.com wrote:
 
 Re isolation, sandboxing - and modules.

 Is there is a case for the ability to 'configure and freeze' a global
 object for sandboxing, SES and maybe modules. Indeed the 'restricted eval'
 can be seen as more specific case of an eval which takes a 'configured and
 frozen global' environment. With a frozen global all bindings should be able
 to be resolved at the time eval is called. Effectively, restricted evaled
 code will have 'const x = object' binding added to it's scope - where 'x'
 is a property from the configured global object.

 N.B - if a restricted eval takes a second param as a string to configure
 the 'global environment' for the evaled code then it would avoid the closure
 peeking issue.

 What's the closure peeking issue?

http://code.google.com/p/google-caja/wiki/EvalBreaksClosureEncapsulation

-- 
David-Sarah Hopwood  ⚥  http://davidsarah.livejournal.com





Re: Module isolation

2010-01-11 Thread David-Sarah Hopwood
David-Sarah Hopwood wrote:
 Brendan Eich wrote:
 On Jan 10, 2010, at 9:30 PM, David-Sarah Hopwood wrote:
 Brendan Eich wrote:
 On Jan 10, 2010, at 1:14 AM, Kevin Curtis wrote:

 From SecureEcmaScript proposal:
 6. The top level binding of this in an evaled Program is not the
 global object, but rather a frozen root object containing just the
 globals defined in the ES5 spec.
 For many current applications, the frozen |this| object is not necessary
 or desirable in global code. The essential characteristic of modules,
 isolation for each module's inside from unimported effects of other
 modules, does not necessarily mean no mutation of primordial objects.
 On the contrary, it does necessarily mean that. If you can mutate
 primordial objects, then there is no isolation of any module. There
 may be a reduction in the possibilities for accidental interference
 between modules, but that should be distinguished from isolation.

 Who said primordial objects are shared between modules?
 
 Having separate copies of primordial objects for each module is not
 sufficient to ensure isolation. If one module has access to some object
 obj of another, it can also get access to that object's prototype chain
 using Object.getPrototypeOf(obj), or obj.constructor.prototype.

Correction: obj.constructor[.prototype] gives access to the constructor
chain. But that doesn't really affect my argument, if constructors are
mutable.

-- 
David-Sarah Hopwood  ⚥  http://davidsarah.livejournal.com





Re: Module isolation

2010-01-11 Thread David-Sarah Hopwood
David-Sarah Hopwood wrote:
 David-Sarah Hopwood wrote:
 Brendan Eich wrote:
 On Jan 10, 2010, at 9:30 PM, David-Sarah Hopwood wrote:
 Brendan Eich wrote:
 For many current applications, the frozen |this| object is not necessary
 or desirable in global code. The essential characteristic of modules,
 isolation for each module's inside from unimported effects of other
 modules, does not necessarily mean no mutation of primordial objects.

 On the contrary, it does necessarily mean that. If you can mutate
 primordial objects, then there is no isolation of any module. There
 may be a reduction in the possibilities for accidental interference
 between modules, but that should be distinguished from isolation.

 Who said primordial objects are shared between modules?

 Having separate copies of primordial objects for each module is not
 sufficient to ensure isolation. If one module has access to some object
 obj of another, it can also get access to that object's prototype chain
 using Object.getPrototypeOf(obj), or obj.constructor.prototype.
 
 Correction: obj.constructor[.prototype] gives access to the constructor
 chain.

Ignore this; there was nothing to be corrected here.

-- 
David-Sarah Hopwood  ⚥  http://davidsarah.livejournal.com





Re: Module isolation

2010-01-11 Thread David-Sarah Hopwood
Kevin Curtis wrote:
 So, FF3.5 has resurrected the sandboxed eval with the second 'global' object
 parameter - as the closure peeking issue has been fixed. (The second param
 is a live object rather than a string).

I gather, then, that there has been no change in Mozilla developers'
practice of adding unilateral language extensions without consulting anyone,
and in particular without consulting this list.

-- 
David-Sarah Hopwood  ⚥  http://davidsarah.livejournal.com





Re: Module isolation

2010-01-11 Thread David-Sarah Hopwood
Brendan Eich wrote:
 On Jan 11, 2010, at 4:37 PM, David-Sarah Hopwood wrote:
 Kevin Curtis wrote:
 So, FF3.5 has resurrected the sandboxed eval with the second 'global'
 object parameter - as the closure peeking issue has been fixed. (The second
 param is a live object rather than a string).

 I gather, then, that there has been no change in Mozilla developers'
 practice of adding unilateral language extensions without consulting
 anyone, and in particular without consulting this list.
 
 Get your facts straight, and get off your high horse.

My facts are straight. FF3.5 replaced one extension with a different,
incompatible one. That's just as bad (worse, actually) as adding a new
extension. That it is intended to be temporary only slightly mitigates
the error.

 The only reason it was kept in Firefox 3.5 and 3.6 was for compatibility
 with add-ons and applications built on the codebase:

It wasn't kept; it was changed to something with different semantics
(but, nonsensically, the same API signature).

The behaviour in 3.5 and 3.6 isn't even documented
(https://developer.mozilla.org/en/Core_JavaScript_1.5_Reference/Global_Functions/eval)
and the documentation incorrectly states that the second argument has been
removed. If you are going to make incompatible changes, please update
the docs at the same time (or ideally, in advance).

-- 
David-Sarah Hopwood  ⚥  http://davidsarah.livejournal.com





Re: Module isolation

2010-01-11 Thread David-Sarah Hopwood
Brendan Eich wrote:
 On Jan 11, 2010, at 10:53 AM, David-Sarah Hopwood wrote:
 
 Who said primordial objects are shared between modules?

 Having separate copies of primordial objects for each module is not
 sufficient to ensure isolation. If one module has access to some object
 obj of another, it can also get access to that object's prototype chain
 using Object.getPrototypeOf(obj), or obj.constructor.prototype.
 
 I meant what I wrote: [w]ho said primordial objects are shared between
 modules? Shared by passing objects, or by fiat in the implementation,
 does not matter.

If objects cannot be passed directly between modules without breaking
encapsulation, then it's not a particularly useful module system.
Passing only JSON objects, say, is not sufficient: it's necessary to be
able to pass function objects, at least.
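
Concretely (an illustrative sketch; the module API names are made up):

  // Module A passes one of its objects, including a function, to module B:
  var callbacks = { onDone: function () { /* ... */ } };
  moduleB.register(callbacks);

  // Inside module B, even if B has its own copies of the primordials:
  function register(obj) {
    var protoA = Object.getPrototypeOf(obj);     // module A's Object.prototype
    protoA.polluted = function () { /* ... */ }; // now visible to every plain
  }                                              // object in module A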

 Isolation does not require frozen primordials, no
 matter how often you assume it does to conclude that it does.

What part of "I was incorrect in saying that mutable primordials
*necessarily* preclude isolation." did you not understand?

-- 
David-Sarah Hopwood  ⚥  http://davidsarah.livejournal.com





Re: Module isolation

2010-01-11 Thread David-Sarah Hopwood
Kevin Curtis wrote:
 So, FF3.5 has resurrected the sandboxed eval with the second 'global' object
 parameter - as the closure peeking issue has been fixed. (The second param
 is a live object rather than a string). And thus if the second param object
 is frozen (and the primordials and their prototypes etc frozen) FF3.5 eval
 could act as a restricted eval.

FF3.5 eval is undocumented, but if I'm reverse-engineering the source code
patch (http://hg.mozilla.org/releases/mozilla-1.9.1/rev/67944d1b207d)
correctly, it still violates encapsulation.

A restricted eval should be specified from scratch, not based on what a
poorly thought-out vendor extension happens to do.

-- 
David-Sarah Hopwood  ⚥  http://davidsarah.livejournal.com





Module isolation

2010-01-10 Thread David-Sarah Hopwood
Brendan Eich wrote:
 On Jan 10, 2010, at 1:14 AM, Kevin Curtis wrote:
 
 From SecureEcmaScript proposal:
 6. The top level binding of this in an evaled Program is not the
 global object, but rather a frozen root object containing just the
 globals defined in the ES5 spec.
 
 For many current applications, the frozen |this| object is not necessary
 or desirable in global code. The essential characteristic of modules,
 isolation for each module's inside from unimported effects of other
 modules, does not necessarily mean no mutation of primordial objects.

On the contrary, it does necessarily mean that. If you can mutate
primordial objects, then there is no isolation of any module. There
may be a reduction in the possibilities for accidental interference
between modules, but that should be distinguished from isolation.

-- 
David-Sarah Hopwood  ⚥  http://davidsarah.livejournal.com





Re: api mapping

2009-12-27 Thread David-Sarah Hopwood
memo...@googlemail.com wrote:
 David-Sarah Hopwood wrote at 25th December:
 and there is no need for a 'link' convenience function to be standardized
 given that it is a 5-liner in terms of Object.defineProperty
 
 Just have a look at the following programming code with *sweet* 5-liners:
 
 var Gui = function()
 {
   this.init.apply(this, arguments);
 }
 
 Gui.prototype = new function()
 {
   this.init = function()
   {
    let title = document.getElementById("title");
    Object.defineProperty(this, "title",
   {get: function() { return title.value; },
   set: function(x) { title.value = x; },
   enumerable: true
   });
 
    let url = document.getElementById("url");
    Object.defineProperty(this, "url",
   {get: function() { return url.value; },
   set: function(x) { url.value = x; },
   enumerable: true
   });
 
    let input = document.getElementById("input");
    Object.defineProperty(this, "url",
   {get: function() { return input.value; },
   set: function(x) { input.value = x; },
   enumerable: true
   });
   }
 }

Here's how I would do it in ES5:

function makeGui(doc) {
  /*const*/ var title = doc.getElementById("title"),
                url = doc.getElementById("url"),
                input = doc.getElementById("input");

  return Object.freeze({
get title() { return title.value; }
set title(newValue) { title.value = newValue; }
get url()   { return url.value; }
set url(newValue)   { url.value = newValue; }
get input() { return input.value; }
set input(newValue) { input.value = newValue; }
  });
}

(I'd probably do more validation, but that would be a less fair comparison.)

-- 
David-Sarah Hopwood  ⚥  http://davidsarah.livejournal.com





Re: api mapping [correction]

2009-12-27 Thread David-Sarah Hopwood
I forgot the commas in the object literal:

David-Sarah Hopwood wrote:
 function makeGui(doc) {
   /*const*/ var title = doc.getElementById("title"),
                 url = doc.getElementById("url"),
                 input = doc.getElementById("input");
 
   return Object.freeze({
  get title() { return title.value; },
  set title(newValue) { title.value = newValue; },
  get url()   { return url.value; },
  set url(newValue)   { url.value = newValue; },
  get input() { return input.value; },
 set input(newValue) { input.value = newValue; }
   });
 }

-- 
David-Sarah Hopwood  ⚥  http://davidsarah.livejournal.com





Re: api mapping

2009-12-25 Thread David-Sarah Hopwood
memo...@googlemail.com wrote:
 Hello. I want to map
   document.getElementById("title").value
 to
   gui.title
 in my ecmascript application. So I can use several implementations of
 gui (html, gtk+, xul, qt, etc.)
 But there's no way to reference to this title, because it's a
 primitive data type, a String.
 I've tried the following:
   let title = document.getElementById("title");
   gui.__defineGetter__("title", function () {return title.value;});
   gui.__defineSetter__("title", function (x) {title.value = x;return x;});
 But this code looks ugly and isn't abstract programming. So created
 the following helper function:
   function link(obj, prop, target, tprop) {
 obj.__defineGetter__(prop, function () {return target[tprop];});
 obj.__defineSetter__(prop, function (x) {target[tprop] = x;return x;});
   }
 It can be used this way:
   let title = document.getElementById("title");
   link(gui, "title", title, "value");
 I think it would be easier if such a feature would be built-in in
 ecmascript. If you know a better already-possible way, please let me
 know.

https://mail.mozilla.org/pipermail/es-discuss/2009-February/008875.html

(You've asked variations on this question three or four times now. I don't
mind repeating myself a few times, but not indefinitely. The feature is
already in ES5 for properties, it will not be added for local variables,
and there is no need for a 'link' convenience function to be standardized
given that it is a 5-liner in terms of Object.defineProperty.)
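
(Something like the following in ES5 terms; this is a sketch, not necessarily
identical to the version in the linked post:)

  function link(obj, prop, target, tprop) {
    Object.defineProperty(obj, prop, {
      get: function ()  { return target[tprop]; },
      set: function (x) { target[tprop] = x; },
      enumerable: true, configurable: true
    });
  }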

-- 
David-Sarah Hopwood  ⚥  http://davidsarah.livejournal.com





Re: array like objects

2009-12-15 Thread David-Sarah Hopwood
Brendan Eich wrote:
 In ES specs and real implementations, internal methods and various
 corresponding implementation hooks are called based on [[Class]] of the
 directly referenced object, in contrast.

In ES specs, there's no indication that [[Class]] can or should be used
for internal method lookup; I don't know where you got that idea.

As for implementation, [[Class]] could be derived from some other type tag
that gives sufficient information to do such lookup, but [[Class]] by
itself is not sufficient.

-- 
David-Sarah Hopwood  ⚥  http://davidsarah.livejournal.com





Re: quasi-literal strawman

2009-12-15 Thread David-Sarah Hopwood
Mike Samuel wrote:
 2009/12/15 Tom Van Cutsem to...@google.com:
 Hi,

 Could you motivate why you chose to append the string Quasi to the type
 tag identifier? [...]
 I know E employs a similar mechanism, but I don't know if it's worth having
 it for Javascript.
 Essentially, this is a form of name-mangling, which is fine if you're doing
 macros/translation/compilation where the programmer is shielded from the
 mangled name, but is weird when the programmer has to work with both mangled
 and non-mangled names. If the type tag Identifier is going to be resolved
 lexically anyway, and not in some separate namespace, wouldn't it be simpler
 if one could write:

 var html = ...;
 ...
 html`...`
 
 It is immediately apparent, even to someone not familiar with the language
 feature's details, that 'html' will somehow refer to the quasi-function

I'm with Tom on this. I've always thought E's mangling of the tag is ugly.
(Note that one possible reason for doing that in E -- the fact that there
are other namespaces that work in a similar way such as *__uriGetter --
does not apply to ECMAScript.)

 I'm not sure that it's obvious that there would be a linkage between a
 type tag and a function of the same name, but I am worried about
 posible collisions due to the dearth of short type-descriptive names
 available for use as local variable names. Consider: var html =
 html`...`;

One possibility is to make the tags uppercase by convention:

  HTML`...`;
  XML`...`;
  SQL`...`;

Since language names are very often acronyms, this looks perfectly
natural (and I think it looks fine even when the name is not an acronym).

-- 
David-Sarah Hopwood  ⚥  http://davidsarah.livejournal.com





Re: array like objects

2009-12-15 Thread David-Sarah Hopwood
Brendan Eich wrote:
 On Dec 15, 2009, at 11:18 AM, David-Sarah Hopwood wrote:
 Brendan Eich wrote:
 In ES specs and real implementations, internal methods and various
 corresponding implementation hooks are called based on [[Class]] of the
 directly referenced object, in contrast.
[...]
 Sorry, I wrote called where I meant defined.
 
 As for implementation, [[Class]] could be derived from some other type
 tag that gives sufficient information to do such lookup, but [[Class]] by
 itself is not sufficient.
 
 I'm not sure what you mean. Sure, [[Class]] in the spec is
 string-valued, so it can't be a vtable pointer. But in implementations
 that use C++, there is not only a class name-string associated with
 every instance, but a suite of internal methods or hooks.

Exactly: [[Class]] is associated with each instance and so are the other
internal methods/properties, but that doesn't imply that other properties
are defined based on [[Class]].

-- 
David-Sarah Hopwood  ⚥  http://davidsarah.livejournal.com





Re: [[HasOwnProperty]]

2009-12-12 Thread David-Sarah Hopwood
Garrett Smith wrote:
 [[HasOwnProperty]] is mentioned in one place in the spec: s 15.4.4.11
 Array.prototype.sort (comparefn).
 
 There is no mention of [[HasOwnProperty]] anywhere else.
 
 I also see a [[GetOwnProperty]] definition in Table 8 and a definition
 for own property (s. 4.3.30).
 
 Is there a difference between [[HasOwnProperty]] and own property?
 If not, then one or the other should be used. If so, then
 [[HasOwnProperty]] should be defined somewhere in the spec.

Array.prototype.sort should have been defined in terms of
[[GetOwnProperty]]. That is, the text in 15.4.4.11 should be

# • The result of calling the [[GetOwnProperty]] internal method of
#   proto with argument ToString(j) is not *undefined*.


(Incidentally, I don't see any errata for the published standard at
http://wiki.ecmascript.org/doku.php?id=es3.1:es3.1_proposal_working_draft.)

-- 
David-Sarah Hopwood  ⚥  http://davidsarah.livejournal.com





Re: array like objects

2009-12-11 Thread David-Sarah Hopwood
Mark S. Miller wrote:
 On Fri, Dec 11, 2009 at 2:27 AM, Mike Wilson mike...@hotmail.com wrote:
 
 I think Breton mentions something important here; the desire
 to actually detect if something is an array or arraylike to
 be able to branch to different code that does completely
 different things for array[likes] and objects.
[...]
 
 If we're looking for a convention that is
 * does not admit any legacy ES3R non-array non-host objects (to prevent
 false positives)
 * does easily allow ES5 programmers to define new array-like non-array
 objects
 * takes bounded-by-constant time (i.e., no iteration)
 * is a reasonably compatible compromise with the existing notions of
 array-like in legacy libraries as represented by previous examples in this
 thread
 
 then I suggest:
 
 function isArrayLike(obj) {
   var len;
    return !!(obj &&
              typeof obj === 'object' &&
              'length' in obj &&
              !({}).propertyIsEnumerable.call(obj, 'length') &&
              (len = obj.length) >>> 0 === len);
 }
 
 Since getting 'length' may have side effects, this is written a bit weird so
 that this get only happens after earlier tests pass.

If you want to avoid side effects:

function isArrayLike(obj) {
  if (!obj || typeof obj !== 'object') return false;
  var desc = Object.getPropertyDescriptor(obj, 'length');
  if (desc) {
var len = desc.value;
    return !desc.enumerable && (len === undefined || len >>> 0 === len);
  }
}

This allows any length getter without checking that it will return an array
index, but it still satisfies all of the above requirements.

However, I don't see why the check on the current value of length is
necessary. For it to make any difference, there would have to be a
*nonenumerable* length property on a non-function object, with a value
that is not an array index. How and why would that happen?

 And yes, I'm aware that this usage of Object.prototype.propertyIsEnumerable
 implies that catchalls must virtualize it in order for a proxy to be able to
 pass this test :(.

Same with Object.getPropertyDescriptor in the above.

-- 
David-Sarah Hopwood  ⚥  http://davidsarah.livejournal.com





Re: Catch-all proposal based on proxies

2009-12-10 Thread David-Sarah Hopwood
Mike Samuel wrote:
 Proxy based iterators work well with existing loop constructs though
 while ('next' in iterator) doSomething(iterator.next);

I don't understand what advantage this has over
  while (iterator.hasNext()) doSomething(iterator.next());

If next() has side-effects (moving to the next element), it shouldn't
be a getter.

-- 
David-Sarah Hopwood  ⚥  http://davidsarah.livejournal.com





Re: AST in JSON format

2009-12-08 Thread David-Sarah Hopwood
Oliver Hunt wrote:
 snip
 
 OTOH, if we standardize an AST format, then presumably we'll be adding
 a source-AST API function that uses the implementation's existing parser.
 
 I'd be worried about assuming that this is an obvious/trivial thing for
 implementations to do, you're effectively requiring that the internal AST
 representation of an implementation be entirely standardised.

Not at all. An implementation could, for example, parse to its internal
AST format and then convert from that to the standard format (which is a
trivial tree walk). This only requires that the internal format not lose
information relative to the standard one. If it does currently lose
information, then changing it not to is relatively straightforward.
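
(A sketch of such a conversion; the internal node shape here is invented
purely for illustration, and the output is JsonML-shaped:)

  function toStandardAST(node) {
    switch (node.kind) {
      case "BinaryOp":
        return [node.op, {},
                toStandardAST(node.left), toStandardAST(node.right)];
      case "NumberLiteral":
        return ["NUMBER", {"MV": node.value}];
      // ... one case per internal node kind
      default:
        throw new Error("unhandled node kind: " + node.kind);
    }
  }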

In any case, without a source-AST API, what use is a standard AST format?
The existance of that API (and the corresponding AST-source pretty-printing
API) is the main motivation for standardizing the format, AFAICS.

-- 
David-Sarah Hopwood  ⚥  http://davidsarah.livejournal.com





Re: AST in JSON format

2009-12-08 Thread David-Sarah Hopwood
Breton Slivka wrote:
 On Tue, Dec 8, 2009 at 3:57 PM, David-Sarah Hopwood
 david-sa...@jacaranda.org wrote:
 snip
 That would however depend on an assessment of whether browser
 implementors had succeeded in implementing secure and correct
 ES5-AST parsers (with a mode that accepts exactly ES5 as specified,
 not ES5 plus undocumented cruft and short-cuts for edge cases).
 
 would it make sense to abandon our attachment to using the browser
 native parser, and just implement an ES5 parser/serializer as a
 seperate standard unit, without ties to the js engine itself? Would
 there be significant disadvantage to having two parsers in one ES
 interpreter?

What attachment to using the browser native parser? It's an
implementation detail how the ES5-AST parser is constructed.
However, I wouldn't expect many implementors to want to duplicate
code and effort.

Note that with an event-driven parser, for example, it's trivially
easy to plug in different event consumers to the same parser and
generate different AST formats.
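
(Sketch only; the event interface below is made up for illustration:)

  // One consumer builds a JsonML AST from parser events...
  function JsonMLBuilder() {
    var stack = [["Program", {}]];
    return {
      startNode: function (type, attrs) { stack.push([type, attrs || {}]); },
      endNode: function () {
        var node = stack.pop();
        stack[stack.length - 1].push(node);
      },
      result: function () { return stack[0]; }
    };
  }
  // ...while a different consumer, fed exactly the same events, could build
  // the engine's internal AST, so both formats share a single parser.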

-- 
David-Sarah Hopwood  ⚥  http://davidsarah.livejournal.com





Re: AST in JSON format

2009-12-08 Thread David-Sarah Hopwood
Oliver Hunt wrote:
 On Dec 8, 2009, at 7:30 PM, Breton Slivka wrote:
 
 Right now there are projects to do this (caja, adsafe), but to do a
 runtime check requires that the user download a full JS parser, and
 validator. If part of the parsing task was built into the browser,
 there would be less code to download, and the verification would run
 much faster. This has real implications for users and developers, and
 would enable new and novel uses for JS in a browser, and distributed
 code modules.
 
 Providing an AST doesn't get you anything substantial here as the
 hard part of all this is validation, not parsing.

That's not entirely accurate. In implementing Jacaranda, I estimate
the split of effort between validation/parsing has been about 60/40.
ECMAScript is really quite difficult to lex+parse if you absolutely
need to do so correctly.

-- 
David-Sarah Hopwood  ⚥  http://davidsarah.livejournal.com





Re: AST in JSON format

2009-12-07 Thread David-Sarah Hopwood
Mark Miller wrote:
 On Mon, Dec 7, 2009 at 7:45 AM, Maciej Stachowiak m...@apple.com wrote:

 I can see how modifying the AST client-side prior to execution could be
 useful, to implement macro-like processing. But I don't see the use case for
 serializing an AST in JSON-like format, or sending it over the wire that
 way. It seems like it would be larger (and therefore slower to transmit),
 and likely no faster to parse, as compared to JavaScript source code. So
 shouldn't the serialization format just be JS source code?
 
 +1.
 
 While potentially useful, I have no interest in these ASTs as a
 serialization format nor in a compact AST encoding. I am interested in
 having a standard JsonML AST encoding of parsed ES5, and eventually an
 efficient and standard browser-side parser that emits these ASTs. Many
 forms of JS meta-programming that currently occur only on the server
 (e.g., Caja, FBJS, MSWebSandbox, Jacaranda) or have to download a full
 JS parser to the client per frame (ADsafe, JSLint, Narcissus,
 Narrative JS) could instead become lighter weight client side
 programs.

+1.

Note that:
 - although the size of the JSON serialization of the AST is not
   critical for this kind of usage, the size of the in-memory
   representation definitely is.

 - encoding node type strings as integers, as suggested earlier in the
   thread, does not help with this memory usage.

 - any Lempel-Ziv-based compression algorithm will do much better than
   replacing type strings with integers, in the few situations where
   it is useful to serialize the AST and to minimize the size of the
   serialization.

 - JsonML is a reasonable basis for an AST format even when
   serialization for free is of fairly low importance. In particular,
   it is useful that it only uses structures that are common across
   programming languages (for instance, the prototype Jacaranda verifier
   uses it even though it is written in Java). Also, programmers of
   AST-processing applications will see this serialization when
   debugging, and it is likely to appear in test cases for such
   applications and for parsers/emitters.

-- 
David-Sarah Hopwood  ⚥  http://davidsarah.livejournal.com





Re: Conflicts between W3C specs and ES5?

2009-12-03 Thread David-Sarah Hopwood
Maciej Stachowiak wrote:
 On Dec 3, 2009, at 4:06 AM, Jorge Chamorro wrote:
 
 Object.prototype.k= 27;
 console.log(k);
 -> 27

 For it's the last place where a reference would ever be looked up...
 or not ?
 (now me ducks and runs :-)
 
 No, the prototype chain is a separate concept from the scope chain.
 Entries in the scope chain are conceptually objects each of which has
 its own prototype chain.

pedantry
That's true in ES3. In ES5, entries in the scope chain are environment
records, which can be either Object Environment Records that each have
a prototype chain, or Declarative Environment Records.
/pedantry

-- 
David-Sarah Hopwood  ⚥  http://davidsarah.livejournal.com





Re: ES5 left-to-right evaluation order issues

2009-11-20 Thread David-Sarah Hopwood
Allen Wirfs-Brock wrote:
 This is good...Perhaps we should have a design rules section of the Wiki
to capture stuff like this?

I don't have edit rights to the wiki (the former ES4 wiki, that is, not
the Trac wiki), and AFAIK neither does anyone else who is not a member
of TC39.

 -Original Message-
 From: es5-discuss-boun...@mozilla.org 
 [mailto:es5-discuss-boun...@mozilla.org] On Behalf Of David-Sarah Hopwood
 Sent: Friday, November 20, 2009 7:18 PM
 To: es5-disc...@mozilla.org
 Subject: Re: ES5 left-to-right evaluation order issues
 
 Allen Wirfs-Brock wrote:
 So, the main point is that a belief that ECMAScript has or is supposed 
 to have a strict left-to-right evaluation order (as the concept is 
 likely to be understood by typical users of the language) is wrong.
 
 I'm going to have to insist that the understanding attributed to typical 
 users here is an improper understanding of evaluation and coercion in 
 general, and that ECMAScript *does* have left-to-right evaluation order.
 
 In most languages, the issue we're discussing here doesn't arise because if 
 there are implicit coercions, these coercions don't have side effects, and 
 therefore they don't have an observable ordering.
 
 However, in an imperative call-by-value language with left-to-right 
 evaluation where *individual operators* perform coercions with observable 
 side effects, the ECMAScript behaviour is precisely what should be expected.
 That's because the coercions are not part of the definition of evaluation; 
 they are specific to the computations performed by each operator.
 
 For example, the ECMAScript * operator behaves as though it were defined 
 something like the following function:
 
   function *(a, b) {
 return primitive_multiply(ToNumber(a), ToNumber(b));
   }
 
 Exactly as for a function application, the argument subexpressions are 
 evaluated, and then some computation is applied to the argument values.
 The coercions are performed, in order, as part of that computation.
 In fact, the coercions are different for each operator, so it is almost 
 essential that they be defined as part of the computation.
 
 If the change to coercion order that Allen originally suggested were made, 
 then it would be necessary to understand the evaluation of operator argument 
 subexpressions as being influenced by the operator and by the argument 
 position (for example, the shift operators coerce their left subexpression 
 with ToInt32 and their right subexpression with ToUint32), as opposed to 
 being uniform for all subexpressions. This is in some ways even weirder than 
 what ECMAScript currently does.
 
 It would also mean that if any future version of ECMAScript included operator 
 overloading, then either it would not be possible to precisely emulate the 
 semantics of the built-in versions of operators, or else the coercions would 
 have to be baked in and not overloadable (i.e. the overloading function 
 would have to receive its arguments pre-coerced, rather than being 
 responsible for coercing them). Either of these options is undesirable -- 
 the former would be inconsistent, and the latter less flexible and less 
 efficient, since it wouldn't be possible to drop the coercions.
 
 For consistency,
 Dave-Sarah's observation that we first evaluate and then coerce 
 the operands is probably the guideline we should continue to follow if 
 we ever define any additional operators where the distinction is relevant.
 
 Yes, also for the reasons above.
 
 --
 David-Sarah Hopwood  ⚥  http://davidsarah.livejournal.com

-- 
David-Sarah Hopwood  ⚥  http://davidsarah.livejournal.com





Re: Binary data (ByteArray/ByteVector) proposal on public-script-coord

2009-11-05 Thread David-Sarah Hopwood
Charles Jolley wrote:
 This looks like a good approach.  I wonder if the Data/DataBuilder
 distinction could be handled better by using the Object.freeze()
 semantics.  Even if the browser does not support freezing in the general
 sense yet, you could borrow the ideas for data.
 
 Probably the wrong API names, but here is the basic idea:
 
 Data.prototype.copy()
   - returns a mutable form of the Data object
 
 Data.prototype.freeze() or Data.freeze(aDataObject)
   - makes the Data object frozen if it is not frozen already
 
 Data.prototype.frozenCopy()
   - returns the data object but pre-frozen.  For Data object's already
 frozen can return this
 
 Data.prototype.frozen - true when frozen, false otherwise.

I don't know why we wouldn't just use Object.freeze. It is not unreasonable
to require support for the ES5 APIs as a prerequisite for the Data type.
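
That is, something like the following, where the 'Data' constructor call is a
purely hypothetical placeholder for whatever the proposal ends up specifying:

  var d = new Data([1, 2, 3]);   // hypothetical constructor call
  Object.freeze(d);              // rather than a separate d.freeze()
  Object.isFrozen(d);            // true; rather than a d.frozen flag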

-- 
David-Sarah Hopwood  ⚥  http://davidsarah.livejournal.com





Re: Binary data (ByteArray/ByteVector) proposal on public-script-coord

2009-11-05 Thread David-Sarah Hopwood
Oliver Hunt wrote:
 On Nov 5, 2009, at 4:01 PM, David-Sarah Hopwood wrote:
 Charles Jolley wrote:
 This looks like a good approach.  I wonder if the Data/DataBuilder
 distinction could be handled better by using the Object.freeze()
 semantics.  Even if the browser does not support freezing in the general
 sense yet, you could borrow the ideas for data.

 Probably the wrong API names, but here is the basic idea:

 Data.prototype.copy()
  - returns a mutable form of the Data object

 Data.prototype.freeze() or Data.freeze(aDataObject)
  - makes the Data object frozen if it is not frozen already

 Data.prototype.frozenCopy()
  - returns the data object but pre-frozen.  For Data object's already
 frozen can return this

 Data.prototype.frozen - true when frozen, false otherwise.

 I don't know why we wouldn't just use Object.freeze. It is not
 unreasonable to require support for the ES5 APIs as a prerequisite
 for the Data type.
 
 I disagree here -- i believe mutable vs. immutable data is different
 from unfrozen and frozen objects [...]

Why? What would the hypothetical Data.prototype.freeze do that would be
different to applying Object.freeze to a Data object?

 (though i agree that the function names
 freeze and frozen are just asking for problems in conjunction with ES5
 :D ).  There are plenty of times where I would want to provide immutable
 data (the UA sharing content, etc), but i may still want to modify the
 object itself.

Oh, you mean that you want *read-only* Data objects backed by a mutable
array. That is not the same thing as an immutable (or frozen) Data object.

-- 
David-Sarah Hopwood  ⚥  http://davidsarah.livejournal.com





Re: Binary data (ByteArray/ByteVector) proposal on public-script-coord

2009-11-05 Thread David-Sarah Hopwood
Oliver Hunt wrote:
 On Nov 5, 2009, at 10:14 PM, David-Sarah Hopwood wrote:
 Oliver Hunt wrote:
 I disagree here -- i believe mutable vs. immutable data is different
 from unfrozen and frozen objects [...]

 Why? What would the hypothetical Data.prototype.freeze do that would be
 different to applying Object.freeze to a Data object?

 (though i agree that the function names
 freeze and frozen are just asking for problems in conjunction with ES5
 :D ).  There are plenty of times where I would want to provide immutable
 data (the UA sharing content, etc), but i may still want to modify the
 object itself.

 Oh, you mean that you want *read-only* Data objects backed by a mutable
 array. That is not the same thing as an immutable (or frozen) Data
 object.
 
 No, the issue here is that Charles has conflated object freezing with
 immutable data,

That isn't conflation; they're the same.

 frozen objects and immutable data are not the same thing

You are mistaken. This is a case where terminology across languages is
quite consistent, and is as I've described it. Frozen means exactly the
same thing as immutable, and implies that the state of the object will
never be observed to change [*]. An object is read-only if there is no
means to directly change its state via a reference to it, which does not
necessarily imply that its state cannot be observed to change.
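
To illustrate the distinction:

  // Read-only, but not immutable/frozen:
  function makeReadOnlyView(backing) {
    return { get value() { return backing.value; } };  // no setter exposed
  }
  var backing = { value: 1 };
  var view = makeReadOnlyView(backing);
  // There is no way to change view's state via 'view' itself, and yet:
  backing.value = 2;   // view.value is now observed to be 2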

 -- for instance in the DOM I cannot set indices of a NodeList, but the
 NodeList does not need to be frozen.

NodeList objects are read-only.


[*] It is ambiguous whether indirectly referenced state can change; if
it is important that it cannot, say deep-frozen or deeply immutable.

-- 
David-Sarah Hopwood  ⚥  http://davidsarah.livejournal.com





Re: getter setter for private objects

2009-11-02 Thread David-Sarah Hopwood

memo...@googlemail.com wrote:
 2009/11/2 Brendan Eich bren...@mozilla.com:

 Defining accessors on an activation object is nasty,
 If you want private getters and setters, you can put them in an object 
 denoted by a private var:

 So you prefer ugly solutions, because the others are nasty?

Yes. Here ugly just means verbose and inelegant, whereas nasty
means having poorly understood and subtly error-prone consequences.
So ugly beats nasty every time :-)

-- 
David-Sarah Hopwood  ⚥  http://davidsarah.livejournal.com


Re: AST in JSON format

2009-10-17 Thread David-Sarah Hopwood
Kevin Curtis wrote:
 In May there was some discussion on the possibility of a standardized
 format for the ECMAScript AST in JSON/JsonML.
 
 The V8 engine has an option (in bleeding_edge) where the internal AST
 tree can be output to JsonML for debugging purposes:
 ./shell_g --print_json_ast file.js
 
 This is V8's internal AST type, which necessarily includes some
 implementation-specific artifacts. That said, the V8 AST is very
 nearly straight out of the ECMA 262 spec, so it's pretty generic.
 (Note: it's an initial version e.g doesn't recur into switch statement
 cases). It could be useful as an input as to what a standard JSON AST
 should look like. (Which, i guess, ECMAScript engines could support as
 an new, additional format to any existing AST serialization formats).

The Jacaranda parser (not released yet) also produces a JsonML AST.
Below is the same example for comparison, also with Jacaranda-specific
artefacts removed.

 Here's an example - with some V8 artefact's removed for clarity. Note:
 the script gets wrapped in a FunctionLiteral and VariableProxy ==
 Identifier.
 
 --- source ---
 
 x = 1;
 if (x > 0) {
 y = x + 2;
 print(y);
 }

["SEQ", {},
  ["EXPRSTMT", {},
    ["=", {},
      ["REF", {"name":"x"}],
      ["NUMBER", {"MV":1}]]],
  ["if", {},
    [">", {},
      ["REF", {"name":"x"}],
      ["NUMBER", {"MV":0}]],
    ["{", {},
      ["SEQ", {},
        ["EXPRSTMT", {},
          ["=", {},
            ["REF", {"name":"y"}],
            ["+", {},
              ["REF", {"name":"x"}],
              ["NUMBER", {"MV":2}]]]],
        ["EXPRSTMT", {},
          ["(", {},
            ["REF", {"name":"print"}],
            ["ARGS", {},
              ["REF", {"name":"y"}]]]]]]]]

 --- AST JsonML ---
 
 ["FunctionLiteral",
   {"name":""},
   ["ExpressionStatement",
     ["Assignment",
       {"op":"ASSIGN"},
       ["VariableProxy",
         {"name":"x"}
       ],
       ["Literal",
         {"handle":1}
       ]
     ]
   ],
   ["IfStatement",
     ["CompareOperation",
       {"op":"GT"},
       ["VariableProxy",
         {"name":"x"}
       ],
       ["Literal",
         {"handle":0}
       ]
     ],
     ["Block",
       ["ExpressionStatement",
         ["Assignment",
           {"op":"ASSIGN"},
           ["VariableProxy",
             {"name":"y"}
           ],
           ["BinaryOperation",
             {"op":"ADD"},
             ["VariableProxy",
               {"name":"x"}
             ],
             ["Literal",
               {"handle":2}
             ]
           ]
         ]
       ],
       ["ExpressionStatement",
         ["Call",
           ["VariableProxy",
             {"name":"print"}
           ],
           ["VariableProxy",
             {"name":"y"}
           ]
         ]
       ]
     ],
     ["EmptyStatement"]
   ]
 ]
 
 3

-- 
David-Sarah Hopwood  ⚥  http://davidsarah.livejournal.com



Re: Strategies for standardizing mistakes

2009-10-13 Thread David-Sarah Hopwood
Brendan Eich wrote:
 On Oct 12, 2009, at 12:23 AM, Maciej Stachowiak wrote:
 
 I don't want to get too deep into this, but I question the claim that
 [Mozilla document.all] is technically compatible with ES5. Yes, it's
 possible for a host object to return any value at any time for a property
 access. But for it to consistently decide this based on the context of
 the accessing code, this essentially means that ES3 [[Get]] (or the ES5
 equivalent) are getting extra parameters that indicate what kind of
 expression contains the subexpression.
 
 No, it means the host object can use a back-channel, or telepathy, or
 something outside of the specs but definitely inside of the implementation.

I agree with Maciej. The implementation-defined operations have clear
specifications of their parameters. I think that it is highly undesirable
to adopt an interpretation in which they can have arbitrary additional
inputs depending on the context in which they are used.

-- 
David-Sarah Hopwood  ⚥  http://davidsarah.livejournal.com



Re: access to Unicode SMP?

2009-10-13 Thread David-Sarah Hopwood
Peter Michaux wrote:
 ES5 Section 7.8.4 discusses UnicodeEscapeSequence which have four
 hexadecimal digits (i.e. \u0000 to \uFFFF) and allows specification of
 characters on the Unicode Basic Multilingual Plane. Is it possible in
 ECMAScript to specify characters on higher planes like the
 Supplementary Multilingual Plane?

It is possible to specify their UTF-16 representations using
\u escapes.
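
For example, U+1D11E (MUSICAL SYMBOL G CLEF) can be written as the surrogate
pair \uD834\uDD1E:

  var clef = "\uD834\uDD1E";   // one supplementary character; clef.length is 2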

 If not, why was that access excluded?

It wasn't a priority to add specific syntax for supplementary escapes
in ES5 (remember that ES5 has very few syntax extensions in general,
and the ones that it has, such as the get/set object literal syntax,
are taken unchanged from existing implementation precedent). I hope
that such a syntax will be included in Harmony, though, along with
more comprehensive Unicode library support.

-- 
David-Sarah Hopwood  ⚥  http://davidsarah.livejournal.com



Re: Property Iteration in JSON serialization

2009-10-13 Thread David-Sarah Hopwood
Brian Kardell wrote:
 So - keeping in mind that the instance and the serialization are
 separate things - what if I would like to order my serialization keys
 (property names) in natural order?  It has no impact on the parsing or
 internal representation, but imposing a known order on the
 serialization, even a trivial one, makes things easier to find and
 documentation easier to understand.  We've implemented this in a few
 local serializers in multiple languages and, since during
 serialization we are writing them out by iterating the keys, it's
 actually quite trivial to merely call sort() on the keys before
 iteration.  Likewise, since this only has to be done once per type -
 not once per instance, there is very, very minimal overhead (you are
 generally sorting less than 20 keys regardless of the number of
 instances).

Well, JS doesn't have types in that sense. So, unless an implementation
were to exploit any internal optimizations it has for recognizing objects
of the same shape [*], it would indeed have to sort the keys of every
instance.


[*] Do any common implementations actually do that, other than for
packed arrays?
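
(In userland ES5, sorting per instance at serialization time would look
roughly like this; a sketch only, for acyclic data:)

  function stringifySorted(value) {
    return JSON.stringify(value, function (key, v) {
      if (v && typeof v === "object" && !Array.isArray(v)) {
        // The sort happens once per object serialized, not once per "type".
        var copy = {};
        Object.keys(v).sort().forEach(function (k) { copy[k] = v[k]; });
        return copy;
      }
      return v;
    });
  }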

-- 
David-Sarah Hopwood  ⚥  http://davidsarah.livejournal.com



Dataflow concurrency and promises

2009-09-29 Thread David-Sarah Hopwood
,
   except for the important issue of reduced memory usage.

 - processes can have shared access to declarative structures --
   that is, structures that can be extended but not mutated.
   This is in practice relatively easy to reason about, and does
   not introduce the same programming difficulties as a
   shared-memory model.

   (It does introduce a limited form of nondeterminism: if two
   processes attempt to make a conflicting extension, the program
   will fail. This is a programming error. Programs without such
   errors behave deterministically, and programs with such errors
   deterministically fail, but the side-effects that occur before
   they fail may be nondeterministic.)

-- 
David-Sarah Hopwood  ⚥  http://davidsarah.livejournal.com



Re: Web IDL Garden Hose

2009-09-27 Thread David-Sarah Hopwood
Brendan Eich wrote:
 On Sep 26, 2009, at 6:08 PM, Maciej Stachowiak wrote:
 
 This may provide a way to implement some of these behaviors in pure
 ECMAScript. The current proposal does allow [[Construct]] without
 [[Call]], but not [[Call]] and [[Construct]] that both exist but with
 different behavior.
 
 Date needs the latter.

That can already be done in ES5. As I've previously suggested:

  function Date(yearOrValue, month, date, hours, minutes, seconds, ms) {
    "use strict";
    if (this === undefined) {
      return TimeToString(CurrentTime());
    }
    // constructor behaviour
    ...
  }

-- 
David-Sarah Hopwood  ⚥  http://davidsarah.livejournal.com



Re: Web IDL Garden Hose

2009-09-27 Thread David-Sarah Hopwood
Cameron McCormack wrote:
 Maciej Stachowiak:
- Note: I think catchall deleters are used only by Web Storage and
 not by other new or legacy interfaces.
 
 Allen Wirfs-Brock:
 Seems like a strong reason to change to the proposed API to eliminate the 
 need for
 a new ES language extension.
 
 When writing Web IDL originally, it didn’t seem at all to me that host
 objects were a disapproved of mechanism to get functionality that can’t
 be implemented with native objects.

That's why we need closer cooperation between TC39 and the standardizers
of WebIDL in future. If TC39 had been consulted, the disapproval of using
this mechanism in this way would have been expressed.

As Allen says, [[Delete]] and other internal properties are not intended
as an extension mechanism for arbitrary use by API bindings to ECMAScript.
Some internal methods, like [[Call]] and [[Construct]], are relatively
safe to override, but for others the invariants that the ES spec depends
on are quite delicate.

-- 
David-Sarah Hopwood  ⚥  http://davidsarah.livejournal.com



Re: Web IDL Garden Hose

2009-09-27 Thread David-Sarah Hopwood
Brendan Eich wrote:
 On Sep 27, 2009, at 10:41 AM, David-Sarah Hopwood wrote:
 Brendan Eich wrote:
 On Sep 26, 2009, at 6:08 PM, Maciej Stachowiak wrote:

 This may provide a way to implement some of these behaviors in pure
 ECMAScript. The current proposal does allow [[Construct]] without
 [[Call]], but not [[Call]] and [[Construct]] that both exist but with
 different behavior.

 Date needs the latter.

 That can already be done in ES5. As I've previously suggested:

  function Date(yearOrValue, month, date, hours, minutes, seconds, ms) {
    "use strict";
if (this === undefined) {
  return TimeToString(CurrentTime());
}
// constructor behaviour
...
  }
 
 Of course, a variation on the idiom.
 
 This is similar to what many implementations do too, rather than the
 implementation providing analogues of [[Call]] and [[Construct]]
 internal method on a non-function Date object. It works for Boolean,
 Number, String, and RegExp too.
 
 But it is just a bit unsightly!

shrug. It's compatibility guff. Unsightliness in implementation code
is better than adding a language mechanism just to handle a small and
fixed (for all time, hopefully) set of special cases.

(This has probably drifted off-topic for public-{webapps,ht...@w3.org,
sorry about that.)

-- 
David-Sarah Hopwood  ⚥  http://davidsarah.livejournal.com

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Cross posting madness must stop.

2009-09-27 Thread David-Sarah Hopwood
Mark S. Miller wrote:
 Comparing https://mail.mozilla.org/pipermail/es-discuss/2009-September/
 with http://lists.w3.org/Archives/Public/public-webapps/2009JulSep/ and 
 http://lists.w3.org/Archives/Public/public-html/2009Sep/ shows why this
 cross posting madness must stop. Some messages in this thread are only
 posted to one side of the W3C / ECMA divide, indicating that some posters
 only subscribe on one side. These posters are mutually opaque to the posters
 subscribing only on the other side of the divide, leading to a fragmented
 conversation. For example, the excellent posts by David-Sarah Hopwood 
 https://mail.mozilla.org/pipermail/es-discuss/2009-September/author.html#9879
 have generally gotten responses only from the ECMA side. Some later messages
 from the W3C side seem to have missed some of [their] points.

Indeed, I hadn't realized that my cc:s to public-webapps and public-html
were being dropped *silently*, without any bounce message. If that's due
to the configuation of those lists, then it's a rather user-hostile mailing
list behaviour, IMHO -- problems with spam notwithstanding.

A subsequent attempt to subscribe to public-html as per the instructions
at http://www.w3.org/Mail/Request, bounced with error 550 Unrouteable
address (state 14).

Mark, please forward this.

-- 
David-Sarah Hopwood  ⚥  http://davidsarah.livejournal.com

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


[[Call]] and [[Construct]]

2009-09-27 Thread David-Sarah Hopwood
Maciej Stachowiak wrote:
 On Sep 27, 2009, at 11:14 AM, Brendan Eich wrote:
 On Sep 27, 2009, at 10:41 AM, David-Sarah Hopwood wrote:
 Brendan Eich wrote:
 On Sep 26, 2009, at 6:08 PM, Maciej Stachowiak wrote:

 This may provide a way to implement some of these behaviors in pure
 ECMAScript. The current proposal does allow [[Construct]] without
 [[Call]], but not [[Call]] and [[Construct]] that both exist but with
 different behavior.

 Date needs the latter.

 That can already be done in ES5. As I've previously suggested:

 function Date(yearOrValue, month, date, hours, minutes, seconds, ms) {
   "use strict";
   if (this === undefined) {
 return TimeToString(CurrentTime());
   }
   // constructor behaviour
   ...
 }

 Of course, a variation on the idiom.

 This is similar to what many implementations do too, rather than the
 implementation providing analogues of [[Call]] and [[Construct]]
 internal method on a non-function Date object. It works for Boolean,
 Number, String, and RegExp too.

 But it is just a bit unsightly!
 
 Will this do the right thing if you explicitly bind Date to a this
 value, for example, by calling it as window.Date(), or using call,
 apply, or function.bind, or by storing Date as the property of another
 random object?

Now that I think about it, probably not (it will attempt to set the
[[Class]] of 'this' to Date, which is unsafe). But the problem here
isn't introduced by the fact that [[Call]] and [[Construct]] have
different behaviour: none of the other built-in constructors can be
safely expressed in ES5 for the same reason, regardless of their
[[Call]] behaviour.
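
Concretely, with the emulated Date sketched above (illustrative only; the
elided constructor behaviour is what makes this unsafe):

  var o = {};
  Date.call(o);    // this === o rather than undefined, so the constructor
                   // branch runs and tries to initialize o as a Date
  // Calling it as a method, e.g. window.Date(), has the same problem:
  // 'this' is window. A native Date ignores any supplied 'this' when
  // called as a function and simply returns the current time as a string.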

-- 
David-Sarah Hopwood  ⚥  http://davidsarah.livejournal.com

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: ECMA TC 39 / W3C HTML and WebApps WG coordination

2009-09-26 Thread David-Sarah Hopwood
Maciej Stachowiak wrote:
 I think there are two possible perspectives on what constitutes
 magnify[ing] the problem or widening the gap
 
 A) Any new kind of requirement for implementations of object interfaces
 that can't be implemented in pure ECMAScript expands the scope of the
 problem.
 B) Any new interface that isn't implementable in ECMAScript widens the
 gap, even if it is for a reason that also applies to legacy

My view is firmly B, for the reasons given below.

 My view is A. That's why I pointed to legacy interfaces - if the
 construct can't go away from APIs in general, but we wish to implement
 all APIs in ECMAScript, then ultimately it is ECMAScript that must
 change, so using the same construct again doesn't create a new problem.

Yes it does:

 - In many cases, APIs are partially redundant, in such a way that
   developers can choose to avoid some of the legacy interfaces without
   any significant loss of functionality. By doing so, they can avoid the
   problems caused by clashes between names defined in HTML, and names of
   ECMAScript methods. If new APIs also use catch-alls, they are less
   likely to be able to do this.

 - The potential name clashes created by catch-alls also create a forward
   compatibility issue: if a new method is added to an interface, it
   might clash with names used in existing HTML content. In the case of
   legacy interfaces, it is less likely that we want to add new methods
   to them, and so this forward compatibility issue is less of a problem.

 - Implementors of subsets in which the DOM APIs are tamed for security
   reasons can choose not to implement some APIs that are problematic for
   them to support; but if new APIs are equally problematic, they will be
   unable to provide access to that functionality.

-- 
David-Sarah Hopwood  ⚥  http://davidsarah.livejournal.com

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: debugging interfaces

2009-08-18 Thread David-Sarah Hopwood
Jordan Osete wrote:
 Also storing references to arguments or variables for later use is
 impractical, as it would slow down execution dramatically. So the main
 issue about the potential inclusion of variable and arguments
 information is that when we still have got it, we don't know if it will
 ever be used. Always including it means wasting performance dramatically
 (and is a potential nightmare for the engine developers, but I'm sure
 they could manage it ;) ), but never including it means that we throw
 away information that could potentially be useful...
 
 Now, how about letting the user ask for that information only at one
 point - when it is still here ? Or better: before.
 It may seem foolish, but if we allow some kind of way to tell that we
 desperately need that information - for example in the try statement -
 then the engine can enter the try statement knowing that we will need it.
 
 try
 {
...
 }
 catch( e, fullStackInformation )   //notice the second parameter here
 {
...
 }

Since the body of the try statement can call arbitrary other code, this
doesn't help to decide which code should be compiled in a way that preserves
extra debugging information. Remember that if we compile code to do that,
it incurs overhead whether or not an exception actually occurs.

It would be possible to compile both optimized and deoptimized versions
of each function, and check in the optimized version whether it is in the
dynamic scope of such a 'try' block. (Actually, there's no need to restrict
it to 'try' blocks if doing that.) However, that would still add the
overhead of the check to the entry code for all optimized (and not inlined)
functions. I think it would be an overspecification to require any such
feature.

As Christian says, we might define a common interface for implementations
that do want to support this, but I don't think it requires changes to
language syntax. A 'runWithMoreDebugInfo(someFunction)' API would suffice.
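
For concreteness, a usage sketch only -- runWithMoreDebugInfo is just the
name floated above and exists in no engine; the no-op fallback merely makes
the sketch self-contained, and somethingThatMightThrow is hypothetical:

  if (typeof runWithMoreDebugInfo !== "function") {
    this.runWithMoreDebugInfo = function (f) { return f(); };  // no extra info kept
  }

  runWithMoreDebugInfo(function () {
    try {
      somethingThatMightThrow();   // hypothetical function under test
    } catch (e) {
      // An engine honouring the hint could keep richer stack and variable
      // information alive for e here than it normally would.
      print(e.stack || e);
    }
  });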

-- 
David-Sarah Hopwood  ⚥  http://davidsarah.livejournal.com

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Operator overloading revisited

2009-07-09 Thread David-Sarah Hopwood
Brendan Eich wrote:
 On Jul 6, 2009, at 6:10 PM, Alex Russell wrote:
 
 This point disturbs me. Making classes frozen solves no existing JS
 programmer problems, introduces new restrictions with no mind paid to
 the current (useful) patterns of WRT the prototype chain, and
 introduces the need for a const that only seems there to make some
 weird security use-cases work. And I say that with all sympathy to
 weird language features and the security concerns in question.

 Why should this stuff be the default?
 
 What Mark said, I agree completely with his post: this stuff is *not*
 the default -- function as constructor is the default, and no one is
 salting that syntax. So why are you against sugar for high-integrity
 programmers? It's not as if those weirdos will take over the world, right?

We security weirdos fully intend to take over the world.
You have been warned.

-- 
David-Sarah Hopwood  ⚥  http://davidsarah.livejournal.com

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: extension modules

2009-06-14 Thread David-Sarah Hopwood
kevin curtis wrote:
 Python has a concept 'extension modules' where module can be
 implemented in c/c++. Also the idea of running native code in the
 browser has been put forward by Google's native client for running x86
 in the client. MS - i think - are researching something similar.

The idea of running native code securely in the browser is speculative
and unproven. Nothing should be standardized in this area unless and
until such approaches can be demonstrated to have a reasonable chance
of resisting attack. To do so would be to repeat previous mistakes that
have led to the insecure web we currently have.

 c/c++ isn't going anywhere and the relationship between ecmascript and
 c/c++ is interesting. Are there any proposals for something like
 'extension modules' for ES6 or do the variations in the engine
 implementations preclude such a thing?

As far as a foreign function interface for non-web uses of JavaScript
is concerned, that is something that might in principle be worth
standardizing (probably separately from ES6).

However, the internal C/C++ interfaces typically used by current JS
implementations are highly error-prone, make too many assumptions about
implementation details (particularly memory management), and are not
suitable for wider use.

-- 
David-Sarah Hopwood  ⚥  http://davidsarah.livejournal.com


___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: How would shallow generators compose with lambda?

2009-05-28 Thread David-Sarah Hopwood
Brendan Eich wrote:
 I think we missed an alternative that comports with Tennent's Oversold
 Correspondence Principle, *and* composes. Thanks to Dave Herman for
 pointing it out.
 
 function gen(x) {
   foo( lambda (x) (yield x*x) );
 }
 
 need not yield from gen if the lambda is called from foo or another
 function -- it can throw the same error it would throw if the lambda
 escaped upward/heapward and was called after gen had returned. There's
 no requirement that yield not throw in any case where the lambda is not
 applied in the context of gen.

Well, that depends on what lambda is expected to be used for.

If it is expected to be used to implement general user-defined control
structures, then this restriction would prevent a yield from appearing
in the body of any such structure.

For the use of lambda in built-in expansions, OTOH, this would probably
be adequate, assuming the check that the lambda is called from the body
of the generator function is applied *after* expansion.

-- 
David-Sarah Hopwood  ⚥  http://davidsarah.livejournal.com

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: yield syntax

2009-05-19 Thread David-Sarah Hopwood
Igor Bukanov wrote:
 2009/5/18 Brendan Eich bren...@mozilla.com:
 On May 18, 2009, at 2:25 AM, Igor Bukanov wrote:
 The plus side of this is that an empty generator can be created with a
 straightforward:

  Generator(function() {})

 and not with a rather unnatural

  (function() { if (false) yield; })()
 No one makes empty generators.
 
 For me the problem with the way the generators are defined is that a
 dead code like that if (0) yield; affects the semantic by mere
 presence of it. Surely, this is not the first feature in ES that has
 that property - if (0) var a; is another example. But if (0)
 yield; sets a new record affecting the nature of the whole function.

A more explicit alternative is to require some kind of decoration on the
function definition, e.g. (just a straw man):

  function generator foo() { ... }

-- 
David-Sarah Hopwood ⚥

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: yield syntax

2009-05-17 Thread David-Sarah Hopwood
Brendan Eich wrote:
 On May 17, 2009, at 12:43 PM, Mark S. Miller wrote: 
 On Sun, May 17, 2009 at 11:00 AM, Brendan Eich bren...@mozilla.com
 wrote:
 Analogous to direct vs. indirect eval in ES5 (15.1.2.1.1), there is no
 purely syntactic specification for what Neil proposes. A runtime
 check is required. So I don't see why you are focusing only on syntax here.

 I don't follow. What runtime check? For the eval operator, the runtime
 check is whether the value of the eval variable is the original global
 eval function. It makes no sense to have a corresponding global yield
 function value.
 
 If we reserve yield then you're right. One of the appealing (at least to
 me) aspects of Neil's suggestion was that it would avoid opt-in
 versioning required by reserving yield (which is used in extant web
 content, or was when we tried reserving it without opt-in versioning --
 the particular use was as a formal parameter name, used as a flag not a
 function).

Oh, right. We've been talking at cross-purposes. I assumed that you were
suggesting that 'yield' should be contextually reserved. That is what
I've been saying couldn't work due to ambiguities.

-- 
David-Sarah Hopwood ⚥

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Objects for Number, String, Boolean, Date acts like their native counter part

2009-05-17 Thread David-Sarah Hopwood
Biju wrote:
 [behaviour of wrappers] is weird...
 
 Why can't we make objects for Number, String, Boolean, Date act like
 their native counterparts?
 That will be what an average web developer is expecting.
 And I don't think it will make the existing web break.

No, this brokenness is heavily relied on. It's not an obscure corner case.

Just don't use wrapper objects [*]. They are totally unnecessary and
useless. If a fix is needed, it is to have the spec say that explicitly.


[*] except where they are generated implicitly as temporaries when a
property of a primitive value is accessed -- although I bet most
JS programmers don't even know that is happening.
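
To spell out a couple of the behaviours being relied on (or tripped over):

  var b = new Boolean(false);
  typeof b;        // "object", not "boolean"
  if (b) { }       // the branch is taken: every object is truthy,
                   // even a wrapper around false
  b === false;     // false: the wrapper is not the primitive

  // The implicit temporary wrapper mentioned in [*]:
  "hello".length;  // 5 -- a String wrapper is created, used, and discarded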

-- 
David-Sarah Hopwood ⚥

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Dataflow concurrency instead of generators

2009-05-15 Thread David-Sarah Hopwood
John Cowan wrote:
 David-Sarah Hopwood scripsit:
 
 Then the functionality of a generator can be implemented using a
 process/thread that extends a list or queue constructed from
 dataflow variables.
 
 Quite so.  How, if at all, do these dataflow variables differ from
 Prolog variables?

Prolog itself is a sequential language (although there have been
many concurrent extensions of it). Prolog supports logic variables,
which are a generalisation of single-assignment variables that use
a unification algorithm for update. Dataflow variables are generalised
from single-assignment variables in a different direction, in order to
support sychronization between concurrent threads or lightweight
processes.

(view in fixed-width font)

  single-assignment    + unification        logic
      variables      --------------->     variables
          |                                   |
          | + concurrency                     | + concurrency
          |                                   |
          v            + unification          v
  dataflow variables --------------->   concurrent logic
                                            variables


Historically (and at the risk of oversimplifying), the development
was from logic programming languages such as Prolog, to concurrent
logic languages such as Flat Concurrent Prolog, and later to
concurrent constraint languages such as AKL and Oz. Dataflow
variables are a simplified version of concurrent logic variables
that do not support update by full unification, but do support
being bound more than once to identical values.
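
As a rough illustration (my sketch, not taken from any of the libraries
above): a single-assignment dataflow variable can be bound once, tolerates
re-binding to an identical value, and defers readers until the value is known.

  function DataflowVar() {
    var bound = false, value, waiters = [];
    return {
      bind: function (v) {
        if (bound) {
          if (v !== value) throw new Error("bound to a different value");
          return;                          // identical re-binding is allowed
        }
        bound = true;
        value = v;
        for (var i = 0; i < waiters.length; i++) waiters[i](v);
        waiters = [];
      },
      read: function (k) {                 // k runs once the value is known
        if (bound) k(value); else waiters.push(k);
      }
    };
  }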

 As a less complex option, lambdas + Lua-style semicoroutines.

Why do you say that the option I've suggested is complex?
Anecdotal evidence from programmers with experience of Oz and
other languages supporting dataflow concurrency suggests that
this programming model is typically found to be quite simple to
use. Certainly it is not complicated to specify or implement.

 These are
 first-class and deep; you can yield at any point, not just lexically
 within the coroutine, but don't support resuming arbitrary coroutines,
 only the caller (but it's easy to write a general coroutine dispatcher).
 Lua also provides multiple value returns for both coroutines and
 functions, but currently has no support for native threads.

I don't agree that coroutines are less complex than dataflow
concurrency. In fact, coroutines present almost exactly the same
potential complexities in the programming model as threading with
shared memory (which is significantly more complex than dataflow
concurrency), but without the performance advantage of being able
to support actual parallel execution.

-- 
David-Sarah Hopwood ⚥

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Dataflow concurrency instead of generators

2009-05-14 Thread David-Sarah Hopwood
[I sent this to es5-discuss, when I intended es-discuss. Sorry for the
noise for people subscribed to both.]

David-Sarah Hopwood wrote:
 Jason Orendorff wrote:
 On Thu, May 14, 2009 at 12:25 PM, Mark S. Miller erig...@google.com wrote:
 Given both shallow generators and lambda, I don't understand how they
 would interact.
 This is a good question.
 
 So, why do we need generators anyway?
 
 I know generators are quite popular in languages that support them.
 However, there are other language features that can be used to
 provide similar (or greater) functionality, and that would not
 introduce the same problematic control flow interactions.
 
 For instance, suppose that you have dataflow concurrency, as supported
 by Oz and by recent dataflow extensions for Ruby, Python, and Scala:
 
 http://www.mozart-oz.org/documentation/tutorial/node8.html
 http://github.com/larrytheliquid/dataflow/tree/master
 http://pypi.python.org/pypi/dataflow/0.1.0
 http://github.com/jboner/scala-dataflow/tree/master
 
 Then the functionality of a generator can be implemented using a
 process/thread that extends a list or queue constructed from
 dataflow variables.
 
 This approach avoids any problems due to a generator being able
 to interfere with the control flow of its callers. It also allows
 the producer process to run truly in parallel with the consumer(s),
 possibly taking advantage of multiple CPU cores. The programming
 model is no more complicated for the cases that correspond to
 correct use of generators, because strict use of dataflow variables
 for communication between processes (with no other mutable data
 structures shared between processes) is declarative, i.e. it will
 give the same results as a sequential generator-based implementation
 would have done.
 
 Although it gives the same computational results, the dataflow-
 concurrent approach allows more flexibility in the flow control
 between producer and consumer: for example, the producer process
 can be allowed to run freely ahead of the consumer process, or
 constrained to generate only a bounded number of unconsumed
 elements. The special case where the producer process only
 generates the next unconsumed element and only starts to generate
 it when needed -- effectively sequentializing the producer and
 consumer -- corresponds to a sequential generator or coroutine.
 (This requires by-need dataflow variables, as supported by Oz and
 at least the Ruby library mentioned above.)
 However, I suspect that the bounded queue is likely to be more
 efficient and more often what is really wanted.
 
 This flow-control flexibility can be exercised by passing different
 kinds of dataflow list/queue implementation into the producer process,
 without changing the latter's code. It is possible to construct
 more general dataflow networks that can split or merge streams,
 if needed. Dataflow concurrency can also be extended to more
 expressive concurrency models that introduce nondeterminism,
 and the dataflow features only gain in usefulness in that case.
 
 If asked to pick two language features from
 {TC-respecting lambdas, generators, dataflow concurrency},
 I would always pick lambdas and dataflow concurrency, and drop
 generators as a primitive feature. It is still possible to mimic
 a generator-like API, for programmers who are used to that, in
 terms of the dataflow concurrency features. The semantics will
 be slightly different because a generator will not be able to
 directly access shared mutable data structures (it could still
 access structures containing immutable and dataflow variables),
 but this limitation is IMHO more than outweighed by the greater
 generality and potential parallelism.

-- 
David-Sarah Hopwood ⚥

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: yield syntax (diverging from: How would shallow generators compose with lambda?)

2009-05-14 Thread David-Sarah Hopwood
Brendan Eich wrote:
 On May 14, 2009, at 1:14 PM, Neil Mix wrote:
 
 I have this idea that it would be better for yield expressions to look
 like function calls: 'yield' '(' expression? ')'.  (Note that this is
 only a syntactical suggestion; clearly an implementation wouldn't
 actually treat it like a function.)  That would eliminate the
 precedence issues Brendan cites while also making the syntax backward
 compatible with earlier ES parsers.  Is there any technical reason why
 that wouldn't be possible?
 
 The syntax could work but we need to reserve yield contextually.
 It can't be a user-defined function name and a built-in function. The
 compiler must unambiguously know that yield (the built-in) is being
 called in order to make the enclosing function a generator.
 
 This is reason enough in my view to keep yield a prefix operator and
 reserve it.

But that doesn't help: the argument to yield is an arbitrary expression,
so 'yield (foo)' could be either a function call or a yield-expression.
That means that this approach can at best be no simpler to implement or
specify than the function call syntax.

With the function call syntax, it would be sufficient to keep the
existing ES5 grammar for function calls, and then check after parsing
whether a MemberExpression or CallExpression followed by Arguments is
the string yield. With the operator syntax, it's more complicated
than that because there are more syntactic contexts to consider.

 Another reason is your duck/cow point, which I think is a separate point
 from compiler analyzability. Really, no one writes yield(...) in Python,
 and extra parens hurt (I know RSI sufferers who benefit from lack of
 shifting in Python and Ruby).

Yes, those are separate points that I am not arguing against here.

-- 
David-Sarah Hopwood ⚥

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Dataflow concurrency instead of generators

2009-05-14 Thread David-Sarah Hopwood
Brendan Eich wrote:
 On May 14, 2009, at 4:34 PM, David-Sarah Hopwood wrote:
 
 This approach avoids any problems due to a generator being able
 to interfere with the control flow of its callers.
 
 A generator can't interfere with the control flow of its callers.
 
 Can you give an example of what you meant by that?

I meant this:

Brendan Eich wrote:
 Jason Orendorff wrote:
 In ES5, when you call a function, you can expect it to return or throw
 eventually.  (Unless you run out of memory, or time, and the whole
 script gets terminated.)  With shallow generators, this is still true.
 A 'yield' might never return control, but function calls are ok.  But
 with generators+lambdas, almost any function call *anywhere* in the
 program might never return or throw.  This weakens 'finally', at
 least.
 
 To make this clear with an example (thanks to Jason for some IRC interaction):
 
 function gen(arg) {
 foo((lambda (x) yield x), arg);
 }
 function foo(callback, arg) {
 try {
 callback(arg);
 } finally {
 alert("I'm ok!");
 }
 }
 g = gen(42);
 print(g.next()); // tell the user the meaning of life, etc.
 g = null;
 gc();
 
 I think finally is the only issue, since how else can you tell that foo
 didn't see a return or exception from the callback?

The consequences of this issue are not restricted to code using
'finally'; even without finally, yield+generators complicates the
conceptual model of call-return control flow, in ways that are not
possible with yield alone (since yield is restricted to only making
non-local jumps to lexically enclosing labels) or shallow generators
alone (since they can be modelled as a local transformation).

The recent huge thread on python-ideas about yield-from disabused
me once and for all of the idea that (non-shallow) generator semantics
are simple to understand, even for experts.

-- 
David-Sarah Hopwood ⚥

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: [Caja] Language-Based Isolation of Untrusted JavaScript

2009-05-11 Thread David-Sarah Hopwood
TobyMurray wrote:
 Hi caja folks,
 
 I expect you're all aware of this but I wanted to mention a paper I
 recently came across.

 There is some really interesting formal work being done on secure
 [subsets] of JavaScript. The paper whose title is the subject of this
 post is particularly relevant and is available at:
 http://www.doc.ic.ac.uk/~maffeis/csf09.pdf

I wasn't aware of this paper, thanks.

First a technical question. The paper says in Definition 2 that,
apart from numeric properties, the properties

  toString, toNumber, valueOf, length, prototype,
  constructor, message, arguments, Object, Array, RegExp

can be accessed implicitly. However no 'toNumber' property is
mentioned anywhere in the ECMAScript specs, and I don't know of
any implementation-specific property of that name. Have I missed
something, or is 'toNumber' a figment of the authors' imagination?
(This is unfortunately almost impossible to search for.)

-- 
David-Sarah Hopwood ⚥

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Spawn proposal strawman

2009-05-11 Thread David-Sarah Hopwood
Kris Kowal wrote:
 On Mon, May 11, 2009 at 9:26 AM, Brendan Eich bren...@mozilla.com wrote:
 On May 8, 2009, at 8:49 PM, Kris Kowal wrote:
  "(function (require, exports) {" + text + "/**/\n}"
 Nit-picking a bit on names: require : provide :: import : export -- so
 mixing require and export mixes metaphors. Never stopped me ;-).
 
 I agree about mixing metaphors.  The befuddlement of start : stop ::
 begin : end is one that bothers me a lot.  The notion is to desugar
 import and export to these two facets, importing and exporting.
 imports : exports would be proper, but doesn't read well in code.  The
 reason for using the term exports is to ease migration, since:
 
  exports.a = function a() {};
 
 Is easy to transform textually to:
 
  export a = function a() {};
 
 So, I'm inclined to stick with exports instead of provide.  The
 metaphor would be complete if we used imports(id) or import(id).
 Since import is a keyword, it would not be available for the
 desugarred syntax.

Neither import nor export are ES3 or ES5 keywords. However, both
are context-dependent keywords in Mozilla JavaScript:

https://developer.mozilla.org/En/Core_JavaScript_1.5_Reference/Statements/import
https://developer.mozilla.org/en/Core_JavaScript_1.5_Reference/Statements/export

I don't know whether any future 'import' or 'export' syntax could be
made not to collide with the Mozilla extensions.

 Perhaps I'm behind on the times, but I'm under the impression that
 presently the behavior of this function foo declaration has no
 standard behavior:
 
 (function () {
function foo() {
}
 })();

No, that is perfectly standard (and implemented correctly cross-browser).
The body of the outer function is a sequence of SourceElements, which
allows a FunctionDeclaration. 'foo' is bound only within the outer
function's scope.
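
For instance:

  (function () {
    function foo() { return 42; }
    foo();            // fine: foo is bound inside the outer function
  })();
  typeof foo;         // "undefined" -- foo does not leak into the
                      // enclosing (here global) scope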

-- 
David-Sarah Hopwood ⚥

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Spawn proposal strawman

2009-05-09 Thread David-Sarah Hopwood
kevin curtis wrote:
 Re:
 eval.hermetic(program :(string | AST), thisValue, optBindings) :any
 
 Is a 'canonical' AST part of the plans for ecmascript 6/harmony.

I hope so; that would be extremely useful. I would like to see an
ECMAScript source -> AST parser (as well as an AST evaluator) in the
Harmony standard library.

-- 
David-Sarah Hopwood ⚥

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Spawn proposal strawman

2009-05-09 Thread David-Sarah Hopwood
Mark S. Miller wrote:
 On Sat, May 9, 2009 at 2:32 PM, David-Sarah Hopwood
 david-sa...@jacaranda.org wrote:
 [...] but the AST should preserve the associativity defined in the
 language spec.
 
 But which language spec? Again, specs only traffic in observable
 differences. Since ES5 does not define any std parse or AST API, there
 is no observable difference in ES5 whether this is specified as
 left-or-right associative. Assuming ES6 does define such APIs, the
 difference becomes observable. I see no reason why ES6 could not
 compatibly specify a right associative grammar for || and &&.

I have no objection to that as long as the AST API and the ES6 grammar
are consistent with each other.

-- 
David-Sarah Hopwood ⚥

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Operators ||= and &&=

2009-05-06 Thread David-Sarah Hopwood
liorean wrote:
 On Tue, May 5, 2009 at 11:37 AM, Peter Michaux petermich...@gmail.com 
 wrote:
  function(a=1, b=defaultVal);

 And in this syntax will default values be used if the parameter is
 falsey or only if it is undefined?
 
 2009/5/5 Mark S. Miller erig...@google.com:
 Or only if it is absent?
 
 I've been out of the ECMAScript world for many months now, but IIRC in
 ES3 all formal parameters that are absent gets initiated to the value
 undefined. Not sure which side of the function call border that
 initiation takes place on, though. Wouldn't special casing absence
 from undefined value effectively introduce another state for a
 variable to be in, though, since the behaviour is indistinguishable in
 user code in ES3?

It's not indistinguishable; exactly the first arguments.length parameters
are present, regardless of whether they are undefined.
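
For example:

  function f(a, b) {
    return arguments.length;
  }
  f();             // 0 -- both parameters are absent (and undefined)
  f(undefined);    // 1 -- a is present but undefined; b is absent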

-- 
David-Sarah Hopwood ⚥

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Catchall proposal

2009-05-06 Thread David-Sarah Hopwood
Brendan Eich wrote:
[...]
 I finally found time to write up a proposal, sketchy and incomplete, but
 ready for some ever-lovin' es-discuss peer review ;-).
 
 http://wiki.ecmascript.org/doku.php?id=strawman:catchalls

# Catchalls are sometimes thought of as being called for every
# access to a property, whether the property exists or not.

The ability to define 'has', 'get', and 'invoke' handlers for the
case where the property exists, is definitely needed IMHO. Suppressing
the visibility of a property is potentially as useful as creating a
virtual property, especially for the secure JS subsets.

I agree however that this is only needed for some use cases, and
for those case in which it is not needed, it would be inconvenient
(and less efficient) to require has/get/invoke handlers to perform
the default action. Defining 'has' and 'hasMissing', 'get' and
'getMissing', etc. appears to solve this problem, and I think that
the extra complexity is justified.

(Defining both the always and missing versions of a handler
in the descriptor is not useful, and could be an error.)

# Defaulting: sometimes a catchall wants to defer to the default
# action specified by the language’s semantics, e.g. delegate to a
# prototype object for a get. The ES4 proposal, inspired by Python
# and ES4/JS1.7+ iteration protocol design, provided a singleton
# exception object, denoted by a constant binding, DefaultAction,
# for the catchall to throw. This can be efficiently implemented
# and it does not preempt the return value.

This means that 'throw e', where e might have come from an unknown
source, has to be avoided in a handler in favour of something like
'throw e === DefaultAction ? new Error() : e'. Yuck.

# Runaway prevention: should a catchall, while its call is active,
# be automatically suppressed from re-entering itself for the given
# id on the target object?

I think that all catchalls on a given object O, not just those for
the same id, should be suppressed when handling a catchall for O.
If you want the behaviour that would occur as a result of triggering
a catchall for another property, then it is easy to inline that
behaviour in the handler. But if you want to suppress the catchall
behaviour for another property while in a handler, then it would be
difficult to do so under the semantics suggested above.

(Also the per-object suppression is easier to specify; it just
requires an [[InCatchall]] flag on each object.)

-- 
David-Sarah Hopwood ⚥

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Catchall proposal

2009-05-06 Thread David-Sarah Hopwood
David-Sarah Hopwood wrote:
 Brendan Eich wrote:
 [...]
 I finally found time to write up a proposal, sketchy and incomplete, but
 ready for some ever-lovin' es-discuss peer review ;-).

 http://wiki.ecmascript.org/doku.php?id=strawman:catchalls
 
 # Catchalls are sometimes thought of as being called for every
 # access to a property, whether the property exists or not.
 
 The ability to define 'has', 'get', and 'invoke' handlers for the
 case where the property exists, is definitely needed IMHO. Suppressing
 the visibility of a property is potentially as useful as creating a
 virtual property, especially for the secure JS subsets.
 
 I agree however that this is only needed for some use cases, and
 for those case in which it is not needed, it would be inconvenient
 (and less efficient) to require has/get/invoke handlers to perform
 the default action. Defining 'has' and 'hasMissing', 'get' and
 'getMissing', etc. appears to solve this problem, and I think that
 the extra complexity is justified.

To flesh this out a bit more, I propose the following handlers:

  has(id)
  hasMissing(id)
  get(id)
  getMissing(id)
  set(id, val)
  setMissing(id, val)
  invoke(id, args)
  invokeMissing(id, args)
  delete(id)
  call(args)
  new(args)

'add' in Brendan's proposal is effectively renamed to 'setMissing',
except that the initial value is passed to 'setMissing' just as it
would be for 'set'.

'call' handles calls to the object using the function call syntax.

'new' handles calls to the object using 'new ...(...)'.
(Note that keywords can be used as property names as of ES5.)

There is no need for a 'deleteMissing' handler.

It is an error (causing defineCatchAll to throw) if both 'foo' and
'fooMissing' are present for foo = {has, get, set, invoke}.

While a catchall is entered for object O, an O.[[InCatchall]] flag
is set that suppresses *all* catchalls for O, i.e. reverting to
the default ES5 behaviour.

Catchall handlers are called with 'this' bound to the object on
which the catchall was triggered.

The 'DefaultAction' idea is not used.
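
A usage sketch, to make the shape concrete (defineCatchAll is a hypothetical
strawman API; its exact location -- assumed here to be on Object -- is not
settled):

  var virtual = {};
  Object.defineCatchAll(virtual, {
    getMissing: function (id) {        // consulted only for absent properties
      return "computed value for " + id;
    },
    setMissing: function (id, val) {   // 'add' renamed; receives the value
      // [[InCatchall]] is set while this runs, so this plain assignment
      // gets the default ES5 behaviour rather than recursing.
      this[id] = val;
    }
  });

  virtual.x;         // "computed value for x" -- no 'x' property exists
  virtual.y = 1;     // routed through setMissing; creates a real 'y'
  virtual.y;         // 1 -- 'y' now exists, so getMissing is not consulted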

-- 
David-Sarah Hopwood ⚥

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Universal Feature Detection

2009-04-29 Thread David-Sarah Hopwood
David Foley wrote:
 Please forgive me if I'm polluting the list, and re-direct me if I am,
 but considering that  there has been so much focus on browser
 implementation,  that JavaScript is also employable in various
 'environments' (IDE's, Servers etc.) and that all of these environments
 avail different features to developers, that a universal / standard
 feature detection API, perhaps through a standardised global Environment
 object, would be prudent.
 
 Are there any plans to do such?

There will probably be some kind of module system in ES-Harmony
(which will be prototyped before then). It would make sense for that
to support querying whether a given module is available, its version,
and other metainformation about it.

-- 
David-Sarah Hopwood ⚥

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Case transformations in strings

2009-03-24 Thread David-Sarah Hopwood
Christian Plesner Hansen wrote:
 David-Sarah Hopwood wrote:
 If converting one character to many would cause a problem with the
 reference to toUpperCase in the regular expression algorithm, then
 presumably Safari and Chrome would hit that problem. Do they, or
 do they use different uppercase conversions for regexps vs
 toUpperCase?
 
 Chrome uses context (but not locale) sensitive special casing for
 ordinary toUpperCase.  For regexps it uses the same mapping but
 doesn't convert chars that map to more than one char and non-ascii
 chars that would have converted to ascii chars.  We would have liked
 to use the full multi-character mapping without the exception for
 ascii but couldn't for compatibility reasons.

Can you expand on what the compatibility problem was for
non-ASCII - ASCII mappings in regexps?

-- 
David-Sarah Hopwood ⚥

___
Es-discuss mailing list
Es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Case transformations in strings

2009-03-24 Thread David-Sarah Hopwood
David-Sarah Hopwood wrote:
 Christian Plesner Hansen wrote:
 David-Sarah Hopwood wrote:
 If converting one character to many would cause a problem with the
 reference to toUpperCase in the regular expression algorithm, then
 presumably Safari and Chrome would hit that problem. Do they, or
 do they use different uppercase conversions for regexps vs
 toUpperCase?
 Chrome uses context (but not locale) sensitive special casing for
 ordinary toUpperCase.  For regexps it uses the same mapping but
 doesn't convert chars that map to more than one char and non-ascii
 chars that would have converted to ascii chars.  We would have liked
 to use the full multi-character mapping without the exception for
 ascii but couldn't for compatibility reasons.
 
 Can you expand on what the compatibility problem was for
 non-ASCII - ASCII mappings in regexps?

Oh, never mind -- this is required by step 5 of Canonicalize in section
15.10.2.8.

So, there would be no regexp-related problems with requiring toUpperCase
to perform multi-code-unit and/or context-sensitive mappings in ES3.1.
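
For example, U+017F (LATIN SMALL LETTER LONG S) uppercases to the ASCII
letter S, and step 5 deliberately ignores exactly that kind of mapping:

  "\u017F".toUpperCase() === "S";   // true
  /s/i.test("\u017F");              // false per 15.10.2.8: the
                                    // non-ASCII -> ASCII mapping is dropped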

-- 
David-Sarah Hopwood ⚥

___
Es-discuss mailing list
Es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Exactly where is a RegularExpressionLiteral allowed?

2009-03-24 Thread David-Sarah Hopwood
Waldemar Horwat wrote:
 David-Sarah Hopwood wrote:
 I'll repeat my argument here for convenience:

   A DivisionPunctuator must be preceded by an expression.
   A RegularExpressionLiteral is itself an expression.

 (This assumes that the omission of RegularExpressionLiteral from
 Literal is a bug.)

   Therefore, for there to exist syntactic contexts in which either
   a DivisionPunctuator or a RegularExpressionLiteral could occur,
   it would have to be possible for an expression to immediately
   follow [*] another expression with no intervening operator.
   The only case in which that can occur is where a semicolon is
   automatically inserted between the two expressions.
   Assume that case: then the second expression cannot begin
   with [*] a token whose first character is '/', because that
   would have been interpreted as a DivisionPunctuator, and so
   no semicolon insertion would have occurred (because semicolon
   insertion only occurs where there would otherwise have been a
   syntax error); contradiction.
 
 Yes, I verified when we were writing ES3 that this was the only case
 where the syntactic grammar permitted a / to serve as both a division
 (or division-assignment) and a regexp literal.  The interaction of
 lexing and semicolon insertion would have been unclear (how can you say
 that the next token is invalid if you don't know how to lex it?), so we
 wrote the spec to explicitly resolve those in favor of division.

If that is what the note is intended to clarify, I think its current
wording is more confusing than helpful. It certainly confused me.
Anyway, there is no case in which a regexp needs to be parenthesized
to avoid lexical ambiguity.

How about replacing the current wording by something that specifically
discusses the semicolon insertion issue, with an example:

  There are two goal symbols for the lexical grammar. The InputElementDiv
  symbol is used in those syntactic grammar contexts where a leading
  division (/) or division-assignment (/=) operator is permitted. The
  InputElementRegExp symbol is used in other syntactic grammar contexts.

  NOTE
  There are no syntactic grammar contexts where both a leading division
  or division-assignment, and a leading RegularExpressionLiteral are
  permitted. This is not affected by semicolon insertion (section 7.9);
  in examples such as the following:

a = b
/hi/g.exec(c).map(d);

  where the first non-whitespace, non-comment character after a
  LineTerminator is '/' and the syntactic context allows division or
  division-assignment, no semicolon is inserted at the LineTerminator.
  That is, this example is interpreted in the same way as:

a = b / hi / g.exec(c).map(d);

-- 
David-Sarah Hopwood ⚥

___
Es-discuss mailing list
Es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Exactly where is a RegularExpressionLiteral allowed?

2009-03-23 Thread David-Sarah Hopwood
Brendan Eich wrote:
[...]
 If you make the /= correction, there is no ambiguity.

Indeed there is no ambiguity; I think Allen's point is that the spec
is currently written in a way that is very unhelpful in allowing one to
conclude that.

 7.1 last paragraph:
 
 Note that contexts exist in the syntactic grammar where both a division
 and a RegularExpressionLiteral are permitted by the syntactic grammar;

I believe that statement is wrong.
I gave a detailed argument for why it is wrong at
http://www.mail-archive.com/es-discuss@mozilla.org/msg01329.html;
the only reply was by Eric Suen, and the argument in his post was incorrect.

I'll repeat my argument here for convenience:

  A DivisionPunctuator must be preceded by an expression.
  A RegularExpressionLiteral is itself an expression.

(This assumes that the omission of RegularExpressionLiteral from
Literal is a bug.)

  Therefore, for there to exist syntactic contexts in which either
  a DivisionPunctuator or a RegularExpressionLiteral could occur,
  it would have to be possible for an expression to immediately
  follow [*] another expression with no intervening operator.
  The only case in which that can occur is where a semicolon is
  automatically inserted between the two expressions.
  Assume that case: then the second expression cannot begin
  with [*] a token whose first character is '/', because that
  would have been interpreted as a DivisionPunctuator, and so
  no semicolon insertion would have occurred (because semicolon
  insertion only occurs where there would otherwise have been a
  syntax error); contradiction.

  [*] Ignoring comments and whitespace.

 however, since the lexical grammar uses the InputElementDiv goal symbol
 in such cases, the opening slash is not recognised as starting a regular
 expression literal in such a context. As a workaround, one may enclose
 the regular expression literal in parentheses.
 
 ASI strikes again:
 
 a = b
 /hi/g.exec(c).map(d);

Semicolon insertion is not possible after 'b', because the '/' following
it is a valid token in that context, so there is no syntax error that
could prompt semicolon insertion.

 The Note takes care of this.

This is not a case where the note applies; it's essentially the same
case as given by Eric Suen. My response to him is at
http://www.mail-archive.com/es-discuss@mozilla.org/msg01331.html.

-- 
David-Sarah Hopwood ⚥

___
Es-discuss mailing list
Es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Exactly where is a RegularExpressionLiteral allowed?

2009-03-23 Thread David-Sarah Hopwood
Brendan Eich wrote:
 On Mar 23, 2009, at 12:38 PM, Allen Wirfs-Brock wrote:
 
 I don't think so, although perhaps the fix is as easy as adding
 RegularExpressionLiteral as an alternative RHS for PrimaryExpression.
 
 Oh sure -- that is the missing link. Thanks!

It should be Literal, not PrimaryExpression. There is no technical
difference (since Literal is only used as one of the alternatives
for PrimaryExpression), but it's just common sense that a
RegularExpressionLiteral is a literal.

-- 
David-Sarah Hopwood ⚥

___
Es-discuss mailing list
Es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Case transformations in strings

2009-03-23 Thread David-Sarah Hopwood
Waldemar Horwat wrote:
 Allen Wirfs-Brock wrote:
 Any input from our other Unicode experts would be appreciated...

 Here's what I found (running on Windows Vista):
 IE, FF, Opera
 "\u00DF".toUpperCase()  returns "\u00DF"
 Safari, Chrome
 "\u00DF".toUpperCase()  returns "SS"
[...]
 The reason the ES3 specification was the way it was is because
 converting one character to many during case conversions would be
 incompatible with regular expressions.  The regular expression algorithm
 refers to String.prototype.toUpperCase.

If converting one character to many would cause a problem with the
reference to toUpperCase in the regular expression algorithm, then
presumably Safari and Chrome would hit that problem. Do they, or
do they use different uppercase conversions for regexps vs
toUpperCase?

If the latter, then we should allow that, and probably require it.

-- 
David-Sarah Hopwood ⚥

___
Es-discuss mailing list
Es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: 15.4.4.21 Array.prototype.reduce ( callbackfn [ , initialValue [ , thisArg ] ] )

2009-03-21 Thread David-Sarah Hopwood
Edward Lee wrote:
 Right now [].reduce doesn't take an optional thisArg, so the callback
 is always called with |null| for |this| per 9.c.ii.
 
 The Array prototype object take an optional thisArg for every, some,
 forEach, map, and filter; so it would be good to make reduce
 consistent with the rest.

Why is it better to use 'this' than to simply have the callback function
capture the variables it needs? The latter is just as expressive and
IMHO results in clearer code, since:

 - the captured variables are named, and the names can be more
   meaningful than 'this';
 - there can be more than one such variable, without needing to set
   'this' to an object or list.

The required variables are necessarily in scope when passing a
FunctionExpression as the callback. The case where they are not in scope
because the callback function is defined elsewhere is quite unusual;
in that case, you can instead pass a lambda expression that calls the
function defined elsewhere with these variables as explicit parameters.
(That is a situation where using 'this' results in particularly *unclear*
code, because the definition of what 'this' is set to may be far away
from its use.)
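
For example (illustrative names only):

  var weights = { a: 2, b: 3, c: 5 };

  var total = ["a", "b", "c"].reduce(function (acc, key) {
    return acc + weights[key];       // 'weights' is simply captured by name
  }, 0);                             // total === 10

  // versus threading the same state through 'this', which would need the
  // proposed thisArg or an explicit bind:
  //   ["a", "b", "c"].reduce(function (acc, key) {
  //     return acc + this.weights[key];
  //   }.bind({ weights: weights }), 0);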

The other methods with callbacks take a 'thisArg' not because it is
needed or even useful, but for compatibility, because they already do
in existing implementations that provide these functions.

-- 
David-Sarah Hopwood ⚥

___
Es-discuss mailing list
Es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: 15.4.4.21 Array.prototype.reduce ( callbackfn [ , initialValue [ , thisArg ] ] )

2009-03-21 Thread David-Sarah Hopwood
Edward Lee wrote:
 On Sat, Mar 21, 2009 at 9:50 AM, David-Sarah Hopwood
 david.hopw...@industrial-designers.co.uk wrote:
 Why is it better to use 'this' than to simply have the callback function
 capture the variables it needs?

 It's nice to be able to consistently refer to the same 'this' from an
 prototype's function whether from inside a local lambda or another
 function on that prototype. Any generic function that takes 2
 arguments and returns 1 can be used for reduce, but if that callback
 is a prototype's function, its 'this' will be wrong unless you
 provided extra code to bind the function to an object.
 
 Yes, you can achieve this in other ways by just binding the callback
 to the object before passing it to reduce, so one minor benefit is
 that it's more compact:
 
 [].reduce(fn, init, this)
 [].reduce(fn.bind(this), init)

Very minor. '.bind(this)' has the advantage of working in general for all
such cases, not just for particular Array methods.

In the thisless style where objects are constructed as closures rather
than using prototypes, of course, this problem never happens.

 But the main reason is just consistency with the rest of the functions
 that take a callback.

I accept that consistency is a valid consideration; I just don't think it
outweighs the considerations given in my previous post. I'm not strongly
opposed to adding 'thisArg' to these functions, though, if the consensus
is in favour. My argument is primarily that they're not needed and that
it is better for programs to use variable capture, and either the
thisless style or '.bind(this)', instead.

-- 
David-Sarah Hopwood ⚥

___
Es-discuss mailing list
Es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: features for es that would make it a perfect intermediate compiler target

2009-03-21 Thread David-Sarah Hopwood
Ash Berlin wrote:
 On 21 Mar 2009, at 22:16, Luke Kenneth Casson Leighton wrote:
 
 i'd be interested to hear how these issues can be addressed, using the
 new language features, so that javascript can become a language that
 can be taken seriously instead of being treated as something that
 people avoid at all costs [silverlight, anyone?]
 
 Also a lot of your concerns only address javascript as a language
 running inside a browser - I for one think it has a future as a stand
 alone language.

I don't see that Luke's argument is at all dependent on whether it is
an implementation of JS embedded in a browser that is being used as a
compilation target. His post has numerous technical errors (which I'll
address separately), but this isn't one of them.

-- 
David-Sarah Hopwood ⚥

___
Es-discuss mailing list
Es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: ES3.1 questions and issues

2009-03-18 Thread David-Sarah Hopwood
Mark Miller wrote:
 On Tue, Mar 17, 2009 at 6:56 PM, Allen Wirfs-Brock
 allen.wirfs-br...@microsoft.com wrote:
 In all other Array.prototype functions (and some other places) where similar 
 possibilities exist, I have a situational appropriate variation of a 
 sentence such as The final state of O is unspecified if in the above 
 algorithm any call to the [[ThrowingPut]] or [[Delete]] internal method of O 
 throws an exception.
 
 Oh. I see that now. Searching, I see it in
 Array.prototype.{pop,push,reverse,shift,splice,unshift,map}
 
 I could find no other examples.
 
 I think the inclusion of map is a mistake. Map does not mutate O.
 
 For the others, I think unspecified is way too broad.
 1) These methods must not modify any frozen properties.
 2) The only values they may write into these properties should be
 those that might have been written had the specified algorithm been
 followed up to the point that the error was thrown. Otherwise, a
 conforming implementation could react to a failure to pop by writing
 the global object or your password into the array.
 3) Is there any reason for this looseness at all? If you simply leave
 out this qualifier, then the reading of these algorithms consistent
 with the rest of the spec is that the side effects that happened up to
 the point of the exception remain, while no further side effects
 happen. That assumption is pervasive across all other algorithmic
 descriptions in the spec. I think we should just drop this qualifier.

I agree completely, and particularly with points 1) and 2). There
should be very good reasons to make behaviour unspecified or
implementation-defined; here there is not.

-- 
David-Sarah Hopwood ⚥

___
Es-discuss mailing list
Es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: name property for built-in functions??

2009-03-11 Thread David-Sarah Hopwood
Garrett Smith wrote:
 I (finally) realized it would be useful to allow setting the name dynamically.
 
 In a good number of cases, a closure is used where
 Function.prototype.bind() would be used. In this case, a
 generically-named or anonymous function is created and returned. It is
 not possible to parametrize the function name.
 
 EventPublisher.fireEvent = function(publisher) {
 
   return function [publisher.eventName + "Handler"](ev) {
 // code here.
   };
 };
 
 The square bracket is just pseudo. The syntax is incompatible.
 
 Maciej' Function.create proposal:-
 
 Function.create("[Foo bar]", "param1", "param2", "code(); goes(); here();");
 
 - uses strings to create a function. Escaping strings in strings is
 awkward to read and write. Refactoring a function into strings to
 arguments to Function.create would be tedious and error-prone.
 Especially if the source already contains strings in strings (html = "<p
 id='p'>p<\/p>"). Using strings to build functions is cumbersome.
 
 Eval uses the calling context's scope. I do not know what the scope
 would be for a function created with Function.create.  To use the
 calling context's scope would seem to be not secure. However, it would
 seem necessary to wrap a function.
 
 Possible alternative:-
   Function.create( name, fun[, context] );

I don't see the problem here that would require overriding the context.
The scope used by fun would be its original lexical scope. In the example
above:

  EventPublisher.fireEvent = function(publisher) {
return Function.create(publisher.eventName + Handler, function(ev) {
  // code here
  // can refer to 'publisher', etc. if needed
});
  };

All the hypothetical Function.create does is to create a new function that
behaves the same as its fun argument, but with a different name. The
.name property of all function objects would be non-[[Writable]] and
non-[[Configurable]].

Whether this is actually needed, I'm not sure, but it has all of the
functional and security properties I've seen stated as desirable so far
in this thread.
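
As a rough approximation (a shim, workable only in engines where a function's
existing 'name' property happens to be configurable; the actual proposal
assumes a primitive, with 'name' then locked down):

  if (!Function.create) {
    Function.create = function (name, fun) {
      var renamed = function () { return fun.apply(this, arguments); };
      Object.defineProperty(renamed, "name",
        { value: String(name), writable: false, configurable: false });
      return renamed;
    };
  }

  function greet(x) { return "hello " + x; }
  var g = Function.create("welcome", greet);
  g("world");    // "hello world" -- same behaviour as greet
  g.name;        // "welcome"
  greet.name;    // "greet" -- the original is untouched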

-- 
David-Sarah Hopwood ⚥

___
Es-discuss mailing list
Es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: name property for built-in functions??

2009-03-09 Thread David-Sarah Hopwood
Allen Wirfs-Brock wrote:
 I have another concern about the potential interactions between the
 proposed name property and toString.  Apparently, there is a known use
 case of eval'ing the result of toString'ing a function in order to create
 a new function. If we assign synthetic names such as get_foo or set_foo
 to syntactically defined getter/setter functions or allow a user to
 associate a name with an anonymous function which then appears in the
 toString representation will mean that eval will parse the toString result
 as a FunctionDeclaration rather than a FunctionExpression.

 For non-strict evals, that means that the synthetic name will get added
 to the Declaration Environment of the eval. Note that for indirect evals,
 the Declaration Environment is now the Global Environment but even for
 nested eval this possibility seems like a hazard that that most uses are
 not dealing with.

I don't see why this is an interaction between 'name' and 'toString'.
Isn't this issue independent of whether 'name' is present?

-- 
David-Sarah Hopwood ⚥



___
Es-discuss mailing list
Es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: name property for built-in functions??

2009-03-07 Thread David-Sarah Hopwood
David-Sarah Hopwood wrote:
 Brendan Eich wrote:
 The utility of mutable name for anonymous functions is not at issue if
 we do not define name at all on such functions -- this is the proposal
 Allen and I were converging on. You can set name on such anonymous,
 expressed functions to whatever value you like, delete it, shadow it via
 .prototype/construction, etc.
 
 I think that's a good solution; it meets the Objective-J use case without
 introducing the mutability issue raised by MarkM.
 
 The only issue remaining in this anonymous function case is whether
 toString picks up the assigned name. For anonymous functions only, this
 could be done without breaking the isolation property that allowing
 mutation of the name initialized from the declared name of a
 non-anonymous function would break. In fact it would seem independent:

 anonymous function referenced by variable f:
   * name can be set;
   * if set to a value converting to the string "g", then f.toString()
 returns "function g(...) {...}" (modulo whitespace).
 
 If name is set to a value that is not an Identifier, then the resulting
 string might not be a syntactically correct FunctionExpression or
 FunctionDeclaration.
 
 Of course a possible response is "Don't do that".

I meant, "Don't rely on the result of toString being syntactically correct
after setting the name property to a non-Identifier."; not necessarily
"Don't set the name property to a non-Identifier."

 Since the code that
 is setting the name could do other things that would achieve the same
 effect (for example, setting 'toString'), a "Don't do that" answer may
 be adequate in this case. The function object can be sealed to prevent
 all such mutations.
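
To spell out the distinction I meant above, under the hypothetical
semantics being discussed (no current implementation behaves this way):

  var f = function (x) { return x; };   // anonymous

  f.name = "g";
  // f.toString() would be "function g(x) { return x; }" -- still a
  // syntactically correct FunctionExpression.

  f.name = "not an identifier";
  // Setting the name is not itself an error, but f.toString() would
  // now read "function not an identifier(x) { return x; }", which is
  // not parseable; don't rely on eval'ing it.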

-- 
David-Sarah Hopwood ⚥

___
Es-discuss mailing list
Es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: parseInt and implicit octal constants

2009-02-23 Thread David-Sarah Hopwood
Allen Wirfs-Brock wrote:
 David-Sarah Hopwood wrote:
 Herman Venter wrote:
 I appreciate that this proposal does not try to go all the way on
 octal. I am not so sure this is a good thing or that it makes the
 proposal more likely to succeed.

 I wouldn't be opposed to removing octal entirely from the spec, but
 bearing in mind the section 16 wording on syntactic extensions, even
 that would not prevent implementors from conformantly supporting it.
 
 Actually, I don't think section 16 applies.

It doesn't apply to parseInt; it does to octal numeric literals and
string/regexp octal escapes (which going all the way on octal would
remove from the spec).
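
For reference, the forms in question look like this (all legacy
extensions rather than part of the core grammar):

  010;      // octal numeric literal: 8, where supported
  "\101";   // octal string escape: "A"
  /\101/;   // octal regexp escape, matching "A"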

[...]
 Finally, there is another approach to resolving this issue.  Define a
 new global function, parseInteger, that does the right thing and
 relegate parseInt to Annex B.

That's not a bad idea, given that parseInt has the additional flaw of
silently stopping at the first invalid character, which this change
will not fix. But it should be a static method of Number, not a global,
to avoid further polluting the global namespace and potentially clashing
with existing user functions.
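
A rough sketch of what I have in mind (the name Number.parseInteger and
the exact behaviour shown here are only a suggestion, not an agreed API):

  Number.parseInteger = function (s, radix) {
    var r = radix === undefined ? 10 : radix;
    // Reject the whole string rather than silently stopping at the
    // first invalid character.
    var digits = "0123456789abcdefghijklmnopqrstuvwxyz".slice(0, r);
    var valid = new RegExp("^[+-]?[" + digits + "]+$", "i");
    if (!valid.test(s)) {
      return NaN;
    }
    return parseInt(s, r);   // with an explicit radix, "08" parses as 8
  };

  Number.parseInteger("08");        // 8
  Number.parseInteger("8 weeks");   // NaN, where parseInt returns 8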

-- 
David-Sarah Hopwood ⚥


___
Es-discuss mailing list
Es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Object.prototype.link

2009-02-22 Thread David-Sarah Hopwood
memo...@googlemail.com wrote:
 I'd like to use link(obj, target).
 
 E.g.
 a = 10;
 link(b, a);
 a++;
 b++;
 print(b);
 // output: 12

That would require a catchall mechanism, allowing accesses to
nonexistent properties of an object to be handled. It's quite likely
that such a mechanism will be added, I think, but there is currently
no detailed concrete proposal.

Some previous discussion:
https://mail.mozilla.org/pipermail/es-discuss/2008-November/008159.html
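
Purely for illustration, the kind of thing such a catchall would enable;
the Proxy-style get/set trap object used below is just one possible shape
for such a mechanism, and nothing along these lines is currently specified:

  function makeLinked(target, aliases) {        // e.g. aliases = { b: "a" }
    return new Proxy(target, {
      get: function (obj, prop) {
        return obj[aliases.hasOwnProperty(prop) ? aliases[prop] : prop];
      },
      set: function (obj, prop, value) {
        obj[aliases.hasOwnProperty(prop) ? aliases[prop] : prop] = value;
        return true;
      }
    });
  }

  var o = makeLinked({ a: 10 }, { b: "a" });
  o.a++;
  o.b++;
  o.b;   // 12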

-- 
David-Sarah Hopwood ⚥

___
Es-discuss mailing list
Es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Object.prototype.link

2009-02-22 Thread David-Sarah Hopwood
David-Sarah Hopwood wrote:
 memo...@googlemail.com wrote:
 I'd like to use link(obj, target).

 E.g.
 a = 10;
 link(b, a);
 a++;
 b++;
 print(b);
 // output: 12
 
 That would require a catchall mechanism, allowing accesses to
 nonexistent properties of an object to be handled. It's quite likely
 that such a mechanism will be added, I think, but there is currently
 no detailed concrete proposal.

Sorry, I misread your example; a catchall mechanism is not particularly
relevant to it.

What you've asked for above can be implemented in terms of linking of
properties using getters/setters, in the case where the variables are in
the global scope or in a scope introduced using 'with'. However, why on
earth would you want this? It looks frightful.
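
For what it's worth, this is the sort of thing I mean (a sketch only; it
assumes the names are passed as strings, and that top-level 'this' is the
global object, as in a script):

  var global = this;   // top-level 'this' in a script

  function link(targetName, sourceName) {
    Object.defineProperty(global, targetName, {
      get: function ()  { return global[sourceName]; },
      set: function (v) { global[sourceName] = v; },
      configurable: true
    });
  }

  var a = 10;
  link("b", "a");
  a++;        // a is 11
  b++;        // reads 11 through the getter, writes 12 back to a
  print(b);   // 12 (print as in the original example)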

-- 
David-Sarah Hopwood ⚥

___
Es-discuss mailing list
Es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: parseInt and implicit octal constants

2009-02-22 Thread David-Sarah Hopwood
Herman Venter wrote:
 I appreciate that this proposal does not try to go all the way on octal. I am 
 not so sure this is a good thing or that it makes the proposal more likely to 
 succeed.

I wouldn't be opposed to removing octal entirely from the spec, but
bearing in mind the section 16 wording on syntactic extensions, even
that would not prevent implementors from conformantly supporting it.

 For the record, I'm personally all for the proposed change, would also like
 to see all other forms of octal go away and most of all would like to have a
 standard that defines just one language, not a powerset.
 
 But even if the standard does change, I'm not going to bet on the 
 implementations following suit any time soon. I'm not so sure that having a 
 standard that is ignored is a good thing either.
 
 If the change in the standard is agreed to by the representatives of the 
 implementers, they should first be sure that the change will in fact be made 
 in their implementations (and sooner rather than later, as in their next 
 release).

If there is a thorough test suite for specification changes in ES3.1:
http://bugs.ecmascript.org/ticket/449,
then I would expect there to be considerable pressure from developers
for implementations to pass that test suite, as there has been in
similar cases such as the ACID tests.

(Of course, a test suite cannot guarantee conformance, but it can test
whether implementors have tried to address spec changes and known bugs.)

I have submitted a bug for this change to parseInt, with a test case:
http://bugs.ecmascript.org/ticket/449.

-- 
David-Sarah Hopwood ⚥

___
Es-discuss mailing list
Es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Am I paranoid enough?

2009-02-21 Thread David-Sarah Hopwood
Waldemar Horwat wrote:
 What are you trying to do?  Exclude all scripts that use the && operator?

Oops, I failed to describe the intended restrictions correctly.
Any sequence of consecutive '&' would be allowed if followed by AmpFollower.
But at least this tells me you were paying attention ;-)

 David-Sarah Hopwood wrote:
 Suppose that S is a Unicode string in which each character matches
 ValidChar below, not containing the subsequences "<!", "</" or "]]>", and
 not containing ("&" followed by a character not matching AmpFollower).

-- 
David-Sarah Hopwood
___
Es-discuss mailing list
Es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss

