Re: return when desugaring to closures

2008-10-09 Thread David Herman
Sorry, I was unclear. I meant 'lambda' for the expression form and 'define' for 
the definition form.

Dave

- Original Message -
From: Brendan Eich [EMAIL PROTECTED]
To: David Herman [EMAIL PROTECTED]
Cc: Peter Michaux [EMAIL PROTECTED], es3 x-discuss [EMAIL PROTECTED], 
es-discuss@mozilla.org
Sent: Thursday, October 9, 2008 9:12:26 PM GMT -05:00 US/Canada Eastern
Subject: Re: return when desugaring to closures

On Oct 9, 2008, at 4:28 PM, David Herman wrote:

 How would people feel about the declaration form being 'define'  
 instead of lambda? As in:

define const(x) {
lambda(y) x
}

 Maybe I'm just accustomed to Scheme, but it looks awkward to me for  
 the declaration form to be called lambda. Dylan also used 'define'.

For named functions, it's less cryptic, it has clearer connotations.  
For anonymous functions, e.g.:

   (define (x) {...})(x)

or

   return foo(define (y) {...}, z);

your mileage *will* vary, but it seems worse by a hair to me. But I'm  
used to lambda as a term of art.

The obscurity of lambda helps it avoid collisions (we have ways of  
unreserving keywords in property-name contexts, but these do not work  
for formal parameters and variables named define, which seem  
likelier at a guess than lambda -- spidering the web could help  
confirm this guess).
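
To make the collision worry concrete, here is a made-up but representative bit of
code (the names are mine, purely illustrative) where "define" already appears as
an ordinary formal parameter and variable, and would break if the word were
reserved outright:

    function withRegistry(define) {        // formal parameter named `define`
        define("answer", 42);
    }

    var define = function (name, value) {  // variable named `define`
        // record the binding somewhere
    };

    withRegistry(define);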

The obscurity also arguably partners lambda better with function.  
Setting up define as a cleaner function seems to switch domains of  
discourse. Concretely, we have in ES3.1 Object.defineProperty and  
similarly named functions. These "define" APIs were prefigured by  
Object.prototype.__defineGetter__, etc. This sense of "define" has  
meant "bind property name to value or getter/setter".
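
Concretely, that existing sense of "define", as a minimal ES3.1/ES5-era sketch
(nothing here is new API):

    var obj = {};
    Object.defineProperty(obj, "x", { value: 42, writable: false });
    obj.__defineGetter__("y", function () { return this.x + 1; });  // the older precursor
    obj.y;   // 43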

On the other side, Python, E, etc. use def. But we would be verbose  
like Scheme and Dylan. So define vs. lambda.

End of my bike-shedding ruminations.

/be


Re: return when desugaring to closures

2008-10-11 Thread David Herman
   if (h == 0)
 h = function() {break};

Did you mean if (x == 0)? That's been confusing me in trying to read your 
example.

Dave


Re: return when desugaring to closures

2008-10-11 Thread David Herman
 Also, I wonder why lambda in it's block-less form is restricted to
 expressions.

I'm with you... but I'd want to check with the experts on the ES grammar to see 
whether this introduces any nasty ambiguities.

Dave


Re: return when desugaring to closures

2008-10-11 Thread David Herman
 Sounds good to me but it is a little confusing to keep track if let
 is either in or out of ES-Harmony and if it is partly in then which
 of
 the several JavaScript 1.7 uses are in and if there will be let,
 let*, letrec semantics.

I've got no crystal ball, but I'd say it'd be unlikely (and terribly silly) 
that we'd have `lambda' without having `let'.

(The `lambda' form would fail in its requirement as an equivalence-preserving 
primitive if it became a target of `var'-hoisting.)
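
A concrete version of that worry, sketched with plain functions standing in for
`lambda' (my own illustration): wrapping a block in a closure changes where
`var' bindings land, so the rewrite is only equivalence-preserving if the
bindings inside are block-scoped.

    function direct() {
        { var x = 1; }      // `var' hoists to the enclosing function...
        return x;           // ...so this returns 1
    }

    function wrapped() {
        (function () { var x = 1; })();
        return x;           // ReferenceError: the binding no longer escapes
    }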

Dave


Re: Module isolation

2010-01-18 Thread David Herman
[Removed Mark's address from Cc to stop my smtp server from complaining]

On Jan 18, 2010, at 12:14 PM, Brendan Eich wrote:

 Copy and paste. I copy prototype-1.6.0.2.js into mybigfatmodule.js, add my 
 special sauce which makes good use of Prototype's extensions to primordials 
 such as Array.prototype, and then purvey it to the world.

I think I mentioned this before at a meeting: we should address modules and 
isolation as separate concerns, and likely separate constructs.

If you do that, you can construct new contexts and control explicitly which 
contexts modules get instantiated in. That way you don't have your modules 
tightly coupled with instantiations of e.g. the primordials (and there's no 
reason to limit this to just the primordials). This would also allow explicit 
control over multiple modules *sharing* contexts -- you don't just want 
isolation, you also want the ability to control when /not/ to isolate.

Of course, you also want sensible and convenient defaults, both for back-compat 
and to avoid infecting all code with the general case.

But say you want to use Prototype and a 3rd-party Prototype-based widget, but 
you also want to use a couple other libraries that expect the primordials to be 
left alone. You create a separate context for the former two modules & let 'em 
go to town.

Dave

PS I'm not claiming we should prevent people from copying and pasting a bunch 
of modules into one fat one, if that's what they want/need to do, but we also 
shouldn't prevent people from using separate modules separately & still 
controlling isolation. Separating modularity and isolation is a way to do that.



Re: Isolated worlds (was Re: Module isolation) (Adam Barth)

2010-01-20 Thread David Herman
[BTW, I couldn't see what you were replying to. Quoting would be helpful.]

The call-with-current-program-state feature is awfully heavyweight. I suspect 
it's plenty useful simply to provide facilities for creating separate contexts 
in which modules can be *initiated*, as a way of *kicking off* isolated (or 
partially isolated) states, but without providing the facility for freezing and 
cloning the *current* state of the world.

Dave

On Jan 20, 2010, at 2:53 PM, Kam Kasravi wrote:

 
 This reminds me of ometa's implementation of worlds which provides isolation.
 
 http://www.vpri.org/pdf/rn2008001_worlds.pdf
 



simple modules

2010-01-29 Thread David Herman
We had a good discussion about modules at this week's meeting, and Sam 
Tobin-Hochstadt and I have worked out a strawman for a simple module system. 
I've posted a first draft to the wiki:

http://wiki.ecmascript.org/doku.php?id=strawman:simple_modules

There are lots of examples to peruse at:

http://wiki.ecmascript.org/doku.php?id=strawman:simple_modules_examples

which might also be a decent starting point to get a feel for the design.

This has a good bit of overlap with -- and is in many ways inspired by -- Ihab 
and Chris's work, but it has at least one major departure: modules in this 
proposal are not inherently first class values, although we do provide ways of 
getting objects that reflect the contents of modules.

This is just a draft, and we'll be revising it as we go along. But I'm happy to 
take any feedback people have. Also, please let me know what's unclear. I'll 
try to both clarify on es-discuss and revise the page accordingly.

Dave



Re: simple modules

2010-02-01 Thread David Herman
Hi Vassily, thanks for the feedback.

 It should be
 
 <script type="harmony">
 
 // import everything as Math
 import Math;
 
 alert("2π = " + Math.sum(Math.pi, Math.pi));
 
 </script>

This is already possible with the `import Math as Math' form (which 
incidentally can easily be compiled to be exactly as efficient). Leaving the 
as Math part implicit doesn't work if the module specifier is not 
syntactically an identifier:

import @#$!;
@#$!.mumble(grunch)

 We already have with for polluting local namespace, 
 and short syntax for such polluting doesn't feel right.

That's an inappropriate comparison, for two critical reasons. First: `with' 
*dynamically* changes the environment, so it destroys lexical scope, whereas 
when you import everything from a second-class module, it is still possible to 
know statically what bindings are in scope. Second: there's no contention, 
since it's a static error to import two modules that bind the same name. So 
conflicts are ruled out, and there's no ambiguity in a valid program.

Now, I recognize that some people feel that stylistically the import 
everything approach is bad /style/, and that programmers ought to list all 
their imports explicitly. But at the same time, this is still a scripting 
language, and it has to be convenient. There's a difference between advocating 
a style and forcing it. And if it imposes too heavy a burden you get nasty 
unintended consequences (e.g., nobody uses modules at all).

 Some longer syntax would be better, e.g.
 
 <script type="harmony">
 
 // import everything
 import Math as this;
 
 alert("2π = " + sum(pi, pi));
 
 </script>

That would only work with at most one module.

Dave



Re: simple modules

2010-02-01 Thread David Herman
 Sounds good. A Context is configured with the objects (eg dom, xhr) that the 
 developer wants to make accessible in the Context. These objects are bound to 
 an outer lexical frame which all modules imported into the Context can 
 access. Contexts are the means by which access to platform resources are 
 mediated. (Or is that wrong. Hey - i'll wait for the strawman :).

In terms of mechanism, that's not exactly what I had in mind, but in terms of 
purpose that's the rough idea (specifically: contexts are the means by which 
access to platform resources are mediated -- yes). Creating new contexts would 
make it possible to restrict what modules can be imported, so you could create 
pure execution contexts in which you could evaluate code that would not have 
access to anything with interesting authority. This would of course be 
host-dependent.

Dave



Re: simple modules

2010-02-02 Thread David Herman
 But this should be a usable ocap language, not gratuitously lacking any 
 features from full Harmony that could have been provided safely, merely 
 because Harmony unnecessarily chose to provide them in an insecurable manner.

That's a bit hyperbolic; no one's proposing an insecurable system. Our 
proposal strives to let simplicity and conciseness outweigh isolation in the 
common case, and yet still make isolation possible. Within that isolation there 
should be plenty of room for building secure subsets.

For example, many of the first-class module strawmen have attempted to build 
off a secure eval; simple modules + isolation would essentially provide that. 
(If that's not obvious, it's entirely our fault, since we haven't written up 
anything on isolation yet.)

 In particular, it would be bizarre for Harmony to have two distinct and 
 disjoint module systems, A and B, simply because module system A was 
 unnecessarily inappropriate for the ocap subset.

Well, they can be addressing different concerns, especially when they have 
pretty dramatic differences. But more to the point, starting with the goal of 
supporting all the features needed for an objcap subset is needlessly tying our 
hands behind our backs. Nobody's trying to jeopardize objcap subsetting, but we 
also can't let objcap goals eclipse others.

Dave



Re: simple modules

2010-02-02 Thread David Herman
Not quite sure how to unpack the question. Let me try a quick sketch, at least 
for the simple modules system:

- a module ID resolver is a mapping from module ID's to module instances
- a module context is a set of module instances
- a module context object is a first-class value representing a module context

A module has at most one instance per module context, but may be instantiated 
separately in separate contexts. Module contexts might share some module 
instances; for example, the standard library might be shared. There might be a 
facility for explicitly constructing module contexts that share some module 
instances. Not sure.

Different module contexts may have different module ID resolvers, so for 
example it would be possible for host environments to provide a SecureESContext 
that didn't allow identifiers to resolve to the filesystem module or the 
dom module.

There would be an API for dynamically evaluating code in a given context; 
roughly a `loadModule' that takes a Context argument (or is a method of 
Context).
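
To make that a bit more concrete, here is a purely illustrative ES5 simulation
of those concepts -- makeContext, secureResolver, and registry are names I'm
making up for the sketch, not the strawman's API:

    // a "context" pairs a module ID resolver with its own instance table
    function makeContext(resolver) {
        var instances = {};                    // at most one instance per ID, per context
        return {
            require: function (id) {
                if (!(id in instances)) instances[id] = resolver(id)();
                return instances[id];
            }
        };
    }

    var registry = {                           // table of module "makers"
        math: function () { return { sq: function (x) { return x * x; } }; }
    };

    function secureResolver(id) {              // refuses to resolve powerful modules
        if (id === "dom" || id === "filesystem")
            throw new Error("not available in this context: " + id);
        return registry[id];
    }

    var ctx = makeContext(secureResolver);
    ctx.require("math").sq(3);                 // 9
    // ctx.require("dom") would throw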

Does this help?

Dave

On Feb 2, 2010, at 5:03 PM, Kris Kowal wrote:

 Presuming that in the proverbial glossary a Context is what ES
 presently calls an execution context that has some intrinsic
 Primordials, and a Sandbox is a mapping from unique module
 identifiers to modules (albeit instances or makers depending on what
 proposal you're talking about), does this proposal suggest that there
 is exactly one Context for every Sandbox and that any module
 block statement evaluated in a context populates the corresponding
 sandbox with a mapping from the given module identifier to the first
 class exports object of that module?
 
 Kris Kowal



Re: simple modules

2010-02-03 Thread David Herman
[including the list, which got inadvertently cut]

 Yes, I understand that short import  is not with, it just smells
 like with and like global. Your reasons are correct for language
 implementers and I am telling about language users.

To be clear: this is not for implementers, it's for users. The point is that 
you can always know *statically* what is in scope. When you say

function f() { ... }
var x = ...
with(obj) {
f(x);
}

you don't know whether the references to f and x are captured by `obj' 
properties or not. When you say

import * from Foo;
f(x);

you know statically that f and x came from Foo. It doesn't depend on the 
runtime behavior of the program.

 This is exactly about promoting good style. And when I proposed some
 shortcut for function 2-3 years ago in this list, it was discarded
 for good-style and readability reasons.

IIRC, it was discarded because a) we couldn't agree on a color for the 
bikeshed, b) it was determined that it didn't add enough utility to the 
language that wasn't already there.

My point is that it's kind of a deal-breaker for every script on the web to 
have to write e.g.:

<script type="harmony">
import alert, document, window from DOM;
...
</script>

People have not had much problem writing `function' since the mesozoic era of 
JavaScript. But if we want to get them to use a new module system, we can't 
impose undue burdens simply for the sake of promoting good style. Especially 
when the downsides of ignoring that good style are not all that grave.

 About burden - I see it - new item Never use default import will be
 added to books and code style guides after Never use global and
 with.  And every user will need to remember one more JS quirk.

And if *I* were writing a style book, I would not say "never use default 
import." I would say, "if you are writing a module of a reasonable size, or 
with a reasonable number of imports, or if it's not particularly clear, then 
don't use default import. Otherwise, go to town." It's your language, do what 
you see fit.

It's not a JS quirk, it's actually an important feature.

Dave



Re: simple modules

2010-02-03 Thread David Herman
 - a module context is a set of module instances
 
 Please call this something else.

Okay.

 It is confusing for context to
 mean execution context and module context depending on the
 context.  I've called this a system of modules or sandbox of
 modules in the past.

Well, a module system is a language construct that provides modules. I think 
sandbox sort of suggests more isolation than is necessarily provided. PLT 
Scheme uses the worst possible name for the concept (I won't even say what it 
is, it's so awful).

I'll think about alternatives and update the wiki.

 Different module contexts may have different module ID resolvers, so for 
 example it would be possible for host environments to provide a 
 SecureESContext that didn't allow identifiers to resolve to the filesystem 
 module or the dom module.
 
 This verbiage implies black-listing.  It would be good to be clear
 that the object formerly known as a module context should be
 explicitly populated with a white-list of module instances for SES.

Okay; all I was saying was you could create any kind of restricted context you 
want, with whatever policies you want, in a security-oriented setting. But when 
there's a proposal please do inspect the verbiage.

 ...it should
 never be necessary to edit code to give a module, or a tree of
 modules, an alternate name.  That kind of coupling has wasted many
 days of my time, integrating disparate projects with tight linkage.


A mechanism for specifying modules through a level of indirection, like Allen 
was urging, should make these kinds of problems solvable.

Dave



Re: simple modules

2010-02-03 Thread David Herman
 If not, could possibly non-shared state be the default behaviour. And shared 
 state modules - which share state within contexts - are somehow marked as 
 shared at module definition. e.g.
 module ModShared {
 use shared // or some mechanism to signify shared state
 ...

IMO, this would be draconian, insufficient on its own for security, and ad hoc.

Sometimes you want global state. Sometimes you want a module that memoizes 
everything. When you decide that you want control over the memoization 
(separate memoization tables, multiple instances of your memoizing data 
structures, etc.), you put the state in functions and/or objects and now it's 
not global anymore. That works well in ES already, and there's no need to 
restrict it.
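
In plain ES terms, that's just the usual move from a module-global memo table to
a factory (illustration only):

    function makeMemoized(f) {
        var cache = {};                    // per-instance state, not global
        return function (x) {
            if (!(x in cache)) cache[x] = f(x);
            return cache[x];
        };
    }

    var sq1 = makeMemoized(function (x) { return x * x; });
    var sq2 = makeMemoized(function (x) { return x * x; });   // independent caches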

That said, dialects (e.g. security-oriented dialects) would of course be free 
to provide such restrictions.

Dave



Re: simple modules

2010-02-03 Thread David Herman
 Well, a module system is a language construct that provides modules. I 
 think sandbox sort of suggests more isolation than is necessarily provided. 
 PLT Scheme uses the worst possible name for the concept (I won't even say 
 what it is, it's so awful).
 
 I'll think about alternatives and update the wiki.

How about module group?

Dave



Re: simple modules

2010-02-03 Thread David Herman
Yep. I thought ModuleInstanceGroup was a little over the top. :)

Dave

On Feb 3, 2010, at 3:30 PM, ihab.a...@gmail.com wrote:

 On Thu, Feb 4, 2010 at 10:11 AM, David Herman dher...@mozilla.com wrote:
 How about module group?
 
 But it's not a group of modules; it's a group of their instances (or
 whatever you want to call them -- the extension of the types in the
 modules).
 
 Ihab
 
 -- 
 Ihab A.B. Awad, Palo Alto, CA



Re: simple modules

2010-02-03 Thread David Herman
 Well, a module system is a language construct
 
 I not sure I agree with that characterization.  A Module is a language 
 construct as it as specific syntactic element of the language. It is a 
 specific thing that you have to learn about when you learn the language.

I was pretty imprecise, sorry. All I meant was that informally, people use the 
phrase module system to describe the whole of the design. As in, everything. 
Example: We've been discussing lately the design of a module system on 
es-discuss. Confusing to overload familiar terminology, esp. when it's in the 
same space.

 Before we worry too much about the naming of new concepts perhaps we should 
 try to identified the existing ambient concepts that need to be explicit and 
 then add any new only that are needed to support new features.  

Sure, we're painting the specification bikeshed. We will need to agree on terms 
at least provisionally before long. But it can wait till there's content on the 
wiki.

Dave



Re: simple modules

2010-02-03 Thread David Herman
I like it. I might prefer module loader for a bit more concreteness. But it 
has the benefit of concreteness and familiarity.

Dave

On Feb 3, 2010, at 4:03 PM, Mark Miller wrote:

 On Wed, Feb 3, 2010 at 3:11 PM, David Herman dher...@mozilla.com wrote:
  Well, a module system is a language construct that provides modules. I 
  think sandbox sort of suggests more isolation than is necessarily 
  provided. PLT Scheme uses the worst possible name for the concept (I won't 
  even say what it is, it's so awful).
 
  I'll think about alternatives and update the wiki.
 
 How about module group?
 
 Since, AFAICT, the concept being named here, in Java, corresponds to a 
 ClassLoader, I suggest loader. (E calls these loaders as well.)
 
 
  
 Dave
 
 
 
 
 -- 
 Text by me above is hereby placed in the public domain
 
Cheers,
--MarkM
 



Re: simple modules

2010-02-03 Thread David Herman
Sorry for the confusion-- we're discussing a name for something that is not 
part of the current strawman. One of the things Sam and I were trying to do was 
separate the concerns of modularity and isolation. So there's a 
not-fully-worked-out strawman waiting to be written for isolation. That's what 
we're talking about a name for.

The rough idea of the impending strawman is that there would be the ability to 
create a new ModuleLoader with which one could load modules completely isolated 
from the current setting. Thus every instance of a module would be tied to a 
particular loader. Per loader, there would never be more than one instance of a 
given module, but in an application with multiple loaders, there might be 
multiple distinct instances of the same module.

Sorry to have confused things by discussing a non-existent (yet) proposal. :/

Dave

On Feb 3, 2010, at 4:50 PM, Allen Wirfs-Brock wrote:

 It’s still not clear to me what we are trying to name?
  
 According to the proposal, a Module is a syntactic element that is part of an 
 Application and Application can consist of multiple Modules.  A complete 
 Application is presumably represented by an external container such as a 
 source file so we will presumably “load” Applications, not Modules.  (If 
 “load” is even the correct concept, I don’t see any reasons  you couldn’t 
 build a Harmony implementation where you feed a bunch of such Application 
 source containers to an Harmony compiler that generated a self-contained 
 binary exe.  Where does the “loading” take place in that scenario?)
  
 Are we trying to name the specification mechanism that is used to describe 
 the semantic association between ImportDeclarations and ExportDeclarations or 
 are we trying to name a hypothetical extensible mechanism of a Harmony 
 implementation that is used to identify and located the external containers 
 of Applications, or are we naming a specific semantic abstraction that we 
 intend to reify that permits some sort of dynamic intercessions in the 
 binding of ModuleSpecifiers, or something else?
  
 (sorry, if I’m being a  pain about this but it is pretty clear that everybody 
 is reading lots of implications into the names that are being thrown around)
  
 Allen
  
 From: es-discuss-boun...@mozilla.org [mailto:es-discuss-boun...@mozilla.org] 
 On Behalf Of David Herman
 Sent: Wednesday, February 03, 2010 4:07 PM
 To: Mark Miller
 Cc: es-discuss Steen
 Subject: Re: simple modules
  
 I like it. I might prefer module loader for a bit more concreteness. But it 
 has the benefit of concreteness and familiarity.
  
 Dave
  
 On Feb 3, 2010, at 4:03 PM, Mark Miller wrote:
 
 
 On Wed, Feb 3, 2010 at 3:11 PM, David Herman dher...@mozilla.com wrote:
  Well, a module system is a language construct that provides modules. I 
  think sandbox sort of suggests more isolation than is necessarily 
  provided. PLT Scheme uses the worst possible name for the concept (I won't 
  even say what it is, it's so awful).
 
  I'll think about alternatives and update the wiki.
 
 How about module group?
 
 Since, AFAICT, the concept being named here, in Java, corresponds to a 
 ClassLoader, I suggest loader. (E calls these loaders as well.)
 
 
  
 Dave
 
 
 
 
 -- 
 Text by me above is hereby placed in the public domain
 
Cheers,
--MarkM
 
  



Re: Traits library

2010-02-19 Thread David Herman
First, this traits proposal looks very nice -- thanks, Tom and Mark, for your 
work on this.

I want to add another point about the benefit of new syntax by calling out a 
piece of the code.google.com proposal under "Performance", where it says: "In 
order for the partial evaluation scheme to work, programmers should use the 
library with some restrictions." This is one of the places where you have the 
opportunity to call out more precisely in the language where users can expect 
better performance. By creating a declarative syntax, you invite implementors 
to make the common cases efficient, and provide a clearer set of rules for 
programmers to know when they are or aren't straying into "you can do that, but 
it'll cost you" territory.

Another side of the same coin is that you /dis/invite users from accidentally 
straying into the expensive territory. The convenient syntax provides an 
incentive for people to use the version you want to be the common case.

On Feb 19, 2010, at 7:29 AM, Kam Kasravi wrote:

 Picking up where Tom left off below...   I've wondered how you and the 
 ECMAScript body prefer to have 
 particular concepts presented. Given that the lag time between new syntax and 
 conformance across vendors 
 could be months, years or never, it seems that there is always a need to 
 provide a 'shim' 
 or implementation that emulates proposed syntax. I think many concepts 
 including Tom and Mark's
 steer away from new syntax due to the problems noted.  In general should 
 there be due diligence on both?
 I realize this may vary per strawperson but thought you may have a general 
 philosophy to share.

I know your question was to Brendan, but if I may add my $.02: we should be 
mindful but not terrified of changing the language. That's what we're here to 
do, after all!

Now, the specific concern over new syntax has been that it prevents a program 
from even running, which means you can't dynamically test for a feature before 
using it. Putting aside the fact that you could use `eval' (or dynamic module 
loading, given a module system) for such a purpose, I'm still not sure how 
often people actually want to build two different implementations of a program 
based on whether or not a particular feature exists. We shouldn't hold back the 
language just because it'll take time for new features to spread enough for 
commonplace use. That's all the more reason to move forward sooner than later.
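
For what it's worth, the dynamic test alluded to above is easy enough to sketch:
probe for new syntax inside `eval' so that a SyntaxError can't take down the
enclosing script (illustrative only).

    function supportsSyntax(src) {
        try { eval(src); return true; } catch (e) { return false; }
    }

    var hasGetterSyntax = supportsSyntax("({ get x() { return 1; } })");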

That said, I think something like the traits library could have a nice 
migration path, where a library compatible with ES5 could be even more 
attractive down the road with new syntax.

Dave



Re: Traits library

2010-02-19 Thread David Herman
 Could macros - or some kind of AOP-ish compile time processing - help:
 @addtrait mytrait myobj;
 @import acme.mymod; // expands to const acme = {mymod: {myfunc: ...;

Macros wouldn't really solve the "I can't parse this" problem. You could 
package up syntax extensions as macros and provide them via modules, but you'd 
still then need to dynamically load the modules (or use server-side user-agent 
sniffing to produce web pages using different modules). And modules are a 
significant enough language feature that they deserve to be specified directly, 
not via translation.

 An ES-Harmony goal is to: Provide syntactic conveniences ...defined by 
 desugaring into kernel semantics.
 How could this be achieved? Macro source expansion? What is truly new 'kernel 
 semantics' as opposed to syntax sugar? Interesting stuff.

This is a design/specification approach, not a technological thing. The idea is 
that some features can be described in the spec as sugar for something else 
that already exists. For example, IIRC, Java 5's introduction of for-in loops 
was described by desugaring.
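
A tiny illustration of the idea (mine, not from any spec): a hypothetical
`let (x = e) body' expression could be described as sugar for an
immediately-applied function, a construct the language already has.

    //   let (x = f()) x + 1   ~~>   (function (x) { return x + 1; })(f())

    function f() { return 41; }
    var result = (function (x) { return x + 1; })(f());   // 42

(The equivalence is only approximate -- `this', `arguments', `return', and
`var' all behave differently inside the wrapper, which is part of the caveat
below.)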

Specification-by-elaboration is not entirely unproblematic. It has its place, 
but it's not an end unto itself. For one, the idea that a desugared construct 
doesn't change the semantics is hard to make precise without falling into the 
Turing tarpit.[1] Also, unless the desugaring is dead simple, it starts to turn 
into specification by implementation.

Dave

[1] For example, why isn't a full-fledged compiler just desugaring? There's 
some pretty intense theoretical research on the topic [2], but some decent 
rules of thumb: if your translation needs information about its context or 
deeply analyzes or rearranges the contents of its subterms, you've fallen into 
the tarpit.

[2] http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.51.4656



Re: Single frame continuations proposal

2010-03-30 Thread David Herman
Kris,

Thanks for this proposal. I think it's got a lot going for it. I like the 
simplicity of the API, although I think it could be simplified even further.

 This is often called continuation passing style (CPS) and is
 quite verbose in JavaScript, difficult to debug, and can be very
 complicated to use in non-linear control flow.

Yes, I think there's a pretty clear need for better language support for 
programming with event-loop-concurrent API's.

 I propose a new call
 operator that can expressed as a function local translation that can
 greatly ease the construction of code that involves resuming flow at a
 future point in time.

I don't think it *should* be expressed as a translation in the spec, but let's 
keep specification approaches separate from the discussion at hand.

 Traditional continuations were proposed for ES4, and rejected for a
 couple of very important reasons.

Uncontroversial; no one's championing full continuations for ES.

 generators have been successfully implemented in SM and Rhino without
 any unnecessary burdens on the VM since their semantics are local to a
 single function and do not introduce interleaving hazarads. But,
 generators are just too specialized and the API is too difficult to use
 for other CPS style mechanisms.

Your approach seems nicely Harmony-ous: small, simple, orthogonal. It would 
definitely be a cost, however, if there turned out to be some incompatibility 
with JS 1.7 generators. I don't think there is one, but it would be good to 
know-- and conversely, it'd be awesome if generators (minus the syntax) could 
really be implemented as a library, e.g. if yield expr could be simulated 
by calling:

GeneratorsAPI-yield(expr)

 semantics:

I can't quite make sense of your pseudo-code, and I'm pretty sure it's not what 
you intended, but I couldn't follow it well enough to figure out what you meant.

   1. evaluate the LHS expression and then the argument expressions
 (just like function calls)
   2. Let *continuation* be defined as the capture of the current
 function activation's continuation
   3. call the LHS value with the arguments (like a normal function
 calls), and let *result* be the value returned from the call
   4. If the call throws an exception, throw the exception in the
 current stack.
   5. If *result* is an object and it has a function value in the
 property named continueWith, than proceed to step 7.
   6. Resume *continuation*, using the value of *result* for the
 continued evaluation of the current expression (if the continuation is
 inside an expression).

Is this supposed to fall through?

   7. Set an internal flag *waiting* to true for this call.
   8. Call the function that is stored in the continueWith property
 of the *result*. The function should be called with one argument, a
 function value defined as *resume*. Let *continueWithResult* be the
 value returned from the call.
   9. Exit the function returning *continueWithResult*
   10. If and when *resume* is called, store the first argument as
 *resumeNextFrame* and proceed to step 10.

This is an infinite loop...

   11. If internal flag *waiting* is false, throw an Error The stack
 can not be resumed again.
   12. Set the internal flag *waiting* to false.
   13. Call *resumeNextFrame* and let *result* be set to the value
 returned from the call. Proceed to step 5.

I think it's more complicated than necessary. NarrativeJS seems expressive 
enough that I bet you could express your semantics as a library on top of that. 
As you say, we're not in the business of blessing libraries! :)

I would think the following sufficient and, IIRC, the same as NarrativeJS 
(roughly?):

semantics of expr - arg-list:

1. Evaluate the LHS expression and then the argument expressions
2. Let *k* be the current function continuation (including code, stack 
frame, and exception handlers).
3. Exit the current activation.
4. Call the LHS value with the argument values followed by *k*.
5. Return the result of the call.

semantics of calling *k* with argument *v* (defaults to the undefined value):

1. Create a new function activation using the stack chain from the point of 
capture.
2. Reinstall the function activation's exception handlers from the point of 
capture (on top of the exception handlers of the rest of the stack that has 
called *k*).
3. Use *v* as the completion of the point of capture.
4. Continue executing from the point of capture.

This is quite a bit simpler. And I suspect it should be sufficient to implement 
your above semantics as well as JS 1.7 generators. I will see if I can sketch a 
translation.
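
To give a feel for what that rule means in practice, here is a rough
hand-desugaring in ordinary ES5 (fetchLater and demo are made-up names; the real
translation would of course be done by the implementation):

    function fetchLater(url, k) {              // stands in for an async primitive
        setTimeout(function () { k("data for " + url); }, 0);
    }

    // roughly what `function demo(u) { var d = fetchLater-(u); log(d); }'
    // would correspond to: the rest of the frame becomes a trailing argument
    function demo(u) {
        fetchLater(u, function (d) {           // d resumes the captured frame
            console.log(d);                    // the code that followed the capturing call
        });
    }

    demo("example");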

 * Don't mess with the current concurrency model - EcmaScript's current
 shared-nothing event-loop concurrency model is ideal for preventing
 concurrency hazards.

Agreed (other than the word ideal).

 * Don't introduce traditional continuations

Agreed.

 * Consequently, I don't want to suggest any new runtime semantics

You have. :) </pedant>


Re: Single frame continuations proposal

2010-03-31 Thread David Herman
Hi Kris,

I've been poring over this for a while, and it's still really, really 
confusing. Could I ask you to show how you would write the following example 
with your proposal?

function setup() {
    setFlashing(document.getElementById("notificationArea"));
    alert("done setup!");
}

var flashingElts = [];

function setFlashing(elt) {
    var toggle = true;
    window.setTimeout(function() {
        elt.style.background = toggle ? "red" : "white";
        toggle = !toggle;
    }, INTERVAL);
    flashingElts.push(elt);
}

And then would you mind showing a rough translation of your implementation to 
ES5 code (similar to the translation of your `foo' function)?

 6. Resume *continuation*, using the value of *result* for the
 continued evaluation of the current expression (if the
 continuation is inside an expression).
 
 Is this supposed to fall through?
 
 Yes

That can't be true, can it? If step 6 continues to step 7 then we get:

// step 5
if (!($result && typeof $result.continueWith === "function")) {
    // do step 6
}
// fall through
var $waiting = true;
var $continueWithResult = $result.continueWith($resume);

But we established in step 5 that there's no $result.continueWith function.

Dave



Re: Single frame continuations proposal

2010-04-05 Thread David Herman
[BTW, your quoted text got garbled.]

 In order to utilize leverage continuations with a function that
 execute multiple we would need to eliminate single-shot restriction.
 You could then create some library function that passed the
 continuation to the setInterval function to do something like:
var toggle = true;
intervalling-(INTERVAL);
elt.style.background =...

Your answers keep leaving out the definition of the function that you're 
calling via `-', which is supposed to be the part that creates the requisite 
object with `continueWith' etc. Examples don't do much good if they skip the 
part they were supposed to illustrate!

 But in this case, using the yielding call would be confusing and
 provide very little benefit. This is definitely not the type of
 scenario that it is designed for, normal anon functions work great
 here. This is geared for when a callback is used to execute the
 remainder of a sequence after the completion of an asynchronous that
 we see so often in JavaScript.

I know. I wanted an example that had *both* asynchronous *and* synchronous 
statements, to understand the control flow of your proposal better, but I also 
was trying to keep it short. Maybe you'd prefer this example, which more 
clearly separates the parts that would be expressed with `-':

function setup() {
    getThreeThings(URL1, URL2, URL3);
    alert("done setup!");
}

function getThreeThings(url1, url2, url3) {
    getAndFrob(url1);
    getAndMunge(url2);
    getAndGrok(url3);
}

function getAndFrob(url) {
    var xhr = new SimpleXHR(url);
    xhr.get(function(data) {
        window.setTimeout(function() {
            frob(data);
        }, TIMEOUT);
    });
}

function getAndMunge(url) {
    var xhr = new SimpleXHR(url);
    xhr.get(function(data) {
        window.setTimeout(function() {
            munge(data);
        }, TIMEOUT);
    });
}

function getAndGrok(url) {
    var xhr = new SimpleXHR(url);
    xhr.get(function(data) {
        window.setTimeout(function() {
            grok(data);
        }, TIMEOUT);
    });
}

Would you mind writing out how to express this program with your proposal? I'm 
just trying to understand. If you can help me with some examples I think I 
maybe able to help clarify your ideas.

 Do you prefer basing single-frame continuations on new non-latin
 character syntax instead of using the yield keyword (hadn't realized
 it was already reserved in ES5 strict-mode when I did the first proposal)?

I don't follow you. Non-latin?

Dave



Re: Single frame continuations using yield keyword with generators compatibility proposal

2010-04-06 Thread David Herman
 The key idea of this approach is that when a function is called that
 contains a yield operator, rather than following a hard-coded
 prescription to return an generator/iterator object, this triggers a
 call to the startCoroutine variable (from the current lexical scope)

Totally opposed. I don't even think this is a fruitful avenue to go down. Going 
through backflips to tie together a couple different features is really beyond 
the goals of Harmony (small, orthogonal).

And this approach is particularly bad. What you're talking about creates a 
context-dependency, which is a nasty refactoring hazard and violates local 
reasoning about code. Macrologists sometimes describe what you're talking about 
as a violation of referential transparency: well-behaved syntactic forms 
shouldn't change their meaning when you move them from one context to another. 
Specifically, if the definition of a syntactic form refers to a variable, that 
reference should be fixed no matter where you use the syntactic form. For 
exactly these reasons.
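
The hazard is easy to see even with today's plain closures (my example): a body
that refers to an ambient name changes meaning when it moves between scopes, and
a scope-sensitive `yield'/startCoroutine pairing would reintroduce exactly that
at the syntax level.

    var log = function (msg) { return "global: " + msg; };

    function outer() {
        var log = function (msg) { return "local: " + msg; };
        return function task() { return log("hi"); };   // bound to the inner `log'
    }

    outer()();   // "local: hi" -- cut and paste `task' to top level and the
                 // same text produces "global: hi" instead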

I see *why* you're trying to do this: you essentially are trying to introduce a 
kind of static operator overloading in order to generalize the meaning of 
`yield'. But this isn't solving a real need, and overloading of any sort isn't 
currently a priority.

Dave



let expressions strawman

2010-04-06 Thread David Herman
Dear all,

We've talked about various let-binding forms in the past, and the 
let-declaration form has pretty wide support. The other two forms proposed for 
ES4 were more controversial. I've just posted a small strawman proposal for let 
expressions that brings this down to just one additional form, and ends up 
being more useful to boot-- you can *both* execute statements *and* produce a 
result value. A quick example:

f(let (a = getArray()) {
  if (x.length === 0)
  throw "empty array";
  = a[0]
  })

Read all about it here:

http://wiki.ecmascript.org/doku.php?id=strawman:let_expressions

The proposal is short, but hopefully it gets the point across. I'd be more than 
happy to discuss it here. Comments most welcome!

Dave



Re: let expressions strawman

2010-04-06 Thread David Herman
f(let (a = getArray()) {
  if (x.length === 0)
  throw "empty array";
  = a[0]
  })

erm, a.length. Like it matters. :)

Dave



thinking about continuations

2010-04-07 Thread David Herman
Hey all,

I went ahead and wrote a series of blog posts this morning about the way I look 
at the design space for single-frame continuations for Harmony. I offer it as 
food for thought wrt how to approach the design. Obviously this is my personal 
angle, but it also covers a reasonably broad spectrum of prior work in the 
research world.

Delimited continuations? In ECMAScript?

http://calculist.blogspot.com/2010/04/delimited-continuations-in-ecmascript.html

The design space of continuations
http://calculist.blogspot.com/2010/04/design-space-of-continuations.html

Thinking about continuations
http://calculist.blogspot.com/2010/04/thinking-about-continuations.html

Harmony first-class activations
http://calculist.blogspot.com/2010/04/harmony-first-class-activations.html

Regards,
Dave



Re: Complete ECMAScript 5th edition implementation

2010-04-09 Thread David Herman
I think this discussion is getting off-topic. We're happy to have accepted your 
initial announcement, but this list is here to discuss the language 
standardization and design. Discussions relevant only to a specific 
implementation belong elsewhere.

Regards,
Dave

On Apr 8, 2010, at 11:39 PM, Benjamin Jan Alexander Rosseaux wrote:

 Dmitry A. Soshnikov schrieb:
 var x = 10;
 alert(delete x);  // true, should be false (for [[Configurable]])
 alert(x); // undefined
 
 But identifier resolution for x binding (which should be resolved in
 the global object since it isn't deleted) from the function, says not 
 undefined, but x is
 not defined. If do not use delete, it resolves to 10.
 
 (function () {
  alert(x);
 })();
 
  
 Hm, i've checked my DELETE operator + Object.Delete + EnvRec.Delete code 
 implementations against the ES5 final spec PDF again once more, but all 
 instruction steps between my implementation and  the PDF are exactly equal.  
 Can you provide me more information to this, what you does mean, what may be 
 wrong?
 
 Anyway, a new version is available, where I've fixed some other small bugs on 
 other code locations. 
 Benjamin



Re: thinking about continuations

2010-04-12 Thread David Herman
Yes, that's an excellent point-- something like:

function captured() {
    try {
        handler-();
        throw "throw";
    }
    finally {
        alert("finally!");
    }
}

function handler(k) {
    k();
}

This calls the handler with A still on the stack, and the handler runs A again; 
then throwing passes through the finally twice, resulting in two alerts.

Thanks for the careful reading. Now I'm not sure whether the function call 
notation makes sense. There's a pigeonhole problem with finally: if you expect 
a capturing form of function call to happen in the context of its 
handlers/finalizers, but you also expect the handlers/finalizers to be executed 
when the activation is suspended, then you can't guarantee that a finally block 
only executes once.

With generators, this doesn't come up because `yield' simply jumps immediately 
out of the frame, without invoking additional user code first.

Dave

On Apr 12, 2010, at 7:08 PM, Waldemar Horwat wrote:

 David Herman wrote:
Thinking about continuations
http://calculist.blogspot.com/2010/04/thinking-about-continuations.html
 
 Your attempted fix for evaluating finally blocks just moved the problem 
 elsewhere.  Since in the final expression you have A(... A(x)), you'll just 
 end up executing the same finally block twice under certain circumstances.
 
   Waldemar



Re: Single frame continuations proposal

2010-04-14 Thread David Herman
 function foo(...) { f-(...); g-(...); }
 [snip]
 Of course, and I certainly understand how continuations reify the
 frame(s), and how traditional continuations preserve the stack, but I
 don't follow how to preserve the stack when you have broken the
 continuations apart into separate autonomous single-frame
 continuations.

Single-frame continuations capture a single frame. That's all there is to it-- 
there's nothing else to preserve. You suspend a single frame, save it in a data 
structure, and when it comes time to resume it, you place it back on the top of 
your current stack.

If the frame is on top of some larger continuation, you *cannot* and *must not* 
save any of the rest of that continuation. That's the whole point of 
single-frame continuations. They aren't *supposed* to save the rest of the 
continuation. If you did, they wouldn't be single-frame continuations any more. 
They'd be full continuations.

If, for the sake of diagnostics, you would like to capture some diagnostic 
information for e.g. better stack trace information, your VM can certainly take 
a snapshot of the current full stack (basically, save a stack trace) at the 
point of suspending the continuation, and carry that around in case an 
exception gets thrown later on when the continuation is resumed.

But beyond that, I still don't know what you mean by preserve the stack.

 In the example, when an event triggers resuming the
 execution of g by calling the continuation activation function for g's
 continuation, how does the VM know to put the foo frame underneath it?

In the semantics I proposed, you capture the foo frame and execute g. If g 
registers an event handler that uses the captured frame, well, that's the foo 
frame, which you captured. The VM knows to reconstruct the foo frame because 
that's what it captured.

To frame an answer w.r.t your semantics, I'm afraid I'd need a better 
understanding.

 foo is supposed to resumed with the value for continuing execution,
 but that isn't available until g's continuation is done. What if there
 is user code between foo and g? For the VM to put this stack back in
 order seems like it would require some type of predictive analysis to
 determine that foo's continuation function is guaranteed to be
 executed and partially resume the continuation of foo, then fully
 resuming after g's completion. This seems really magical or am I
 missing something?

This doesn't make any sense to me.

At any rate, there's never any prediction. A continuation is just an internal 
data structure representing the remaining code left to execute in a program-- 
or with delimited continuations, a portion of a program. A first-class 
continuation is a reflection of that data structure as a first-class value in 
the user programming language. If a VM knows what it has left to do *now*, then 
it can always stop what it's doing, save the information about what it had left 
to do, and perform it *later*.

 Earlier in this thread, I demonstrated a simple single-frame
 semantics and shown how generators could pretty easily be layered
 on top of it...
 
 Sorry, I thought you were suggesting that it was difficult to
 understand my specification of the semantics and code translation and
 need it be more clearly written. Your example of the generator library
 provides neither.

My example of the generator library was not the proposed semantics. See

https://mail.mozilla.org/pipermail/es-discuss/2010-March/010866.html

I wrote:

 semantics of expr - arg-list:
 
 1. Evaluate the LHS expression and then the argument expressions
 2. Let *k* be the current function continuation (including code, stack 
 frame, and exception handlers).
 3. Exit the current activation.
 4. Call the LHS value with the argument values followed by *k*.
 5. Return the result of the call.
 
 semantics of calling *k* with argument *v* (defaults to the undefined value):
 
 1. Create a new function activation using the stack chain from the point 
 of capture.
 2. Reinstall the function activation's exception handlers from the point 
 of capture (on top of the exception handlers of the rest of the stack that 
 has called *k*).
 3. Use *v* as the completion of the point of capture.
 4. Continue executing from the point of capture.

I thought this was reasonably clear, or at least, at the time you sounded like 
you thought it was clear. Alternatively, I could sketch it out as a rough 
reduction rule, operational-semantics-style:

   S(A(f-(v1, ..., vn)))
  ==>  S(f(v1, ..., vn, function (x) A(x)))

As I suggested in my blog post, this has the problem that the call to `f' loses 
the catch and finally blocks of A. In my post I suggested the alternative:

   S(A(f-(v1, ..., vn)))
  ==>  S(A(let (result = f(v1, ..., vn, function (x) A(x))) { return result }))

But, as Waldemar pointed out, this is wrong, too. In fact, I'm about at the 
point where I don't believe a feature based on function calls will be 

Re: names [Was: Approach of new Object methods in ES5]

2010-04-16 Thread David Herman
 Name sounds like a stripped-down uninterned symbol (http://bit.ly/bY3Jkg) to 
 me.

Yup.

 It's an object with a magic attribute that says, unlike any other object you 
 might try to use it as a property name, it is not coerced into a string 
 first.  And it is compared by identity when looked up.  And it is invisible 
 to (all?) enumerations of property names.

Yup again. Basically it entails a slight generalization of the property lookup 
semantics; instead of ToString there would be a ToPropertyName meta-operation, 
which for existing ES objects would just delegate to ToString, but for the new 
class of things would be the identity.
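
Roughly (a sketch of my own, not spec text -- `Name' here is a local stand-in,
not the strawman's actual constructor):

    function Name() {}                         // identity-compared key objects
    function ToPropertyName(v) {
        return (v instanceof Name) ? v : String(v);   // names pass through; all else -> string
    }

    ToPropertyName("foo");        // "foo"  (today's ToString behavior)
    ToPropertyName(new Name());   // the very same object, compared by identity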

 I have to wonder if it would be a worthwhile generalization to be able to 
 confer these magical attributes on arbitrary objects?  This might allow more 
 experimentation with namespace ideas.

This is worth exploring. I'd worry about unintended consequences of an 
attribute that can be turned on/off at will. But even if it were fairly 
restricted -- e.g., you could turn it on but you couldn't turn it back off 
again -- it might be more powerful.

Tucker: if the property-nameness attribute weren't transferrable but names 
were objects with property tables, do you think that would be powerful enough? 
Or would you want the ability to define custom constructors, e.g.:

function MyCustomKindOfNamespace() {
Object.becomePropertyName(this);
// ...
}

Dave

PS Still, I have my doubts about using any such mechanisms for versioning. 
Incidentally, ROC was just talking about versioning and metadata on the web:

http://weblogs.mozillazine.org/roc/images/APIDesignForTheMasses.pdf

He wasn't talking about JS API design, but some of the lessons still apply.



Re: names [Was: Approach of new Object methods in ES5]

2010-04-17 Thread David Herman
 But I meant not only naming convention, but that by this naming
 convention this properties (symbols) will be hidden -- just like in
 Python, when _ and __ properties become unavailable outside...

You still haven't specified what "outside" means. What does get to see a hidden 
name and what doesn't?

 Then I have to see more examples.

1) Publishing private names ruins abstraction

Let's say you create a library and share it with some people. Then in version 2 
of your library, you introduce a new feature, which uses an internal private 
property called count. One of your clients figures out that you used this 
private name and writes a blog post saying "hey, if you want to figure out 
whether the library is greater than version 1, just look for a private member 
variable called 'count'!"

Now you have 100,000 clients depending on the fact that you have a private 
property called count. You decide for version 3 that you'd rather call it 
elementCount but you can't get rid of the private name because your customers 
have already relied on it.

2) Publishing private names creates namespace pollution

Library A adds a private count property to some shared object.

Library B also adds a private count property to the same object.

They are both developed separately.

Now Client C wants to use both Library A and Library B. Let's arbitrarily say 
it adds A first, then B. Library B fails with an error because it tries to use 
the private count property, which it doesn't have access to because Library A 
already claimed it.

 And nevertheless, encapsulation in
 its main purpose -- is increasing of abstraction. But you're talking about
 already *security* hiding.

Absolutely not. What I'm talking about is abstraction, *not* security. The 
purpose of abstraction is to support modularity, i.e., to eliminate 
dependencies between separate units of modularity so that their implementations 
are free to change. If you publish your private names, you create a point of 
dependency between modules and make it harder to change code. None of this is 
talking about security.

Of course, publishing private names is bad for security as well!

Dave



Re: names [Was: Approach of new Object methods in ES5]

2010-04-17 Thread David Herman
There are multiple levels of opt-in versioning:

(1) versioning of the language itself

(2) language support for versioning of libraries

I agree with what you're saying wrt (1), but wrt (2), feature detection is 
feasible, and I'd think more tractable than version detection.
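
The kind of API object detection I mean is the familiar pattern below -- detect
the object, fall back, never sniff a version number (Object.create is just a
handy ES5 example):

    var create = (typeof Object.create === "function")
        ? Object.create
        : function (proto) {            // minimal fallback; ignores descriptors
              function F() {}
              F.prototype = proto;
              return new F();
          };

    var child = create({ greet: function () { return "hi"; } });
    child.greet();   // "hi" with or without native ES5 support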

Dave

On Apr 17, 2010, at 9:38 AM, Brendan Eich wrote:

 On Apr 16, 2010, at 2:31 PM, David Herman wrote:
 
 PS Still, I have my doubts about using any such mechanisms for versioning. 
 Incidentally, ROC was just talking about versioning and metadata on the web:
 
http://weblogs.mozillazine.org/roc/images/APIDesignForTheMasses.pdf
 
 Rob's blog post: 
 http://weblogs.mozillazine.org/roc/archives/2010/04/api_design_for.html
 
 
 He wasn't talking about JS API design, but some of the lessons still apply.
 
 
 Old WHATWG co-conspirators like me obviously agree on the principles roc 
 presents, but they do not work so well in JS compared to HTML or even CSS. 
 Consider HTML markup:
 
 <video ...>
 <object ...></object>
 </video>
 
 A new HTML5 video tag with an object tag as fallback, to use a plugin to 
 present the video for pre-HTML5 browsers. There are text-y examples that work 
 too, even if the degradation is not as graceful as you might get with a 
 plugin (plugins can lack grace too :-/).
 
 CSS has standardized error correction from day one, although as noted in 
 comments on roc's blog it lacks feature detection. But graceful degradation 
 seems to work as well with CSS as with HTML, if not better.
 
 With JS, new syntax is going to make old browsers throw SyntaxErrors. There's 
 no SGML-ish container-tag/point-tag model on which to build fallback 
 handling. One could use big strings and eval, or XHR or generated scripts to 
 source different versions of the JS content -- but who wants to write 
 multiple versions of JS content in the first place.
 
 The "find the closing brace" error correction idea founders on the need to 
 fully lex, which is (a) costly and (b) future-hostile. Allowing new syntax in 
 the main grammar only, not in the lexical grammar, seems too restrictive even 
 if we never extend the lexical grammar -- we might fix important bugs or 
 adjust the spec to match de-facto lexical standards, as we did for IE's 
 tolerance of the /[/]/ regexp literal.
 
 So API object detection with fallback written in JS random logic works (for 
 some very practical if not theoretically pretty definitions of works) for 
 the non-syntactic extensions coming in Harmony, assuming we can dodge the 
 name collision bullets. But for new Harmony syntax, some kind of opt-in 
 versioning seems required.
 
 We survived this in the old days moving from JS1.0 to JS1.2 and then ES3. One 
 could argue that the web was smaller then (it was still damn big), or that 
 Microsoft's monopolizing helped consolidate around ES3 more quickly (it did 
 -- IE started ignoring version suffixes on script language= as I noted 
 recently).
 
 Roc's point about fast feedback from prototype implementations to draft 
 standards is the principle to uphold here, not no versioning.
 
 Obviously we could avoid new syntax in order to avoid opt-in versioning, but 
 this is a bad trade-off. JS is not done evolving, syntax is user interface, 
 JS needs new syntax to fix usability bugs. I'm a broken record on this point.
 
 Secondarily, new syntax can help implementations too, both for correctness 
 and optimizations.
 
 So I should stop being a broken record here, and let others talk about opt-in 
 versioning. It seems inevitable. We have it already in a backward-compatible 
 but semantically meaningful (including runtime semantic changes!) sense in 
 ES5 with "use strict".
 
 Opt-in versioning is not a free ride, but it is going to a destination we need 
 to reach: new syntax where appropriate and needed for usability and 
 productivity wins.
 
 /be



revisiting shift

2010-04-27 Thread David Herman
One of the semantics I suggested and then dismissed for single-frame 
continuations was based directly on the operators "shift" and "reset" from the 
PL research literature.[1] To my eye, when we dressed them up to look like a 
function call (with ->), they suggested that we were calling a function within 
the current exception handlers when in fact we weren't.

But if we stop being creative with syntax and just use a more traditional 
prefix operator:

UnaryExpression ::= ... | shift UnaryExpression

then let's revisit the semantics. Evaluating one of these expressions in the 
stack S(A) -- ie, a base stack S with current function activation A -- would do 
the following:

1. Evaluate the argument expression to a value v.
2. Suspend and capture A as a continuation value k.
3. Remove A from the stack, leaving S as the stack.
4. Call v with k as its single argument in the stack S.

We have the same representation choices for k; it could simply be a function, 
or it could be an object with a few methods, most likely send, throw, and 
close. (I still think we couldn't accommodate anything more powerful than 
one-shot continuations, mostly because of finally.) Re-entering the 
activation simply pushes it on top of the stack, including its suspended 
exception handlers (just the ones installed in the function body). It also 
closes over its scope chain, of course.

My original quibble with this semantics and the old notation had been that when 
you said:

try { f->(x,y,z) } catch (e) { ... }

it looked like any exceptions thrown by f would be caught by the try-block, but 
that wouldn't be the case. But I think this was more a syntactic issue. Just 
like other powerful control operators like fork and call/cc, a continuation 
operator overrides the ordinary flow of control. As long as the notation 
doesn't hide this fact, it's something programmers would have to be aware of-- 
just like they have to with yield, return, break, and continue.

Example:

function f() {
    try {
        for (let i = 0; i < K; i++) {
            farble(i);
            // suspend and return the activation
            let received = shift (function(k) { return k });
            print(received); // this will be 42
        }
    }
    catch (e) { ... }
}

let k = f(); // starts running and returns the suspended activation
...
k.send(42); // resume suspended activation

It wouldn't be hard to show a proof-of-concept implementation of JS 1.7 
generators with this construct, as well as a Lua-style coroutine API (one frame 
only, though, of course).
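To make that concrete, here's a rough sketch of a Lua-style coroutine driver built directly on shift as specified above (the helper name 'yielding' and the { value, kont } shape are just made up for the example):

// builds the handler function passed to shift
function yielding(value) {
    return function(k) { return { value: value, kont: k }; };
}

function counter(limit) {
    for (let i = 0; i < limit; i++) {
        // pop this activation; the caller receives { value: i, kont: ... }
        let received = shift yielding(i);
        print(received); // whatever the caller passed to send
    }
    return "done";
}

// driving it from the caller:
let r = counter(3);            // runs until the first shift
while (r && r.kont) {
    print(r.value);            // 0, 1, 2
    r = r.kont.send("hi");     // resume; get the next suspension (or "done")
}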

Notice that, unlike the CPS people normally have to write in, shift 
essentially flips around the control flow so that the callback is what's 
evaluated immediately, whereas the remainder of the function is delayed for 
later. Because shift expects a function argument, programmers/library writers 
could come up with conveniences for common idioms.

Also notice that, unlike JS 1.7 yield, a function that uses shift is not 
special in that it doesn't immediately suspend its body when you first call it. 
But because it's a syntactic operator, it's more manageable for implementors of 
high-performance ES engines, since they can trivially detect whether a function 
may need to suspend its activation.

One last thought: a variation you sometimes see is something like:

Expression ::= ... | shift ( Identifier ) Expression

(with the precedence worked out-- yadda yadda), which avoids the function 
indirection and simply binds the continuation to the identifier and evaluates 
the argument expression. This would be slightly more wonky in JS, because of 
the lack of TCP -- it's unclear what the arguments array should be bound 
to, and if we had something like let expressions with statement bodies, 
return would be weird.

It's also likely that retaining the function indirection makes it more 
convenient to use handlers, e.g.:

function sleep(ms) {
    return function(k) {
        window.setTimeout(function() { k.send() }, ms)
    }
}

function iamtired() {
    ...
    shift sleep(100);
    ...
}

Dave

[1] http://en.wikipedia.org/wiki/Delimited_continuation



Re: revisiting shift

2010-04-27 Thread David Herman
 Also notice that, unlike JS 1.7 yield, a function that uses shift is not 
 special in that it doesn't immediately suspend its body when you first call 
 it. But because it's a syntactic operator, it's more manageable for 
 implementors of high-performance ES engines, since they can trivially detect 
 whether a function may need to suspend its activation.

Quick clarification-- more manageable than if it were a library function 
instead of a syntax, not more manageable than yield. (The yield form is 
also an operator, and is manageable for implementors for exactly the same 
reason.)

Dave



Re: revisiting shift

2010-04-28 Thread David Herman
 What happens if you don't supply a function but another type, or none?

The simplest thing is to specify it as a runtime error if the argument to shift 
is not callable. You're right that there's an overhead to constructing a new 
function. But it gives you flexibility that's otherwise a pain for the 
programmer. More below.

 Would 42 still be returned by shift? Or is it actually the returned value 
 in k, that gets a send method augmented to it?

I don't understand this question-- do you mean whatever value the handler 
function (in the example, function(k) { return k }) returns? Then no, there's 
no augmentation or mutation here. The continuation is represented as an object 
with three methods:

- send(v): pushes the suspended activation back onto the stack and uses v as 
the result of the shift expression

- throw(v): pushes the suspended activation and throws v from the shift 
expression

- close(): pushes the suspended activation and performs a return (running any 
relevant finally blocks first)

(This is all just what JS 1.7 generators do.)
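To illustrate, here's how a caller might use each of the three methods, assuming some function f suspended at a shift as in my earlier example (only one of these resumptions would actually be performed on a given one-shot continuation):

let k = f();        // f is suspended at a shift; k is the reified activation
k.send(42);         // resume: 42 becomes the value of the shift expression
// or: k.throw(new Error("stop"));  -- resume by throwing from the shift expression
// or: k.close();                   -- resume by returning (running finally blocks)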

A simpler representation for captured continuations is just a function. But as 
Kris pointed out in an earlier thread, this is inconvenient for the throw and 
close cases.

 It'd be cleaner if it was just shift()

You might think so, since the semantics seems simpler-- but it would lead to 
uglier programs. You're not affording the writer of the function doing the 
shift any ability to specify what to do after the shift, and you're not giving 
them the ability to communicate any data to the caller. This requires them to 
coordinate with clients to save any additional action in some side-channel, 
e.g.:

// library
function gen(thenWhat) {
...
thenWhat.action = function() { ... /* do more stuff */ ... };
let received = shift();
...
}

// client
var k = gen({ action: function() { } });

 But maybe I'm knifing an entire API here I don't know about :) Otherwise the 
 send method seems redundant.

I'm not sure what the send method has to do with it-- it sounds like I may not 
have explained clearly enough. The semantics of shift is to capture and pop the 
current activation, reify it as an object with the three methods I describe 
above (send, throw, close) and call the handler function with this reified 
activation object as its argument. It's then up to the program to decide 
when/whether to continue the captured function by calling its methods.

Note that this means that when you use the shift operator, the handler function 
is executed immediately, whereas the rest of the captured function is suspended 
and not continued until some later time. This is the opposite of patterns like 
event callbacks and CPS, where the code in the callback is called at some later 
time, but the rest of the current function is continued immediately.
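A tiny sketch of that contrast (setTimeout stands in for any callback-style API; savedK is just a place to stash the continuation):

var savedK;

// callback style: the callback body runs LATER; the rest of g runs NOW
function g() {
    window.setTimeout(function() { print("later"); }, 100);
    print("now");
}

// shift style: the handler runs NOW; the rest of h runs LATER, when resumed
function h() {
    shift (function(k) { savedK = k; print("now"); });
    print("later -- runs only once savedK.send() is called");
}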

Dave



Re: revisiting shift

2010-04-28 Thread David Herman
 Hm. Maybe you meant to return the function to allow access to the local 
 variable
 k through a closure? And not a fingerprint
 mixed shift(function)
 as I read it at first?

I don't know what you're saying, but I have already posted the semantics in 
this thread. I *think* it should be pretty clear. (If others are confused about 
it, please do weigh in.)

 The shift could also expose the iterated value in the continuation object as a
 read-only property (or getter). The shift itself would return whatever you 
 give
 .send as parameters. But maybe this was what you meant in the first place..

If I understand this, it still doesn't give the function that does the shift 
very much room to do anything interesting after it pops the function activation.

 Whatever needs to be done before or after the shift should obviously be done
 from within the loop. I see no problems with that myself, but maybe I'm 
 missing
 something?

I think you are. When the shift happens, it pops the activation frame, which 
means that the rest of the function after the shift stops running. This is what 
a continuation operator is all about-- suspending control and saving it in a 
value that can be called later. The purpose of calling the handler function is 
to do something *immediately* after popping the activation frame, not later 
when the activation is resumed.

 On a sidenote; continuations could be implemented much like timers are now. I
 mean, for timers to work, you would already have to have a way to preserve 
 state
 and stack because you have to access it when the timer fires. This proposal
 could just extend the API using the same mechanics as the timers (I read
 something about this proposal being a possible problem for vendors). But 
 rather
 than being executed when some time has elapsed, the callback would be fired on
 demand. After implementing setTimeout and setInterval, this should be quite
 easy...

None of this is true. Closures are not the same thing as continuations.

That said, if this is a hardship for implementers, I'd be interested in having 
them weigh in.

 I also read syntax using try{}catch(){} as some kind of continue mechanism 
 and
 just wanted to say I find it really ugly and hackish.

I'm genuinely baffled by this comment.

Dave



modules proposal

2010-05-13 Thread David Herman
Hello!

I've updated the strawman proposals for static modules and dynamic module 
loaders and would love to get feedback from the es-discuss community.

* Static modules:

http://wiki.ecmascript.org/doku.php?id=strawman:simple_modules
http://wiki.ecmascript.org/doku.php?id=strawman:simple_modules_examples

Briefly, the proposal provides a static module form:

module Widgets { ... }

The declaration binds the module name lexically. Modules declared at the same 
scope can be mutually recursive. They can also be loaded from external URL's:

module Even = "http://example.com/even.js";
module Odd = "http://example.com/odd.js";

The body of a module contains nothing in scope except for outer module 
bindings. Modules are truly lexically scoped, with no global object pollution. 
Modules can easily import bindings from other modules, either by dereferencing 
the modules that are in scope directly:

module Client { ... Widgets.messageBox("hello!"); ... }

or by importing them explicitly:

module Client {
    import Widgets.messageBox;
    ...
    messageBox("hello!");
    ...
}

Although modules are static entities, they can be dynamically reflected as 
objects.

There's plenty more details on the wiki page.

* Dynamic module loaders:

http://wiki.ecmascript.org/doku.php?id=strawman:module_loaders

These provide an API for dynamically loading and evaluating modules, and 
creating separate isolated contexts in which modules can be evaluated. You can 
create a separate module loader that shares no module instances with the 
current module loader; this could be used e.g. by an IDE (such as Bespin) that 
wants to run client code without letting the client code affect the running 
environment of the IDE itself. You can also share some module instances, to 
have finer-grained control over the sharing between module loaders.
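Purely as a sketch of the intent -- the constructor and hook names below are made up for illustration, not the actual API (see the wiki page for that) -- an IDE like the one described might do something along these lines:

// sketch only: ModuleLoader, the fetch hook, and loader.eval are illustrative names
var sandbox = new ModuleLoader(null, {
    fetch: function(mrl) {
        // the IDE decides how module source is retrieved
        return ideFileSystem.read(mrl);   // ideFileSystem is hypothetical
    }
});

// evaluate the user's program inside the sandbox; any modules it declares or
// loads land in the sandbox's registry, not in the IDE's own environment
sandbox.eval(userProgramSource);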

There are some API notes on the wiki page. I expect it'll be revised and 
refined over time. Feedback welcome!

Thanks,
Dave



Re: modules proposal

2010-05-15 Thread David Herman
 I wonder if you considered having an export list, rather than tagging the 
 individual exports?  I think that makes it easier to see/document the public 
 API of a module.  You could at the same time allow renaming on export.

Yes, this is a good point. We chose inline-export for convenience, but I don't 
see any reason not to allow both.

 FWIW, the rename on import looked backwards to me at first glance, but I 
 think I can learn.

Yeah, I'm not thrilled about how hard it is to remember which way it goes. I 
meant for it to be consistent with the syntax of destructuring:

let { draw: d } = obj;
import M.{ draw: d };

 Do we really need to say `.*` for all?  We couldn't just say `import Math`?

The proposal originally left out the '.*' and others didn't like it. But more 
importantly, this has subtler implications for nested modules, e.g. what the 
meaning of the following example is:

module Outer { module Inner { ... } }
import Outer.Inner;

The way we've written the proposal, any time you import a path without the 
'.{...}' syntax, you're importing a single binding. So import Outer.Inner is 
the same as

import Outer.{Inner};

That is, it binds Inner locally to the Outer.Inner module instance.

Alternatively, we could a) disallow leaving off the '.{...}' for importing a 
single binding and 'import x1.---.xn' would only be allowed to specify a 
module-binding and would import all its exports, or b) allow leaving off the 
'.{...}' but specify that it imports just the single binding when it's a 
value-binding and imports-all when the path indicates a module-binding. I am a 
little concerned that the former is too restrictive and the latter too subtle. 
IMO, the extra '.*' is only a two-character hardship and EIBTI.
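Spelled out with the example above, the way I read the current draft (sketch):

module Outer { module Inner { export var x = 1; } }

import Outer.Inner;      // binds the single name Inner locally (as a value binding)
import Outer.Inner.*;    // imports all of Inner's exports, i.e. binds x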

Dave



Re: modules proposal

2010-05-15 Thread David Herman
 Thought looking at the syntax section it seems that `import Math;` isn't 
 valid and instead you do it as `module Math = Math`. Why not use import here 
 as well? My first instinct here is `module` is for defining a module and 
 `import` is importing it - instead `module` serves a dual function.

That's not quite right; 'module' is for creating a module *binding*. I.e., all 
the forms start with 'module m' and create a lexical binding of 'm' to a 
statically known module.

 Is there any reasoning for this behaviour that I might have missed?

The idea is that 'module' creates module bindings and 'import' creates value 
bindings. So you can create a statically bound module binding via:

module M = MyLib.Math; // M is now a module binding that aliases MyLib.Math
import M.*; // import all bindings from MyLib.Math

whereas 'import' always creates a value binding:

import MyLib.Math; // Math is now a value binding
import Math.*; // error: Math is not a module binding

Since we are allowing dynamic reflection of modules, a goal of this proposal is 
to keep the distinction between static module bindings and dynamic module 
values as firm as possible, while still making it as convenient as possible to 
reflect modules as values.

Dave



Re: modules proposal

2010-05-16 Thread David Herman
I think Charles means examples like:

module Foo {
export var x = 1;
}

module Foo {
export var y = 2;
}

The answer is that this is a static error. Modules are not open and extensible. 
You can't declare the same module name twice in the same scope. (This should be 
made clearer in the subsection Module binding and resolution, thanks for 
asking about it.)

You can redeclare a new module with the same name at a different level of scope:

module Foo {
export var x = 1;
}

module Bar {
module Foo {
export var y = 2;
}
import Foo.x; // error: no such binding
}

Since modules are bound in the scope chain like everything else, the semantics 
is just the same as all other lexical scope: the innermost binding of Foo wins.

Dave

On May 16, 2010, at 11:25 AM, Brendan Eich wrote:

 On May 16, 2010, at 11:15 AM, Charles Jolley wrote:
 
 I am unclear from this proposal.  What would happen if I declared the same 
 module twice?  Would it reopen the module and add the extra declarations?
 
 The simple modules proposal makes it an error to export twice. This is a bit 
 implicit in
 
 http://wiki.ecmascript.org/doku.php?id=strawman:simple_modules#export_declarations
 
 That you get an error on duplicate export is a good point to make explicitly.
 
 Not sure if you meant Dmitry's export *; idea. That would export 
 everything, but again if something was already exported, then there was 
 already a const binding in the module's exports and you'd get an error.
 
 /be
 
 
 
 -Charles
 
 
 On May 16, 2010, at 11:11 AM, Brendan Eich bren...@mozilla.com wrote:
 
 On May 16, 2010, at 9:32 AM, Dmitry A. Soshnikov wrote:
 
 On 15.05.2010 19:22, Brendan Eich wrote:
 On May 15, 2010, at 7:53 AM, David Herman wrote:
 
 I wonder if you considered having an export list, rather than tagging 
 the individual exports?  I think that makes it easier to see/document 
 the public API of a module.  You could at the same time allow renaming 
 on export.
 
 Yes, this is a good point. We chose inline-export for convenience, but I 
 don't see any reason not to allow both.
 
 +1 on an export form that takes a list of already-declared names.
 
 
 Besides, the variation with
 
 export all; // or export all;
 
 can be considered. It can be useful for debug.
 
 Debugging is good.
 
 This is a minor point, but rather than all, we'd use * instead, to mirror 
 import M.* or M.{*} or import * from M; whatever it ends up being.
 
 Or, simply -- to omit export statement. Thus, if there will be no local 
 export statement for some function, it means that all functions/properties 
 are exported. Although, it breaks the first design where by default 
 methods are private for a module.
 
 Sorry, the idea of implicit everything-exported is a footgun. Just say no.
 
 
 Excluding:
 
 export all except [intrinsic, builder]; // if we don't want to write 20 of 
 22 methods to be exported
 
 This is overdesign. By far the most common case is explicit export of a 
 select list of API functions and consts.
 
 /be
 


Re: modules proposal

2010-05-16 Thread David Herman
 This is a minor point, but rather than all, we'd use * instead, to mirror 
 import M.* or M.{*} or import * from M; whatever it ends up being.

Agreed.

As for the meaning: from my experience with PLT Scheme it works well for 
export *; to export everything that's defined in the module, but not 
re-export any imports. For that you'd have to explicitly re-export them. I 
think either semantics (import all defined here vs. all in scope here) 
works fine, but the common case of importing is for private use, and 
re-exporting is the less common case.
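A small sketch of that reading (treating export * as the form under discussion, not something already in the proposal):

module Other { export function helper() { return 42; } }

module M {
    import Other.helper;     // usable inside M...
    function f() { return helper(); }
    var x = 1;
    export *;                // ...but this exports only what M defines: f and x
    // to re-export helper as well, you'd have to say so explicitly: export helper;
}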

On a related note, I forgot to stipulate in the proposal what happens with 
diamond imports. For example:

module Shared {
export var x = ...
}

module Lib1 {
import Shared.x;
export x;
}

module Lib2 {
import Shared.x;
export x;
}

module Client {
import Lib1.*;
import Lib2.*;
... x ... // error or no error?
}

Because module bindings are static, it's straightforward to determine that both 
Lib1.x and Lib2.x point to the same binding (Shared.x). So I claim that we 
should loosen the error condition to the following:

It is a static error to import the same name from two different modules *unless 
both bindings are the same*.

Dave



Re: modules proposal

2010-05-16 Thread David Herman
 Along these lines of imports, a regex would allow partial imports.

I'm a little hesitant on this idea; on the one hand it's nice that we already 
have literal support for regexps, so it's not a huge conceptual or syntactic 
overhead. OTOH, like Brendan suggested for "export all except", it smacks of 
over-design.

Dave



Re: modules proposal

2010-05-16 Thread David Herman
 See, even after I knew the rules it was too confusing. Can we go back
 to using as?
 
 import draw as drawGun from Cowboy;

This is an incomplete suggestion. What do you want the full syntax to be for 
multiple imports? Force the user to write a separate one on each line? Bracket 
them? Comma-separate without bracketing?

Whatever syntax you pick, "as" starts upping the syntactic overhead. The 
destructuring notation is pleasantly terse and consistent with destructuring.

[FWIW, here's how I would conceptually distinguish the two sides of the : 
token. The LHS is a fixed label. When you write an object literal, you're 
creating a property with that fixed label. When you are destructuring an 
object, you are selecting out that fixed label. When you are importing, you are 
requesting the import with that fixed label. The RHS is the varying part.]
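For comparison, multiple imports with renaming stay compact in the destructuring-style syntax (the extra member names here are invented):

import Cowboy.{ draw: drawGun, reload, holster: putAway };

versus something like the following under an 'as' scheme, with whatever bracketing and separators we'd have to pick:

import draw as drawGun, reload, holster as putAway from Cowboy;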

Dave



Re: modules proposal

2010-05-16 Thread David Herman
 Stupid question - is the following form of (Python-ish) import
 possible (from the grammar it doesn't look like it):
 
 [snip]
 
 Or is it the case that with a module {...} declaration the module is
 immediately accessible in the scope it was declared in - and no import
 required if module.function syntax is used:
 
 <script type="harmony">
 // Math module declared above - no import needed
 
 alert("2π = " + Math.sum(pi, pi));
 </script>

The latter, yes. Since modules are bound in the scope chain, they're already 
available for use as values.

This makes it very convenient to reflect them as first-class values:

var x = Math;
alert(x["sum"]); // function(x,y) { ... }
alert(x["thisIsNotDefined"]); // undefined

But since module instance objects are immutable, it's also easy for 
implementations to make static uses, such as your example above, as efficient 
as if they had been explicitly imported.

Dave



Re: modules proposal

2010-05-16 Thread David Herman
Yeah, sorry, that example went through several iterations as the modules 
proposal evolved, and now it's kind of muddled. I'll update it tomorrow.

Since the reference to the module is used in an expression, it could 
technically be thought of as reflecting the module as an object and pulling out 
its property. But the system is designed so that such uses can be guaranteed to 
have the same performance as if sum and pi were explicitly imported. I'll 
clarify that example and add another example that's truly dynamic, such as:

function inspect(m) {
    for (let key in m) {
        if (m.hasOwnProperty(key))
            alert("found export: " + key);
    }
}
inspect(Math);

Dave

On May 16, 2010, at 10:31 PM, Kevin Curtis wrote:

 On Mon, May 17, 2010 at 6:02 AM, David Herman dher...@mozilla.com wrote:
 This was the point I was explaining here:
 
https://mail.mozilla.org/pipermail/es-discuss/2010-May/011162.html
 
 Modules are static entities, whose structure is known at compile-time. The 
 module form creates these static bindings. But they can be reflected as 
 first-class values. (Similar to e.g. classes in Java and proposed ES4.)
 
 The import form, like var and const and function creates a binding 
 to a first-class value, whereas the module form creates a static module 
 binding.
 
 So when you write:
 
var x = Math;
 
 you create a variable binding to the reflected first-class module value. 
 Whereas when you write:
 
module x = Math;
 
 you are creating a static module binding x which is bound to the static 
 module bound to Math.
 
 
 
 OK. On the simple_modules_examples wiki page:
 
 Reflecting module instances as first-class objects
 
 <script type="harmony">
 // a static module reference
 module M = Math;
 
 // reify M as an immutable module instance object
 alert("2π = " + M.sum(M.pi, M.pi));
 </script>
 
 Q: Why does referencing the module M's functions/constants - M.sum and
 M.pi - 'reify' M as an object.(Is this the same process as generating
 a 'reflected first-class module value'?)
 
 I get the difference between lexical/source vs first class - but this
 comment 'reify' genuinely confuses me. Why isn't the call to M.sum()
 static - known at compile time.
 
 --



Re: modules proposal

2010-05-17 Thread David Herman
 No worries - the examples page is very useful.
 
 In your original email:
 module Even = "http://example.com/even.js";
 module Odd = "http://example.com/odd.js";
 
 Is the load keyword missing:
 module Even = load "http://example.com/even.js";

Yes, sorry for another inconsistency there. FWIW, I'm not married to that 
particular syntax. I put the load keyword into the proposal just to emphasize 
that these strings correspond to a compile-time action (loading the bits). For 
example, consider:

module M = load "http://example.com/foo.js";
module N = load "http://example.com/foo.js";

These two modules are loaded from the exact same MRL, but the module system 
treats them as two completely separate, independent modules. (Even custom 
module loaders shouldn't be able to change this fact, since all they can do is 
deliver the bits of a module resource.)

The reason for this is that on the web, there's just no way to know that two 
URL's point to the same resource-- fetching the bit-for-bit-same URL even 
microseconds apart can result in completely different data, since web servers 
are free to deliver whatever they want. So the module loading semantics is 
resolutely non-clever about interpreting MRL's.

Dave



Re: modules proposal

2010-05-17 Thread David Herman
 1. Is it possible to import parts of a module to avoid potentially large 
 network payloads?

Not parts of a module, no. (The semantics of this would be incredibly hairy and 
hard to define.) But you can dynamically load modules, so you can write code 
that decides dynamically when to load modules.

 2. Is there syntax that would allow the server to group or concatenate 
 delivery of multiple modules within one request?

Since modules can be nested, you can certainly put multiple modules in a file; 
grouping them in a single module is nothing more than namespace management. So 
you can write:

// file1.js
module A { ... }
module B { ... }
...
module Z { ... }

// client.html
<script type="harmony">
module MyLib = load "file1.js";
</script>

or with HTML support, something like:

// client.html
<script type="harmony" module="MyLib" src="file1.js"></script>

Dave



Re: modules proposal

2010-05-17 Thread David Herman
 Coalescing requests (for some value of request) could be pushed down a layer, 
 not specified as part of the ECMA-262 language. This serves the decoupling 
 requirement. Is it enough? A while ago Allen wrote about separating 
 configuration management:
 
 https://mail.mozilla.org/pipermail/es-discuss/2010-January/010686.html
 
 Part of this was about version selection, but part of it was about assembly 
 structuring. It seems worth working through this, both with a real 
 implementation of simple modules, and with some thought experiments.

Yes, certainly. To some extent the module declaration syntax provides you 
enough to do this already:

<!-- my configuration table -->
<script type="harmony">
module M1 = "m1.js";
module M2 = "m2.js";
...
</script>

But it might be nice to crystallize this in a restricted special form so 
browsers can reliably prefetch:

<script type="harmony">
module table {
    M1: "m1.js",
    M2: "m2.js",
    // ...
}
</script>

Dave



Re: modules proposal

2010-05-17 Thread David Herman
 Oh, but you probably meant that the module table form, besides being sugar, 
 is written in a restricted language that cannot have effects other than to 
 create module bindings -- cannot do document.write or 
 document.createElementNS(script) or whatever. In that case we'd want 
 type=harmony-module-table or some such, and then such a script indeed would 
 allow layout to proceed immediately, and not block rendering.

Yes, sorry for the mixup. I should've written something like:

<script type="harmony-module-configuration-table">
{ M1: "m1.js",
  M2: "m2.js",
  // ...
}
</script>

 Thinking about it more, simple modules let authors bundle things in .js 
 files, and src them with scripts. That's almost enough. Anything more, we do 
 not want to standardize prematurely.

Agreed.

 Simple modules are really about lexical scope all the way up, and guaranteed 
 errors (early errors, even), and static code partitioning with information 
 hiding, and of course the lexical-only module-binding namespace management.

Well put.

Dave



Re: modules proposal

2010-05-19 Thread David Herman
 That's surprising. Within a moduleloader I would have thought that
 same url meant the same static module. Across moduleloaders maybe not.

The problem is defining "same URL". One option is that "same" means "identical 
string", but then when http://example.com/foo.html says:

<script>
module A = "http://example.com/lib.js";
module B = "./lib.js";
</script>

the programmer would be tempted to think those are the same URL. And yet a 
server has access to its request URL and can deliver entirely different bits 
depending on exactly what was requested.

So you could propose some sort of canonicalization that happens on the client 
side, and say same means same canonicalized string. But then the semantics 
depends on some non-trivial algorithm that happens away from view of the 
programmer.

 For the following scenario:
 <script>
 module ModA = "http://acme.com/moda.js";
 module ModB = "http://acme.com/modb.js";
 ... source
 </script>
 
 Both ModA and ModB use a utility module - that is the moda.js and
 modb.js files both contain:
 module ModUtils = "http://widgets.com/modutils.js";

If they want to share a utility module, you pull it out into a place that's in 
scope for both ModA and ModB:

<script>
module ModUtils = load "http://acme.com/modutils.js";
module ModA = load "http://acme.com/moda.js";
module ModB = load "http://acme.com/modb.js";
...
</script>

 But also, simple modules seem like 'traditional' modules. That is, if
 a top level variable is declared in a C source file, that variable
 occurs once in the resulting compiled image/program. Thus the source
 files (modules) author can reason about the (shared state) of the
 variable.

I'm surprised to hear you cite C as a precedent for modules, since C has no 
module system. All it has is #include, which is much harder to work with. The 
load semantics does still have the hazard of re-loading the same source (just 
as <script> does, BTW), but the scope is more tightly controlled; a loaded 
module is only sensitive to the modules in scope (no var-, let-, const- or 
function-bindings).
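A tiny sketch of that sensitivity rule (utils.js and helper.js are made-up files):

<script>
var secret = 42;                      // a var binding in this script
module Helper = load "helper.js";
module Utils = load "utils.js";
// inside utils.js: 'secret' is not visible, but the module bindings Helper
// and Utils are, so utils.js can say e.g. 'import Helper.*;'
</script>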

Dave



Re: Specification Language

2010-05-19 Thread David Herman
Arjun Guha, Claudio Saftoiu and Shriram Krishnamurthi have a recent paper on 
the topic:


http://www.cs.brown.edu/~sk/Publications/Papers/Published/gsk-essence-javascript/

Having had some experience with this question myself, let me just say that 
while formalization is appealing, it's a very subtle and time-consuming task.

Despite its well-known shortcomings, English is hard to beat for flexibility. 
When it comes to hitting the right level of specification (not over- and not 
under-), formalism can be frustratingly rigid. That includes more rigorous 
approaches like operational semantics as well as reference implementations or 
meta-circular definitions. As Graydon pointed out, you end up being forced to 
choose concrete representations that over-specify, particularly with the choice 
of data structures. Math gives you more wiggle-room, but it's still finicky and 
much slower going than code. And even most research languages don't have fully 
formalized semantics. [ Standard ML is about the only one I know of that is 
100% formalized. Don't tell me Scheme is unless you want to hear me drone on 
about my thesis. ;) ]

There are specific pitfalls to the meta-circular approach. As nice as it sounds 
to just take a language that everyone already knows, you end up standing on a 
delicate precipice. Fall over one side, and you accidentally leave implicit 
parts of the semantics that never get defined. Fall over the other side, and 
you end up spending all your time carefully defining the subset of ES that you 
want to use for your defining language-- in which case, you've essentially 
designed another language (one spec for the price of two!).

(I'm not saying improving the spec language is a bad idea, just trying to spell 
out some of the challenges.)

Dave

On May 19, 2010, at 2:44 PM, Dirk Pranke wrote:

 I wonder if you can answer some of the metacircularity concerns by
 defining the necessary parts using operational semantics, as in
 http://jssec.net/semantics/sjs.pdf .
 
 As an aside, has anyone actually attempted to formally document the
 necessary kernel subset (apart from the above paper)?
 
 -- Dirk
 
 On Mon, May 17, 2010 at 4:03 PM, Graydon Hoare gray...@mozilla.com wrote:
 On 17/05/2010 2:48 PM, Douglas Crockford wrote:
 
 I would like to see the next edition of the specification use the
 ECMAScript language to describe the ECMAScript language. I think this
 would significantly improve the likelihood that a programmer could
 correctly understand ECMAScript by reading the standard. I think that
 would make the web a better place. A strawman is in the wiki.
 
 http://wiki.ecmascript.org/doku.php?id=strawman:specification_language
 
 Some thoughts on this (as one of the authors of the ES4 RI):
 
  - The circularity hazards you mention are real, and were a large
reason why we bottomed out in SML. Unless you're expecting users
to take the fixpoint of the spec or something (which may not even
be unique) you will probably need to draw a line around a kernel
language that you define some other way. SML has a denotational
semantics, which is why we picked it. I'm not suggesting you pick it
again (plenty of problems arose; mostly cultural) but I think you'll
get the most mileage here when describing the standard libraries,
and possibly the front-end, rather than the core concepts like
evaluate expression or look up name.
 
  - There is already a fair quantity of code for both those bits
(library and front-end). ES4 libraries were being written in ES4;
Narcissus has a decent front-end and Mozilla has people working on
it still. Consider cribbing from these.
 
  - Another serious hazard (and strong-ish argument for drawing a line
around a non-self-defined kernel language) is the risk of
over-specification. It's easier to avoid in library code, much
harder when dealing with bits of the most-primitive semantics, which
differ quite a lot between different implementations of ES.
 
  - Be careful with legibility. Another (secondary) reason we settled on
SML is that it presented what we considered at the time to be a
relatively clean typographic quality and can be rendered as not
very much like source code with a certain amount of mechanical
translation and/or marked redaction work. We were repeatedly
warned that we would need to have a natural language translation
produced from the executable steps, on the basis that ISO and/or
ECMA would reject standards containing source and that IP issues
would substantially cloud and possibly derail the process if any
source code was proposed. Find out if this is a real problem before
you go too far down this road.
 
 -Graydon
 

Re: modules proposal

2010-05-19 Thread David Herman
 So both the moda.js and modb.js source files can contain (for example):
 ModUtils.myfunc();
 
 And can import the exports of ModUtils:
 import ModUtils.myfunc;
 myfunc();

Yes.

 Is it correct that a module declaration within a script tag only has
 scope within that script tag?

No, each script is in scope for subsequent scripts. (The section "Top Level" in 
the strawman is the relevant part.) One of the enhancements to the current 
proposal, which we've considered but not worked out in detail, would be the 
ability to create private modules.
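Concretely, a minimal sketch:

<script type="harmony">
module A { export var x = 1; }
</script>

<script type="harmony">
// A, declared in the previous script, is still in scope here
import A.x;
alert(x); // 1
</script>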

Dave



Re: Specification Language

2010-05-19 Thread David Herman
 I don't agree with this. Two of us formalized, implemented, and tested
 that paper in a month. That is hardly time-consuming, and it's not
 very subtle since we have test suites to test our formalization.

I brought up your paper because it's good work. I wasn't criticizing it. But 
there's a difference between formalizing the operational core of a language and 
writing a language standard.

BTW, only having two people work on it is an *advantage* -- committee work is 
hard. :)

 That includes more rigorous approaches like operational semantics
 
 Don't try to build a semantics for the entire language. Instead, build
 a semantics for the essentials (i.e. objects and functions) and
 write a function that elaborates all the details of language into the
 semantics.

a) I'm the one who advertised your paper, remember?

b) You're arguing with a strawman. I'm not saying "operational semantics is 
hard! let's go shopping!" I'm saying that making things more precise and formal 
runs the risk of over-specification. Maybe these will become less of an issue 
if we decide to determinize more of the language (e.g. object enumeration). But 
I believe libraries will still be tricky. For example, I don't think the spec 
should ever provide an executable presentation of |Array.prototype.sort|. 
Again, English can always swoop in and save the day. But at any rate, 
these are some of the challenges I'm talking about.

c) This isn't the first time we've discussed a formally specified core + 
elaboration. We considered it back in the ES4 days. But the committee decided 
to go with something more traditional and moved towards a reference 
implementation. At the time, people complained that our choice of *ML* was too 
freaky and academic. (Now imagine selling a reduction semantics...)

Doug has more recently suggested we use a subset of ES and write a 
meta-circular implementation, with the rationale that ES is more familiar and 
would be more approachable than ML.

Semanticists prefer choosing the framework that most naturally expresses the 
semantics at hand. I'm certainly on board in principle. But I'm not really the 
one to convince. ;)

d) If you're serious about suggesting lambda-JS as a basis (or starting point, 
anyway) for future editions of ECMA-262, may I make a suggestion? Do a 
proof-of-concept by taking the ES5 document and rewriting some of it in your 
suggested style. Not just the semantics but the document itself. It could be an 
illuminating exercise, and it would also likely result in a more compelling 
product. Nothing sells like concrete examples.

Dave



Re: Specification Language

2010-05-19 Thread David Herman
You're still not understanding me. My cat could write an executable 
implementation of the ES standard library. (Seriously, he's an amazing cat.) 
The point is whether you can hit the right level of abstraction, the right 
level of presentation, and the right level of specification-- neither over- nor 
under-. The fact that you wrote this:

 For example, here is Array.prototype.sort:
 
 http://github.com/arjunguha/LambdaJS/blob/master/LambdaJS/src/BrownPLT/JavaScript/Semantics/ECMAEnvironment.hs#L169

indicates that I'm failing to get this point across.

Dave



Re: Specification Language

2010-05-19 Thread David Herman
 You're still not understanding me. My cat could write an executable 
 implementation of the ES standard library. (Seriously, he's an amazing cat.)

Hm. I was feeling silly at the time but that just came across mean. Sorry about 
that. All I meant was that there's more to the spec than just whether it's 
executable, and in fact being executable is sometimes exactly *not* what you 
want.

Dave



Re: Specification Language

2010-05-20 Thread David Herman
 I'd just like to express my enthusiasm for taking a formal approach to the 
 kernel language. For everything outside the kernel, defining it by 
 self-hosting, by a meta-circular interpreter (where the interpreter is 
 written only in the kernel subset of the language) or by desugaring is fine. 
 These other techniques may or may not be ideal for expressing the semantics 
 of these non-kernel constructs, but it does ensure that an adversary's code 
 cannot take any action beyond those actions allowed by the kernel language.

It's important to distinguish whether this is a valuable exercise for:

1) research;
2) groups focused on security; or
3) the language standard.

(I know security is not a separable concern. But it is also not the exclusive 
concern of language design.)

I am thrilled that researchers are working on formal models of ES. But it's not 
within the purview of TC39 to do research. And there's nothing wrong with an 
unofficial, non-normative formalization of a normative spec coming out of an 
entirely different group. It could be just as useful-- and would undoubtedly be 
of higher quality-- than one we tried to do in the standard.

 All security reasoning is reasoning about limits on what an adversary may do. 
 Thus, all security arguments rest on an induction over all the actions 
 available to the adversary.

This statement's pretty muddled. Your "thus" does not follow (there are more 
proof techniques on heaven and earth, Horatio...) and your "induction" is not 
defined. But I see that Sergio, John, and Ankur have some papers with formal 
definitions-- I look forward to reading them. Thanks for the links.

Dave



Re: modules proposal

2010-05-20 Thread David Herman
Hi Mike, sorry I overlooked this message.

  | This allows cyclic dependencies between any two modules 
  | in a single scope
 
 1) Is this to say that cycles are allowed, or not allowed, in 
 other scenarios? (f ex remote or filesystem-loaded modules)

It's an automatic consequence of lexical scope. It's no different from 
functions. If you write:

function f() {
    function even(n) { ... odd(n-1) ... }
    function g() {
        function odd(n) { ... even(n-1) ... }
    }
}

you get a scope error, because |even| refers to |odd| which is not in scope. 
But if you put them both at the same level of scope, they can refer to each 
other. Same deal with modules.

Since MRL's and module names are distinct, there's no problem letting remote 
modules refer to one another, they just have to do so through agreed-upon 
names. For example:

module Even = "http://zombo.com/even.js";
module Odd = "http://realultimatepower.net/odd.js";

By binding them both at the same level of scope, both even.js and odd.js can 
refer to Even and Odd.

 2) Am I understanding this correctly as the module loader
 fetching and registering Lexer.js on demand if not already
 present in its registry mapping?

The offline-JS examples are mostly just suggestive; this question is left to 
the (host-dependent) module loader to determine. Some offline-JS engines might 
prefer to have a module loader that can register top-level names implicitly. It 
might also want to load these modules on demand (which is somewhat orthogonal), 
but this would likely only be desirable behavior for built-in libraries that 
have no observable side effects. (Laziness with side-effects is pain.)

But all of this would require some host-dependent, built-in support; there's 
nothing in the spec per se that provides this functionality.

 3) Is Lexer.js required to contain exactly one module
 declaration that must match the filename, or otherwise an
 exception is thrown?

The short answer is no: the contents are the body of the module.

The long answer is that it depends on the module loader. Module loaders have 
hooks that can arbitrarily transform the contents of a module. So a module 
loader can decide on whatever format it wants.

Note that when you say an exception is thrown, this is a compile/load-time 
exception. IOW, in the code being loaded, there's no observable exception. But 
of course, with dynamic loading, one program's compile-time error is another 
program's run-time error. :)

 4) Would this on demand loading functionality be present 
 also on the web platform?

No. We avoided lazy loading in the semantics, because it'd be a hornets' nest 
on the web. Modules are eagerly evaluated in deterministic, top-to-bottom 
order. As I say above, offline-JS has different needs than web-JS, so in that 
context a JS engine may wish to provide a built-in custom module loader that 
can deal with the filesystem in more clever ways.

As I replied to Kam Kasravi earlier in this thread, on-demand loading is 
achievable via explicit dynamic loading. We tried to be non-clever about doing 
this stuff behind programmers' backs.

 5) If so, would then the uses in main1 and main2 be equivalent
 wrt binding MyLib names in the example below?
 
  // lib/MyLib.js
  module MyLib { export doStuff() ... }
 
  // main1.js
  import lib.MyLib.*;
  doStuff();
 
  // main2.js
  module wrapper = load lib/MyLib.js;
  import wrapper.MyLib.*;
  doStuff();

No. There'd be no implicit loading on the web.

 6) If I want to change the previous example to instead expose 
 MyLib's names inside a MyLib module name, would I then do 
 the following? :
 
  // main1.js
  module MyLib = lib.MyLib;
  MyLib.doStuff();
 
  // main2.js
  module wrapper = load lib/MyLib.js;
  module MyLib = wrapper.MyLib;
  MyLib.doStuff();

I'm not sure I see what you're getting at, but IIUC, both of those examples are 
fine. You can always declare a new module binding that points to/aliases an 
existing module binding.

 In the Remote modules on the web (1) example we have:
 
  | module JSON = load 'http://json.org/modules/json2.js';
  | alert(JSON.stringify({'hi': 'world'}));
 
 7) Am I understanding correctly that this is pointing to a
 plain script (without module decls) which is wrapped inside
 the JSON module we specify?

It's a module body, which may contain nested module declarations, variable 
declarations, function declarations, statements, etc.
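For instance, a file meant to be loaded that way is just a module body -- something like this sketch (the function bodies are stubs):

// json2.js -- note there is no enclosing 'module ... { }' wrapper
export function stringify(value) { /* build and return the JSON text */ return ""; }
export function parse(text) { /* parse and return the value */ return null; }
module Internal { /* nested module declarations are fine too */ }
var cache = {}; // plain declarations and statements are fine as well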

 8) Imagine a json2mod.js with an embedded:
 
  module JSON { ... }
 
 which would result in:
 
  module JSON = load 'http://json.org/modules/json2mod.js';
  alert(JSON.JSON.stringify({'hi': 'world'}));
 
 Is this assumption correct?

Yes, but you wouldn't wanna do that. ;)

 9) If so, what syntax could be used to avoid the extra wrapper
 module at load?

The creator of the JSON library doesn't name it; the client names it. The JSON 
module just contains the *body* of the module. This is crucial for the web: you 
can have millions of 

Re: modules proposal

2010-05-20 Thread David Herman
 Heh-- excuse my bogus BNF! That's just a made-up extension that allows 1 or 
 more instances of MRL, separated by the ',' token. Spelled out:
 
MRL+(',') ::= MRL
   |  MRL+(',') ',' MRL

PS If this is still unclear, just replace MRL+(',') with MRLList, and add the 
production:

MRLList ::= MRL
 |  MRLList ',' MRL

I'll do that in the spec to avoid this confusion in the future.

Dave



Re: modules proposal

2010-05-24 Thread David Herman
 zero.js:
 module One = load 'one.js';
 module Drawing = load 'gun.js';
 module JQ = load 'jquery.js';
 
 one.js:
 import JQ;
 module Two = load 'two.js';
 
 two.js:
 import JQ;
 import Drawing;
 Drawing.draw();
 
 The module of concern to us is one.js.  According to your proposal, one.js is 
 doing in essence a text inclusion of the contents of two.js

It's not a textual inclusion. The only context-sensitivity within 'two.js' is 
its free references to other modules -- at the outset of the body of a module, 
only static module bindings are in scope.

 and as a result is susceptible to the problems unhygienic macro expansion 
 suffers.

That's an exaggeration and only very tangentially related.

The point of module references is to strike a balance between clean separation 
and convenience. The ability to refer to other modules implicitly makes direct 
references possible (including cyclic references) while still providing the 
client with ultimate control over naming.

 As a result, JQ in two.js gets bound to jquery which zero.js setup and one.js 
 imported as intended however Drawing *also* gets bound to the Drawing that 
 zero.js setup.  The author of one.js - the importee that is responsible 
 setting up two.js - had no way of knowing that two.js needed Drawing and was 
 going to find one in the scope chain.

I'm not sure what you mean by "no way of knowing." In this setup, a module's 
free references to existing modules are part of its API. This information would 
be part of the documentation. It's similar to the way things work now, in that 
libraries collaborate by sharing things through documented top-level bindings 
in the global object, but done in a more static way to allow for true lexical 
scope.

  Nor could it take any action to withholding bindings to other imports deeper 
 in modules that two.js may request.

On the contrary-- it can, via module loaders. If you want to load, say, an 
untrusted 3rd party module, you can use module loaders to do so in a separate 
environment.

But the purpose of the core module system is not to provide isolation between 
untrusted parties. It's important to distinguish modularity and isolation. The 
module system provides cleaner code separation while preserving convenient 
collaboration between modules. The module loaders API, by contrast, provides 
stronger controls for isolation.

Dave



Re: Modules: Name capture

2010-06-02 Thread David Herman
 Sorry for the slow reply -- was sick

No worries-- hope you're feeling better.

 Years of PL research and experience have demonstrated that explicit linking 
 tends to be unwieldy and inconvenient.
 
 That needs to be added to my reading list. Cite away! :)

ML is dead; what more evidence do you need? ;)

Really, though, the research literature on modules is enormous. I don't have 
the time or inclination to provide a full bibliography. Personally, I've worked 
with several advanced, explicitly-linked module systems, including ML functors 
and PLT Scheme units.

 With concise object literals, would that not be:
 
 module Even = load 'even.js' with { Odd };
 module Odd = load 'odd.js' with { Even };

Possibly, depending on whether you want to present modules to themselves as 
well.

But really, I've seen it before: these kinds of specification languages for 
module graphs spin out of control. You'll wish you had the ability to abstract 
the thing on the RHS of with -- and then you'll have to introduce the 
complexity of compile-time bindings of module graphs, and figure out how to 
shoe-horn those into the existing syntax and semantics. Or, you'll hold the 
line and force programmers to keep writing out the full module graph over and 
over again, in which case they just won't ever use modules at all.

 But seriously: I am not *necessarily* suggesting explicit linking (however 
 defined). I am pointing out the necessary consequences of a dangerous design 
 that promises more than it can deliver.

You've not demonstrated that.

Dave



Re: Modules: Name capture

2010-06-02 Thread David Herman
 I don't have the time or inclination to provide a full bibliography.
 
 I consider your argument withdrawn, then.

Excuse me? My argument is not withdrawn (are we in court?). If you are 
unaware of decades of prior art on modules, that's not my failing but yours.

My argument was and remains that others have gone down that road, and it's 
still very much an open research topic how to create module systems that 
provide the generality of explicit linking with the convenience of implicit 
linking. See e.g. Derek Dreyer's work, starting with his thesis and continuing 
to this day.

 Possibly, depending on whether you want to present modules to themselves as 
 well.
 
 As I believe we discussed in our most recent f2f, it is possible to provide 
 modular code with access to its own reified module instance via some 
 distinguished symbol (e.g., this at the top level). And of course, modular 
 code always has direct individual access to its own exports.

Hence depending.

 As recast, therefore, the example introduces Odd to even.js and Even to 
 odd.js. It's pretty minimal.

And yet it's still too expensive. No one will take the step from non-module 
code to module code. They just won't. Besides, a not-quite-so-bad example of 
the Odd and Even modules is pretty weak tea.

The point is, you can special-case this if you want, but if you have a module 
graph of N modules, and each needs to be explicitly linked with N - 1 other 
modules, then you impose a quadratic code-size requirement on programmers. 
Unless, as I said, you beef up your linking-specification language.
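
To make the scaling concern concrete, here is a sketch of what fully explicit 
linking looks like with just four modules (hypothetical file and module names, 
reusing the 'with' form from the example above):

module A = load 'a.js' with { B, C, D };
module B = load 'b.js' with { A, C, D };
module C = load 'c.js' with { A, B, D };
module D = load 'd.js' with { A, B, C };

Every new module has to be threaded through every existing declaration -- that 
is the quadratic growth.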

Dave



Re: Composition of Uncoordinated Working Sets of Modules

2010-06-04 Thread David Herman
Hi Kris,

Thanks for your thoughts; I'll keep reading but I do want to respond to a 
couple points that I don't think are quite accurate.

 The one step forward comes from handling cyclic dependencies
 elegantly.  If I am correct, this is the feature we gain from second
 classness and from not basing the module system on a better eval.

I don't agree with this summary. First of all, you don't have to base any 
module system on eval. By keeping modules second class, we get a number of 
benefits, not just handling cyclic dependencies. (In fact, cyclic dependencies 
can be handled nicely in a first-class module system as well.) One of the 
benefits of second-class modules is the ability to manage static bindings; for 
example, import m.*; is statically manageable. Allen has made some good points 
about how second-class modules are a good fit for the programmer's mental model 
of statically delineated portions of code. At any rate, cyclic dependencies are 
not the central point.

 The loader proposal reintroduces the idea of a better
 eval, being simply a hermetic evaluator that collects a working set
 of modules, links them, and executes them.

It's more than eval -- e.g., it provides load hooks to manage resource fetching 
and even allow transformation -- but yes, it does provide a more controlled 
eval.

 Because all of the Rhino codebase contains fully qualified names in
 every file, refactoring Rhino to contain and link against alternate
 names is onerous, and alternately creating a parallel universe for the
 minifier fork is onerous, so these things are simply not done.

I don't see how this problem applies to simple modules. Because modules are 
referred to as bound names, rather than as fully-qualified names or URL's, it's 
easier for separate projects to share common components. They can even share 
the same module under different names, since module names can be rebound 
(module NewName = OldName) and modules in separate files can be loaded with 
different names (module Foo = load '...some url...').
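
For instance (strawman syntax; the URL and module names are made up for 
illustration):

module Collections = load 'http://example.com/collections.js';
module Coll = Collections;    // the same module instance under a local alias
import Coll.*;                // optionally bring its exports into scope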

 The original Simple Modules proposal was only sufficient in the small.
 The Loaders proposal addresses the large.

That's not true. Loaders are about isolation. I agree with you that 
conceptually, there's a level of granularity that consists of a set of modules, 
which is often what we mean by package, at least in common usage (if not the 
particular meaning of that term in a given language). But the idea of 
nested/hierarchical modules is that modules scale to the large by simply making 
modules that consist of nested modules.

 It does not yet enable linking to other working sets
 of internally consistent modules

This is also not true; the ability to attach modules to module loaders (as well 
as the dynamic evaluation methods) makes it possible for separate module 
loaders to communicate. However, loaders aren't about linking multiple working 
sets, but rather providing isolated subspaces. (One use case I sometimes use is 
an IDE implemented in ES, that wants to run other ES programs without them 
stepping on its toes.)

 Another feature of Simple Modules is that it preserves the
 equivalence by concatenation property of existing script tags,
 while liberating the scripts from being sensitive to the order in
 which they are concatenated.  This is in conflict with the goal of
 removing autonomous module blocks.

I don't quite understand this, and I'm glad you bring up the issue of latency 
and plugging into the browser semantics. I believe we're at least partway to 
the answer, but I won't believe we've solved it till I really see it go all the 
way through. That said, I am also not convinced that a) the script tag is 
going away any time soon, or that b) we necessarily need to solve these 
problems in the context of a module system.

 Simple Modules, at present, will not sufficiently assist people
 constructing applications and APIs by composing non-coherent groups of
 name spaces produced by non-cooperating groups of developers.


I'm not convinced of this point. If someone doesn't want to share their code, 
there's nothing we can do to make them do so. But if they do want to, the 
simple modules proposal explicitly *solves* the problems of Java-like systems 
where everything is hard-wired. Instead, modules are given lexically scoped 
names, and can even be deployed without naming themselves; both of these 
features make it far easier to share code between different teams.

Dave



Re: Composition of Uncoordinated Working Sets of Modules

2010-06-07 Thread David Herman
 It would be
 good for this to be expressed in one of the examples, and
 for it to be clarified in the description of semantics that
 every script is also an anonymous module from which the
 exports are only accessible through the lexical scope
 shadowing (I assume) and by being bound to a module
 through a load expression.

This doesn't sound quite right-- in the terminology we used, scripts are not 
modules. An application is composed of a sequence of scripts, which are like 
module bodies but do not contain exports. Each script's bindings are in scope 
for all subsequent scripts. By contrast, the target of load is the body of a 
module, which can export bindings.

 This is a point that Ihab clarified for me yesterday
 evening that merits bold and emphasis: loaded modules are
 not singletons.

Yes, I think something we need to do is write out some more material explaining 
the proposal in more tutorial fashion. The examples page was a start, but 
clearly not enough.

 You do this to avoid having to compare
 MRL's for equivalence, particularly to avoid having to
 define equivalence given the potential abundance of edge
 cases.
 
http://example.com/module?a=10&b=20
http://example.com/module?b=20&a=10

Yes, as well as the fact that even the bit-for-bit same URL can deliver 
different bits from moment to moment. So module loading really is effectful 
(albeit at compile time), in the sense that it performs arbitrary Internet I/O. 
In lieu of requiring programmers to learn rules about when two references to 
modules are referring to the same memoized instance and when the instance is 
loaded and evaluated, simple modules make all this explicit and under the 
programmer's control.

 It would also be good for there to be a way to bind $ without binding
 a module.
 
const A = load("aQuery.js").$;
const B = load("bQuery.js").$;

There are a couple reasons why I think I'd avoid this kind of thing: for one, 
it means that |load| -- which indicates a /compile-time/ operation -- can now 
be arbitrarily nested in a program instead of just at top level. Also, loading 
is a fairly heavyweight operation, and since it doesn't memoize, you could very 
easily end up with accidental duplication.

As Sam says, you can write almost the same thing via dynamic loading:

const A = ModuleLoader.current.loadModule("aQuery.js").$;
const B = ModuleLoader.current.loadModule("bQuery.js").$;

or a little more conveniently:

function load(ml, mrl) {
    return ml.loadModule(mrl);
}

const ml = ModuleLoader.current;

const A = load(ml, "aQuery.js").$;
const B = load(ml, "bQuery.js").$;

The main difference from what I think you intended is that this would do the 
loading dynamically.

 It's possible to use the module loader API to do this,
 slightly more verbosely.   But why?  If you say:
 
  module A = load "aQuery.js";
 
 then A.$ is already available for use in expression
 contexts.
 
 I can make the same argument about import *.  If I import
 A, I can access its contents as A.$.  To permit
 destructing on all import expressions would be consistent
 philosophically.

I don't follow your reasoning-- import A.* is a convenience form to bind the 
exports of A as local variables. It serves a very different purpose.
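
Concretely (strawman syntax; suppose a.js exports f and g):

module A = load 'a.js';
import A.*;    // f and g are now local bindings
f();           // the same function as A.f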

Local, nested module loading could either mean static loading, which I contend 
would be confusing and error-prone, or dynamic loading, which is already 
available via the dynamic loading API.

 Your example might point to a need to augment the module
 loader api with information on 'load' calls specifying
 what module the 'load' occurs in.
 
 Exactly.  The Narwhal loader receives an id and a baseId on
 from require(id) calls.  Each module gets a fresh require
 effectively bound on the baseId.  I think that the loader
 handler needs to receive the base MRL as an argument or part
 of the request object.

Yes, I agree. That was an oversight-- thanks for bringing it up.

 Another thing that Ihab clarified which merits a full
 section on the wiki is the dynamic scoping of lexical module
 names.

I've said it before: it's not dynamic scoping. It's static, lexical, 
compile-time scoping. Dynamic scoping necessarily involves dynamically 
determining the binding of a variable. There's nothing of the sort happening 
here; it's all compile-time.

 This is something I have not considered.  It would be good
 to do a write-up on what use-cases you have in mind for this
 feature.

Yes, we should definitely do that. Two important use cases are 1) standard 
libraries, which would be shared as global module bindings in a standard module 
loader, and 2) mutually recursive modules, which need to agree on what they 
call one another.

 I'm going to mull the implications, but one for
 sure, is that it is necessary to buy a whole package even if
 you only want a single function from it.

True. I don't think it's reasonable to try to solve the more intricate problems 
of partial or on-demand loading of modules. I think 

Re: We need to name EphemeronTable (was: Do we need an experimental extension naming convention?)

2010-07-02 Thread David Herman
 Cool. I'm warming to WeakMap as well. Do we have any objections to WeakMap?

+1

I <3 WeakMap.

Dave



Re: Rationalizing ASI (was: simple shorter function syntax)

2010-07-25 Thread David Herman
 Mark's restricted production for CallExpression attacks the hazard even more 
 directly, but apart from our aversion to restricted productions, what might 
 it break?

I don't see offhand what it might break. This question seems easy to 
investigate empirically-- crawl the web looking for violations of the 
restriction.

Personally, I'm not enthusiastic about this line of pursuit. It smells of 
excessive fool-proofing. Ad-hoc restrictions seem both unlikely to provide 
clear guarantees and likely to have unintended consequences. Irregular syntax 
is bumpy terrain; obfuscation we will always have with us. In the absence of 
strong evidence of a need, I'd prefer to relegate such syntactic restrictions 
to third-party lint tools and let them experiment with them. Just my $.02.

Dave



Re: WeakMap API questions?

2010-09-02 Thread David Herman
 That page currently has TBD semantics.

Yeah, that's part of the work that needs to be done. Intuitively, it's a simple 
idea: ToName essentially generalizes the current semantics of property lookup; 
instead of trying to convert the property to a string, you try to convert it to 
a name-or-string. If it's already a name, it's a no-op; otherwise, it goes 
through the existing mechanisms.

IOW, everything behaves as before, except name objects are not converted to 
strings.
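
Roughly, in pseudo-code (a sketch of the intuition, not spec text; isName is a 
made-up helper standing for "is a private name value"):

function ToName(v) {
    if (isName(v))
        return v;           // private names pass through unchanged
    return ToString(v);     // everything else converts exactly as it does today
}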

 Syntax aside, is the observable semantics of Names different from 
 http://wiki.ecmascript.org/doku.php?id=strawman:inherited_explicit_soft_fields?
  How? If the only semantic difference is (not normally observable) less 
 aggressive GC obligations, great. I'm confident we can converge those. 
 Anything else?

The interface is different. With weak maps, you store soft fields off to the 
side; with private names, you actually get/set the properties directly on the 
object.
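
In code, the contrast is roughly this (a sketch; the object and field names are 
made up):

// soft fields / weak maps: a side table keyed by the object
var priv = new WeakMap();
priv.set(obj, { secret: 42 });
priv.get(obj).secret;    // 42

// private names: a property stored on the object itself, under an
// unforgeable key
private key;
obj[key] = 42;
obj[key];                // 42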

   class Shape {
 private draw() {...}
 public coDraw(other) {
   draw();
   private(other).draw();
 }
 public shoot(gun) {
   gun.draw();
 }
   }
 
 In the names proposal, it seems that once in scope of a private draw 
 declaration, all apparent uses of draw as a property name are amplifying. 
 Even if the object being accessed has a normally named draw property, 
 gun.draw() will fail to access it. Is that right?
 
 If all this will be addressed in the forthcoming love, I'm happy to postpone 
 these questions till then. Thanks.

These are good questions, but probably best to postpone till Sam has time to 
flesh out the strawman page.

Dave



Re: WeakMap API questions?

2010-09-03 Thread David Herman
But HashMaps and WeakMaps both map objects to values. The difference is just 
that, with WeakMaps, the mapping is weak. The name is excellent, short, and 
clear.

 Perhaps ObjectMap would be better?

That wouldn't distinguish them from HashMaps, since they are both object maps.

WeakMap is a really, really good name. Nay, an /incredibly awesome/ name. Good 
names are so hard to come by. Let's not overthink this in the effort to prevent 
all possible confusion. Mike momentarily forgot what they mean, but there 
aren't really any API docs as such and it's not like he was actually writing 
code with them. I'd imagine after spending 30 seconds writing code with 
WeakMaps, no one would be confused about the types.
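
For instance, here's about 30 seconds' worth (the variable names are made up):

var wm = new WeakMap();
var key = {};                // keys must be objects
wm.set(key, "some data");    // weakly maps the object to a value
wm.get(key);                 // "some data"
wm.has(key);                 // true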

It ain't broke -- don't fix it!

Dave



Re: Classes as Sugar is now ready for discussion

2010-09-08 Thread David Herman
 But since Traits seems to be blocked from advancing,
 
 Is there someplace I should read to understand why Traits cannot advance?
 
 I asked MarkM off-list what the reason was, and he replied that there
 was an objection (raised by Waldemar?) to how class evolution was
 handled.

My feeling, and I think the feeling of others at the meeting when we discussed 
traits, was that traits.js is a very nice library but that it doesn't offer 
enough to the language to warrant standardization, at least yet. The fact that 
there would be performance benefits to built-in implementations of traits isn't 
enough to make the case. IMO, libraries should generally be very widely used 
and very stable before they are added to the ES standard library.

Dave



Re: Classes as Sugar is now ready for discussion

2010-09-08 Thread David Herman
 Does that feeling carry over to any variants that might actually
 include new syntax?

The issue isn't whether it introduces new syntax, it's whether it introduces 
new semantics. The Traits library, as written, is completely implementable as a 
library in the existing language.

That's not to say that we should *never* add new already-implementable 
libraries to the standard. Sometimes they fill an obvious hole (like 
Array.prototype.forEach), or they are clear, stable, and popular (like JSON).

Dave



Re: Classes as Sugar is now ready for discussion

2010-09-08 Thread David Herman
 libraries should generally be very widely used and very stable before they 
 are added to the ES standard library.
 
 That would seem like an unfair penalty. Am I to infer classes-as-sugar OS 
 preferred because it _can't_ be implemented as a library (despite being a 
 more experimental approach, and at odds with existing OOP libraries)?

No. But keep in mind that nobody's denying anyone the traits library. Go 
to http://traitsjs.org and use it today! :)

IOW, I think it's better to focus on the things that can't otherwise be done in 
user code. Blessing certain existing libraries with possibly-more-performant 
C++ implementations is not a good use of committee resources, and it's a bad 
way to release and rev software since it's so much more encumbered by process.

Dave



Re: Classes as Sugar is now ready for discussion

2010-09-08 Thread David Herman
 Agreed; perhaps my question was not clear. If there was a Traits-like
 proposal that did include new syntax, would you be against it because
 you can implement something similar as a library without needing new
 semantics, or would you be more inclined to reserve judgement until
 you could actually review a proposal and see what the proposed
 benefits were?

The latter (speaking for myself, of course).

Dave



Re: Classes as Sugar is now ready for discussion

2010-09-09 Thread David Herman
 Also, the duality of Object.create vs Traits.create accommodates traditional 
 vs high integrity quite well -- without AFAICT compromising either.

It creates a false choice, though (all or nothing). IIUC, with Object.create, 
you don't even get the conflict checking. And then you've really lost the key 
benefit of traits.

I think there's room for alternatives in the traits space -- for example, 
something similar wrt trait composition, but that didn't bind |this| or freeze. 
That way, you could still integrate traits with the existing prototype system. 
For example, to compose traits to create an object that you then use as the 
prototype for a constructor. This would allow for the vtables approach and 
would also give you the ability to specify initialization behavior to invoke on 
instantiation, which you can't do with traits.js.
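
For concreteness, here is roughly what that prototype-composition pattern looks 
like if you approximate it with today's traits.js plus Object.create (the trait 
and constructor names are invented, and note that this route skips the conflict 
checking that Trait.create performs -- exactly the trade-off described above):

var TEnumerable = Trait({ forEach: function (f) { /* ... */ } });
var TComparable = Trait({ compareTo: function (other) { /* ... */ } });

function Container(items) {
    this.items = items;    // per-instance initialization on construction
}
// A trait is a property-descriptor map, so Object.create accepts it directly,
// but the result is neither conflict-checked, frozen, nor |this|-bound.
Container.prototype = Object.create(Object.prototype,
                                    Trait.compose(TEnumerable, TComparable));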

Dave



Re: Classes as Sugar is now ready for discussion

2010-09-10 Thread David Herman
 Yes, that's an accurate summary. It also brings me back to Dave's earlier 
 question about the limited choices provided by the Traits library.
 ...
 Long story short: it's definitely possible for a Traits library to offer more 
 knobs, although I'm not sure whether the increased complexity is worth it.

Just to be clear, I wasn't saying we should consider *more* knobs, just that 
other knobs are possible. I'm not convinced that the Trait.create knob offered 
by traits.js is necessary.

 Judging from earlier comments, it seems there is at least a niche for the 
 combination of 'early conflict detection' + 'non-frozen, non-|this|-bound 
 objects'.

I'm told this is what our colleagues working on Skywriter would have preferred.

Dave



draft of strawman:simple_modules

2010-09-28 Thread David Herman
I've recently updated and clarified the draft strawman for simple modules:

http://wiki.ecmascript.org/doku.php?id=strawman:simple_modules

One of the key changes is to the initial environment of modules, addressing the 
issue of module name capture pointed out a while back by Jasvir Nagra and Ihab 
Awad. The spec now distinguishes externally-loaded modules from internal 
modules. Internal modules simply extend the existing environment, like any 
other scoping construct. But to prevent external modules from being sensitive 
to *all* modules in scope, they are given an initial environment with only the 
*local* modules (i.e., the sibling modules declared in the same parent module 
as the module-load declaration in question). There are provisions for locally 
rebinding a module so that it can be made explicitly visible to an 
externally-loaded module.

The strawman now provides a more detailed semantics for the core system. Allen 
Wirfs-Brock made the good suggestion that we create a rationale document. I 
will be working on that, and I'll send announcement when it's ready. The module 
loaders API is not ready yet either, so that'll be another future announcement, 
too.

Dave



Re: Oct 1 meeting notes

2010-10-04 Thread David Herman
Waldemar, thanks for the great notes. One quick comment on the binary data 
notes:

 int64's:  Open issue.  Reference semantics are annoying, but what's a
 realistic alternative?
 int128's?  Those come up increasingly often in SSE programming.

We briefly discussed bignums as a realistic alternative, where equality would 
be based on numeric value rather than object identity. But for now we'll try to 
keep the binary data spec as orthogonal as possible from a bignums proposal.

Dave



hoisting past catch

2010-10-11 Thread David Herman
ES3 `catch' is block-scoped. At the last face-to-face, we talked about 
statically disallowing var-declarations from hoisting past let-declarations:

function f() {
    {
        let x = "inner";
        {
            var x = "outer"; // error: redeclaration
        }
    }
}

I just noticed a case I missed in the discussion, which has actually existed 
since ES3:

function g() {
    try {
        throw "inner";
    } catch (x) {
        var x = "outer";
    }
}

This is allowed, and it binds a function-scoped variable x, while assigning 
"outer" to the catch-scoped variable x. This is pretty goofy, and almost 
certainly not what the programmer expects. And it's exactly analogous to the 
function f above, which SpiderMonkey currently rejects and we all agreed 
Harmony ought to reject.
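
To see why it's almost certainly unintended, consider what a caller observes 
(a sketch, per ES3 scoping):

function g() {
    try {
        throw "inner";
    } catch (x) {
        var x = "outer";    // assigns the catch-scoped x, not the hoisted one
    }
    return x;               // the function-scoped x was never assigned: undefined
}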

It's too late to add this case to ES5's strict mode restrictions, but I propose 
we ought to reject it in Harmony.

Dave



Re: No more modes?

2010-10-14 Thread David Herman
 Given <script type="harmony"> as an opt-in, I'm puzzled about how it would 
 work anyway. Since it is per script, not per frame, presumably
 
 <script type="harmony">"use strict"; var e1 = eval;</script>
 <script>"use strict"; var e2 = eval;</script>
 <script> ..."use strict"; e1 === e2 /*results in true*/ </script>

That's not quite right, at least according to the simple modules design. But 
it's my fault for not spelling it out clearly enough in the strawman. In 
particular:

- Harmony scripts would not have the legacy global object as a record in their 
scope chain
- Harmony eval retains the same value and behavior as ES5-strict eval, i.e., it 
evaluates its code as legacy code, not as Harmony code.

Both of these are open to discussion, of course, but I offer them at least as 
counter-evidence to your implication that there's no answer to the versioning 
question other than eliminating modes.

The invariants of Harmony do *not* rely on not interacting with legacy code. 
You still get lexical scope, and ES5-strict eval is sufficient for that 
purpose. (When you run eval, of course, you're running code in a language that 
doesn't have full lexical scope.)

 In other words, that both harmony and non harmony code on the same page have 
 the same binding for the global eval function. In that case, what does the 
 following code do:
 
 e2('"use strict"; var module = 8;');
 
 This is currently legal es5/strict code. As suggested by the modules 
 strawman, it is not legal harmony code.

It's perfectly legal. It's an indirect eval, regardless of whether it's being 
called from ES5-strict or Harmony.

 es5/strict and harmony share a heap.

Yep, and that's fine.

 I do not see a good answer.

As you say, that's why it's worth discussing.

Dave



Re: value_types + struct+types = decimal

2010-10-18 Thread David Herman
Are you suggesting a) that struct types should always be value types, or b) 
some sort of extension to the binary data spec that allows the creation of 
immutable structs that are value types?

I'm afraid a) just seems unworkable -- compound binary data needs to be 
mutable, and its sub-components really need to be selectable by reference, not 
by copying. If you meant b) (or something else), can you flesh it out a bit 
more?

Dave

On Oct 16, 2010, at 2:34 PM, Sam Ruby wrote:

 I'm pleased to see value_types discussed again:
 
  http://brendaneich.com/2010/10/should-js-have-bignums/
 
 It is my belief that value_types[1] and struct_types[2] are sufficient to 
 build a pure ECMAScript implementation of Decimal.  Such an approach would 
 solve a number of problems:
 
 (1) Ubiquity.  Making Decimals a required feature was controversial within 
 the committee.  Anything less would hamper adoption.  Making the 
 implementation in ECMAScript and having that implementation only depend on 
 value_types and struct_types, then the implementation effectively becomes as 
 ubiquitous as those two features.
 
 (2) Format.  A number of committee members objected to 754R as a format.  
 Those that wish to pursue their own formats would be welcome do to so.
 
 The only key requirement that this is not known to satisfy is:
 
 (3) Performance.  While we were not expecting hardware level performance, 
 something approximating C-level performance was the goal.
 
 Mitigating that is:
 
 (3a) ECMAScript implementations have improved dramatically since this was 
 last evaluated.  The expectation is that this trend will continue. And given 
 the expected use case for struct_types, it is especially likely that that 
 feature will be implemented with performance in mind.
 
 (3b) Not all users have the same performance requirements.
 
 (3c) A stable implementation written in C with a liberal license[3] continues 
 to be available.  This will always be an option for enlightened browser 
 vendors with an interest in performance.
 
 Given those mitigating factors, I am quite willing to proceed with the above 
 approach, and would strongly encourage the committee to consider adopting the 
 value_types proposal, and to not make any design decision which would 
 preclude struct_types from being considered as a value_type.
 
 - Sam Ruby
 
 [1] http://wiki.ecmascript.org/doku.php?id=strawman:value_types
 [2] http://wiki.ecmascript.org/doku.php?id=strawman:binary_data#structs
 [3] http://download.icu-project.org/files/decNumber/decNumber-icu-368.zip



Re: instanceof trap and default handler for Proxies

2010-10-30 Thread David Herman
 How 'bout extending proxies to provide an introspection API?

That's a really vague question. It would help to say what kind of introspection 
you had in mind.

Dave



Re: Negative indices for arrays

2010-11-11 Thread David Herman
 If harmony would introduce this syntax guarded under a new script type, 
 there 
 would at least be no danger of breaking the web (existing scripts).

That sounds like an interop nightmare -- you're talking about forking the Array 
type between language versions. Keep in mind that non-Harmony and Harmony code 
will be able to interact in the same page.

Dave



Re: Nov 18 notes

2010-11-21 Thread David Herman
 And (sorry, I'll try to keep replies in one message next time) for vs. 
 forvals does not exactly scream keys vs. values, since for is only 
 about keys if you know ECMA-262 and expect the mystery meat of enumeration.

IMO, "forvals" is a non-starter, as is "foreach" or "for each". The "for" part 
of the syntax denotes quantification, and the stuff to the right of the 
variable denotes what is being iterated. In all cases, we are talking about 
universal quantification, so they should all be "for".

 Which raises another point: meta-programmable iteration is not necessarily 
 about values and not keys. A custom iterator could (and in the strawman 
 does) return key/value pairs. The whole keys vs. values dilemma is a false 
 one here.

Indeed. There are an unbounded number of types of sequences that can be 
iterated over. Whether we provide 2 or 200 it will never be enough. Hence the 
need for a general, programmable iteration mechanism.

Dave



Re: Nov 18 notes

2010-11-22 Thread David Herman
 1a)  New semantics should use new syntax in a manner that clearly avoids 
 confusion with existing syntax.
 1b)  Syntax should generally be suggestive of a reasonable interpretation of 
 the semantic
 1c)  Harmony is not being designed using the no new syntax rule
 1d)  There is nothing sacred about for as the initial keyword of an 
 enumeration statement.

Nobody said sacred -- I'm not genuflecting. :) Seriously, the reason for 
using for is that it's one of the most stable, common, and universally-used 
keywords for iteration in almost all languages. Introducing a new initial 
keyword is straying really far from precedent, both in JS and other imperative 
languages.

But I appreciate your spelled-out premises. I think my main quibble is with 1a 
as a rule. I might prefer something like:

2a) If existing syntax is given new semantics, it should extend the existing 
semantics conservatively. Otherwise, the new semantics should get new syntax.

 From these I conclude that new iteration semantics should be syntactically 
 distinct from the current for-in and probably the greater the syntactic 
 distance from for-in the better.  Following these principles, here is a 
 suggestion for a new enumeration statement that makes use of existing 
 reserved words:
 
 enum key with keys(x) {
   alert(key)
 }

This is clever, but it just seems to go off the deep end: the syntax is too 
inconsistent with JS precedent. Also, enum is the wrong keyword -- in JS 
parlance, this is iteration not enumeration.

I guess I'm still open to new syntaxes, but I also still feel that when you 
step back and weigh the trade-offs, the cost of all this new syntax is 
incommensurate with the amount of new semantics, and moreover the traditional 
for-in syntax is still the sweetest I've seen for custom iteration. I would 
rather extend the best syntax and leave the legacy special case as a very small 
wart than have a warty syntax with a supremely orthogonal semantics.

 2) Whenever possible, less general pre-existing syntactic forms should be 
 redefined to desugar into new more general forms.

I think this is pretty uncontroversial; whatever syntax we decide on, the 
specific legacy construct can be defined in terms of the more general new 
construct.

 3) Proxy traps should be defined based upon the new, more general semantics 
 not legacy less general semantics.
 
 Define the traps necessary to support enum-with and depend upon the 
 desugaring to take care of legacy for-in.

You don't think for-in should even allow the enumerate trap? This seems to go 
against the design approach of proxies; it's not just for introducing new 
meta-programmable constructs, but also for meta-programming existing facilities.

 4) Provide builtin-library alternatives for new statements that can be used 
 without down-rev syntax errors:

This seems like a good idea.

Dave



Re: Nov 18 notes

2010-11-22 Thread David Herman
 2a) If existing syntax is given new semantics, it should extend the existing 
 semantics conservatively. Otherwise, the new semantics should get new syntax.

Perhaps I should have numbered that 1a'). :)

Dave



Re: Colons and other annotative characters

2010-11-22 Thread David Herman
 I somehow suspect stringifying the iterator next() return value from for-in 
 machinery will not placate folks who want for-in not to be metaprogrammable.

Nor would it work -- you wouldn't be able to get the values() or properties() 
iteration behavior, for example. It would be the worst of all possible 
compromises.

 But harmony:proxies is already spec'ed with an enumerate trap. Something does 
 not add up. You're right that the line between your items (2) and (3) is 
 arbitrary.

Agreed.

Dave



Re: `static` keyword from C/C++ as own closured var declaration

2010-11-23 Thread David Herman
Allen suggested something like this in the September meeting. One issue people 
had with it was that it adds another violation of the equivalence between a 
statement stmt and (function()stmt)(), which is a refactoring hazard. Put 
differently, if you have some code with static declarations in it, you can't 
wrap the code in a function body without breaking the code, which makes it 
brittle.

Separate from that, I also don't really see how the idea really buys you all 
that much. The analogy to C is only so helpful, since C doesn't have nested 
functions (and static therefore always lifts to global scope), so mostly it 
just reads to me like a rather obscure way of saying "oops, I meant to bind 
this over there."

Dave

On Nov 22, 2010, at 8:18 PM, Bga wrote:

 // es3 way
 (function()
 {
  var x = 1;
 
  return function()
  {
return ++x;
  }
 })();
 
 // current es6/SM1.8 way
 let(x = 1) function()
 {
  return ++x;
 }
 
 // new more readable sugar
 function()
 {
  static x = 1; // hello c/c++
 
  return ++x;
 }
 
 Implementation, when compiling source code, just collects 'static' vars
 from scope and wraps current scope to closure scope with collected vars  
 
 
 
 



Re: `static` keyword from C/C++ as own closured var declaration

2010-11-23 Thread David Herman
 Can you give a small example (it's just interesting) -- to see the issue?

Sure thing. Say you're writing some code with a constant value, and somewhere 
inside the code you use `static':

var METERS_PER_SQUARE_KILOJOULE = 17.4;
...
static foo = 1;
...
f(foo, METERS_PER_SQUARE_KILOJOULE);

Now you decide you want to parameterize over the constant, instead of a fixed 
constant:

function(metersPerSquareKiloJoule) {
    ...
    static foo = 1;
    ...
    f(foo, metersPerSquareKiloJoule);
}

This change accidentally alters the scope of `foo'.

Dave



Re: `static` keyword from C/C++ as own closured var declaration

2010-11-23 Thread David Herman
Hm, that's an interesting point: *all* declaration forms are sensitive to being 
wrapped in a function, e.g.:

|(function() { var x })()| != |var x|

That pretty much nixes that critique!

Dave

On Nov 23, 2010, at 9:53 AM, Allen Wirfs-Brock wrote:

 How is your example any different from if you had said:
  const foo=1;
 
 In both cases, wrapping the declaration with a function changes its scope??
 
 Allen
 
 -Original Message- From: David Herman
 Sent: Tuesday, November 23, 2010 9:24 AM
 To: Dmitry A. Soshnikov
 Cc: es-discuss@mozilla.org
 Subject: Re: `static` keyword from C/C++ as own closured var declaration
 
 Can you give a small example (it's just interesting) -- to see the issue?
 
 Sure thing. Say you're writing some code with a constant value, and somewhere 
 inside the code you use `static':
 
   var METERS_PER_SQUARE_KILOJOULE = 17.4;
   ...
   static foo = 1;
   ...
   f(foo, METERS_PER_SQUARE_KILOJOULE);
 
 Now you decide you want to parameterize over the constant, instead of a fixed 
 constant:
 
   function(metersPerSquareKiloJoule) {
   ...
   static foo = 1;
   ...
   f(foo, metersPerSquareKiloJoule);
   }
 
 This change accidentally alters the scope of `foo'.
 
 Dave
 



Re: Nov 18 notes

2010-11-23 Thread David Herman
   for (k in keys(o)) ...
   for (v in values(o)) ...
   for ([k, v] in properties(o)) ...
 
 What are keys, values, and properties here?  Global functions?

Those are API's suggested in the strawman:iterators proposal. They would be 
importable from a standard module.

  How would a new object abstraction T customize them just for instances of T?

By writing its own custom iteration protocol via proxies with the iterate() 
trap implemented appropriately. E.g.:

function MyCollection() { }
MyCollection.prototype = {
    iterator: function() {
        var self = this;
        return Proxy.create({
            iterate: function() { ... self ... },
            ...
        });
    }
}

Dave



Re: Nov 18 notes

2010-11-23 Thread David Herman
 How would a new object abstraction T customize them just for instances of T?
 
 By writing its own custom iteration protocol via proxies with the iterate() 
 trap implemented appropriately. E.g.:
 
function MyCollection() { }
MyCollection.prototype = {
iterator: function() {
var self = this;
return Proxy.create({
iterate: function() { ... self ... },
...
});
}
}

I left out the last step: clients would then use this via:

var coll = new MyCollection();
...
for (var x in coll.iterator()) {
    ...
}

Dave



Re: natively negotiating sync vs. async...without callbacks

2010-12-09 Thread David Herman
I pretty much abandoned that line of investigation with the conclusion that 
generators:

http://wiki.ecmascript.org/doku.php?id=strawman:generators

are a good (and well-tested, in Python and SpiderMonkey) design for 
single-frame continuations. They hang together well; in particular, they don't 
have the issues with `finally' that some of the alternatives I talked about do. 
Moreover, the continuation-capture mechanisms based on call/cc or shift/reset 
require additional power in the VM to essentially tail-call their argument 
expression. When I tried prototyping this in SpiderMonkey, I found this to be 
one of the biggest challenges -- and that was just in the straight-up 
interpreter, not in the tracing JIT or method JIT.

Generators work well for lightweight concurrency. As a proof of concept, I put 
together a little library of tasks based on generators:

http://github.com/dherman/jstask

Somebody reminded me that Neil Mix had written a very similar library several 
years ago, called Thread.js:

http://www.neilmix.com/2007/02/07/threading-in-javascript-17/

and there's another library called Er.js that built off of that to create some 
Erlang-like abstractions:

http://www.beatniksoftware.com/erjs/

Dave

On Dec 8, 2010, at 11:36 PM, Tom Van Cutsem wrote:

 The spirit of the proposal is that this special type of statement be a linear 
 sequence of function executions (as opposed to nested function-reference 
 callbacks delegating execution to other code).
  
 The special behavior is that in between each part/expression of the 
 statement, evaluation of the statement itself (NOT the rest of the program) 
 may be suspended until the previous part/expression is fulfilled. This 
 would conceptually be like a yield/continuation localized to ONLY the 
 statement in question, not affecting the linear execution of the rest of the 
 program.
 
 This reminds me of a proposal by Kris Zyp a couple of months ago (single 
 frame continuations)
 https://mail.mozilla.org/pipermail/es-discuss/2010-March/010865.html
 
 I don't think that discussion lead to a clear outcome, but it's definitely 
 related, both in terms of goals as well as in mechanism.
 I also recall it prompted Dave Herman to sketch the design space of 
 (single-frame) continuations for JS:
 https://mail.mozilla.org/pipermail/es-discuss/2010-April/010894.html
 
 Cheers,
 Tom



Re: natively negotiating sync vs. async...without callbacks

2010-12-09 Thread David Herman
PS To be concrete, here's an example code snippet using my jstask library that 
chains several event-generated actions together in a natural way (i.e., in 
direct style, i.e. not in CPS):

var task = new Task(function() {
    var request = new HttpRequest();
    try {
        var foo = yield request.send(this, "foo.json");
        var bar = yield request.send(this, "bar.json");
        var baz = yield request.send(this, "baz.json");
    } catch (errorResponse) {
        console.log("failed HTTP request: " + errorResponse.statusText);
    }
    ... foo.responseText ... bar.responseText ... baz.responseText ...
});

I should also point out that the core of jstask is 7 lines of code. :)

Dave

On Dec 9, 2010, at 7:55 AM, David Herman wrote:

 I pretty much abandoned that line of investigation with the conclusion that 
 generators:
 
 http://wiki.ecmascript.org/doku.php?id=strawman:generators
 
 are a good (and well-tested, in Python and SpiderMonkey) design for 
 single-frame continuations. They hang together well; in particular, they 
 don't have the issues with `finally' that some of the alternatives I talked 
 about do. Moreover, the continuation-capture mechanisms based on call/cc or 
 shift/reset require additional power in the VM to essentially tail-call their 
 argument expression. When I tried prototyping this in SpiderMonkey, I found 
 this to be one of the biggest challenges -- and that was just in the 
 straight-up interpreter, not in the tracing JIT or method JIT.
 
 Generators work well for lightweight concurrency. As a proof of concept, I 
 put together a little library of tasks based on generators:
 
 http://github.com/dherman/jstask
 
 Somebody reminded me that Neil Mix had written a very similar library several 
 years ago, called Thread.js:
 
 http://www.neilmix.com/2007/02/07/threading-in-javascript-17/
 
 and there's another library called Er.js that built off of that to create 
 some Erlang-like abstractions:
 
 http://www.beatniksoftware.com/erjs/
 
 Dave
 
 On Dec 8, 2010, at 11:36 PM, Tom Van Cutsem wrote:
 
 The spirit of the proposal is that this special type of statement be a 
 linear sequence of function executions (as opposed to nested 
 function-reference callbacks delegating execution to other code).
  
 The special behavior is that in between each part/expression of the 
 statement, evaluation of the statement itself (NOT the rest of the program) 
 may be suspended until the previous part/expression is fulfilled. This 
 would conceptually be like a yield/continuation localized to ONLY the 
 statement in question, not affecting the linear execution of the rest of the 
 program.
 
 This reminds me of a proposal by Kris Zyp a couple of months ago (single 
 frame continuations)
 https://mail.mozilla.org/pipermail/es-discuss/2010-March/010865.html
 
 I don't think that discussion lead to a clear outcome, but it's definitely 
 related, both in terms of goals as well as in mechanism.
 I also recall it prompted Dave Herman to sketch the design space of 
 (single-frame) continuations for JS:
 https://mail.mozilla.org/pipermail/es-discuss/2010-April/010894.html
 
 Cheers,
 Tom
 



Re: New private names proposal

2010-12-16 Thread David Herman
 This sounds great, but doesn't this kind of violate referential
 transparency?

That's a loaded criticism. JS doesn't have referential transparency in any 
meaningful sense. But it does generalize the meaning of the dot-operator to be 
sensitive to scoping operators, that's true.

 Couldn't the goals of this be achieved by having a Name constructor
 (albiet less convenient syntax, since you have to use obj[name],
 perhaps that is what you are addressing) or having private name create
 a name Name (to be used like obj[name])?

You can do the same thing with the current proposal (except that private names 
in Allen's strawman are primitive values and not objects). You can write a 
library that produces new private name values, and you can use the bracket 
operator to get and set properties with that name.

But your albeit less convenient syntax is the crux of why I like the proposal 
as is. I would much rather write (and I suspect many programmers would as well):

function Point(x, y) {
    private x, y;
    this.x = x;
    this.y = y;
    ...
}

than

function Point(x, y) {
    var _x = gensym(), _y = gensym();
    this[_x] = x;
    this[_y] = y;
}

Dave



Re: New private names proposal

2010-12-16 Thread David Herman
Without new syntax, isn't soft fields just a library on top of weak maps?

Dave

On Dec 16, 2010, at 3:47 PM, Mark S. Miller wrote:

 
 
 On Thu, Dec 16, 2010 at 3:23 PM, Brendan Eich bren...@mozilla.com wrote:
 On Dec 16, 2010, at 2:19 PM, Mark S. Miller wrote:
 
  Currently in JS, x['foo'] and x.foo are precisely identical in all 
  contexts. This regularity helps understandability. The terseness difference 
  above is not an adequate reason to sacrifice it.
 
 Aren't you proposing the same syntax x[i] where i is a soft field map, to 
 make exactly the same sacrifice?
 
 http://wiki.ecmascript.org/doku.php?id=strawman:names_vs_soft_fields
 
 I am *not* proposing these syntactic extensions. Neither am I avoiding them 
 on that page, since the point of that page is to compare semantics, not 
 syntax. The first paragraph (!) of that page clearly states:
 
 This translation does not imply endorsement of all elements of the names 
 proposal as translated to soft fields, such as the proposed syntactic 
 extensions.
 
 The two issues are orthogonal. Whichever of Names or Soft Fields wins, we can 
 have an orthogonal argument about whether the winner should use this 
 syntactic shorthand. Conversely, whatever the outcome of the syntax argument 
 in this thread, they would apply equally well to either semantics.
  
 
 
 /be
 
 
 
 
 -- 
 Cheers,
 --MarkM



Re: New private names proposal

2010-12-16 Thread David Herman
 I'll address this last point first, since this seems to be the core issue. 
 The question I am raising is: given soft fields, why do we need private names?

I didn't see that asked as a question; I saw it asserted, but not opened for 
discussion.

And I disagree. Now, I happen to think it's not worth blessing libraries simply 
because they could be optimized, but I do not see soft fields as supplanting 
private names -- partly because of usability, but especially because I 
happen to like weak maps very much, and very much hope for a world where ES a) 
makes it easy to write (any number of) soft field libraries and b) makes it 
*easy* to store encapsulated data in objects.

Dave



Re: New private names proposal

2010-12-17 Thread David Herman
Let me make a gentle plea for not creating unnecessary controversy. Take a step 
back: we all seem to agree we would like to provide a more convenient and 
performant way to create private fields in objects. In terms of observable 
behavior in the runtime model, there aren't that many differences between your 
proposed soft fields and the original names proposal or Allen's recent 
revisions. There are a handful of points where we have different ideas about 
what the desired behavior should be, and those are worth discussing.

But let's please not battle over specification mechanism, especially not in 
this phase of design. We shouldn't jeopardize the process over whether it's 
better to conceptualize the feature as storing private fields in a side table 
or internally. Can we try to stay on track with the Harmony process, where we 
recognize that we have common goals and try to move forward from there and 
discuss individual features as objectively as possible, rather than engaging in 
winner-take-all wars?

Remember, the real enemy is the Romans:

http://www.youtube.com/watch?v=gb_qHP7VaZE

Dave

On Dec 16, 2010, at 5:38 PM, Mark S. Miller wrote:

 
 
 On Thu, Dec 16, 2010 at 5:24 PM, David Herman dher...@mozilla.com wrote:
 Ok, I open it for discussion. Given soft fields, why do we need private 
 names?
 
 I believe that the syntax is a big part of the private names proposal. It's 
 key to the usability: in my view, the proposal adds 1) a new abstraction to 
 the object model for private property keys and 2) a new declarative 
 abstraction to the surface language for creating these properties.
 
 As shown on 
 http://wiki.ecmascript.org/doku.php?id=strawman:inherited_explicit_soft_fields,
  the syntax you like applies equally well to both proposals. The fact that I 
 don't like this syntax is not an argument (to one who does like the syntax) 
 that we should do names. Were names adopted, I would still not like this 
 syntax and would still argue against it. Were the syntax adopted, I would 
 still argue that the syntax should be used to make soft fields more 
 convenient, rather than make names more convenient. The arguments really are 
 orthogonal.
 
  
 
 
 And I disagree.
 
 I'm sorry, but what do you disagree with? That I am raising the question?
 
 No, I disagree that the two are in direct competition with one another.
 
 Below, you seem to be saying that given names, why do we need soft fields? 
 How is that not a  there can be only one! Highlander contest? If that's 
 not what you're saying, then what?
 
 It's not what I'm saying. I'm saying that private names are independently 
 useful, and weak maps are independently useful, and given *weak maps* we 
 don't need soft fields. It might have been less confusing if I had left out 
 the latter point. Just to break it down as clearly as possible:
 
 - I don't believe soft fields obviate the utility of private names.
 - I don't believe private names obviate the utility of soft fields.
 - Given that soft fields are expressible via weak maps, I don't believe they 
 are worth adding to the standard.
 
 In fairness, I think the apples-to-apples comparison you can make between the 
 two proposals is the object model. On that score, I think the private names 
 approach is simpler: it just starts where it wants to end up (private names 
 are in the object, with an encapsulated key), whereas the soft fields 
 approach takes a circuitous route to get there (soft fields are semantically 
 a side table, specified via reference implementation, but optimizable by 
 storing in the object).
 
 
 I happen to like weak maps very much, and very much hope for a world where 
 ES a) makes it easy to write (any number of) soft field libraries and b) 
 makes it *easy* to store encapsulated data in objects.
 
 How do names make this easier than soft fields?
 
 The syntax (see above).
 
 Ok, how do names with your syntax make this easier than soft fields with your 
 syntax?
 
  
 
 Dave
 
 
 
 
 -- 
 Cheers,
 --MarkM



Re: I recuse myself (was: Private names use cases)

2010-12-21 Thread David Herman
 I never said I don't want syntactic support. I said I don't like the syntax 
 you proposed. You and Dave have now both said that you consider this to be 
 the main issue.

Hm, I certainly didn't intend to say that. I'm not quite sure what you're 
referring to that I said. I don't necessarily have a single sine-qua-non in 
this design space. Sorry I haven't been as clear as I should have.

There's a lot going by in this conversation, and I don't want to add too much 
noise, but I do want to emphasize something: the various options we've 
discussed *all* have a syntactic component, even if we separate out the 
|private| syntax. Brendan mentioned this in reply to David-Sarah, but I think 
it's worth repeating: whether in the soft fields approach or the private names 
approach, expr[expr] is overloaded in a new way.

Maybe that's the key sticking point for me about the soft fields approach: 
overloading that syntax to mean lookup in a side table is what seems like a 
drastic break from the intuitive model of objects. I have nothing against side 
tables as a programming idiom; it's when you make them look like they aren't 
side tables that they become confusing. Especially when you can do 
counter-intuitive things like add new properties to a frozen object. Of course, 
there are clearly use cases where you want to associate new data with a frozen 
object, and convenience would be helpful. I'm just not convinced that making it 
look like ordinary object lookup is the right programmer-interface. So yes, 
it's a syntax issue, but it's a syntax issue that frames the mental model, and 
I think that matters.
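
The case I have in mind is something like this (hypothetical soft-field syntax; 
SoftField as a constructor is purely illustrative):

var frozen = Object.freeze({});
var field = new SoftField();
frozen[field] = "new data";    // reads like adding a property to a frozen object...
frozen[field];                 // ..."new data" -- but it actually went to a side table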

What am I saying with all this? Not sure. I'm disheartened at the level of 
controversy too, but I think it's worth pushing through, because as a JS 
programmer, I really feel the lack of private fields-or-properties (with 
apologies to the English language... trying to remain diplomatic here...) in 
day-to-day programming.

Dave



Re: New private names proposal [repost]

2010-12-22 Thread David Herman
On Dec 21, 2010, at 10:41 PM, David-Sarah Hopwood wrote:

 Again you seem to be confusing the inherited soft fields proposal with
 the *separate* proposal on desugaring the private name syntax to inherited
 soft fields.

I think I may have been misunderstanding what Mark was actually 
proposing/advocating, then. I'm happy to be disabused of my mis-reading.

But on re-reading, I still can't quite make sense of the "Can we subsume 
Names?" section. There are two syntactic components to the private names 
proposal:

(1) the bracket-notation is generalized to recognize private name values to 
look for private properties

(2) the dot-notation and colon-notation are generalized to use private names 
when their property name is bound by a |private| declaration

But the "Can we subsume Names?" subsection seems to mix these two cases up. To 
match up with (1), you'd need to interpret *all* bracket notation as a 
potential lookup of a soft field, i.e. something like:

e1[e2] ~~>
    let (t1 = e1, t2 = e2) {
        t2 instanceof SoftField
          ? t2.get(t1)
          : t1[t2]
    }

(where the rewritten brackets are the true brackets, i.e., not re-desugared).

Dave



Re: New private names proposal

2010-12-22 Thread David Herman
On Dec 22, 2010, at 2:00 AM, David Flanagan wrote:

 On 12/22/2010 01:02 AM, David Herman wrote:
 
 function Point(x, y) {
 private #x, #y;
 this.#x = x;
 this.#y = y;
 }
 
 I keep seeing this basic constructor example.  But isn't this the case that 
 Oliver raised of private being over generative?  Those private names have 
 been generated and discarded, and those two fields can never be read again...

Oops, I left out the ellipses:

function Point(x, y) {
    private #x, #y;
    this.#x = x;
    this.#y = y;
    ...
}

Of course, if you wanted to extend the scope further, you could lift it out of 
the constructor.
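
E.g. (same strawman syntax; the getter is just for illustration):

private #x, #y;

function Point(x, y) {
    this.#x = x;
    this.#y = y;
}

Point.prototype.getX = function() { return this.#x; };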

As for the complaint of it being over-generative, that's mitigated in this case 
by the sigil. For example, if you wrote:

function Point(x, y) {
    private #x, #y;
    this.#x = x;
    this.#y = y;
}
Point.prototype = {
    ... #x ... #y ...
};

you'd get a compile-time error since #x and #y aren't in scope. Unless, of 
course, they are already in scope as another private, although I'd expect this 
kind of thing to be a bit rarer than variable scope errors since I would guess 
private names wouldn't be nested and repurposed as often as variables -- that's 
just a guess; it's hard to be sure.

Also, you can only take the "but if you do it wrong, it doesn't work" arguments 
so far. After all, the generativity is by design. The question is whether that 
design will be too surprising and confusing. We shouldn't make JS too 
complicated or baroque, but we shouldn't nix an idea based on assuming too 
little of programmers. IOW, I think the "too complicated" criticism should be 
used with competent programmers in mind.

Anyway, I'm also just thinking out loud. :)

 I like private as a keyword in object literals: it doesn't seem any more 
 confusing than get and set in literals. I don't like seeing it in functions 
 though: there it looks like a kind of var and const analog.

Isn't this less the case when what follows the keyword isn't an ordinary 
identifier, i.e., has the sigil?

 Is there any syntax from the old ES4 namespace stuff that could be applied 
 here?

An interesting thought, but I'm skeptical -- ES4 namespaces are pretty 
dis-Harmonious, and for good reason: there be dragons. :)

Dave



  1   2   3   4   5   6   7   >