Re: System.import()?

2015-08-21 Thread James Burke
On Tue, Aug 18, 2015 at 11:36 AM, Dave Herman dher...@mozilla.com wrote:
 https://github.com/whatwg/loader/blob/master/roadmap.md

From a loader/tool perspective, it seems like working out more of the
Stage 1 items before many of the Stage 0 items will lead to
higher-leverage payoffs: dynamic loading and the module meta help existing
transpiling efforts and better inform userland module concatenation
efforts.

In the spirit of the extensible web, defining these lower level APIs
and more of the loader would make it possible to use custom elements
to help prototype a module tag. The custom element mechanism can be
used in a similar way to how JS transpilers are used for the existing
module syntax.

If the Stage 0 normalization is about normalizing IDs to URLs as the
internal normalized storage IDs, I suggest reaching out to talk with
more people in the AMD loader community about the reasons behind it,
how it sits with AMD loader plugin IDs, seeing IDs more like scoped,
nested property identifiers, and module concatenation.

James


Re: System.import()?

2015-08-21 Thread James Burke
On Fri, Aug 21, 2015 at 8:12 AM, Domenic Denicola d...@domenic.me wrote:
 No, custom elements cannot prototype the module tag, because custom elements 
 will always be parsed as unknown elements. So e.g.

Ah, thanks for pointing that out. I was thinking more of the script
src style of tag. The need for module bodies directly inside HTML
tags, vs. loaded as separate files, is of lower importance, with less
leveraged impact, than the other module/loader APIs.

James


Re: Are ES6 modules in browsers going to get loaded level-by-level?

2015-04-24 Thread James Burke
On Fri, Apr 24, 2015 at 8:42 AM, Allen Wirfs-Brock al...@wirfs-brock.com
wrote:


 I think you're barking up the wrong tree. ECMAScript has never said
 anything about the external representation of scripts (called Programs
 prior to ES 2015), and the ES 2015 spec doesn't impose any requirements
 upon the external representation of Modules. One Script or Module per
 external container or multiple Scripts and Modules per external container -
 it makes no difference to the ES 2015 semantics. Such encoding issues are
 entirely up to the host platform or ES implementation to define. But the
 platform/implementation has no choice in regard to the semantics of a
 Module (including mutability of slots or anything else in the ES 2015
 specification). No matter how a Module is externally stored, it must conform
 to the ES 2015 module semantics to be a valid ES 2015 implementation.



Understood: the ES2015 spec makes it a point to not get into this. I was
hoping that the module champions involved with the ES2015 spec would be on
this list to respond to how to use modules in practice.

So perhaps I was incorrect to ask for something “officially blessed”; what
I am after is a bundling form that module champions know will meet the
ES2015 semantics of a Module.

The difficulty is precisely that ES2015 sets strong semantics on a Module
that seem difficult to translate to a script form that could allow bundling.

I expect the module meta to play a fairly important role in that translation,
so having it defined, in some ES spec or elsewhere, along with how it might
work in bundling, would also be really helpful to complete the module
picture.



 So, if you want physical bundling, you need to convince the platform
 designers (eg, web, node, etc) to support that.  Personally, I think a zip
 file makes a fine bundle and is something I would support if I was
 building a command-line level ES engine.



See my first post to this thread for why, when we had this in practice in
FirefoxOS (a zip file with the contents), it was decided to use script
bundling to increase performance. With the extensible web and more userland
JS needed to bootstrap things like view selection and custom elements,
getting the JS up and running as soon as possible is even more important.

The arguments so far against script bundling have been that there are better
things that can be done for performance, but I do not see that in
practice, particularly for the offline web on mobile devices. Besides that,
I see modules as units of reusable code, like functions, which do allow
bundling and nesting.

I can understand that is not the goal of ES2015, so hopefully the use case
feedback will be useful to help flesh out a full module system that can use
the ES module semantics.

James


Re: Are ES6 modules in browsers going to get loaded level-by-level?

2015-04-24 Thread James Burke
On Fri, Apr 24, 2015 at 11:39 AM, Brendan Eich bren...@mozilla.org wrote:

 Not bundling in full; your previous post talked about HTTP2 but mixed
 dependency handling and bundling. You seemed not to advert to the problem
 of one big bundle being updated just to update one small member-of-bundle.
 One can be skeptical of HTTP2 but the promise is there to beat bundling.

 So in a future where ES6 or above is baseline for web developers, and
 HTTP2 is old hat, there won't be the full bundling and module body
 desugaring you seem to be insisting we must have in perpetuity. (Yes, there
 will be dependency graphs expressed concisely -- that's not bundling.)
 Right?


There are some nice things with HTTP2 and being able to update a smaller
set of files vs needing to change a bundle.

I am mostly concerned about startup performance on mobile devices, and
about the offline cases where HTTP2 is not part of the equation,
at least not after the first request. The Firefox OS Gaia apps are
currently zip files installed on the device. The same local disk profile
exists with service worker-backed apps that work offline.

In the Firefox OS case, loading the bundle of modules performs better than
not bundling, because multiple reads from local disk were slower than the one
read of a bundled JS file. I expect this to remain true
regardless of an ES6 baseline or the existence of HTTP2.

A bundle of modules that have already been traced, usually ordered with the
least-dependent modules first and the most-dependent last, arrives in one
linearized fetch. In the unbundled case, the dependency tree needs to be
discovered and then fetched as the modules are parsed.

It is hard to see the second approach winning by enough to discard the
desire to bundle modules, even if the bundle alternative is some sort of zip
format that requires the whole thing to be available in memory. There is
still the read, the parse, and the back-and-forth traffic to that memory
area to convert it into file responses. With service workers in play, it
just adds to the delay.

James


Re: Re: Are ES6 modules in browsers going to get loaded level-by-level?

2015-04-23 Thread James Burke
On Thu, Apr 23, 2015 at 7:47 AM, Domenic Denicola d...@domenic.me wrote:

  Indeed, there is no built-in facility for bundling since as explained in
 this thread that will actually slow down your performance, and there’s no
 desire to include an antipattern in the language.





Some counterpoint:

Privileged/certified FirefoxOS apps are delivered as zip files right now,
with no HTTP involved. Asking for multiple files from these local
packages was still slower than fetching one file with the scripts bundled,
due to slower IO on devices, so the certified apps in FirefoxOS right now
still do bundling for speed. No network in play, just file IO.

With service workers, it is hard to see the unbundled case being faster
either, since the worker needs to be consulted for every request, so in
that FirefoxOS app case I would still want bundling.

With HTTP2, something still needs to do the same work as bundling, where it
traces the dependencies and builds a graph so that all the modules in that
graph can be sent back in the HTTP2 connection.

So the main complexity of bundling, a build step that traces dependencies
and makes a graph, is still there. Might as well bundle, so that even
when serving from the browser cache it will be faster; see the device IO
concerns above.

Plus, bundling modules together can be more than just a speed concern: a
library may want to use modules in separate files and then bundle them into
one file for easier encapsulation/distribution.

I am sure the hope is that package managers may help for the distribution
case, but this highlights another use related to bundling: encapsulation.
Just like nested functions are allowed in the language, nested module
definitions make sense long term. Both functions and modules are about
reusing units of code. Ideally both could be nested.

I believe that is a bigger design hurdle to overcome and maybe that also
made it harder for the module champions to consider any sort of bundling,
but bundling really is a thing, and it is unfortunate it is not natively
supported for ES modules.

The fun part about leaving this to transpilers is trying to emulate the
mutable slots for import identifiers. I think it may work by replacing the
identifiers with `loader.get('id').exportName`, or whatever the module
meta/loader APIs might be, so having those APIs is even more important for
a usable module system. There is probably more nuance to the transformation
than that though, like making sure to add 'use strict' to the function
wrapper.

It is kind of sad that using ES modules means not really using them at
runtime: transpiling back to ES5-level code and needing to ship a bootstrap
loader script that allows slotting that ES5-level code into the ES loader.
Given the extra script and the transpiling concerns, it does not seem like
an improvement over existing ES5-based module systems.

James


Re: Are ES6 modules in browsers going to get loaded level-by-level?

2015-04-23 Thread James Burke
On Thu, Apr 23, 2015 at 4:48 PM, Brendan Eich bren...@mozilla.org wrote:

 Your lament poses a question that answers itself: in time, ES6 will be
 the base level, not ES3 or ES5. Then, the loader can be nativized.
 Complaining about this now seems churlish. :-|


So let's stay on this specific point: bundling will still be done even with
ES modules and a loader that would natively understand ES modules in
unbundled form. Hopefully the rest of my previous message gave enough data
as to why.

If not natively supported in ES, it would be great to get a pointer to the
officially blessed transform of an ES module body to something that can be
bundled: something that preserves the behavior of the mutable slots and
allows using the module meta.

James


Re: Any news about the `module` element?

2014-12-21 Thread James Burke
On Fri, Dec 19, 2014 at 8:55 PM, caridy car...@gmail.com wrote:
 Yeah, the idea is to get engines (V8 maybe) to implement what is spec’d in 
 ES6 today, and get a very basic implementation of script type=module. 
 Remember, the goal is to get this out there asap to start gathering feedback 
 from the community, and this seems to be the minimum requirements.

It would be fascinating to know why this is the prioritization. To me,
this looks like trying to rush something to be implemented to claim
some sort of progress. However, it is so limited and has so many
questions around it that it is hard to see how it is worth the swirl in
the community to introduce it.

The module tag is really a distraction. It is not needed to build a
working system. If the following pieces were worked out, it could be
skipped completely, and in the Worker case it needs to be skipped (I am
sure you are aware of the coming Service Worker bliss, so this is not just a
curious side issue):

* What has been referred to here as “module meta”. This also includes
being able to dynamically import into the loader instance that loaded the
module.
* A loader.
* Module inlining.

Those all exist in some form in existing ES5-based module systems, and
all contribute to a full module system. They do not need a module tag
to work. The browser use case definitely needs some new capabilities
to allow a loader to work harmoniously with CORS/Content Security
Policy, but that does not require a module tag.

I would much rather see people’s time spent on those other items
instead of swirling on a module tag.


Re: Any news about the `module` element?

2014-12-19 Thread James Burke
On Thu, Dec 18, 2014 at 6:13 PM, caridy car...@gmail.com wrote:
 What does this mean?

 * no loader (if you need on-demand loading, you can insert script tags with
 type="module", similar to what we do today for scripts)
 * no hooks or settings (if you need more advanced features, you will have to
 deal with those manually)

 Open questions:

 * how to fallback? ideally, we will need a way to detect modules support,
 equivalent to noscript in semantics.
 * we need to reserve some resolution rules to support mappings and hooks in
 the future (e.g.: `import foo from "foo/mod.js"` will not work because `foo`
 will require loader configs or hooks to be defined, while `import foo from
 "./foo/mod.js"` and `import foo from "//cdn.com/foo/mod.js"` will work just
 fine).

Also:

* How does dynamic loading work in a web worker? In general, how does
dynamic loading work when there is no DOM?
* Module IDs should not be paths with .js values (see package/main
types of layout). Maybe that is what you meant in your second
question.

Perhaps the module tag is a DOM implementation detail that backs the
standardized API for dynamic loading, but it seems odd to focus on that
backing detail first. I am sure there is more nuance though; perhaps
you were trying to give quick feedback, and if so, apologies for
reading too much into it.

James


Re: Modules: suggestions from the field

2014-06-23 Thread James Burke
I added a doc about module inlining/nesting, and why it should be
supported in a module system. Mentions SPDY/HTTP2, packaged formats,
Node’s Browserify, and transpiling:

https://github.com/jrburke/module/blob/master/docs/inlining.md

James

On Mon, Jun 16, 2014 at 1:21 PM, James Burke jrbu...@gmail.com wrote:
 I have suggested alterations to the modules effort, and an in-progress
 prototype[1].

 It is based on field usage of existing JS module systems, and my work
 supporting AMD module solutions for the past few years.

 There is a document describing what it attempts to fix[2]. The table
 of contents from that document:

 —

 This project reuses a lot of thinking that has gone into the
 ECMAScript 6 modules effort so far, but suggests these changes:

 * Parse for module instead of import/export
 * Each module body gets its own unique module object
 * Use function wrapping for module scope

 They are motivated by the following reasons:

 * import syntax disparity with System.import
 * Solves the moduleMeta problem
 * Solves nested modules and allows inlining
 * Easy for base libraries to opt in to ES modules

 It has these tradeoffs:

 * Cycle support
 * Export name checking

 —

 I am willing to talk to TC-39 members in realtime channels (video/in
 person) that may need more background or might want to discuss
 further, but I am less likely to discuss it in email threads.

 I will likely continue that prototype effort even if the more recently
 visible issues for modules are solved differently, as the current
 state of the baseline ES system will still require bootstrap loader
 scripts. For the bootstrap scripts, I will need some of the concepts
 in the prototype for the AMD consumers I have traditionally supported.

 There is a “story time” document[3] for a narrative around how the
 prototype relates to smaller ideas around code referencing and reuse.

 [1] https://github.com/jrburke/module
 [2] https://github.com/jrburke/module/blob/master/docs/module-from-es.md
 [3] https://github.com/jrburke/module/blob/master/docs/story-time.md

 James


Re: ModuleImport

2014-06-19 Thread James Burke
On Thu, Jun 19, 2014 at 1:15 AM, David Herman dher...@mozilla.com wrote:
 ## Proposal

 OK, so we're talking about a better syntax for importing a module and binding 
 its named exports to a variable (as distinct from importing a module and 
 binding its default export to a variable). Here's my proposal:
 ```js
 import * as fs from "fs"; // importing the named exports as an object
 import Dict from "dict";  // importing a default export, same as ever
 ```

Two other possibilities:

1) Only allow export default or named exports, not both.

The reason default export is used in module systems today is because
there is just one thing that wants to be exported, and it does not
matter what its name is because it is indicated by the module ID.
Sometimes it is also easier to just use an object literal syntax for
the export than expanding that out into individual export statements.

Allowing both default and named exports from the same module is
providing this syntax/API extension. If there are ancillary
capabilities available, a submodule in a package is more likely how
they will be used, accessed as a default export via a module ID like
'mainModule/sub', instead of using a default and a named export
from the same module.

It would look like this:

import fs from 'fs'; // only has named exports, so get an object holding all the exports
import { readFile } from 'fs'; // only the readFile export
import Dict from 'dict'; // a default export

— or —

2) Only allow `export` of one thing from a module, and `import {}`
just means getting the named property off that single export. This
removes the named export checking, but that benefit was always a bit
weak, and even weaker with the favoring of default export.

//d.js, module with a default export, note it does not need a name,
//`export` can only appear once in a file.
export function() {};

//fs.js, module with “multiple” exports
export {
  readFile: function(){}
};

//main.js using 'd' and 'fs'
import d from 'd';
import { readFile } from 'fs';

—

Both of those possibilities also fix the disjointed System.import() use
with a default export: no need to know that a `.default` is needed to
get the usable part. It will match the `import Name from 'id'` syntax
better.

James


Re: ModuleImport

2014-06-19 Thread James Burke
On Thu, Jun 19, 2014 at 12:13 PM, Domenic Denicola
dome...@domenicdenicola.com wrote:
 From: es-discuss es-discuss-boun...@mozilla.org on behalf of James Burke 
 jrbu...@gmail.com

 1) Only allow export default or named exports, not both.

 As a modification of the current design, this hurts use cases like

 ```js
 import glob, { sync as syncGlob } from "glob";
 import _, { zip } from "underscore";
 ```

It is just as likely the module author will specify sync and zip as
properties of their respective default exports, particularly since
those are coming from JS modules that originate in existing module
systems.

So a destructuring assignment would be needed, following the default
import. It works out though, and is not that much more typing. That,
or those pieces would be available as 'underscore/zip' or 'glob/sync'
imports.

The argument for allowing both default and named exports seems
ill-defined based on the data points known so far, and avoiding it
reduces the number of import forms and aligns better with
System.import use.

James


Modules: suggestions from the field

2014-06-16 Thread James Burke
I have suggested alterations to the modules effort, and an in-progress
prototype[1].

It is based on field usage of existing JS module systems, and my work
supporting AMD module solutions for the past few years.

There is a document describing what it attempts to fix[2]. The table
of contents from that document:

—

This project reuses a lot of thinking that has gone into the
ECMAScript 6 modules effort so far, but suggests these changes:

* Parse for module instead of import/export
* Each module body gets its own unique module object
* Use function wrapping for module scope

They are motivated by the following reasons:

* import syntax disparity with System.import
* Solves the moduleMeta problem
* Solves nested modules and allows inlining
* Easy for base libraries to opt in to ES modules

It has these tradeoffs:

* Cycle support
* Export name checking

—

I am willing to talk to TC-39 members in realtime channels (video/in
person) that may need more background or might want to discuss
further, but I am less likely to discuss it in email threads.

I will likely continue that prototype effort even if the more recently
visible issues for modules are solved differently, as the current
state of the baseline ES system will still require bootstrap loader
scripts. For the bootstrap scripts, I will need some of the concepts
in the prototype for the AMD consumers I have traditionally supported.

There is a “story time” document[3] for a narrative around how the
prototype relates to smaller ideas around code referencing and reuse.

[1] https://github.com/jrburke/module
[2] https://github.com/jrburke/module/blob/master/docs/module-from-es.md
[3] https://github.com/jrburke/module/blob/master/docs/story-time.md

James
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: multiple modules with the same name

2014-01-28 Thread James Burke
On Mon, Jan 27, 2014 at 11:53 PM, Marius Gundersen gunder...@gmail.com wrote:
 AFAIK ES-6 modules cannot be bundled (yet). But if/when they can be bundled
 this is an argument for silently ignoring duplicates

Loader.prototype.define seems to allow bundling, by passing strings
for the module bodies:

https://people.mozilla.org/~jorendorff/js-loaders/Loader.html
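For illustration, a bundle built on that method might look roughly like
this; the exact signature and options in the draft may differ, so treat
this as a sketch:

```js
// One bundled file that registers several module bodies as source strings.
var loader = new Loader();
loader.define('lib/a', 'export function helper() { return 1; }');
loader.define('app/main', "import { helper } from 'lib/a'; helper();");
```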


Re: multiple modules with the same name

2014-01-27 Thread James Burke
On Mon, Jan 27, 2014 at 3:14 PM, John Barton johnjbar...@google.com wrote:
 On Mon, Jan 27, 2014 at 2:50 PM, Sam Tobin-Hochstadt sa...@cs.indiana.edu
 wrote:
 Imagine that some browser has an ok-but-not-complete implementation of
 the X library, but you want to use jQuery 17, which requires a better
 version.  You need to be able to replace X with a polyfilled update to
 X, and then load jQuery on top of that.

 Note that this involves indirect access in the same library (jQuery)
 to two versions of X (the polyfill and the browser version), which is
 why I don't think Marius's worry is fixable without throwing out the
 baby with the bathwater.


 Guy Bedford, based on experiences within the requirejs and commonjs worlds,
 has a much better solution for these scenarios. (It's also similar to how
 npm works).

 Your jQuery should depend upon the name X, but your Loader should map the
 name X when loaded by jQuery to the new version in Loader.normalize(). The
 table of name mappings can be configured at run time.

 For example, if some other code depends on X@1.6 and jQuery needs X@1.7,
 they each load exactly the version they need because the normalized module
 names embed the version number.

In the AMD world, map config has been sufficient for these needs[1].

As a point of reference, requirejs only lets the first module
registration win; any subsequent registrations for the same module ID
are ignored. “Ignore” was chosen over “error” because with multiple
module bundles, the same module ID/definition could show up in two
different bundles (think multi-page apps that have a “common” bundle and
page-specific bundles).

I do not believe that case should trigger an error. It is just
inefficient, and tooling for bundles could offer to enforce finding
these inefficiencies vs the browser stopping the app from working by
throwing an error.

It is true that the errors introduced by “ignore” could be harder to
detect given that things may mostly work. The general guide in this
case for requirejs was to be flexible, in the same spirit as HTML
parsing.

Redefinition seems to allow breaking the expectation that the module
value for a given normalized ID is always the same.

When the developer wants to explicitly orchestrate different module
values for specific module ID sets, something like AMD’s map config is
a good solution as it is a more declarative statement of intent vs
code in a module body deciding to redefine.

Also, code in module bodies does not have enough information to properly
redefine for a set of module IDs the way map config can. Map config has
been really useful for supporting two different versions of a module
in an app, and for providing mocks for certain modules in unit
testing.
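For reference, the map config in [1] takes this shape (module IDs follow the
requirejs documentation's example):

```js
requirejs.config({
  map: {
    // when modules under 'some/newmodule' ask for 'foo', give them foo1.2
    'some/newmodule': { 'foo': 'foo1.2' },
    // modules under 'some/oldmodule' keep getting foo1.0
    'some/oldmodule': { 'foo': 'foo1.0' }
  }
});
```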

Given what has been said so far for use cases, I would prefer either
“ignore” or “error” over redefinition, with a preference for “ignore”
over “error” based on the above.

[1] https://github.com/amdjs/amdjs-api/wiki/Common-Config#wiki-map

James


Mutable slots/indirect value API

2013-11-03 Thread James Burke
With the import/export mutable slots concept, does it make sense to
allow an API that has expresses that type of concept? I think it would
allow creating a module-function thunk that would allow robust cycles
as mentioned here:

https://mail.mozilla.org/pipermail/es-discuss/2013-November/034576.html

So something that returns a value, but when it is looked up by the
engine, it really is just an indirection to some other value that can
be set later. Call it IndirectValue for purposes of illustration, but
the name is not that important for this message:

var value = new IndirectValue();

// something is a kind of 'ref to a ref' engine type under the covers
var something = value.ref();

typeof something === 'undefined' // true at this point

// Some time later, the code that created the
// IndirectValue sets the final value for it.
value.set([100, 200]);

// Then later,
Array.isArray(something) // true now

IndirectValue.prototype.set() could only be called once, and the
engine under the covers could optimize the indirections after the
set() is called so that the indirection would no longer be needed.

James


Re: Mutable slots/indirect value API

2013-11-03 Thread James Burke
On Sun, Nov 3, 2013 at 12:34 PM, David Herman dher...@mozilla.com wrote:
 IOW expose the first-class reference type of ECMA-262 via a standard 
 library? Just say no! :)

I was thinking that if they were used anyway by the module system,
formalizing them might help, in the “provide the primitives” sort of API
design. I am sure that kind of consideration is probably quite a bit
of work though (and sounds scary too, from your response), so sorry
for the distraction.

 BTW, this whole module-function rigamarole only exists for AMD compatibility, 
 so it's only important for it to demonstrate interoperability. For normal ES6 
 use cases there's just no need to use it.

I was not asking about it related to AMD compatibility. AMD's cycle
support is not that strong, so this capability would not be used
specifically for AMD conversions, as that concept is not possible now
with AMD.

This came out of wanting some other way to inline multiple ES-type
modules in a file, without needing to rely on the JS-strings-in-JS
approach mentioned in the previously mentioned thread. The thought of
being able to use a function instead was appealing, but I wanted to be
sure cycle support similar to ES modules could work in that format.
Not an exact match for true import/export syntax, but maybe good
enough for single, default-export types of modules.

I will try to wait until the later November design artifacts are done
before asking more questions.

James


Re: Modules loader define method

2013-11-01 Thread James Burke
On Thu, Oct 31, 2013 at 8:32 PM, Jeff Morrison lbljef...@gmail.com wrote:
 Throwing this out there while I stew on the pros/cons of it (so others can
 as well):
 I wonder how terrible it would be to have this API define module bodies in
 terms of a passed function that, say, accepted and/or returned a module
 object?

This would mean allowing `import` and `export` inside a function,
which starts to break down the semantic meaning of what a module is,
and how to refer to them. Any function would be allowed to import or
export. What does that mean? Does a function name now qualify as a
module ID? Why do import statements use string IDs?

If the function wrapping was restricted to only System.* calls to
express dependencies, then it loses out on the cycle benefits of
import and export: it would not be possible to adequately convert a
module that used export/import to a plain function form.

For me, at least as an end user, it seems more straightforward to just
allow `module 'id' {}`. It also avoids the ugliness of having strings
of JS inside JS files. I appreciate it may have notable semantic
impacts though.

James


Re: Modules loader define method

2013-11-01 Thread James Burke
On Fri, Nov 1, 2013 at 1:04 PM, Jeff Morrison lbljef...@gmail.com wrote:
 No, that's why I said the function generates an instance of a Module object
 imperatively (we're already in imperative definition land with this API
 anyway).
 No need for `import` or `export`

My understanding is that there is no way to express a mutable slot
like the ones that import/export creates using existing syntax, or
some property on an object.

I very well could be incorrect. Looking at this:
http://wiki.ecmascript.org/doku.php?id=harmony:module_loaders#module_objects

and:
https://github.com/jorendorff/js-loaders

I cannot see how that might work, but the info seems sparse, or at
least I could have misunderstood it.

Perhaps you know how a mutable slot could be expressed using existing
syntax for creating Module objects? Illustrating how would clear up a
big disconnect for me.

James


Re: Modules loader define method

2013-11-01 Thread James Burke
On Fri, Nov 1, 2013 at 2:19 PM, Jeff Morrison lbljef...@gmail.com wrote:
 (I'm operating on the assumption that the Module constructor is still part
 of the spec):

 ```
 System.define({
   A: {
     deps: ['B', 'C'],
     factory: function(B, C) {
       var stuff = B.doSomething();
       return new Module({stuff: stuff});
     }
   },
   B: {
     deps: ['D', 'E'],
     factory: function(D, E) {
       return new Module({
         doSomething: function() { ... }
       });
     }
   }
 });
 ```

Do you know if the factory arguments are regular variable references,
or are they actually an import-like mutable slot that at a later time
may hold the Module value? I did not think it was possible for them to
be mutable slots; those were reserved for import/export statements.
statements?

But maybe the factory args *are* mutable slot entities, instead of
just variable references? If so, that fixes my disconnect. Still
trying to understand the nuances with the mutable slots.

If they are just regular variables, I do not believe they work for a
cycle, at least they do not for AMD-type systems, as the factory
argument would be undefined for at least one part of the cycle chain:

```
System.define({
  A: {
    deps: ['B'],
    factory: function(B) {
      return new Module({
        prefix: 'thing',
        action: function() {
          return B.doSomething();
        }
      });
    }
  },
  B: {
    deps: ['A'],
    factory: function(A) {
      return new Module({
        doSomething: function() { return A.prefix; }
      });
    }
  }
});
```

James


Re: Standard modules - concept or concrete?

2013-06-20 Thread James Burke
On Thu, Jun 20, 2013 at 9:08 AM, Sam Tobin-Hochstadt sa...@ccs.neu.edu wrote:
 On Thu, Jun 20, 2013 at 10:26 AM, Kevin Smith zenpars...@gmail.com wrote:
 I wonder, though, if this might create issues for polyfilling:

 // polyfillz.js
 if (this.Promise === void 0)
   this.Promise = function() { ... }

 // main.js
 import "polyfillz.js";
 new Promise();

 This would refuse to compile, right?  We'd have to introduce all of our
 polyfills in a separate (previous) compilation/execution cycle.

 Yes, like so:

 <script src="polyfillz.js"/>

 Note that this is already the way people suggest using polyfills; see
 [1] for an example.

I have found that once I have module loading, I want the dependencies
to be specified by the modules that use them, either via the
declarative dependency syntax or via module loader APIs, and at the
very least to avoid script tags, so the optimization tools can work
solely by tracing module/JS loading APIs. In this case, only the model
set of modules would care about setting up IndexedDB access,
not the top level of the app.

Example, this AMD module:

https://github.com/jrburke/carmino/blob/master/www/lib/IDB.js

Asks for 'indexedDB!', which is an AMD loader plugin:

https://github.com/jrburke/carmino/blob/master/www/lib/indexedDB.js

which feature detects and uses a module loader API to load a shim if
it is needed. So the IDB module will not execute until that optional
shim work is done.

I believe this will also work via the ES Module Loader API, but I am
calling it out just in case I missed something. I want to be sure
there are options that do not require using script src tags, except
maybe one to bootstrap a set of Module Loader hooks.
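The linked plugin boils down to something like this simplified sketch (the
shim module ID is illustrative):

```js
// AMD loader plugin: feature-detect IndexedDB and only load a shim
// module when the native API is missing.
define(function () {
  return {
    load: function (name, parentRequire, onload) {
      if (typeof indexedDB !== 'undefined') {
        // native support: nothing extra to load
        onload(indexedDB);
      } else {
        // 'IDBShim' is an illustrative module ID for the shim
        parentRequire(['IDBShim'], function (shim) {
          onload(shim);
        });
      }
    }
  };
});
```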

James


Re: Minor questions on new module BNF

2013-06-04 Thread James Burke
On Tue, Jun 4, 2013 at 8:02 AM, Domenic Denicola
dome...@domenicdenicola.com wrote:
 From: Yehuda Katz [wyc...@gmail.com]

 In general, expectations about side-effects that happen during module 
 loading are really edge-cases. I would go as far as to say that modules that 
 produce side effects during initial execution are doing it wrong, and are 
 likely to produce sadness.

 In this case, the `import` statement is just asking the module loader to 
 download someModule, but allowing the app to move on with life and not 
 bother executing it. This would allow an app to depend on a bunch of 
 top-level modules that got executed only once the user entered a particular 
 area, saving on initial boot time.

 I don't think this is correct. It is strongly counter to current practice, at 
 the very least, and I offered some examples up-thread which I thought were 
 pretty compelling in showing how such side-effecting code is fairly widely 
 used today.

 This isn't a terribly important thing, to be sure. But IMO it will be very 
 surprising if

 ```js
 import x from "x";
 ```

 executes the module x, producing side effects, but

 ```js
 import "x";
 ```

 does not. It's surprising precisely because it's in that second case that 
 side effects are desired, whereas I'd agree that for modules whose purpose is 
 to export things producing side effects is doing it wrong.

Agreed, and this is at least what is expected in AMD code today. Not
all scripts export something, but they are still part of a dependency
relationship (they may be event listeners/emitters). The `import "x"` form
expresses that relationship.

I do like the idea of module bodies not being executed (or even
parsed?) if they are not part of an explicit System.load or import chain.
For code that wants to delay the execution of some modules though, I
expect that trick to be worked out via a delayed System.load() call
rather than something to do with an `import "x"` combined with a System.get().

This is how it works in AMD today: define()'d modules are not executed
until they are part of a require chain. Some folks use this to deliver
define()'d modules in a bundle, but only trigger their execution on
some later runtime event, and then do a require(['x']) call (which is
like System.load) that would then execute the define()'d 'x' module.
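That pattern looks roughly like this (module and element names are
illustrative):

```js
// Delivered in a bundle: define()'d up front, but not executed yet.
define('reports', ['charting'], function (charting) {
  return { show: function () { /* render something with charting */ } };
});

// Later, a runtime event triggers execution of the bundled module.
document.getElementById('show-reports').addEventListener('click', function () {
  require(['reports'], function (reports) {
    reports.show();
  });
});
```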

So: yes to delayed execution (and even delayed parse), but not via
`import "x"` + System.get(), just via System.load(), and all import
forms doing the same thing for module body execution.

James


Re: Module naming and declarations

2013-05-20 Thread James Burke
On Mon, May 20, 2013 at 12:07 PM, Kevin Smith zenpars...@gmail.com wrote:
 On the other hand, I think it is possible with URLs to create a system which
 truly does work out-of-the-box.

 Let's imagine a world where publicly available modules are located at sites
 specifically designed for naming and serving JS modules.  Call it a web
 registry.  Intra-package dependencies are bundled together using lexical
 modules - the package is the smallest unit that can be referenced in such a
 registry.  The registry operates using SPDY, with a fallback on HTTPS, so
 for modern browsers multiplexing is not a critical issue.  In such a world,

There are lots of problems with this kind of URL-based ID with a web
registry, which I will not enumerate because they basically boil down
to the problems with using URLs: URLs, particularly when version
information gets involved, are too restrictive. The IDs need to have
some fuzziness to make library code *sharing* easier. I have given
some real world examples previously.

That fuzziness needs to be resolved, but it should be done once, at
dependency install time, not for every run of the module code.
Dependency install time can just mean “create a file at this
location”; it does not mandate tools.

At this point, I would like to see “only URLs as default IDs” tabled
unless someone actually builds a system that uses them and that system
gets some level of adoption.

If it was a great idea and it solved problems better than other
solutions, I would expect it to get good adoption. However all the
data points so far, ones from other languages, and ones from systems
implemented in JS, indicate the URL choice is not desirable.

Note that this problem domain is different from something that needs
new language capabilities, like the design around mutable slots for
import. This is just basic code referencing and code layout. It does
not require any new magic from the language, it is something that
could be built in code now.

Side note: existing HTML script tag use of URLs is not a demonstration
of the success of URLs for a module system, since those URLs are decoupled
from the references of code in JavaScript, and require the developer
to manually code the dependency graph without much help from the
actual code.

Another side note: if someone wanted to use a web registry for library
dependencies, they could just set the baseURL to the web registry
location and have one config call to set the location for app-local
code. That would end up with less string and config overhead than only
URLs as IDs. There has even been a prototype done to this extent:
http://jspm.io/ -- it is backed by the module ID approach used for AMD
modules/requirejs.

But again, these are all side notes. The weight of implementations,
and real world use cases, indicates that “only URLs as default IDs” is not
the way to go.

James


Re: Module naming and declarations

2013-05-08 Thread James Burke
On Wed, May 8, 2013 at 10:44 AM, Sam Tobin-Hochstadt sa...@ccs.neu.edu wrote:
 How is this in disagreement with what Jason said?  His point is that
 if you're in the module a/b/c, ./controllers refers to
 a/b/controllers, and backbone refers to backbone. Once you have
 a module name, there's a default resolution semantics to produce a URL
 for the fetch hook, which you describe accurately.

For a developer coming from Node, this may be slightly new, and I
think when Jason mentions “package”, it may not all fit together
with how they understand Node to work. Here is a shot at trying to
bridge that gap:

Node resolves relative IDs relative to the path for the reference
module, and not relative to the reference module's ID. This is a
subtle distinction, but one where node and AMD systems differ. AMD
resolves relative IDs relative to reference module ID, then translates
that to a path, similar to what Sam describes above.

I believe Node's behavior mainly falls out from Node using the path to
store the module export instead of the ID, and it makes it easier to
support the nested node_modules case, and the package.json main
introspection.

However, that approach is not a good one for the browser, where
concatenation should preserve logical IDs, not use paths as IDs. This
allows disparate concatenated bundles and CDN-loaded resources to
coordinate.

For ES modules and Node's directory+package.json main property
resolution, I expect it would work something like this:

Node would supply a Module Loader instance with some normalize and
resolve hooks such that 'backbone' is normalized to the module ID
'backbone/backbone' after reading the backbone/package.json file's
main property that points to 'backbone.js'. The custom resolver maps
'backbone/backbone' to the node_modules/backbone/backbone.js file.
For the nested node_modules case, Node could decide to either make new
Module Loader instances seeded with the parent instance's data, or
just expand the IDs to be unique, including the nested node_modules
directory in the normalized logical ID name.

If 'backbone', expanded to 'backbone/backbone' after
directory/package.json scanning, asked for an import via a relative
ID, './other', that could still be resolved to 'backbone/other', which
would be found inside the package folder. So I think it works out.
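A very rough sketch of that wiring, using hook names from the loader drafts
of the time; the constructor shape and return conventions here are
assumptions for illustration only:

```js
var nodeLoader = new Loader({
  normalize: function (name, referrerName) {
    // 'backbone' -> 'backbone/backbone', based on the package.json "main"
    // entry for the backbone package (the package.json lookup is elided)
    if (name === 'backbone') {
      return 'backbone/backbone';
    }
    return name;
  },
  resolve: function (normalized) {
    // logical ID -> the file inside node_modules
    return 'node_modules/' + normalized + '.js';
  }
});
```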

James


Re: Module naming and declarations

2013-05-08 Thread James Burke
On Wed, May 8, 2013 at 11:35 AM, Domenic Denicola
dome...@domenicdenicola.com wrote:
 From: sam...@gmail.com [sam...@gmail.com] on behalf of Sam Tobin-Hochstadt 
 [sa...@ccs.neu.edu]

 There's a default place to fetch files from, because there has to be _some_ 
 default.

 Why?

 This is the core of my problem with AMD, at least as I have used it in the 
 real world with RequireJS. You have no idea what `require(string)` 
 means---is `string` a package or a URL relative to the base URL? It can be 
 either in RequireJS, and it sounds like that would be the idea here. 
 Super-confusing!

What part is confusing? Logical IDs are found at baseURL + ID + '.js',
and if it is not there, then look at the require.config call to find
where it came from.
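Concretely, the RequireJS default looks like this (paths are illustrative):

```js
// With just a baseUrl, 'some/thing' is fetched from js/lib/some/thing.js;
// anything that deviates from the convention shows up in this config call.
requirejs.config({
  baseUrl: 'js/lib'
});

require(['some/thing'], function (thing) {
  // ...
});
```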

By not having a default, it would mean *always* needing to set up
configuration or a specialized module loader bootstrap script to start a
project, and it still requires the developer to introspect a config or
understand the loader bootstrap script to find things.

Why always force a config step and/or a specialized module loader
bootstrap? There are simple cases that can get by fine without any
configuration or loader bootstrap.

James


Re: Module naming and declarations

2013-05-07 Thread James Burke
On Tue, May 7, 2013 at 5:21 PM, Domenic Denicola
dome...@domenicdenicola.com wrote:
 I'm not sure the Node.js scheme is the best idea for the web, but I *would* 
 like to emphasize that the AMD scheme is not a good one and causes exactly 
 the confusion we're discussing here between URLs and module IDs.

I believe this is mixing the argument of allowing URLs in IDs with how
logical IDs might be resolved in node vs AMD. Not all AMD loaders
allow URLs as dependency string IDs, but requirejs does. That choice
is separate from how logical IDs are resolved.

If normal logical ID notation is used with an AMD loader (like,
'some/thing', as in Node/CommonJS), then it is similar to node/CJS. It
just has a different resolution semantic than node, which is probably
an adjustment for a node developer. But the resolution is different
for good reason. Multiple IO scans to find a matching package are a
no-go over a network.

I think the main stumbling block in this thread is just whether module
IDs should allow both URLs and logical IDs. While I find the full URL a nice
convenience (explained more below), I think it would be fine to just
limit the IDs to logical IDs if this is too difficult to agree on.
That choice is still compatible with AMD loaders, and some AMD
loaders already make the choice to only support logical IDs.

---

Perhaps the confusing choice in requirejs was treating IDs ending in
'.js' as URLs, and not as logical IDs. I did this originally to make
it easy for a user to just copy-paste their existing script src URLs
and dump them into a require() call. However, in practice this was not
really useful, and I believe it is the main source of confusion with URL
support in requirejs. I would probably not do that if I were to do it
over again.

However, in that do-over case, I would still consider allowing full
URLs, using the other URL detection logic in requirejs: if it starts
with a / or has a protocol, like http:, before the first /, it is a
URL and not a logical ID.

That has been useful for one-off dependencies, like third party
analytics code that lived on another domain (so referenced either with
an explicit protocol, http://some.domain, or protocol-less,
//some.domain.com), which is only imported once, in the main app
module, just to get some code on the page. Needing to set up a logical
ID mapping for it just seemed overkill in those cases.

James


Re: Module naming and declarations

2013-05-01 Thread James Burke
On Wed, May 1, 2013 at 8:28 AM, Tab Atkins Jr. jackalm...@gmail.com wrote:
 Central naming authorities are only necessary if you need complete
 machine-verifiable consistency without collisions.  As long as humans
 are in the loop, they tend to do a pretty good job of avoiding
 collisions, and managing them when they do happen.

I would go further: because humans are involved, requiring a central
naming authority, like a URL, for module IDs is a worse choice.
There are subcultures that ascribe slightly different meanings to
identifiers, but still want to use code that mostly fits that
identifier but is from another subculture.

The current approach to module IDs in ES modules allows for that fuzzy
meaning very well, with resolution against any global locations
occurring at dependency install/configure time, when the subculture
and context are known. Runtime resolution of URL IDs would require more
config, generate more friction, and mean more typing.

Examples from the AMD module world:

1) Some projects want to use jQuery with some plugins already wired up
to it. They can set up 'jquery' to be a module that imports the real
jQuery and all the plugins they want, and then return that modified
jQuery as the value for 'jquery'. Any third party code that asks for
'jquery' still gets a valid value for that dependency.

With ES modules in their current form, they could do this without
needing any Module Loader configuration, and all the modules use a
short 'jquery' module ID.
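Sketched with ES module syntax for illustration; the file names and plugin
IDs are made up, and the exact export form was still in flux at the time:

```js
// jquery.js -- the module that answers for the 'jquery' ID.
import $ from 'vendor/jquery-actual';
import 'vendor/jquery.plugin-a'; // plugins attach themselves to $
import 'vendor/jquery.plugin-b';
export default $;
```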

2) A project developer wants to use jQuery from the project's CDN. A
third party module may need jQuery as a dependency, but the author of
that third party module specified a specific version range that does
not match the current project. However, the project developer knows it
will work out fine.

The human that specified the version range in that third party module
did not have enough context to adequately express the version range or
the URL location. The best the library author can express is “I know
it probably works with this version range of jQuery.”

If all the modules just use 'jquery' for the ID, the project developer
just needs one top-level app config to point 'jquery' to the
project's CDN, and it all works out.
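In AMD/RequireJS terms, that one config looks like this (the CDN URL is
illustrative):

```js
requirejs.config({
  paths: {
    // every module that asks for 'jquery' gets the project's CDN copy
    jquery: 'https://cdn.example.com/jquery/1.9.1/jquery.min'
  }
});
```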

A URL ID approach, particularly when version ranges are in play,
would mean much more configuration, and configuration that is needed by the
runtime code. All the IDs would require more typing, particularly if version
ranges are to be expressed in the URLs.

Summary:

It is best if the suggestions on where to fetch a dependency from a
globally unique location and what version range is applicable are done
in separate metadata, like package.json, bower.json, component.json.
But note that these are just suggestions, not requirements, and the
suggestions may vary based on the project context. For example,
browser-based vs. server-based, or mobile vs. desktop. Only the end
consumer has enough knowledge to do the final resolution.

It would be awkward to try to encode all of the version and context
choices in the module IDs used for import statements as some kind of
URL. Even if it was attempted, it could not be complete on its own --
end context is only known by the consumer. So it would lead to large
config blocks that need to be loaded *by the runtime* to resolve the
URL IDs to the actual location.

With short names like 'jquery', there is a chance to just use
convention-based layout, which means no runtime config at all, and if
there is config, a much smaller size to that config than what would be
needed for URL-based IDs. Any resolution against global locations
happens once, when the dependency is brought into the project, and is not
needed for every runtime execution. Plus, less typing is needed for the
module IDs in the actual JS source.

In addition, the shorter names have been used successfully in real
world systems, examples mentioned already on this thread, so they have
been proven to scale.

James


Re: Modules feedback from March 2013 meeting

2013-03-26 Thread James Burke
On Tue, Mar 26, 2013 at 3:23 AM, Andreas Rossberg rossb...@google.com wrote:
 On 25 March 2013 18:31, James Burke jrbu...@gmail.com wrote:
 ### Single Anonymous Export ###
 Also, optimising the entire syntax for one special use case while
 uglifying all regular ones will be a hard sell.

I believe this is one of the points of disconnect, at least with
people in the node and AMD communities. Single export is seen as the regular
form; multiple exports are seen as the special case.

But my main point with this section was: I was hoping that by turning
the syntax around (`export` for the single anonymous export, some other
labeled export form for multiple exports) maybe that opened up some syntax
options. But syntax is hard, and I do not envy TC39's job to sort it all
out. Sorry if this was just noise.

 As I have explained earlier on this list, destructuring import and
 destructuring let are not the same. The former introduces aliases, not
 new stateful bindings. This is relevant if you want to be able to
 export mutable entities. So no, we cannot drop import.

Right, thank you for the reminder of the previous thread. I can see
mutable entities helping cycle cases. I am curious to know what else
they help with. But cycle cases are important, so that alone is nice.

I was hoping that with single export, since a mutable entity was a
special, new thing, there would be more freedom to write the rules around it.
However, it seems different enough that it cannot fit in with
`let` or `var`.

I wonder if this implies later assignment to the import name is not allowed:

import { decrypt } from 'crypto';
//This would be an error?
decrypt = function () {};

If so, that really drives home that it is a new special kind of thing.

In any case, thanks for your response. With that information, this is
the kind of summary I would give to node and AMD users:

* import exists because it creates something new in the language, a
reference to a mutable slot. This is really important for cycle
resolution. let and var cannot handle this type of mutable slot.

* multiple exports exist because they allow for better static
checking and, due to how import/export works with mutable slots,
allow cycles with those exports. While your community may not prefer
a multiple-export style, there are others that do. Also, in some cases
there are roll-up modules that aggregate an interface over multiple
module exports, and multiple exports allow that to work even with
cycles.

* single anonymous export will be supported, so you can code all your
modules in that style and it all works out, and you even get better
cycle support when non-function exports are involved. (I have seen
cycles in node rely on function hoisting and strategically placed
require/module.exports assignment to work -- non-function exports are
harder to support with that pattern)

* node's imperative require is not deterministic enough for a general
loading solution, particularly for the web and network fetching. The
ES spec solves this by using string literals for dependencies that are
language-enforced to be top level, with System.load() for anything
that is a computed dependency. The mutable slots provided by import give
robust cycle support.

* there are enough hooks in the Module Loader spec to allow node to
internally maintain its synchronous require, so it does not have to
force all modules to upgrade to a new syntax, and a good level of
interop with ES6 modules is possible.

* AMD's dependency resolution has the right amount of determinism, but
suffers from weak cycle support. It is also less clear semantically, since
require('StringLiteral') can be used in control structures like
if/else, but operates more like System.get('StringLiteral'): it
just returns the cached module value, it does not trigger conditional
code loading. All require('StringLiteral') calls are effectively
hoisted to the top level for module loading purposes, which can be
surprising to the end user, particularly when coming from node.

* since the ES6 Module Loader can load scripts with the same browser
security rules as script tags (loading cross-domain without CORS, avoiding
problems with eval, like CSP restrictions), the need in AMD for a
function wrapper in single-module-per-file source form goes away, and
you recover a level of indent.

* there are enough hooks in the Module Loader spec that a hybrid
AMD/ES6 loader can be made, so there is no need to force-upgrade all your
AMD modules; it can be done over time. Since AMD's execution model aligns
pretty well with the ES6 model, it will be easy to write conversion
scripts (see the sketch below).
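As an illustration of how mechanical the conversion is, an AMD module and a
rough ES6 counterpart (the default-export form shown is just one of the
options being discussed):

```js
// AMD form:
define(['backbone', './views/list'], function (Backbone, ListView) {
  return Backbone.Router.extend({ /* routes that use ListView */ });
});

// Roughly equivalent ES6 form:
import Backbone from 'backbone';
import ListView from './views/list';
export default Backbone.Router.extend({ /* routes that use ListView */ });
```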

 ### Nested modules ###
 I agree with your goal, and that is why I still maintain my point of
 view that modules should be denoted by regular lexically scoped
 identifiers, like any good language citizen. Then we'd get the right
 rules for free, in a clean, declarative manner, and wouldn't need to
 reinvent the wheel continuously, through more and more operational
 hacks.

Some dependencies are referenced via strings, `from

Modules feedback from March 2013 meeting

2013-03-25 Thread James Burke
I expect the module champions to be busy, so I am not expecting a
response. This is just some feedback to consider or discard at their
discretion. I'll wait for the next public update on modules to see
where things end up. In general, sounds promising.

I'm going off the meeting notes from here (thanks Rick and all who
make these possible!):
https://github.com/rwldrn/tc39-notes/blob/master/es6/2013-03/mar-12.md#42-modules

### Single Anonymous Export ###

The latest update was more about semantics, but some thoughts on how
single anonymous export might work:

Just use `export` for the single anonymous export:

module m {
  export function calculate() {}
}

where calculate is just the local name for use internally by the
module, but `calculate` is not visible to outside modules; they just
import that single anonymous export.

For exporting a named property:

module n {
  export calculate: function () {}
}

This would still result in a local `calculate` (a let-equivalent local
name), but it also allows other modules to import `calculate`
from this module.

Single anonymous export of something that is not a function:

module crypto {
  export let c = {
    encrypt: function () {},
    decrypt: function () {}
  }
}

Inside this module, `c` is just the local name within the module, not
visible to the outside world.

Syntax is hard though, so I will not be surprised if this falls down.

### Import ###

If the above holds together:

For importing single anonymous export, using the m above:

import calc from m;

This module gets a handle on the single anonymous export and calls it
calc locally. The n example:

import { calculate } from n;

 start extremely speculative section:

This next part is very speculative, and the most likely of this
feedback to be a waste of your time:

crypto is a bit more interesting. It would be neat to allow:

let { encrypt } from crypto;

which is shorthand for:

import temp from crypto;
let { encrypt } = temp;

This could all work out because `from` would still be restricted to
the top level of a module (not nested in control structures). `from`
would be the parse hook for finding dependencies, and not `import`.

If refutable matching:
http://wiki.ecmascript.org/doku.php?id=harmony:refutable_matching

applied to destructuring allowed some sort of throw if property is
not there semantics (making this up, assume a ! prefix for that):

let { !encrypt } from crypto;

This would give a similar validity check to what `import { namedExport
}` would give. It may happen later in the lifecycle of the module (so
when the code was run, vs linking time) but since `from` is top level,
it would seem difficult to observe the difference.

Going one step further:

With that capability, it may be possible to go without `import` at
all, at least at this stage of ES (macros later may require it). The
one case where I think `import` may help are cycles, but if the cycle
parts are placed in separate modules with a single export, it may
still work out. Using the assumption of single anonymous export and
the odd even example from the doku wiki:

module 'E' {
let odd from 'O';
export function even(n) {
return n == 0 || odd(n - 1);
}
}

module 'O' {
let even from 'E';
export function odd(n) {
return n != 0 && even(n - 1);
}
}

Going even further: then the `export publicName: value` syntax may not
be needed either.

My gut says getting to this point may not be possible. Maybe it is for
the short term/ES 6, but macros may require `import` and `export
publicName:`. When documenting the final design decisions, it would be
good to address where these speculative steps fall down, as I expect
there are some folks in the node community that would also take this
train of thought.

--- end extremely speculative section

### Loader pipeline hooks ###

I know more needs to be specified for this, so this feedback may be too early.

The examples look like they assign to the hooks:

System.translate = function () {}

Are these additive assignments? If more than one thing wants to
translate, is this more like addEventListener?

I recommend allowing something like AMD loader plugins, since they
allow participating in the pipeline without needing to be loaded first
before any other modules. It also allows the caller to decide what
hooks/transforms should be done, instead of some global handler
sneaking in and making the decision.

So, if a dependency is text@some.html (AMD loader plugins use !, as in
text!some.html; pick whatever you all think works as a separator), the
steps would be (a sketch of such a hook module follows this list):

* load the text module.
* Grab the export value. If the export value has a property (or an
explicit export property) that matches one of the pipeline names,
normalize, resolve, fetch, translate, etc…, use those when trying to
process some.html.
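
A very rough sketch of such a hook module (the hook name and signature
here are illustrative only; nothing is specified):

// text.js -- loaded when a dependency like text@some.html is seen
export function translate(source, resourceId) {
    // turn the raw .html contents into a module whose export is a string
    return 'export let text = ' + JSON.stringify(source) + ';';
}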

The main feedback here is to not expect to load all pipeline hooks up
front. This would break dependency encapsulation. It seems fine to
also allow those hooks to be 

Re: Modules feedback from March 2013 meeting

2013-03-25 Thread James Burke
Hi Sam,

I really was not expecting a reply, as it was a lot of feedback. Just
wanted to get some things in the to be considered at some point/use
case queue. Some clarifications, but I do not think it is worth
continuing discussion here given the breadth of the feedback and the
stage of the spec development:

On Mon, Mar 25, 2013 at 12:56 PM, Sam Tobin-Hochstadt sa...@ccs.neu.edu wrote:
 With that capability, it may be possible to go without `import` at
 all, at least at this stage of ES (macros later may require it). The
 one case where I think `import` may help are cycles, but if the cycle
 parts are placed in separate modules with a single export, it may
 still work out. Using the assumption of single anonymous export and
 the odd even example from the doku wiki:

 Therefore, these changes aren't really simplifying things.

Does it complicate things more though? While import checking may not
change much, hopefully the simplifications are:

* removal of an import syntax keyword (just have `from`)
* possibly reducing the scope of export syntax
* possible improvement in general destructuring, even when a module is
not the target.

Maybe this makes some things much more complicated. If so, it would be
good to document why/include in the use cases at some point.

I believe this is at the heart of some of the "this seems complicated"
feedback, but hopefully expressed in a more precise, targeted way as
to what needs to be explained to someone who might think that. It does
not need to be explained now, just calling out a candidate for the
documentation/use case queue.

 ### Loader pipeline hooks ###
 The system doesn't build AMD-style plugins into the core of the module
 proposal, however. They're neither fully-general (you could configure
 based on something other than a prefix) nor used everywhere in
 existing JS, and we don't want to prematurely standardize on one
 system.

AMD-style plugins would purposely not be fully general in this system;
that is the job of the loader pipeline hooks.

It does not have to be AMD-style directly, but something where I could
specify a module ID that could handle a type of resource ID, that
module gets loaded (with its dependencies), and it gets automatically
wired into the pipeline if it exposes a property whose name matches a
pipeline hook name.

This is also why I suggest more declarative config, like "shim", vs an
imperative link hook. It is still useful to have the loader hooks as
they are, but just as "ondemand" is being considered, there are
other declarative options that have proven useful and avoid error-prone
imperative overrides of hooks (remember to check for a previously
installed hook and decide whether to call it before or after your code).

Since the loader pipeline stuff is still under development though,
this feedback may be much too early.

 ### Nested modules ###
 You could express this as:

  module publicThing/j {}
  module publicThing/k {}

  module publicThing {

export …. //something visible outside publicThing
  }

I am sorry, I mixed up how the code would be on disk with how it may be
organized later, conceptually, after loading. How the example would look
on disk:

//In publicThing.js
module j {}
module k {}
export …. //something visible outside this module

Then, this module is imported by some other module via the
publicThing name. So, publicThing.js does not know its final ID, and
j and k are not meant to be exposed as public modules, they are
just for publicThing's internal use.

I visualized the ID lookup tables more like maps with prototypes,
with the prototype being a parent module space map. Unfortunately we
probably do not share a common vocabulary here, so I will stop trying
to suggest a solution and just point out the use case.

 ### Legacy opt-in use case ###
 //Some base library that needs to be in ES5 syntax:
 var dep1, dep2
 if (typeof System !== 'undefined' && System.get) {
 //ES6 module loader. The loader will fetch and process
 //these dependencies before executing this file
 dep1 = System.get('dep1');
 dep2 = System.get('dep2');
 } else {
 //browser globals case, assume the scripts have already loaded
 dep1 = global.dep1;
 dep2 = global.dep2;
 }

 In this setting, you could just run the code exactly as you wrote it,
 without changing the default loader at all, and it would work provided
 that the dependencies were, in fact, already loaded, just the way it's
 assumed in the browser globals case.  I imagine that lots of libraries
 will work exactly like this, the same way jQuery plugins expect jQuery
 to already be loaded today.

We have found in the AMD world that once the developer has a module
loader API, they want to avoid loading scripts in some manually
constructed order specified outside of JS, in HTML. If the user needs
a third party script loader to do this on top of ES modules, that
seems redundant.

 Adding a build step that performs this analysis explicitly seems like
 a much better idea than building an ad-hoc analysis that 

Re: Modules spec procedure suggestion (Re: Jan 31 TC39 Meeting Notes)

2013-02-07 Thread James Burke
On Thu, Feb 7, 2013 at 1:00 PM, Claus Reinke claus.rei...@talk21.com wrote:
 There are some existing ES.next modules shims/transpilers that
 could be used as a starting point.

Here is an experiment I did with a JS module syntax that allowed for a
static compile time hooks for things like macros, but still allowed JS
that needed to exist both in pre-ES.next and ES.next to opt into
modules (think base libraries like jQuery, underscore, backbone).

https://github.com/jrburke/modus

It has some tests, and the modus.js file runs in ES5 browsers.

Since the modules desugar to runtime APIs after the compile time pass
for macros, it means not getting the lower level variable bindings
that the ES module format is targeting. This means it has similar
cycle dependency support as AMD -- it can be done, but may not be as
nice as ES module variable binding, and has some restrictions.

The API and syntax was just provisional, just something chosen to wire
things together, and it is not a fully fleshed out system. Also, it
was not done to advocate macros in particular. Macros were used to
demonstrate how a static, compile time feature could be supported in
a system that desugared to runtime API calls, and it helped that
sweetjs was available for that heavier lifting.

It may be useful for others to look at as a way to prototype any
official ES spec approach, even if it does not give full fidelity. In
particular, the use of the sweetjs reader to do a first pass transform
of the module syntax.

James
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Do Anonymous Exports Solve the Backwards Compatibility Problem?

2012-12-20 Thread James Burke
On Thu, Dec 20, 2012 at 8:22 AM, Kevin Smith khs4...@gmail.com wrote:
 This is exactly the use case that my OP addresses.  The logic goes like
 this:  in order to apply that boilerplate, you have to know whether the
 module is ES5 or ES6.  In order to know that, you have to parse it.
 Pre-parsing every single module is not practical for a production system.
 Therefore applying such boilerplate is not practical for a production
 system.

That was not my impression of how backcompat would be done. I was
under the impression it would be more like this:

* The module loader API exposes a runtime API that is not new
syntax, just an API. From some earlier Module Loader API drafts, I
thought it was something like System.get() to get a dependency,
System.set() to set the value that will be used as the export.

* Base libraries that need to live in current ES and ES.next worlds
(jquery, underscore, backbone, etc…) would *not* use the ES.next
module syntax, but feature detect the System API and call it to
participate in an ES.next module scenario, similar to how a module
today detects if it wants to register for node, AMD or browser
globals:

https://github.com/umdjs/umd/blob/master/returnExportsGlobal.js

* Modules using the ES.next module syntax will most likely be
contained to app logic at first because not all browsers will have
ES.next capabilities right away, and only apps that can restrict
themselves to ES.next browsers will use the module syntax. Everything
else will use the runtime API.

Otherwise, forcing existing libraries that need to exist in
non-ES.next browsers to provide an ES.next copy of their library that
forces the use of new JS module syntax is effectively creating a 2JS
system, and if that is going to happen, might as well do more
backwards incompatible changes for ES.next. Previous discussion on
this list seems to indicate a desire to keep with 1JS.

For using ES5 libraries that do not call the ES Module Loader runtime
API, a shim declarative config could be supported by the ES Module
Loader API, similar to the one in use by AMD loaders:

http://requirejs.org/docs/api.html#config-shim

this allows the end developer to consume the old code in a modular
fashion, and the parsing is done by the ES Module Loader, not userland
JS.
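
For reference, the AMD shim config linked above looks roughly like
this (requirejs):

requirejs.config({
    shim: {
        'backbone': {
            // load these first, then read the global the script creates
            deps: ['underscore', 'jquery'],
            exports: 'Backbone'
        }
    }
});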

So, there is not a case where someone would try to ship a module
loader that does full JS parsing to detect new module syntax, except
for more experimental purposes. Or perhaps one used only in dev, with a
build step that translates to ES5 syntax, converting module syntax to
the runtime API forms so that it could run either in ES.next browsers or
in ES5 browsers with an API shim.

 No - the solution for Node WRT ES6 modules, in my mind, is to pull off the
 bandaid.  The solution should not be to make compromises on the module
 design side.

With the runtime System API, node can adapt its module system to use
the ES.next Module Loader API hooks for resolve/fetch, and hopefully
there is a way to register a require function for each module that
underneath calls System.get(), with module.exports assignment calling
System.set().

However, the ES.next Module Loader API is doing the actual parsing of
the file, scanning for ES.next module syntax, so Node itself does not
need to deliver an in-JS parser.

Maybe instead of System.set() (though I would like to see it in
addition) there is a System.exports, like the CommonJS `exports`, that
would allow modules that want it to avoid the exports assignment
pattern.

Summary:

If all of the above holds true (getting clarification on the
Module Loader API is needed), then I do not believe the original post
about parsing of old and new code is a strong case for avoiding export
assignment.

James
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Do Anonymous Exports Solve the Backwards Compatibility Problem?

2012-12-20 Thread James Burke
On Thu, Dec 20, 2012 at 11:51 AM, Sam Tobin-Hochstadt sa...@ccs.neu.edu wrote:
  - I don't see what a mutable `exports` object would add on top of
 this system, but maybe I'm not understanding what you're saying.

It is one way to allow circular dependencies in CommonJS/Node/AMD
systems. The other way is to call require() at runtime to get the
cached module value at the time of actual use. Some examples here:

http://requirejs.org/docs/api.html#circular
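
For example, the call-require-at-use-time pattern in CommonJS/Node
looks roughly like this:

// a.js
var b = require('./b');
exports.name = 'a';
exports.useB = function () {
    // require again at call time, after the cycle has finished loading
    return require('./b').name;
};

// b.js
var a = require('./a'); // during the cycle this may be a partial exports object
exports.name = 'b';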

James
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Do Anonymous Exports Solve the Backwards Compatibility Problem?

2012-12-19 Thread James Burke
On Tue, Dec 18, 2012 at 12:56 PM, Kevin Smith khs4...@gmail.com wrote:
 At first glance, it seems like anonymous exports might provide a way for
 pre-ES6 (read: Node) modules and ES6 modules to coexist.  After all:

 exports = function A() {};

 just looks so much like:

 module.exports = function A() {};

 But is that the case?  Does this oddball syntax actually help?

 My conclusion is that it does not, *unless* the loading environment is
 willing to statically analyze every single module it wishes to load.
 Moreover, the desired interop is not even possible without performing static
 analysis.

I feel this is mixing up backcompat dependency matching (which has
much larger issues than exports assignment) with a preference to just
not have exports assignment. I believe the backcompat issues and
parsing things are workable. I have done some code experiments, but we
need more info on the module loader API, specifically the runtime
API, like System.set/get, before getting a solid answer on it.

exports assignment is not about backcompat specifically, although it
helps. Exports assignment is more about keeping the anonymous nature
of modules preserved. In ES modules, a module does not name itself if
it is a single module in a file. The name is given by the code that
refers to the module.

If a module only exports one thing and chooses a name, it is
effectively naming itself. Example from the browser world: jQuery and
Zepto provide similar functionality. If jQuery exports its value as
"jQuery", then Zepto would need to export a "jQuery" export if it
wanted to be used in places where the jQuery module is used. But if
someone just wanted to use Zepto as "Zepto", then Zepto would need to
add more export properties. Saying "well, have them both use $" is
just as bad. It is simpler to just allow each of them to export a
function as the module value, which avoids these weird naming issues.
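
A sketch, using the single anonymous export form discussed in this
thread (speculative syntax):

// jquery.js (or zepto.js) -- the module does not name itself
exports = function (selector) { /* ... */ };

// consumer code: the caller picks the local name, so swapping one
// implementation for the other is just a change in the module reference
import $ from "zepto.js";
$('.menu').show();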

Assigning a single exports also nudges people to make small modules
that do one thing.

It is a design aesthetic that has been established in the JS
community, both in node and in AMD modules, in real code used by many
people. So allowing export assignment is more about paving an existing
cowpath than a specific technical issue with backcompat.

James
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Do Anonymous Exports Solve the Backwards Compatibility Problem?

2012-12-19 Thread James Burke
On Wed, Dec 19, 2012 at 11:44 AM, Kevin Smith khs4...@gmail.com wrote:
 But that cowpath was only created because of the problems inherent in a
 dynamic, variable-copy module system, as I argue here
 (https://gist.github.com/4337062).  In CommonJS, modules are about
 variables.  In ES6, modules are about bindings.  The difference is subtle,
 but makes all the difference.

Those slightly different things are still about naming, and my reply
was about naming. Whether it is a "variable" or a "binding", the end
result is whether the caller of the code needs to start with a name
specified by the module or with a name of the caller's choosing. The
same design aesthetics are in play.

This is illustrated by an example from Dave Herman, for a language
(sorry, I do not recall which), where developers ended up using "_t",
or some convention like that, to indicate a single export value that
they did not want to name. As I recall, that language had something
more like bindings than variables. It would be ugly to see a
"_t" convention in JS (IMO).

In summary, I do not believe there is a technical issue with export
assignment and backcompat, which is what started this thread. A
different argument (and probably different thread) against export
assignment needs to be made, with more details on the actual harm it
causes.

If the desire to not have export assignment is a style preference, it
will be hard to make that argument given the style in use in existing
JS, both in node and AMD. Real world use and adoption should have
more weight when making the style choice.

James
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Modules, Concatenation, and Better Solutions

2012-10-16 Thread James Burke
On Tue, Oct 16, 2012 at 2:58 PM, David Herman dher...@mozilla.com wrote:
 prints a then b then main. That's clearly a problem for simple 
 concatenation. On the one hand, the point of eager execution was to make the 
 execution model simple and consistent with corresponding IIFE code. On the 
 other hand, executing external modules by need is good for usually (except in 
 some cases with cyclic dependencies) ensuring that the module you're 
 importing from is fully initialized by the time you import from it.

In earlier versions of requirejs, I used to eagerly evaluate define()
calls as they were encountered, trying to duplicate the IIFE feel.

This caused a problem for concatenation: some build scenarios build
all the modules used for a page into one JS script. However, only half
the modules may be used for the first screen render, with the second
half of the modules for a second screen render that is triggered by
a user action. The secondary set of modules can have global state
changes, like CSS/style changes.

By eagerly evaluating the modules as they were encountered in the
built script, the page would have unwanted style changes applied
during the first screen render when they should have been held until
the second set of module use for the second render.

By switching to evaluating module factory functions by need,
requirejs gained the following benefits (a small sketch follows the list):

* concat code executes closer to the order in non-concat form.
* delaying work that does not need to be done up front. If
optimizations like delayed function parsing (like v8 does?) extended
to modules, even parse time could be avoided.
* modules can be concatenated in an order that does not strictly match
the linearized dependency chain (the benefit Patrick Mueller mentions
earlier in the thread).
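
A small sketch of the built-file case (module names made up):

// built.js -- both factories are registered up front, executed on demand
define('screen1', ['jquery'], function ($) {
    // runs for the first render
});
define('screen2', ['jquery'], function ($) {
    // registered now, but executed only when screen2 is actually required,
    // so its CSS/style side effects do not leak into the first render
});

// later, triggered by a user action:
require(['screen2'], function () { /* second render */ });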

James
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: A few more questions about the current module proposal

2012-07-05 Thread James Burke
On Thu, Jul 5, 2012 at 5:56 AM, Kevin Smith khs4...@gmail.com wrote:
 Will heterogenous transpiling in a web app be supported? Can a JS
 module depend on a CoffeeScript file, and vice versa?


 Right - Sam's example of having a specific CoffeeScript loader isn't going
 to actually work for this reason.  Instead, we'd have to figure out which
 to-JS compiler to use inside of the translate hook.

 let maybeCoffeeLoader = new Loader(System, {

   translate(src, relURL, baseURL, resolved) {

 // If file extension is .coffee, then use the coffee-to-js compiler
 if (extension(relURL) === ".coffee")
   src = coffeeToJS(src);

 return src;
   }

 });

 You could use the resolve hook in concert with the translate hook to create
 AMD-style plugin directives.  It looks pretty flexible to me.

Right, I do not believe file extension-based loader branching is the
right way to go; see the multiple text template transpiler uses for
.html in AMD loader plugins. The module depending on the resource
needs to choose the type of transpiler.

So as you mention, a custom resolver may need to be used. This means
that there will be non-uniform dependency IDs floating around. That
seems to lead to this chain of events:

* packages that use these special IDs need to communicate that the end
developer needs to use a particular Module Loader implementation.
* the end developer will need to load a script file before doing any
harmony module loading when using those dependencies.
* People end up using loaders like requirejs.
* Which leads to the dark side. At least a side I do not want to see.

It is also unclear to me what happens if package A wants a particular
ModuleLoader 1 where package B wants ModuleLoader 2, and both loaders
like to resolve IDs differently.

This is why I favor "specify the transpiler in the ID": the transpiler
is just another module with a specific API. If the default module loader
understands that something along the lines of "something!resource" means
to call the "something" module as a transpiler to resolve and load
"resource", the module IDs get uniform, and we can avoid a tower of
babel around module IDs, and the need for bootstrap script
translators.
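
A rough sketch of that idea (the hook name, its signature, and the
compiler call are all invented here; only the shape of the ID is the
point):

// some module depending on a CoffeeScript file, transpiler named in the ID:
import util from "coffee!lib/util";

// coffee.js -- the module the loader would call for "coffee!..." IDs
export function translate(source, resourceId) {
    return compileCoffeeScript(source); // assumed compiler entry point
}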

James
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Module loader use for optimizers (was Re: ES modules: syntax import vs preprocessing cs plugins)

2012-07-04 Thread James Burke
On Tue, Jul 3, 2012 at 4:27 PM, Sam Tobin-Hochstadt sa...@ccs.neu.edu wrote:
 On Tue, Jul 3, 2012 at 7:19 PM, Allen Wirfs-Brock al...@wirfs-brock.com 
 wrote:
 Sam,
 Isn't it also the case that the full characteristics of the default module 
 loader used by browsers still remain to be specified?  This might be 
 somewhat out of scope for TC39 put practically speaking it's something we 
 will need (and want) to be involved with.

 Yes, this needs to be fully specified, but Dave and I have thought a
 bunch about this particular issue, and I think the issues here are
 better understood, because they're similar to other ES/HTML
 integration issues.  As an example, where the system loader looks for
 JS source specified with a relative path should be related to how the
 browser does this for script tags.

Along those lines, I would like to see how the resolution logic that
is used for browsers could be used for optimizers that combine modules
together into scripts for performance reasons.

Those optimizers normally run in a non-browser environment, so
figuring out how an optimizer that runs in those other environments
can know the browser rules.

Maybe it means the optimizers need to hand code the rules themselves
and handle the parsing of module syntax themselves.

In my ideal world though, the ES module specs come with default
resolution logic that works best for browsers (but allows overrides)
and a Loader could be used in some kind of trace mode, to get the
dependency graph without executing the modules. This would help
eliminate cross browser and cross-tooling bugs.
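
A purely hypothetical sketch of what that could look like from an
optimizer's point of view (no trace API exists in the proposals; the
names are invented):

// run in the optimizer environment, reusing the browser default
// resolve/fetch rules instead of re-implementing them
var loader = new Loader(System, { /* default browser hooks */ });
loader.trace('app/main', function (graph) {
    // e.g. { 'app/main': ['jquery', 'app/view'], 'app/view': [] }
    // concatenate files in dependency order without executing them
});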

James
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: A few more questions about the current module proposal

2012-07-04 Thread James Burke
On Wed, Jul 4, 2012 at 11:13 AM, Sam Tobin-Hochstadt sa...@ccs.neu.edu wrote:
 We've thought a lot about compile-to-JS languages, and a bunch of the
 features of the module loader system are there specifically to support
 these languages.  You can build a loader that uses the `translate`
 hook to perform arbitrary translation, such as running the
 CoffeeScript compiler, before actually executing the code.  So you'll
 be able to write something like this:

 let CL = new CoffeeScriptLoader();
 CL.load(code/something.coffee, function(m) { ... });

Will heterogenous transpiling in a web app be supported? Can a JS
module depend on a CoffeeScript file, and vice versa? What about a JS
module depending on a CoffeeScript and text resource? What would that
look like?

For instance, it is common in requirejs projects to use coffeescript
and text resources via the loader plugin system. While the text plugin
is fairly simple, it can be thought of as a transpiler, converting
text files to module values that are JS strings. It could also be
text template transpiler that converts the text to a JS function,
which when given data produces a custom HTML string.

For requirejs/AMD systems, the transpiler handler is part of the
module ID. This means that nested dependencies can use a transpiler
without the top level application developer needing to map out what
loader transpilers are in play and somehow configure transpiler
capabilities at the top level before starting main module loading.

It also makes it clear which transpiler should be used for a given
module dependency. Each module gets to choose the type of transpiler:
for a given .html file, one module may want to use a text template
transpiler where another module may just want a raw text-to-string
transpiler. Both of those modules can be used in the same project as
nested dependencies without the end developer needing to wire them up
at the top level.
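
For example, in requirejs today (the text! plugin is real; tpl! stands
in for a hypothetical template-compiling plugin):

define(['text!row.html', 'tpl!row.html'], function (rawHtml, render) {
    // same .html file, two different transpilers, each chosen by the
    // module declaring the dependency: a raw string vs. a template function
    console.log(rawHtml.length);
    document.body.innerHTML = render({ title: 'Rows' });
});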

James
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Modules: use of non-module code as a dependency

2012-06-30 Thread James Burke
A good chunk of my previous feedback is based on a higher level design
goal that is worth getting a common understanding on, since it may
invalidate my feedback:

Question:

What happens when a module depends on a non-module code?

Example:

I have an ES module, 'foo', it depends on jQuery. foo uses the new
module keywords, but jQuery does not. jQuery could be doing one of two
things:

a) Nothing, just exports a global.
b) May be modified to call the ES Loader's runtime API, something like
System.set().

Code for module foo:

import jQuery from 'jquery.js'

Possible answers:

1) Unsupported. Error occurs. Developer needs to use a custom loader
that could somehow get jQuery loaded before foo is parsed. Or just
tell the user to stick with existing module schemes until ES module
support has saturated the market.

2) jQuery is suggested to provide a jquery.es.js file that uses the
new keywords.

3) Proposed: When jquery.js is compiled, and no import/module/export
modules are found, then the Loader will execute jquery.js to see if it
exports a value via a runtime API. It uses that value to then finish
wiring up the local jQuery reference for module foo.

I believe #1 will complicate ES module adoption. #2 feels like there
are now two javascript languages, make your choice on what line you
are on. I'm not sure if #3 could be supported with the current module
design.
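
A sketch of what option b) above (jQuery calling the runtime API) could
look like; the module ID and the feature test are assumptions:

// end of jquery.js -- one file that still works in old browsers
(function (global) {
    if (typeof System !== 'undefined' && System.set) {
        // ES module loader present: register so that module foo's
        // import of 'jquery.js' can be wired up once this script runs
        System.set('jquery.js', global.jQuery);
    }
}(this));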

James
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: ES Modules: suggestions for improvement

2012-06-28 Thread James Burke
On Thu, Jun 28, 2012 at 7:56 AM, Sam Tobin-Hochstadt sa...@ccs.neu.edu wrote:
 On Thu, Jun 28, 2012 at 10:40 AM, Kevin Smith khs4...@gmail.com wrote:
    script src=/assets/add_blaster.js
    script
    module main {
      import go from add_blaster;
      console.log(go(4,5));
    }
    /script


 That's *not* what I'd call a forward-compatible solution since you still
 have to use the script tag prior to importing.  What's needed is a way to
 tell the loader that add_blaster can be fetched from
 /assets/add_blaster.js.

 What James asked for was a solution for how libraries such as jquery
 or backbone could be implemented so that they work in both worlds,
 which is what I provided.

As Kevin says later, that is not what I asked for.

A developer using a module system does not want to have to manually
know that some dependencies need to be included as script tags before
loading starts. Often they are using modules that have dependencies,
which have dependencies of their own, and they do not want to manually
inspect the dependency tree to figure out which modules need
to be inlined as script tags before using modules themselves.

So, they will rely on someone to provide a script loader library to
handle this. But that is what module support should do by default.
Otherwise, forms like AMD will continue to thrive, and worse yet, cause
confusion in the minds of developers. There should be one module
format, usable with existing code as dependencies for es modules.

This is a real use case because it comes up all the time in AMD. Ask
any AMD+backbone user. Backbone does not use a module format, but it
is used all the time as a dependency in AMD modules.

This is why relying on "parse all the dependencies before eval" for
exports is a problem. My proposal in the other thread -- eval the
dependencies, get their exported values, use those values to modify
the AST of the current module, then eval the module -- is an attempt
to solve that problem.

With that, you can support these older libraries that need to use the
runtime API so they can live in both worlds, but you still have a shot
at supporting things like import checking.

I think working this out in person or in online, real time may work
better. Sam or Dave, feel free to contact me offline if you want to do
so.

I will also try to set up a repo with some test scenarios, because the
optimization case when combining modules also needs more work.

James
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Modules: execute deps, modify, execute (was Re: ES Modules: suggestions for improvement)

2012-06-27 Thread James Burke
On Wed, Jun 27, 2012 at 11:40 AM, Jussi Kalliokoski
jussi.kallioko...@gmail.com wrote:
 This brings up an interesting point about the modules, that being lazy
 loading. One appealing reason to use a module loader instead of just
 concatenating the scripts or using multiple script tags is that you can do
 feature detection and load polyfills for things you need instead of just
 forcing the client to download all the polyfills, regardless of whether they
 need them or not. Does the modules proposal attempt to solve this in any
 way?

You may be able to construct something with the Module Loaders API, but
nothing exists by default, so it would mean shipping a library to enable
it. I prefer built-in support for loader plugins a la AMD though.

If you want to explore that further, I suggest starting a different
thread. I prefer this thread to be very targeted on the
eval/modify/eval approach. The code examples I had in my original
message were to give context on that approach.

James
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: ES Modules: suggestions for improvement

2012-06-27 Thread James Burke
On Wed, Jun 27, 2012 at 11:56 AM, Sam Tobin-Hochstadt sa...@ccs.neu.edu wrote:
 Then we can use the module like this:

    System.load("add_blaster", function(ab) { return ab.go(4,5); })

 or, since we know that add_blaster is a local module:

    let { go } = System.get("add_blaster");
    go(9,10);

 or, if we put the call to `System.set` in the previous script tag, we
 can just do:

    import go from "add_blaster";
    go(2,2);

 At no point here did we have to write a module system.

This is not usually how we have found loading to be done in AMD.
'add_blaster' is usually not loaded before that import call is first
seen. Call this module foo:

   import go from "add_blaster";

The developer asks for foo first. foo is loaded, and parsed.
'add_blaster' is seen and then loaded and parsed (although not sure
how 'add_blaster' is converted to a path…):

add_blaster calls the runtime:

   System.set("add_blaster", { go : function(n,m) { return n + m; } });

What happens according to the current modules proposal?

Does an error get generated for foo's import line stating that
add_blaster does not export go, or are those checks optional, as David
Bruant suggests on another message in this thread?

My previous interaction on this list led me to believe that I would
have to construct a userland library to make sure I load and execute
the script that does System.set(add_blaster) before foo is parsed.

If that is true, then that is what is fueling my particular feedback
about the "eval deps, modify module, then eval module" approach.
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: ES Modules: suggestions for improvement

2012-06-27 Thread James Burke
On Wed, Jun 27, 2012 at 2:21 PM, Sam Tobin-Hochstadt sa...@ccs.neu.edu wrote:
 On Wed, Jun 27, 2012 at 3:37 PM, James Burke jrbu...@gmail.com wrote:
 On Wed, Jun 27, 2012 at 11:56 AM, Sam Tobin-Hochstadt sa...@ccs.neu.edu 
 wrote:
 Then we can use the module like this:

    System.load(add_blaster, function(ab) { return ab.go(4,5); })

 or, since we know that add_blaster is a local module:

    let { go } = System.get(add_blaster);
    go(9,10);

 or, if we put the call to `System.set` in the previous script tag, we
 can just do:

    import go from add_blaster;
    go(2,2);

 At no point here did we have to write a module system.

 This is not usually how we have found loading to be done in AMD.
 'add_blaster' is usually not loaded before that import call is first
 seen. Call this module foo:

   import go from add_blaster;

 The developer asks for foo first. foo is loaded, and parsed.
 'add_blaster' is seen and then loaded and parsed (although not sure
 how 'add_blaster' is converted to a path…):

 add_blaster calls the runtime:

   System.set(add_blaster, { go : function(n,m) { return n + m; } });

 What happens according to the current modules proposal?

 I'm not quite sure what you're asking.  If the question is: does
 importing foo automatically compile add_blaster, then yes, that
 happens automatically.  You can think about that as doing something
 internal that's similar to `System.set`.  But that's all implicit.  If
 we are in a system like NPM, where add_blaster might map
 automatically to add_blaster.js, we could have:

 foo.js:

    import go from "add_blaster"
    go(1,2)

 add_blaster.js:

    export function go(n,m) { return n + m; };

I was using the original code for 'add_blaster', say as you say, it is
in add_blaster.js:

System.set("add_blaster", { go : function(n,m) { return n + m; } });

My understanding is that since add_blaster.js uses the runtime API and
not the export syntax, the above code will not work unless I construct a
loader library that first loads and executes add_blaster.js before
foo.js is parsed.

The use case: scripts like jquery/backbone/others want to live in both
a non-harmony and a harmony world. I would expect that they could be
adapted to call the System.set() API, but not use the new syntax
keywords.

I am under the impression that library developers do not want to
provide two different versions of their scripts, just to participate
in es modules, but rather use a runtime check to register as part of
one script that works in es harmony and non-harmony browsers.
Otherwise, it feels like a 2 javascripts world.

James
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


ES Modules: suggestions for improvement

2012-06-25 Thread James Burke
Posted here:
http://tagneto.blogspot.ca/2012/06/es-modules-suggestions-for-improvement.html

Some of it is a retread of earlier feedback, but some of it may have
been lost in my poor communication style. I did this as a post instead
of inline feedback since it is long, it has lots of hyperlinks and it
was also meant for outside es-discuss consumption.

I am not expecting a response as it should mostly be a retread, maybe
just phrased differently. Just passing along the link in the interest
of full disclosure, maybe the rephrasing helps understand the earlier
feedback.

James
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Modules: compile time linking (was Re: Modules feedback, proposal)

2012-04-06 Thread James Burke
On Fri, Apr 6, 2012 at 2:04 AM, Claus Reinke claus.rei...@talk21.com wrote:
 I just noticed that James' original email had two more items:


 * The 'Math' module is evaluated before Foo is evaluated.
 * Only the properties on Math that are available at the time of Foo's
 execution are bound to local variables via the import *.


 which puts it in line with the first option I mentioned, contrary
 to my final paragraph. Also, he seems concerned mostly with
 static modules, not dynamic ones.

The goal is to allow modules declared dynamically via an API to work
with modules that use the new syntax keywords (static modules), in a
way that does not require implementing a userland script loader
library to work out the correct load order, which is what is needed
with the current design.

So: easier opt-in upgrades for existing code, while hopefully still
giving deeper name/type checking, allowing import *, and leaving room
for macros and operator overloading later.

James
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Modules: compile time linking (was Re: Modules feedback, proposal)

2012-04-06 Thread James Burke
On Fri, Apr 6, 2012 at 4:54 AM, Sam Tobin-Hochstadt sa...@ccs.neu.edu wrote:
 The properties available to `Foo` are exactly the ones declared with
 `export` in `Math`.   I don't think that should be a surprise to
 anyone -- that's what `export` is for.

 However, it is the case that the evaluation of `Math` and of `Foo`
 happen close together, even thought that doesn't make a difference in
 this case.  It would potentially make a difference if there was some
 third module that imported from `Foo`.

A clarification: 'Math' may have been a bad example. I chose it since
it was something concrete where an import * on it made sense.

The order of events I listed was to allow Math to opt in to
registering as a module using a runtime API. I do not expect the real,
core Math lib to do so; I just chose a script example where import *
makes sense.

Executing a dependency before finishing the compilation of the module
pulling it in was meant to allow better interop with a runtime module
API, in a way that would give deeper name/type checking and support an
import * syntax, but not be like `with`, because any properties
dynamically added to Math after Foo executes are not available to Foo
(assuming dynamically adding properties to Math were even allowed).

Since the import * burns in the local variables during the final AST
modification of the module, after evaluating its dependency, deeper
name/type checking can be done beyond the top level exports, since
the dependency has actually been evaluated.

James
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Modules: compile time linking (was Re: Modules feedback, proposal)

2012-04-05 Thread James Burke
On Sat, Mar 31, 2012 at 6:47 PM, David Herman dher...@mozilla.com wrote:
 Static checking is a continuum. It's mathematically proven (e.g., Rice's 
 theorem) that there are tons of things a computer just can't do for you in 
 general. So we have to pick and choose which are the things that can be done 
 for us that are a) decidable, b) tractable, and c) useful. In my experience, 
 checking variables is really useful, even though it certainly can't check 
 every aspect of a program's correctness.

I would add d) does not compromise other high value features. For me,
one would be runtime module opt-in by code that also wants to work in
non-ES.next environments.

If there are reasons not to treat that as a high value feature, that
probably changes my feedback.

However, to be clear, I would like more name/type checking, and the
ability to use something like import *.

So to try to get a decent pass at all of those benefits, could the
following evaluate/compile model be used:

# Example:

module Foo {
import * from 'Math';
}

# Rules:

* The 'Math' module is evaluated before Foo is evaluated.

* Only the properties on Math that are available at the time of Foo's
execution are bound
to local variables via the import *.

So, assuming Math has no dependencies (just to make this shorter), the
sequence of events:

* Load Foo, convert to AST, find from usage.
* Load Math
* Compile Math
* Evaluate Math
* Inspect Math's exported module value for properties
* Modify the compiled structure for Foo to convert import * to have
local variables for all of Math's properties that are known, only at
this time (no funny dynamic 'with' stuff)
* Evaluate Foo

I may not have all the right terminology, in particular "convert to
AST"/"work with compiled structure" may not be correct, but hopefully
the idea comes across.

# Benefits:

* It is not `with` or its ilk.

* Allows both top level module export name/type checking and second
level checking, since more info on Math is available after running
Math, including info on prototypes for any constructor functions.

* Opens up allowing opt-in to ES.next modules and still be run in old
world browsers.

* Still seems to allow for some kind of macros and operator overloading later?

# Possible hazards:

* Something could modify Math's properties before a different module
Bar is run, and Bar might see different * bindings than Foo. This
happens with JS now though -- depending on when you execute a
function, it may see different properties on objects it uses.

* Too many processing stages?

* Circular import * is a problem. This could be flagged as an error
though. Circular dependencies are minority cases (circular import *
even more so), and the benefit of opening up second level
tradeoff.

James
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Modules: compile time linking (was Re: Modules feedback, proposal)

2012-03-31 Thread James Burke
On Fri, Mar 30, 2012 at 3:25 PM, James Burke jrbu...@gmail.com wrote:
 ---
 5) Compile time linking
 ---

 There is a tension between the runtime calls like System.load and the
 compile time linking of the module/import syntax. The runtime
 capabilities cannot be removed. However, I believe it would simplify
 the story for an end user if the the compile time linking is removed.

 While the compile time linking may give some kinds of type
 checks/export name checking, it is only one level deep, it does not
 help with this sort of checking:

 //Compile time checking can make sure
 //'jquery.js' does export a $
 import $ from 'jquery.js';

 //However, it cannot help check if foo is
 //a real property
 $.foo();

 Similar story for prototypical properties on constructor functions.

 New possibilities open up if this the compile time stuff is removed,
 and I believe it simplifies the end user's day-to-day interaction with
 modules (more below).

Judging from some feedback on twitter, I may not have fully tied
together the costs and benefits for suggesting that the compile time
binding be dropped:

---
Benefits of compile time binding
---
This is what I need help in understanding. The benefits I have heard so far:

1) Being able to check export names/types. As mentioned, this feels
like a very shallow benefit, since it does not apply to properties
outside of the export properties. See constructor functions and
function libs like jQuery.

2) It may help allow some future things like macros?

---
Benefits of runtime only
---

1) Upgrading existing libraries

The biggest issue we have seen for AMD loaders so far is getting
libraries to update to register as modules. They still need to operate
in old world, non-module, browser globals situations.

Why is it important to opt in to module registration? It is not really
about being able to get the right export value. In a pinch, browser
globals could be read for that.

The really important part is knowing what dependencies the script
needs before it should be executed. Example:

Backbone needs jQuery and underscore to set itself up (technically
just underscore for initial module evaluation, but the point is it
has two dependencies).

For ES.next, how can Backbone register itself as being an ES.next
module, and that it needs jQuery and underscore executed before it
runs, while still allowing Backbone to run in non-ES browsers?

When ES.next comes out, it will still be 2-5 years, at least, where
non-ES.next browsers are in play. This is my estimate based on typical
2 year mobile contracts with carriers not incentivized to upgrade
software, and the half-lives of older Windows OS/IE versions.

Libraries will need to work in old world browsers for a few years.

Possible solutions:

a) Ask libraries to provide a lib.es-next.js version of themselves in
addition to the old world version, so that compile time linking with
new module/import syntax can be used.

b) Have a way for the library to do a runtime type check, and opt-in
to the call.

c) Something else?

Option a) seems bad. It complicates the library
deployment/distribution story. As a library developer, I just want one
JS file I can deliver. It makes support much easier.

On b): the module_loaders API has a way to do a runtime module
registration, but as I understand it, it means that a consumer of my
library then needs to use the System.load() API to get a hold of
it.

If I am writing my code for an ES.next browser, it seems very awkward
then for me to use Backbone, if Backbone used the runtime
module_loaders API to register a module:

//app.js
import draw from shape.js
System.load(backbone, function (backbone) {

});

What if I need Backbone to generate one of my exports?

If only runtime mechanics are used, then I could do something like this:

//app.js
let {draw} from shape.js,
backbone from backbone.js;

2) Simplifies returning a function for the module value. It sounded
like there were concerns about prototypes and such when doing compile
time checking for the function as the module export case.

It seems weird to me that it has special restrictions when doing
`export function Foo() { /* constructor function */ }` is OK. This
points to the shallowness of the export checking that is enabled by
the compile time checking, and the complication of making that top
level of exports special.

Being able to just use return for the module export is also easier for
a JS dev to understand: no special weird syntax. I'm not sure what the
"export call" syntax looks like (I'm guessing it starts with that),
but it sounds more complicated, as does the older "export this
function (){}" idea.

Using functions as exports has been used heavily in Node and AMD
modules. Not having support for them will be seen as a step backwards.

3) As mentioned, it may be possible to remove import as new syntax
for ES.next, and just rely on normal variables and destructuring.
While less syntax may

Re: Modules feedback, proposal

2012-03-31 Thread James Burke
On Sat, Mar 31, 2012 at 8:54 AM, Sebastian Markbåge
sebast...@calyptus.eu wrote:
 Hi James,

 I'd like to add some counter points.

 ---
 1) Multiple mrls for System.load():
 ---

 I think it makes sense to have this convenience, even though it's probably
 better practice to use the static syntax as much as possible. For
 performance and tooling support. There's some precedence in Web Workers.

In Dave's post:
http://calculist.org/blog/2012/03/29/synchronous-module-loading-in-es6/

the only way to use modules in inline script tags will be via
System.load(), no static syntax allowed. For simple test pages, like
jsbins/jsfiddles, the developer will need a way to pull in a few
scripts at once via System.load. Nested calls would be ugly.

 ---
 2) Default module path resolution
 ---

 There should be a way to define completely custom URL resolution for a
 specific loader, yes. The resolver could add a canonicalize method that
 can resolve a path relatively to the current file.

 I don't think it makes sense for ECMAScript to define the default scheme at
 all. That should be up to the host environment. The browser environment
 should default to the same resolution algorithm as script tags or
 importScripts in Web Workers.

The problem is that with ES.next, scripts are using a JS API to
specify dependencies. This is something new. The existing script
tags/importScripts approach relies on the developer to properly
sequence the order of the tags.

When Backbone and my app script want to use "jquery", there needs to be
a way to resolve that to the same resource. It seems awkward if
Backbone uses "jquery.js", which by default would map directly to local
disk instead of the CDN.

But say this is what is encouraged: use "jquery.js", and the developer
is told to use the resolve API on the Module Loader to always map
"jquery.js" to the CDN location.

The resolve API is a function. This means the developer has to code up
logic with if() tests on MRL names to see whether it should switch the
resolution to a CDN URL, something like the sketch below.
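
(The resolve hook signature here is not settled; this is only to show
the shape of the problem, with a made-up CDN URL.)

System.resolve = function (mrl, baseURL) {
    if (mrl === 'jquery' || mrl === 'jquery.js') {
        return 'https://cdn.example.com/jquery.js';
    }
    return baseURL + mrl + '.js';
};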

That if() test will be brittle and wordy to hand-code every time, so
the developer will write a helper lib that lets them pass a config
object that the helper lib consumes.

Now they have a script loader library dependency. More below.

 For more complex schemes that require configuration, you cannot statically
 resolve the configuration (unless you add the complexity of static
 configuration as well). Therefore you have to use the dynamic loader. Then
 you already have the option to override the resolution scheme when you call
 the loader.


 Trying to standardize a more complex default scheme is likely to open up a
 lot of bike shedding.

There does not need to be a bikeshed. There is already an existing
scheme, and it is not complex: the one used by AMD module loaders
(baseUrl and paths config). It meshes well with other envs like Node,
which implemented the equivalent of its own resolve logic.

The alternative, saying "we don't want to try to figure it out", will
mean that config libraries pop up for doing declarative config,
because that is mostly what is needed.

So now to use modules, I need to include what is the equivalent of a
module loader script. This looks like no net improvement for the end
developer who has to use a module loader today -- they still need to
manage and load a script that has to run before the rest of
their modules are loaded.

That loader script that sets the resolve logic will then need to
expose an API that accepts a list of modules to load, since the
configuration needs to be set before the rest of the loading occurs.

It starts to just look like requirejs then. I want script/module
loaders like requirejs to go away. The "just use URLs" approach is not
enough to do that.

 ---
 5) Compile time linking
 ---

 Yea, it's true it's just one level. Ultimately you'd hope that engine and
 tool implementors can dynamically link any property reference to frozen
 objects. At least this one level certainly makes it easier to predict this
 behavior for that single level.

The unfortunate thing is that it is a very shallow level; see
jQuery-type APIs or constructor functions. All the action happens at
that second level down. Ideally, whatever solution is found for that
second level could also apply to that top level module. I definitely
want to have that kind of checking generically though.

The compile time module linking is not enough, and it makes other
higher value features harder or impossible.

 ---
 6) Import syntax
 ---

 Obviously I certainly don't agree that Node and AMD users have done just
 fine without having *. That's one of the reasons I created Labeled
 Modules. https://github.com/labeledmodules/labeled-modules-spec/wiki

We should try for the equivalent of import *; sorry if I was too short
on this point. I'll expand on what I meant:

I think it should be a runtime-like thing (probably using the wrong
word there), and just allow it for 

Re: Modules: compile time linking (was Re: Modules feedback, proposal)

2012-03-31 Thread James Burke
On Sat, Mar 31, 2012 at 11:02 AM, Luke Hoban lu...@microsoft.com wrote:
 On Fri, Mar 30, 2012 at 3:25 PM, James Burke jrbu...@gmail.com wrote:
 [snip]
 The module_loaders API has a way to do a runtime module registration, but 
 as I understand it, it means that a consumer of my library then needs to 
 then use the System.load() API to get a hold of it.

 My understanding was that this is not necessarily true.  For example - in the 
 syntax of the current API design on the wiki:

 // app.html
 <script src='backbone.js'></script>
 <script src='app.js'></script>

 // backbone.js
 System.set("backbone", {
    something: 42
 });

 //app.js
 import something from "backbone"
 console.log(something);

 The ES6 module syntax and static binding of app.js still works correctly, 
 because backbone.js has been fully executed and has added itself to the 
 module instance table before app.js is compiled (which is the point where the 
 static binding is established).  There are restrictions here of course, due 
 to the need for the dependent modules to have been made available before 
 compilation of the ES6 code.

This requires me as an app developer to know which dependencies I will
use that are not ES.next compatible, load them first via inline script
tags or another script loader, and then do the ES.next
work.

This does not seem like an improvement. There is no reason for
me to use ES.next modules in this case, and it makes my life really
difficult if I happen to use one or two dependencies that
are ES.next modules.

 At least in cases like the above though, libraries can continue to work on 
 non-ES6 browsers, and feature-detect ES6 module loaders to register into the 
 ES6 loader so that later processed modules (typically app code) can use ES6 
 syntax if desired.  Moreover, it ought to in principle be possible to build 
 AMD (or other current module API)-compliant shims over the ES6 module 
 definition API that allow existing modules to be used as-is, and still 
 consumed with ES6 module syntax.

One of the goals of my feedback is to actually get rid of AMD and its
associated loaders. If the ES.next modules require me as an app
developer to use a script loader to manage some of this complexity,
then I do not see it as a net gain over just using AMD directly.

James
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Modules feedback, proposal

2012-03-30 Thread James Burke
First I'll give some smaller bits of feedback, and then at the end
propose something that integrates all the feedback.

I hope they can be considered independently: if you do not like the
proposal, feel free to just take in the feedback. I tried to order the
feedback by increasing craziness, with the first item being the least
crazy.


Feedback


Some of these just may be drift between current thinking and what is
on the wiki. Sorry for anything that is already covered by current
thinking, I mostly just have the wiki as a reference:

---
1) Multiple mrls for System.load():
--
Dave Herman's latest post on Synchronous module loading in ES6:
http://calculist.org/blog/2012/03/29/synchronous-module-loading-in-es6/

mentions only allowing System.load() in inline HTML script tags. I'm
assuming that is similar to the Loader.prototype.load( mrl, callback,
errback ) API in:
http://wiki.ecmascript.org/doku.php?id=harmony:module_loaders

Since System.load() is the only allowed option in HTML script tags, it
should allow specifying more than one mrl in the call. Either:

System.load("jquery.js", "underscore.js", function ($, _) {})
or
System.load(["jquery.js", "underscore.js"], function ($, _) {})

---
2) Default module path resolution
---
In the module path resolution thread, Dave mentioned that the
module_loaders resolve API should allow for some custom path
resolution logic. However it would be good to specify the default
logic.

For the default logic, it would be great to see it be:

baseUrl + ID + ".js"

unless the ID contains a protocol or starts with a "/", in which case
the ID is assumed to be a URL and used as-is. I'm sure the real default
behavior needs a stronger definition, and I would like to see the
equivalent of AMD loaders' paths config as an option for default
config, but the main point is that, in code, the IDs for
importing/referencing would look like this:

import $ from "jquery";

instead of this:

import $ from "jquery.js";

I am just using an arbitrary import example; I am not sure what the most
current import syntax is. Nothing else is implied here for import, just
the ability to use "jquery" for the name and have it work by default given
the rules above.

This is similar to the module ID-to-path resolution used by AMD loaders
today, and it has worked out well for us because it allows setting up
references to implementation providers rather than particular URLs. This is
nice when using a few libraries that depend on the same piece of
functionality. A concrete example:

I use jQuery in my app, and I use Backbone, which also uses jQuery.
Ideally my script that directly uses jQuery and the Backbone module
can both do:

import $ from "jquery";

and it gets resolved to the same URL. With the paths config that is
possible in AMD loaders, we can map that "jquery" to a CDN version,
a local version, or a local version that is actually Zepto -- just
something that can stand in for jQuery. I like the idea of having a
simpler paths config for that, instead of requiring the developer to
implement a resolver function or, worse, include a library to set up the
resolver (might as well just use an AMD loader then :).
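
To make the default concrete, here is a rough sketch of the resolution I
have in mind. The paths handling mirrors what AMD loaders do today, and
the exact regex and config shape are only illustrative, not proposed spec
text:

// Minimal sketch of the default ID-to-URL resolution described above.
// "baseUrl" and "paths" are borrowed from AMD loader config, not a spec.
function resolve(id, config) {
    var isUrl = /^(\/|[\w+.-]+:)/; // starts with "/" or has a protocol
    if (isUrl.test(id)) {
        return id; // already a URL, use as-is
    }
    // Optional paths config lets "jquery" map to a CDN URL or a stand-in.
    var target = (config.paths && config.paths[id]) || id;
    if (isUrl.test(target)) {
        return target;
    }
    return config.baseUrl + target + '.js';
}

// resolve('jquery', { baseUrl: 'scripts/' }) -> 'scripts/jquery.js'
// resolve('jquery', { baseUrl: 'scripts/',
//     paths: { jquery: 'https://cdn.example.com/jquery.js' } })
// -> 'https://cdn.example.com/jquery.js'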

---
3) Modules IDs as strings
---
This item of feedback is assuming that the way to get better optimized
code delivery is to concatenate module files together. However, even
if that is not the case for browser delivery, I still believe that
allowing a way to combine a set of modules together in a file helps
just with code sharing -- it is easier to handle and pass around a
single JS file for distribution, but the library dev may still want to
work with the modules separately on disk.

With that assumption, module IDs should always be strings. This makes
it easier to combine modules together and match the module to the
string used in the import call:

module "jquery" {
}

module "foo" {
import $ from "jquery";
}

This means that "foo" will get a handle on the "jquery" module. By
using string IDs even in the module declaration, it becomes easier to
match the references to the module provider, particularly for combined
files.

---
4) export call
---

There was mention in simpler, sweeter syntax for modules thread by
Dave that maybe with syntax like this:
import $ from "jquery.js";

that having a way to export a function may not be needed.

However, I still feel some sort of export call is needed, or some
way to export a function as the module value, to give parity with the
runtime calls:

System.load('jquery.js', function ($) { });

It would be awkward to do this:

System.load('jquery.js', function (something) { var $ = something.$; });
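
For comparison, this is roughly how today's systems let a function be the
module value: AMD returns it from the factory, Node assigns module.exports.
The widget() body is just a placeholder:

// AMD: the factory's return value becomes the module value.
define(['jquery'], function ($) {
    return function widget(node) { /* ... */ };
});

// Node/CommonJS: assign the exported function directly.
module.exports = function widget(node) { /* ... */ };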

---
5) Compile time linking
---

There is a tension between the runtime calls like System.load and the
compile time linking of the module/import syntax. The runtime
capabilities cannot be removed. However, I believe it would simplify
the story for an end user if the compile time linking is removed.

While the compile time linking may give 

Re: Harmony modules feedback

2012-01-17 Thread James Burke
On Tue, Jan 17, 2012 at 3:34 AM, Mariusz Nowak
medikoo+mozilla@medikoo.com wrote:
 James Burke-5 wrote:
 This is provably false. You normally do not need hundreds of modules
 to build a site.


 I wasn't theorizing, I was talking about real applications that are already
 produced.

What I was objecting to is the characterization that an app that loads
hundreds of modules would not work and that therefore builds are always
needed for apps of all sizes. Finding one or two pathological cases does
not prove the need for builds for all people, and I have experience
that points otherwise, even for larger apps. Maybe you did not mean to
take the connection that far, but that is how I read it.

 I think Harmony modules have more in common with CommonJS than with AMD, and
 transition from CommonJS will be easier than from AMD. See slides 86-89 from
 http://www.slideshare.net/medikoo/javascript-modules-done-right (it's
 presentation I've done once on local Warsaw meetup)

CommonJS modules have an imperative require; that is not something
that will work in Harmony modules. AMD does not, which matches the
current harmony proposal more closely. In other words, you can do this
today in CommonJS, but this is not translatable directly to harmony
modules:

var a = require(someCondition ? 'a' : 'a1');

In the current harmony proposals, you would need to use the
callback-style of the module_loaders API to load the module. This
implies an async resolution of the dependency, which likely changes
the above module's API to the outside world.
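
Roughly, the conversion would look something like this (using the callback
signature sketched on the module_loaders wiki page; the exact API may
differ):

// The synchronous, conditional require() above becomes an async load.
System.load(someCondition ? 'a' : 'a1', function (a) {
    // "a" is only available inside this callback, not at the module's
    // top level, which is what forces the module's own API to go async.
});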

I agree that the surface syntax of CommonJS looks more like Harmony
modules, but the transform of AMD to harmony modules is really not
that much harder, and translating from vanilla AMD modules (ones that
do not use loader plugins) does not have the kinds of conversion where
the module needs to be manually re-architected, as mentioned above.

 Of course Harmony will allow you to load each module separately, but with
 default syntax I understand it will work synchronously, I'm not sure we will
 do that for large number of modules. For asynchronous loading you may use
 dynamic loader which I think would be great to load bigger module
 dependencies.

Harmony modules will parse the module code, pull out the module
dependencies, load them, do some compiling/linking, then execute the
code. This is similar to what AMD does with this form:

define(function(require) {
var a = require('a');
});

AMD loaders use Function.prototype.toString() and then parse that
function body to find the 'a' dependency, fetch and execute 'a',
then execute this function. Of course this is a bit low-tech, and
having an JS engine get a real parse tree before code execution is
better. But the end behavior, as far as network traffic, is the same.
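
For anyone curious, the scan is roughly this (a simplified sketch; real
loaders such as RequireJS also strip comments first and handle a few more
edge cases):

function findDependencies(factory) {
    var deps = [];
    // Pull require('...') calls out of the function's source text.
    factory.toString().replace(
        /require\s*\(\s*["']([^"']+)["']\s*\)/g,
        function (match, id) {
            deps.push(id);
            return match;
        }
    );
    return deps;
}

// findDependencies(function (require) { var a = require('a'); })
// returns ['a']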

James
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Harmony modules feedback

2012-01-16 Thread James Burke
Apologies to Sam, I sent this feedback to just him earlier, but meant
it also for the list, so resending:

On Thu, Jan 12, 2012 at 5:16 AM, Sam Tobin-Hochstadt sa...@ccs.neu.edu wrote:
 As to your current post, I think the fundamental disagreement is all
 encapsulated here:
  ES harmony may be able to do something a bit different,
   but the basic mechanisms will be the same: get a handle
   on a piece of code via a string ID and export a value.

 First, there are two things you might mean by piece of code.  One is
 the source code for a library/script, which is necessarily
 identified by a URL on the Web (perhaps by a filename in other
 contexts, like Node).  That stays as a string in every design.  The
 other is a library that I know I have available, and that we're not
 using strings for.   Instead, you refer to modules by name, just like
 variables are referred to by name in JS.

I believe this "refer to known modules by variable name" approach creates
a two-pathway system that does not need to exist. It complicates module
concatenation for deployment performance. It also prevents things like
loader plugins from being able to inline their resources in a build,
since loader plugin IDs can have funky characters, like "!", in them.

For me, having (ideally native) support for loader plugins really
helps reduce some of the callback-itis/pyramid of doom for module
resources too (as demonstrated in that blog post).

 Second, we don't want to just stop with export a value.  By allowing
 the module system to know something ahead of time about what names a
 module exports, and therefore what names another module imports, we
 enable everything from errors when a variable is mistyped to
 cross-module inlining by the VM. Static information about modules is
 also crucial to other, long-term, desires that at least some of us
 have, like macros.

I believe loader plugins are much more valuable to the end developer
than the possible advantages, under the covers, of compile time wiring.
Says the end developer that does not have to implement a VM. :)

Since loader plugins require the ability to run and return their value
later, compile time wiring would not be able to support them. Or maybe
they could? I would love to hear more about how that would work.

As mentioned in the post, some of that static checking could be
achieved via a comment-based system which optimizes out cleanly, and
would give richer information than what could be determined via the
module checking (in particular usage notes). It is not perfect, and a
very easy bikeshed, but I believe it would simplify end developers' lives
more. But let's put that on the backburner; I do not want to get into
what that might look like.

The main point: the compile time semantics in the current proposal
make it harder (impossible?) to support loader plugins and do not
allow for easier shimming/opt-in. As an end developer, I do not like
that trade-off.

But this is hard work, and you cannot please everyone. Just wanted to
mention that there are concrete advantages to an API that allows some
runtime resolution. Advantages I, and other AMD users, have grown to
love since they simplify module development in an async, IO
constrained environment.

Maybe my understanding is incomplete though and loader plugins might
be able to fit into the model.

 Third, the basic mechanisms available for existing JS module systems
 require the use of callbacks and the event loop for loading external
 modules.  By building the system into the language, we can not only
 make life easier for programmers, we statically expose the
 dependencies to the browser, enabling prefetching -- here, the basic
 mechanisms don't have to be the same.

Dependencies as string names still seem to support giving the browser
more info before running the code, even in an API-based system. I
really like the idea behind the module_loaders and node's vm module,
and I can see those kinds of containers being fancy enough to pull
this info out.

Thanks for the feedback,
James
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Harmony modules feedback

2012-01-16 Thread James Burke
On Mon, Jan 16, 2012 at 9:44 PM, John J Barton
johnjbar...@johnjbarton.com wrote:
 Doesn't Script-loader make about as much sense as literal XML HTTP
 Request? Imagine how impoverished our Web App world would be if it had
 turned out that Ajax only supported valid XML.

 Shouldn't the API
 in http://wiki.ecmascript.org/doku.php?id=harmony:module_loaders be split
 into one layer that takes URL - buffers and another that takes buffers -
 modules?  Then clever folks can do clever things with and between the two
 layers.

There are two things:

It would be nice to set up a module that can be used to resolve some
types of module/resource IDs that are not just a simple static
script request. This should be possible without having to have a
bootstrap script that sets up a resolver in a script file -- if I need
a specific bootstrap script before loading any modules, this does not
give the harmony infrastructure much of an advantage over what can be
done today in AMD, as far as initial developer setup. This would help
with the async callback nesting needed otherwise. Note that the
resolved resource may not be a transformed piece of text, but an image
or a CSS file that is attached to the DOM and loaded. It could also be
a resource assembled from a few text resources, like an i18n bundle.
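
As a very rough illustration of the kind of hook I mean (the API shape
here is entirely made up, just to show the flow of resolving a non-script
resource):

// Hypothetical: a resolver registered with the loader that can satisfy a
// CSS dependency by inserting a link element instead of fetching a script.
loader.resolve = function (id, done) {
    if (id.indexOf('css!') === 0) {
        var link = document.createElement('link');
        link.rel = 'stylesheet';
        link.href = id.slice(4);
        document.head.appendChild(link);
        done(link); // the link element stands in as the "module" value
    } else {
        done(null); // fall back to the default resolution
    }
};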

Second, having something like Node's vm module would be nice for text
targets. Or maybe it is like you mention: it gets called on completion
of the text fetch, allows transforms, then executes the result and has it
interpreted as a module, optionally in a sandbox. But that starts to
feel like it is drifting from the desired compile time attributes
currently specified for harmony modules. Maybe not though, maybe the
compiling waits for those transforms to complete. Although ideally the
transformers are modules.

James
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Harmony modules feedback

2012-01-11 Thread James Burke
I did a blog post on AMD that also talks about harmony modules. In the
ECMAScript section I talk about some of the questions I have about
with harmony modules:

http://tagneto.blogspot.com/2012/01/simplicity-and-javascript-modules.html

It is a fairly long post, not all of it applies to ECMAScript, but
some of it ties into my comments on ECMAScript, so I think it is best
to leave it in the blog post vs. reproducing here. Also, I do not
expect answers to my questions right away, just throwing out things
that would be nice to have answered as part of the final design.

James
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: ES5 Module Systems (was: Alternative proposal to privateName.public)

2011-12-26 Thread James Burke
On Mon, Dec 26, 2011 at 1:46 PM, Mark S. Miller erig...@google.com wrote:
 On Mon, Dec 26, 2011 at 12:15 PM, Axel Rauschmayer a...@rauschma.de wrote:
 The adapters are all a bit cumbersome to use. IMHO, the best option is
 still boilerplate (to conditionally turn an AMD into a Node.js module):

 ({ define: typeof define === "function"
 ? define
 : function(A,F) { module.exports = F.apply(null, A.map(require)) }
 }).
 define([ "./module1", "./module2" ],
 function (module1, module2) {
 return ...
 }
 );

This type of boilerplate may not be processed properly by AMD
optimizers that insert module IDs into the define call when multiple
modules are combined together. While the above may work with the
requirejs optimizer, it is not guaranteed to work in the future. I
still prefer that the define() call be a non-property function call and not
a call to an object method, to avoid possible conflicts with a module
export value (uglifyjs has one such conflict). A bit more background
in this message:

http://groups.google.com/group/nodejs-dev/msg/aaaf40dfeca04314

In that message I also mention encouraging the use of the amdefine
adapter module in Node. Use of this module will help test drive a
define() implementation that could be integrated into Node later, and
by having Node packages use the module, it would give Node committers
a way to scan the package.json info to find out if define() is
used enough to warrant consideration in their core.

amdefine also supports a callback-style require (with callback fired
on nextTick()) and most of the loader plugin API, so it gives an idea
how big a complete define() implementation in Node might be:

https://github.com/jrburke/amdefine
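
For reference, the opt-in amdefine asks for is a single line at the top of
a Node module (this is the boilerplate from the amdefine README, as I
recall it; './dep' and the export are placeholders):

if (typeof define !== 'function') { var define = require('amdefine')(module); }

define(function (require) {
    var dep = require('./dep');
    return { name: 'example' };
});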

But if inline boilerplate is desired instead of a dependency on an
npm-installed module, there are a set of options in this project:

https://github.com/umdjs/umd

which include boilerplates that also work in a "just use globals with
ordered HTML script elements" browser setup. I personally prefer this
one if AMD+Node support is desired:

https://github.com/umdjs/umd/blob/master/returnExports.js
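
Paraphrasing the shape of that returnExports pattern from memory (see the
linked file for the canonical version; 'b' is a placeholder dependency):

(function (root, factory) {
    if (typeof define === 'function' && define.amd) {
        // AMD: register as an anonymous module.
        define(['b'], factory);
    } else if (typeof module === 'object' && module.exports) {
        // Node/CommonJS.
        module.exports = factory(require('b'));
    } else {
        // Browser globals (root is window).
        root.returnExports = factory(root.b);
    }
}(this, function (b) {
    // The module value is whatever the factory returns.
    return {};
}));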

I also encourage any ES.next module system to allow opt-in via a
similar type of boilerplate. However, I
am concerned that the declarative nature and new syntax needed for
ES.next modules would make this difficult to do. I have given feedback
offline to Dave Herman about it and was not going to bring it up more
publicly until later, but given the talk of boilerplate, seems
appropriate to mention it here.

James
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Loyal Opposition to Const, Private, Freeze, Non-Configurable, Non-Writable...

2011-11-02 Thread James Burke
Accidentally sent this only to Brendan, meant to send to list, resending:

On Wed, Nov 2, 2011 at 10:39 AM, Brendan Eich bren...@mozilla.com wrote:
 2. Web developers both depend on and wish to minimize the size (to zero 
 bytes) of libraries.

 3. Programming in the large, even ignoring security, means many of us want to 
 save ourselves from assigning to something we want non-writable and 
 initialize-only. That's const. Or we want to avoid name collision worries, 
 never mind hostile attacks, on private data. That's private, or else you use 
 closures and pay some non-trivial price (verbosity and object allocation).

 (3) implies adding encapsulation and integrity features, on an opt-in basis.

Does adding encapsulation and integrity mean adding more things to the JS
language, or addressing how JS is encapsulated and run?

The SES and mashup use cases can be used as examples. Are the problems
with the JS language or are they better served by securing not just JS
but the DOM and network calls and having efficient containers that can
communicate?

iframes or the FRAG proposal, maybe, or some other JS container entity,
with async postMessage passing of JSON-friendly data, might be the way
forward for those things.
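
A minimal sketch of what I mean by a container: the host page and an
isolated frame exchange only JSON-friendly data (the origin and message
shape are placeholders):

var frame = document.getElementById('widget-frame'); // hypothetical iframe
frame.contentWindow.postMessage(
    JSON.stringify({ type: 'render', items: [1, 2, 3] }),
    'https://widgets.example.com'
);

window.addEventListener('message', function (event) {
    if (event.origin !== 'https://widgets.example.com') { return; }
    var data = JSON.parse(event.data);
    // react to messages from the isolated code here
}, false);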

Getting those to perform well and perhaps building better APIs around
them might be more effective solutions than adding more
security/integrity features to JS syntax.

Similar to how focusing on improving the VM container to get JS to
perform better was a better route than adding static type checking
syntax.

The Harmony module loader API feels like another parallel. That is
more of an API to a JS container of sorts with certain isolation
properties. It does not necessarily mean that a JS module syntax needs
to exist for that type of API to be useful.

Since it is an API, a loader API is easier to integrate into existing
systems. With legacy upgrade costs (IE and even older android) and the
possibility for editors to support comment-based systems that allow
type and intellisense support (for quicker turnaround on a deeper set
of errors than the module syntax could provide), I give a loader API a
stronger chance of having a wider impact than module syntax.

 (2) means we keep working on the language, until it reaches some 
 macro-apotheosis.

In any game, it is tempting, particularly for the game designers, to
try to keep tweaking the game to make it better. But that easily turns
into destroying the reason the game existed in the first place, and why
it became popular.

I feel that getting to zero probably means not having a successful,
widely used and deployed language. This feedback is not so helpful
though, right? The devil is in the details. I just needed to get it
out of my system, and the mention of macros set off the "too much game
tweaking" bells.

James
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: JsDoc based type guards

2011-10-14 Thread James Burke
On Fri, Oct 14, 2011 at 5:23 AM, Peter Dekkers pe...@jbaron.com wrote:
 1. JsDoc becomes a standard part of the language. This has to be a good
 thing, even without using it as a basis for stronger typing: a
 uniform way of documenting code.
 2. No new language constructs are required.
 3. Relatively simple to implement (I would assume).
 4. Even more IDE's will start using this to assist developers with
 things like code completion.
 5. Code that uses this mechanism can still run on older VM's that
 don't support this. They are just normal comments for those VM's.

I agree with these points: I prefer to have a comment-based system
because it works today and can be stripped for optimization easily. As
you can probably sense, trying to standardize that approach on this
list is probably unlikely. Or at least a long road.
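
For anyone who has not seen the style, the annotations under discussion
look roughly like this (plain JSDoc tags; an editor can read them for
completion and an optimizer can strip them as comments -- Widget here is a
placeholder constructor):

/**
 * @param {Element} node Element the widget attaches to.
 * @param {{title: string, count: number}} options Display options.
 * @return {Widget} The constructed widget.
 */
function createWidget(node, options) {
    return new Widget(node, options);
}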

What might work better is getting a coordinated effort together to
make sure JSDoc (or whatever the comment syntax is, JSDoc is fine
though) works well in editors that people use. Creating a
communication list/site to hash out the issues/track progress is a
good way to go. It is best to lay some groundwork first -- engage the
JSDoc folks, work up a plan of attack and make sure there are some
people who can make progress on the tasks.

I am not a doer for that set of tasks, but I am interested in
following the effort if you do get it set up, and I would like to use
this system for my own code. If you go that route, please post back
with the details.

You may have already thought of this path, and I do not mean to stifle
discussion of the issues on this list. Just trying to point to a route
that will likely lead to results in a quicker timeline. Since it seems
like the window is closing/closed for the next ES version proposals,
spending the time working on the hard issues of getting this type of
comment system in the tools used today will set up a better discussion
of this approach for the ES version after next.

James
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Modules: allowed combinations of module, export

2011-05-19 Thread James Burke
Looking at harmony modules[1], I wanted to get some clarification on
the order of export, module and such in a module's body.

Looking at the Syntax section, this section:

ModuleElement(load) ::= Statement
| VariableDeclaration
| FunctionDeclaration
| ModuleDeclaration(load)
| ImportDeclaration(load)
| ExportDeclaration(load)

implies to me that the following is allowed:

//baz.js
module foo = require('foo.js');

//foo.js:
export var name = 'foo';
module bar = require('bar.js');
var barName = bar.name;

//bar.js:
export var name = 'bar';
module foo = require('foo.js');
//This works because foo.js did the export before the require?
var fooName = foo.name;

whereas this would result in an error? (foo.js puts the export below the
require() line):

//baz.js
module foo = require('foo.js');

//foo.js:
module bar = require('bar.js');
export var name = 'foo';
var barName = bar.name;

//bar.js:
export var name = 'bar';
module foo = require('foo.js');
var fooName = foo.name;

because I read the syntax section above as saying
ModuleDeclaration(load) or ImportDeclaration(load) or
ExportDeclaration(load); in other words, any of those things can
occur, and they can occur in any order. Additionally, modules are
executed when the require() is reached.

However, the following would *not* work, since ModuleDeclaration(load)
can only show up as a direct child of a ProgramElement(load),
ExportDeclaration(load) or ModuleElement(load), and those items can
only have ProgramElement(load) or Program(load) as parents, so a
module bar statement could not appear as part of an if/else:

//foo.js
//This should error out as a syntax/compilation error
//(assume someCondition is valid here)
if (someCondition) {
   module bar = require('bar.js');
} else {
   module bar2 = require('bar2.js');
}

James

[1] http://wiki.ecmascript.org/doku.php?id=harmony:modules
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Questions about Harmony Modules

2011-04-06 Thread James Burke
On Wed, Apr 6, 2011 at 8:25 AM, David Herman dher...@mozilla.com wrote:
 Sure. Surface syntax isn't set in stone, but we aren't likely to go back to 
 just the string literal, since it looks too much like the module is being 
 assigned a string value.

I know you do not want to get into bikeshedding at this point, and I
will not follow this up any more unless you explicitly ask me, but I
strongly encourage just using the string value, sans require. It is
much more concise, and OK given that modules are special (compile-time
processing, special injection via import).

If string IDs are allowed for inline module declarations to allow
optimization bundling:

module "some/thing" {}

that would help feed the consistency when seeing: module M = "some/thing".

James
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Questions about Harmony Modules

2011-04-03 Thread James Burke
On Sat, Apr 2, 2011 at 4:33 PM, David Herman dher...@mozilla.com wrote:
 2) Set module export value
 
 That said, we could consider adding functionality to make a module callable, 
 to address the use case you describe. Thanks for bringing this up.

Allowing a callable module would go a long way towards bridging the
gap with setting the exports value, as that is the primary use of that
feature, although it has also been useful for text plugins to set the value
of an AMD module to a text string (see the loader plugins section).

 3a) Is module needed inside a module?

 Yes. A module declaration declares a binding to a static module. A let/var 
 declaration declares a binding to a dynamic value. You only get compile-time 
 linking and scoping with static modules.

In that case, I can see the appeal for what Dmitry mentions in his
reply, just reducing it to:

module thing = "some/thing";

Ideally, you could have multiple ones with one module word:

module datePicker = "datePicker",
thing = "some/thing",
Q = "Q";

That would really help with the typing cost seen in traditional
CommonJS modules (require is typed a lot), and give better local
identifier to module name alignment than what happens in AMD, where
the dependencies and local var names are separate lists:

define(["datePicker", "some/thing", "Q"],
function (datePicker, thing, Q) {

});

I have gone to aligning the lists to avoid positioning problems, but I
can see how people could find it ugly. Example:
https://github.com/mozilla/f1/blob/master/web/dev/share/panel/index.js
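
The aligned form looks roughly like this (the module names are just
illustrative):

define([
    'datePicker',
    'some/thing',
    'Q'
], function (
    datePicker,
    thing,
    Q
) {
    // module body
});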

 3b) Is import needed inside a module?
 For many cases, I think that's fine. But .* is an important convenience for 
 scripting. Also, FWIW, import is already reserved and has symmetry with 
 export, so it seems like a wasted opportunity to fill a natural space in the 
 grammar.

Understood. Thanks to you and Brendan for running that one down. I
like the idea of a let {} = someObject; where it only grabs the own
properties of that object at the time of the call, but I can
appreciate if that is difficult to work out and if it skates too close
to with.

 4) How to optimize
 

 I think this is beyond the scope of ECMAScript.next. People are still 
 figuring out how to optimize delivery, and web performance patterns are still 
 shaking out. ISTM there will be situations where it's more performant to 
 combine many modules into one (which can be done with nested module 
 declarations) and others where it's more performant to conditionally/lazily 
 load modules separately (which can be done with module loaders). I don't 
 currently have a clear picture of how we could build syntactic conveniences 
 or abstractions for doing this, but at least the pieces are there so 
 programmers have the building blocks to start constructing their own tools 
 for doing this.

The experience in RequireJS/AMD and in Dojo land is that different
people/projects want different things: sometimes build all modules
into one file, sometimes build some together in one or a couple of
files and have those sets of modules be usable by some other modules
that are loaded separately.

When you say a built file would be possible with nested module
declarations, that makes it sound like those nested modules may not be
usable/visible by other modules that are not loaded as part of that
built file. It would be interesting to explore that more at some
point.

Using string names as the module names in AMD has helped make it
possible to meet the optimization expectations we have seen so far. So
a module that has a 'some/thing' dependency:

define("my/thing", ["some/thing"], function (thing){})

when the 'some/thing' module is built into the optimized file, it has its
string name as the first arg:

define('some/thing', function () {});

I can see this as trickier in Harmony modules, at least for the
examples I have seen, where 'some/thing' needs to be an identifier
like:

module someThing {}

Maybe allow strings for the names instead (I say with blissful syntax
ignorance). I am just trying to figure out how to match up string
references to modules inside a module with a named thing in a built
layer.

 5) Loader plugins
 

 Some of the AMD loaders support loader plugins[7].

 Missing a reference in your bibliography. :)

Sorry, a link would be helpful:
http://requirejs.org/docs/plugins.html

 6) Loader API: multiple modules
 Perhaps. We're trying to get the building blocks in place first. We can 
 figure out what conveniences are needed on top of that.

 BTW, what you're asking for is essentially a concurrent join, which is 
 convenient to express in a new library I'm working on called jsTask:

     let [m1, m2, m3] = yield join(load("m1.js"), load("m2.js"), load("m3.js"));

I like the ideas behind jsTask, although I would rather not type "load" that much:

let [m1, m2, m3] = yield load(["m1.js", "m2.js", "m3.js"]);

Array of modules via require([], function (){}) has been useful in
RequireJS, and has a nice parity with the 

Questions about Harmony Modules

2011-04-01 Thread James Burke
I was looking over the harmony proposals around modules[1], module
loaders[2], and the module examples[3], and have some questions. These
questions come from making a module system/loader that works in
today's browsers/Node/Rhino, via AMD[4] and RequireJS[5].

1) Files as modules do not need module wrapper

Just to confirm, if a JS file contains only a module definition, the
module X{} wrapper is not needed? So I could do the following in the
browser:

module JSON = require('http://json.org/modules/json2.js');

and json.js could just look like:

export stringify = ...;
export parse = ...;

Correct?

2) Set module export value


Is it possible to support setting the value of the module to be a
function or some other value? The Node implementation of CommonJS
modules allows setting the exported value via module.exports =, and
the AMD API allows it via the return value.

This is useful because many modules are just one function, for
instance, a constructor function, and having to create a property name
that is the same name as the module is very awkward, particularly
since module names are not fixed:

import (require('PersonV2')).{Person}

vs.

//This should probably be module
//instead of var, but see question #3
var Person = require('PersonV2');

What about the following rule in a module:

If return is used in the module, then it sets the value of the
module, and export cannot be used. If export is used in the module,
then return cannot be used.

I know that means allowing syntax that is not allowed in
traditional scripts: in the module examples wiki page, a module can
just be a file with no module wrapper, and in that case, with the
rule above, there could be a return at the top level. However, maybe
that can be allowed if the file is a module?

3) Are module and import needed inside a module?


I can see that module may be needed when defining a module, like:

module Math {}

However, inside a module definition, is it possible to get by without
needing module and import? For instance, instead of this:

module foo {
   import Geometry.{draw: drawShape};
   module a = require('a');

   a.bar();
   drawShape();
}

what about using var or let instead of module, and using destructuring
for the import:

module foo {
   var {draw: drawShape} = Geometry;
   var a = require('a');

   a.bar();
   drawShape();
}

Maybe these are allowed, and the examples need to be expanded? But if
they are allowed, then I do not see the need to specify the use of
module and import inside a module. The only advantage for import
may be import Geometry.* but I would rather see a generalized syntax
for doing that for any object, not just a module, something like:

let {} = Geometry;

That is just an example; I am not a syntax expert, I am not proposing
syntax, and that example probably would not work. I am just trying to
demonstrate the general usefulness of "create a local variable for
each property on this object".

The end goal is to limit the number of new special things in the
language the developer needs to learn for modules. Ideally it would
be: use 'module' to declare an inline module, then use require('')
inside it to get handles on other modules, and use var/let and
destructuring as you normally would.

4) How to optimize


How can I combine some modules for optimized code delivery? Is that
out of scope? Is the hope that something like resource packages[6]
take hold? There is value in distributing a single file with many
modules, both for web performance and for command line tools: giving
the developer one file to execute on the command-line is better than
having to ship them a directory.

5) Loader plugins


Some of the AMD loaders support loader plugins[7]. This allows an AMD
module to load other types of resources, and a module can treat them
as a dependency. For instance:

   //The 'text' module knows how to load text files,
   //this one loads text for an HTML template
   var template = require('text!templates/one.html');

   //The 'cs' module knows how to load CoffeeScript-formatted
   //modules via XHR, then transforms the
   //CoffeeScript text to regular JavaScript and
   //then executes that code to define the module.
   var Widget = require('cs!Widget');

Would something like this be possible, perhaps via the module loaders
API? It looks like the resolver API may come close, but I am not sure
it is enough. This may be difficult if the module loader wants to
parse all the require() calls and fetch those resources before doing any
module evaluation, since loader plugins are modules that implement an
API and need to be evaluated so that they can process the dependency
string and satisfy the dependency.
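
For reference, this is roughly what a loader plugin looks like in AMD
terms: the plugin is itself a module that exports a load() hook, so it has
to be evaluated before the resource it resolves can be fetched (a sketch
of a text-style plugin, error handling omitted):

define({
    load: function (name, parentRequire, onload, config) {
        var xhr = new XMLHttpRequest();
        xhr.open('GET', parentRequire.toUrl(name), true);
        xhr.onreadystatechange = function () {
            if (xhr.readyState === 4) {
                // Whatever is passed to onload() becomes the module value.
                onload(xhr.responseText);
            }
        };
        xhr.send(null);
    }
});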

6) Loader API: multiple modules


The loader API seems to only allow loading one module at a time with a
callback (looking at Loader.load). Is there a way to say load these
three modules, and call this callback 

Simple Modules: lazy dependency evaluation

2011-01-26 Thread James Burke
CommonJS Modules 1.1 allows this kind of construct in module code:

var a;
if (someCondition) {
a = require("a1");
} else {
a = require("a2");
}

and the module "a1" is not actually evaluated until execution reaches
the a = require("a1") call.

1) Could something like this work in Simple Modules? If so, what would
be the syntax for it?
2) What are the design decisions behind only allowing module and
use at the top level of a module?

There are some discussions on the CommonJS list about eager vs. lazy
evaluation of dependencies (the above Modules 1.1 example is seen as a
lazy evaluation of either the a1 or a2 dependency), and it would
be helpful to know how or if Simple Modules could support the pattern
above and the design decisions around it.

James
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Function.prototype.toString to discover function dependencies

2010-09-20 Thread James Burke
On Sun, Sep 19, 2010 at 12:41 PM, Mark S. Miller erig...@google.com wrote:
 Given source code, of course, either recognition trick works. Given only
 meaning preserving source code, which is the most I was willing to propose
 on that strawman page, these recognition tricks fail
 http://wiki.ecmascript.org/doku.php?id=strawman:function_to_string#discussion.
 Would we be willing to specify enough fidelity with the original source code
 that your trick would work? I don't know. Perhaps AST preserving? Perhaps
 preserving of some abstraction over equivalent ASTs? I would like to clean
 up Function.prototype.toString for ES-Harmony. Opinions?

If meaning preserving source code had qualifications like:

1) If line returns are removed, comments must be removed.
2) String literals need to stay string literals.
3) The number of function arguments must stay the same, and function
argument names and their use within the function must be preserved.

that might be enough to give implementors enough flexibility but still
allow the require dependencies to be found? Although I am not a
language expert, maybe there are still holes with those
qualifications.

#3 could be relaxed to "The number of function arguments needs to stay
the same. The names can change, but code inside the function needs to
change to use the new names, and the use of those names needs to be
preserved."

That would require a bit more work from my side. If the toString
converted the function to something like function(r,e,m) {var
f=r('foo'),b=r('bar');}, then it would require more work to pull out
the r function arg. A bit less efficient, but doable. Probably not too
bad given that this should just be used in a source code loading
form, and not something that is used after optimization. For
optimization it will be common to place many require.def calls into
one file, so names for the modules and the dependencies can be pulled
out from the source and injected into the optimized require.def call.

But if it is all the same to implementors, then I prefer the first,
stronger wording of #3.

I went ahead and put support for this require.def toString approach
into the RequireJS implementation to try it out. There are still some
edges to clean up, but we will see how it goes. Right now I assume the
above qualifications on the toString() form. This seems to work out,
but more testing is needed. For the apparently small number of
implementations that do not preserve some usable string for
Function.prototype.toString, then the advice will be to use an
optimized/built version of the code that has pulled the dependencies
out already, or to use use the other syntax supported by RequireJS:

require.def(['foo', 'bar'], function(foo, bar){
//return a value to define the module. Alternatively,
//as for 'exports' as a dependency in the dependency
//array and specify it as a matching function arg
//if dealing with a rare circular dependency failure case.
return function(){};
});

James
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Function.prototype.toString to discover function dependencies

2010-09-15 Thread James Burke
First time posting to the es-discuss list:

Over on the CommonJS list, there is a thread about how to provide a
module format that works in today's browsers. The traditional CommonJS
module format cannot run in the browser via script tags without a
transform, and some of us want a format that will work without
transforms but can be easily hand-coded.

Tom Robinson made the following suggestion; is this an acceptable use
of Function.prototype.toString:

require.def(function (require, exports, module) {
var foo = require('foo'),
bar = require('bar');
});

Where require.def would call .toString() on the function passed to it,
regexp for the require() calls, then make sure to load those
dependencies before executing the function that defines the module.

So the toString() does not have to be an exact copy of the function,
just enough that allows the require() calls to be found.

I am leaving out a bunch of other stuff about the browser-friendly
module format (for instance, there is a transform for this format that
allows grouping more than one module in a file for optimized browser
delivery; that form does not use .toString(), and this approach
*would not* be used on minified code), but the main question for the
es-discuss list is:

Is this an acceptable use of Function.prototype.toString? What
problems would there be now or in the future?

What I have found so far:

The ECMAScript 3rd and 5th edition specs say the following:
--
15.3.4.2 Function.prototype.toString ( )

An implementation-dependent representation of the function is
returned. This representation has the syntax of a FunctionDeclaration.
Note in particular that the use and placement of white space, line
terminators, and semicolons within the representation string is
implementation-dependent.

The toString function is not generic; it throws a TypeError exception
if its this value is not a Function object. Therefore, it cannot be
transferred to other kinds of objects for use as a method.
--

In a 2008 es-discuss thread[1], Erik Arvidsson thought the
implementation-dependent part allowed for low-memory devices to not
keep a reversible implementation in memory.

A test was run in a few browsers to test the approach[2]: modern
desktop browsers (including IE 6+) give a usable toString() value, and
there are many mobile browsers that also work well: Android, iOS,
Windows Mobile, webOS and latest BlackBerry. So far BlackBerry 4.6 and
some version of Opera Mobile may not, and there are some like Symbian
and Bada that have not been tested yet.

So one issue is that browser support is not universal, but enough of
today's browsers, both desktop and mobile, support it that it may make
sense to use it going forward. While the spec technically allows an
implementation to
produce something that might cause the above mentioned Function
toString() approach to fail, it seems in practice it may be workable.

I am a bit wary of the approach, but initial testing seems to indicate
it may work, so I am looking to have an expert review done before
proceeding with the approach. If any of you have any other
information or concerns, it would be good to know.

James

[1 ]Thread starts with this post:
https://mail.mozilla.org/pipermail/es-discuss/2008-September/007632.html

[2] Browser test page:
http://requirejs.org/temp/fts.html
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss