Re: Simple Modules: lazy dependency evaluation
On Wed, Jan 26, 2011 at 5:04 PM, Brendan Eich bren...@mozilla.com wrote:

> CommonJS may do that on the server side, assuming fast enough file I/O. It's not necessarily a good idea even there (Ryan Dahl has talked about this). On the client, it's right out, which is why client-side CommonJS-like module systems require a preprocessor or else a callback.

There is work under way at CommonJS to rectify this problem. A few different proposals have emerged, which all share approximately the same theme:

- Explicitly decouple exports lookup and module loading; require() remains the exports-lookup interface.
- Introduce a way to know which modules are needed by a program, such as explicit declaration of dependencies, or static analysis of the source code looking for require() statements.
- Before the main module is executed, load all dependencies (recursively).

Adding a mandatory function wrapper to the module source code also allows these modules to be loaded and executed directly by DOM script-tag injection (an important technique, as XHR has cross-domain restrictions); the function wrapper also provides a convenient place to hang dependencies. Here is what a module in one proposal (Modules/2.0-draft7) looks like:

module.declare(["lib/dialog"], function(require, exports, module) {
  /* Any valid Modules/1.1.1 module goes here */
  require("lib/dialog").notify("hello, world");
})

(Oh -- main modules are modules which are executed automatically by the CommonJS host environment, e.g. by exec(3) shebang, HTML script tag, or other mechanism; the mechanism itself is not part of the specification.)

I should also say that all the proposals have a way to explicitly load a particular module at run-time, rather than via the dependency graph. The interface specifies a module name (or names) and a callback. Once the module and its dependent modules are loaded, the callback is executed. This lets us do lazy loading, like Simple Modules loaders, without breaking run-to-completion.
On Wed, Jan 26, 2011 at 6:25 PM, Kam Kasravi kamkasr...@yahoo.com wrote:

> Are you guys following Modules 2.0 at all? It seems to be a parallel universe of sorts under CommonJS.

Full disclosure - I am the principal author of that document. It is one of the proposals mentioned above. Its current status is "pompously-named document designed to get attention" - it is not a standard, and carries only the weight of the paper it is printed on. FWIW, it discusses much more than the CommonJS module system -- it also attempts to nail down the execution environment. That said, there is no way Modules/2.0 belongs under consideration by TC39; it is a "best effort with limited tools" proposal; Simple Modules gets to use new tools.

On Wed, Jan 26, 2011 at 5:40 PM, David Herman dher...@mozilla.com wrote:

> Just to flesh this out a bit: simple modules were designed to make it possible to import and export variables into lexical scope, and to be compatible with checking valid imports and exports statically, as well as being able to check for unbound variables statically. Including, for example, in the case where you say import M.*; Moreover, they are designed to allow loading modules *without* having to use callbacks, in the common case where they can be pre-loaded at compile-time.

James' query comes as an attempt to understand design decisions made for Simple Modules with respect to the timing of the evaluation of the module factory function. With loading and exports now decoupled, the question becomes: when does require actually evaluate the module body? The balancing point seems to be "Is it worth breaking backwards compatibility on existing platforms in order to try and mimic Simple Modules?" Breaking backwards compatibility, in this case, means evaluating the factories eagerly, as they are loaded into the environment while the dependency tree is satisfied (before the main module runs).
Currently, factories are executed as side-effects of the first require() call (per module - our modules are singletons). This timing is important, as factories can have observable side effects (consider the module above). So, what we're talking about is not just the Simple Modules loader, but the static declaration form as well. Static declaration is analogous to loading a list of modules comprised of the main module and its recursive dependencies, and then executing the main module. FWIW - my take on this is that porting CommonJS to Simple Modules is going to require a code audit anyhow; I believe that breaking backwards compatibility within the CommonJS family is not worth it to mimic such a small sliver of Simple Modules semantics. There are much larger mismatches (singletons, the module keyword, lexical scope, require(), to name just a few) which make it a moot point IMO. Kris Kowal's query is interesting: is lazy evaluation worth considering for Simple Modules?

module M {
  export var foo = 42;
  export function bar() { return foo; }
  alert("hello, world");
}

In the example above, the alert statement would occur when the first import from M statement were executed, rather than when the page containing the SCRIPT type="es-next" were loaded.
Re: Simple Modules: lazy dependency evaluation
On Jan 27, 2011, at 8:38 AM, Wes Garland wrote:

> Kris Kowal's query is interesting: is lazy evaluation worth considering for Simple Modules?
>
> module M {
>   export var foo = 42;
>   export function bar() { return foo; }
>   alert("hello, world");
> }
>
> In the example above, the alert statement would occur when the first import from M statement were executed, rather than when the page containing the SCRIPT type="es-next" were loaded.

Believe me, I have considered it. :) We thought for a while about demand-driven evaluation of modules. There are a couple of reasons why I believe it would be too problematic. First, we'd really like to make the act of throwing your code into a module as transparent as possible; changing the control flow would make modules more heavyweight. But more importantly, since, as you mentioned, module evaluation can contain arbitrary side effects, evaluating them lazily means laziness with side effects. This makes for really hard-to-understand and hard-to-debug initialization errors, where you end up having to write mysterious top-level imports to force evaluation of modules in particular orders. Laziness + side effects: bad scene, man.

Dave
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss
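The hazard Dave describes can be made concrete with a toy sketch, using plain closures to stand in for lazily evaluated modules (`makeLazyModule` is an illustrative name, not part of any proposal):

```javascript
// Under lazy evaluation, a module's side effects fire at first use,
// so evaluation order is driven by use sites, not declaration order.
const log = [];
function makeLazyModule(name, init) {
  let exports = null;
  return function importIt() {     // "importing" evaluates on demand
    if (!exports) {
      exports = init();
      log.push(name + " evaluated"); // observable side effect
    }
    return exports;
  };
}
const A = makeLazyModule("A", () => ({ v: 1 }));
const B = makeLazyModule("B", () => ({ v: 2 }));

B();  // touching B first...
A();
// log is ["B evaluated", "A evaluated"]: B's side effects ran first,
// even though A was declared first. Eager evaluation would give A, B.
```

To force A's side effects to run first, a programmer would have to add the kind of "mysterious top-level import" Dave warns about before the first use of B.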
promises | Communicating Event-Loop Concurrency and Distribution
Hi, I was curious to know the state of the following proposal: http://wiki.ecmascript.org/doku.php?id=strawman:concurrency

I do believe that having ES-native promises could provide a drastically better alternative for writing async code than the currently popular nested-callback style. Also, even though there are a few implementations of the Q API, adoption is still low; IMO that's due to non-obvious and verbose syntax. The syntactic sugar described in the proposal really makes a lot of difference. Also, I don't see a proposal for `Q.when` syntax and would love to know what the plan is for that.

Thanks
-- Irakli Gozalishvili
Web: http://www.jeditoolkit.com/
Address: 29 Rue Saint-Georges, 75009 Paris, France http://goo.gl/maps/3CHu
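For readers unfamiliar with the contrast Irakli is drawing, here is a minimal sketch using today's built-in Promises as a stand-in for the proposed native promises; `step1`/`step2` are hypothetical functions, not from the strawman.

```javascript
// Hypothetical async steps in node-style callback form.
function step1(cb) { cb(null, 1); }
function step2(x, cb) { cb(null, x + 1); }

// Callback style: nesting (and per-step error handling) grows with
// each dependent step.
step1(function (err, a) {
  if (err) throw err;
  step2(a, function (err, b) {
    if (err) throw err;
    // ... each further step nests one level deeper
  });
});

// Promise style: a flat chain with a single error path.
const step1P = () => Promise.resolve(1);
const step2P = (x) => Promise.resolve(x + 1);
step1P()
  .then(step2P)
  .then((b) => { /* b === 2 */ });
```

The strawman's sugar aims to shrink the promise style further still, but even unsugared, the flat chain avoids the rightward drift of the callback version.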
Re: [ES Harmony Proxies] Fundamental trap definition
Le 27/01/2011 11:20, Tom Van Cutsem a écrit :

> - Agreed that if we contemplate adding 'proxy' as an argument to getPropertyDescriptor and getPropertyNames, we should add it to all other traps as well.
> - Agreed that this is the simplest way of allowing a shared handler to get at the proxy it's currently 'serving'.
> - W.r.t. a handler not being able to perform typeof/instanceof on the proxy it's intercepting: this does have the benefit of detracting programmers from writing handlers that act differently for function vs object proxies. But agreed that this functionality might be needed to fix the quirky behavior of a host object.

This last point is actually a concern. Is the (one?) point of proxies to be able to fully emulate (and by 'emulate', I also mean 'fix') host objects? If so, proxies should have access to all internal methods/properties. As said in ES5 8.6.2, right under Table 8: "Every object (including host objects) must implement all of the internal properties listed in Table 8." If we want to be able to emulate host objects, we need to be as powerful with proxies. The question is: do we want to?

I have recently been working on emulating Arrays (they are host objects, aren't they?) with proxies and native objects. I'll put the code on GitHub and provide longer feedback in another e-mail, but the basic idea was to use a forwarding proxy and implement the only method that matters: defineProperty. I have almost blindly implemented+adapted [[DefineOwnProperty]] (ES5 15.4.5.1). Everything works perfectly so far. There are a few differences with native arrays, though:
- I have no control over the [[Class]] internal property, so even though my proxyArrays act like native ones, they can be discriminated quite easily.
- I haven't tested, but according to the semantics of Array initialisers (ES5 11.1.4), [] calls "new Array() where Array is the standard built-in constructor with that name". So even if I try window.Array = proxyArray, [] should not create one of my arrays.
This sounds perfectly fair.
- I obviously initialize my proxyArrays with Array.prototype as prototype. I haven't tested if it works; it is allowed not to, because in ES5, under each Array prototype method is written: "The *** function is intentionally generic; it does not require that its this value be an Array object. Therefore it can be transferred to other kinds of objects for use as a method. Whether the *** function can be applied successfully to a host object is implementation-dependent." They could be reimplemented if needed to fully emulate Arrays. (I've worked on that a billion years ago: https://github.com/DavidBruant/ecma5array/blob/test_conformance/ecma5array.js)

So here is another question on proxies' goal/rationale: should proxies be powerful enough to fully emulate native Arrays? (If so, and if ES Harmony has proxies in it, then native Array could be specified as a proxy, which is awesome and a half in my opinion, but that's a different problem.)

Re. adding 'proxy' as an optional last parameter to all traps: what worries me is that for some traps, this could be terribly confusing. Consider:

Object.getOwnPropertyDescriptor(proxy, name);
// will trap as:
getOwnPropertyDescriptor: function(name, proxy) { ... }

The reversed argument order is going to bite people. Adding 'proxy' as a last optional argument is confusing in this way for get{Own}PropertyDescriptor, defineProperty, delete, hasOwn. It's OK for get{Own}PropertyNames, keys, fix, has, enumerate. get and set would then take 4 arguments, and it's unclear to me whether tacking 'proxy' onto the front or the back is better. (Alternatively, maybe 'get' and 'set' don't need access to the proxy after all. It seems to me that if 'receiver' has multiple proxies in its prototype chain, only the one closest to receiver is going to trigger a handler get/set trap.
OTOH, without a built-in Proxy.isTrapping(obj) method, I don't think the handler can find out which object on the prototype chain is a proxy.) We could choose to add 'proxy' at the right place in the argument list depending on the trap. This trades one kind of cognitive load (unexpected argument order for some traps) for another (no uniform position of 'proxy' across all traps, and no longer an optional argument for all traps). Either way, we need to decide whether this added API complexity is worth it. One more thing: giving the handler access to its proxy by default may increase the chance of runaway-recursion hazards, as every operation that the handler performs on its own proxy is going to call some trap on the handler itself. Ideally, we should be able to label the 'proxy' parameter with a big red warning sign saying "handle with care!". I ran into this problem already a couple of times when writing get traps. The 'receiver' parameter often is the proxy itself, and doing even simple things like trying to print the proxy (e.g. for debugging) will call proxy.toString, hence will recursively trigger the get trap.
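For reference, the array-emulation idea discussed above can be sketched with the Proxy API that was eventually standardized, rather than the 2011 handler-object strawman under discussion here. Only the index/length coupling of [[DefineOwnProperty]] is modeled, and `makeArrayLike` is an illustrative name.

```javascript
// Forwarding proxy whose defineProperty trap implements a simplified
// version of the index/length coupling from ES5 15.4.5.1. Note that a
// plain assignment like a[3] = "y" still reaches this trap: a proxy
// with no set trap routes the default [[Set]] through the receiver's
// [[DefineOwnProperty]], i.e. through defineProperty here.
function makeArrayLike() {
  const target = { length: 0 };
  return new Proxy(target, {
    defineProperty(t, key, desc) {
      const index = Number(key);
      if (Number.isInteger(index) && index >= 0) {
        // defining an index at or past the end grows length,
        // as native arrays do
        if (index >= t.length) t.length = index + 1;
      }
      return Reflect.defineProperty(t, key, desc);
    }
  });
}

const a = makeArrayLike();
a[0] = "x";
a[3] = "y";
// a.length is now 4, mimicking native array index/length behavior
```

This sketch still has the limitations Bruant lists: no control over [[Class]] (or, today, over Array.isArray), and `[]` literals still produce native arrays.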
Re: Simple Modules: lazy dependency evaluation
On Thu, Jan 27, 2011 at 7:27 AM, David Herman dher...@mozilla.com wrote:

> We thought for a while about demand-driven evaluation of modules. There are a couple of reasons why I believe it would be too problematic. First, we'd really like to make the act of throwing your code into a module as transparent as possible; changing the control flow would make modules more heavyweight. But more importantly, since, as you mentioned, module evaluation can contain arbitrary side effects, evaluating them lazily means laziness with side effects. This makes for really hard-to-understand and hard-to-debug initialization errors, where you end up having to write mysterious top-level imports to force evaluation of modules in particular orders. Laziness + side effects: bad scene, man.

On the opposite side of the argument, I presume that this means that modules are evaluated when their transitive dependencies are loaded. This would imply that the order in which the modules are delivered, possibly over a network using multiple connections, would determine the execution order, which would in turn be non-deterministic. Non-determinism + side effects is also a bad scene. Is there an alternate method proposed in Simple Modules for deterministically linearizing the evaluation order? Non-determinism is definitely a greater evil than giving developers a means to explicate the order in which they would like their side effects to be wrought.

Kris Kowal
Re: Simple Modules: lazy dependency evaluation
> On the opposite side of the argument, I presume that this means that modules are evaluated when their transitive dependencies are loaded. This would imply that the order in which the modules are delivered, possibly over a network using multiple connections, would determine the execution order, which would in turn be non-deterministic.

No, that's not the case. At compile-time, the compiler may load the *files* non-deterministically, but it is required to evaluate them in their declared order, deterministically.

> Non-determinism + side effects is also a bad scene.

Indeed.

Dave
Re: Simple Modules: lazy dependency evaluation
On Thu, Jan 27, 2011 at 9:14 AM, David Herman dher...@mozilla.com wrote:

> …but it is required to evaluate them in their declared order, deterministically.

Would you explain how declaration order is inferred from the contents of the unordered set of files? It's clear that the order is at least partially knowable through the order of module declarations within a single file, and that load directives would be replaced with a nest of modules, which is similar in effect to loading on demand if the load directive is considered a point of demand at run-time. And we're guaranteed that there are no files that would be loaded that are not reachable through transitive load directives. I suppose I've answered my own question, if all my assumptions are correct.

Kris Kowal
Re: Simple Modules: lazy dependency evaluation
The easiest way to think about it is to imagine the fully loaded bits of the whole program as if they were just declared inline in one big file. Then the total order is manifest -- it's just the order in which they appear in the program. The non-deterministic I/O performed by the compiler is just an internal detail of the engine's implementation that doesn't leak into the semantics.

Or here's a somewhat more operational way to think about it: start with the outermost program. It declares a bunch of modules, some of which are loaded from external files. But the declaration order of this first level of sub-modules is manifestly ordered in the program. Now maybe the compiler loads those files in parallel or out of order, but ultimately it has all the bits. Within each loaded file, the order of modules is (recursively now) itself totally ordered.

Dave

On Jan 27, 2011, at 11:30 AM, Kris Kowal wrote:

> On Thu, Jan 27, 2011 at 9:14 AM, David Herman dher...@mozilla.com wrote:
> > …but it is required to evaluate them in their declared order, deterministically.
>
> Would you explain how declaration order is inferred from the contents of the unordered set of files? It's clear that the order is at least partially knowable through the order of module declarations within a single file, and that load directives would be replaced with a nest of modules, which is similar in effect to loading on demand if the load directive is considered a point of demand at run-time. And we're guaranteed that there are no files that would be loaded that are not reachable through transitive load directives. I suppose I've answered my own question, if all my assumptions are correct.
>
> Kris Kowal
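Dave's "one big file" picture can be simulated in miniature. This is a toy sketch, not Simple Modules API; `declarationOrder` and `loaded` are illustrative names.

```javascript
// Toy simulation of the semantics Dave describes: file *contents* may
// arrive over the network in any order, but evaluation follows the
// declaration order manifest in the source.
const declarationOrder = ["A", "B", "C"];  // order in the source text
const loaded = {};                          // filled as "files" arrive
const log = [];

// simulate network delivery completing out of order
loaded["C"] = () => log.push("C");
loaded["A"] = () => log.push("A");
loaded["B"] = () => log.push("B");

// once everything is present, evaluate in declared order,
// not arrival order
for (const name of declarationOrder) loaded[name]();
// log is ["A", "B", "C"] regardless of how delivery was interleaved
```

The non-determinism is confined to filling `loaded`; the evaluation loop that produces observable side effects is fully deterministic.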
Re: [ES Harmony Proxies] Fundamental trap definition
I will let Tom and Mark field this one in full, but Arrays are *not* host objects; they are native objects in ECMA-262 terms. However, Allen has taken the action to reconcile and perhaps unify the spec's internal methods (its Meta-Object Protocol) and Proxies' MOP. This would allow the spec to treat arrays as if they were proxies with native handlers. In reality, of course, a proxy is an empty object with a handler object (shared or not) that holds mutable state (somehow: closure, weakmap, whatever). Arrays, OTOH, must be efficient: dense if possible, a single allocation if small, etc. No practical implementation would make a proxy and handler holding mutable state. This beautiful slide of Tom's (I believe) shows the parallelism and differences: http://brendaneich.com/brendaneich_content/uploads/selective-interception.png In C and C++ implementations, the handler is a suite of functions or methods, and all the array data is in the green object circle, or attached to it for efficient access with dynamic range over length.

/be

On Jan 27, 2011, at 8:33 AM, David Bruant wrote:

> Le 27/01/2011 11:20, Tom Van Cutsem a écrit :
> > - Agreed that if we contemplate adding 'proxy' as an argument to getPropertyDescriptor and getPropertyNames, we should add it to all other traps as well.
> > - Agreed that this is the simplest way of allowing a shared handler to get at the proxy it's currently 'serving'.
> > - W.r.t. a handler not being able to perform typeof/instanceof on the proxy it's intercepting: this does have the benefit of detracting programmers from writing handlers that act differently for function vs object proxies. But agreed that this functionality might be needed to fix the quirky behavior of a host object.
>
> This last point is actually a concern. Is the (one?) point of proxies to be able to fully emulate (and by 'emulate', I also mean 'fix') host objects? If so, proxies should have access to all internal methods/properties.
> As said in ES5 8.6.2, right under Table 8: "Every object (including host objects) must implement all of the internal properties listed in Table 8." If we want to be able to emulate host objects, we need to be as powerful with proxies. The question is: do we want to?
>
> I have recently been working on emulating Arrays (they are host objects, aren't they?) with proxies and native objects. I'll put the code on GitHub and provide longer feedback in another e-mail, but the basic idea was to use a forwarding proxy and implement the only method that matters: defineProperty. I have almost blindly implemented+adapted [[DefineOwnProperty]] (ES5 15.4.5.1). Everything works perfectly so far. There are a few differences with native arrays, though:
> - I have no control over the [[Class]] internal property, so even though my proxyArrays act like native ones, they can be discriminated quite easily.
> - I haven't tested, but according to the semantics of Array initialisers (ES5 11.1.4), [] calls "new Array() where Array is the standard built-in constructor with that name". So even if I try window.Array = proxyArray, [] should not create one of my arrays. This sounds perfectly fair.
> - I obviously initialize my proxyArrays with Array.prototype as prototype. I haven't tested if it works; it is allowed not to, because in ES5, under each Array prototype method is written: "The *** function is intentionally generic; it does not require that its this value be an Array object. Therefore it can be transferred to other kinds of objects for use as a method. Whether the *** function can be applied successfully to a host object is implementation-dependent." They could be reimplemented if needed to fully emulate Arrays. (I've worked on that a billion years ago: https://github.com/DavidBruant/ecma5array/blob/test_conformance/ecma5array.js)
>
> So here is another question on proxies' goal/rationale: should proxies be powerful enough to fully emulate native Arrays?
> (If so, and if ES Harmony has proxies in it, then native Array could be specified as a proxy, which is awesome and a half in my opinion, but that's a different problem.)
>
> Re. adding 'proxy' as an optional last parameter to all traps: what worries me is that for some traps, this could be terribly confusing. Consider:
>
> Object.getOwnPropertyDescriptor(proxy, name);
> // will trap as:
> getOwnPropertyDescriptor: function(name, proxy) { ... }
>
> The reversed argument order is going to bite people. Adding 'proxy' as a last optional argument is confusing in this way for get{Own}PropertyDescriptor, defineProperty, delete, hasOwn. It's OK for get{Own}PropertyNames, keys, fix, has, enumerate. get and set would then take 4 arguments, and it's unclear to me whether tacking 'proxy' onto the front or the back is better. (Alternatively, maybe 'get' and 'set' don't need access to the proxy after all. It seems to me that if 'receiver' has multiple proxies in its prototype chain, only the one
Re: promises | Communicating Event-Loop Concurrency and Distribution
On Thu, Jan 27, 2011 at 8:00 AM, Irakli Gozalishvili rfo...@gmail.com wrote:

> Hi, I was curious to know the state of the following proposal: http://wiki.ecmascript.org/doku.php?id=strawman:concurrency

It's on the agenda for the upcoming March meeting. I was planning to do some more work on it before declaring it ready for discussion. But since you raise it, I'm happy enough with it in its current state. I can clarify my remaining issues with it as discussion proceeds. So: this page is now ready for discussion.

I do expect that this strawman is too large to be accepted into ES-next as a whole. To get some of it accepted -- the syntactic sugar + some supporting semantics -- we need to find a clean boundary between the kernel and extension mechanisms the platform needs to provide vs. the remaining functionality that should be provided by libraries using those extension mechanisms. For example, Tyler's web_send uses such an extension API in ref_send to stretch these operations onto the network, by mapping them onto JSON/RESTful HTTPS operations. Kris Kowal's qcomm library uses Q.makePromise (like that proposed above) to stretch these operations over WebSockets, whose connection-oriented semantics enables a better mapping at the price of more specialized usage. I hope that Kevin Reid's caja-captp can also be reformulated as a library extending the Q API. (See links to ref_send, web_send, qcomm, and caja-captp at the bottom of the strawman page.)

> I do believe that having ES-native promises could provide a drastically better alternative for writing async code than the currently popular nested-callback style. Also, even though there are a few implementations of the Q API, adoption is still low; IMO that's due to non-obvious and verbose syntax. The syntactic sugar described in the proposal really makes a lot of difference. Also, I don't see a proposal for `Q.when` syntax and would love to know what the plan is for that.
Given a lightweight closure syntax, I don't think Q.when actually needs any further syntactic sugar. For example, here's the asyncAnd example from Figure 18.1 of http://erights.org/talks/thesis/ in JS with this strawman + destructuring + the lightweight closure syntax from HOBD (Harmony of Brendan's Dreams):

#asyncAnd(answerPs) {
  let countDown = answerPs.length;
  if (countDown === 0) { return true; }
  const {promise, resolver} = Q.defer();
  answerPs.forEach(#(answerP) {
    Q.when(answerP, #(answer) {
      if (answer) {
        if (--countDown === 0) { resolver(true); }
      } else {
        resolver(false);
      }
    }, #(ex) {
      resolver(Q.reject(ex));
    });
  });
  return promise;
}

The original asyncAnd in Figure 18.1 is in E, whose syntax was designed without legacy burden to make such code pleasant. Nevertheless, I don't think the code above suffers much in comparison. Of course, if you have a suggestion for how a sugared Q.when could improve on this enough to be worth the cost of yet more sugar, please suggest it. Thanks.

> Thanks
> -- Irakli Gozalishvili
> Web: http://www.jeditoolkit.com/
> Address: 29 Rue Saint-Georges, 75009 Paris, France http://goo.gl/maps/3CHu

--
Cheers,
--MarkM
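For comparison with the '#' closure syntax above, here is a translation of Mark's asyncAnd into standard function syntax, using today's built-in Promises in place of the proposed Q API. This is a translation for illustration, not part of the strawman.

```javascript
// asyncAnd: resolves to true iff every answer promise resolves truthy;
// resolves false as soon as any answer is falsy; rejects on first error.
function asyncAnd(answerPs) {
  let countDown = answerPs.length;
  // Note: unlike Mark's version (which returns a bare true here),
  // this translation always returns a promise.
  if (countDown === 0) return Promise.resolve(true);
  return new Promise((resolve, reject) => {
    answerPs.forEach((answerP) => {
      Promise.resolve(answerP).then((answer) => {
        if (answer) {
          if (--countDown === 0) resolve(true);
        } else {
          resolve(false);  // short-circuit: one falsy answer decides it
        }
      }, reject);
    });
  });
}
```

Seen this way, the cost of plain `function`/arrow syntax over `#` is small, which is consistent with Mark's point that Q.when needs no dedicated sugar once lightweight closures exist.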
Re: promises | Communicating Event-Loop Concurrency and Distribution
This looks like a case of creating in-language support for a library. This was done with json2.js because it was one of the most widely used libraries in existence, and similar JSON handlers were used in numerous libraries. The ref_send library and sibling Q-style implementations are nowhere close to that level of adoption. It seems like there is a large number of library functions that have proven useful enough (by real usage) to be ripe for incorporation into the language before the Q API. In fact, even within the realm of JavaScript promise libraries, this implementation/API, which has been discussed and known about for a number of years, has not become the dominant interface API; there are other approaches that have found more use. While I certainly like and appreciate the Q API, I don't think it has proven itself worthy to be added to the language.

The real pain point with promises isn't that it takes too many characters to type Q.post. How many bytes does the average JavaScript program spend on promise API calls? It's negligible; this proposal isn't solving any real-world problem. The main challenge and overhead of promises, or any CPS-esque code flow, is the complexity and overhead of continuing flow through callbacks. This is why there are proposals for shorter anonymous functions and single-frame continuations. Single-frame continuations provide the tools for building cleaner, simpler code flow with promises or any other callback-style mechanism, without standardizing on a single library. As Brendan mentioned in his last blog post (well, I guess it came from Guy Steele), a good language empowers users to build on it, rather than stifling them with a single-library approach.

Thanks,
Kris

On 1/27/2011 5:09 PM, Mark S. Miller wrote:

> On Thu, Jan 27, 2011 at 8:00 AM, Irakli Gozalishvili rfo...@gmail.com wrote:
> > Hi, I was curious to know the state of the following proposal: http://wiki.ecmascript.org/doku.php?id=strawman:concurrency
>
> It's on the agenda for the upcoming March meeting. I was planning to do some more work on it before declaring it ready for discussion. But since you raise it, I'm happy enough with it in its current state. I can clarify my remaining issues with it as discussion proceeds. So: this page is now ready for discussion.
>
> I do expect that this strawman is too large to be accepted into ES-next as a whole. To get some of it accepted -- the syntactic sugar + some supporting semantics -- we need to find a clean boundary between the kernel and extension mechanisms the platform needs to provide vs. the remaining functionality that should be provided by libraries using those extension mechanisms. For example, Tyler's web_send uses such an extension API in ref_send to stretch these operations onto the network, by mapping them onto JSON/RESTful HTTPS operations. Kris Kowal's qcomm library uses Q.makePromise (like that proposed above) to stretch these operations over WebSockets, whose connection-oriented semantics enables a better mapping at the price of more specialized usage. I hope that Kevin Reid's caja-captp can also be reformulated as a library extending the Q API. (See links to ref_send, web_send, qcomm, and caja-captp at the bottom of the strawman page.)
>
> > I do believe that having ES-native promises could provide a drastically better alternative for writing async code than the currently popular nested-callback style. Also, even though there are a few implementations of the Q API, adoption is still low; IMO that's due to non-obvious and verbose syntax. The syntactic sugar described in the proposal really makes a lot of difference. Also, I don't see a proposal for `Q.when` syntax and would love to know what the plan is for that.
> Given a lightweight closure syntax, I don't think Q.when actually needs any further syntactic sugar. For example, here's the asyncAnd example from Figure 18.1 of http://erights.org/talks/thesis/ in JS with this strawman + destructuring + the lightweight closure syntax from HOBD (Harmony of Brendan's Dreams):
>
> #asyncAnd(answerPs) {
>   let countDown = answerPs.length;
>   if (countDown === 0) { return true; }
>   const {promise, resolver} = Q.defer();
>   answerPs.forEach(#(answerP) {
>     Q.when(answerP, #(answer) {
>       if (answer) {
>         if (--countDown === 0) { resolver(true); }
>       } else {
>         resolver(false);
>       }
>     }, #(ex) {
>       resolver(Q.reject(ex));
>     });
>   });
>   return promise;
> }
>
> The original asyncAnd in Figure 18.1 is in E, whose syntax was designed without legacy burden to make such code pleasant. Nevertheless, I don't think the code above suffers much in comparison. Of course, if you have a suggestion of how a sugared Q.when can improve on this enough to be worth the cost