Re: weakref proposal update

2018-03-19 Thread Dean Tribble
er, by "it looks larger than it appears" I meant "it's smaller than it
appears" or "it looks much larger than it is" or some such :).

On Mon, Mar 19, 2018 at 5:21 PM, Dean Tribble <trib...@e-dean.com> wrote:

> A first round of spec-text is now pushed for the new APIs at
> https://github.com/tc39/proposal-weakrefs/blob/master/specs/spec.md.
>
> A note on the API change in the last presentation: it looks larger than it
> appears.
>
> 1) Some of WeakRef was pushed into a new parent, "WeakCell", so that it
> could better support long terms in wasm, and because there are now two types,
> 2) WeakRefGroup was renamed to WeakFactory.
>
> WeakRef is unchanged (and creation still preserves the Target until the
> end of turn). The new "WeakCell" is for finalization only, so it doesn't
> have a deref() *and* creation does not strongly preserve the Target until
> the end of the turn.
>
>
> On Fri, Mar 16, 2018 at 1:09 AM, Dean Tribble <trib...@e-dean.com> wrote:
>
>> We got another round of insurmountably good feedback, so I revised the
>> presentation again and am back to writing spec text.
>>
>> I will get the revised spec text out before finishing the revision of the
>> accompanying proposal doc, which is primarily background and explanation.
>>
>> On Tue, Mar 13, 2018 at 10:35 PM, Dean Tribble <trib...@e-dean.com>
>> wrote:
>>
>>> This is just a heads up that the WeakRefs *presentation* has been
>>> updated with the improved API and semantics, based on various issues and
>>> discussions from the first version of the proposal.  The new version is
>>> updated in place and is still at:
>>>
>>> https://github.com/tc39/proposal-weakrefs/blob/master/specs/Weak%20References%20for%20EcmaScript.pdf
>>>
>>> The update for the proposal document itself is still in progress. I will
>>> reply here when it is updated (this week).
>>>
>>> Thank you to all the people who posted issues and examples and
>>> participated in those discussions.
>>>


Re: weakref proposal update

2018-03-19 Thread Dean Tribble
A first round of spec-text is now pushed for the new APIs at
https://github.com/tc39/proposal-weakrefs/blob/master/specs/spec.md.

A note on the API change in the last presentation: it looks larger than it
appears.

1) Some of WeakRef was pushed into a new parent, "WeakCell", so that it
could better support long terms in wasm, and because there are now two types,
2) WeakRefGroup was renamed to WeakFactory.

WeakRef is unchanged (and creation still preserves the Target until the end
of turn). The new "WeakCell" is for finalization only, so it doesn't have a
deref() *and* creation does not strongly preserve the Target until the end
of the turn.
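
To make the shape of the new API concrete, here is a sketch of how the two
types might be used. The names (WeakFactory, makeRef, makeCell, deref) follow
the proposal discussion above; treat the details as illustrative, not
normative:

```javascript
// Illustrative only; names follow the proposal sketch above, not final spec.
const factory = new WeakFactory(holdings => {
  // finalization runs in its own turn, after the target is collected
  console.log("finalized:", holdings);
});

const ref = factory.makeRef(target, "ref holdings");
ref.deref();  // usable synchronously; target is kept alive to end of turn

const cell = factory.makeCell(target, "cell holdings");
// no deref(); creation does not strongly preserve target to end of turn
```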


On Fri, Mar 16, 2018 at 1:09 AM, Dean Tribble <trib...@e-dean.com> wrote:

> We got another round of insurmountably good feedback, so I revised the
> presentation again and am back to writing spec text.
>
> I will get the revised spec text out before finishing the revision of the
> accompanying proposal doc, which is primarily background and explanation.
>
> On Tue, Mar 13, 2018 at 10:35 PM, Dean Tribble <trib...@e-dean.com> wrote:
>
>> This is just a heads up that the WeakRefs *presentation* has been updated
>> with the improved API and semantics, based on various issues and
>> discussions from the first version of the proposal.  The new version is
>> updated in place and is still at:
>>
>> https://github.com/tc39/proposal-weakrefs/blob/master/specs/Weak%20References%20for%20EcmaScript.pdf
>>
>> The update for the proposal document itself is still in progress. I will
>> reply here when it is updated (this week).
>>
>> Thank you to all the people who posted issues and examples and
>> participated in those discussions.


Re: weakref proposal update

2018-03-15 Thread Dean Tribble
We got another round of insurmountably good feedback, so I revised the
presentation again and am back to writing spec text.

I will get the revised spec text out before finishing the revision of the
accompanying proposal doc, which is primarily background and explanation.

On Tue, Mar 13, 2018 at 10:35 PM, Dean Tribble <trib...@e-dean.com> wrote:

> This is just a heads up that the WeakRefs *presentation* has been updated
> with the improved API and semantics, based on various issues and
> discussions from the first version of the proposal.  The new version is
> updated in place and is still at:
>
> https://github.com/tc39/proposal-weakrefs/blob/master/specs/Weak%20References%20for%20EcmaScript.pdf
>
> The update for the proposal document itself is still in progress. I will
> reply here when it is updated (this week).
>
> Thank you to all the people who posted issues and examples and
> participated in those discussions.


weakref proposal update

2018-03-13 Thread Dean Tribble
This is just a heads up that the WeakRefs *presentation* has been updated
with the improved API and semantics, based on various issues and
discussions from the first version of the proposal.  The new version is
updated in place and is still at:

https://github.com/tc39/proposal-weakrefs/blob/master/specs/Weak%20References%20for%20EcmaScript.pdf

The update for the proposal document itself is still in progress. I will
reply here when it is updated (this week).

Thank you to all the people who posted issues and examples and participated
in those discussions.


Re: Enable async/await to work on functions that don't just return promises.

2017-02-26 Thread Dean Tribble
>
> Should `callee()` be asynchronous here?  To my mind, no, it shouldn't.
> Every single line here is synchronous, so the function itself should surely
> be synchronous.  Shouldn't functions that may not have `await` in them, but
> instead that are actually asynchronous and hence use the `async return`
> keyword be the ones we define with `async`?


In the Javascript (and Midori) model, concurrent execution of multiple
activities is achieved by breaking those activities up into coarse-grained,
application-defined "turns" (or "jobs") and interleaving those.  An async
boundary is where the current turn could end, and the turns for other
concurrent activities might run, changing the state before the current
activity proceeds.

Therefore, callee must be async, because that declares that there could be
a turn boundary within it, and thus, the rest of the state of the program
could change as a result of the call. The caller of callee *must* ensure
that its invariants are correct before allowing other code to interleave
with it.
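
A minimal sketch of the point (somethingAsync and reestablishInvariants are
placeholders, not part of any proposal):

```javascript
// Sketch: the async marker warns that a turn boundary may occur inside.
let balance = 100;

async function callee() {
  await somethingAsync(); // potential turn boundary: other turns may run
}

async function caller() {
  const before = balance;
  await callee(); // visibly async at the call site
  // balance may have changed while this function was suspended
  if (balance !== before) reestablishInvariants();
}
```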


Re: Feedback on Iterable Numbers Proposal?

2017-02-26 Thread Dean Tribble
A Range type seems to me clearer, more powerful, and less magical.  Even
without syntax, the clarity seems better:

// for-of syntax
for (const i of Range.upto(5)) {
  // do something with i
}


for (const i of Range.from(3, 15)) {
  // do something with i
}


Whether Range is a class or just a set of iterator constructors
depends on what else you can do with it. The larger proposed change does
not seem to me like it offsets the confusion introduced by magical syntax
(e.g., what is the result of new Array(4)?)
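
For concreteness, a minimal sketch of the Range constructors used above
(names from the example; not a concrete proposal):

```javascript
// Minimal sketch: Range as a set of iterator constructors, per the example.
const Range = {
  upto: function* (end) {
    for (let i = 0; i < end; i++) yield i;
  },
  from: function* (start, end) {
    for (let i = start; i < end; i++) yield i;
  },
};

[...Range.from(3, 6)]; // [3, 4, 5]
```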



On Sun, Feb 26, 2017 at 11:00 AM, John Henry  wrote:

> Howdy!,
>
> My name is John and I have a (hopefully non-contentious) addition for the
> ECMA Script Language described here:
> https://github.com/johnhenry/make-numbers-iterable. I wonder if there are
> good folks out
> there willing to give me feedback? I also wonder if someone might be
> willing to champion the proposal as described here:
> https://github.com/tc39/proposals/blob/master/CONTRIBUTING.md
>
> Thanks,
> -- John


Re: non-self referencial cyclical promises?

2016-02-24 Thread Dean Tribble
I agree that the standard should require a deterministic error, and I
thought it did. In
https://tc39.github.io/ecma262/#sec-promise-resolve-functions:

> 6. If SameValue(resolution, promise) is true, then
>   a. Let selfResolutionError be a newly created TypeError object.
>   b. Return RejectPromise(promise, selfResolutionError).


I suspect I assumed too much for "SameValue" here though. There's no
well-defined other semantic answer to a cycle; it essentially always
represents a bug, but could emerge dynamically out of bad code. You must
either catch that as soon as possible or it's extremely difficult to
isolate.
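
For illustration, the case the SameValue check does catch is direct
self-resolution; the two-promise cycle in the quoted example below evades it
because neither resolution is SameValue with its own promise:

```javascript
// Direct self-reference: rejected with a TypeError per the spec step above.
var resolve;
var p = new Promise(r => { resolve = r; });
resolve(p);
p.catch(e => console.log(e instanceof TypeError)); // true
```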

Another approach to efficiently achieve this in the implementation is to
vigorously shorten targets. In this approach, the `bf(a)` would first
shorten `a` so that it's internally pointing at the cell that `bf` will
resolve to (chains are typically short, so keeping chains short is
typically fast). Then the cycle check is simple and O(1). All the work is
in shortening. There are some patterns that can make for interim long
chains but they are straightforward to avoid.

On Wed, Feb 24, 2016 at 12:16 PM, Mark S. Miller  wrote:

>
>
> On Wed, Feb 24, 2016 at 11:54 AM, Bergi  wrote:
>
>> Bradley Meck wrote:
>>
>>> I was doing some recursive data structure work and ended up with a
>>> cyclical
>>> promise that did not use a direct self reference. It can be reduced down
>>> to:
>>>
>>> ```javascript
>>> var af, a = new Promise(f=>af=f);
>>> var bf, b = new Promise(f=>bf=f);
>>>
>>> af(b);bf(a); // the problem
>>>
>>> a.then(_=>_) // some env/libs need this to start checking status
>>> ```
>>>
>>> According to
>>> https://tc39.github.io/ecma262/#sec-promise-resolve-functions
>>> it looks like this should cause a recursive and infinite set of
>>> `EnqueueJob("PromiseJobs",...)`
>>>
>>
>> I fear that's what the standard says, yes. The ES6 spec does too many
>> (and in some cases, unreasonably many) `then` calls on promises anyway to
>> be followed by an efficient promise implementation.
>>
>> [Promises/A+](https://promisesaplus.com/) in contrast says
>>
>> | Implementations are encouraged, but not required, to detect such
>> | recursion and reject promise with an informative TypeError as the
>> | reason.
>>
>
> I think the standard *should* require a deterministic error. E <
> https://github.com/kpreid/e-on-java/blob/master/src/jsrc/org/erights/e/elib/ref/ViciousCycleException.java>,
> Q, and my own Q-like system <
> https://github.com/tvcutsem/es-lab/blob/master/src/ses/makeQ.js#L700> all
> do. Within an engine, this technique should be straightforward to implement
> without slowing down the non-cyclic case.
>
>
>
>
>> Regards,
>>  Bergi
>
> --
> Cheers,
> --MarkM


Re: Weak Reference proposal

2016-02-19 Thread Dean Tribble
Thanks for your comments.

A practical answer to your question:  If you drop references to a subsystem
that internally uses weak references, the "finalization" it would engage is
just death throes. For example, if you drop an Xml parser, then there's no
reason to muck out its internal cache since that's going to be collected
anyway. Thus, this variant is more expressive.

Requiring finalization in that case also breaks the retention properties of
the system. In order to require the executor to run, *something* has to
point at it (and the holdings) strongly. Otherwise, for example, the
holdings and executor might not be retained (and you couldn't run
finalization). You can end up with cycles of
executors pointing at each other's targets such that neither can ever be
collected because the system is keeping them around strongly.

On Thu, Feb 18, 2016 at 10:42 PM, John Lenz <concavel...@gmail.com> wrote:

> This seems like a very solid proposal.  I like that the finalizers run on
> their own turn (it had to be that way in retrospect).
>
> I'm unclear about one thing: the reasoning for not running finalizers when
> weak-references themselves become unreferenced.  Did I misunderstand this?
> Doesn't this force the "hard" reference to also be a soft reference ("weak
> reference") to ensure that finalizers run (such as closing a file, etc)?  If
> we aren't concerned about non-memory resources, is there any point to having
> holdings at all?
>
>
>
> On Sun, Feb 14, 2016 at 11:35 PM, Dean Tribble <trib...@e-dean.com> wrote:
>
>> I have posted a stage 1 proposal for weak references in ES7 for your
>> perusal and feedback.
>>
>> https://github.com/tc39/proposal-weakrefs.git
>>
>> Thanks to Mark Miller and the authors of earlier proposals for help with
>> the document and content!  Finally thanks to a few intrepid early reviewers
>> for their edits, comments, and feedback.


Re: Weak Reference proposal

2016-02-16 Thread Dean Tribble
I'm happy to do whatever is appropriate here. Is there a message I should
put in the Readme to emphasize the early stage?

On Tue, Feb 16, 2016 at 4:38 PM, Mark S. Miller  wrote:

>
>
> On Tue, Feb 16, 2016 at 4:26 PM, Kevin Smith  wrote:
>
>> I have no problem with that, but do wonder, why? What is the downside of
>> proposals being on the tc39 hub starting at an earlier stage, if the
>> authors are so inclined? The upside is fewer broken links.

>>>
>> Because having the tc39 "brand" on things sends a signal to the broader
>> community about the future state of ecma262?
>>
>
> Makes sense.
>
>
>>
>> Also, github forwards URLs when the repo is transferred, so we're good
>> there.  Case in point:  https://github.com/zenparsing/async-iteration
>>
>>>
> Cool! I did not know that.
>
>
> --
> Cheers,
> --MarkM


Weak Reference proposal

2016-02-14 Thread Dean Tribble
I have posted a stage 1 proposal for weak references in ES7 for your
perusal and feedback.

https://github.com/tc39/proposal-weakrefs.git

Thanks to Mark Miller and the authors of earlier proposals for help with
the document and content!  Finally thanks to a few intrepid early reviewers
for their edits, comments, and feedback.


Re: Promises as Cancelation Tokens

2016-01-04 Thread Dean Tribble
From experience, I'm very much in favor of the cancellation token.  Though
promises provide a good inspiration for cancellation, they don't quite fit
the problem directly.

The approach has been explored, though the details are not published. I
implemented cancellation for the Midori system. Midori was an incubation
project to build a new OS, applications, etc. in a safe language (a C#
variant) using lightweight processes communicating via promises (easily 2M+
lines of C#). Among other things, it strongly influenced the C# async
support. See http://joeduffyblog.com/2015/11/19/asynchronous-everything/
for more info. After looking at various options, the approach developed for
.Net covered all the requirements we had and could be adapted surprisingly
well to the async world of promises and Midori. There were some important
differences though.

In a concurrent, async, or distributed system, cancellation is
*necessarily* asynchronous: there's fundamentally a race between a
computation "expanding" (spawning more work) and cancellation running it
down. But as you note, a cancellation system must support the ability to
*synchronously* and cheaply test whether a computation is being cancelled.
That allows one for example to write a loop over thousands of records that
will abort quickly when cancelled without scheduling activity for the
remaining records. (Because of the inherent race in cancellation, I named
that "isCancelling" rather than "isCancelled" to be a small reminder that
computation elsewhere might not yet have heard that it should stop.)

In async cancellation, the promise "then" seems like it could support the
async "cleanup" action on cancellation. However, there are a lot of
cancellation use cases in which the appropriate cleanup action changes as a
computation proceeds. For those, using "then" is not adequate. For example,
a browser would have a cancellation token associated with a page load. Thus
the same token is used during parsing, retrieving secondary images, layout,
etc. If the user hits "stop", the token is cancelled, and so all the
various heterogeneous page rendering activities are cancelled. But the
cleanup action to "close the socket that you are retrieving an image over"
becomes expensive deadweight once the image has been retrieved. On a page
that loads 100 images four at a time, you would want 4 cleanup actions
registered, not 100.
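
A sketch of what that implies for the API, using a hypothetical
whenCancelling registration (name invented here, as are openSocket/readAll)
that returns an unregister function so a stale cleanup action can be dropped:

```javascript
// Hypothetical API for illustration only.
async function fetchImage(url, cancel) {
  const socket = await openSocket(url, cancel);
  const unregister = cancel.whenCancelling(() => socket.close());
  try {
    return await socket.readAll(cancel);
  } finally {
    unregister(); // image retrieved (or failed): drop the stale action
  }
}
```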

For that and other reasons, we found it much clearer to give
cancellationToken its own type. That also allows convenient patterns to be
directly supported, such as:

async function f(cancel) {
  await cheapOperation(cancel);
  cancel.throwIfCancelling(); // throws if the token is cancelling
  await expensiveOperation(cancel);
}

Note that the above would more likely have been written simply:

async function f(cancel) {
  await cheapOperation(cancel);
  await expensiveOperation(cancel);
}

The reason is that if the cheapOperation was aborted, the first await would
throw (assuming cheapOperation terminates abruptly or returns a broken
promise). If it got past cheapOperation, we already know that
expensiveOperation is going to be smart enough to cancel, so why clutter
our world with redundant aborts?  e.g.,

async function expensiveOperation(cancel) {
  while (hasFramesToRender() && !cancel.isCancelling()) {
    await renderFrame(this.nextFrame(), cancel);
  }
}

Thus, using cancellation tokens becomes simpler as cancellation becomes
more pervasive in the libraries. Typically, if an operation takes a
cancellation token as an argument, then you don't need to bother protecting
it from cancellation. As a result, explicit cancellation handling tends to
only be needed in lower level library implementation, and client code
passes their available token to either operations or to the creation of
objects (e.g., pass in your token when you open a file rather than on every
file operation).
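
A minimal sketch of such a token type (not the .NET API; just the synchronous
test and the throw helper used above):

```javascript
// Minimal sketch of a cancellation source/token pair.
class CancellationSource {
  constructor() {
    this._cancelling = false;
    const source = this;
    this.token = {
      isCancelling() { return source._cancelling; },
      throwIfCancelling() {
        if (source._cancelling) throw new Error("operation cancelled");
      },
    };
  }
  cancel() { this._cancelling = true; }
}

// Usage: pass source.token into operations; call source.cancel() to stop.
const source = new CancellationSource();
f(source.token); // the f(cancel) sketched above
```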



On Mon, Jan 4, 2016 at 7:46 AM, Kevin Smith  wrote:

> I'm interested in exploring the idea of using an approach similar to
> .NET's cancelation tokens in JS for async task cancelation.  Since the
> cancelation "flag" is effectively an eventual value, it seems like promises
> are well-suited to modeling the token.  Using a promise for a cancelation
> token would have the added benefit that the eventual result of any
> arbitrary async operation could be used as a cancelation token.
>
> First, has this idea been fully explored somewhere already?  We've
> discussed this idea on es-discuss in the past, but I don't remember any
> in-depth analysis.
>
> Second, it occurs to me that the current promise API isn't quite ideal for
> cancelation tokens, since we don't have synchronous inspection
> capabilities.  For example, suppose that we have this async function:
>
> async function f(cancel) {
>   let canceled = false;
>   cancel.then(_=> canceled = true);
>   await cheapOperation(cancel);
>  

Re: Exponentiation operator precedence

2015-08-27 Thread Dean Tribble
Ideally syntax proposals should include some frequency information to
motivate any change. Is there an easy search to estimate the frequency of
Math.pow? In my application codebase (financial app with only modest JS
use), there are very few uses, and there are as many uses of Math.sin as
there are of Math.pow.

Anecdotally, my eyes caught on: -Math.pow(2,-10*a/1) (from a charting
library) which makes me not want to have to review code where I'm worried
about the precedence of exponentiation.
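
For reference, the semantics that eventually shipped in ES2016 address
exactly this review worry by rejecting the ambiguous form outright:

```javascript
// ES2016 as standardized: unary minus directly before ** is a SyntaxError.
// let bad = -2 ** 2; // SyntaxError
let a = -(2 ** 2);    // -4
let b = (-2) ** 2;    //  4
```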

On Thu, Aug 27, 2015 at 7:32 AM, Kevin Smith zenpars...@gmail.com wrote:

 because the right-side-up way to say that is:

 e - a * c


 Yeah, I was waiting for someone to point that out, after I hit send.  : )
  I should spend more time setting up better examples...




Cancellation architectural observations

2015-03-01 Thread Dean Tribble
Another thread here brought up the challenge of supporting cancellation in
an async environment. I spent some time on that particular challenge a few
years ago, and it turned out to be bigger and more interesting than it
appeared on the surface. In the another thread, Ron Buckton pointed at the
.Net approach and it's use in JavaScript:


 AsyncJS (http://github.com/rbuckton/asyncjs) uses a separate abstraction
 for cancellation based on the .NET
 CancellationTokenSource/CancellationToken types. You can find more
 information about this abstraction in the MSDN documentation here:
 https://msdn.microsoft.com/en-us/library/dd997364(v=vs.110).aspx


It's great that asyncjs already has started using it. I was surprised at
how well the cancellationToken approach worked in both small applications
and when extended to a very large async system. I'll summarize some of the
architectural observations, especially from extending it to async:

*Cancel requests, not results*
Promises are like object references for async; any particular promise might
be returned or passed to more than one client. Usually, programmers would
be surprised if a returned or passed in reference just got ripped out from
under them *by another client*. this is especially obvious when considering
a library that gets a promise passed into it. Using cancel on the promise
is like having delete on object references; it's dangerous to use, and
unreliable to have used by others.

*Cancellation is heterogeneous*
It can be misleading to think about canceling a single activity. In most
systems, when cancellation happens, many unrelated tasks may need to be
cancelled for the same reason. For example, if a user hits a stop button on
a large incremental query after they see the first few results, what should
happen?

   - the async fetch of more query results should be terminated and the
   connection closed
   - background computation to process the remote results into renderable
   form should be stopped
   - rendering of not-yet rendered content should be stopped. This might
   include retrieval of secondary content for the items no longer of interest
   (e.g., album covers for the songs found by a complicated content search)
   - the animation of "loading more" should be stopped, and should be
   replaced with "user cancelled"
   - etc.

Some of these are different levels of abstraction, and for any non-trivial
application, there isn't a single piece of code that can know to terminate
all these activities. This kind of system also requires that cancellation
support is consistent across many very different types of components. But
if each activity takes a cancellationToken, in the above example, they just
get passed the one that would be cancelled if the user hits stop and the
right thing happens.

*Cancellation should be smart*
Libraries can and should be smart about how they cancel. In the case of an
async query, once the result of a query from the server has come back, it
may make sense to finish parsing and caching it rather than just
reflexively discarding it. In the case of a brokerage system, for example,
the round trip to the servers to get recent data is the expensive part.
Once that's been kicked off and a result is coming back, having it
available in a local cache in case the user asks again is efficient. If the
application spawned another worker, it may be more efficient to let the
worker complete (so that you can reuse it) rather than abruptly terminate
it (requiring discarding of the running worker and cached state).

*Cancellation is a race*
In an async system, new activities may be getting continuously scheduled by
tasks that are themselves scheduled but not currently running. The act of
cancelling needs to run in this environment. When cancel starts, you can
think of it as a signal racing out to catch up with all the computations
launched to achieve the now-cancelled objective. Some of those may choose
to complete (see the caching example above). Some may potentially keep
launching more work before that work itself gets signaled (yeah it's a bug
but people write buggy code). In an async system, cancellation is not
prompt. Thus, it's infeasible to ask "has cancellation finished?" because
that's not a well-defined state. Indeed, there can be code scheduled that
should and does not get cancelled (e.g., the result processor for a pub/sub
system), but that schedules work that will be cancelled (parse the
publication of an update to the now-cancelled query).

*Cancellation is "don't care"*
Because smart cancellation sometimes doesn't stop anything, and in an async
environment cancellation is racing with progress, it is at most a best
effort. When a set of computations is cancelled, the party cancelling the
activities is saying "I no longer care whether this completes." That is
importantly different from saying "I want to prevent this from completing."
The former is broadly usable resource reduction. The latter is only
usefully achieved in systems with 

Re: Precedence of yield operator

2013-06-14 Thread Dean Tribble
This is a familiar discussion from C#. I forwarded it to the mediator of
that conversation and got a nice summary, pasted here:

-- Forwarded message --
From: Mads Torgersen mads.torger...@microsoft.com
Date: Fri, Jun 14, 2013 at 2:11 PM
Subject: RE: Precedence of yield operator
To: Dean Tribble trib...@e-dean.com


I’m not on the mailing list. Feel free to forward to it.


In C# we have separate keywords too, and indeed the precedence differs as
described below. For “yield return” (our yield) the lower precedence falls
out naturally since it engenders a statement, not an
expression.


“await” is not a reserved keyword in C# either, but we managed to wedge
it in all the same. Just adding await as an operator would lead to all
kinds of ambiguity; e.g. “await (x)” could be a function call or an await
expression, and the statement “await x;” could be a variable declaration or
an await statement.


However, in C# “await” is only allowed inside methods marked “async”, and
since there weren’t any of those around before the feature was introduced,
it is not a breaking change. Inside non-async methods, therefore, “await”
continues to be just an identifier.


I don’t know if a similar thing is possible in EcmaScript. But I believe
that a low-precedence yield as a substitute for a high-precedence await is
problematic: you never want “yield a + yield b” to mean “yield (a + (yield
b))”: the things you await – Task, Promises, Futures, whatever you call
them – just don’t have operators defined on them, and it would be silly to
parse them as if they might and then give errors (at runtime in EcmaScript,
at compile time in e.g. TypeScript).


Mads

On Fri, Jun 14, 2013 at 11:07 AM, Brendan Eich bren...@mozilla.com wrote:

 Bruno Jouhier wrote:

 While playing with my little async/await library, I noticed that I was
 often forced to parenthesize yield expressions as (yield exp) because of
 the low precedence of the yield operator. Typical patterns are:

 var foo = (yield a()) + (yield b()) + (yield c());


 That's actually a hard case, IMHO -- and hard cases make bad law.

 Many programmers would rather have the extra parens for uncertain cases
 (C's misplaced bitwise-logical and shift operators, vs.
 equality/relational; anything novel such as yield).

 But the real reason for yield being low precedence is to avoid precedence
 inversion. Consider if yield could be high-precedence, say a unary operator
 like delete:

 let x = yield a + b;

 Oops, many people would want that to be equivalent to

 let x = yield (a + b);

 but if yield is at delete's precedence level, it's rather:

 let x = (yield a) + b;

 Which is the rare (hard, from law school) case.

 For commutative operators such as +, over-parenthesizing is better again,
 because

 let x = b + (yield a);

 and

 let x = (yield a) + b;

 ignoring order of effects in shared mutable store, and ignoring floating
 point non-determinism, are equivalent.

 /be
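
In ES6 as specified, the low precedence plays out as Brendan describes (a and
b assumed in scope; sketch only):

```javascript
function* g(a, b) {
  // yield binds loosely: this parses as yield (a + b)
  let x = yield a + b;
  // to add b to the value sent back into the generator, parenthesize:
  let y = (yield a) + b;
}
```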



Re: A Challenge Problem for Promise Designers (was: Re: Futures)

2013-04-25 Thread Dean Tribble
I've built multiple large systems using promises. A fundamental distinction
that must be clear to the client of a function is whether the function
goes async:  does it return a result that can be used synchronously or
will the result only be available in a later turn. The .Net async libraries
require the async keyword precisely to surface that in the signature of the
function; i.e., it is a breaking change to a function to go from returning
a ground result vs. a promise for a result.  The same basically isn't
true for returning a promise that will only be resolved after several turns.

For example: I have a helper function to get the size of contents at the
other end of a URL. Since that requires IO, it must return a Promise<int>.

size: (url) => {
  return url.read().then(contents => contents.length)
}

This is obviously an expensive way to do it, and later when I get wired
into some nice web caching abstraction, I discover that the cache has a
similar operation, but with a smarter implementation; e.g., that can get
the answer back by looking at content-length in the header or file length
in the cache. That operation of course may require IO so it returns a
promise as well. Should the client type be different just because the
implementation uses any of several perfectly reasonable approaches for the
implementation.

size: (url) => {
  return _cacheService.getLength(url)
}

If, in order to not change the signature, I have to "then" the result, it
leads to

size: (url) => {
  return _cacheService.getLength(url).then(length => length)
}

This just adds allocation and scheduling overhead for the useless then
block, precludes (huge) tail return optimization, and clutters the code.
This also leads to a depth of nesting types which is comparable to the
function nesting depth (i.e., if x calls y calls z do I have
Promise<Promise<Promise<Z>>>?), which is overwhelming both to the type
checkers and to the programmers trying to reason about the code. The client
invoked an operation that will eventually produce the integer they need.

There is also a relation between flattening and error propagation: consider
that returning a broken promise is analogous to throwing an exception in
languages with exceptions. In the above code, if the cache service fails
(e..g, the URL is bogus), the result from the cache service will
(eventually) be a rejected promise. Should the answer from the size
operation be a fulfilled promise for a failed result? That would be extremely
painful in practice.  Adding a layer of promise at each level is equivalent
in sequential code to requiring that every call site catch exceptions at that
site (and perhaps deliberately propagate them).  While various systems have
attempted that, they generally have failed the usability test. It certainly
seems not well-suited to the JS environment.
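
A sketch of that propagation using the size helper above (badUrl, render, and
reportError are placeholders):

```javascript
// With flattening, a failure in the cache service reaches the caller's
// error handler directly, like an exception in sequential code.
const size = url => _cacheService.getLength(url);

size(badUrl).then(
  length => render(length),
  err => reportError(err) // a bogus URL lands here, not in a fulfilled value
);
```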

There are a few cases that may require Promise<Promise<T>>. Most can be
more clearly expressed with an intermediate type. For example, in an
enterprise security management system, the service manager returned a
promise for a (remote) authorization service, but the authorization service
might have been broken. Instead of returning a
Promise<Promise<AuthorizationService>>, it returned
Promise<AuthorizationConnection> where AuthorizationConnection had a member
service that returned a Promise<AuthorizationService>.  When you deal
with higher-level abstractions in a parameterized type system like C#'s,
however, you may end up with APIs that want to work across any T, including
promises.  If the abstractions internally use promises, then they may well
end up with Promise<T> where T : Promise<U> or some such.  Those are very
rare in practice, and can typically make use of operators (e.g., like Q) to
limit their type nesting depth.

On Thu, Apr 25, 2013 at 3:31 PM, Domenic Denicola 
dome...@domenicdenicola.com wrote:

   Can you point to any code in wide use that makes use of this thenables
 = monads idea you seem to be implicitly assuming? Perhaps some of this
 generic thenable library code? I have never seen such code, whereas the
 use of thenable to mean "object with a then method, which we will try to
 treat as a promise" as in Promises/A+ seems widely deployed throughout
 libraries that are used by thousands of people judging by GitHub stars
 alone.

 Thus I would say it's not promise libraries that are harming the thenable
 operations, but perhaps some minority libraries who have misinterpreted
 what it means to be a thenable.
  --
 From: Claus Reinke claus.rei...@talk21.com
 Sent: 4/25/2013 18:21
 To: Mark Miller erig...@gmail.com; David Bruant bruan...@gmail.com
 Cc: Mark S. Miller erig...@google.com; es-discusses-discuss@mozilla.org
 Subject: Re: A Challenge Problem for Promise Designers (was: Re: Futures)

   I'm still wading through the various issue tracker threads, but only two
 concrete rationales for flattening nested Promises have emerged so far:

 1 library author doesn't want nested Promises.
 2 crossing Promise 

Re: A Challenge Problem for Promise Designers (was: Re: Futures)

2013-04-25 Thread Dean Tribble
Hmm. I agree that the example code isn't relevant to JavaScript. For
background, the last time this issue came up for me was in the context of
a language keyword (which had other interesting but unrelated trade offs),
where it really did impose that interaction (call sites had to declare that
the type was a promise, and handle that, even though they were then
returning promises).  I'm glad we agree that needing to "then" in the
tail-call case would be silly for a promise library. So what's an example
that motivates you to want to build a tower of promise types?  The main one
I know of is the implementation (not use) of higher-order collection
constructs that use promises internally (e.g., the implementation of map
and reduce for an async, batching, flow-controlled stream of Promise<T>).
That kind of rare example can have more advanced hooks (like Q).

On Thu, Apr 25, 2013 at 5:08 PM, Tab Atkins Jr. jackalm...@gmail.com wrote:

 ... If cacheService.getLength() returns a future, then you don't need to do
 anything special in the size() function - just return the future that
 it returns.  It sounds like you're nesting values in futures for the
 hell of it, which of course is problematic.  Hiding the application's
 mistakes by auto-flattening isn't a good idea


Re: Notification proxies (Was: possible excessive proxy invariants for Object.keys/etc??)

2012-11-28 Thread Dean Tribble
On Wed, Nov 28, 2012 at 1:09 PM, Tom Van Cutsem tomvc...@gmail.com wrote:

 2012/11/26 Dean Tribble dtrib...@gmail.com

 ...
 I agree. My usual expectation for proxies is to support remote and
 persistent objects. While supporting other scenarios is great, usually
 that's incidental.  Is there a broader list of aspirations for proxies? or
 is this just a "all else being equal it would be good if we can do this"?


 Let's talk about aspirations for proxies. It will help us set priorities.
 First, some terminology (originating from CLOS, the mother of all MOPs
 ;-)


As a general point, I encourage you to look for other inspiration than CLOS
MOP for doing proxies (whose mother was really InterlispD). Meta-level
access deeply impacts security,maintainability,
reliability, understandability, etc. The tighter and more structured you
can make your meta-level access, the easier it will be to to implement,
use, and maintain (e.g., both coroutines and downward functions are more
understandable, easier to implement, easier to secure, etc. than general
continuations and call-cc).


 CLOS method combinations allow a composer to distinguish between before,
 after and around-style composition:
 - before-style wrapping gives you only the ability to get notified
 before an operation happens. You can abort, but not change, the result of
 the operation. This is what notification-proxies offer.


You *can* change the result of the operation. You do so by modifying the
state before the operation proceeds, of course. You could also extend the
notification support to notify after so you could clenup (avoiding a
callback hack).


 - after-style wrapping allows you to get notified of an operation
 after-the-fact. Depending on the API, the after-wrapper may or may not
 get to see the outcome of the operation, and may or may not change the
 final outcome passed on to clients.
 - around-style wrapping is the most general and allows the composer to
 decide if and when to forward, and what result to return. It subsumes
 before/after wrapping. This is what direct proxies currently provide.


It does not subsume before/after wrapping, because it loses the integrity
of before/after (e.g., the wrapper can lie and cheat, where the before and
after cannot).  That may be worth it, but it is substantially different.

Another variant is the differential version:  the differential trap is
like a notification, but it can also return virtual additions (or an
iterator of additions).  The proxy then invokes the primitive on the
target, and appends (with de-dupping, etc.) the virtual additions. This
allows the simple case to just use hte target, but also allows all of
Allen's additional cases.

 As far as I can tell, virtual object abstractions like remote/persistent
 objects require around-style wrapping, because there's otherwise no
 meaningful target to automatically forward to.


I thought the target in that case is an internal object to represent or
reify the meta-state of the remote or persistent object. I think that still
makes sense in both the persistent object and remote object cases.


 Here's a list of use cases that I frequently have in mind when thinking
 about proxies, categorized according to whether the use case requires
 before/after/around wrapping:

 Virtual objects, hence around-style:
 - self-hosting exotic objects such as Date, Array (i.e. self-host an
 ES5/ES6 environment)
 - self-hosting DOM/WebIDL objects such as NodeList


I should note that I'm not advocating a notification-only style for all
your proxy needs; having get operations able to generate virtual results
makes lots of sense. I primarily suggest it for operations that are currently
implemented by the system (i.e., user code cannot normally intervene) and
that might be relied on for security-relevant behavior. wrapping return
results of user operations in a proxy makes perfect sense to me.


 Around-style wrapping (need to be able to change the result of an
 operation):
 - membranes
 - higher-order contracts

 Before-style wrapping:
 - revocable references


You can validate arguments, the state of the destination object (e.g., if
you were implementing a state machine), logging, etc.

 What else?


There is the pattern derived from the meter pattern in KeyKOS:  the handler
is only invoked on exception (e.g., like a page fault).  For example, a
primitive stream gets read operations against it. Normally they proceed as a
primitive against an implementation-provided buffer so that "next" is
really darned fast. When the buffer is exhausted, instead of throwing an
error to the caller, the error is thrown to the handler (called a "keeper")
which goes through some user-defined effort to refill the buffer, then the
read is retried.  This allows most data transfer to such a stream to use
fast, batch-oriented primitives, while supporting an arbitrary source of
contents.
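
A sketch of that keeper pattern in today's JS terms (all names invented for
illustration):

```javascript
// Fast-path reads come from a buffer; exhaustion "faults" to the keeper,
// which refills the buffer, and then the read is retried.
class KeptStream {
  constructor(keeper) {
    this._buffer = [];
    this._keeper = keeper; // async () => array of items
  }
  async read() {
    while (this._buffer.length === 0) {
      this._buffer = await this._keeper(); // invoked only on exhaustion
    }
    return this._buffer.shift();
  }
}
```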

Re: Notification proxies (Was: possible excessive proxy invariants for Object.keys/etc??)

2012-11-26 Thread Dean Tribble
I started to respond to Allen's message, but I'll combine them here.
 Note the additional proposal in the middle of the message.

On Mon, Nov 26, 2012 at 11:33 AM, Tom Van Cutsem tomvc...@gmail.com wrote:

 2012/11/25 Allen Wirfs-Brock al...@wirfs-brock.com


 I have a couple virtual object use cases in mind where I don't think I
 would want to make all properties concrete on the target.


 Thanks for spelling out these examples. While they still don't feel like
 actual important use cases to support, they give a good flavor of the kinds
 of compromises we'd need to make when turning to notification-only proxies.


I agree. My usual expectation for proxies is to support remote and
persistent objects. While supporting other scenarios is great, usually
that's incidental.  Is there a broader list of aspirations for proxies? or
is this just a "all else being equal it would be good if we can do this"?



 1) A bit vector abstraction where individual bits are accessible as
 numerically indexed properties.

 Assume I have a bit string of fairly large size (as little as 128 bits)
 and I would like to abstract it as an array of single bit numbers  where
 the indexes correspond to bit positions in the bit string.  Using Proxies I
 want be able to use Get and Put traps to direct such indexed access to a
 binary data backing store I maintain.  I believe that having to reify on
 the target each bit that is actually accessed  would be too  expensive in
 both time and space to justify using this approach.


 Yes. As another example, consider a self-hosted sparse Array
 implementation.

 The paradox here is that it's precisely those abstractions that seek to
 store/retrieve properties in a more compact/efficient way than allowed by
 the standard JS object model would turn to proxies, yet having to reify
 each accessed property precisely voids the more compact/efficient storage
 of properties.


I don't have a good sense of how often and for what purpose clients call
getOwnPropertyNames and the like. That frankly seems like a terrible
operation for any client to be calling; it's architecturally necessarily
inefficient; especially since it currently demands a fresh array. Worst
case, I'd like to see it have a frozen result or be deprecated in favor of
an operation that is more architecturally efficient (e.g., return an
iterator of names so they need never all be reified). If the operation is
typically only called for debugging and inspection, or once per type or
some such, then the performance questions are less important. If libraries
constantly call it for web services, then having an improved API might be a
big win.

BTW, this is a scenario where I might not even bother trying to make sure
 that Object.getOwnPropertyNames listed all of the bit indexes.  I could,
 include them in an array of own property names, but would anybody really
 care if I didn't?


So for this example, you might want to suppress the integer properties from
getOwnPropertyNames *regardless* of the proxy approach. Otherwise you are
indeed doing O(N) work for all your otherwise efficiently-implemented bit
fields. Such a hack would work poorly with meta-driven tools (e.g.,
something that maps fields to a display table for object inspection), but
that's not because of the proxy support.

(It is conceivable to me that integer-indexed fields deserve explicit
support in a meta-protocol anyway, since their usage patterns are typically
so different from that of named fields.)


 Well, yes and no.

 Yes, in the sense that your object abstraction will break when used with
 some tools and libraries. For instance, consider a debugger that uses
 [[GetOwnPropertyNames]] to populate its inspector view, or a library that
 contains generic algorithms that operate on arbitrary objects (say copying
 an object, or serializing it, by using Object.getOwnPropertyNames).

 No, in the sense that even if you would implement getOwnPropertyNames
 consistently, copying or serializing your bit vector abstraction would not
 lead to the desired result anyway (the copy or deserialized version would
 be a normal object without the optimized bit representation) (although the
 result might still be usable!)


 More generally, notification proxies are indeed
 even-more-direct-proxies. They make the wrapping use case (logging,
 profiling, contract checking, etc.) simpler, at the expense of virtual
 objects (remote objects, test mock-ups), which are forced to always
 concretize the virtual object's properties on a real Javascript object.


 Yes, I also like the simplicity of notification proxies but don't want to
 give up the power of virtual objects.  Maybe having both would be a
 reasonable alternative.


 Brandon beat me to it, but indeed, having two kinds of proxies for the two
 different use cases makes sense. Except that there's a complexity budget we
 need to take into account. If we can avoid the cost of two APIs, we should.


I too would like to avoid two kinds of proxies. And 

Re: possible excessive proxy invariants for Object.keys/etc??

2012-11-24 Thread Dean Tribble
I am looking forward to proxies in JavaScript, and had a thought on the
issues below.  You could extend the the direct proxy approach for this.

When the Proxy receives getOwnPropertyNames, it
1) notifies the handler that property names are being requested
2) the handler adds/removes any properties (configurable or otherwise
subject to the normal constraints) on the target
3) upon return, the proxy invokes getOwnPropertyNames directly on the
target (e.g., invoking the *normal* system primitive)

This approach appears to have consistent behavior for configurability and
extensibility. For example, the trap operation above could add configurable
properties to an extensible target, and remove them later.  It could add
non-configurable properties, but they are permanent once added, etc. Thus
there's no loss of generality.  In addition to optionally setting up
properties on the target, the handler trap above would need to indicate to
the proxy (via exception or boolean result) that the getOwnPropertyNames
operation should proceed ahead or fail.

This extension of the direct proxy approach applies to all query
operations, eliminates the copying and validation overhead discussed below,
simplifies the implementation, retains full backwards compatibility, and
enables most if not all the expressiveness we might expect for proxies.
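
A sketch of that shape using today's Proxy/Reflect API (which postdates this
message), with the ownKeys trap standing in for getOwnPropertyNames:

```javascript
// Sketch: the handler is notified first and may add/remove properties on
// the target (subject to the normal constraints); the proxy then runs the
// normal system primitive directly on the target.
function notifyingProxy(target, onOwnKeys) {
  return new Proxy(target, {
    ownKeys(t) {
      onOwnKeys(t);              // steps 1-2: notify; handler adjusts target
      return Reflect.ownKeys(t); // step 3: the normal primitive on target
    },
  });
}
```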

Dean

From: Allen Wirfs-Brock al...@wirfs-brock.com
 Date: Tue, Nov 20, 2012 at 2:18 PM
 Subject: Fwd: possible excessive proxy invariants for Object.keys/etc??
 To: es-discuss discussion es-discuss@mozilla.org


 Tom Van Cutsem and I have been having some email discussion while I work on
 integrating Proxies into the ES6 spec.  He and I agree that some
 broader input would be useful so I'm going to forward some of the
 messages here to es-discuss and carry the discussion forward here.
 Here is the first message with others to follow:

 Begin forwarded message:

 From: Allen Wirfs-Brock al...@wirfs-brock.com
 Date: November 18, 2012 1:26:14 PM PST
 To: Tom Van Cutsem tomvc...@gmail.com, Mark S. Miller 
 erig...@google.com
 Cc: Jason Orendorff jorendo...@mozilla.com
 Subject: possible excessive proxy invariants for Object.keys/etc??

 I'm wondering if the wiki spec. for these functions aren't doing
 invariant checking that goes beyond what is required for the integrity
 purposes you have stated.

 In general, proxy traps check to ensure that the invariants of a
 sealed/frozen target object aren't violated.  Generally, only minimal
 processing needs to be done if the target is extensible and has no
 non-configurable properties.  In fact the Virtual Object proposal says "As
 long as the proxy does not expose non-configurable properties or
 becomes non-extensible, the target object is fully ignored (except to
 acquire internal properties such as [[Class]])."

 The proxy spec. for Object.getOwnPropertyNames/keys/etc. seems to be
 doing quite a bit more than this.  They

 1) always copy the array returned from the trap?  Why is this
 necessary?  Sure the author of a trap should probably always return a
 fresh object but not doing so doesn't violate the integrity of the
 frozen/sealed invariants?  In most cases they will provide a fresh
 object and  copying adds unnecessary  work  that is proportional to
 the number of names to every such call.

 2) ensuring that the list of property keys contains no duplicates.
 Why is this essential?  Again, I don't see what it has to do with the
 integrity of the frozen/sealed invariants.  It is extra and probably
 unnecessary work that is at least proportional to the number of
 names.

 3) Every name in the list returned by the trap code is looked up on
 the target to determine whether or not it exists, even if the target
 is extensible.   Each of those lookups is observable (the target might
 itself be a proxy) so, according to the algorithm they all must be
 performed.

 4) Every own property of the target is observably looked up (possibly
 a second time) even if the object is extensible  and has no
 non-configurable properties.


 It isn't clear to me if any of this work is really necessary to ensure
 integrity.  After all, what can you do with any of these names other
 than use them as the property key argument to some other trap/internal
 method such as [[SetP]], [[DefineOwnProperty]], etc.  Called on a
 proxy, those fundamental operations are going to enforce the integrity
 invariants of the actual properties involved so the get name checks
 doesn't really seem to be adding anything essential.

 Perhaps we can just get rid of all the above checking.  It seems like
 a good idea to me.

 Alternatively,  it suggests that a [[GetNonConfigurablePropertyNames]]
 internal method/trap would be a useful call to have as the integrity
 invariants only care about non-configurable properties. That would
 significantly limit the work in the case where there are none and
 limit the observable trap calls to only the non-configurable
 properties.

 Allen