On Thu, Aug 8, 2013 at 2:56 PM, Jonas Sicking <jo...@sicking.cc> wrote:

> On Thu, Aug 8, 2013 at 6:42 AM, Domenic Denicola
> <dome...@domenicdenicola.com> wrote:
> > From: Takeshi Yoshino [mailto:tyosh...@google.com]
> >
> >> On Thu, Aug 1, 2013 at 12:54 AM, Domenic Denicola <
> dome...@domenicdenicola.com> wrote:
> >>> Hey all, I was directed here by Anne helpfully posting to
> public-script-coord and es-discuss. I would love a summary of what
> proposal is currently under discussion: is it [1]? Or maybe some form of
> [2]?
> >>>
> >>> [1]: https://rawgithub.com/tyoshino/stream/master/streams.html
> >>> [2]:
> http://lists.w3.org/Archives/Public/public-webapps/2013AprJun/0727.html
> >>
> >> I'm drafting [1] based on [2] and summarizing comments on this list in
> order to build up a concrete algorithm and get consensus on it.
> >
> > Great! Can you explain why this needs to return an
> AbortableProgressPromise, instead of simply a Promise? All existing stream
> APIs (as prototyped in Node.js and in other environments, such as in
> js-git's multi-platform implementation) do not signal progress or allow
> aborting at the "during a chunk" level, but instead count on you recording
> progress yourself, based on what you've seen come in so far, and
> aborting on your own between chunks. This allows better pipelining and
> backpressure down to the network and file descriptor layer, from what I
> understand.
> Can you explain what you mean by "This allows better pipelining and
> backpressure down to the network and file descriptor layer"?
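
The pattern Domenic describes (the consumer records progress itself and aborts between chunks, with only a plain Promise-returning read()) might look roughly like this. All names here (makeReader, copyWithProgress) are hypothetical illustrations, not part of any proposal:

```javascript
// Sketch (hypothetical names): a chunk source exposing only a plain
// Promise-returning read(). Progress and aborting are handled by the
// caller between chunks, not by the promise itself.
function makeReader(chunks) {
  let i = 0;
  return {
    read() {
      // Each call resolves with the next chunk, or { done: true }.
      return Promise.resolve(
        i < chunks.length ? { value: chunks[i++], done: false }
                          : { value: undefined, done: true });
    },
    cancel() { i = chunks.length; } // caller-initiated abort, between chunks
  };
}

async function copyWithProgress(reader, maxBytes) {
  const out = [];
  let bytesRead = 0; // progress is recorded by the consumer itself
  for (;;) {
    const { value, done } = await reader.read();
    if (done) return { out, bytesRead };
    bytesRead += value.length;
    if (bytesRead > maxBytes) {
      reader.cancel(); // aborting happens between chunks, by the caller
      throw new Error("size limit exceeded");
    }
    out.push(value);
  }
}
```

No AbortableProgressPromise is needed: the ordinary promise returned by read() is enough, because progress and cancellation both live in the consumer's loop.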

I believe the term is "congestion control", as in the TCP congestion
control algorithm: don't send data to the application faster than it can
parse it or pass it along, or at least provide some mechanism that lets the
application throttle down the incoming "flow". That is essential to any
networked application like the Web.
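
One minimal way to picture that throttling with plain Promises (makeSlowSink and produce are illustrative names, not a proposed API): the producer awaits each write(), so it can never run ahead of the consumer.

```javascript
// Illustrative backpressure sketch (hypothetical names): the producer
// awaits the sink's write() promise, so it is throttled to the pace at
// which the consumer can actually absorb data.
function makeSlowSink(delayMs) {
  const received = [];
  return {
    received,
    write(chunk) {
      // Simulate a consumer that needs delayMs to process each chunk.
      return new Promise(resolve =>
        setTimeout(() => { received.push(chunk); resolve(); }, delayMs));
    }
  };
}

async function produce(sink, chunks) {
  for (const chunk of chunks) {
    await sink.write(chunk); // backpressure: wait for the sink to drain
  }
}
```

When writes like this are chained all the way down to the socket, the pause propagates to the network layer, which is the "pipelining and backpressure" benefit being asked about.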

I think there's some confusion as to what the abort() call is going to do.
