Re: Overlap between StreamReader and FileReader

2013-08-08 Thread Isaac Schlueter
On Thu, Aug 8, 2013 at 7:40 PM, Austin William Wright  wrote:
> I believe the term is "congestion control" such as the TCP congestion
> control algorithm.

As I've heard the term used, "congestion control" is slightly
different from "flow control" or "TCP backpressure", but they are
related concepts, and yes, your point is dead-on, Austin, this is
absolutely 100% essential.  Any Stream API that treats backpressure as
an issue to handle later is not a Stream API, and is not even worth
discussing yet.

On Thu, Aug 8, 2013 at 7:40 PM, Austin William Wright  wrote:
> I think there's some confusion as to what the abort() call is going to do
> exactly.

Yeah, I'm rather confused by that as well.  A read(2) operation
typically can't be "canceled" because it's synchronous.


Let's back up just a step here, and talk about the fundamental purpose
of an API like this.  Here's a strawman:


-
A "Readable Stream" is an abstraction representing an ordered set of
data which may or may not be finite, some or all of which may arrive
at a future time, and which can be consumed at any arbitrary rate up
to the rate at which data is arriving, without causing excessive
memory use.  It provides a mechanism for sending the data into a
Writable Stream, and for being alerted to errors in the underlying
implementation.

A "Writable Stream" is an abstraction representing a destination where
data is written, where any given write operation may be completely
flushed to the underlying implementation immediately or at some point
in the future.  It provides a mechanism for determining when more data
can be safely written without causing excessive memory usage, and for
being alerted to errors in the underlying implementation.

A "Duplex Stream" is an abstraction that implements both the Readable
Stream and Writable Stream interfaces.  There may or may not be any
specific connection between the two sets of functionality.  (For
example, it may represent a TCP socket file descriptor, or any
arbitrary readable/writable API that one can imagine.)
-
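The strawman above can be sketched as JavaScript classes. This is a
hypothetical illustration, not any concrete proposal: the method names
(read, write), the promise-based signatures, and the { value, done }
result shape are all assumptions made for the sketch.

```javascript
// Sketch of the three strawman abstractions. All names and signatures
// are illustrative assumptions, not a proposed API.

class ReadableStream {
  constructor(chunks) {
    this._chunks = chunks.slice(); // data that has "arrived" so far
  }
  // The consumer pulls at its own rate; data it has not asked for yet
  // simply stays queued at the source, bounding memory use.
  read() {
    return this._chunks.length > 0
      ? Promise.resolve({ value: this._chunks.shift(), done: false })
      : Promise.resolve({ value: undefined, done: true });
  }
}

class WritableStream {
  constructor() {
    this.flushed = [];
  }
  // The returned promise settles when the chunk reaches the underlying
  // implementation; a pending promise means "not safe to write more".
  write(chunk) {
    this.flushed.push(chunk);
    return Promise.resolve();
  }
}

// A Duplex stream is just both interfaces on one object; per the
// strawman, there need not be any connection between the two sides.
class DuplexStream {
  constructor(chunks) {
    this._r = new ReadableStream(chunks);
    this._w = new WritableStream();
  }
  read() { return this._r.read(); }
  write(chunk) { return this._w.write(chunk); }
}
```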


For any stream implementation, I typically try to ask: How would you
build a non-blocking TCP implementation using this abstraction?  This
might just be my bias coming from Node.js, but I think it's a fair
test of a Stream API that will be used on the web, where TCP is the
standard.  Here are some things that need to work 100%, assuming a
Readable.pipe(Writable) method:

fastReader.pipe(slowWriter)
slowReader.pipe(fastWriter)
socket.pipe(socket) // echo server
socket.pipe(new gzipDeflate()).pipe(socket)
socket.pipe(new gzipInflate()).pipe(socket)
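A pipe loop that passes the fastReader.pipe(slowWriter) case might look
like the following sketch. The helper names (makeReader, makeSlowWriter)
and signatures are assumptions for illustration; the point is that the
writer's pending promise is itself the throttle on the reader.

```javascript
// Sketch: pipe never calls read() again until the previous write has
// flushed, so at most one chunk is in flight regardless of how much
// faster the reader is than the writer.

function makeReader(chunks) {
  let i = 0;
  return {
    read: () =>
      Promise.resolve(i < chunks.length
        ? { value: chunks[i++], done: false }
        : { value: undefined, done: true }),
  };
}

function makeSlowWriter(delayMs) {
  const flushed = [];
  return {
    flushed,
    // Each write "flushes" only after a delay, simulating a slow sink.
    write: (chunk) =>
      new Promise((resolve) =>
        setTimeout(() => { flushed.push(chunk); resolve(); }, delayMs)),
  };
}

async function pipe(reader, writer) {
  for (;;) {
    const { value, done } = await reader.read();
    if (done) return;
    await writer.write(value); // backpressure: wait for the flush
  }
}
```

Running `pipe(makeReader([...]), makeSlowWriter(5))` delivers every
chunk in order while buffering at most one chunk at the writer, however
fast the reader produces.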



Node's streams, as of 0.11.5, are pretty good.  However, they've
"evolved" rather than having been "intelligently designed", so in many
areas, the API surface is not as elegant as it could be.  In
particular, I think that relying on an EventEmitter interface is an
unfortunate choice that should not be repeated in this specification.
The language has new features, and Promises are somewhat
well-understood now (and weren't as much then).  But Node streams have
definitely got a lot of play-testing that we can lean on when
designing something better.

Calling read() repeatedly is much less convenient than doing something
like `stream.on('data', doSomething)`.  Additionally, you often want
to "spy" on a Stream, and get access to its data chunks as they come
in, without being the main consumer of the Stream.
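That "spying" use case could be served by a small wrapper, sketched
below with a hypothetical spy() helper (not a proposed API): the
observer sees each chunk as it passes through, while some other
consumer remains the one actually pulling.

```javascript
// Sketch: wrap a reader so a callback observes every chunk without
// becoming the stream's consumer. spy() is a made-up helper name.

function spy(reader, onChunk) {
  return {
    read: () =>
      reader.read().then((result) => {
        if (!result.done) onChunk(result.value); // observe, don't consume
        return result; // pass the chunk through unchanged
      }),
  };
}
```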




Re: Overlap between StreamReader and FileReader

2013-08-08 Thread Austin William Wright
On Thu, Aug 8, 2013 at 2:56 PM, Jonas Sicking  wrote:

> On Thu, Aug 8, 2013 at 6:42 AM, Domenic Denicola
>  wrote:
> > From: Takeshi Yoshino [mailto:tyosh...@google.com]
> >
> >> On Thu, Aug 1, 2013 at 12:54 AM, Domenic Denicola <
> dome...@domenicdenicola.com> wrote:
> >>> Hey all, I was directed here by Anne helpfully posting to
> public-script-coord and es-discuss. I would love a summary of what
> proposal is currently under discussion: is it [1]? Or maybe some form of
> [2]?
> >>>
> >>> [1]: https://rawgithub.com/tyoshino/stream/master/streams.html
> >>> [2]:
> http://lists.w3.org/Archives/Public/public-webapps/2013AprJun/0727.html
> >>
> >> I'm drafting [1] based on [2] and summarizing comments on this list in
> order to build up a concrete algorithm and get consensus on it.
> >
> > Great! Can you explain why this needs to return an
> AbortableProgressPromise, instead of simply a Promise? All existing stream
> APIs (as prototyped in Node.js and in other environments, such as in
> js-git's multi-platform implementation) do not signal progress or allow
> aborting at the "during a chunk" level, but instead count on you recording
> progress by yourself depending on what you've seen come in so far, and
> aborting on your own between chunks. This allows better pipelining and
> backpressure down to the network and file descriptor layer, from what I
> understand.
>
> Can you explain what you mean by "This allows better pipelining and
> backpressure down to the network and file descriptor layer"?
>

I believe the term is "congestion control", as in the TCP congestion
control algorithm. That is, don't send data to the application faster
than it can parse it or pass it along; or, more generally, some
mechanism that lets the application throttle down the incoming "flow".
This is essential to any networked application like the Web.
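The push/pull distinction behind that throttling can be shown with two
made-up source functions (names are assumptions, for illustration only):
a push-style source has no way to hear "slow down", while a pull-style
source can never outrun the consumer, because delivery happens only when
the consumer asks.

```javascript
// Push: the producer sets the pace; a slow consumer can only buffer,
// so memory use grows with the speed mismatch.
function pushSource(total, onData) {
  for (let i = 0; i < total; i++) onData(i); // fires as fast as it can
}

// Pull: the consumer sets the pace; unread data stays at the source.
function pullSource(total) {
  let i = 0;
  return () => (i < total ? { value: i++, done: false } : { done: true });
}
```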

I think there's some confusion as to what the abort() call is going to do
exactly.


Re: Overlap between StreamReader and FileReader

2013-08-08 Thread Jonas Sicking
On Thu, Aug 8, 2013 at 6:42 AM, Domenic Denicola
 wrote:
> From: Takeshi Yoshino [mailto:tyosh...@google.com]
>
>> On Thu, Aug 1, 2013 at 12:54 AM, Domenic Denicola 
>>  wrote:
>>> Hey all, I was directed here by Anne helpfully posting to 
>>> public-script-coord and es-discuss. I would love a summary of what 
>>> proposal is currently under discussion: is it [1]? Or maybe some form of 
>>> [2]?
>>>
>>> [1]: https://rawgithub.com/tyoshino/stream/master/streams.html
>>> [2]: http://lists.w3.org/Archives/Public/public-webapps/2013AprJun/0727.html
>>
>> I'm drafting [1] based on [2] and summarizing comments on this list in order 
>> to build up a concrete algorithm and get consensus on it.
>
> Great! Can you explain why this needs to return an AbortableProgressPromise, 
> instead of simply a Promise? All existing stream APIs (as prototyped in 
> Node.js and in other environments, such as in js-git's multi-platform 
> implementation) do not signal progress or allow aborting at the "during a 
> chunk" level, but instead count on you recording progress by yourself 
> depending on what you've seen come in so far, and aborting on your own 
> between chunks. This allows better pipelining and backpressure down to the 
> network and file descriptor layer, from what I understand.

Can you explain what you mean by "This allows better pipelining and
backpressure down to the network and file descriptor layer"?

I definitely agree that we don't want to introduce too much
performance overhead. But it's not obvious to me how performance is
affected by
putting progress and/or aborting functionality on the returned Promise
instance, rather than on a separate object (which you suggested in
another thread).

We should absolutely learn from Node.js and other environments. Do you
have any pointers to discussions about why they didn't end up with
progress in their "read a chunk" API?

/ Jonas



RE: Overlap between StreamReader and FileReader

2013-08-08 Thread Domenic Denicola
From: Takeshi Yoshino [tyosh...@google.com]

> Sorry, which one? stream.Readable's readable event and read method?

Exactly.

> I agree flow control is an issue not addressed well yet and needs to be fixed.

I would definitely suggest thinking about it as soon as possible, since it will 
likely have a significant effect on the overall API. For example, all this talk 
of standardizing ProgressPromise (much less AbortableProgressPromise) will 
likely fall by the wayside once you consider how it hurts flow control.
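The tension can be sketched with two stand-in functions (both made up
for illustration; "progress style" here only simulates the
ProgressPromise shape, it is not the real proposal). With a single
promise plus progress callbacks, delivery happens at the producer's
rate; with per-chunk reads, simply not calling read() again is the
flow-control signal.

```javascript
// Progress style: the consumer observes delivery but cannot pace it.
function readAllWithProgress(chunks, onProgress) {
  let total = 0;
  for (const c of chunks) {
    total += c.length;
    onProgress(total); // the producer decides when this fires
  }
  return Promise.resolve(chunks.join(""));
}

// Chunked style: each chunk is a separate request the consumer makes,
// so the consumer throttles the source just by reading less often.
function makeChunkedReader(chunks) {
  let i = 0;
  return () =>
    Promise.resolve(i < chunks.length
      ? { value: chunks[i++], done: false }
      : { value: undefined, done: true });
}
```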




RE: Overlap between StreamReader and FileReader

2013-08-08 Thread Domenic Denicola
From: Takeshi Yoshino [mailto:tyosh...@google.com] 

> On Thu, Aug 1, 2013 at 12:54 AM, Domenic Denicola 
>  wrote:
>> Hey all, I was directed here by Anne helpfully posting to 
>> public-script-coord and es-discuss. I would love a summary of what 
>> proposal is currently under discussion: is it [1]? Or maybe some form of [2]?
>>
>> [1]: https://rawgithub.com/tyoshino/stream/master/streams.html
>> [2]: http://lists.w3.org/Archives/Public/public-webapps/2013AprJun/0727.html
>
> I'm drafting [1] based on [2] and summarizing comments on this list in order 
> to build up a concrete algorithm and get consensus on it.
 
Great! Can you explain why this needs to return an AbortableProgressPromise, 
instead of simply a Promise? All existing stream APIs (as prototyped in Node.js 
and in other environments, such as in js-git's multi-platform implementation) 
do not signal progress or allow aborting at the "during a chunk" level, but 
instead count on you recording progress by yourself depending on what you've 
seen come in so far, and aborting on your own between chunks. This allows 
better pipelining and backpressure down to the network and file descriptor 
layer, from what I understand.


Re: [webcomponents]: Changing names of custom element callbacks

2013-08-08 Thread Anne van Kesteren
On Wed, Aug 7, 2013 at 11:41 PM, Dimitri Glazkov  wrote:
> The only thing peeps asked for consistently was the knowledge of when
> the in-a-document state changes. I haven't heard any other requests
> for callbacks.

I'm having a hard time following how we're designing this thing. Are
we just following whatever people happen to be asking for or are we
actually trying to expose the functionality browsers use to implement
elements, so pages have the same amount of power (minus the powers we
do not wish to grant, such as synchronous creation)?


-- 
http://annevankesteren.nl/