From: willc...@google.com <willc...@google.com> on behalf of William Chan (陈智昌) 
<willc...@chromium.org>

> Can you explain this in more detail? AFAICT, the fundamental difference we're
> talking about here is push vs pull sources. Option 1 is a push model, where
> fetch() creates a writable stream that has an underlying sink (Domenic, did I
> get that right? I assume you meant sink and not source) of the request body.
> And the user code writes to the stream to stream output to the request body.
> In contrast, option 2 is a pull model, where fetch() accepts as input a
> readable stream and pulls from it as the network is ready to write the
> request body out.

This is interesting. I am not sure it is correct... at least, I don't think it 
is from a theoretical streams-model perspective. But whether that theory 
applies in practice is an important question I hope you can help with.
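
To make sure we're picturing the same shapes, here is roughly how I read the 
two options. Every name below is made up for illustration; neither shape is 
specified anywhere yet.

    // Option 1 (push model): fetch() exposes a writable stream for the
    // request body, and user code pushes chunks into it.
    async function uploadOption1() {
      const { requestBody } = startFetch("https://example.com/upload"); // hypothetical
      const writer = requestBody.getWriter();
      await writer.write(new TextEncoder().encode("first chunk"));
      await writer.write(new TextEncoder().encode("second chunk"));
      await writer.close();
    }

    // Option 2 (pull model): fetch() accepts a readable stream as the request
    // body and pulls from it as the network can take more data.
    function uploadOption2(readable) {
      return fetch("https://example.com/upload", { method: "POST", body: readable });
    }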

The way Node-esque streams model these things is that readable streams wrap 
either push or pull sources. Writable streams don't have such a distinction: 
you write to them as data becomes available. The interesting part comes when 
you pipe a readable stream to a writable stream. The pipe API makes it possible 
either to pull from the readable stream's underlying source as soon as the 
writable stream is ready to accept data, or to feed data into the writable 
stream as the source pushes it, subject to the writable stream's backpressure.
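
To make the push/pull distinction concrete, here is a minimal sketch of 
readable streams wrapping each kind of source (readChunk and socket are 
hypothetical stand-ins for whatever the real source is):

    // Pull source: data is produced only when the consumer asks for it.
    const pulledStream = new ReadableStream({
      async pull(controller) {
        const chunk = await readChunk(); // hypothetical pull-based API
        if (chunk === null) controller.close();
        else controller.enqueue(chunk);
      }
    });

    // Push source: data arrives on its own schedule and is enqueued as it
    // shows up; the consumer never asks for it explicitly.
    const pushedStream = new ReadableStream({
      start(controller) {
        socket.ondata = chunk => controller.enqueue(chunk); // hypothetical socket
        socket.onend = () => controller.close();
      }
    });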

So in both options 1 and 2, if you have a readable stream and want to pipe it 
to the request body, it doesn't matter whether the readable stream is wrapping 
a push or a pull source: either way, the pipe waits for the writable stream to 
signal readiness, then gives it as much data as is available, whether that 
data was pushed or pulled.
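
In other words, a pipe is roughly the following loop (simplified: real piping 
also handles errors, cancellation, and so on):

    async function pipe(readable, writable) {
      const reader = readable.getReader();
      const writer = writable.getWriter();
      while (true) {
        await writer.ready;                          // wait for writable readiness
        const { value, done } = await reader.read(); // pushed or pulled, same call
        if (done) break;
        await writer.write(value);
      }
      await writer.close();
    }

Notice that the loop never needs to know whether the readable stream's 
underlying source pushes or pulls; that distinction is absorbed by the 
readable stream itself.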

Does this match up well with your thoughts on push vs. pull and clients from 
later in your message? Or is my model pretty far off?

Looking at it from another direction, the purpose of the writable stream 
abstraction is to wrap resources like a file opened with the write flag, or an 
HTTP request body, in an interface people can both write chunks to and pipe 
readable streams to. There is an alternative to piping, though: just having 
functions that accept readable streams and read from them as necessary. Do you 
think this latter idea is inherently better than the piping idea?
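
Concretely, the contrast I have in mind looks something like this (made-up 
names again):

    // Accepting-function interface: the function takes a readable stream and
    // reads from it internally.
    sendRequest("https://example.com/upload", someReadableStream); // hypothetical

    // Piping interface: the request body is exposed as a writable stream.
    // Either pipe a readable stream to it:
    //   someReadableStream.pipeTo(request.body);   // request.body: hypothetical
    // ...or write chunks to it directly, with no readable stream in sight:
    const writer = request.body.getWriter();
    writer.write(new TextEncoder().encode("a chunk")).then(() => writer.close());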

The advantage of the piping idea is that it lets you reuse the same code for 
piping another stream as for writing a chunk directly: that is, you can write 
code like [1], encapsulating all the interfacing with the underlying sink in a 
couple of functions, and then you get an interface for both piping and 
chunk-writing to the resulting writable stream. With the 
readable-stream-accepting-function interface, by contrast, you would not get 
chunk-writing at all, and would have to develop a separate interface for it.
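
Something along the lines of [1], sketched here with made-up sendToNetwork and 
endRequest functions standing in for the underlying sink: supply that couple 
of functions once, and both interfaces fall out.

    const requestBody = new WritableStream({
      write(chunk) {
        return sendToNetwork(chunk); // hypothetical: resolves when the chunk is flushed
      },
      close() {
        return endRequest();         // hypothetical: resolves when the request is done
      }
    });

    // Chunk-writing, for free:
    const writer = requestBody.getWriter();
    writer.write(new TextEncoder().encode("a chunk")).then(() => writer.close());

    // Piping, also for free (instead of the above; a writable stream can only
    // serve one producer at a time):
    //   someReadable.pipeTo(requestBody);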

I'd be really interested to hear if you think the piping model is inherently 
worse than the accepting-function model. That would definitely make things 
interesting. My intuition is that the two look different superficially, but 
work out the same once you go through a bit of indirection.

P.S.: I haven't made time to read your blog posts yet; sorry. If you feel this 
conversation would go a lot more smoothly with me having done so, let me know 
and I can refrain from replying until then :)

[1]: https://whatwg.github.io/streams/#ws-intro
