See here for the examples: http://lists.w3.org/Archives/Public/public-webapps/2013JulSep/0453.html

They are simple ones leading to a simple Streams interface, which I thought was the spirit of the original Streams API proposal.

Now you want a stream interface so you can write some js, like msgpack, on top of it.

I am still missing a part of the puzzle, or how to use it: as you mention, the stream comes from somewhere (File, IndexedDB, WebSocket, XHR, WebRTC, etc.), so you have a limited choice of APIs to get it, and msgpack will act on top of one of those APIs, no? (Then we are back to the examples above.)
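For example, I would expect something like this (a rough sketch only; the 'stream' responseType and the Stream object are hypothetical here, and parseMsgpack is just a placeholder for the js parser; the point is that the parser still sits on top of XHR or another existing API):

  // Rough sketch: the Stream still comes from an existing API (XHR here),
  // and the msgpack code only consumes what that API hands over.
  var xhr = new XMLHttpRequest();
  xhr.open('GET', '/data.msgpack');
  xhr.responseType = 'stream';              // hypothetical value
  var started = false;
  xhr.onreadystatechange = function () {
    // presumably the Stream would be handed over before the load ends
    if (!started && xhr.readyState >= 3 && xhr.response) {
      started = true;
      parseMsgpack(xhr.response);           // parseMsgpack: the js parser
    }
  };
  xhr.send();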

How can you get the data another way?

Regards,

Aymeric

On 13/09/2013 06:36, Takeshi Yoshino wrote:
On Fri, Sep 13, 2013 at 5:15 AM, Aymeric Vitte <vitteayme...@gmail.com> wrote:

    Isaac also said: "So, just to be clear, I'm **not** suggesting that
    browser streams copy Node streams verbatim."


I know. I wanted to restart the discussion, which had stalled for two weeks.

    Unless you want to run Node inside browsers (which would be great
    but seems unlikely), I still don't see the relation between this
    kind of proposal and existing APIs.


What do you mean by "existing APIs"? I was thinking that we've been discussing what the Stream read/write API for manual consuming/producing by JavaScript code should look like.

    Could you please give an example very different from the ones I
    gave already?


Sorry, which mail?

One of the things I was imagining is protocol parsing, such as msgpack or Protocol Buffers. It's good that ArrayBuffers of exactly the needed size are obtained.
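For example (just a sketch; read(size) resolving with an ArrayBuffer of exactly size bytes is an assumption here, not settled API):

  // Sketch: parse a length-prefixed record; each read() resolves with an
  // ArrayBuffer of exactly the requested size, so no re-slicing is needed.
  function readRecord(stream) {
    return stream.read(4).then(function (header) {
      var bodyLength = new DataView(header).getUint32(0);
      return stream.read(bodyLength);
    });
  }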

OTOH, as someone pointed out, Stream should have some flow control mechanism so that it doesn't pull an unlimited amount of data from async storage, the network, etc. readableSize in my proposal is an example of how we could make that limit controllable by an app.
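Roughly like this (illustrative only; the property and event names are not final, and handleChunk is just an app callback):

  // Illustrative only: the app caps how far ahead the Stream may buffer,
  // so the underlying source is not drained without limit.
  stream.readableSize = 64 * 1024;          // pull at most 64 KiB ahead
  stream.addEventListener('readable', function () {
    handleChunk(stream.read());             // consuming frees buffer space,
  });                                       // allowing the next pull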

We could also depend on the size argument of the read() call. But thinking of protocol parsing again, it's common to have small fields of, say, 4, 8 or 16 bytes. If read(size) is configured to pull exactly size bytes from async storage, that's inefficient. Maybe we could have some hard-coded limit, e.g. 1 MiB, and pull max(hardCodedLimit, requestedReadSize) bytes.
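In rough JS, the idea would be something like this (a sketch of internal behaviour, not API surface; pullFromSource(n) is assumed to resolve with a Uint8Array of up to n bytes, empty at EOF):

  // Sketch of the internal read path: small reads are served from an
  // in-memory buffer that is refilled in larger units.
  var HARD_CODED_LIMIT = 1024 * 1024;       // 1 MiB

  function BufferedReader(pullFromSource) {
    this.pull = pullFromSource;             // pull(n) -> Promise<Uint8Array>
    this.buffer = new Uint8Array(0);
  }

  BufferedReader.prototype.read = function (requestedReadSize) {
    var self = this;
    if (self.buffer.length >= requestedReadSize) {
      var result = self.buffer.subarray(0, requestedReadSize);
      self.buffer = self.buffer.subarray(requestedReadSize);
      return Promise.resolve(result);
    }
    // Pull more than asked for, so the next small read() is cheap.
    var pullSize = Math.max(HARD_CODED_LIMIT, requestedReadSize);
    return self.pull(pullSize).then(function (data) {
      if (data.length === 0) {              // EOF: return whatever is left
        var rest = self.buffer;
        self.buffer = new Uint8Array(0);
        return rest;
      }
      var merged = new Uint8Array(self.buffer.length + data.length);
      merged.set(self.buffer);
      merged.set(data, self.buffer.length);
      self.buffer = merged;
      return self.read(requestedReadSize);
    });
  };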

I'm fine with the latter.

    You have reverted to EventTarget too instead of promises, why?


There was no intention to object to the use of Promise. Sorry that I wasn't clear. I'm rather interested in receiving a sequence of data chunks as they become available (this corresponds to Jonas's ChunkedData version of the read methods) with just one read call. Sorry that I didn't mention it explicitly, but the listeners on the proposed API came from the ChunkedData object. I thought we could put them on Stream itself by giving up the multiple-read scenario.
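To illustrate (rough sketch; the event names are placeholders, and handleChunk/done are just app callbacks):

  // Rough sketch: one read() call, then chunks are delivered through
  // listeners as data becomes available (ChunkedData style).
  stream.addEventListener('data', function (e) {
    handleChunk(e.data);                    // e.data: an ArrayBuffer chunk
  });
  stream.addEventListener('close', function () {
    done();                                 // no more data will arrive
  });
  stream.read();                            // a single read() starts delivery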

writeableThreshold/readableThreshold can be safely removed from the API if we agree they're not important. If the threshold stuff is removed, flush() and pull() will also be removed.


--
jCore
Email :  avi...@jcore.fr
Peersm : http://www.peersm.com
iAnonym : http://www.ianonym.com
node-Tor : https://www.github.com/Ayms/node-Tor
GitHub : https://www.github.com/Ayms
Web :    www.jcore.fr
Extract Widget Mobile : www.extractwidget.com
BlimpMe! : www.blimpme.com
