The idea did not come from mimicking WebRTC:
- pause/unpause: insert pause in the stream, stop processing the data
when pause is reached (but don't close the operation, see below), buffer
next data coming in, restart from pause on unpause
Use case: flow control, window flow control gets
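A minimal sketch of the pause/unpause behavior described above, assuming a hypothetical `PausableStream` (the class name and callback shape are illustrative, not part of any proposal):

```javascript
// Hypothetical sketch: while paused, incoming chunks are buffered instead
// of being delivered (the operation is not closed), and unpause() flushes
// the buffer and resumes normal delivery from where the pause was reached.
class PausableStream {
  constructor(onChunk) {
    this.onChunk = onChunk; // consumer callback
    this.paused = false;
    this.buffer = [];
  }
  write(chunk) {
    if (this.paused) {
      this.buffer.push(chunk); // hold data while paused
    } else {
      this.onChunk(chunk);
    }
  }
  pause() { this.paused = true; }
  unpause() {
    this.paused = false;
    while (this.buffer.length && !this.paused) {
      this.onChunk(this.buffer.shift()); // restart from the pause point
    }
  }
}
```

The consumer sees no gap in the data, only a delay, which is the essence of the flow-control use case.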
On Wed, Oct 23, 2013 at 11:42 PM, Aymeric Vitte vitteayme...@gmail.com wrote:
Your filter idea seems to be equivalent to a createStream that I
suggested some time ago (like node), what about:
var encryptionPromise = crypto.subtle.encrypt(aesAlgorithmEncrypt, aesKey,
On Wed, Oct 30, 2013 at 8:14 PM, Takeshi Yoshino tyosh...@google.com wrote:
On Wed, Oct 23, 2013 at 11:42 PM, Aymeric Vitte vitteayme...@gmail.com wrote:
- pause: pause the stream, do not send eof
Sorry, what will be paused? Output?
Your filter idea seems to be equivalent to a createStream that I
suggested some time ago (like node), what about:
var encryptionPromise = crypto.subtle.encrypt(aesAlgorithmEncrypt,
aesKey, sourceStream).createStream();
So you don't need to modify the APIs where you cannot specify the
Sorry for the ~2-week gap.
On Fri, Oct 4, 2013 at 5:57 PM, Aymeric Vitte vitteayme...@gmail.com wrote:
I am still not very familiar with promises, but if I take your preceding example:
var sourceStream = xhr.response;
var resultStream = new Stream();
var fileWritingPromise =
I am still not very familiar with promises, but if I take your preceding example:
var sourceStream = xhr.response;
var resultStream = new Stream();
var fileWritingPromise = fileWriter.write(resultStream);
var encryptionPromise = crypto.subtle.encrypt(aesAlgorithmEncrypt,
aesKey, sourceStream,
Formatted and published my latest proposal at github after incorporating
Aymeric's multi-dest idea.
http://htmlpreview.github.io/?https://github.com/tyoshino/stream/blob/master/streams.html
On Sat, Sep 28, 2013 at 11:45 AM, Kenneth Russell k...@google.com wrote:
This looks nice. It looks like
Looks good, comments/questions:
- what's the use of readEncoding?
- StreamReadType: add MediaStream? (and others if existing)
- would it be possible to pipe from StreamReadType to other StreamReadType?
- would it be possible to pipe from a source to different targets (my
example of
On Thu, Sep 26, 2013 at 6:36 PM, Aymeric Vitte vitteayme...@gmail.com wrote:
Looks good, comments/questions:
- what's the use of readEncoding?
Overriding the charset specified in .type for the read op. It's weird, but we could instead ask the app to overwrite .type.
- StreamReadType: add
On 24/09/2013 21:24, Takeshi Yoshino wrote:
On Wed, Sep 25, 2013 at 12:41 AM, Aymeric Vitte vitteayme...@gmail.com wrote:
Did you see
http://lists.w3.org/Archives/Public/public-webapps/2013JulSep/0593.html
?
Yes. This example seems to be showing
On Wed, Sep 25, 2013 at 10:55 PM, Aymeric Vitte vitteayme...@gmail.com wrote:
My understanding is that the flow control APIs like mine are intended to
be used by JS code implementing some converter, consumer, etc. while
built-in stuff like WebCrypto would be evolved to accept Stream directly
As we don't see any strong demand for flow control and sync read
functionality, I've revised the proposal.
Though we could separate state/error signaling from Stream and keep it handled by each API (e.g. XHR), as Aymeric said, the EoF signal still needs to be conveyed through Stream.
enum
Did you see
http://lists.w3.org/Archives/Public/public-webapps/2013JulSep/0593.html ?
An attempt to find a link between the data producer APIs and a Streams API
like yours.
Regards
Aymeric
On 20/09/2013 15:16, Takeshi Yoshino wrote:
On Sat, Sep 14, 2013 at 12:03 AM, Aymeric Vitte
On Wed, Sep 25, 2013 at 12:41 AM, Aymeric Vitte vitteayme...@gmail.com wrote:
Did you see
http://lists.w3.org/Archives/Public/public-webapps/2013JulSep/0593.html ?
Yes. This example seems to be showing how to connect only producer/consumer
APIs which support Stream. Right?
In such a case,
On Sat, Sep 14, 2013 at 12:03 AM, Aymeric Vitte vitteayme...@gmail.com wrote:
I take this example to see whether it could be better with built-in Stream flow control; if so, after you have defined the right parameters (if possible) for the streams' flow control, you could process delta
Here for the examples:
http://lists.w3.org/Archives/Public/public-webapps/2013JulSep/0453.html
Simple ones leading to a simple Streams interface; I thought this was the spirit of the original Streams API proposal.
Now you want a stream interface so you can code some js like mspack on
top of
On Fri, Sep 13, 2013 at 6:08 PM, Aymeric Vitte vitteayme...@gmail.com wrote:
Now you want a stream interface so you can code some js like mspack on top
of it.
I am still missing a part of the puzzle or how to use it: as you mention
the stream is coming from somewhere (File, indexedDB,
On 13/09/2013 14:23, Takeshi Yoshino wrote:
On Fri, Sep 13, 2013 at 6:08 PM, Aymeric Vitte vitteayme...@gmail.com wrote:
Now you want a stream interface so you can code some js like
mspack on top of it.
I am still missing a part of the puzzle or
Since I joined the discussion recently, I don't know the original idea behind the Stream+XHR integration approach (response returns a Stream object) as in the current Streams API spec. But one advantage of it that I can see is that we can keep changes to those producer APIs small. If we decide to add methods
On Fri, Sep 13, 2013 at 9:50 PM, Aymeric Vitte vitteayme...@gmail.com wrote:
On 13/09/2013 14:23, Takeshi Yoshino wrote:
Do you mean that those data producer APIs should be changed to provide
read-by-delta-data, and manipulation of data by js code should happen there
instead of at the
On 13/09/2013 15:11, Takeshi Yoshino wrote:
On Fri, Sep 13, 2013 at 9:50 PM, Aymeric Vitte vitteayme...@gmail.com wrote:
On 13/09/2013 14:23, Takeshi Yoshino wrote:
Do you mean that those data producer APIs should be changed to
provide
Apparently we are not talking about the same thing: while I am thinking of a high-level interface, your interface takes care of the underlying level.
Like node's streams: node had to define them since they did not exist (but is anyone using node's streams as such, or does everybody use the
On Thu, Sep 12, 2013 at 10:58 PM, Aymeric Vitte vitteayme...@gmail.com wrote:
Apparently we are not talking about the same thing: while I am thinking of a high-level interface, your interface takes care of the underlying level.
How much low level stuff to expose would basically affect
On Fri, Sep 13, 2013 at 5:15 AM, Aymeric Vitte vitteayme...@gmail.com wrote:
Isaac also said: "So, just to be clear, I'm *not* suggesting that browser streams copy Node streams verbatim."
I know. I wanted to restart the discussion, which had stalled for 2 weeks.
Unless you want to do node
Isaac also said: "So, just to be clear, I'm *not* suggesting that browser streams copy Node streams verbatim."
Unless you want to do node inside browsers (which would be great but
seems unlikely) I still don't see the relation between this kind of
proposal and existing APIs.
Could you please
Here's my all-in-one strawman proposal including some new stuff for flow control. Yes, it's too big, but it may be useful for glancing over which features are requested.
enum StreamReadType {
  "",
  "arraybuffer",
  "text"
};
[Constructor(optional DOMString mime, optional [Clamp] long long
I forgot to add an attribute to specify the max size of the backing store. Maybe it should be added to the constructor.
On Wed, Sep 11, 2013 at 11:24 PM, Takeshi Yoshino tyosh...@google.com wrote:
any peek(optional [Clamp] long long size, optional [Clamp] long long
offset);
peek with offset
On Fri, Aug 23, 2013 at 2:41 AM, Isaac Schlueter i...@izs.me wrote:
1. Drop the "read n bytes" part of the API entirely. It is hard to do
I'm ok with that. But then, instead we need to evolve ArrayBuffer to have
powerful concat/slice functionality for performance. Re: slicing, we can
just make
On Fri, Aug 9, 2013 at 12:47 PM, Isaac Schlueter i...@izs.me wrote:
Jonas,
What does *progress* mean here?
So, you do something like this:
var p = stream.read()
to get a promise (of some sort). That read() operation is (if we're
talking about TCP or FS) a single operation. There's
On Fri, Aug 9, 2013 at 7:36 PM, Domenic Denicola
dome...@domenicdenicola.com wrote:
Another way of looking at it, is that a streaming API is itself incremental
and cancellable. It makes no sense to say that each read from or write to the
stream is *also* incremental and cancellable; why
On 22/08/2013 09:28, Jonas Sicking wrote:
Does anyone have examples of code that uses the Node.js API? I'd love to look at how people practically end up consuming data.
I am doing something like this:
var parse=function() {
//process this.stream_
this.queue_.shift();
if
So, just to be clear, I'm *not* suggesting that browser streams copy
Node streams verbatim.
In Node.js doing A looks something like:
stream.on('readable', function() {
var buffer;
while((buffer = stream.read())) {
processData(buffer);
}
});
Not quite. In Node.js, doing A looks
On Thu, Aug 8, 2013 at 7:40 PM, Austin William Wright a...@bzfx.net wrote:
I believe the term is congestion control such as the TCP congestion
control algorithm.
As I've heard the term used, congestion control is slightly different from flow control or TCP backpressure, but they are related
On Thu, Aug 8, 2013 at 7:40 PM, Austin William Wright a...@bzfx.net wrote:
On Thu, Aug 8, 2013 at 2:56 PM, Jonas Sicking jo...@sicking.cc wrote:
On Thu, Aug 8, 2013 at 6:42 AM, Domenic Denicola
dome...@domenicdenicola.com wrote:
From: Takeshi Yoshino [mailto:tyosh...@google.com]
On Thu,
Jonas,
What does *progress* mean here?
So, you do something like this:
var p = stream.read()
to get a promise (of some sort). That read() operation is (if we're
talking about TCP or FS) a single operation. There's no 50% of the
way done reading moment that you'd care to tap into.
Even
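A read-to-completion loop over such a promise-returning read() might look like this; the `{ value, eof }` result shape and the `drain` helper are assumptions for illustration, not the API under discussion:

```javascript
// Drain a stream whose read() returns a promise for the next chunk.
// Each read() is one complete operation, as Isaac describes: there is
// no partial-progress moment inside a single read to tap into.
async function drain(stream, processData) {
  for (;;) {
    const { value, eof } = await stream.read(); // one whole read op
    if (eof) return;                            // end-of-stream signal
    processData(value);
  }
}
```

Progress, in this model, is simply the sequence of resolved reads rather than a notification inside any single read.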
Isaac has essentially explained what I was getting at earlier, except much more
clearly. When I said this allows better pipelining and backpressure down to
the network and file descriptor layer, I was essentially saying implementing
read or write operations as cancellable and incremental does
From: Takeshi Yoshino [mailto:tyosh...@google.com]
On Thu, Aug 1, 2013 at 12:54 AM, Domenic Denicola
dome...@domenicdenicola.com wrote:
Hey all, I was directed here by Anne helpfully posting to
public-script-coord and es-discuss. I would love it if a summary of what
proposal is currently
From: Takeshi Yoshino [tyosh...@google.com]
Sorry, which one? stream.Readable's readable event and read method?
Exactly.
I agree flow control is an issue not addressed well yet and needs to be fixed.
I would definitely suggest thinking about it as soon as possible, since it will
likely have
On Thu, Aug 8, 2013 at 6:42 AM, Domenic Denicola
dome...@domenicdenicola.com wrote:
From: Takeshi Yoshino [mailto:tyosh...@google.com]
On Thu, Aug 1, 2013 at 12:54 AM, Domenic Denicola
dome...@domenicdenicola.com wrote:
Hey all, I was directed here by Anne helpfully posting to
On Thu, Aug 8, 2013 at 2:56 PM, Jonas Sicking jo...@sicking.cc wrote:
On Thu, Aug 8, 2013 at 6:42 AM, Domenic Denicola
dome...@domenicdenicola.com wrote:
From: Takeshi Yoshino [mailto:tyosh...@google.com]
On Thu, Aug 1, 2013 at 12:54 AM, Domenic Denicola
dome...@domenicdenicola.com
On Tue, Jul 30, 2013 at 10:27 PM, Takeshi Yoshino tyosh...@google.com wrote:
On Tue, Jul 30, 2013 at 12:07 PM, Jonas Sicking jo...@sicking.cc wrote:
could contain an encoding. Then when stream.readText is called, if
there's an explicit encoding, it would use that encoding when
Do you
Hey all, I was directed here by Anne helpfully posting to public-script-coord and es-discuss. I would love a summary of which proposal is currently under discussion: is it [1]? Or maybe some form of [2]?
[1]: https://rawgithub.com/tyoshino/stream/master/streams.html
[2]:
From: Anne van Kesteren [ann...@annevk.nl]
Stream.prototype.readType takes an enumerated string value which is "arraybuffer" (default) or "text".
Stream.prototype.read returns a promise fulfilled with the type of value
requested.
I believe this is somewhat similar to how Node streams have
On Wed, Jul 31, 2013 at 5:03 PM, Domenic Denicola
dome...@domenicdenicola.com wrote:
In this way, the encoding is a stateful aspect of the stream itself. I don't
think there's a way to get around this, without ending up with dangling
half-character bytes hanging around.
It seems though that
From: Anne van Kesteren [ann...@annevk.nl]
It seems though that if you can change the way bytes are consumed while
reading a stream you will end up with problematic scenarios. E.g. you consume
2 bytes of a 4-byte utf-8 sequence. Then switch to reading code points...
Instantiating a
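For what it's worth, the stateful behavior being described is what the later TextDecoder API ended up doing: with `{ stream: true }` it buffers an incomplete multi-byte sequence across calls instead of emitting a replacement character. Shown only as an analogy, since TextDecoder postdates this thread:

```javascript
// U+1D11E MUSICAL SYMBOL G CLEF is the 4-byte UTF-8 sequence f0 9d 84 9e.
// Split it across two chunks: a stateful decoder must hold the dangling
// half-character bytes rather than decoding them eagerly.
const decoder = new TextDecoder("utf-8");
const part1 = new Uint8Array([0xf0, 0x9d]); // first 2 of 4 bytes
const part2 = new Uint8Array([0x84, 0x9e]); // remaining 2 bytes

const a = decoder.decode(part1, { stream: true }); // "" — incomplete, buffered
const b = decoder.decode(part2);                   // sequence completed here
```

Switching read modes mid-stream, as Anne notes, would have to decide what happens to those buffered bytes.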
On Wed, Jul 31, 2013 at 10:17 AM, Domenic Denicola
dome...@domenicdenicola.com wrote:
From: Anne van Kesteren [ann...@annevk.nl]
It seems though that if you can change the way bytes are consumed while
reading a stream you will end up with problematic scenarios. E.g. you
consume 2 bytes of a
I read the thread quickly, but it seems this is exactly the issue I had doing [1].
The use case was just decoding utf-8 html chunked buffers and modifying
the content on the fly to stream it somewhere else.
It had to work inside browsers and with node (which as far as I know
does not
Couldn't we simply let the Stream class have a content type, which
could contain an encoding. Then when stream.readText is called, if
there's an explicit encoding, it would use that encoding when
converting to text.
/ Jonas
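Jonas's suggestion could be sketched roughly as follows; the `type`/`bytes` shape of the stream object here is hypothetical, and only the charset handling is the point:

```javascript
// Sketch: readText() honors a charset carried in the stream's content
// type (e.g. "text/plain; charset=utf-8"), falling back to UTF-8 when
// no explicit encoding is present. The object shape is illustrative.
function readText(stream) {
  const match = /;\s*charset=([^;]+)/i.exec(stream.type || "");
  const encoding = match ? match[1].trim() : "utf-8";
  return new TextDecoder(encoding).decode(stream.bytes);
}
```

This makes the encoding a property of the stream rather than a per-read argument, which is exactly the stateful aspect Domenic points out below.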
On Mon, Jul 29, 2013 at 6:38 AM, Takeshi Yoshino tyosh...@google.com
On Mon, Jul 29, 2013 at 1:16 PM, Jonas Sicking jo...@sicking.cc wrote:
Couldn't we simply let the Stream class have a content type, which
could contain an encoding. Then when stream.readText is called, if
there's an explicit encoding, it would use that encoding when
converting to text.
How
On Mon, Jul 29, 2013 at 3:20 PM, Anne van Kesteren ann...@annevk.nl wrote:
On Mon, Jul 29, 2013 at 1:16 PM, Jonas Sicking jo...@sicking.cc wrote:
Couldn't we simply let the Stream class have a content type, which
could contain an encoding. Then when stream.readText is called, if
there's an
On Mon, Jul 29, 2013 at 4:13 PM, Jonas Sicking jo...@sicking.cc wrote:
On Mon, Jul 29, 2013 at 3:20 PM, Anne van Kesteren ann...@annevk.nl wrote:
How about we use what XMLHttpRequest and WebSocket have?
Stream.prototype.readType takes an enumerated string value which is
arraybuffer (default)
On Mon, Jul 29, 2013 at 5:37 PM, Anne van Kesteren ann...@annevk.nl wrote:
On Mon, Jul 29, 2013 at 4:13 PM, Jonas Sicking jo...@sicking.cc wrote:
On Mon, Jul 29, 2013 at 3:20 PM, Anne van Kesteren ann...@annevk.nl wrote:
How about we use what XMLHttpRequest and WebSocket have?
On Jul 29, 2013 7:53 PM, Takeshi Yoshino tyosh...@google.com wrote:
On Tue, Jul 30, 2013 at 5:16 AM, Jonas Sicking jo...@sicking.cc wrote:
Couldn't we simply let the Stream class have a content type, which
That's what I meant. In Feras's proposal, Stream has a type attribute. I copied it to my
On Wed, Jul 10, 2013 at 7:02 AM, Anne van Kesteren ann...@annevk.nl wrote:
On Tue, Jul 2, 2013 at 12:21 AM, Takeshi Yoshino tyosh...@google.com wrote:
What I have in my mind is like this:
if (this.readyState == this.LOADING) {
stream = xhr.response;
// XHR has already written some data
On Tue, Jul 16, 2013 at 11:10 PM, Jonas Sicking jo...@sicking.cc wrote:
Reading any format that contains textual data. I.e. things like HTML,
OpenDocument, pdf, etc. While many of those are compressed, it seems
likely that you could pass a stream through a decompressor which
produces a
On Wed, Jul 17, 2013 at 10:47 AM, Anne van Kesteren ann...@annevk.nl wrote:
On Tue, Jul 16, 2013 at 11:10 PM, Jonas Sicking jo...@sicking.cc wrote:
Reading any format that contains textual data. I.e. things like HTML,
OpenDocument, pdf, etc. While many of those are compressed, it seems
likely
On Wed, Jul 17, 2013 at 11:05 AM, Jonas Sicking jo...@sicking.cc wrote:
What do you mean by such features? Are you saying that a Stream zip
decompressor should be responsible for both decompressing as well as
binary-text conversion? And thus output something other than a
Stream?
I meant that
On Wed, Jul 17, 2013 at 11:46 AM, Anne van Kesteren ann...@annevk.nl wrote:
On Wed, Jul 17, 2013 at 11:05 AM, Jonas Sicking jo...@sicking.cc wrote:
What do you mean by such features? Are you saying that a Stream zip
decompressor should be responsible for both decompressing as well as
On Tue, Jul 2, 2013 at 12:21 AM, Takeshi Yoshino tyosh...@google.com wrote:
What I have in my mind is like this:
if (this.readyState == this.LOADING) {
stream = xhr.response;
// XHR has already written some data x0 to stream
stream.read().progress(progressHandler);
}
...loop...
//
On Mon, Jul 1, 2013 at 9:03 AM, Takeshi Yoshino tyosh...@google.com wrote:
Moved to github.
https://github.com/tyoshino/stream/blob/master/streams.html
http://htmlpreview.github.io/?https://github.com/tyoshino/stream/blob/master/streams.html
Why would it be neutered if size is not given?
On Wed, Jun 26, 2013 at 6:48 AM, Takeshi Yoshino tyosh...@google.com wrote:
I wrote a strawman spec for Stream.readAsArrayBuffer. Comment please.
Calling the stream associated concepts the same as the variables in
the algorithm is somewhat confusing (read_position vs read_position).
4. If
On Sat, May 18, 2013 at 1:38 PM, Jonas Sicking jo...@sicking.cc wrote:
For File reading I would now instead do something like
partial interface Blob {
AbortableProgressFuture<ArrayBuffer> readBinary(BlobReadParams);
AbortableProgressFuture<DOMString> readText(BlobReadTextParams);
Stream
On Sat, May 18, 2013 at 1:56 PM, Jonas Sicking jo...@sicking.cc wrote:
On Fri, May 17, 2013 at 9:38 PM, Jonas Sicking jo...@sicking.cc wrote:
For Stream reading, I think I would do something like the following:
interface Stream {
AbortableProgressFuture<ArrayBuffer> readBinary(optional
On Sat, May 18, 2013 at 5:56 AM, Jonas Sicking jo...@sicking.cc wrote:
where the ProgressFutures returned from
readBinaryChunked/readBinaryChunked delivers the data in the progress
notifications only, and no data is delivered when the future is
actually resolved. Though this might be abusing
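The chunked-read idea, where data arrives only via progress notifications and resolution merely signals completion, can be mocked with a plain promise plus a progress callback standing in for ProgressFuture (which never shipped); everything here is illustrative:

```javascript
// Mock of the readBinaryChunked idea: chunks are pushed through the
// progress callback; the returned promise resolves with no data and
// only signals end-of-stream.
function readChunked(chunks, onProgress) {
  return new Promise(resolve => {
    (function next() {
      if (chunks.length === 0) return resolve(undefined); // no data on resolve
      onProgress(chunks.shift()); // data delivered via progress only
      setTimeout(next, 0);        // yield between chunks, as real IO would
    })();
  });
}
```

As Jonas notes, resolving with nothing may be abusing the future/promise model, since the result carries no value.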
On Sat, May 18, 2013 at 7:36 AM, Anne van Kesteren ann...@annevk.nl wrote:
On Sat, May 18, 2013 at 5:56 AM, Jonas Sicking jo...@sicking.cc wrote:
where the ProgressFutures returned from
readBinaryChunked/readBinaryChunked delivers the data in the progress
notifications only, and no data is
On Thu, May 16, 2013 at 10:14 PM, Takeshi Yoshino tyosh...@google.com wrote:
I skimmed the thread before starting this and saw that you were pointing out some issues, but I didn't think you were opposed so strongly.
Well yes. I removed integration from XMLHttpRequest a while back too.
Let me check
On Thu, May 16, 2013 at 8:26 PM, Feras Moussa feras.mou...@hotmail.com wrote:
Can you please go into a bit more detail? I've read through the thread, and
it mostly focuses on the details of how a Stream is received from XHR and
what behaviors can be expected - it only lightly touches on how you
Sorry, I just took over this work, and so I had misunderstood some points in the Streams API spec.
On Fri, May 17, 2013 at 6:09 PM, Anne van Kesteren ann...@annevk.nl wrote:
On Thu, May 16, 2013 at 10:14 PM, Takeshi Yoshino tyosh...@google.com
wrote:
I skimmed the thread before starting this
On Fri, May 17, 2013 at 12:09 PM, Takeshi Yoshino tyosh...@google.com wrote:
I thought the spec was clear about this, but sorry, it isn't. In the spec we should say that StreamReader invalidates consumed data in the Stream, and the buffer for the invalidated bytes will be released at that point. Right?
On Fri, May 17, 2013 at 6:15 PM, Anne van Kesteren ann...@annevk.nl wrote:
The main problem is that Stream per Streams API is not what you expect
from an IO stream, but it's more what Blob should've been (Blob
without synchronous size). What we want I think is a real IO stream.
If we also
I figured I should chime in with some ideas of my own because, well, why not :-)
First off, I definitely think the semantic model of a Stream shouldn't
be a Blob without a size, but rather a Blob without a size that you
can only read from once. I.e. the implementation should be able to
discard
On Fri, May 17, 2013 at 9:38 PM, Jonas Sicking jo...@sicking.cc wrote:
For Stream reading, I think I would do something like the following:
interface Stream {
AbortableProgressFuture<ArrayBuffer> readBinary(optional unsigned
long long size);
AbortableProgressFuture<String> readText(optional
On Thu, May 16, 2013 at 5:58 PM, Takeshi Yoshino tyosh...@google.com wrote:
StreamReader proposed in the Streams API spec is almost the same as
FileReader. By adding the maxSize argument to the readAs methods (new
methods or just add it to existing methods as an optional argument) and
adding
From: annevankeste...@gmail.com [mailto:annevankeste...@gmail.com]
On Thu, May 16, 2013 at 5:58 PM, Takeshi Yoshino tyosh...@google.com
wrote:
StreamReader proposed in the Streams API spec is almost the same as
FileReader. By adding the maxSize argument to the readAs methods (new
From: annevankeste...@gmail.com [mailto:annevankeste...@gmail.com]
On Thu, May 16, 2013 at 6:31 PM, Travis Leithead
travis.leith...@microsoft.com wrote:
Since we have Streams implemented to some degree, I'd love to hear
suggestions to improve it relative to IO. Anne, can you summarize the
the Streams spec accordingly.
Date: Thu, 16 May 2013 18:41:21 +0100
From: ann...@annevk.nl
To: travis.leith...@microsoft.com
CC: tyosh...@google.com; slightly...@google.com; public-webapps@w3.org
Subject: Re: Overlap between StreamReader and FileReader
On Thu, May 16, 2013 at 6:31 PM, Travis
I skimmed the thread before starting this and saw that you were pointing out some issues, but I didn't think you were opposed so strongly.
Let me check requirements.
a) We don't want to introduce a completely new object for streaming HTTP
read/write, but we'll realize it by adding some extension