Re: Art steps down - thank you for everything

2016-02-01 Thread Takeshi Yoshino
Thank you Art!

Takeshi

On Mon, Feb 1, 2016 at 12:39 AM, Tobie Langel  wrote:

> So long, Art, and thanks for all the fish.
>
> --tobie
>
> On Thu, 28 Jan 2016, at 16:45, Chaals McCathie Nevile wrote:
> > Hi folks,
> >
> > as you may have noticed, Art has resigned as a co-chair of the Web
> > Platform group. He began chairing the Web Application Formats group about
> > a decade ago, became the leading co-chair when it merged with Web APIs to
> > become the Web Apps working group, and was instrumental in making the
> > transition from Web Apps to the Web Platform Group. (He also chaired
> > various other W3C groups in that time).
> >
> > I've been very privileged to work with Art on the webapps group for so
> > many years - as many of you know, without him it would have been a much
> > poorer group, and run much less smoothly. He did a great deal of work for
> > the group throughout his time as co-chair, efficiently, reliably, and
> > quietly.
> >
> > Now that we are three co-chairs, we will work among ourselves to fill
> > Art's shoes. It won't be easy.
> >
> > Thanks Art for everything you've done for the group for so long.
> >
> > Good luck, and I hope to see you around.
> >
> > Chaals
> >
> > --
> > Charles McCathie Nevile - web standards - CTO Office, Yandex
> >   cha...@yandex-team.ru - - - Find more at http://yandex.com
> >
>
>


Re: [charter] What is the plan for Streams API?

2015-08-05 Thread Takeshi Yoshino
+domenic

We've recently finished the ReadableStream part of the spec and are
experimenting with integrating it with the Fetch API. Most of the spec is
still unstable. I currently don't have the bandwidth to maintain the W3C
version of the spec, even briefly...

Takeshi

On Tue, Aug 4, 2015 at 11:13 PM, Arthur Barstow 
wrote:

> On 7/30/15 8:46 AM, Arthur Barstow wrote:
>
>> 
>>
>
> The WebApps + HTML WG draft charter says the following about WebApps'
> Streams API:
>
> [[
> Streams API 
>An API for representing a stream of data in web applications.
> ]]
>
> I believe the previously agreed "plan of record" for this spec was to
> create an API on top of . Is that still
> something this group wants to do, and if so, who can commit to actually
> doing the work, in particular: editing, implementation, and test suite?
>
> If we no longer have committed resources for doing the above tasks then
> this spec should be removed from the draft charter.
>
> -Thanks, AB
>
>
>


Re: Allow custom headers (Websocket API)

2015-02-06 Thread Takeshi Yoshino
Usually:
- IETF HyBi ML
http://www.ietf.org/mail-archive/web/hybi/current/maillist.html for
protocol stuff
- Here or WHATWG ML
https://lists.w3.org/Archives/Public/public-whatwg-archive/ for API stuff


On Thu, Feb 5, 2015 at 11:07 PM, Michiel De Mey 
wrote:

> Standardizing the approach would definitely help developers;
> however, where will we communicate this?
>


Re: Allow custom headers (Websocket API)

2015-02-05 Thread Takeshi Yoshino
On Thu, Feb 5, 2015 at 10:57 PM, Anne van Kesteren  wrote:

> On Thu, Feb 5, 2015 at 2:48 PM, Bjoern Hoehrmann 
> wrote:
> > A Websocket connection is established by making an HTTP Upgrade request,
> > and the protocol is HTTP unless and until the connection is upgraded.
>
> Sure, but the server can get away with supporting a very limited
> subset of HTTP, no? Anyway, perhaps a combination of a CORS preflight
> followed by the HTTP Upgrade that then includes the headers is the
> answer, would probably be best to ask some WebSocket library
> developers what they think.
>

Agreed. Even if we don't make any changes to the existing specs, we need to
standardize (or at least announce to developers) that they need to make their
servers understand that combination if they want to develop apps that use
custom headers. Then, client vendors could implement it.


Re: Allow custom headers (Websocket API)

2015-02-05 Thread Takeshi Yoshino
http://www.w3.org/TR/cors/#cross-origin-request-0

> 2. If the following conditions are true, follow the simple cross-origin
> request algorithm:
> - The request method is a simple method and the force preflight flag is
> unset.
> - Each of the author request headers is a simple header or author request
> headers is empty.
> 3. Otherwise, follow the cross-origin request with preflight algorithm.

https://fetch.spec.whatwg.org/#dfnReturnLink-7

> request's unsafe request flag is set and either request's method is not a
> simple method or a header in request's header list is not a simple header
>   Set request's response tainting to CORS.
>   The result of performing an HTTP fetch using request with the CORS flag
>   and CORS preflight flag set.

The Authorization header is not a simple header, so a request carrying it
takes the preflight path.
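For illustration, a minimal sketch with a hypothetical endpoint: because
Authorization is not a simple header, setting it on a cross-origin XHR forces
the preflight path.

var xhr = new XMLHttpRequest();
xhr.open('GET', 'https://api.example.com/data'); // cross-origin, hypothetical
// A non-simple header: the browser first sends "OPTIONS /data" with
// Access-Control-Request-Headers: authorization before the actual GET.
xhr.setRequestHeader('Authorization', 'Bearer token123');
xhr.onload = function () { console.log(xhr.responseText); };
xhr.send();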


On Thu, Feb 5, 2015 at 10:48 PM, Florian Bösch  wrote:

> On Thu, Feb 5, 2015 at 2:44 PM, Takeshi Yoshino 
> wrote:
>
>> IIUC, CORS prevents clients from issuing non-simple cross-origin requests
>> (even with idempotent methods) without verifying that the server
>> understands CORS. That's realized by the preflight.
>>
>
> Incorrect, the browser will perform idempotent requests (for instance
>  or XHR GET) across domains without a preflight request. It will,
> however, not make the data available to the client (JavaScript specifically)
>

That's the tainting part.


> unless CORS is satisfied (XHR GET will error out, and  will throw a
> glError on gl.texImage2D if CORS isn't satisfied).
>


Re: Allow custom headers (Websocket API)

2015-02-05 Thread Takeshi Yoshino
On Thu, Feb 5, 2015 at 10:41 PM, Florian Bösch  wrote:

> On Thu, Feb 5, 2015 at 2:39 PM, Takeshi Yoshino 
> wrote:
>
>> To prevent WebSocket from being abused to attack existing HTTP servers
>> with malicious non-simple cross-origin requests, we need WebSocket
>> clients to do some preflight to verify that the server is not an HTTP
>> server that doesn't understand CORS. We could do so, e.g., when a custom
>> header is specified,
>>
> No further specification is needed because CORS already covers the case of
> endpoints that do not understand CORS (deny by default). Hence the above
> assertion is superfluous.
>

IIUC, CORS prevents clients from issuing non-simple cross-origin requests
(even with idempotent methods) without verifying that the server understands
CORS. That's realized by the preflight.


>
>
>> So, anyway, I think we need to make some changes to the WebSocket spec.
>>
> Also bogus assertion.
>


Re: Allow custom headers (Websocket API)

2015-02-05 Thread Takeshi Yoshino
To prevent WebSocket from being abused to attack existing HTTP servers with
malicious non-simple cross-origin requests, we need WebSocket clients to do
some preflight to verify that the server is not an HTTP server that doesn't
understand CORS. We could do so, e.g., when a custom header is specified:

(a) The client issues a CORS preflight to verify that the server does
understand CORS and is willing to accept the request.
(b) The client issues a WebSocket preflight to verify that the server is a
WebSocket server.

(a) may work, but it needs a change to the WebSocket spec asking WebSocket
servers to understand the CORS preflight, which they're not required to do
now.

(b) could be implemented by issuing an extra WebSocket handshake without the
custom headers but with Sec-WebSocket-Key, etc., just to check that the
server is a WebSocket server, but this may not be a no-op for the server. So
I think we should specify something new that is defined to be a no-op.
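For illustration, a rough sketch of what option (a) could look like on the
wire; the CORS preflight and the WebSocket handshake are standard mechanisms,
but chaining them like this is exactly the proposal, and all names and values
here are made up:

  OPTIONS /chat HTTP/1.1              (CORS preflight)
  Host: server.example.com
  Origin: https://app.example.com
  Access-Control-Request-Method: GET
  Access-Control-Request-Headers: x-custom-header

  HTTP/1.1 200 OK                     (server opts in)
  Access-Control-Allow-Origin: https://app.example.com
  Access-Control-Allow-Headers: x-custom-header

  GET /chat HTTP/1.1                  (normal handshake + custom header)
  Host: server.example.com
  Upgrade: websocket
  Connection: Upgrade
  Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ==
  Sec-WebSocket-Version: 13
  Origin: https://app.example.com
  X-Custom-Header: some-value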

So, anyway, I think we need to make some changes to the WebSocket spec.

Takeshi

On Thu, Feb 5, 2015 at 10:23 PM, Florian Bösch  wrote:

> Well,
>
> 1) Clients do apply CORS to WebSocket requests already (and might've
> started doing so quite some time ago) and everything's fine and you don't
> need to change anything.
>
> 2) Clients do not apply CORS to WebSocket requests, and you're screwed,
> because any change you make will break existing deployments.
>
> Either way, this will result in no change made, so you can bury it right
> here.
>
> On Thu, Feb 5, 2015 at 2:12 PM, Anne van Kesteren 
> wrote:
>
>> On Thu, Feb 5, 2015 at 1:27 PM, Florian Bösch  wrote:
>> > CORS is an adequate protocol to allow for additional headers, and
>> websocket
>> > requests could be subjected to CORS (I'm not sure what the current
>> client
>> > behavior is in that regard, but I'm guessing they enforce CORS on
>> websocket
>> > requests as well).
>>
>> I think you're missing something. A WebSocket request is subject to
> the WebSocket protocol, which does not take the same precautions as
> the Fetch protocol used elsewhere in the platform does. And therefore
>> we cannot provide this feature until the WebSocket protocol is fixed
>> to take the same precautions.
>>
>>
>> --
>> https://annevankesteren.nl/
>>
>
>


Re: =[xhr]

2014-11-24 Thread Takeshi Yoshino
On Wed, Nov 19, 2014 at 1:45 AM, Domenic Denicola  wrote:

> From: annevankeste...@gmail.com [mailto:annevankeste...@gmail.com] On
> Behalf Of Anne van Kesteren
>
> > On Tue, Nov 18, 2014 at 12:50 PM, Takeshi Yoshino 
> wrote:
> >> How about padding the remaining bytes forcefully with e.g. 0x20 if the
> WritableStream doesn't provide enough bytes to us?
> >
> > How would that work? At some point when the browser decides it wants to
> terminate the fetch (e.g. due to timeout, tab being closed) it attempts to
> transmit a bunch of useless bytes? What if the value is really large?
>

The problem is that we'd provide a malicious script with a very easy way
(compared to building a big ArrayBuffer by repeatedly doubling its size) to
make the user agent send very large data. So we might want to place a limit
on the maximum Content-Length value that doesn't hurt the benefits of
streaming upload too much.


> I think there are several different scenarios under consideration.
>
> 1. The author says Content-Length 100, writes 50 bytes, then closes the
> stream.
> 2. The author says Content-Length 100, writes 50 bytes, and never closes
> the stream.
> 3. The author says Content-Length 100, writes 150 bytes, then closes the
> stream.
> 4. The author says Content-Length 100, writes 150 bytes, and never closes
> the stream.
>
> It would be helpful to know how most servers handle these. (Perhaps HTTP
> specifies a mandatory behavior.) My guess is that they are very capable of
> handling such situations. 2 in particular resembles a long-polling setup.
>
> As for whether we consider this kind of thing an "attack," instead of just
> a new capability, I'd love to get some security folks to weigh in. If they
> think it's indeed a bad idea, then we can discuss mitigation strategies; 3
> and 4 are easily mitigatable, whereas 1 could be addressed by an idea like
> Takeshi's. I don't think mitigating 2 makes much sense as we can't know
> when the author intends to send more data.
>

The extra 50 bytes in cases 3 and 4 should definitely be ignored by the
user agent. The user agent should probably also error the WritableStream
when extra bytes are written.

Case 2 is useful, but it's a new situation for web apps. I agree that we
should consult security experts.
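A sketch of that mitigation for cases 3 and 4, under assumed names
(stream.error() and transmit() are made-up stand-ins for whatever the final
API and network layer provide):

var remaining = 100; // the declared Content-Length
function onWrite(chunk) {
  if (chunk.byteLength > remaining) {
    // Ignore the excess and error the WritableStream, per the above.
    stream.error(new TypeError('body exceeds declared Content-Length'));
    return;
  }
  remaining -= chunk.byteLength;
  transmit(chunk); // hand the bytes to the network layer
}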


Re: =[xhr]

2014-11-18 Thread Takeshi Yoshino
How about padding the remaining bytes forcefully with e.g. 0x20 if the
WritableStream doesn't provide enough bytes to us?

Takeshi

On Tue, Nov 18, 2014 at 7:01 PM, Anne van Kesteren  wrote:

> On Tue, Nov 18, 2014 at 10:34 AM, Domenic Denicola  wrote:
> > I still think we should just allow the developer full control over the
> Content-Length header if they've taken full control over the contents of
> the request body (by writing to its stream asynchronously and piecemeal).
> It gives no more power than using CURL. (Except the usual issues of
> ambient/cookie authority, but those seem orthogonal to Content-Length
> mismatch.)
>
> Why? If a service behind a firewall is vulnerable to Content-Length
> mismatches, you can now attack such a service by tricking a user
> behind that firewall into visiting evil.com.
>
>
> --
> https://annevankesteren.nl/
>


Re: =[xhr]

2014-11-17 Thread Takeshi Yoshino
On Tue, Nov 18, 2014 at 1:45 PM, Domenic Denicola  wrote:

> From: Takeshi Yoshino [mailto:tyosh...@google.com]
>
> > On Tue, Nov 18, 2014 at 12:11 AM, Anne van Kesteren 
> wrote:
> >> On Mon, Nov 17, 2014 at 3:50 PM, Domenic Denicola  wrote:
> >>> What do we think of that kind of behavior for fetch requests?
> >
> >> I'm not sure we want to give a potential hostile piece of script that
> much control over what goes out. Being able to lie about Content-Length
> would be a new feature that does not really seem desirable. Streaming
> should probably imply chunked given that.
> >
> > Agreed.
>
> That would be very sad. There are many servers that will not accept
> chunked upload (for example Amazon S3). This would mean web apps would be
> unable to do streaming upload to such servers.
>

Hmm, is this a kind of protection against DoS? It seems S3 SigV4 accepts
chunked uploads but still requires a custom header indicating the final
size. This may imply that even if sending with chunked Transfer-Encoding
becomes popular with the Fetch API, they won't accept such requests without
length info in advance.


Re: =[xhr]

2014-11-17 Thread Takeshi Yoshino
On Tue, Nov 18, 2014 at 12:11 AM, Anne van Kesteren 
wrote:

> On Mon, Nov 17, 2014 at 3:50 PM, Domenic Denicola  wrote:
> > What do we think of that kind of behavior for fetch requests?
>
> I'm not sure we want to give a potential hostile piece of script that
> much control over what goes out. Being able to lie about
> Content-Length would be a new feature that does not really seem
> desirable. Streaming should probably imply chunked given that.
>

Agreed.

> stream.write(new ArrayBuffer(1024));
> setTimeout(() => stream.write(new ArrayBuffer(1024)), 100);
> setTimeout(() => stream.write(new ArrayBuffer(1024)), 200);
> setTimeout(() => stream.close(), 300);


And for abort(), the underlying transport would be destroyed. For TCP, a FIN
without the last-chunk. For HTTP/2, maybe RST_STREAM with INTERNAL_ERROR? We
need to consult httpbis.


Re: =[xhr]

2014-11-17 Thread Takeshi Yoshino
We're adding a Streams API  based feature for receiving response
bodies to the Fetch API.

See
- https://github.com/slightlyoff/ServiceWorker/issues/452
- https://github.com/yutakahirano/fetch-with-streams

Similarly, using WritableStream + the Fetch API, we could allow for sending
partial chunks. It's not well discussed/standardized yet. Please join the
discussion there.
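For reference, a sketch of roughly the shape this eventually took in the
Fetch spec, well after this thread (note that current Chrome additionally
requires duplex: 'half' for request streams):

const body = new ReadableStream({
  start(controller) {
    controller.enqueue(new TextEncoder().encode('chunk 1'));
    controller.enqueue(new TextEncoder().encode('chunk 2'));
    controller.close(); // end of the request body
  }
});
fetch('/upload', { method: 'POST', body, duplex: 'half' });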

Takeshi

On Sat, Nov 15, 2014 at 3:49 AM, Rui Prior  wrote:

> AFAIK, there is currently no way of using XMLHttpRequest to make a POST
> using Transfer-Encoding: Chunked.  IMO, this would be a very useful
> feature for repeatedly sending short messages to a server.
>
> You can always make POSTs repeatedly, one per chunk, and arrange for the
> server to "glue" the chunks together, but for short messages this
> process adds a lot of overhead (a full HTTP request per chunk, with full
> headers for both the request and the response).  Another option would be
> using websockets, but the protocol is no longer HTTP, which increases
> complexity and may bring some issues.
>
> Chunked POSTs using XMLHttpRequest would be a much better option, were
> they available.  An elegant way of integrating this feature in the API
> would be adding a second, optional boolean argument to the send()
> method, defaulting to false, that, when true, would trigger chunked
> uploading;  the last call to send() would have that argument set to
> true, indicating the end of the object to be uploaded.
>
> Is there any chance of such feature getting added to the standard in the
> future?
>
> Rui Prior
>
>
>


Re: [streams] Seeking status and plans [Was: [TPAC2014] Creating focused agenda]

2014-10-24 Thread Takeshi Yoshino
On Fri, Oct 24, 2014 at 8:24 PM, Arthur Barstow 
wrote:

> On 10/23/14 1:26 PM, Domenic Denicola wrote:
>
>> From: Arthur Barstow [mailto:art.bars...@gmail.com]
>>
>>  I think recent threads about Streams provided some useful information
>>> about the status and plans for Streams. I also think it could be useful if
>>> some set of you were available to answer questions raised at the meeting.
>>> Can any of you commit some time to be available? If so, please let me know
>>> some time slots you are available. My preference is Monday morning, if
>>> possible.
>>>
>> I'd be happy to call in at that time or another. Just let me know so I
>> can put it in my calendar.
>>
>
> OK, I just added Streams to the 11:00-11:30 slot on Monday Oct 27:
>
>  Agenda_Monday_October_27>
>
> IRC and Zakim phone bridge info is:
>
> 
>
> Takeshi, Feras - please do join the call.
>
> If members of the p-html-media group want to join, that should be fine.
>

Thanks.

I'll join too, via telephone and IRC.

Takeshi


Re: [streams-api] Seeking status of the Streams API spec

2014-10-22 Thread Takeshi Yoshino
Hi Arthur,

OK. Since I hurried, some odd text was left in. Fixed:
https://dvcs.w3.org/hg/streams-api/rev/891635210233


On Tue, Oct 21, 2014 at 10:53 PM, Arthur Barstow 
wrote:

> On 10/14/14 11:06 PM, Takeshi Yoshino wrote:
>
>> In order not to confuse people (too late, perhaps), I replaced the W3C
>> Streams API spec WD with a pointer to the WHATWG Streams spec plus a few
>> sections discussing what we should add to the spec for browser use cases.
>>
>
> Takeshi - given the magnitude of the changes in [changeset], a new WD
> should be published. I'll start a related PSA targeting a publication on
> October 23.
>
> (I'll start working on the draft WD if you don't have the time.)
>
> -Thanks, AB
>
> [changeset] <https://dvcs.w3.org/hg/streams-api/rev/e5b689ded0d6>
>


Re: [streams-api] Seeking status of the Streams API spec

2014-10-14 Thread Takeshi Yoshino
Re: establishing an integration plan for the consumers and producers listed
in the W3C spec, we haven't done anything beyond what Domenic introduced in
this thread.

I wrote a draft of an XHR+ReadableStream integration spec, and it is
implemented in Chrome, but the plan is not to ship it but to wait for the
Fetch API, as discussed at WHATWG.


On Wed, Oct 15, 2014 at 9:10 AM, Paul Cotton 
wrote:

> This thread was recently re-started at
> http://lists.w3.org/Archives/Public/public-webapps/2014OctDec/0084.html
>
> Domenic's latest document is at https://streams.spec.whatwg.org/  The W3C
> document has NOT been updated since
> http://www.w3.org/TR/2013/WD-streams-api-20131105/ .
>

In order not to confuse people (too late, perhaps), I replaced the W3C
Streams API spec WD with a pointer to the WHATWG Streams spec plus a few
sections discussing what we should add to the spec for browser use cases.


>
> /paulc
>
> Paul Cotton, Microsoft Canada
> 17 Eleanor Drive, Ottawa, Ontario K2E 6A3
> Tel: (425) 705-9596 Fax: (425) 936-7329
>
>
> -Original Message-
> From: Jerry Smith (WINDOWS)
> Sent: Tuesday, October 14, 2014 8:03 PM
> To: Domenic Denicola; Aaron Colwell
> Cc: Anne van Kesteren; Paul Cotton; Takeshi Yoshino; public-webapps;
> Arthur Barstow; Feras Moussa; public-html-me...@w3.org
> Subject: RE: [streams-api] Seeking status of the Streams API spec
>
> Where is the latest Streams spec?
> https://dvcs.w3.org/hg/streams-api/raw-file/tip/Overview.htm doesn't have
> much about WritableStreams.
>
> Jerry
>
> -Original Message-
> From: Domenic Denicola [mailto:dome...@domenicdenicola.com]
> Sent: Tuesday, October 14, 2014 10:18 AM
> To: Aaron Colwell
> Cc: Anne van Kesteren; Paul Cotton; Takeshi Yoshino; public-webapps;
> Arthur Barstow; Feras Moussa; public-html-me...@w3.org
> Subject: RE: [streams-api] Seeking status of the Streams API spec
>
> From: Aaron Colwell [mailto:acolw...@google.com]
>
> > MSE is just too far along, has already gone through a fair amount of
> churn, and has major customers like YouTube and Netflix that I just don't
> want to break or force to migrate...again.
>
> Totally understandable.
>
> > I haven't spent much time looking at the new Stream spec so I can't
> really say yet whether I agree with you or not. The main reason why people
> wanted to be able to append a stream is to handle larger, open range,
> appends without having to make multiple requests or wait for an XHR to
> complete before data could be appended. While I understand that you had
> your reasons to expand the scope of Streams to be more general, MSE really
> just needs them as a "handle" to route bytes being received with XHR to the
> SourceBuffer w/o having to actually surface them to JS. It would be really
> unfortunate if this was somehow lost in the conversion from the old spec.
>
> The way to do this in Streams is to pipe the fetch stream to a writable
> stream:
>
> fetch(url)
>   .then(response => response.body.pipeTo(writableStream).closed)
>   .then(() => console.log("all data written!"))
>   .catch(e => console.log("error fetching or piping!", e));
>
> By piping between two UA-controlled streams, you can establish an
> off-main-thread relationship between them. This is why it would be ideal
> for SourceBuffer (or a future alternative to it) to be WritableStream,
> especially given that it already has abort(), appendBuffer(), and various
> state-like properties that are very similar to what a WritableStream
> instance has. The benefit here being that people could then use
> SourceBuffers as generic destinations for any writable-stream-accepting
> code, since piping to a writable stream is idiomatic.
>
> But that said, given the churn issue I can understand it not being
> feasible or desirable to take that path.
>
> > Perhaps, although I expect that there may be some resistance to dropping
> this at this point. Media folks were expecting the Streams API to progress
> in such a way that would at least allow appendStream() to still work
> especially since it only needs a stream for recieving bytes. I'll dig into
> the latest Streams spec so I can better understand the current state of
> things.
>
> One speculative future evolution would be for SourceBuffer to grow a
> `.stream` or `.writable` property, which exposes the actual stream. Then
> appendStream could essentially be redefined as
>
> SourceBuffer.prototype.appendStream = function (readableStream) {
>   readableStream.pipeTo(this.writable);
> };
>
>
This needs to port some part of the "Prepare Append Algorithm" in order not
to run two or more pipeTo() operations at the same time?
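A sketch of what that porting might amount to (hypothetical; it mimics the
busy check that MSE's "Prepare Append" algorithm performs via the updating
flag, and assumes the speculative .writable property from above):

SourceBuffer.prototype.appendStream = function (readableStream) {
  // Reject overlapping appends the way "Prepare Append" does.
  if (this._piping) {
    throw new DOMException('SourceBuffer is busy', 'InvalidStateError');
  }
  this._piping = true;
  var self = this;
  return readableStream.pipeTo(this.writable).then(
    function () { self._piping = false; },
    function (e) { self._piping = false; throw e; });
};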


> and similarly appendBuffer could be recast in terms of
> `this.writable.write`, etc. Then developers who wish to treat the
> SourceBuffer as a more generic writable stream can just do
> `myGenericWritableStreamLibrary(mediaSource.sourceBuffers[0].writable)` or
> similar.
>
>


Re: File API: reading a Blob

2014-09-17 Thread Takeshi Yoshino
+Aaron

On Thu, Sep 11, 2014 at 6:48 PM, Aymeric Vitte 
wrote:

>  But I suppose that should be one of the first use case for Google to
> introduce streams with MSE, no?
>
>
We're sharing (and will keep sharing) updates about Streams with the MSE
folks (Aaron). The MSE side would be updated once the Streams API becomes
stable enough for them to start revising.


> To be more clear about what I mean by "back pressure for things coming
> from outside of the browser":
>
> - XHR: the Streams API should define how xhr gets chunks using Range
> according to the flow and adapt accordingly transparently for the users
>
>
We're not thinking of involving multiple HTTP transactions for one send().
That's beyond the current XHR semantics. I think even if we implement that
kind of feature, we should add it as a new mode or API, not as a new
responseType.

What we plan to do is connect the Streams API's back pressure with TCP
congestion control. Since the socket API doesn't expose many details to the
user, we'll just control the frequency of reads and the size of the buffer
passed to read(2).
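A minimal sketch of the stream side of this, in today's WHATWG Streams
vocabulary rather than the draft's (readNextChunkFromSocket() is a made-up
stand-in for the internal socket read):

const response = new ReadableStream({
  pull(controller) {
    // pull() is only called when the consumer has drained the queue, so
    // returning this promise is what propagates back pressure toward the
    // socket (and, via unread kernel buffers, TCP congestion control).
    return readNextChunkFromSocket().then(chunk => {
      if (chunk === null) controller.close();
      else controller.enqueue(chunk);
    });
  }
});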


> - WebSockets: use something like bufferedAmount but that can be notified
> to the users, so the users can adapt the flow; currently bufferedAmount is
> not extremely useful since you might need to do some polling to check it.
>
>
For WebSockets, there are two approaches to Streams integration: a
ReadableStream of messages, or a ReadableStream of ArrayBuffers representing
chunks of the payload of a message.

We need to discuss that first.


>
> On 11/09/2014 08:36, Takeshi Yoshino wrote:
>
>  On Thu, Sep 11, 2014 at 8:47 AM, Aymeric Vitte 
> wrote:
>
>>  Does your experimentation pipe the XHR stream to MSE? Obviously that
>> should be the target for yt, this would be a first real application of the
>> Streams API.
>>
>
>  It's not yet updated to use the new Streams. Here's our layout test for
> MSE. responseType = 'legacystream' makes the XHR return the old version of
> the stream.
>
>
> https://code.google.com/p/chromium/codesearch#chromium/src/third_party/WebKit/LayoutTests/http/tests/media/media-source/mediasource-append-legacystream.html&q=createMediaXHR&sq=package:chromium&type=cs&l=12
>
>  You can find the following call in the file.
>
>  sourceBuffer.appendStream(xhr.response);
>
>
>>
>> Because the Streams API does not define how to apply back pressure to
>> XHR, but does define how to apply back pressure between XHR and MSE.
>>
>> Probably the spec should handle on a per-case basis what the behavior
>> should be in terms of back pressure for things coming from outside of the
>> browser (xhr, websockets, webrtc, etc., not specified as far as I know)
>> and for things going on inside the browser (already specified)
>>
>> On 08/09/2014 06:58, Takeshi Yoshino wrote:
>>
>>  On Thu, Sep 4, 2014 at 7:02 AM, Aymeric Vitte 
>> wrote:
>>
>>> The fact is that most of the W3C groups impacted by streams (File,
>>> indexedDB, MSE, WebRTC, WebCrypto, Workers, XHR, WebSockets, Media Stream,
>>> etc, I must forget a lot here) seem not to care a lot about it and maybe
>>> just expect streams to land in the right place in the APIs when they are
>>> available, by some unknown magic.
>>>
>>> I still think that the effort should start from now for all the APIs (as
>>> well as the implementation inside browsers, which apparently has started
>>> for Chrome, but Chrome was supposed to have started some implementation of
>>> the previous Streams APIs, so it's not very clear), and that it should be
>>> very clearly synchronized, disregarding vague assumptions from the groups
>>> about low/high level and Vx releases, eluding the issue.
>>
>>
>>  Chrome has an experimental implementation [1] of the new Streams API
>> [2] integrated with XHR [3] behind a flag.
>>
>>  We receive data from the browser process over IPC (both network and
>> blob case). The size of data chunks and arrival timing depend on various
>> factors. The received chunks are passed to the XMLHttpRequest class on the
>> same thread as JavaScript runs. We create a new instance of ReadableStream
>> [4] on arrival of the first chunk. On every chunk arrival, we create an
>> ArrayBuffer from the chunk and then call [[enqueue]](chunk) [5] equivalent
>> C++ function to put it into the ReadableStream.
>>
>>  The ReadableStream is available from the "response" attribute in the
>> LOADING and DONE state (if no error). The chunks pushed to the
>> ReadableStream become available for read immediately.
>>
>>  If any problem occurs while loading data from the network/blob, we call
>> the [[error]](e) [6] equivalent C++ function with an exception as defined
>> in the XHR spec for sync XHR.

Re: File API: reading a Blob

2014-09-10 Thread Takeshi Yoshino
On Thu, Sep 11, 2014 at 8:47 AM, Aymeric Vitte 
wrote:

>  Does your experimentation pipe the XHR stream to MSE? Obviously that
> should be the target for yt, this would be a first real application of the
> Streams API.
>

It's not yet updated to use the new Streams. Here's our layout test for
MSE. responseType = 'legacystream' makes the XHR return the old version of
the stream.

https://code.google.com/p/chromium/codesearch#chromium/src/third_party/WebKit/LayoutTests/http/tests/media/media-source/mediasource-append-legacystream.html&q=createMediaXHR&sq=package:chromium&type=cs&l=12

You can find the following call in the file.

sourceBuffer.appendStream(xhr.response);


>
> Because the Streams API does not define how to apply back pressure to XHR,
> but does define how to apply back pressure between XHR and MSE.
>
> Probably the spec should handle on a per-case basis what the behavior
> should be in terms of back pressure for things coming from outside of the
> browser (xhr, websockets, webrtc, etc., not specified as far as I know) and
> for things going on inside the browser (already specified)
>
> On 08/09/2014 06:58, Takeshi Yoshino wrote:
>
>  On Thu, Sep 4, 2014 at 7:02 AM, Aymeric Vitte 
> wrote:
>
>> The fact is that most of the W3C groups impacted by streams (File,
>> indexedDB, MSE, WebRTC, WebCrypto, Workers, XHR, WebSockets, Media Stream,
>> etc, I must forget a lot here) seem not to care a lot about it and maybe
>> just expect streams to land in the right place in the APIs when they are
>> available, by some unknown magic.
>>
>> I still think that the effort should start from now for all the APIs (as
>> well as the implementation inside browsers, which apparently has started
>> for Chrome, but Chrome was supposed to have started some implementation of
>> the previous Streams APIs, so it's not very clear), and that it should be
>> very clearly synchronized, disregarding vague assumptions from the groups
>> about low/high level and Vx releases, eluding the issue.
>
>
>  Chrome has an experimental implementation [1] of the new Streams API [2]
> integrated with XHR [3] behind a flag.
>
>  We receive data from the browser process over IPC (both network and blob
> case). The size of data chunks and arrival timing depend on various
> factors. The received chunks are passed to the XMLHttpRequest class on the
> same thread as JavaScript runs. We create a new instance of ReadableStream
> [4] on arrival of the first chunk. On every chunk arrival, we create an
> ArrayBuffer from the chunk and then call [[enqueue]](chunk) [5] equivalent
> C++ function to put it into the ReadableStream.
>
>  The ReadableStream is available from the "response" attribute in the
> LOADING and DONE state (if no error). The chunks pushed to the
> ReadableStream become available for read immediately.
>
>  If any problem occurs while loading data from the network/blob, we call
> the [[error]](e) [6] equivalent C++ function with an exception as defined
> in the XHR spec for sync XHR.
>
>  Currently, XMLHttpRequest doesn't exert any back pressure. We plan to do
> something so that we don't read too much data from disk/network. It might
> be worth specifying something about flow control in the abstract
> read-from-blob/network operation at the standard level.
>
>  [1]
> https://code.google.com/p/chromium/codesearch#chromium/src/third_party/WebKit/LayoutTests/http/tests/xmlhttprequest/response-stream.html
>  [2] https://github.com/whatwg/streams
> <https://github.com/whatwg/streams#readablestream>
> [3] https://github.com/tyoshino/streams_integration/
>  [4] https://github.com/whatwg/streams#readablestream
> [5] https://github.com/whatwg/streams#enqueuechunk
>  [6] https://github.com/whatwg/streams#errore
>
>
>
> --
> Peersm : http://www.peersm.com
> torrent-live: https://github.com/Ayms/torrent-live
> node-Tor : https://www.github.com/Ayms/node-Tor
> GitHub : https://www.github.com/Ayms
>
>


Re: File API: reading a Blob

2014-09-07 Thread Takeshi Yoshino
On Thu, Sep 4, 2014 at 7:02 AM, Aymeric Vitte 
wrote:

> The fact is that most of the W3C groups impacted by streams (File,
> indexedDB, MSE, WebRTC, WebCrypto, Workers, XHR, WebSockets, Media Stream,
> etc, I must forget a lot here) seem not to care a lot about it and maybe
> just expect streams to land in the right place in the APIs when they are
> available, by some unknown magic.
>
> I still think that the effort should start from now for all the APIs (as
> well as the implementation inside browsers, which apparently has started
> for Chrome, but Chrome was supposed to have started some implementation of
> the previous Streams APIs, so it's not very clear), and that it should be
> very clearly synchronized, disregarding vague assumptions from the groups
> about low/high level and Vx releases, eluding the issue.


Chrome has an experimental implementation [1] of the new Streams API [2]
integrated with XHR [3] behind a flag.

We receive data from the browser process over IPC (both network and blob
case). The size of data chunks and arrival timing depend on various
factors. The received chunks are passed to the XMLHttpRequest class on the
same thread as JavaScript runs. We create a new instance of ReadableStream
[4] on arrival of the first chunk. On every chunk arrival, we create an
ArrayBuffer from the chunk and then call [[enqueue]](chunk) [5] equivalent
C++ function to put it into the ReadableStream.

The ReadableStream is available from the "response" attribute in the
LOADING and DONE state (if no error). The chunks pushed to the
ReadableStream become available for read immediately.

If any problem occurs while loading data from the network/blob, we call the
[[error]](e) [6] equivalent C++ function with an exception as defined in the
XHR spec for sync XHR.

Currently, XMLHttpRequest doesn't exert any back pressure. We plan to do
something so that we don't read too much data from disk/network. It might be
worth specifying something about flow control in the abstract
read-from-blob/network operation at the standard level.

[1]
https://code.google.com/p/chromium/codesearch#chromium/src/third_party/WebKit/LayoutTests/http/tests/xmlhttprequest/response-stream.html
[2] https://github.com/whatwg/streams

[3] https://github.com/tyoshino/streams_integration/
[4] https://github.com/whatwg/streams#readablestream
[5] https://github.com/whatwg/streams#enqueuechunk
[6] https://github.com/whatwg/streams#errore
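A rough JS rendering of that flow, in today's vocabulary (the C++ code calls
the spec-internal equivalents directly; ipc is a made-up stand-in for the
browser-process message channel):

let controller;
const responseStream = new ReadableStream({
  start(c) { controller = c; } // in Blink, created on arrival of the first chunk
});
ipc.onChunk = bytes => controller.enqueue(bytes); // the [[enqueue]](chunk) step
ipc.onDone  = ()    => controller.close();
ipc.onError = e     => controller.error(e);       // the [[error]](e) step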


Re: [Streams API] Add load-using-streams functionality to XHR or only to Fetch API?

2014-08-14 Thread Takeshi Yoshino
On Thu, Aug 14, 2014 at 4:29 PM, Anne van Kesteren  wrote:

> On Thu, Aug 14, 2014 at 9:19 AM, Takeshi Yoshino 
> wrote:
> > We'd like to continue prototyping in Blink anyway to provide feedback on
> > the spec from a browser developer's point of view, but if we're sure that
> > we won't add it to XHR, we want to stop the work on XHR (both spec and
> > impl) at some early stage after figuring out the very basic issues and
> > focus on Fetch.
>
> I recommend putting all your energy on Fetch and Streams then. We can
> still add features to XMLHttpRequest if there's compelling reasons
> (e.g. we hit deployment issues with fetch()), but we should not put
> our focus there.
>

Thanks. I'd like to hear opinions from others too.



BTW, Anne, as the fetch() is planned to be available in non-ServiceWorker
scopes, could you please do either of:
a) discuss issues about fetch() at public-webapps@
b) add a link to https://github.com/slightlyoff/ServiceWorker/labels/fetch
in http://fetch.spec.whatwg.org/ as well as the link to public-webapps@

I guess you don't want to do (a). (b) is fine. Then, readers of
http://fetch.spec.whatwg.org/ can easily learn that discussions about
fetch() are happening in the ServiceWorker issue tracker, even if they are
not familiar with ServiceWorker, and join in if they want.


Re: [Streams API] Add load-using-streams functionality to XHR or only to Fetch API?

2014-08-14 Thread Takeshi Yoshino
The Streams API itself still has many issues, right.

We'd like to continue prototyping in Blink anyway to provide feedback on the
spec from a browser developer's point of view, but if we're sure that we
won't add it to XHR, we want to stop the work on XHR (both spec and impl) at
some early stage, after figuring out the very basic issues, and focus on
Fetch.

Takeshi


On Thu, Aug 14, 2014 at 4:03 PM, Anne van Kesteren  wrote:

> On Thu, Aug 14, 2014 at 6:20 AM, Takeshi Yoshino 
> wrote:
> > We're implementing Streams API and integration of it and XHR
> > experimentally in Blink.
>
> I think the question is not so much how far along XMLHttpRequest and
> fetch() are, but how far along streams is. E.g. a chat I had yesterday
> http://krijnhoetmer.nl/irc-logs/whatwg/20140813#l-592 suggests there's
> still many things to sort through.
>
>
> --
> http://annevankesteren.nl/
>


[Streams API] Add load-using-streams functionality to XHR or only to Fetch API?

2014-08-13 Thread Takeshi Yoshino
We're implementing the Streams API [1] and its integration with XHR [2]
experimentally in Blink [3][4].

Anne suggested on the Blink mailing list (blink-dev) [8] that we look into
adding new fetch-layer features to the Fetch API [5][6] rather than to XHR.
There's a concern about whether we can ship the Fetch API to
non-ServiceWorker scopes soon [7].

I'd like to hear your opinions on these issues:
- add the feature only to Fetch, or to both XHR and Fetch?
- how should the integration be done?

Thanks


[1] https://github.com/whatwg/streams
[2] http://xhr.spec.whatwg.org/
[3]
https://code.google.com/p/chromium/codesearch#chromium/src/third_party/WebKit/Source/core/streams/
[4] https://github.com/tyoshino/streams_integration/blob/master/README.md
[5] http://fetch.spec.whatwg.org/#fetch-method
[6] http://fetch.spec.whatwg.org/#body-stream-concept
[7]
https://groups.google.com/a/chromium.org/d/msg/blink-dev/GoFbe0yLO50/SXpYMdYn0A4J
[8]
https://groups.google.com/a/chromium.org/d/msg/blink-dev/GoFbe0yLO50/vviGEQ5Z-KoJ


Re: XMLHttpRequest: uppercasing method names

2014-08-12 Thread Takeshi Yoshino
On Wed, Aug 13, 2014 at 1:05 AM, Anne van Kesteren  wrote:

> On Tue, Aug 12, 2014 at 5:15 PM, Brian Kardell  wrote:
> > fetch should explain magic in XMLHttpRequest et all.. I don't see how it
> > could differ in the way you are suggesting and match
>
> Well fetch() will never be able to explain synchronous fetches. But in
> general I agree that it would be sad if XMLHttpRequest could do more
> than fetch(). Per bz it seems we should just align fetch() with
> XMLHttpRequest and call it a day.
>
>
wfm

It seems Safari is also going to be compliant with the latest spec.
https://bugs.webkit.org/show_bug.cgi?id=134264


>
> --
> http://annevankesteren.nl/
>


Re: XMLHttpRequest: uppercasing method names

2014-08-12 Thread Takeshi Yoshino
On Wed, Aug 13, 2014 at 12:15 AM, Brian Kardell  wrote:

>
> On Aug 12, 2014 11:12 AM, "Takeshi Yoshino"  wrote:
> >
> > On Tue, Aug 12, 2014 at 10:55 PM, Anne van Kesteren 
> wrote:
> >>
> >> On Tue, Aug 12, 2014 at 3:37 PM, Brian Kardell 
> wrote:
> >> > If there's no really good reason to change it, least change is better
> IMO
> >>
> >> All I can think of is that it would be somewhat more consistent to not
> >> have this list and always uppercase,
> >
> >
> > Ideally
> >
> >>
> >> but yeah, I guess I'll just align
> >> fetch() with XMLHttpRequest.
> >
> >
> > Isn't it an option that we use stricter rule (all uppercase) for the
> newly-introduced fetch() API but keep the list for XHR? Aligning XHR and
> fetch() is basically good but making fetch() inherit the whitelist is a
> little sad.
> >
> >
> >
> > Some archaeology:
> >
> > - Blink recently reduced the whitelist to conform to the latest WHATWG
> XHR spec.
> http://src.chromium.org/viewvc/blink?view=revision&revision=176592
> > - Before that, used this list ported to WebKit from Firefox's behavior
> http://trac.webkit.org/changeset/13652/trunk/WebCore/xml/xmlhttprequest.cpp
> > - Anne introduced the initial version of the part of the spec in Aug
> 2006
> http://dev.w3.org/cvsweb/2006/webapi/XMLHttpRequest/Overview.html.diff?r1=1.12;r2=1.13;f=h
> > -- http://lists.w3.org/Archives/Public/public-webapi/2006Apr/0124.html
> > -- http://lists.w3.org/Archives/Public/public-webapi/2006Apr/0126.html
> >
>
> fetch should explain magic in XMLHttpRequest et all.. I don't see how it
> could differ in the way you are suggesting and match
>
Which do you mean by fetch: http://fetch.spec.whatwg.org/#dom-global-fetch
or http://fetch.spec.whatwg.org/#concept-fetch?

fetch() and XHR share the fetch algorithm but have different bootstrapping
and hooks.


Re: XMLHttpRequest: uppercasing method names

2014-08-12 Thread Takeshi Yoshino
On Tue, Aug 12, 2014 at 10:55 PM, Anne van Kesteren 
wrote:

> On Tue, Aug 12, 2014 at 3:37 PM, Brian Kardell  wrote:
> > If there's no really good reason to change it, least change is better IMO
>
> All I can think of is that it would be somewhat more consistent to not
> have this list and always uppercase,


Ideally


> but yeah, I guess I'll just align
> fetch() with XMLHttpRequest.
>

Isn't it an option to use the stricter rule (always uppercase) for the
newly-introduced fetch() API but keep the list for XHR? Aligning XHR and
fetch() is basically good, but making fetch() inherit the whitelist is a
little sad.
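For concreteness, a sketch of the whitelist behavior under discussion (this
mirrors the normalization step in the WHATWG XHR spec):

function normalizeMethod(method) {
  // Only the whitelisted methods are case-normalized; anything else is
  // sent exactly as the author wrote it.
  var KNOWN = ['DELETE', 'GET', 'HEAD', 'OPTIONS', 'POST', 'PUT'];
  var upper = method.toUpperCase();
  return KNOWN.indexOf(upper) !== -1 ? upper : method;
}

normalizeMethod('get');   // "GET"
normalizeMethod('patch'); // "patch" (not in the list, left as-is)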



Some archaeology:

- Blink recently reduced the whitelist to conform to the latest WHATWG XHR
spec. http://src.chromium.org/viewvc/blink?view=revision&revision=176592
- Before that, used this list ported to WebKit from Firefox's behavior
http://trac.webkit.org/changeset/13652/trunk/WebCore/xml/xmlhttprequest.cpp
- Anne introduced the initial version of the part of the spec in Aug 2006
http://dev.w3.org/cvsweb/2006/webapi/XMLHttpRequest/Overview.html.diff?r1=1.12;r2=1.13;f=h
-- http://lists.w3.org/Archives/Public/public-webapi/2006Apr/0124.html
-- http://lists.w3.org/Archives/Public/public-webapi/2006Apr/0126.html


Re: Fetch API

2014-06-03 Thread Takeshi Yoshino
For XHR.send(), we've finally chosen to accept only ArrayBufferView.
http://lists.w3.org/Archives/Public/public-webapps/2012AprJun/0141.html

Do we want to do the same for the FetchBody body of RequestInit?


Re: Fetch API

2014-06-03 Thread Takeshi Yoshino
On Mon, Jun 2, 2014 at 6:59 PM, Anne van Kesteren  wrote:

> On Thu, May 29, 2014 at 4:25 PM, Takeshi Yoshino 
> wrote:
> > http://fetch.spec.whatwg.org/#dom-request
> > Add steps to set client and context?
>
> That happens as part of the "restricted copy". However, that might
> still change around a bit.


Ah, ok.


> > http://fetch.spec.whatwg.org/#cors-preflight-fetch-0
> > Add steps to set client and context?
>
> That's an internal algorithm never directly used. You can only get
> there from http://fetch.spec.whatwg.org/#concept-fetch and that can
> only be reached through an API such as fetch().


Right. But the preflight's client and context are not initialized before
invoking HTTP fetch (http://fetch.spec.whatwg.org/#concept-http-fetch), and
client is referenced in HTTP fetch.

BTW, "handle a fetch" is a dead link:
https://slightlyoff.github.io/ServiceWorker/spec/service_worker/handle-a-fetch


> > The promise is rejected with a TypeError which seems inconsistent with
> XHR.
>  > Is this intentional?
>
> Yes. I wanted to stick to JavaScript exceptions. However, I suspect at
> some point once we have FormData integration and such there might be
> quite a bit of dependencies on DOM in general, so maybe that is moot.
>

Got it. Thanks.


Re: Fetch API

2014-05-29 Thread Takeshi Yoshino
http://fetch.spec.whatwg.org/#dom-request
Add steps to set client and context?

http://fetch.spec.whatwg.org/#cors-preflight-fetch-0
Add steps to set client and context?

http://fetch.spec.whatwg.org/#concept-legacy-fetch
http://fetch.spec.whatwg.org/#concept-legacy-potentially-cors-enabled-fetch
Steps to set url, client and context are missing here too. But is this
section not worth updating anymore?

Is the termination reason left unhandled intentionally (it's only supposed
to be used by XHR's functionality, and nothing would be introduced for the
Fetch API)?

The promise is rejected with a TypeError, which seems inconsistent with XHR.
Is this intentional?


Re: [promises] Guidance on the usage of promises for API developers

2014-01-29 Thread Takeshi Yoshino
The new text looks nice. Two comments...

- XMLHttpRequest.send() is a good example, but as it goes through multiple
states and invokes onreadystatechange multiple times, I'd prefer not just
quoting "onreadystatechange" but pointing out the one-and-done aspect of XHR
by using a phrase like "which triggers onreadystatechange with a readyState
of DONE".
- Could you please refrain from pointing to the Streams API, or list both in
the doc, until we finish the discussion we're now having?


Re: [promises] Guidance on the usage of promises for API developers

2014-01-14 Thread Takeshi Yoshino
To address jgraham's point, let's go back to just using "Run ..." style
phrasing.

I agree that we shouldn't mislead readers into thinking that task queuing is
always necessary. Moreover, I think an algorithm may be allowed to run
either synchronously or asynchronously in some cases. Maybe we could add
"possibly" to mean it's not forced to run later in such a case?

How about introducing markup such as "[Async]" together with a note giving a
detailed explanation (no need to queue a task, etc.)? Readers would be
guided to read the note to understand what the markers mean, and would
actually be more likely to do so compared to just writing "do X
asynchronously", I imagine. Not sure.

1. sync_work_A.
2. sync_work_B.
3. Let p be a newly-created promise.
4. [PossiblyAsync] possibly_async_work_S.
5. [PossiblyAsync] possibly_async_work_T.
6. [Async] async_work_X.
7. [Async] async_work_Y.
8. Return p.

Right after returning from this method, we observe the side effects of the
sync_work steps, and may observe those of the possibly_async_work steps but
not those of the async_work steps.

If too noisy, modularize as Boris said (
http://lists.w3.org/Archives/Public/public-webapps/2014JanMar/0066.html).


Re: [promises] Guidance on the usage of promises for API developers

2014-01-14 Thread Takeshi Yoshino
Nice writing! Both the shorthand phrases and guidance look very useful for
writing Promise based specs.

I have only one comment on this section.
https://github.com/domenic/promises-unwrapping/blob/master/docs/writing-specifications-with-promises.md#maintain-a-normal-control-flow

I agree with your point in the first paragraph, but the suggested way looks
rather confusing. The asynchronous operation abstracted by Promises is well
described, but selling this convention (writing what usually happens later
before the return, without any annotation) everywhere sounds like too much,
I think. It's good to move "Return p" to the end of the steps. But how about
also keeping the "do blah asynchronously" text?

1. Let p be a newly-created promise.
2. These steps will be run asynchronously.
  1. If blah, reject p with undefined.
  2. If blah, resolve p with foobar.
3. Return p.

Thanks


Re: Request for feedback: Streams API

2013-12-17 Thread Takeshi Yoshino
Hi,

Implementing back pressure is important for handling large data stably. A
Stream needs the consumer code to notify it when the consumer can accept
more data. Triggering this signal by method invocation is one of the
possible options, and Promises fit this well.

To address smaug___'s concern, we could choose not to make delivery of data
correspond 1-to-1 with pull operations: deliver data using callback
invocation, while allowing the consuming code to say how much it wants to
pull via pullAmount. Just as WritableByteStream allows write() to ignore
back pressure, we'd allow ReadableByteStream to push available data to the
consuming code ignoring back pressure. As long as the producer inside
WritableByteStream does its work respecting pullAmount, the back pressure
mechanism should still work well.

One concern Domenic raised in the chat would remain unaddressed: an issue
with the timing of setting the callback. If we take the "Promise read()"
approach, arrived data never vanishes into the void. I understand this
error-proneness.

Takeshi


On Mon, Dec 16, 2013 at 9:21 PM, Olli Pettay wrote:

> On 12/04/2013 06:27 PM, Feras Moussa wrote:
>
>> The editors of the Streams API have reached a milestone where we feel
>> many of the major issues that have been identified thus far are now
>> resolved and
>> incorporated in the editors draft.
>>
>> The editors draft [1] has been heavily updated and reviewed the past few
>> weeks to address all concerns raised, including:
>> 1. Separation into two distinct types -ReadableByteStream and
>> WritableByteStream
>> 2. Explicit support for back pressure management
>> 3. Improvements to help with pipe( ) and flow-control management
>> 4. Updated spec text and diagrams for further clarifications
>>
>> There are still a set of bugs being tracked in bugzilla. We would like
>> others to please review the updated proposal, and provide any feedback they
>> may
>> have (or file bugs).
>>
>> Thanks.
>> -Feras
>>
>>
>> [1] https://dvcs.w3.org/hg/streams-api/raw-file/tip/Overview.htm
>>
>
>
> So per https://www.w3.org/Bugs/Public/show_bug.cgi?id=24054
> it is not clear to me why the API is heavily Promise based.
> Event listeners tend to work better with stream like APIs.
>
>
> (The fact the Promises are hip atm is not a reason to use them for
> everything ;) )
>
> -Olli
>
>
>
>


Re: Comments on version web-apps specs from 2013-10-31

2013-12-09 Thread Takeshi Yoshino
Thanks for the feedback.

On Thu, Nov 21, 2013 at 4:56 PM, Feras Moussa wrote:

> Hi Francois,
> Thanks for the feedback.
>
>
> > From: francois-xavier.kowal...@hp.com
> > To: public-webapps@w3.org
> > Date: Wed, 20 Nov 2013 20:30:47 +
> > Subject: Comments on version web-apps specs from 2013-10-31
>
> >
> > Hello
> >
> > I have a few comments on:
> https://dvcs.w3.org/hg/streams-api/raw-file/tip/Overview.htm from
> 2013-10-31. Apologies if it is not the latest version: it took me some
> time to figure out which was the right forum to send these comments to.
> >
> > Section 2.1.3:
> > 1. Close(): For writeable streams, the close() method does not provide a
> data-completion hook (all-data-flushed-to-destination), unlike the close
> method that resolved the Promise returned by read().
> The version of the spec you linked doesn't differentiate
> writeable/readable streams, but is something we are considering adding in a
> future version. I don't quite understand what you're referring to here.
> close is independent of future reads - you can call a read after close, and
> once EOF is reached, the promise is resolved and you get a result with
> eof=true.
>
>
The writeClose() we have now still returns void.

In the current API, fulfillment of the writePromise doesn't necessarily mean
acknowledgement (that the written data has been successfully processed), but
an implementor may give it such a meaning in one of the following two ways:
- fulfill the writePromise when the written data is successfully consumed
- make writeClose() return a Promise, say closePromise; the writePromise may
be fulfilled before processing, but the closePromise is fulfilled only when
all the data written has been successfully consumed

I think it makes sense to change writeClose()'s return type to Promise so
that derived classes may choose to use it to notify the writer of success.
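A sketch of the difference under that change (hypothetical API shape):

stream.write(chunk1); // writePromise: may fulfill before the sink consumes chunk1
stream.write(chunk2);
stream.writeClose().then(function () {
  // fulfilled only once everything written has been successfully consumed
  console.log('all data flushed to the destination');
});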


> > 2. Pipe(): the readable Stream (the one that provides the pipe() method)
> is neutered in case of I/O error, but the state change of the writeable
> stream is not indicated. What if the write operation failed?
> Are you asking what the chain of error propagation is when multiple
> streams are chained?
>
>
Error handling is not yet well documented, but I plan to make a write
operation failure result in rejection of the pipePromise.


> >
> > Section 3.2:
> > 1. Shouldn't a FileWrite also be capable of consuming a Stream? (Like
> XHR-pipe-to-FileWriter)
> Yes, I think so - this is a use case we can add.
>
> Added to the possible consumer list.


> > 2. Shouldn't an XMLHttpRequest also be capable of consuming a Stream?
> (eg. chunked PUT/POST)?
> Section 5.4 covers this - support for posting a Stream. That said, this is
> a section I think will need to be flushed out more.
>
>
Added in a recent commit.


> >
> > br.
> >
> > —FiX
> >
> > PS: My request to join this group is pending, so please Cc me in any
> reply/comment you may have until membership is fixed.
> >
> >
>


Re: Request for feedback: Streams API

2013-12-09 Thread Takeshi Yoshino
Thanks. ByteStream is already partially implemented in Blink/Chromium. As
one of the implementors, I'll continue prototyping and share issues here.

I haven't got time for it yet, but writing a polyfill might also be a good
thing to do.

Takeshi


On Thu, Dec 5, 2013 at 2:38 AM, Kenneth Russell  wrote:

> Looks great! Seems very well thought through.
>
> The API seems large enough that it would be worth prototyping it and
> writing test applications to make sure it addresses key use cases
> before finalizing the spec.
>
> -Ken
>
>
> On Wed, Dec 4, 2013 at 8:27 AM, Feras Moussa 
> wrote:
> > The editors of the Streams API have reached a milestone where we feel
> many
> > of the major issues that have been identified thus far are now resolved
> and
> > incorporated in the editors draft.
> >
> > The editors draft [1] has been heavily updated and reviewed the past few
> > weeks to address all concerns raised, including:
> > 1. Separation into two distinct types -ReadableByteStream and
> > WritableByteStream
> > 2. Explicit support for back pressure management
> > 3. Improvements to help with pipe( ) and flow-control management
> > 4. Updated spec text and diagrams for further clarifications
> >
> > There are still a set of bugs being tracked in bugzilla. We would like
> > others to please review the updated proposal, and provide any feedback
> they
> > may have (or file bugs).
> >
> > Thanks.
> > -Feras
> >
> >
> > [1] https://dvcs.w3.org/hg/streams-api/raw-file/tip/Overview.htm
>
>


Re: CfC: publish Proposed Recommendation of Progress Events; deadline November 25

2013-11-18 Thread Takeshi Yoshino
Two minor comments:
- add semicolons to the lines of the example code in the introduction section?
- in the 2nd paragraph of the conformance section, quote "must"?


Re: Thoughts behind the Streams API ED

2013-11-12 Thread Takeshi Yoshino
On Tue, Nov 12, 2013 at 5:20 PM, Aymeric Vitte wrote:

>  Takeshi,
>
> See discussion here too: https://github.com/whatwg/streams/issues/33
>
> The problem with stop again is that I need to handle myself the clone
> operations, the advantage of stop-eof is:
>
> - clone the operation
> - close it
> - restart from the clone
>
> And as I mentioned before this would work for any operation that has
> unresolved bytes (TextDecoder, etc) without the need of modifying the
> operation API for explicit clones or options.
>

OK, I understand your needs.

As explained in the previous mail, if we decide to take in your suggestion,
I'd implement your stop-eof as a more generic in-band control signal; in
other words, change the draft to propose a message stream rather than a byte
stream.

I estimate that this feature would complicate the implementation
considerably, so I want to hear more support and justification.

For now, I filed a bug to track the discussion:
https://www.w3.org/Bugs/Public/show_bug.cgi?id=23799


Re: Thoughts behind the Streams API ED

2013-11-12 Thread Takeshi Yoshino
On Tue, Nov 12, 2013 at 5:23 PM, Aymeric Vitte wrote:

>  No, see my previous reply, unless I am proven incorrect, I still think we
> should have:
>
> - pause/unpause
> - stop/(implicit resume)
>
> Regards,
>
> Aymeric
>
> On 11/11/2013 22:06, Takeshi Yoshino wrote:
>
>  Aymeric,
>
>  Re: pause()/resume(),
>
>
Sorry, that was a typo. I meant pause()/unpause().


>
>  I've moved flow control functionality for non-exact read() method to a
> separate attribute pullAmount [1] [2]. pullAmount limits the max size of
> data to be read by read() method. Currently the pipe() method is specified
> not to respect pullAmount but we can choose to have it to respect
> pullAmount, i.e. pausing pipe()'s transfer when pullAmount is set to 0.
> Does this work for your use case?
>
>  [1] https://www.w3.org/Bugs/Public/show_bug.cgi?id=23790
> [2] https://dvcs.w3.org/hg/streams-api/rev/8a7f99536516
>
>
> --
> Peersm : http://www.peersm.com
> node-Tor : https://www.github.com/Ayms/node-Tor
> GitHub : https://www.github.com/Ayms
>
>


Re: Thoughts behind the Streams API ED

2013-11-11 Thread Takeshi Yoshino
Aymeric,

Re: pause()/resume(),

I've moved the flow control functionality for the non-exact read() method to
a separate attribute, pullAmount [1] [2]. pullAmount limits the max size of
data to be read by the read() method. Currently the pipe() method is
specified not to respect pullAmount, but we can choose to have it respect
pullAmount, i.e., pausing pipe()'s transfer when pullAmount is set to 0.
Does this work for your use case?

[1] https://www.w3.org/Bugs/Public/show_bug.cgi?id=23790
[2] https://dvcs.w3.org/hg/streams-api/rev/8a7f99536516


Re: Thoughts behind the Streams API ED

2013-11-08 Thread Takeshi Yoshino
On Fri, Nov 8, 2013 at 8:54 PM, Aymeric Vitte wrote:

> I would expect "Рос" (stop, keep 0xd1 for the next data) and "сия"
>
> It can be seen a bit differently indeed; while with crypto you expect the
> finalization of the operation since the beginning (but only by computing
> the latest bytes), here you cannot expect the string since the beginning,
> of course.
>
> It just depends how the "Operation" (here TextDecoder) handles stop but I
> find it very similar; TextDecoder closes the operation with the bytes it
> has and "clones" its state (i.e. does nothing here except clearing resolved
> bytes and keeping unresolved ones for data to come).
>

I'd say more generally that stop() is a kind of in-band control signal that
is inserted between elements of the stream and is distinguishable from the
elements. As you said, interpretation of the stop() symbol depends on what
the destination is.

One thing I'm still not sure about: I think you could just add a stop()
equivalent method to the destination, and
- pipe() data until the point where you would have called stop()
- call the stop() equivalent on e.g. the hash
- restart pipe()

At least our spec allows for this. Of course, it's convenient that a Stream
can carry such a signal, but there's a trade-off between the convenience
and API size. This is similar to the decision whether to include abort() on
WritableByteStream or not.

Taken to the extreme, abort(), close() and stop() could be merged into one
method (unless the abort() method has functionality to abandon already
written data). They're all signal-inserting methods.

close() -> signal(FIN)
stop(info) -> signal(CONTROL, info)
abort(error) -> signal(ABORT, error)

and the signal is packed and inserted into the stream's internal buffer.
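
As a rough sketch only (plain JS with hypothetical names, not spec text),
the internal buffer would then interleave data and signal entries, and the
consumer would branch on the kind of each entry:

// Minimal sketch of a buffer carrying in-band signals (hypothetical).
class SignalingBuffer {
  constructor() { this.queue = []; }
  write(bytes) { this.queue.push({ kind: 'data', bytes: bytes }); }
  signal(type, info) { this.queue.push({ kind: type, info: info }); }
  read() { return this.queue.shift(); }  // consumer checks .kind per entry
}

var buf = new SignalingBuffer();
buf.write(new Uint8Array([1, 2, 3]));
buf.signal('control', { reason: 'stop' });  // stop(info) equivalent
buf.write(new Uint8Array([4, 5]));
buf.signal('fin');                          // close() equivalent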


Re: Thoughts behind the Streams API ED

2013-11-08 Thread Takeshi Yoshino
Sorry. I cut the input at the wrong position.

textDecoderStream.write(arraybuffer of 0xd0 0xa0 0xd0 0xbe 0xd1 0x81 0xd1);
textDecoderStream.stop();
textDecoderStream.write(arraybuffer of 0x81 0xd0 0xb8 0xd1 0x8f)
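
For reference, today's TextDecoder in stream mode already keeps the
unresolved byte across exactly this boundary (a runnable sketch; the stop()
point corresponds to where the decoder holds its pending byte):

var decoder = new TextDecoder('utf-8');
var first = decoder.decode(
    new Uint8Array([0xd0, 0xa0, 0xd0, 0xbe, 0xd1, 0x81, 0xd1]),
    { stream: true });
// first === "Рос"; the dangling 0xd1 is buffered inside the decoder.
var second = decoder.decode(
    new Uint8Array([0x81, 0xd0, 0xb8, 0xd1, 0x8f]));
// second === "сия", completing "Россия".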


Re: Thoughts behind the Streams API ED

2013-11-08 Thread Takeshi Yoshino
On Fri, Nov 8, 2013 at 5:38 PM, Aymeric Vitte wrote:

>  Please see here https://github.com/whatwg/streams/issues/33, I realized
> that this would apply to operations like textDecoder too without the need
> of an explicit stream option, so that's no more WebCrypto only related.
>

Similar but a bit different?

For clarification, could you review the following?

textDecoderStream.write(arraybuffer of 0xd0 0xa0 0xd0 0xbe 0xd1);
textDecoderStream.stop();
textDecoderStream.write(arraybuffer of 0x81 0xd1 0x81 0xd0 0xb8 0xd1 0x8f)

This generates DOMString stream ["Рос", "сия"]. Right? Or you want to get
["Рос", "Россия"]?


Re: Thoughts behind the Streams API ED

2013-11-07 Thread Takeshi Yoshino
On Thu, Nov 7, 2013 at 6:27 PM, Aymeric Vitte wrote:

>
> Le 07/11/2013 10:21, Takeshi Yoshino a écrit :
>
>  On Thu, Nov 7, 2013 at 6:05 PM, Aymeric Vitte wrote:
>
>>  stop/resume:
>>
>> Indeed as I mentioned this is related to WebCrypto Issue22 but I don't
>> think this is a unique case. Issue22 was closed because of lack of
>> proposals to solve it, apparently I was the only one to care about it (but
>> I saw recently some other messages that seem to be related), and finally
>> this would involve a public clone method with associated security concerns.
>>
>> But with Streams it could be different, the application will internally
>> clone the state of the operation probably eliminating the security issues,
>> as simple as that.
>>
>> To describe simply the use case, let's take a progressive hash computing
>> 4 bytes by 4 bytes:
>>
>> incoming stream: ABCDE bytes
>> hash operation: process ABCD, keep E for the next computation
>> incoming stream: FGHI bytes + STOP-EOF
>> hash operation: process EFGH, process STOP-EOF: clone the state of the
>> hash, close the operation: digest hash with I
>>
>
>  So, here, partial hash for ABCDEFGH is output
>
>
> No, you get the digest for ABCDEFGHI and you get a cloned operation which
> will restart from ABCDEFGH
>
>
>
OK.


>
>
>>
>> resume:
>> incoming stream: JKLF
>> hash operation (clone): process IJKL, keep F for next computation
>> etc...
>>
>>
>  and if we close the stream here we'll get a hash for ABCDEFGHIJKLFPPP (P
> is padding). Right?
>
>
> If you close the stream here you get the digest for ABCDEFGHIJKLF
>
>
Does resume happen implicitly when new data comes in, without an explicit
method call such as resume()?


>
>
>
>>  So you do not restart the operation as if it was the first time it was
>> receiving data, you just continue it from the state it was when stop was
>> received.
>>
>> That's not so unusual to do this, it has been requested many times in
>> node.
>>
>
> --
> Peersm : http://www.peersm.com
> node-Tor : https://www.github.com/Ayms/node-Tor
> GitHub : https://www.github.com/Ayms
>
>


Re: Thoughts behind the Streams API ED

2013-11-07 Thread Takeshi Yoshino
On Thu, Nov 7, 2013 at 6:05 PM, Aymeric Vitte wrote:

>  stop/resume:
>
> Indeed as I mentioned this is related to WebCrypto Issue22 but I don't
> think this is a unique case. Issue22 was closed because of lack of
> proposals to solve it, apparently I was the only one to care about it (but
> I saw recently some other messages that seem to be related), and finally
> this would involve a public clone method with associated security concerns.
>
> But with Streams it could be different, the application will internally
> clone the state of the operation probably eliminating the security issues,
> as simple as that.
>
> To describe simply the use case, let's take a progressive hash computing 4
> bytes by 4 bytes:
>
> incoming stream: ABCDE bytes
> hash operation: process ABCD, keep E for the next computation
> incoming stream: FGHI bytes + STOP-EOF
> hash operation: process EFGH, process STOP-EOF: clone the state of the
> hash, close the operation: digest hash with I
>

So, here, partial hash for ABCDEFGH is output


>
> resume:
> incoming stream: JKLF
> hash operation (clone): process IJKL, keep F for next computation
> etc...
>
>
and if we close the stream here we'll get a hash for ABCDEFGHIJKLFPPP (P is
padding). Right?


>  So you do not restart the operation as if it was the first time it was
> receiving data, you just continue it from the state it was when stop was
> received.
>
> That's not so unusual to do this, it has been requested many times in node.
>
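
To make the flow above concrete, here is a toy sketch of the clone-on-stop
idea. The "digest" is a fake (it just mixes bytes) and the API is
hypothetical; only the state handling matters:

// Toy progressive digest illustrating stop/clone/resume (not a real hash).
class ToyDigest {
  constructor(blockSize) {
    this.blockSize = blockSize;
    this.state = 0;
    this.pending = [];  // unresolved bytes, like E or I above
  }
  write(bytes) {
    this.pending.push.apply(this.pending, bytes);
    while (this.pending.length >= this.blockSize) {
      var block = this.pending.splice(0, this.blockSize);
      for (var i = 0; i < block.length; i++) {
        this.state = (this.state * 31 + block[i]) >>> 0;  // fake hash round
      }
    }
  }
  clone() {  // what STOP-EOF would trigger internally
    var c = new ToyDigest(this.blockSize);
    c.state = this.state;
    c.pending = this.pending.slice();
    return c;
  }
  digest() {  // close: finalize over the pending bytes
    for (var i = 0; i < this.pending.length; i++) {
      this.state = (this.state * 31 + this.pending[i]) >>> 0;
    }
    this.pending = [];
    return this.state;
  }
}

var bytes = function (s) { return Array.from(s, c => c.charCodeAt(0)); };
var h = new ToyDigest(4);
h.write(bytes('ABCDE'));       // processes ABCD, keeps E
h.write(bytes('FGHI'));        // processes EFGH, keeps I
var resumed = h.clone();       // STOP-EOF: snapshot state plus pending I
var partial = h.digest();      // digest over ABCDEFGHI
resumed.write(bytes('JKLF'));  // processes IJKL, keeps F
var final_ = resumed.digest(); // digest over ABCDEFGHIJKLF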


Re: Thoughts behind the Streams API ED

2013-11-06 Thread Takeshi Yoshino
On Wed, Nov 6, 2013 at 7:33 PM, Aymeric Vitte wrote:

>  I have seen the different bugs too, some comments:
>
> - maybe I have missed some explaination or some obvious thing but I don't
> understand very well right now the difference/use between
> readable/writablebytestream and bytestream
>

ReadableByteStream and WritableByteStream define interfaces not only for
ByteStream but more generally for other APIs. For example, we recently
discussed how WebCrypto's encryption method should work with the Stream
concept, and one idea you showed was making WebCrypto.subtle return an
object (which I called a "filter") to which we can pipe data. By defining a
protocol for how to pass data to a consumer as the WritableByteStream
interface, we can reuse it later for defining the IDL for those filters.
Similarly, ReadableByteStream can provide a uniform protocol for how
data-producing APIs should communicate with consumers.

ByteStream is now a class inheriting both ReadableByteStream and
WritableByteStream (sorry, I forgot to include the inheritance info in the IDL).


> - pause/unpause: as far as I understand the whatwg spec does not recommend
> it but I don't understand the reasons. As I previously mentionned the idea
> is to INSERT a pause signal in the stream, you can not control the stream
> and therefore know when you are pausing it.
>
>
Maybe after decoupling the interface, pause/unpause are things to be added
to ByteStream? IIUC, pause prevents data from being read from a ByteStream,
and unpause removes the dam?


> - stop/resume: same, see my previous post, the difference is that the
> consuming API should clone the state of the operation and close the current
> operation as if eof was received, then restart from the clone on resume
>

Sorry that I haven't replied to yours yet.

Your post about those methods:
http://lists.w3.org/Archives/Public/public-webapps/2013OctDec/0343.html
WebCrypto ISSUE-22: http://www.w3.org/2012/webcrypto/track/issues/22

Maybe I still don't quite understand your ideas. Let me confirm.

stop() tells the consumer API implementing WritableByteStream that it
should behave as if it received EOF, and when resume() is called, it
restarts processing the data written between the stop() and resume() calls
as if the API were receiving data for the first time?

How should stop() work for ByteStream? ByteStream's read() method will
receive EOF at least once when all data written before the stop() call has
been read, and it keeps returning EOF until resume() tells the ByteStream
to restart outputting?

I've been feeling that your use case is very specific to WebCrypto. Saving
state and restoring it sounds more like a feature request for WebCrypto,
not for Stream.

But I'm a bit interested in what your stop()/resume() enables. With this
feature, ByteStream becomes a message stream, which is convenient for
handling WebSocket.


>  - pipe []/fork: I don't see why the fast stream should wait for the slow
> one, so maybe the stream is forked and pause can be used for the slow one
>

There could be apps that want to limit memory usage strictly. We think
there are two strategies fork() can take:
a) wait until the slowest substream consumes the data
b) grow the buffer so as not to block the fastest substream while keeping
data for the slowest

a) is useful for limiting memory usage; b) is more performance oriented.
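
A rough sketch of strategy a), assuming the promise-returning
read()/write() of the ED (a hypothetical helper, not spec text): the
slowest destination's pending write() naturally holds back the next read(),
so memory stays bounded by roughly one chunk.

async function forkWaitForSlowest(source, destinations) {
  for (;;) {
    const result = await source.read();
    if (result.data) {
      await Promise.all(destinations.map(d => d.write(result.data)));
    }
    if (result.eof) break;  // .eof may arrive together with the final chunk
  }
  destinations.forEach(d => d.close());
}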


>
> - flow control: could it be possible to advertise a maximum bandwidth rate
> for a stream?
>

It's currently communicated as a window, similar to TCP. A consumer can
adjust the size argument of read() and the frequency of read() calls to
match its processing speed.


Re: Thoughts behind the Streams API ED

2013-11-05 Thread Takeshi Yoshino
FYI, I added a branch named "Preview version" into which suggestions are
incorporated aggressively to see how the API surface would change.
https://dvcs.w3.org/hg/streams-api/raw-file/tip/preview.html
Please take a look if you're interested.

For the stabler version edited after discussion, check the ED as usual.


Re: Thoughts behind the Streams API ED

2013-11-04 Thread Takeshi Yoshino
On Tue, Nov 5, 2013 at 2:11 AM, Takeshi Yoshino  wrote:
>
> Feedback on "Overlap" thread, especially Isaac's exhaustive list of
> considerations
>

Deleted the citation by mistake. [1] is Isaac's post.


> [1]
> http://lists.w3.org/Archives/Public/public-webapps/2013JulSep/0355.html
>


Thoughts behind the Streams API ED

2013-11-04 Thread Takeshi Yoshino
I'd like to summarize my ideas behind this API surface since the overlap
thread is too long. We'll put these into bug entries soon.

Feedback on the "Overlap" thread, especially Isaac's exhaustive list of
considerations, and the conversation with Aymeric were very helpful. In
reply to his mail, I drafted my initial proposal [2] in the past, which
addresses almost all of them. Since the API surface was so big, I tried to
compact it while incorporating Promises. The current ED [3] addresses not
all but some of the important requirements. I think it's a good
(re)starting point.

* Flow control
read() and write() in the ED do provide flow control by controlling the
timing of resolution of the returned Promise. A Stream would have a window
to limit the data to be buffered in it. If a big value is passed as the
size parameter of read(), it may extend the window if necessary.

When reading data as a DOMString, the size param of read() doesn't specify
the exact raw size of the data to be read out. It just works as a throttle
to prevent the internal buffer from being drained too fast.
StreamReadResult tells how many bytes were actually consumed.
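
For example (a sketch assuming the read()/StreamReadResult shape described
above; the names follow the ED, the numbers are arbitrary):

async function readSomeText(stream) {
  stream.readType = 'text';
  const result = await stream.read(1024);  // throttle: at most ~1024 raw bytes
  return { text: result.data,              // decoded DOMString
           rawBytesConsumed: result.size };
}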

If more explicit and precise flow control is necessary, we could
cherry-pick some from my old big API proposal [1], for example, making the
window size configurable.

If it makes sense, size can be generalized to be the cost of each element.
That would be useful when trying to generalize Stream to various objects.

To make the window dynamically adjustable, we could introduce methods such
as drainCapacity() and expandCapacity().

* Passive producer
Thinking of producers like a random number generator, it's not always good
to ask a producer to prepare data and push it to a Stream using write().
This was possible in [2], but not in the ED. It can be addressed, for
example, by adding one overload to write():
- write() and write(size) don't write data but wait until the Stream can
accept some amount or the specified size of data.
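
A sketch of how a pull-style producer might use that overload
(hypothetical; write(size) here only waits for capacity, it writes nothing):

async function produceRandomBytes(stream) {
  for (;;) {
    await stream.write(4096);        // wait until 4096 bytes would fit
    const chunk = new Uint8Array(4096);
    crypto.getRandomValues(chunk);   // produce only once there's room
    await stream.write(chunk);       // now actually write
  }
}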

* Conversion from existing active and unstoppable producer APIs
E.g. WebSocket invokes onmessage immediately when new data is available.
For this kind of API, a finite-size Stream cannot absorb the production.
So, there'll be a need to buffer read data manually. In [2], Stream always
accepted write() even if the buffer was full, assuming that, if necessary,
the producer would be using the onwritable method.

Currently, only one write() can be issued concurrently, but we can choose
to have Stream queue write() requests internally.

* Sync read if possible
This would be done by adding a sync flag to StreamReadResult and
introducing StreamWriteResult to signal whether the read was done
synchronously (data is the actual result) or asynchronously (data is a
Promise), to save the Promise post-tasking cost.

I estimated that the post-tasking overhead should be negligible for bulk
reading, and when reading small fields, we often read some amount into an
ArrayBuffer and then parse it directly. So it's currently excluded.

* Multiple consumers
pipe() can take multiple destination Streams. This allows for mirroring a
Stream's output into two or more Streams. I also considered making Stream
itself consumable, but I thought it complicates the API and implementation:
- Elements in the internal buffer need to be reference counted.
- It must be possible to distinguish consumers.

If one of the consumers is fast and the other is slow, we need to wait for
the slower one before starting to process the rest in the original Stream.
We can choose to allow multiple consumers to address this by introducing a
new concept, "Cursor", that represents a reading context. A Cursor can be
implemented as a new interface, or as a Stream that refers to (and adds a
reference count to elements of) the original Stream's internal buffer.

This needs some more study to figure out whether the cursor approach is
really better than pipe()-ing to a new Stream instance.

* Splitting InputStream (ReadableStream) and OutputStream (WritableStream)
Writing part and reading part of the API can be split into two separate
interfaces. It's designed to allow for such decoupling. The constructor and
most of internal flags are to define a plain Stream. I'll give it a try
soon.

* StreamConsumeResult
I decided to have this interface for returning results in order to:
- notify EOF with just one read call if possible
- tell the size of the raw binary data consumed when readType="text"

* read(n)
There are roughly two ways to encode structured data: length-header based
and separator based. For the former, people basically don't want to get
notified when not enough data is ready yet. It's also inconvenient if we
get small ArrayBuffer chunks and need to concatenate them manually. For the
latter, call read(), or read(n) if you need flow control.

The small-chunk problem is also bothersome for separator-based protocols.
unshift() or peek() may help.
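
For concreteness, a length-header protocol reads cleanly with exact-size
reads (a sketch assuming the promise-returning read(n) above, with binary
readType so .data is an ArrayBuffer):

async function readFrame(stream) {
  const header = await stream.read(4);             // exactly 4 bytes unless EOF
  if (header.eof && header.size < 4) return null;  // clean end of stream
  const length = new DataView(header.data).getUint32(0);
  const body = await stream.read(length);          // exactly `length` bytes
  return body.data;
}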

* In-band success/fail signaling
I excluded an abort()-like method. Any error signal and other additional
information are conveyed manually outside of the Stream. We can revisit
this point. If it turns out to be essential, we could put abort() back or
add an argument to close().

Re: Splitting Stream into InputStream and OutputStream (was Re: CfC: publish WD of Streams API; deadline Nov 3)

2013-10-31 Thread Takeshi Yoshino
On Thu, Oct 31, 2013 at 4:48 PM, François REMY <
francois.remy@outlook.com> wrote:

>  Since JavaScript does not provide a way to check if an object implements
> an interface, there should probably exist a way to check that from the API,
> like:
>

Basically it should be sufficient if each API can check the type, but yeah,
probably useful.


> var stream = InputStream(s) // returns “s” if it’s an input stream,
> convert is into a stream if possible, or return null
>if(stream) { … } else { … }
>
> That’s a great way to convert a string into a stream, for example, in the
> case of an API that requires an InputStream and must integrate with older
> code that returns a String or a Blob.
>

Interesting. Maybe also accept an array of strings?
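
A sketch of what such a coercion might do (hypothetical helper built on the
ED's Stream constructor; only the already-convertible cases are shown):

function toInputStream(s) {
  if (s instanceof Stream) return s;                 // already a stream
  if (typeof s === 'string' || s instanceof Blob) {  // wrap one-shot data
    const stream = new Stream(s.type || 'text/plain');
    stream.write(s).then(() => stream.close());
    return stream;
  }
  return null;                                       // not convertible
}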


Re: publish WD of Streams API; deadline Nov 3

2013-10-31 Thread Takeshi Yoshino
OK. There seems to be some disconnect, but I'm fine with waiting for
Domenic's proposal first.

Takeshi


On Thu, Oct 31, 2013 at 7:41 PM, Anne van Kesteren  wrote:

> On Wed, Oct 30, 2013 at 6:04 PM, Domenic Denicola
>  wrote:
> > I have concrete suggestions as to what such an API could look like—and,
> more importantly, how its semantics would significantly differ from this
> one—which I hope to flesh out and share more broadly by the end of this
> weekend. However, since the call for comments phase has commenced, I
> thought it important to voice these objections as soon as possible.
>
> Given how long we have been trying to figure out streams, waiting a
> little longer to see Domenic's proposal should be fine I think. No
> need to start rushing things through the process now. (Although on the
> flipside at some point we will need to start shipping something.)
>
>
> --
> http://annevankesteren.nl/
>
>


Defining generic Stream than considering only bytes (Re: CfC: publish WD of Streams API; deadline Nov 3)

2013-10-30 Thread Takeshi Yoshino
Hi Dean,

On Thu, Oct 31, 2013 at 11:30 AM, Dean Landolt  wrote:

> I really like this general concepts of this proposal, but I'm confused by
> what seems like an unnecessary limiting assumption: why assume all streams
> are byte streams? This is a mistake node recently made in its streams
> refactor that has led to an "objectMode" and added cruft.
>
> Forgive me if this has been discussed -- I just learned of this today. But
> as someone who's been slinging streams in javascript for years I'd really
> hate to see the "standard" stream hampered by this bytes-only limitation.
> The node ecosystem clearly demonstrates that streams are for more than
> bytes and (byte-encoded) strings.
>
>
To glue Streams to existing binary handling infrastructure such as
ArrayBuffer and Blob, we should have some specialization of Stream for
handling bytes rather than a generalized Stream that would accept/output an
array or a single object of some type. Maybe we can rename the Streams API
to ByteStream so as not to occupy the name Stream, which sounds more
generic, and start standardizing a generic Stream.


> In my perfect world any arbitrary iterator could be used to characterize
> stream chunks -- this would have some really interesting benefits -- but I
> suspect this kind of flexibility would be overkill for now. But there's
> good no reason bytes should be the only thing people can chunk up in
> streams. And if we're defining streams for the whole platform they
> shouldn't *just* be tied to a few very specific file-like use cases.
>
> If streams could also consist of chunks of strings (real, native strings)
> a huge swath of the API could disappear. All of readType, readEncoding and
> charset could be eliminated, replaced with simple, composable transforms
> that turn byte streams (of, say, utf-8) into string streams. And vice versa.
>
>
So, for example, XHR would be the point of decoding and it returns a Stream
of DOMStrings?


> Of course the real draw of this approach would be when chunks are neither
> blobs nor strings. Why couldn't chunks be arrays? The arrays could contain
> anything (no need to reserve any value as a sigil). Regardless of the chunk
> type, the zero object for any given type wouldn't be `null` (it would be
> something like '' or []). That means we can use null to distinguish EOF,
> and `chunk == null` would make a perfectly nice (and unambiguous) EOF
> sigil, eliminating yet more API surface. This would give us a clean object
> mode streams for free, and without node's arbitrary limitations.
>

For several reasons, I chose to use .eof rather than null. One of them is
to allow a non-empty final chunk to signal EOF rather than requiring one
more read() call.

This point can be re-discussed.
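
The difference shows up in the read loop. With .eof, the final non-empty
chunk and the EOF flag can arrive together (a sketch using the result shape
of the ED):

async function drain(stream, onChunk) {
  for (;;) {
    const result = await stream.read();
    if (result.data) onChunk(result.data);  // may be non-empty even at EOF
    if (result.eof) return;                 // no extra read() just for EOF
  }
}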


> The `size` of an array stream would be the total length of all array
> chunks. As I hinted before, we could also leave the door open to specifying
> chunks as any iterable, where `size` (if known) would just be the `length`
> of each chunk (assuming chunks even have a `length`). This would also allow
> individual chunks to be built of generators, which could be particularly
> interesting if the `size` argument to `read` was specified as a maximum
> number of bytes rather than the total to return -- completely sensible
> considering it has to behave this way near the end of the stream anyway...
>

I don't really understand the last point. Could you please elaborate on the
story and the benefit?

IIRC, it's considered useful and important to be able to cut an exact
requested size of data into an ArrayBuffer object and get notified (the
returned Promise gets resolved) only when it's ready.


> This would lead to a pattern like `stream.read(Infinity)`, which would
> essentially say *give me everything you've got soon as you can*.
>

In the current proposal, read(), i.e. read() with no argument, does this.


>  This is closer to node's semantics (where read is async, for added
> scheduling flexibility). It would drain streams faster rather than
> pseudo-blocking for a specific (and arbitrary) size chunk which ultimately
> can't be guaranteed anyway, so you'll always have to do length checks.
>
> (On a somewhat related note: why is a 0-sized stream specified to throw?
> And why a SyntaxError of all things? A 0-sized stream seems perfectly
> reasonable to me.)
>

A 0-sized Stream is not prohibited.

Do you mean 0-sized read()/pipe()/skip()? I don't think they make much
sense. They'd be useful only when you want to sense EOF, and that can be
done with read(1).


> What's particularly appealing to me about the chunk-as-generator idea is
> that these chunks could still be quite large -- hundreds megabytes, even.
> Just because a potentially large amount of data has become available since
> the last chunk was processed doesn't mean you should have to bring it all
> into memory at once.
>

That's interesting. Could you please list some concrete examples of such a
generator?


Splitting Stream into InputStream and OutputStream (was Re: CfC: publish WD of Streams API; deadline Nov 3)

2013-10-30 Thread Takeshi Yoshino
Hi François

On Thu, Oct 31, 2013 at 6:16 AM, François REMY <
francois.remy@outlook.com> wrote:

> - Streams should exist in at least two fashions: InputStream and
> OutputStream. Both of them serve different purposes and, while some stream
> may actually be both, this remains an exceptional behavior worth being
> noted. Finally, a "Stream" is not equal to a InMemoryStream as the
> constructor may seem to indicate. A stream is a much lower-level concept,
> which may actually have nothing to do with InMemory operations.
>

Yes. I initially thought it would be clearer to split the in/out
interfaces. E.g. a Stream obtained from XHR to receive a response should
not be writable. It's reasonable to make the network-to-Stream transfer
happen in the background, asynchronously to JavaScript, and then it doesn't
make much sense to keep the Stream writable from JavaScript.

It has a unified IDL now, but I'm designing the write side and read side
independently. We could decouple it into two separate IDLs (concepts?) if
that makes sense. Stream would inherit from both and provide a constructor.


Re: Overlap between StreamReader and FileReader

2013-10-30 Thread Takeshi Yoshino
On Wed, Oct 30, 2013 at 8:14 PM, Takeshi Yoshino wrote:

> On Wed, Oct 23, 2013 at 11:42 PM, Aymeric Vitte wrote:
>
>> - pause: pause the stream, do not send eof
>>
>
>>
> Sorry, what will be paused? Output?
>

http://lists.w3.org/Archives/Public/public-webrtc/2013Oct/0059.html
http://www.w3.org/2011/04/webrtc/wiki/Transport_Control#Pause.2Fresume

So, you're suggesting that we make Stream a convenient point where we can
dam up the data flow, and skip adding methods for pausing data production
and consumption to producer/consumer APIs? I.e. we make it possible to
prevent data queued in a Stream from being read. This typically means
asynchronously suspending an ongoing pipe() or read() call on the Stream
made with no argument or a very large argument.


>
>
>>  - unpause: restart the stream
>>
>> And flow control should be back and explicit, not sure right now how to
>> define it but I think it's impossible for a js app to do a precise flow
>> control, and for existing APIs like WebSockets it's not easy to control the
>> flow and avoid in some situations to overload the UA.
>>
>


Re: Overlap between StreamReader and FileReader

2013-10-30 Thread Takeshi Yoshino
On Wed, Oct 23, 2013 at 11:42 PM, Aymeric Vitte wrote:

>  Your filter idea seems to be equivalent to a createStream that I
> suggested some time ago (like node), what about:
>
> var encryptionPromise = crypto.subtle.encrypt(aesAlgorithmEncrypt, aesKey,
> sourceStream).createStream();
>
> So you don't need to modify the APIs where you can not specify the
> responseType.
>
> I was thinking to add stop/resume and pause/unpause:
>
> - stop: insert eof in the stream
>

close() does this.


>  Example : finalize the hash when eof is received
>
> - resume: restart from where the stream stopped
> Example : restart the hash from the state the operation was before
> receiving eof (related to Issue22 in WebCrypto that was closed without any
> solution, might imply to clone the state of the operation)
>
>
Should it really be a part of the Streams API? How about just making the
filter (not the Stream itself) returned by WebCrypto reusable and adding
some method to recycle it?


> - pause: pause the stream, do not send eof
>
>
Sorry, what will be paused? Output?


>  - unpause: restart the stream
>
> And flow control should be back and explicit, not sure right now how to
> define it but I think it's impossible for a js app to do a precise flow
> control, and for existing APIs like WebSockets it's not easy to control the
> flow and avoid in some situations to overload the UA.
>
> Regards,
>
> Aymeric
>
> On 21/10/2013 13:14, Takeshi Yoshino wrote:
>
>  Sorry for the ~2-week silence.
>
>  On Fri, Oct 4, 2013 at 5:57 PM, Aymeric Vitte wrote:
>
>>  I am still not very familiar with promises, but if I take your
>> preceding example:
>>
>>
>> var sourceStream = xhr.response;
>> var resultStream = new Stream();
>> var fileWritingPromise = fileWriter.write(resultStream);
>> var encryptionPromise = crypto.subtle.encrypt(aesAlgorithmEncrypt,
>> aesKey, sourceStream, resultStream);
>>  Promise.all(fileWritingPromise, encryptionPromise).then(
>>   ...
>> );
>>
>
>  I made a mistake. The argument of Promise.all should be an Array. So,
> [fileWritingPromise, encryptionPromise].
>
>
>>
>>
>>  shoud'nt it be more something like:
>>
>> var sourceStream = xhr.response;
>> var encryptionPromise = crypto.subtle.encrypt(aesAlgorithmEncrypt,
>> aesKey);
>> var resultStream=sourceStream.pipe(encryptionPromise);
>> var fileWritingPromise = fileWriter.write(resultStream);
>>  Promise.all(fileWritingPromise, encryptionPromise).then(
>>   ...
>> );
>>
>
>  Promises just tell the user completion of each operation with some value
> indicating the result of the operation. It's not destination of data.
>
>  Do you think it's good to create objects representing each encrypt
> operation? So, some objects called "filter" is introduced and the code
> would be like:
>
>  var pipeToFilterPromise;
>
>  var encryptionFilter;
>  var fileWriter;
>
>  xhr.onreadystatechange = function() {
>   ...
>   } else if (this.readyState == this.LOADING) {
>  if (this.status != 200) {
>   ...
> }
>
>  var sourceStream = xhr.response;
>
>  encryptionFilter =
> crypto.subtle.createEncryptionFilter(aesAlgorithmEncrypt, aesKey);
> // Starts the filter.
> var encryptionPromise = encryptionFilter.encrypt();
> // Also starts pouring data but separately from promise creation.
>  pipeToFilterPromise = sourceStream.pipe(encryptionFilter);
>
>  fileWriter = ...;
> // encryptionFilter works as data producer for FileWriter.
> var fileWritingPromise = fileWriter.write(encryptionFilter);
>
>  // Set only handler for rejection now.
>  pipeToFilterPromise.catch(
>function(result) {
> xhr.abort();
> encryptionFilter.abort();
>  fileWriter.abort();
>   }
>  );
>
>  encryptionPromise.catch(
>function(result) {
>  xhr.abort();
>  fileWriter.abort();
>   }
>  );
>
>  fileWritingPromise.catch(
>function(result) {
>  xhr.abort();
>  encryptionFilter.abort();
>   }
>  );
>
>   // As encryptionFilter will be (successfully) closed only
>  // when XMLHttpRequest and pipe() are both successful.
> // So, it's ok to set handler for fulfillment now.
>  Promise.all([encryptionPromise, fileWritingPromise]).then(
>function(result) {
> // Done everything successfully!
> // We come here only when encryptionFilter is close()-ed.
> fileWriter.close();
> pr

Re: Overlap between StreamReader and FileReader

2013-10-21 Thread Takeshi Yoshino
Sorry for the ~2-week silence.

On Fri, Oct 4, 2013 at 5:57 PM, Aymeric Vitte wrote:

> I am still not very familiar with promises, but if I take your preceding
> example:
>
>
> var sourceStream = xhr.response;
> var resultStream = new Stream();
> var fileWritingPromise = fileWriter.write(resultStream);
> var encryptionPromise = crypto.subtle.encrypt(aesAlgorithmEncrypt, aesKey,
> sourceStream, resultStream);
>  Promise.all(fileWritingPromise, encryptionPromise).then(
>   ...
> );
>

I made a mistake. The argument of Promise.all should be an Array. So,
[fileWritingPromise, encryptionPromise].


>
>
> shoud'nt it be more something like:
>
> var sourceStream = xhr.response;
> var encryptionPromise = crypto.subtle.encrypt(aesAlgorithmEncrypt, aesKey);
> var resultStream=sourceStream.pipe(encryptionPromise);
> var fileWritingPromise = fileWriter.write(resultStream);
>  Promise.all(fileWritingPromise, encryptionPromise).then(
>   ...
> );
>

Promises just tell the user about the completion of each operation, with
some value indicating the result of the operation. They're not a
destination for data.

Do you think it's good to create objects representing each encrypt
operation? So, some objects called "filters" are introduced and the code
would be like:

var pipeToFilterPromise;

var encryptionFilter;
var fileWriter;

xhr.onreadystatechange = function() {
  ...
  } else if (this.readyState == this.LOADING) {
 if (this.status != 200) {
  ...
}

var sourceStream = xhr.response;

encryptionFilter =
crypto.subtle.createEncryptionFilter(aesAlgorithmEncrypt, aesKey);
// Starts the filter.
var encryptionPromise = encryptionFilter.encrypt();
// Also starts pouring data but separately from promise creation.
pipeToFilterPromise = sourceStream.pipe(encryptionFilter);

fileWriter = ...;
// encryptionFilter works as data producer for FileWriter.
var fileWritingPromise = fileWriter.write(encryptionFilter);

// Set only handler for rejection now.
pipeToFilterPromise.catch(
  function(result) {
xhr.abort();
encryptionFilter.abort();
fileWriter.abort();
  }
);

encryptionPromise.catch(
  function(result) {
xhr.abort();
fileWriter.abort();
  }
);

fileWritingPromise.catch(
  function(result) {
xhr.abort();
encryptionFilter.abort();
  }
);

// As encryptionFilter will be (successfully) closed only
// when XMLHttpRequest and pipe() are both successful.
// So, it's ok to set handler for fulfillment now.
Promise.all([encryptionPromise, fileWritingPromise]).then(
  function(result) {
// Done everything successfully!
// We come here only when encryptionFilter is close()-ed.
fileWriter.close();
processFile();
  }
);
  } else if (this.readyState == this.DONE) {
 if (this.status != 200) {
  encryptionFilter.abort();
  fileWriter.abort();
} else {
  // Now we know that XHR was successful.
  // Let's close() the filter to finish encryption
  // successfully.
  pipeToFilterPromise.then(
function(result) {
  // XMLHttpRequest closes sourceStream but pipe()
  // resolves pipeToFilterPromise without closing
  // encryptionFilter.
  encryptionFilter.close();
}
  );
}
  }
};
xhr.send();

encryptionFilter has the same interface as a normal stream but encrypts
piped data. Encrypted data is readable from it. It has special methods,
encrypt() and abort().

processFile() is a hypothetical function that must be called only when
loading, encryption and saving to file were all successful.


>
> or
>
> var sourceStream = xhr.response;
> var encryptionPromise = crypto.subtle.encrypt(aesAlgorithmEncrypt, aesKey);
> var hashPromise = crypto.subtle.digest(hash);
> var resultStream = sourceStream.pipe([encryptionPromise,hashPromise]);
> var fileWritingPromise = fileWriter.write(resultStream);
> Promise.all([fileWritingPromise, resultStream]).then(
>   ...
> );
>
>
and this should be:

var sourceStream = xhr.response;

encryptionFilter =
crypto.subtle.createEncryptionFilter(aesAlgorithmEncrypt, aesKey);
var encryptionPromise = encryptionFilter.encrypt();

hashFilter = crypto.subtle.createDigestFilter(hash);
var hashPromise = hashFilter.digest();

pipeToFiltersPromise = sourceStream.pipe([encryptionFilter, hashFilter]);

var encryptedDataWritingPromise = fileWriter.write(encryptionFilter);

var hashWritingPromise =
  Promise.all([encryptionPromise, encryptedDataWritingPromise]).then(
function(result) {
  return fileWriter.write(hashFilter)
},
...
  );

Promise.all([hashPromise, hashWritingPromise]).then(
  function(result) {
fileWriter.close();
processFile();
  },
  ...
);

Or, we could also choose to let the writer API create a special object that
has the Stream interface for receiving input, and then let encryptionFilter
and hashFilter pipe() to it.

...
pipeToFi

Re: [streams-api] Seeking status and plans

2013-10-11 Thread Takeshi Yoshino
On Thu, Oct 10, 2013 at 11:34 PM, Feras Moussa wrote:

> Apologies for the delay, I had broken one of my mail rules and didn't see
> this initially.
>
> Aymeric is correct - there have been a few threads and revisions. A more
> up-to-date version is the one Aymeric linked -
> http://htmlpreview.github.io/?https://github.com/tyoshino/stream/blob/master/streams.html
> The above version incorporates both promises and streams and is a more
> refined version of Streams.
>
> From other threads on Stream, it became apparent that there were a few
> pieces of the current Streams API ED that were designed around older
> paradigms and needed refining to be better aligned with current APIs.  I
> think it does not make sense to have two different specs, and instead have
> a combined one that we move forward.
>
> I can work with Takeshi on getting his version incorporated into the
> Streams ED, which we can then move forward with.
>

I'm happy to.


>
> Thanks,
> Feras
>
>
> > Date: Thu, 10 Oct 2013 09:32:20 -0400
> > From: art.bars...@nokia.com
> > To: vitteayme...@gmail.com; feras.mou...@hotmail.com;
> tyosh...@google.com
> > CC: public-webapps@w3.org
> > Subject: Re: [streams-api] Seeking status and plans
> >
> > On 10/10/13 6:26 AM, ext Aymeric Vitte wrote:
> >> I think the plan should be more here now:
> >> http://lists.w3.org/Archives/Public/public-webapps/2013OctDec/0049.html
> >
> > There are indeed at least two specs here:
> >
> > [1] Feras'  >
> >
> > [2] Takeshi's
> > <
> http://htmlpreview.github.io/?https://github.com/tyoshino/stream/blob/master/streams.html
> >
> >
> > Among the Qs I have here are ...
> >
> > * What is the implementation status of these specs?
>

There's a Blink intent-to-implement for the Streams API that I sent. I plan
to implement it based on [2]. By turning on the experimental features flag,
XHR in Chrome can return a response as a Stream object. It's based on
"10. Extension of XMLHttpRequest" of [1]. Nothing has been done yet for the
rest. We're preparing for it (such as the implementation of Promises).


Re: [XHR] Event firing order. XMLHttpRequestUpload then XMLHttpRequest or reverse

2013-10-04 Thread Takeshi Yoshino
On Sat, Oct 5, 2013 at 1:26 AM, Anne van Kesteren  wrote:

> On Fri, Oct 4, 2013 at 3:12 PM, Takeshi Yoshino 
> wrote:
> > Sorry. I included network by mistake. I wanted to understand the abort
> error
> > well.
> >
> > Q: cancel by abort() is abort error?
>
> It's not the same condition. abort() has its own set of steps.
> Although we might be able to merge these and probably should...
>
>
>
OK.


>  > Q: any kind of network activity cancellation not due to network/timeout
> are
> > abort error?
>
> No.
>

OK. I didn't understand what you meant by "end user" correctly. Never mind.


Re: [XHR] Event firing order. XMLHttpRequestUpload then XMLHttpRequest or reverse

2013-10-04 Thread Takeshi Yoshino
On Fri, Oct 4, 2013 at 8:46 PM, Anne van Kesteren  wrote:

> On Thu, Oct 3, 2013 at 6:35 AM, Takeshi Yoshino 
> wrote:
> > On Tue, Sep 3, 2013 at 9:00 PM, Anne van Kesteren 
> wrote:
> >> This is the end user terminate, correct?
> >
> > Yes. So, this includes any kind of event resulting in termination of
> fetch
> > algorithm (network, termination by some user's instruction to UA)?
>
> No, if you look at
> http://xhr.spec.whatwg.org/#infrastructure-for-the-send%28%29-method
> "If the end user cancels the request" is about the end user, "If there
> is a network error" is about everything else.
>
>
Sorry. I included network by mistake. I wanted to understand the abort
error well.

Q: cancel by abort() is abort error?
Q: any kind of network activity cancellation not due to network/timeout are
abort error?


Re: Overlap between StreamReader and FileReader

2013-10-03 Thread Takeshi Yoshino
Formatted and published my latest proposal at github after incorporating
Aymeric's multi-dest idea.

http://htmlpreview.github.io/?https://github.com/tyoshino/stream/blob/master/streams.html


On Sat, Sep 28, 2013 at 11:45 AM, Kenneth Russell  wrote:

> This looks nice. It looks like it should already handle the flow control
> issues mentioned earlier in the thread, simply by performing the read on
> demand, though reporting the result asynchronously.
>
>
Thanks, Kenneth for reviewing.


Re: [XHR] Event firing order. XMLHttpRequestUpload then XMLHttpRequest or reverse

2013-10-02 Thread Takeshi Yoshino
Sorry for the delay.

On Tue, Sep 3, 2013 at 9:00 PM, Anne van Kesteren  wrote:

> On Tue, Sep 3, 2013 at 9:18 AM, Takeshi Yoshino 
> wrote:
> > In the spec, we have three "cancel"s
> > - cancel an instance of fetch algorithm
> > - cancel network activity
>
> These are the same. Attempted to clarify.
>
>
Verified that the order of events is the same for abort() and abort error.

Thanks for factoring out the termination algorithm. It's clearer now.


>
> > - cancel a request
>
> This is the end user terminate, correct?
>

Yes. So, this includes any kind of event resulting in termination of the
fetch algorithm (network, termination by some user instruction to the UA)?


> Would you like to be
> acknowledged as "Takeshi Yoshino"? If you can give me your name in
> kanji I can include that too. See e.g.
> http://encoding.spec.whatwg.org/#acknowledgments for some examples.
>
>
Thank you. Just alphabetical name is fine.


> See http://xhr.spec.whatwg.org/ for the updated text. And
> https://github.com/whatwg/xhr/commits for an overview of the changes.
>


Re: Overlap between StreamReader and FileReader

2013-09-26 Thread Takeshi Yoshino
On Thu, Sep 26, 2013 at 6:36 PM, Aymeric Vitte wrote:

> Looks good, comments/questions :
>
> - what's the use of readEncoding?
>

It overrides the charset specified in .type for the read op. It's weird,
but we could instead ask an app to overwrite .type.


>
> - StreamReadType: add MediaStream? (and others if existing)
>

Maybe, if there's a clear rule to convert a binary stream + MIME type into
a MediaStream object.


>
> - would it be possible to pipe from StreamReadType to other StreamReadType?
>

pipe() tells the receiver with which value of StreamReadType the pipe() was
called. Receiver APIs may be designed to accept either mode or both modes.


>
> - would it be possible to pipe from a source to different targets (my
> example of encrypt/hash at the same time)?
>

I missed it. Your mirroring method (making pipe() accept multiple Streams)
looks good.

The problem is what to do when one of the destinations is write-blocked.
Maybe we want to read data from the source as fast as the fastest consumer
consumes it and save the read data for the slowest one. When should we
fulfill the promise? On completion of the read from the source, on
completion of the write to all destinations, etc.


>
> - what is the link between the API and the Stream (responseType='stream')?
> How do you handle this for APIs where responseType does not really apply
> (mspack, crypto...)
>

- make APIs return a Stream for reading (writing), like
XHR.responseType='stream'
- make APIs accept a Stream for reading (writing)

Either should work, as we have pipe().

E.g.

var sourceStream = xhr.response;
var resultStream = new Stream();
var fileWritingPromise = fileWriter.write(resultStream);
var encryptionPromise = crypto.subtle.encrypt(aesAlgorithmEncrypt, aesKey,
sourceStream, resultStream);
Promise.all(fileWritingPromise, encryptionPromise).then(
  ...
);



I also found a point that needs clarification:

- whether pipe() signals EOF or not. I think we don't want automatic EOF.


Re: Overlap between StreamReader and FileReader

2013-09-25 Thread Takeshi Yoshino
As we don't see any strong demand for flow control and sync read
functionality, I've revised the proposal.

Though we can separate state/error signaling from Stream and keep it done
by each API (e.g. XHR) as Aymeric said, the EOF signal still needs to be
conveyed through the Stream.



enum StreamReadType {
  "",
  "blob",
  "arraybuffer",
  "text"
};

interface StreamConsumeResult {
  readonly attribute boolean eof;
  readonly attribute any data;
  readonly attribute unsigned long long size;
};

[Constructor(optional DOMString mime)]
interface Stream {
  readonly attribute DOMString type;  // MIME type

  // Rejected on error. No more write ops should be made.
  //
  // Fulfilled when the write completes. It doesn't guarantee that the
  // written data has been read out successfully.
  //
  // The contents of the ArrayBufferView must not be modified until the
  // promise is fulfilled.
  //
  // Fulfillment may be delayed when the Stream considers itself to be full.
  //
  // write() and close() must not be called again until the Promise of the
  // last write() is fulfilled.
  Promise write((DOMString or ArrayBufferView or Blob)? data);
  void close();

  attribute StreamReadType readType;
  attribute DOMString readEncoding;

  // read(), skip(), pipe() must not be called again until the Promise of
  // the last read(), skip() or pipe() is fulfilled.

  // Rejected on error. No more read ops should be made.
  //
  // If size is specified,
  // - if EOF: fulfilled with data up to EOF
  // - otherwise: fulfilled with data of size bytes
  //
  // If size is omitted, (all or part of) the data available for read now
  // will be returned.
  //
  // If readType is set to text, the size of the result may be smaller than
  // the value specified for the size argument.
  Promise read(optional [Clamp] long long size);

  // Rejected on error. Fulfilled on completion.
  //
  // .data of the result is not used. .size of the result is the skipped
  // amount.
  Promise skip([Clamp] long long size);

  // Rejected on error. Fulfilled on completion.
  //
  // If size is omitted, transfer until EOF is encountered.
  //
  // .data of the result is not used. .size of the result is the size of
  // the data transferred.
  Promise pipe(Stream destination, optional [Clamp] long long size);
};
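
A short usage sketch of this surface (assuming default windows; a producer
writes two chunks, then a consumer drains them):

async function demo() {
  const stream = new Stream('application/octet-stream');

  // Producer side: one outstanding write at a time, per the contract above.
  await stream.write(new Uint8Array([1, 2, 3]));
  await stream.write(new Uint8Array([4, 5]));
  stream.close();

  // Consumer side.
  stream.readType = 'arraybuffer';
  for (;;) {
    const result = await stream.read();
    console.log(result.size + ' bytes' + (result.eof ? ' (eof)' : ''));
    if (result.eof) break;
  }
}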


Re: Overlap between StreamReader and FileReader

2013-09-25 Thread Takeshi Yoshino
On Wed, Sep 25, 2013 at 10:55 PM, Aymeric Vitte wrote:

>
>  My understanding is that the flow control APIs like mine are intended to
> be used by JS code implementing some converter, consumer, etc. while
> built-in stuff like WebCrypt would be evolved to accept Stream directly and
> handle flow control in e.g. C++ world.
>
>
>  
>
>  BTW, I'm discussing this to provide data points to decide whether to
> include flow control API or not. I'm not pushing it. I appreciate if other
> participants express opinions about this.
>
>
>
> Not sure to get what you mean between your API flow control and built-in
> flow control... I think the main purpose of the Stream API should be to
> handle more efficiently streaming without having to handle ArrayBuffers
> copy, split, concat, etc, to abstract the use of ArrayBuffer,
> ArrayBufferView, Blob, txt so you don't spend your time converting things
> and to connect simply different streams.
>

The JS flow control API is for JS code to manually control thresholds,
buffer sizes, etc. so that JS code can consume/produce data to/from a
Stream.

Built-in flow control is the C++ (or whatever language implements the UA)
interface that will be used when streams are connected with pipe(). Maybe
it would have a similar interface to the JS flow control API.


Re: Overlap between StreamReader and FileReader

2013-09-24 Thread Takeshi Yoshino
On Wed, Sep 25, 2013 at 12:41 AM, Aymeric Vitte wrote:

>  Did you see
> http://lists.w3.org/Archives/Public/public-webapps/2013JulSep/0593.html ?
>

Yes. This example seems to be showing how to connect only producer/consumer
APIs which support Stream. Right?

In such a case, all the flow control stuff would be basically hidden, and,
if necessary, each producer/consumer/transformer/filter/etc. may expose
flow-control-related parameters in its own form and configure the connected
input/output streams accordingly. E.g. stream_xhr may choose to have a
large write buffer for performance, or have a small one and apply some
backpressure to stream_ws1 for memory efficiency.

My understanding is that flow control APIs like mine are intended to be
used by JS code implementing some converter, consumer, etc. while built-in
stuff like WebCrypto would be evolved to accept a Stream directly and
handle flow control in, e.g., the C++ world.



BTW, I'm discussing this to provide data points for deciding whether to
include a flow control API or not. I'm not pushing it. I'd appreciate it if
other participants expressed opinions about this.


Re: Overlap between StreamReader and FileReader

2013-09-20 Thread Takeshi Yoshino
On Sat, Sep 14, 2013 at 12:03 AM, Aymeric Vitte wrote:

>
> I take this example to understand if this could be better with a built-in
> Stream flow control, if so, after you have defined the right parameters (if
> possible) for the streams flow control, you could process delta data while
> reading the file and restream them directly via WebSockets, and this would
> be great but again not sure that a universal solution can be found.
>
>
I think what we can do is just provide helpers to make it easier to build
such intelligent, app-specific flow control logic.

Maybe one of the points of your example is that we're not always able to
calculate a good readableThreshold. I'm also not so sure how many apps in
the world can benefit from this kind of API.

For consumers that can do flow control well on a receive-window basis, my
API should work well (unnecessary events are not dispatched; chunks are
consolidated; lazier ArrayBuffer creation). WebSocket has the (broken)
bufferedAmount attribute for window-based flow control. Are you using it as
a hint?


>


Re: Overlap between StreamReader and FileReader

2013-09-13 Thread Takeshi Yoshino
On Fri, Sep 13, 2013 at 9:50 PM, Aymeric Vitte wrote:

>
> On 13/09/2013 14:23, Takeshi Yoshino wrote:
>
> Do you mean that those data producer APIs should be changed to provide
> read-by-delta-data, and manipulation of data by js code should happen there
> instead of at the output of Stream?
>
>
>
> Yes, exactly, except if you/someone see another way of getting the data
> inside the browser and turning the flow into a stream without using these
> APIs.
>

I agree that there are various states and things to handle for each of the
producer APIs, and it might be judicious not to convey such API-specific
info/signals through Stream.

I don't think it's bad to convert xhr.DONE to stream.close() manually as in
your example
http://lists.w3.org/Archives/Public/public-webapps/2013JulSep/0453.html.

But, regarding flow control, as I said in the other mail just posted, if we
start thinking about flow control more seriously, maybe the right approach
is to have a unified flow control method, and the point at which to define
such fine-grained flow control is Stream, not each API. If we don't, yes,
maybe your proposal (deltaResponse) would be enough.


Re: Overlap between StreamReader and FileReader

2013-09-13 Thread Takeshi Yoshino
Since I joined the discussion recently, I don't know the original idea
behind the Stream+XHR integration approach (response returns a Stream
object) as in the current Streams API spec. But one advantage of it that I
can come up with is that we can keep the changes to those producer APIs
small. If we decide to add methods, for example for flow control (though
it's still in question), such stuff goes to Stream, not XHR, etc.


Re: Overlap between StreamReader and FileReader

2013-09-13 Thread Takeshi Yoshino
On Fri, Sep 13, 2013 at 6:08 PM, Aymeric Vitte wrote:

> Now you want a stream interface so you can code some js like mspack on top
> of it.
>
> I am still missing a part of the puzzle or how to use it: as you mention
> the stream is coming from somewhere (File, indexedDB, WebSocket, XHR,
> WebRTC, etc) you have a limited choice of APIs to get it, so msgpack will
> act on top of one of those APIs, no? (then back to the examples above)
>
> How can you get the data another way?
>

Do you mean that those data producer APIs should be changed to provide
read-by-delta-data, and manipulation of data by js code should happen there
instead of at the output of Stream?


Re: Overlap between StreamReader and FileReader

2013-09-12 Thread Takeshi Yoshino
On Fri, Sep 13, 2013 at 5:15 AM, Aymeric Vitte wrote:

>  Isaac said too "So, just to be clear, I'm **not** suggesting that
> browser streams copy Node streams verbatim.".
>

I know. I wanted to restart the discussion, which had been stalled for 2 weeks.


> Unless you want to do node inside browsers (which would be great but seems
> unlikely) I still don't see the relation between this kind of proposal and
> existing APIs.
>

What do you mean by "existing APIs"? I was thinking that we've been
discussing what the Stream read/write API for manual consuming/producing by
JavaScript code should look like.


> Could you please give an example very different from the ones I gave
> already?
>

Sorry, which mail?

One of the things I was imagining is protocol parsing, such as msgpack or
protocol buffers. It's good that ArrayBuffers of the exact size can be
obtained.

OTOH, as someone pointed out, Stream should have some flow control
mechanism so as not to pull an unlimited amount of data from async storage,
the network, etc. readableSize in my proposal is an example of how to make
the limit controllable by an app.

We could also depend on the size argument of the read() call. But thinking
of protocol parsing again, it's common to have small fields of, say, 4, 8
or 16 bytes. If read(size) is configured to pull only size bytes from async
storage, it's inefficient. Maybe we could have some hard-coded limit, e.g.
1MiB, and pull max(hardCodedLimit, requestedReadSize) bytes.

I'm fine with the latter.


> You have reverted to EventTarget too instead of promises, why?
>

There was no intention to object to the use of Promises. Sorry that I
wasn't clear. I'm rather interested in receiving a sequence of data chunks
as they become available (corresponding to Jonas's ChunkedData version of
the read methods) with just one read call. Sorry that I didn't mention it
explicitly, but the listeners on the proposed API came from the ChunkedData
object. I thought we could put them on the Stream itself by giving up the
multiple-read scenario.

writableThreshold/readableThreshold can be safely removed from the API if
we agree they're not important. If the threshold stuff is removed, flush()
and pull() will also be removed.


Re: Overlap between StreamReader and FileReader

2013-09-12 Thread Takeshi Yoshino
On Thu, Sep 12, 2013 at 10:58 PM, Aymeric Vitte wrote:

>  Apparently we are not talking about the same thing, while I am thinking
> to a high level interface your interface is taking care of the underlying
> level.
>

How much low-level stuff to expose would basically affect the high-level
interface design, I think.


> Like node's streams, node had to define it since it was not existing (but
> is someone using node's streams as such or does everybody use
>
...snip...

> So, to understand where the mismatch comes from, could you please
> highlight a web use case/code example based on your proposal?
>

I'm still thinking about how much we should include in the API, too. This
proposal is just an attempt to address the requirements Isaac listed. So,
each feature should correspond to some of his example code.

Re: Overlap between StreamReader and FileReader

2013-09-11 Thread Takeshi Yoshino
I forgot to add an attribute to specify the max size of the backing store.
Maybe it should be added to the constructor.

On Wed, Sep 11, 2013 at 11:24 PM, Takeshi Yoshino wrote:

>   any peek(optional [Clamp] long long size, optional [Clamp] long long
> offset);
>

peek() with an offset doesn't make sense for text-mode reading. Some
exception should be thrown in that case.


> - readableSize attribute returns (number of readable bytes as of the last
> time the event loop started executing a task) - (bytes consumed by read()
> method).
>

+ (bytes added by write() and transferred to read buffer synchronously)



The concept of this interface is
- to allow bulk transfer from internal asynchronous storage (e.g. network,
disk-based backing store) to the JS world but delay conversion (e.g. into
DOMString, ArrayBuffer), and
- not to ask an app to do such transfers explicitly.

Re: Overlap between StreamReader and FileReader

2013-09-11 Thread Takeshi Yoshino
Here's my all-in-one strawman proposal, including some new stuff for flow
control. Yes, it's too big, but it may be useful for glancing at what
features are requested.



enum StreamReadType {
  "",
  "arraybuffer",
  "text"
};

[Constructor(optional DOMString mime,
             optional [Clamp] long long writeBufferSize,
             optional [Clamp] long long readBufferSize)]
interface Stream : EventTarget {
  readonly attribute DOMString type;  // MIME type

  // Writing interfaces

  // Bytes that can be written synchronously.
  readonly attribute unsigned long long writableSize;
  attribute unsigned long long writeBufferSize;

  attribute EventHandler onwritable;
  attribute unsigned long long writableThreshold;

  attribute EventHandler onpulled;

  attribute EventHandler onreadaborted;

  void write((DOMString or ArrayBufferView or Blob)? data);
  void flush();
  void closeWrite();
  void abortWrite();

  // Reading interfaces

  // Must not be set after the first read().
  attribute StreamReadType readType;
  attribute DOMString readEncoding;

  // Bytes that can be read synchronously.
  readonly attribute unsigned long long readableSize;
  attribute unsigned long long readBufferSize;

  attribute EventHandler onreadable;
  attribute unsigned long long readableThreshold;

  attribute EventHandler onflush;

  attribute EventHandler onclose;  // Receives clean flag

  any read(optional [Clamp] long long size);
  any peek(optional [Clamp] long long size, optional [Clamp] long long offset);
  void skip([Clamp] long long size);
  void pull();
  void abortRead();

  // Async interfaces

  // Receives bytes skipped, or a Blob, or undefined (when pipeTo is done).
  attribute EventHandler ondoneasync;

  void readAsBlob(optional [Clamp] long long size);
  void longSkip([Clamp] long long size);
  void pipeTo(Stream destination, optional [Clamp] long long size);
};



- Encoding for text mode reading is determined by the type attribute. Can
be overridden by setting the readEncoding attribute.

- Invoking read() repeatedly to pull data into the stream is annoying. So,
instead I used the writable/readableThreshold approach.

- To avoid bloating the API further, I limited the error/close signaling
interface to EventHandlers only.

- stream.read() means stream.read(stream.readableSize).

- After onclose invocation, it's guaranteed that all written bytes are
available for read.

- read() is non-blocking. It returns only what is synchronously readable.
If there aren't enough bytes (check the readableSize attribute), an app
should wait until the next invocation of onreadable. readBufferSize and
readableThreshold may be modified accordingly, and pull() may be called.
- stream.read(size) returns an ArrayBuffer or DOMString of min(size,
stream.readableSize) bytes that is synchronously readable now.

- When readType is set to "text", read() throws an "EncodingError" if an
invalid sequence is found. An incomplete sequence will be left unconsumed.
If there's an incomplete sequence at the end of the stream, the app can
detect that by checking the size attribute after the onclose invocation and
a read() call.

- readableSize attribute returns (number of readable bytes as of the last
time the event loop started executing a task) - (bytes consumed by read()
method).

- onflush is separated from onreadable since it's possible that an
intermediate Stream in a long chain has no data to flush but the next or
later Streams do.
- Invocation order is onreadable -> onflush or onclose.
- Flush handling code must be implemented on both onflush and onclose. On
close() call, only onclose is invoked to reduce event propagation cost.

- Pass a read/writeBufferSize of -1 to the constructor, or set
stream.readBufferSize/writeBufferSize to -1, for unlimited buffering.

- Instead of having write(buffer, cb), I made write() accept data of any
size regardless of writeBufferSize. XHR should respect writeBufferSize,
write only writableSize bytes of data, and set onwritable if necessary,
possibly also setting writableThreshold.

- {writable,readable}Threshold are 0 by default, meaning that onwritable and
onreadable are invoked as soon as there's any space/data available (see the
sketch below).

- If {writable,readable}Threshold is greater than the capacity, it's
considered to be set to the capacity.

- onwritable/onreadable is invoked asynchronously when
-- new space/data becomes available as a result of a read()/write() operation
and satisfies writable/readableThreshold
- onreadable is additionally invoked asynchronously when
-- the stream is flush()-ed or close()-ed

- onwritable/onreadable is invoked synchronously when
-- onwritable/onreadable is updated and there's space/data available that
satisfies writable/readableThreshold
-- writable/readableThreshold is updated and there's space/data available
that satisfies the new writable/readableThreshold
-- new space/data becomes available as a result of updating the capacity and
satisfies writable/readableThreshold
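
To show how these pieces fit together, here's a minimal consumer sketch
against this strawman. Stream, readType, readableThreshold, onreadable,
onclose and read() are all the proposed API above (nothing here is shipped),
and render() is a hypothetical app function.

var stream = new Stream("text/plain");
stream.readType = "text";

// Don't invoke onreadable until at least 1024 bytes are buffered.
stream.readableThreshold = 1024;

stream.onreadable = function() {
  // read() is non-blocking and returns only what is synchronously readable.
  render(stream.read());
};

stream.onclose = function() {
  // All written bytes are guaranteed readable before onclose; drain the rest.
  if (stream.readableSize > 0)
    render(stream.read());
};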


Re: Overlap between StreamReader and FileReader

2013-09-10 Thread Takeshi Yoshino
On Fri, Aug 23, 2013 at 2:41 AM, Isaac Schlueter  wrote:

> 1. Drop the "read n bytes" part of the API entirely.  It is hard to do


I'm OK with that. But then we instead need to evolve ArrayBuffer to have
powerful concat/slice functionality for performance. Re: slicing, we can
just make APIs accept ArrayBufferView. How should we deal with the concat
operation? You suggested that we add unshift(), but repeating read and
unshift until we get enough data doesn't sound so good.

For example, currently TextDecoder (http://encoding.spec.whatwg.org/)
accepts one ArrayBufferView and outputs one DOMString. We can use the
"stream" mode of TextDecoder to get multiple output DOMStrings and then
concatenate them to get the final result.

As we still don't have a StringBuilder, is it really a big deal to have an
"ArrayBufferBuilder"? (Stream.read(size) is a kind of ArrayBuffer builder.)
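
For reference, here's roughly the hand-rolled concatenation apps have to
write today without any builder; this sketch uses only standard
ArrayBuffer/Uint8Array APIs, nothing assumed.

// Collect chunks, then copy everything into one freshly allocated buffer.
function concatChunks(chunks) {  // chunks: an Array of ArrayBuffers
  var total = 0;
  for (var i = 0; i < chunks.length; i++)
    total += chunks[i].byteLength;
  var result = new Uint8Array(total);
  var offset = 0;
  for (var j = 0; j < chunks.length; j++) {
    result.set(new Uint8Array(chunks[j]), offset);
    offset += chunks[j].byteLength;
  }
  return result.buffer;
}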

Are any of you thinking about introducing something like Node.js's Buffer
class for decoding and tokenization? TextDecoder+Stream would be one kind of
such class.

I also considered making the read() operation accept a pre-allocated
ArrayBuffer and return the number of bytes written:

  stream.read(buffer)

If the written data is insufficient, the user can continue to pass the same
buffer to fill the unused space, as sketched below. But since DOMString is
immutable, we can't take the same approach for the readText() op.
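
A sketch of that fill-until-full pattern; read(view) with this signature is
only the variant floated here (not an existing method), and consume() is a
hypothetical app function.

var buffer = new ArrayBuffer(4096);
var filled = 0;
stream.onreadable = function() {
  // Hypothetical semantics: read(view) fills the given view from its start
  // and returns the number of bytes actually written.
  filled += stream.read(new Uint8Array(buffer, filled));
  if (filled === buffer.byteLength) {
    consume(buffer);
    filled = 0;
  }
};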


> see in Node), and complicates the internal mechanisms.  People think
>
they need it, but what they really need is readUntil(delimiterChar).


What if one is implementing a length-header-based protocol, e.g. msgpack?
readUntil(delimiterChar) doesn't help there, while "read n bytes" handles it
naturally, as sketched below.
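
A sketch of such framing against my strawman above, with a simplified 4-byte
big-endian length header standing in for msgpack's real encoding;
peek()/skip()/read()/readableThreshold are the strawman API and
handleMessage() is a hypothetical app function.

stream.readType = "arraybuffer";
stream.onreadable = function() {
  while (stream.readableSize >= 4) {
    // Peek at the length header without consuming it.
    var len = new DataView(stream.peek(4)).getUint32(0);
    if (stream.readableSize < 4 + len) {
      // Sleep until the whole message is buffered.
      stream.readableThreshold = 4 + len;
      return;
    }
    stream.skip(4);
    handleMessage(stream.read(len));
  }
  stream.readableThreshold = 4;
};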


> 2. Reading strings vs ArrayBuffers or other types of things MUST be a
>
property of the stream,


A fixed property, or mutable via the readType attribute?

If readType, the problem of mixed UTF-8/binary read() sequences remains.


> 3. Sync vs async read().  Let's dig into the issue of
> `var d = s.read()` vs `s.read(function(d) {})` for getting data out of
> a stream.
>
...snip...

> buffering to occur if you have pipe chains of streams that are
> processing at different speeds, where one is bursty and the other is
> consistent.
>

Clarification: you're saying that always posting the callback to the task
queue is wasteful, right?

Anyway, I think it makes sense. If read() were designed to invoke the
callback synchronously, it would be difficult to avoid stack overflow (see
the sketch below). So the only option is to always run the callback in the
next task.
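
To make the stack overflow point concrete, a sketch; read(cb) here is a
hypothetical callback-taking read, not a proposed method.

function pump() {
  stream.read(function(d) {
    processData(d);
    pump();  // with synchronous delivery this is direct recursion:
             // one stack frame per buffered chunk, and eventually overflow
  });
}
pump();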


> stream.poll(function ondata() {
>

What happens if unshift() is called? Does poll() invoke ondata() only when
new data (not including unshift()-ed data) is available?


>   var d = stream.read();
>   while (stream.state === 'OK') {
> processData(d);
> d = stream.read();
>   }
>

Is Jonas right about the reason we need a loop here, i.e. to avoid
automatic merging/serialization of buffered chunks?


>   switch (stream.state) {
> case 'EOF': onend(); break;
> case 'EWOULDBLOCK': stream.poll(ondata); break;
> default: onerror(new Error('Stream read error: ' + stream.state));
>

Could we distinguish these three states by null, an empty
ArrayBuffer/DOMString, and a non-empty ArrayBuffer/DOMString? A read loop
using that convention is sketched below.
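
A sketch of Isaac's loop rewritten under that convention; processData(),
onend() and poll() are from his snippet above, not an existing API.

function drain() {
  for (;;) {
    var d = stream.read();
    if (d === null) {        // EOF
      onend();
      return;
    }
    if (d.length === 0) {    // would block (byteLength for an ArrayBuffer)
      stream.poll(drain);
      return;
    }
    processData(d);          // got data
  }
}
drain();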


> ReadableStream.prototype.readAll = function(onerror, ondata, onend) {
>   onpoll();
>   function onpoll() {
>

If we decide not to allow multiple concurrent read operations on a stream,
can we just use an event handler approach?

stream.onerror = ...
stream.ondata = ...


> 4. Passive data listening.  In Node v0.10, it is not possible to
> passively "listen" to the data passing through a stream without
> affecting the state of the stream.  This is corrected in v0.12, by
> making the read() method also emit a 'data' event whenever it returns
> data, so v0.8-style APIs work as they used to.
>
> The takeaway here is not to do what Node did, but to learn what Node
> learned: the passive-data-listening use-case is relevant.
>

What's the use case?


> 5. Piping.  It's important to consider how any proposed readable
> stream API will allow one to respond to backpressure, and how it
> relates to a *writable* stream API.  Data management from a source to
> a destination is the fundamental reason d'etre for streams, after all.
>

I'd have onwritable and onreadable handlers, make their thresholds
configurable, and let pipe() set them up, along the lines of the sketch
below.
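
A minimal sketch of that wiring against my strawman above; nothing here is a
shipped API, and the assumption that the final pump drains everything is a
simplification.

function pipe(src, dest) {
  function pump() {
    // Move only what the destination can absorb synchronously (backpressure).
    var n = Math.min(src.readableSize, dest.writableSize);
    if (n > 0)
      dest.write(src.read(n));
  }
  src.onreadable = pump;    // data arrived at the source
  dest.onwritable = pump;   // space opened up in the destination
  src.onclose = function() {
    pump();                 // simplification: assume this drains the rest
    dest.closeWrite();
  };
}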


Re: [XHR] Event firing order. XMLHttpRequestUpload then XMLHttpRequest or reverse

2013-09-03 Thread Takeshi Yoshino
On Fri, Aug 2, 2013 at 2:13 AM, Anne van Kesteren  wrote:

> On Tue, Jul 30, 2013 at 10:25 AM, Takeshi Yoshino 
> wrote:
> > Change on 2010/09/13
> >
> http://dev.w3.org/cvsweb/2006/webapi/XMLHttpRequest-2/Overview.src.html.diff?r1=1.138;r2=1.139;f=h
> > reversed the order of event firing for "request error" algorithm and
> send()
> > method to XHRUpload-then-XHR.
> >
> > send() (only loadstart event) and abort() method are still specified to
> fire
> > events in XHR-then-XHRUpload order. Is this intentional or we should make
> > them consistent?
>
> We should make them consistent in some manner. Firing on the main
> object last makes sense to me. It also makes some amount of conceptual
> sense to do the reverse for when the fetching starts, but I feel less
> strongly about that.


In the spec, we have three "cancel"s:
- cancel an instance of the fetch algorithm
- cancel network activity
- cancel a request

The spec says "cancel a request" is an abort error, which fires events in
XHRUpload-XHR order, yet abort() fires events in XHR-XHRUpload order. That
was confusing, so I filed this bug. First of all, I'd like to make this
clear.

What does "cancel a request" correspond to?

Re: loadstart, I don't have a strong opinion either.


Re: Overlap between StreamReader and FileReader

2013-05-18 Thread Takeshi Yoshino
On Sat, May 18, 2013 at 1:56 PM, Jonas Sicking  wrote:

> On Fri, May 17, 2013 at 9:38 PM, Jonas Sicking  wrote:
> > For Stream reading, I think I would do something like the following:
> >
> > interface Stream {
> >   AbortableProgressFuture readBinary(optional unsigned
> > long long size);
> >   AbortableProgressFuture readText(optional unsigned long long
> > size, optional DOMString encoding);
> >   AbortableProgressFuture readBlob(optional unsigned long long
> size);
> >
> >   ChunkedData readBinaryChunked(optional unsigned long long size);
> >   ChunkedData readTextChunked(optional unsigned long long size);
> > };
> >
> > interface ChunkedData : EventTarget {
> >   attribute EventHandler ondata;
> >   attribute EventHandler onload;
> >   attribute EventHandler onerror;
> > };
>
> Actually, we could even get rid of the ChunkedData interface and do
> something like
>
> interface Stream {
>   AbortableProgressFuture readBinary(optional unsigned
> long long size);
>   AbortableProgressFuture readText(optional unsigned long long
> size, optional DOMString encoding);
>   AbortableProgressFuture readBlob(optional unsigned long long size);
>
>   AbortableProgressFuture readBinaryChunked(optional unsigned
> long long size);
>   AbortableProgressFuture readTextChunked(optional unsigned long
> long size);
> };
>
> where the ProgressFutures returned from
> readBinaryChunked/readBinaryChunked delivers the data in the progress
> notifications only, and no data is delivered when the future is
> actually resolved. Though this might be abusing Futures a bit?
>

This, like the onmessage() approach, is also clearly a read-only-once
interface, because there's no attribute accumulating the result value. The
fact that the argument of the accept callback is void makes it clear, at
least to me, that the value passed to the progress callback is not an
accumulated result but each chunk separately.

As the state transitions of Stream should be simple enough to match Future,
I think it's technically OK, and even better, to employ it rather than a
"readyState + callback" approach.

But is everyone fine with making it mandatory to get used to programming
with Futures in order to use Stream?
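
For concreteness, a sketch of consuming readBinaryChunked();
AbortableProgressFuture and its progress callback are the proposal under
discussion (not a shipped API; I'm assuming a .progress() registration
method), and processChunk()/done() are hypothetical app functions.

var future = stream.readBinaryChunked();
future.progress(function(chunk) {
  // Each progress notification carries one ArrayBuffer chunk,
  // not an accumulated result.
  processChunk(chunk);
});
future.then(function() {
  done();  // resolved with no value once the read completes
}, function(err) {
  console.error(err);
});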


Re: Overlap between StreamReader and FileReader

2013-05-18 Thread Takeshi Yoshino
On Sat, May 18, 2013 at 1:38 PM, Jonas Sicking  wrote:

> For File reading I would now instead do something like
>
> partial interface Blob {
>   AbortableProgressFuture readBinary(BlobReadParams);
>   AbortableProgressFuture readText(BlobReadTextParams);
>   Stream readStream(BlobReadParams);
>

I'd name it "asStream". readStream operation here isn't intended to do any
"read", i.e. moving data between buffers, (like ArrayBufferView for
ArrayBuffer) right?

Or it's gonna clone the Blob's contents and wrap with the Stream interface
as we cannot "discard" contents of a Blob and it'll be inconsistent with
the semantics (implication?) we're going to give to the Stream interface?


Re: Overlap between StreamReader and FileReader

2013-05-17 Thread Takeshi Yoshino
On Fri, May 17, 2013 at 6:15 PM, Anne van Kesteren  wrote:

> The main problem is that Stream per Streams API is not what you expect
>  from an IO stream, but it's more what Blob should've been (Blob
> without synchronous size). What we want I think is a real IO stream.
> If we also need Blob without synchronous size is less clear to me.


Forgetting the File API completely for a moment... how about a simple
socket-like interface?

// Downloading big data

var remaining;
var type = null;
var payload = '';
function processData(data) {
  var offset = 0;
  while (offset < data.length) {
    if (type === null) {
      // Read the 1-byte type header and look up the payload size.
      type = data.substr(offset, 1);
      offset += 1;
      remaining = payloadSize(type);
      payload = '';
    } else {
      // Consume as much of the payload as this chunk provides.
      var consume = Math.min(remaining, data.length - offset);
      payload += data.substr(offset, consume);
      offset += consume;
      remaining -= consume;
    }
    if (type !== null && remaining === 0) {
      // A complete message has been assembled; dispatch it.
      if (type == FOO) {
        foo(payload);
      } else if (type == BAR) {
        bar(payload);
      }
      type = null;
    }
  }
}

var client = new XMLHttpRequest();
client.onreadystatechange = function() {
  if (this.readyState == this.LOADING) {
var responseStream = this.response;
responseStream.setBufferSize(1024);
responseStream.ondata = function(evt) {
  processData(evt.data);
  // Consumed data will be invalidated and memory used for the data
will be released.
};
responseStream.onclose = function() {
  // Reached end of response body
  ...
};
responseStream.start();
// Now responseStream starts forwarding events that happen on XHR to its
callbacks.
  }
};
client.open("GET", "/foobar");
client.responseType = "stream";
client.send();

// Uploading big data

var client = new XMLHttpRequest();
client.open("POST", "/foobar");

var requestStream = new WriteStream(1024);

var producer = new Producer();
producer.ondata = function(evt) {
  requestStream.send(evt.data);
};
producer.onclose = function() {
  requestStream.close();
};

client.send(requestStream);


Re: Overlap between StreamReader and FileReader

2013-05-17 Thread Takeshi Yoshino
Sorry, I just took over this work and so was misunderstanding some points in
the Streams API spec.

On Fri, May 17, 2013 at 6:09 PM, Anne van Kesteren  wrote:

> On Thu, May 16, 2013 at 10:14 PM, Takeshi Yoshino 
> wrote:
> > I skimmed the thread before starting this and saw that you were pointing
> out
> > some issues but didn't think you're opposing so much.
>
> Well yes. I removed integration from XMLHttpRequest a while back too.
>
>
> > Let me check requirements.
> >
> > d) The I/O API needs to work with synchronous XHR.
>
> I'm not sure this is a requirement. In particular in light of
> http://infrequently.org/2013/05/the-case-against-synchronous-worker-apis-2/
> and synchronous being worker-only it's not entirely clear to me this
> needs to be a requirement from the get-go.
>
>
> > e) Resource for already processed data should be able to be released
> > explicitly as the user instructs.
>
> Can't this happen transparently?


Yes. "Read data is automatically released" model is simple and good.

I thought the spec is clear about this but sorry it isn't. In the spec we
should say that StreamReader invalidates consumed data in Stream and buffer
for the invalidated bytes will be released at that point. Right?


> > g) The I/O API should allow for skipping unnecessary data without
> creating a
> > new object for that.
>
> This would be equivalent to reading and discarding?


I wanted to understand clearly what you meant by "discard" in your posts. I
wondered if you were suggesting that we have some method to skip incoming
data without creating any object holding the received data, i.e. something
like

s.skip(10);
s.readFrom(10);

not like

var useless_data_at_head_remaining = 256;
ondata = function(evt) {
  var bytes_received = evt.data.size();
  if (useless_data_at_head_remaining >= bytes_received) {
    useless_data_at_head_remaining -= bytes_received;
    return;
  }

  processUsefulData(evt.data.slice(useless_data_at_head_remaining));
  useless_data_at_head_remaining = 0;  // the rest of the stream is useful
}

If you meant the latter, I'm ok. I'd also call the latter "reading and
discarding".


> > Not requirement
> >
> > h) Some people wanted Stream to behave like not an object to store the
> data
> > but kinda dam put between response attribute and XHR's internal buffer
> (and
> > network stack) expecting that XHR doesn't consume data from the network
> > until read operation is invoked on Stream object. (i.e. Stream controls
> data
> > flow in addition to callback invocation timing). But it's no longer
> > considered to be a requirement.
>
> I'm not sure what this means. It sounds like something that indeed
> should be transparent from an API point-of-view, but it's hard to
> tell.
>

In the thread, Glenn was discussing what's the consumer and what's the
producer, IIRC.

I supposed that the idea behind Stream is to provide a flow control
interface over XHR's internal buffer. When the internal buffer is full, XHR
stops reading data from the network (e.g. a BSD socket). The buffer is
drained when, and only when, a read operation is made on the Stream object.

Stream has infinite length, but shouldn't have infinite capacity. It'll
swell up if the consumer (e.g. a media stream?) is slow.

Of course, browsers would set some limit, but that should be properly
discussed in the spec. Unless the limit is visible to scripts, they cannot
know whether they can watch only the "load" event or need to handle the
"progress" event and consume arriving data progressively to process all the
data; compare the two strategies sketched below.
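
A sketch of the two strategies, assuming a hypothetical pull-style stream
with onload/onprogress events and a read() method (none of this is a shipped
API; process() is a hypothetical app function):

// (1) Wait for the end and read everything at once. This only works if the
// whole body fits under the UA's internal buffer limit; otherwise the UA
// stops reading from the network once the buffer fills and "load" may never
// fire.
stream.onload = function() { process(stream.read()); };

// (2) Consume progressively. This works regardless of the limit, because
// each read() drains the internal buffer and lets the UA resume reading
// from the network.
stream.onprogress = function() { process(stream.read()); };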


> We also need to decide whether a stream supports multiple readers or
> whether you need to explicitly clone a stream somehow. And as far as
> the API goes, we should study existing libraries.
>

What use cases do you have in mind? Your example in the thread was passing
one to <video> but also accessing it manually using StreamReader. I think
it's unknown at what timing and in what amounts <video> consumes data from
the Stream, and it's really hard to make such coordination successful.

Are you thinking of use cases like mixing chat data and video contents in
the same HTTP response body?


Re: Overlap between StreamReader and FileReader

2013-05-16 Thread Takeshi Yoshino
I skimmed the thread before starting this and saw that you were pointing
out some issues, but I didn't think you were opposing it so strongly.



Let me check requirements.

a) We don't want to introduce a completely new object for streaming HTTP
read/write; instead, we'll realize it by adding some extensions to XHR.

b) The point to connect the I/O API and XHR should be only the send()
method argument and xhr.response attribute if possible.

c) The semantics (attribute X is valid when the state is ..., etc.) should
be kept the same as in the other modes.

d) The I/O API needs to work with synchronous XHR.

e) Resources for already-processed data should be able to be released
explicitly, as the user instructs.

f) Reading with maxSize argument (don't read too much).

g) The I/O API should allow for skipping unnecessary data without creating
a new object for that.

Not requirement

h) Some people wanted Stream to behave not like an object that stores the
data, but like a kind of dam placed between the response attribute and XHR's
internal buffer (and network stack), expecting that XHR doesn't consume data
from the network until a read operation is invoked on the Stream object
(i.e. Stream controls data flow in addition to callback invocation timing).
But this is no longer considered a requirement.

i) Reading with a size argument (invoke the callback only when data of the
specified amount is ready; only data of the specified size at the head of
the stream is passed to the handler)


On Fri, May 17, 2013 at 2:41 AM, Anne van Kesteren  wrote:

> On Thu, May 16, 2013 at 6:31 PM, Travis Leithead
>  wrote:
> > Since we have Streams implemented to some degree, I'd love to hear
> suggestions to improve it relative to IO. Anne can you summarize the points
> you've made on the other various threads?
>
> I recommend reading through
>
> http://lists.w3.org/Archives/Public/public-webapps/2013JanMar/thread.html#msg569
>
> Problems:
>
> * Too much complexity for being a Blob without synchronous size.
> * The API is bad. The API for File is bad too, but we cannot change
> it, this however is new.
>
> And I think we really want an IO API that's not about incremental, but
> can actively discard incoming data once it's processed.
>
>
> --
> http://annevankesteren.nl/
>


Overlap between StreamReader and FileReader

2013-05-16 Thread Takeshi Yoshino
The StreamReader proposed in the Streams API spec is almost the same as
FileReader. By adding a maxSize argument to the readAs methods (either as
new methods or as an optional argument on the existing ones) and adding a
readAsBlob method, FileReader can cover everything StreamReader provides
(see the sketch below). Has this already been discussed here?
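
A sketch of the extension I have in mind; the maxSize argument and
readAsBlob() are the proposal itself, not the current File API, and
processHead() is a hypothetical app function.

var reader = new FileReader();
reader.onload = function() {
  // Only the first 1024 bytes of the blob have been read.
  processHead(reader.result);
};
reader.readAsArrayBuffer(blob, 1024);  // maxSize is the proposed new argument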

I heard that some people who had this concern discussed it briefly and were
worried about derailing File API standardization.

We're planning to implement this in Chromium/Blink shortly.


Re: RfC: LCWD of WebSocket API; deadline August 30

2012-08-09 Thread Takeshi Yoshino
No technical comments.

A few editorial comments.

> CLOSING (numeric value 2)
> The connection is going through the closing handshake.

The readyState can also enter CLOSING when close() is called before
establishment. In that case, the connection is not going through the closing
handshake.

>  // networking
>   attribute EventHandleronopen;

insert an SP between EventHandler and onopen
(already fixed in the editor's draft)

> When the user agent validates the server's response during the "establish
a WebSocket connection" algorithm, if the status code received from the
server is not 101 (e.g. it is a redirect), the user agent must fail the
websocket connection.

websocket -> WebSocket

> If the user agent was required to fail the websocket connection or the
WebSocket connection is closed with prejudice, fire a simple event named
error at the WebSocket object. [WSP]

websocket -> WebSocket

> interface CloseEvent : Event {
>   readonly attribute boolean wasClean;
>   readonly attribute unsigned short code;
>   readonly attribute DOMString reason;
> };

the reason attribute is missing an anchor to its description


Re: Proposal: add websocket close codes for "server not found" and/or "too many websockets open"

2012-05-28 Thread Takeshi Yoshino
The protocol spec defines 1015, but I think we should not pass it through to
the WebSocket API.
http://lists.w3.org/Archives/Public/public-webapps/2011OctDec/0437.html

I think 1006 is the right code for all of WebSocket handshake failure, TLS
failure and TCP connection failure. If the language in
http://tools.ietf.org/html/rfc6455#section-7.4.1 is not good, we can add
"cannot be opened or" before "closed abnormally" for clarification.

Chrome's onerror issue will be fixed soon. We agree with Simon that it's
clearly specified in http://tools.ietf.org/html/rfc6455#section-4.1. Thanks
for reporting it, Jason.
http://code.google.com/p/chromium/issues/detail?id=128057


Re: [websockets] Making optional extensions mandatory in the API (was RE: Getting WebSockets API to Last Call)

2011-07-28 Thread Takeshi Yoshino
On Thu, Jul 28, 2011 at 03:11, Anne van Kesteren  wrote:

> HTML5 is mostly transport-layer agnostic.
>

HTML5 is transport-layer agnostic even though it involves communication with
the server in handling certain elements. The WebSocket API, by contrast,
specifies the transport layer in much detail. What makes for this difference
of philosophy in building these specs? If the role of browser specs is to
define how mainstream modern browsers should behave, we might mandate gzip
for HTTP somewhere. That's what I tried to convey with the analogy.


> I am not sure why we are going through this theoretical side-quest on where
> we should state what browsers are required to implement from HTTP to
> function.


Please see my comment at the bottom half of
http://lists.w3.org/Archives/Public/public-webapps/2011JulSep/0498.html. I
don't intend to focus on the theoretical side-quest.

As I said, having heard your and Hixie's opinions, I now understand
W3C/WHATWG's position of requiring some good compression. That's acceptable.
Banning something considered to be bad is also fine. So, please ban
deflate-stream rather than requiring it. According to the reply from Hixie
to my mail on wha...@whatwg.org
(http://lists.whatwg.org/htdig.cgi/whatwg-whatwg.org/2011-July/032650.html),
one of the API spec's missions is considered to be giving a clear/normative
guideline to browser developers on how to implement WebSocket by saying
yes/no to everything listed in the IETF spec. "Require X" / "forbid X" are
used to make it clear whether X is required. For extensions listed in the
core spec, I may support this aggressive stance (use of "forbid") to guide
browser developers without ambiguity.

But extending this aggressive stance to things not listed in the core spec
is too much, I think.

As long as "A, B, C, ..." and "S, T, U, ..." cover what is listed in the
core spec, text like this is enough, I believe:
- "A, B, C, ..." must be implemented
- "S, T, U, ..." must not be implemented
- any other extensions are not required to be implemented.

No one knows yet what kind of extensions will finally be taken as the best
for WebSocket. Without getting enough data and community support by
conducting live experiments, it's unreasonable to require method X and ban
the others. While conducting such experiments, user agents are not standard
compliant. If W3C/WHATWG are confident that violating specs is the right way
to evolve specs, I'll stop arguing this. Yes, we'll make our browsers
standard non-compliant to seek ways to improve WebSocket.


> The HTTP protocol has its own set of problems and this is all largely
> orthogonal to what we should do with the WebSocket protocol and API.
>

Sure


> If you do not think this particular extension makes sense raise it as a
> last call issue with the WebSocket protocol and ask for the API to require
> implementations to not support it. Lets not meta-argue about this.


Yeah. Getting an answer to the meta-argument is not my goal.

Thanks,
Takeshi


Re: [websockets] Making optional extensions mandatory in the API (was RE: Getting WebSockets API to Last Call)

2011-07-27 Thread Takeshi Yoshino
On Thu, Jul 28, 2011 at 02:03, Anne van Kesteren  wrote:

> On Wed, 27 Jul 2011 08:49:57 -0700, Takeshi Yoshino 
> wrote:
>
>> What do you mean by "more places"?
>>
>
> XMLHttpRequest is not the sole API for HTTP; there are also various HTML elements, etc.
> So indicating what parts of the HTTP protocol are mandatory for browsers is
> not really in scope for XMLHttpRequest.


So, let me correct my text by s/XHR/HTML5 <http://www.w3.org/TR/html5/>/.


Re: [websockets] Making optional extensions mandatory in the API (was RE: Getting WebSockets API to Last Call)

2011-07-27 Thread Takeshi Yoshino
On Thu, Jul 28, 2011 at 00:06, Anne van Kesteren  wrote:

> On Wed, 27 Jul 2011 03:35:03 -0700, Takeshi Yoshino 
> wrote:
>
>> Is the new XHR spec going to make gzip mandatory for its underlying HTTP?
>>
>
> I do not think that analogy makes sense. The WebSocket protocol can only be
> used through the WebSocket API, HTTP is prevalent in more places.


What do you mean by "more places"?


> Having said that, XMLHttpRequest does place constraints on HTTP. E.g. it
> requires redirects to be followed, it does not expose 1xx responses, only
> works cross-origin in combination with CORS, etc.


I agree that there are some constraints that must be placed on the
underlying protocol to make it useful/secure in browsers.


Re: [websockets] Making optional extensions mandatory in the API (was RE: Getting WebSockets API to Last Call)

2011-07-27 Thread Takeshi Yoshino
Is the new XHR spec going to make gzip mandatory for its underlying HTTP?

On Tue, Jul 26, 2011 at 06:05, Aryeh Gregor wrote:

> From the discussion here, it sounds like there are problems with
> WebSockets compression as currently defined.  If that's the case, it
> might be better for the IETF to just drop it from the protocol for now
> and leave it for a future version, but that's up to them.  As far as
> we're concerned, if the option is really a bad idea to start with, it
> might make sense for us to prohibit it rather than require it, but
> there's no reason at all we have to leave it optional for web browsers
> just because it's optional for other WebSockets implementations.
>

Regarding deflate-stream, I think prohibiting is better than requiring.

But I still don't understand the benefit of banning any extension other than
what is specified in the API spec.

There are two different assertions made by W3C side.

(a) it's not acceptable to make support (== request) of "good-compression"
optional
(b) it's not acceptable to allow any other compression/extension than
specified in the API spec

(a) is supported by the discussion you and Anne made using an analogy with
HTTP. I may agree with this. (b) is what Hixie was asserting in the bug
entry. I'd like to see clear support for (b).

No one knows yet which kind of compression will finally win for WebSocket.
I'd like to see ideas about what the evolution of WebSocket will look like.
With (b), to experimentally implement an extension/compression not specified
in the API spec, we would have to make our browser non-compliant with the
W3C spec.

I'd suggest that, once better-deflate is ready at the IETF, W3C use text
like "the user agent MUST request the better-deflate extension" instead of
"JUST the better-deflate extension".

Thanks,
Takeshi


Re: [websockets] Making optional extensions mandatory in the API (was RE: Getting WebSockets API to Last Call)

2011-07-23 Thread Takeshi Yoshino
Resending; I made a mistake in processing the public-webapps@w3.org
confirmation page.

On Fri, Jul 22, 2011 at 19:45, Takeshi Yoshino  wrote:

> On Fri, Jul 22, 2011 at 18:54, Anne van Kesteren  wrote:
>
>> On Fri, 22 Jul 2011 07:03:34 +0200, Takeshi Yoshino 
>> wrote:
>>
>>> Think of these facts, the only "dominant implementation" concern I can
>>> come up with for WebSocket compression is that big service providers may take
>>> an aggressive policy that they require users to use only WebSocket client
>>> with compression capability to reduce bandwidth. As a result clients with no
>>> compression capability may, in effect, be kicked out of WebSocket world.
>>>
>>> I can understand that concern. Is this true for the HTTP world? I.e.
>>> clients that send "Accept-Encoding: \r\n" cannot live?
>>>
>>
>> Yes. Optional features for browsers do not work. For servers it is fine if
>> they do not support compression or whatnot, but browsers pretty much need to
>> align on feature set.
>
>
> In summary, your point seems to be that we must choose a feature set that
> the majority considers optimal, like HTTP/gzip, and ask browsers to support
> it by specifying it in the W3C spec (*). I see that. It might make sense.
> However, deflate-stream is not yet considered the optimal choice in the
> HyBi WG, and we're trying to introduce a better one. Some are even doubting
> whether deflate-stream is worth using compared to an identity stream.
>
> Requiring all browsers to request (== implement) deflate-stream may, as a
> result, amount to asking everyone to do throwaway work. Is this acceptable?
>
> Based on W3C's stance (*) and the IETF's stance, the only landing point is
> to initially require browsers to use identity encoding except when
> experimenting with new compression and, when the IETF provides a new
> compression extension good enough to recommend, to update the API spec to
> switch to that.
>
> Takeshi
>
>