Re: Overlap between StreamReader and FileReader

2013-05-17 Thread Jonas Sicking
On Fri, May 17, 2013 at 9:38 PM, Jonas Sicking  wrote:
> For Stream reading, I think I would do something like the following:
>
> interface Stream {
>   AbortableProgressFuture<ArrayBuffer> readBinary(
>       optional unsigned long long size);
>   AbortableProgressFuture<DOMString> readText(
>       optional unsigned long long size, optional DOMString encoding);
>   AbortableProgressFuture<Blob> readBlob(optional unsigned long long size);
>
>   ChunkedData readBinaryChunked(optional unsigned long long size);
>   ChunkedData readTextChunked(optional unsigned long long size);
> };
>
> interface ChunkedData : EventTarget {
>   attribute EventHandler ondata;
>   attribute EventHandler onload;
>   attribute EventHandler onerror;
> };

Actually, we could even get rid of the ChunkedData interface and do
something like

interface Stream {
  AbortableProgressFuture<ArrayBuffer> readBinary(
      optional unsigned long long size);
  AbortableProgressFuture<DOMString> readText(
      optional unsigned long long size, optional DOMString encoding);
  AbortableProgressFuture<Blob> readBlob(optional unsigned long long size);

  AbortableProgressFuture<void> readBinaryChunked(
      optional unsigned long long size);
  AbortableProgressFuture<void> readTextChunked(
      optional unsigned long long size);
};

where the ProgressFutures returned from
readBinaryChunked/readTextChunked deliver the data in the progress
notifications only, and no data is delivered when the future is
actually resolved. Though this might be abusing Futures a bit?
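
A minimal sketch of how a consumer might use this shape (purely
illustrative; the exact way to register a progress callback on a
ProgressFuture is an assumption here, not something any spec defines):

var future = stream.readBinaryChunked();
future.progress(function(chunk) {
  // chunk holds the bytes received since the previous progress notification
  handleChunk(chunk);
});
future.then(function() {
  // resolution carries no data; the stream (or requested size) is done
  done();
}, function(err) {
  reportError(err);
});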

/ Jonas



Re: Overlap between StreamReader and FileReader

2013-05-17 Thread Jonas Sicking
I figured I should chime in with some ideas of my own because, well, why not :-)

First off, I definitely think the semantic model of a Stream shouldn't
be "a Blob without a size", but rather "a Blob without a size that you
can only read from once". I.e. the implementation should be able to
discard data as it passes it to a reader.

That said, many Stream APIs support the concept of a "T". A T splits a
Stream into two Streams, which enables having multiple consumers of the
same data source. However, when a T is created, it only returns the
data that has so far been unread from the original Stream. It does not
return the data from the beginning of the stream, since that would
prevent streams from discarding data as soon as it has been read.
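
A tiny sketch of the idea, with a hypothetical tee() method (the name
and shape are made up for illustration only):

// Split off a second stream for another consumer. The copy only sees
// data that was still unread at the time of the call; bytes already
// consumed from the original are gone for good.
var copy = stream.tee();   // "tee" is an illustrative name only
consumerA(stream);
consumerB(copy);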

If we are going to have a StreamReader API, then I don't think we
should model it after FileReader. FileReader unfortunately followed
the model of XMLHttpRequest (at the request of several developers),
but that is a pretty terrible API, and I believe we can do much
better. And obviously we should do something based on Futures :-)

For File reading I would now instead do something like

partial interface Blob {
  AbortableProgressFuture<ArrayBuffer> readBinary(BlobReadParams params);
  AbortableProgressFuture<DOMString> readText(BlobReadTextParams params);
  Stream readStream(BlobReadParams params);
};

dictionary BlobReadParams {
  long long start;
  long long length;
};

dictionary BlobReadTextParams : BlobReadParams {
  DOMString encoding;
};
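
For example, reading a slice of a Blob as text might look like this
(assuming the returned AbortableProgressFuture is then-able like other
Futures; the parameter values are illustrative):

blob.readText({ start: 0, length: 1024, encoding: "utf-8" }).then(
  function(text) {
    // first kilobyte of the blob, decoded as UTF-8
    console.log(text);
  },
  function(err) {
    console.error("read failed", err);
  });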

For Stream reading, I think I would do something like the following:

interface Stream {
  AbortableProgressFuture<ArrayBuffer> readBinary(
      optional unsigned long long size);
  AbortableProgressFuture<DOMString> readText(
      optional unsigned long long size, optional DOMString encoding);
  AbortableProgressFuture<Blob> readBlob(optional unsigned long long size);

  ChunkedData readBinaryChunked(optional unsigned long long size);
  ChunkedData readTextChunked(optional unsigned long long size);
};

interface ChunkedData : EventTarget {
  attribute EventHandler ondata;
  attribute EventHandler onload;
  attribute EventHandler onerror;
};

For all of the above functions, if a size is not passed, the rest of
the Stream is read.

The ChunkedData interface allows incremental reading of a stream: as
soon as data is available, a "data" event is fired on the ChunkedData
object containing the data received since the last "data" event. Once
we've reached the end of the stream, or the requested size, the "load"
event is fired on the ChunkedData object.

So the read* functions allow a consumer to pull data, whereas the
read*Chunked functions let consumers have the data pushed at them.
There are also other potential functions we could add which allow
hybrids, but that seems overly complex for now.

Other functions we could add are peekText and peekBinary, which allow
looking into the stream to determine whether you're able to consume
the data that's there, or whether you should pass the Stream to some
other consumer.
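
Something like the following, where the peek size argument and the
dispatch logic are purely illustrative:

stream.peekText(4).then(function(head) {
  // Peeking does not consume, so whichever consumer we pick below
  // still sees these bytes.
  if (head === "RIFF") {
    handOffToMediaParser(stream);   // some other consumer
  } else {
    consumeOurselves(stream);       // e.g. via readTextChunked()
  }
});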

We might also want to add an "eof" flag to the Stream interface, as
well as an event which is fired when the end of the stream is reached
(or should that be modeled using a Future?).

/ Jonas

On Fri, May 17, 2013 at 5:02 AM, Takeshi Yoshino  wrote:
> On Fri, May 17, 2013 at 6:15 PM, Anne van Kesteren  wrote:
>>
>> The main problem is that Stream per Streams API is not what you expect
>> from an IO stream, but it's more what Blob should've been (Blob
>> without synchronous size). What we want I think is a real IO stream.
>> Whether we also need a Blob without synchronous size is less clear to me.
>
>
> Forgetting File API completely, for example, ... how about a simple
> socket-like interface?
>
> // Downloading big data
>
> var remaining;
> var type = null;
> var payload = '';
> function processData(data) {
>   var offset = 0;
>   while (offset < data.length) {
>     if (!type) {
>       type = data.substr(offset, 1);
>       offset += 1;
>       remaining = payloadSize(type);
>       payload = '';
>     }
>     var consume = Math.min(remaining, data.length - offset);
>     payload += data.substr(offset, consume);
>     offset += consume;
>     remaining -= consume;
>     if (remaining == 0) {
>       // Dispatch the completed message.
>       if (type == FOO) {
>         foo(payload);
>       } else if (type == BAR) {
>         bar(payload);
>       }
>       type = null;
>     }
>   }
> }
>
> var client = new XMLHttpRequest();
> client.onreadystatechange = function() {
>   if (this.readyState == this.LOADING) {
>     var responseStream = this.response;
>     responseStream.setBufferSize(1024);
>     responseStream.ondata = function(evt) {
>       processData(evt.data);
>       // Consumed data will be invalidated and the memory used for it will
>       // be released.
>     };
>     responseStream.onclose = function() {
>       // Reached end of response body
>       ...
>     };
>     responseStream.start();
>     // Now responseStream starts forwarding events happening on XHR to its
>     // callbacks.
>   }
> };
> client.open("GET", "/foobar");
> client.responseType = "stream";
> client.send();
>

Re: [XHR] anonymous flag

2013-05-17 Thread Hallvord Reiar Michaelsen Steen
Den 17. mai 2013 kl. 11:53 skrev Anne van Kesteren :

> On Thu, May 16, 2013 at 10:35 PM, Hallvord Reiar Michaelsen Steen
>  wrote:
>> Anonymous mode still seems like useless complexity to me, so I'm still in 
>> favour of dropping it.
> 
> Right. I don't really get the feeling you're considering the arguments
> carefully

I am trying.  "Confused deputy" is a general and somewhat abstract class of 
issues. The only way I know to "consider" such an abstraction carefully is to 
try to translate it to real world scenarios, then discuss those, so that's what 
I have been trying to do. That approach also seems to match the "use cases, use 
cases" approach spec authors generally aim for. 

> and since nobody else seems to be participating here (much
> like before) I'm not sure this is a good use of our time.

I hope a few more people will chime in, sure. I would like to assure anyone who 
considers voicing an opinion that I will not consider your participation in the 
discussion a waste of MY time.
Hallvord



[Bug 19771] Need way to determine what keys are supported on device.

2013-05-17 Thread bugzilla
https://www.w3.org/Bugs/Public/show_bug.cgi?id=19771

Gary Kacmarcik  changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
                 CC|                            |gary...@google.com,
                   |                            |public-webapps@w3.org
          Component|DOM3 Events                 |UI Events
           Assignee|tra...@microsoft.com        |gary...@google.com

--- Comment #2 from Gary Kacmarcik  ---
We're trying to do something similar with queryKeyCaps in UI Events.

Since this certainly won't be part of D3E at this point, I'm changing the
component to "UI Events" so we can consider it for that spec.

-- 
You are receiving this mail because:
You are on the CC list for the bug.



Re: Overlap between StreamReader and FileReader

2013-05-17 Thread Takeshi Yoshino
On Fri, May 17, 2013 at 6:15 PM, Anne van Kesteren  wrote:

> The main problem is that Stream per Streams API is not what you expect
>  from an IO stream, but it's more what Blob should've been (Blob
> without synchronous size). What we want I think is a real IO stream.
> Whether we also need a Blob without synchronous size is less clear to me.


Forgetting File API completely, for example, ... how about a simple
socket-like interface?

// Downloading big data

var remaining;
var type = null;
var payload = '';
function processData(data) {
  var offset = 0;
  while (offset < data.length) {
    if (!type) {
      type = data.substr(offset, 1);
      offset += 1;
      remaining = payloadSize(type);
      payload = '';
    }
    var consume = Math.min(remaining, data.length - offset);
    payload += data.substr(offset, consume);
    offset += consume;
    remaining -= consume;
    if (remaining == 0) {
      // Dispatch the completed message.
      if (type == FOO) {
        foo(payload);
      } else if (type == BAR) {
        bar(payload);
      }
      type = null;
    }
  }
}

var client = new XMLHttpRequest();
client.onreadystatechange = function() {
  if (this.readyState == this.LOADING) {
    var responseStream = this.response;
    responseStream.setBufferSize(1024);
    responseStream.ondata = function(evt) {
      processData(evt.data);
      // Consumed data will be invalidated and the memory used for it will
      // be released.
    };
    responseStream.onclose = function() {
      // Reached end of response body
      ...
    };
    responseStream.start();
    // Now responseStream starts forwarding events happening on XHR to its
    // callbacks.
  }
};
client.open("GET", "/foobar");
client.responseType = "stream";
client.send();

// Uploading big data

var client = new XMLHttpRequest();
client.open("POST", "/foobar");

var requestStream = new WriteStream(1024);

var producer = new Producer();
producer.ondata = function(evt) {
  requestStream.send(evt.data);
};
producer.onclose = function() {
  requestStream.close();
};

client.send(requestStream);


Re: Overlap between StreamReader and FileReader

2013-05-17 Thread Anne van Kesteren
On Fri, May 17, 2013 at 12:09 PM, Takeshi Yoshino  wrote:
> I thought the spec was clear about this, but sorry, it isn't. In the spec we
> should say that StreamReader invalidates consumed data in the Stream and that
> the buffer for the invalidated bytes will be released at that point. Right?

I'm glad we're all getting on the same page now. I think there might
be use cases for a Blob without size (i.e. where you do not discard
the data after consuming) which is what Stream seems to be today, but
I'm not sure we should call that Stream.

And I think for XMLHttpRequest at least we want an API where data can
be discarded once processed so you do not have to keep multi-megabyte
sound files on disk if all you want is to provide a (potentially
post-processed) live stream.


> I wanted to understand clearly what you meant by "discard" in your posts. I
> wondered if you were suggesting that we have some method to skip incoming
> data without creating any object holding received data. I.e. something like
>
> s.skip(10);
> s.readFrom(10);
>
> not like
>
> var useless_data_at_head_remaining = 256;
> ondata = function(evt) {
>   var bytes_received = evt.data.size();
>   if (useless_data_at_head_remaining > bytes_received) {
>     useless_data_at_head_remaining -= bytes_received;
>     return;
>   }
>
>   processUsefulData(evt.data.slice(useless_data_at_head_remaining));
>   useless_data_at_head_remaining = 0;
> }
>
> If you meant the latter, I'm ok. I'd also call the latter "reading and
> discarding".

Yeah that seems about right.


> What use cases do you have in mind? Your example in the thread was
> passing one to <video> but also accessing it manually using StreamReader.
> I think it's unknown when and how much data <video> consumes from the
> Stream, so it's really hard to make such coordination from script
> successful.
>
> Are you thinking of use cases like mixing chat data and video content in
> the same HTTP response body?

I haven't really thought about what I'd use it for, but I looked at
e.g. Dart and it seems to have a concept of broadcast streams. Maybe
you'd analyze the incoming bits in one function and in another you'd
process the incoming data and do something with it. Above all though,
it needs to be clear what happens, and for IO streams where you do not
want to keep all the data around (unlike the current Streams API) it's
a question that needs answering.


--
http://annevankesteren.nl/



Re: Overlap between StreamReader and FileReader

2013-05-17 Thread Takeshi Yoshino
Sorry, I just took over this work, so I was misunderstanding some points
in the Streams API spec.

On Fri, May 17, 2013 at 6:09 PM, Anne van Kesteren  wrote:

> On Thu, May 16, 2013 at 10:14 PM, Takeshi Yoshino 
> wrote:
> > I skimmed the thread before starting this and saw that you were pointing
> > out some issues but didn't think you were opposing so much.
>
> Well yes. I removed integration from XMLHttpRequest a while back too.
>
>
> > Let me check requirements.
> >
> > d) The I/O API needs to work with synchronous XHR.
>
> I'm not sure this is a requirement. In particular in light of
> http://infrequently.org/2013/05/the-case-against-synchronous-worker-apis-2/
> and synchronous being worker-only it's not entirely clear to me this
> needs to be a requirement from the get-go.
>
>
> > e) Resource for already processed data should be able to be released
> > explicitly as the user instructs.
>
> Can't this happen transparently?


Yes. "Read data is automatically released" model is simple and good.

I thought the spec was clear about this, but sorry, it isn't. In the spec we
should say that StreamReader invalidates consumed data in the Stream and that
the buffer for the invalidated bytes will be released at that point. Right?


> > g) The I/O API should allow for skipping unnecessary data without creating
> > a new object for that.
>
> This would be equivalent to reading and discarding?


I wanted to understand clearly what you meant by "discard" in your posts. I
wondered if you were suggesting that we have some method to skip incoming
data without creating any object holding received data. I.e. something like

s.skip(10);
s.readFrom(10);

not like

var useless_data_at_head_remaining = 256;
ondata = function(evt) {
  var bytes_received = evt.data.size();
  if (useless_data_at_head_remaining > bytes_received) {
    useless_data_at_head_remaining -= bytes_received;
    return;
  }

  processUsefulData(evt.data.slice(useless_data_at_head_remaining));
  useless_data_at_head_remaining = 0;
}

If you meant the latter, I'm ok. I'd also call the latter "reading and
discarding".


> > Not requirement
> >
> > h) Some people wanted Stream to behave not like an object that stores the
> > data, but more like a dam put between the response attribute and XHR's
> > internal buffer (and network stack), expecting that XHR doesn't consume
> > data from the network until a read operation is invoked on the Stream
> > object (i.e. Stream controls data flow in addition to callback invocation
> > timing). But it's no longer considered to be a requirement.
>
> I'm not sure what this means. It sounds like something that indeed
> should be transparent from an API point-of-view, but it's hard to
> tell.
>

In the thread, Glenn was discussing what's the consumer and what's the
producer, IIRC.

I supposed that the idea behind Stream is to provide a flow control
interface over XHR's internal buffer. When the internal buffer is full,
XHR stops reading data from the network (e.g. the BSD socket). The
buffer will be drained when, and only when, a read operation is made on
the Stream object.

Stream has infinite length, but shouldn't have infinite capacity. It'll
swell up if the consumer (e.g. media stream?) is slow.

Of course, browsers would set some limit, but that should be properly
discussed in the spec. Unless the limit is visible to scripts, they
cannot know whether they can just watch the "load" event or need to
handle "progress" events and consume arriving data progressively in
order to process all of the data.
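
To illustrate (reusing the responseStream sketch from my earlier mail;
the bufferLimit attribute here is purely hypothetical):

var expectedSize = Number(client.getResponseHeader("Content-Length"));
if (expectedSize <= responseStream.bufferLimit) {  // hypothetical attribute
  // Small enough to buffer: it is safe to wait for the end and read once.
  responseStream.onclose = function() { processAll(); };
} else {
  // Larger than the buffer: the script must drain "data" events
  // progressively, otherwise the producer stalls once the buffer fills.
  responseStream.ondata = function(evt) { process(evt.data); };
}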


> We also need to decide whether a stream supports multiple readers or
> whether you need to explicitly clone a stream somehow. And as far as
> the API goes, we should study existing libraries.
>

What use cases do you have in mind? Your example in the thread was
passing one to <video> but also accessing it manually using StreamReader.
I think it's unknown when and how much data <video> consumes from the
Stream, so it's really hard to make such coordination from script
successful.

Are you thinking of use cases like mixing chat data and video content in
the same HTTP response body?


Re: [XHR] anonymous flag

2013-05-17 Thread Anne van Kesteren
On Fri, May 17, 2013 at 11:24 AM, Charles McCathie Nevile
 wrote:
> With respect to your use case for keeping anonymous I agree with Hallvord. I
> cannot think of a real use case for a browser-like thing that accepts
> arbitrary URLs. Could you please provide some more explanation of the real
> scenarios for this use case?

We have been over this many times in the discussions over CORS and
UMP, including whether or not we care about confused deputy attacks
and ambient authority. At the time we decided we did which is why we
offered this feature.

In addition, there have been requests to have more control over whether
cookies are transmitted (as they take up space) and as to whether the
Referer header is included in requests (not the same as setting its
value to null, which is not what setRequestHeader() can be used for
anyway, as it's for additional headers, not controlling existing
ones). See e.g. http://wiki.whatwg.org/wiki/Meta_referrer for a
feature that seems to be getting some traction. Whether these should
be combined or not is unclear to me (UMP needs both).

I don't really feel it's responsible to remove this feature at this
point without anyone involved in the original discussion speaking up.
But then since it's not implemented maybe we can ignore that. :/


--
http://annevankesteren.nl/



Re: [XHR] anonymous flag

2013-05-17 Thread Charles McCathie Nevile

Hi Anne,


Please stick to the technical discussion without making assertions about  
people's motives or actions for which you don't have concrete evidence.



On Fri, 17 May 2013 13:53:08 +0400, Anne van Kesteren  wrote:

> On Thu, May 16, 2013 at 10:35 PM, Hallvord Reiar Michaelsen Steen
>  wrote:
>> Anonymous mode still seems like useless complexity to me, so I'm still
>> in favour of dropping it.
>
> Right. I don't really get the feeling you're considering the arguments
> carefully and since nobody else seems to be participating here (much
> like before) I'm not sure this is a good use of our time.


Silence is not very useful as evidence that nobody cares, since it may also  
mean everybody agrees (but then, for similar reasons, it isn't strong  
evidence that everybody agrees either).


Since Hallvord's argument made sense and was in an active discussion it  
seemed unnecessary to repeat it or "me too" it, but in the interest of  
clarity:


The OpenID scenario seems to match common real scenarios, and therefore  
the risk Hallvord identifies seems worth being careful about.


With respect to your use case for keeping anonymous I agree with Hallvord.  
I cannot think of a real use case for a browser-like thing that accepts  
arbitrary URLs. Could you please provide some more explanation of the real  
scenarios for this use case?


cheers

Chaals

--
Charles McCathie Nevile - Consultant (web standards) CTO Office, Yandex
  cha...@yandex-team.ru Find more at http://yandex.com



Re: [XHR] anonymous flag

2013-05-17 Thread Anne van Kesteren
On Thu, May 16, 2013 at 10:35 PM, Hallvord Reiar Michaelsen Steen
 wrote:
> Anonymous mode still seems like useless complexity to me, so I'm still in 
> favour of dropping it.

Right. I don't really get the feeling you're considering the arguments
carefully and since nobody else seems to be participating here (much
like before) I'm not sure this is a good use of our time.


--
http://annevankesteren.nl/



Re: Overlap between StreamReader and FileReader

2013-05-17 Thread Anne van Kesteren
On Thu, May 16, 2013 at 8:26 PM, Feras Moussa  wrote:
> Can you please go into a bit more detail? I've read through the thread, and
> it mostly focuses on the details of how a Stream is received from XHR and
> what behaviors can be expected - it only lightly touches on how you can
> operate on a stream after it is received.

The main problem is that Stream per Streams API is not what you expect
from an IO stream, but it's more what Blob should've been (Blob
without synchronous size). What we want I think is a real IO stream.
Whether we also need a Blob without synchronous size is less clear to me.


> I do agree the API
> should allow for scenarios where data can be discarded, given that is an
> advantage of a Stream over a Blob.

It does not seem to do that currently though. It's also not clear to
me we want to allow multiple readers by default.


> That said, Anne, what is your suggestion for how Streams can be consumed?

I don't have one yet.


--
http://annevankesteren.nl/



Re: Overlap between StreamReader and FileReader

2013-05-17 Thread Anne van Kesteren
On Thu, May 16, 2013 at 10:14 PM, Takeshi Yoshino  wrote:
> I skimmed the thread before starting this and saw that you were pointing out
> some issues but didn't think you were opposing so much.

Well yes. I removed integration from XMLHttpRequest a while back too.


> Let me check requirements.
>
> d) The I/O API needs to work with synchronous XHR.

I'm not sure this is a requirement. In particular in light of
http://infrequently.org/2013/05/the-case-against-synchronous-worker-apis-2/
and synchronous being worker-only it's not entirely clear to me this
needs to be a requirement from the get-go.


> e) Resource for already processed data should be able to be released
> explicitly as the user instructs.

Can't this happen transparently?


> g) The I/O API should allow for skipping unnecessary data without creating a
> new object for that.

This would be equivalent to reading and discarding?


> Not requirement
>
> h) Some people wanted Stream to behave not like an object that stores the
> data, but more like a dam put between the response attribute and XHR's
> internal buffer (and network stack), expecting that XHR doesn't consume data
> from the network until a read operation is invoked on the Stream object
> (i.e. Stream controls data flow in addition to callback invocation timing).
> But it's no longer considered to be a requirement.

I'm not sure what this means. It sounds like something that indeed
should be transparent from an API point-of-view, but it's hard to
tell.


We also need to decide whether a stream supports multiple readers or
whether you need to explicitly clone a stream somehow. And as far as
the API goes, we should study existing libraries.


--
http://annevankesteren.nl/