Re: Polished FileSystem API proposal

2013-10-30 Thread pira...@gmail.com
+1 to symbolic links, they have almost the same functionality as hard
links and are more secure and flexible (they are usually just plain text
files...).
On 30/10/2013 01:42, Brendan Eich bren...@mozilla.com wrote:

 Hard links are peculiar to Unix filesystems. Not interoperable across all
 OSes. Symbolic links, OTOH...

 /be

  Brian Stell bst...@google.com
 October 29, 2013 4:53 PM
 I meant

eg, V1/dir1/file1, V2/dir1/file1.





Re: Overlap between StreamReader and FileReader

2013-10-30 Thread Takeshi Yoshino
On Wed, Oct 23, 2013 at 11:42 PM, Aymeric Vitte vitteayme...@gmail.com wrote:

  Your filter idea seems to be equivalent to a createStream that I
 suggested some time ago (like node). What about:

 var encryptionPromise = crypto.subtle.encrypt(aesAlgorithmEncrypt, aesKey,
 sourceStream).createStream();

 So you don't need to modify the APIs where you cannot specify the
 responseType.

 I was thinking of adding stop/resume and pause/unpause:

 - stop: insert eof in the stream


close() does this.
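
For illustration, a minimal sketch assuming the draft Stream's write() and
close() methods (the constructor argument shown is an assumption):

  var stream = new Stream("application/octet-stream");
  stream.write(chunk1);
  stream.write(chunk2);
  stream.close(); // inserts eof; readers see end-of-stream after queued data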


  Example: finalize the hash when eof is received

 - resume: restart from where the stream stopped
 Example: restart the hash from the state the operation was in before
 receiving eof (related to Issue22 in WebCrypto, which was closed without any
 solution; might imply cloning the state of the operation)


Should it really be a part of the Streams API? How about just making the
filter (not the Stream itself) returned by WebCrypto reusable and adding
some method to recycle it?
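
A rough sketch of what that could look like (recycle() here is purely
hypothetical, and createEncryptionFilter is the filter factory proposed
later in this thread):

  var encryptionFilter =
      crypto.subtle.createEncryptionFilter(aesAlgorithmEncrypt, aesKey);
  sourceStream.pipe(encryptionFilter);   // first run, finishes at eof
  encryptionFilter.recycle();            // hypothetical: reset internal state
  resumedStream.pipe(encryptionFilter);  // reuse the same filter afterwards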


 - pause: pause the stream, do not send eof


Sorry, what will be paused? Output?


  - unpause: restart the stream

 And flow control should be back and explicit. I'm not sure right now how to
 define it, but I think it's impossible for a js app to do precise flow
 control, and for existing APIs like WebSockets it's not easy to control the
 flow and avoid overloading the UA in some situations.

 Regards,

 Aymeric

 On 21/10/2013 13:14, Takeshi Yoshino wrote:

  Sorry for the blank of ~2 weeks.

  On Fri, Oct 4, 2013 at 5:57 PM, Aymeric Vitte vitteayme...@gmail.com wrote:

  I am still not very familiar with promises, but if I take your
 preceding example:


 var sourceStream = xhr.response;
 var resultStream = new Stream();
 var fileWritingPromise = fileWriter.write(resultStream);
 var encryptionPromise = crypto.subtle.encrypt(aesAlgorithmEncrypt,
 aesKey, sourceStream, resultStream);
  Promise.all(fileWritingPromise, encryptionPromise).then(
   ...
 );


  I made a mistake. The argument of Promise.all should be an Array. So,
 [fileWritingPromise, encryptionPromise].
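
That is, the corrected call would read:

  Promise.all([fileWritingPromise, encryptionPromise]).then(
    ...
  );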




  shouldn't it be more something like:

 var sourceStream = xhr.response;
 var encryptionPromise = crypto.subtle.encrypt(aesAlgorithmEncrypt,
 aesKey);
 var resultStream=sourceStream.pipe(encryptionPromise);
 var fileWritingPromise = fileWriter.write(resultStream);
  Promise.all(fileWritingPromise, encryptionPromise).then(
   ...
 );


  Promises just tell the user about completion of each operation, with some
 value indicating the result of the operation. They are not the destination
 of the data.

  Do you think it's good to create objects representing each encrypt
 operation? So, some object called a filter is introduced and the code
 would be like:

  var pipeToFilterPromise;

  var encryptionFilter;
  var fileWriter;

  xhr.onreadystatechange = function() {
   ...
   } else if (this.readyState == this.LOADING) {
  if (this.status != 200) {
   ...
 }

  var sourceStream = xhr.response;

  encryptionFilter =
 crypto.subtle.createEncryptionFilter(aesAlgorithmEncrypt, aesKey);
 // Starts the filter.
 var encryptionPromise = encryptionFilter.encrypt();
 // Also starts pouring data but separately from promise creation.
  pipeToFilterPromise = sourceStream.pipe(encryptionFilter);

  fileWriter = ...;
 // encryptionFilter works as data producer for FileWriter.
 var fileWritingPromise = fileWriter.write(encryptionFilter);

  // Set only handler for rejection now.
  pipeToFilterPromise.catch(
function(result) {
 xhr.abort();
 encryptionFilter.abort();
  fileWriter.abort();
   }
  );

  encryptionPromise.catch(
function(result) {
  xhr.abort();
  fileWriter.abort();
   }
  );

  fileWritingPromise.catch(
function(result) {
  xhr.abort();
  encryptionFilter.abort();
   }
  );

   // encryptionFilter will be (successfully) closed only
  // when XMLHttpRequest and pipe() are both successful,
  // so it's OK to set the handler for fulfillment now.
  Promise.all([encryptionPromise, fileWritingPromise]).then(
function(result) {
 // Done everything successfully!
 // We come here only when encryptionFilter is close()-ed.
 fileWriter.close();
 processFile();
   }
 );
   } else if (this.readyState == this.DONE) {
  if (this.status != 200) {
   encryptionFilter.abort();
   fileWriter.abort();
 } else {
   // Now we know that XHR was successful.
   // Let's close() the filter to finish encryption
   // successfully.
   pipeToFilterPromise.then(
 function(result) {
   // XMLHttpRequest closes sourceStream but pipe()
   // resolves pipeToFilterPromise without closing
   // encryptionFilter.
   encryptionFilter.close();
 }
   );
 }
}
  };
 xhr.send();

  encryptionFilter has the same interface as normal stream but 

Re: Overlap between StreamReader and FileReader

2013-10-30 Thread Takeshi Yoshino
On Wed, Oct 30, 2013 at 8:14 PM, Takeshi Yoshino tyosh...@google.com wrote:

 On Wed, Oct 23, 2013 at 11:42 PM, Aymeric Vitte vitteayme...@gmail.com wrote:

 - pause: pause the stream, do not send eof



 Sorry, what will be paused? Output?


http://lists.w3.org/Archives/Public/public-webrtc/2013Oct/0059.html
http://www.w3.org/2011/04/webrtc/wiki/Transport_Control#Pause.2Fresume

So, you're suggesting that we make Stream a convenient point where we can
dam up the data flow, and skip adding methods for pausing data production
and consumption to the producer/consumer APIs? I.e. we make it possible to
prevent data queued in a Stream from being read. This typically means
asynchronously suspending an ongoing pipe() or read() call on the Stream
made with no argument or a very large argument.
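
A rough sketch of that reading (pause() and resume() here are hypothetical
and not part of the current draft):

  var pending = stream.read();  // no-argument read: everything up to eof
  stream.pause();               // data stays queued; `pending` stays pending
  // ... later ...
  stream.resume();              // queued data flows; `pending` can resolve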




  - unpause: restart the stream

 And flow control should be back and explicit. I'm not sure right now how to
 define it, but I think it's impossible for a js app to do precise flow
 control, and for existing APIs like WebSockets it's not easy to control the
 flow and avoid overloading the UA in some situations.




[xhr][xhr-1] status and plans

2013-10-30 Thread Jungkee Song
Hi,

 -Original Message-
 From: Arthur Barstow [mailto:art.bars...@nokia.com]
 Sent: Thursday, October 03, 2013 1:40 AM
 
 I am also interested in the status and plans for both the version of XHR
 that is supposed to move to LC-CR-REC in 2013 and the XHR-Bleeding-Edge
 version.
 

As planned, we editors prepared XMLHttpRequest Level 1 [1] as a
Recommendation-track version. Initially, we'd planned to put together a
draft with only the parts that are *already compatibly* supported across the
major implementations, but it turned out that most of the major features
show *subtle differences* in behavior. Hence, we have instead endeavored to
secure wider test coverage [3] and to analyze the compatibility issues [4]
(underway). From this point on, we plan to work on resolving these
compatibility issues and moving the spec forward to LC (within this
year), then CR and REC. The only features left out - due to lack of
implementation or maturity - are:
  - data: URLs
  - responseType json
  - anonymous flag and XMLHttpRequestOptions dictionary

For the bleeding-edge version [2], we need further discussion on direction,
as Anne recently revised the WHATWG spec [5] in terms of the Fetch spec [6].


[1] https://dvcs.w3.org/hg/xhr/raw-file/tip/xhr-1/Overview.html
[2] https://dvcs.w3.org/hg/xhr/raw-file/tip/Overview.html
[3] http://w3c-test.org/web-platform-tests/master/XMLHttpRequest/
[4] http://jungkees.github.io/XMLHttpRequest-test/
[5] http://xhr.spec.whatwg.org
[6] http://fetch.spec.whatwg.org


--
Jungkee Song




[push-api]: Push API Patent Advisory Group (PAG) Recommends Continuing work on Push API Spec

2013-10-30 Thread Arthur Barstow
The Push API Patent Advisory Group published their report, and it
recommends WebApps continue to work on the spec:
http://www.w3.org/2013/10/push-api-pag-report.html.


On 10/30/13 12:53 PM, ext Coralie Mercier wrote:

The Push API Patent Advisory Group (PAG) [1] has published a report
recommending that W3C continue work on the Push API Specification:
http://www.w3.org/2013/10/push-api-pag-report.html

The PAG concludes that the disclosed patents do not read on the Push API
Specification, assessed as of its 15 August 2013 Working Draft, and hence
recommends that work on the Push API Specification be continued without
PAG-related change.

The PAG concludes that the initial concern has been resolved, enabling the
Working Group to continue. More detail is available in the PAG report and
PAG home page [2].

[1] http://www.w3.org/2013/03/push-pag-charter.html
[2] http://www.w3.org/2013/papag/




Re: Polished FileSystem API proposal

2013-10-30 Thread Brian Stell
Good points! I was thinking of the logical functioning and hadn't
considered the implementation. My understanding is that the UA will map
from the filename to an actual file using some kind of database. My
assumption was the logical idea of a link would happen in that layer.


On Wed, Oct 30, 2013 at 1:14 AM, pira...@gmail.com pira...@gmail.com wrote:

 +1 to symbolic links, they have almost the same functionality as hard
 links and are more secure and flexible (they are usually just plain text
 files...).
 On 30/10/2013 01:42, Brendan Eich bren...@mozilla.com wrote:

 Hard links are peculiar to Unix filesystems. Not interoperable across all
 OSes. Symbolic links, OTOH...

 /be

  Brian Stell bst...@google.com
 October 29, 2013 4:53 PM
 I meant

eg, V1/dir1/file1, V2/dir1/file1.





FYI: Resource Priorities and Beacon API

2013-10-30 Thread Philippe Le Hegaret
FYI,

the Web Performance Working Group just published Resource Priorities and
Beacon yesterday and we're interested in feedback on the ideas and
approaches. Resource Priorities allows you to tweak the download
priority of your resources, while Beacon enables synchronously transferring
data from the user agent to a web server, under the responsibility of
the user agent.

 http://www.w3.org/TR/2013/WD-resource-priorities-20131029/
 http://www.w3.org/TR/2013/WD-beacon-20131029/
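
As a minimal usage sketch based on the Beacon draft's navigator.sendBeacon()
(the URL and payload below are made up):

  window.addEventListener('unload', function () {
    // The UA queues the data and transfers it without blocking unload.
    navigator.sendBeacon('https://example.com/analytics', 'state=closed');
  });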

Feedback should go directly to public-web-p...@w3.org.

Thank you,

Philippe





Re: Polished FileSystem API proposal

2013-10-30 Thread pira...@gmail.com
On most unix OSes, symbolic links are built using plain text files
containing just the path they point to and no more data, and the OS later
identifies them as a link instead of a text file just by some special file
flags, no more. On Windows, shortcuts (.lnk) have a somewhat similar
functionality (files that hold the location of other files), and there were
some discussions on the Wine and some FAT FS mailing lists about using them
to mimic real symbolic links.

Hard links are more difficult to implement, since they are real links to
a file; usually this requires a counter of how many references are pointing
to the file so it doesn't get accidentally deleted, and this needs support
in the filesystem itself, while, as I said before, symbolic links can be
mimicked in several ways at a higher layer.
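
As a conceptual sketch of that higher-layer approach (all names below are
hypothetical, not a proposed API):

  // A UA could resolve links inside its filename-to-file mapping layer.
  function resolve(db, path) {
    var entry = db.lookup(path);             // database-backed name lookup
    if (entry && entry.isSymlink) {
      return resolve(db, entry.targetPath);  // a symlink stores only a path
    }
    return entry;
  }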

2013/10/30 Brian Stell bst...@google.com:
 Good points! I was thinking of the logical functioning and hadn't considered
 the implementation. My understanding is that the UA will map from the
 filename to an actual file using some kind of database. My assumption was
 the logical idea of a link would happen in that layer.


 On Wed, Oct 30, 2013 at 1:14 AM, pira...@gmail.com pira...@gmail.com
 wrote:

 +1 to symbolic links, they have almost the same functionality as hard
 links and are more secure and flexible (they are usually just plain text
 files...).

 On 30/10/2013 01:42, Brendan Eich bren...@mozilla.com wrote:

 Hard links are peculiar to Unix filesystems. Not interoperable across all
 OSes. Symbolic links, OTOH...

 /be

 Brian Stell bst...@google.com
 October 29, 2013 4:53 PM
 I meant

eg, V1/dir1/file1, V2/dir1/file1.






-- 
If you want to travel around the world and be invited to speak in a
bunch of different places, just write a Unix operating system.
– Linus Torvalds, creator of the Linux operating system



RE: publish WD of Streams API; deadline Nov 3

2013-10-30 Thread Domenic Denicola
From: Arthur Barstow [mailto:art.bars...@nokia.com]

 If you have any comments or concerns about this proposal, please reply to
 this e-mail by November 3 at the latest.

I have some concerns about this proposal, and do not think it is solving the 
problem at hand in an appropriate fashion. I believe it does not provide the 
appropriate primitives the web platform needs for a streams API. Most 
seriously, I think it has ignored the majority of lessons learned from existing 
JavaScript streaming APIs.

Here are specific critiques, in ascending order of importance.

- It has a read(n) interface, which is not valuable [1] but constrains the API 
in several awkward ways.

- It assumes MIME types are at all relevant to a streaming data abstraction, 
when this is not at all the case.

- In general, it is far too backward-looking in attempting to integrate things 
like blobs or object URLs into what is supposed to be a forward-looking 
primitive for the future of the extensible web. Replacing these various 
disparate concepts is what developers want from streams [2].

- It conflates text streams and binary data. As outlined in previous messages 
[1], what type of data the stream contains *must* be an *immutable* property of 
the stream. In contrast, the proposed API actively encourages mixing multiple 
data types within a stream via readType, readEncoding, and the overloaded write 
method.

- It conflates readable and writable streams, which prevents a whole class of 
abstractions and use cases like: read-only file streams; write-only HTTP 
requests; and duplex streams which read to one channel and write to another 
(e.g. a websocket, where writing pushes data to the server and reading reads 
data from the server---not the written data, but data that the server writes to 
you). Indeed, the only use case this proposal supports is transform streams, 
which take data in one end and output new data on the other end.

- It provides no mechanism for backpressure signaling to readable stream data 
sources or from writable stream sinks. As we have heard previously, any stream 
API that does not treat backpressure as a primary issue is not a stream API at 
all. [3]

- More generally, it does not operate at the correct level of abstraction, 
which should be close to the I/O primitives streams are meant to expose. This 
is evident in the general lack of APIs for handling interaction with and 
signaling to underlying I/O sources or sinks.

- Its pipe mechanism is poorly thought out, and does not build on top of the 
existing primitives; indeed, it seems to install some kind of mutex lock that 
prevents the other primitives from being used until the pipe is complete. The 
primitives do not support multiple consumers, so the pipe mechanism handles 
that case in an ad-hoc way. It is not composable into chains in the fashion of 
traditional stream piping. Its lack of backpressure support prevents expression 
of key use cases such as piping a fast data connection (e.g. disk) to a slow 
data connection (e.g. push to a slow mobile device); piping a slow connection 
to a fast one; piping through encoders/compressors/decoders/decompressors; and 
so on. In general, it appears to take no inspiration from prior art, which is a 
shame since existing stream implementations all agree on how pipe should work. 
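
For reference, a minimal example of the composable piping found in that
prior art (node):

  var fs = require('fs');
  var zlib = require('zlib');
  // pipe() returns its destination, so transforms chain naturally and
  // backpressure propagates from the slowest consumer back to the source.
  fs.createReadStream('in.txt')
    .pipe(zlib.createGzip())
    .pipe(fs.createWriteStream('out.txt.gz'));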

In light of these critiques, I feel that this API is not worth pursuing and 
should not proceed to Working Draft status. If we are to bring streaming data 
to the web platform, we should instead do it correctly, learning the lessons of 
the JavaScript stream APIs that came before us, and provide a powerful 
primitive that gives developers what they have asked for and serves as 
something we can layer the web's many streaming I/O interfaces on top of.

I have concrete suggestions as to what such an API could look like—and, more 
importantly, how its semantics would significantly differ from this one—which I 
hope to flesh out and share more broadly by the end of this weekend. However, 
since the call for comments phase has commenced, I thought it important to 
voice these objections as soon as possible.

Thanks,
-Domenic

[1]: http://lists.w3.org/Archives/Public/public-webapps/2013JulSep/0355.html
[2]: http://imgur.com/a/9vFGa#11
[3]: http://lists.w3.org/Archives/Public/public-webapps/2013JulSep/0275.html


Re: FYI: Resource Priorities and Beacon API

2013-10-30 Thread Philippe Le Hegaret
On Wed, 2013-10-30 at 13:23 -0400, Philippe Le Hegaret wrote:
 FYI,
 
 the Web Performance Working Group just published Resource Priorities and
 Beacon yesterday and we're interested in feedback on the ideas and
 approaches. Resource priorities allows you to tweak the download
 priority of your resources, while Beacon enables synchronously transferring

I meant to say asynchronously here. Thanks to William Chan for spotting
my mistake.

 data from the user agent to a web server, under the responsibility of
 the user agent.
 
  http://www.w3.org/TR/2013/WD-resource-priorities-20131029/
  http://www.w3.org/TR/2013/WD-beacon-20131029/
 
 Feedback should go directly to public-web-p...@w3.org.
 
 Thank you,
 
 Philippe
 





Re: CfC: publish WD of Streams API; deadline Nov 3

2013-10-30 Thread François REMY

| If you have any comments or concerns about this proposal, please reply
| to this e-mail by November 3 at the latest.

While adding streams to the platform seems a good idea to me, I have a few 
concerns with this proposal.



My biggest concerns are articulated over two issues:

- Streams should exist in at least two fashions: InputStream and 
OutputStream. Both of them serve different purposes and, while some stream 
may actually be both, this remains an exceptional behavior worth being 
noted. Finally, a Stream is not equal to an InMemoryStream, as the 
constructor may seem to indicate. A stream is a much lower-level concept, 
which may actually have nothing to do with in-memory operations.


- Secondly, the Stream interface mixes the Stream and the 
StreamReader/StreamWriter concepts. I do not have a problem, if this is done 
properly, with mixing the two concepts (as most applications will want to use 
a StreamReader/StreamWriter anyway), but the current incarnation is not 
powerful enough to be really useful, while still managing to be confusing.



As an actionable piece of advice to the authors of the spec, I would 
recommend having a look at the Stream APIs of other modern languages and how 
those APIs evolved over time. Streams have existed for a very long time; it 
would be very unfortunate to repeat on the web platform the mistakes already 
made and fixed the hard way in competing platforms.





| Agreement to this proposal: a) indicates support for publishing a new
| WD; and b) does not necessarily indicate support of the contents of the 
WD.


Then, I agree to publish a new Working Draft, but this draft will need much 
further refinement before becoming a W3C-recommendable specification.





Re: Shadow DOM and Fallback contents for images

2013-10-30 Thread Ryosuke Niwa

On Oct 29, 2013, at 6:04 PM, Boris Zbarsky bzbar...@mit.edu wrote:

 On 10/14/13 12:11 AM, Ryosuke Niwa wrote:
 If I'm not mistaken, how alternative text is presented is up to UA vendors.
 
 You're mistaken. The HTML spec actually defines the behavior here, in 
 standards mode: it's presented as text inside a non-replaced inline 
 (effectively as if content: attr(alt) had been used).

Interesting.  Could you point me to the part of the spec. that mandates this 
behavior?

As far as I checked, Firefox is the only browser that exhibits this behavior.  
Chrome, Internet Explorer, and Safari all show a missing-image box.

- R. Niwa




Re: Shadow DOM and Fallback contents for images

2013-10-30 Thread Boris Zbarsky

On 10/30/13 5:56 PM, Ryosuke Niwa wrote:

Interesting.  Could you point me to the part of the spec. that mandates this 
behavior?


http://www.whatwg.org/specs/web-apps/current-work/multipage/rendering.html#images-0 
says:


  When an img element represents some text and the user agent does
  not expect this to change, the element is expected to be treated
  as a non-replaced phrasing element whose content is the text,
  optionally with an icon indicating that an image is missing, so
  that the user can request the image be displayed or investigate
  why it is not rendering. In non-graphical contexts, such an icon
  should be omitted.

What an img represents is at 
http://www.whatwg.org/specs/web-apps/current-work/multipage/embedded-content-1.html#the-img-element 
if you search for "What an img element represents depends on the src 
attribute and the alt attribute" (sadly no direct link there).  The 
relevant bit:


  If the src attribute is set and the alt attribute is set to a value
  that isn't empty
The image is a key part of the content; the alt attribute gives a
textual equivalent or replacement for the image.

If the image is available and the user agent is configured to
display that image, then the element represents the element's
image data.

Otherwise, the element represents the text given by the alt
attribute. User agents may provide the user with a notification
that an image is present but has been omitted from the rendering.


As far as I checked, Firefox is the only browser that exhibits this behavior.  
Chrome, Internet Explorer, and Safari all show a missing-image box.


Yes, this is a known bug in Chrome, IE, and Safari.  This bug makes 
image alt text inaccessible to users who are not using assistive 
technologies.


-Boris

P.S.  For those who care about the W3C HTML spec, not the WHATWG one, 
see http://www.w3.org/html/wg/drafts/html/master/rendering.html#images-0 
and 
http://www.w3.org/html/wg/drafts/html/master/embedded-content-0.html#the-img-element 
for equivalent text.




[Bug 23479] File Constructor and Blob Constructor should have same signature

2013-10-30 Thread bugzilla
https://www.w3.org/Bugs/Public/show_bug.cgi?id=23479

Arun a...@mozilla.com changed:

           What    |Removed |Added
 ---------------------------------
         Status    |NEW     |RESOLVED
     Resolution    |---     |FIXED

--- Comment #10 from Arun a...@mozilla.com ---
I'm marking this fixed along the lines of Comment 8, but you can additionally
set lastModifiedDate.

http://dev.w3.org/2006/webapi/FileAPI/#dfn-file

-- 
You are receiving this mail because:
You are on the CC list for the bug.



Re: publish WD of Streams API; deadline Nov 3

2013-10-30 Thread Aymeric Vitte
As you mention, streams have existed since the beginning of time, and it's 
incredible that this does not exist on the web platform; but apparently 
the subject is still not so easy: node changed its streams quite a lot 
over time.


I probably will not be able to answer back in the coming days, but your 
judgement is tough; this API is the concatenation of thoughts from the 
Overlap thread. Indeed it must clarify the writable/readable aspects, as 
well as flow control/congestion. Moreover, despite what you are 
saying, the API does support multiple consumers, and I/O; it's close 
to node streams.


Of course you must conflate text and binary: you cannot spend your 
time converting to text, to ArrayBuffer, to ArrayBufferView, to Blob; 
the user API knows what it is streaming.


Regards,

Aymeric




On 30/10/2013 19:04, Domenic Denicola wrote:

From: Arthur Barstow [mailto:art.bars...@nokia.com]


If you have any comments or concerns about this proposal, please reply to
this e-mail by November 3 at the latest.

I have some concerns about this proposal, and do not think it is solving the 
problem at hand in an appropriate fashion. I believe it does not provide the 
appropriate primitives the web platform needs for a streams API. Most 
seriously, I think it has ignored the majority of lessons learned from existing 
JavaScript streaming APIs.

Here are specific critiques, in ascending order of importance.

- It has a read(n) interface, which is not valuable [1] but constrains the API 
in several awkward ways.

- It assumes MIME types are at all relevant to a streaming data abstraction, 
when this is not at all the case.

- In general, it is far too backward-looking in attempting to integrate things 
like blobs or object URLs into what is supposed to be a forward-looking 
primitive for the future of the extensible web. Replacing these various 
disparate concepts is what developers want from streams [2].

- It conflates text streams and binary data. As outlined in previous messages 
[1], what type of data the stream contains *must* be an *immutable* property of 
the stream. In contrast, the proposed API actively encourages mixing multiple 
data types within a stream via readType, readEncoding, and the overloaded write 
method.

- It conflates readable and writable streams, which prevents a whole class of 
abstractions and use cases like: read-only file streams; write-only HTTP 
requests; and duplex streams which read to one channel and write to another 
(e.g. a websocket, where writing pushes data to the server and reading reads 
data from the server---not the written data, but data that the server writes to 
you). Indeed, the only use case this proposal supports is transform streams, 
which take data in one end and output new data on the other end.

- It provides no mechanism for backpressure signaling to readable stream data 
sources or from writable stream sinks. As we have heard previously, any stream 
API that does not treat backpressure as a primary issue is not a stream API at 
all. [3]

- More generally, it does not operate at the correct level of abstraction, 
which should be close to the I/O primitives streams are meant to expose. This 
is evident in the general lack of APIs for handling interaction with and 
signaling to underlying I/O sources or sinks.

- Its pipe mechanism is poorly thought out, and does not build on top of the 
existing primitives; indeed, it seems to install some kind of mutex lock that 
prevents the other primitives from being used until the pipe is complete. The 
primitives do not support multiple consumers, so the pipe mechanism handles 
that case in an ad-hoc way. It is not composable into chains in the fashion of 
traditional stream piping. Its lack of backpressure support prevents expression 
of key use cases such as piping a fast data connection (e.g. disk) to a slow 
data connection (e.g. push to a slow mobile device); piping a slow connection 
to a fast one; piping through encoders/compressors/decoders/decompressors; and 
so on. In general, it appears to take no inspiration from prior art, which is a 
shame since existing stream implementations all agree on how pipe should work.

In light of these critiques, I feel that this API is not worth pursuing and 
should not proceed to Working Draft status. If we are to bring streaming data 
to the web platform, we should instead do it correctly, learning the lessons of 
the JavaScript stream APIs that came before us, and provide a powerful 
primitive that gives developers what they have asked for and serves as 
something we can layer the web's many streaming I/O interfaces on top of.

I have concrete suggestions as to what such an API could look like—and, more 
importantly, how its semantics would significantly differ from this one—which I 
hope to flesh out and share more broadly by the end of this weekend. However, 
since the call for comments phase has commenced, I thought it important to 
voice these objections as soon 

Re: Polished FileSystem API proposal

2013-10-30 Thread pira...@gmail.com
What you are asking for could be fixed with redirects, which are the
HTTP equivalent of filesystem symbolic links :-)

2013/10/31 Brian Stell bst...@google.com:
 In "Request for feedback: Filesystem API" [1] it says "This filesystem would
 be origin-specific."

 This post discusses limited readonly sharing of filesystem resources between
 origins.

 To improve web site / application performance I'm interested in caching
 static [2] resources (eg, Javascript libraries, common CSS, fonts) in the
 filesystem and accessing them through persistent URLs.

 So, what is the issue?

 I'd like to avoid duplication. Consider the following sites: they are all
 from a single organization but have different specific origins;
* https://mail.google.com/
* https://plus.google.com/
* https://sites.google.com/
* ...

 At Google there are *dozens* of these origins [3]. Even within a single page
 there are iframes from different origins. (There are other things that lead
 to different origins but for this post I'm ignoring them [4].)

 There could be *dozens* of copies of exactly the same Javascript library,
 shared CSS, or web font in the FileSystem.

 What I'm suggesting is:
* a filesystem's persistent URLs by default be read/write only for the
 same origin
* the origin be able to allow other origins to access its files
 (readonly) by persistent URL

 I'm not asking for nor suggesting API file access, but others may express
 opinions on this.

 Brian Stell


 PS: Did I somehow miss info on same-origin in the spec [7]?

 Notes:
 [1]
 http://lists.w3.org/Archives/Public/public-script-coord/2013JulSep/0379.html
 [2] I'm also assuming immutability would be handled similarly to gstatic.com
 [6], where different versions of a file have a different path/filename; eg,
* V8: http://gstatic.com/fonts/roboto/v8/2UX7WLTfW3W8TclTUvlFyQ.woff
* V9: http://gstatic.com/fonts/roboto/v9/2UX7WLTfW3W8TclTUvlFyQ.woff

 [3] Here are some of Google's origins:
 https://accounts.google.com
 https://blogsearch.google.com
 https://books.google.com
 https://chrome.google.com
 https://cloud.google.com
 https://code.google.com
 https://csi.gstatic.com
 https://developers.google.com
 https://docs.google.com
 https://drive.google.com
 https://earth.google.com
 https://fonts.googleapis.com
 https://groups.google.com
 https://mail.google.com
 https://maps.google.com
 https://news.google.com
 https://www.panoramio.com
 https://picasa.google.com
 https://picasaweb.google.com
 https://play.google.com
 https://productforums.google.com
 https://plus.google.com/
 https://research.google.com
 https://support.google.com
 https://sites.google.com
 https://ssl.gstatic.com
 https://translate.google.com
 https://tables.googlelabs.com
 https://talkgadget.google.com
 https://themes.googleusercontent.com/
 https://www.blogger.com
 https://www.google.com
 https://www.gstatic.com
 https://www.orcut.com
 https://www.youtube.com

 My guess is that there are more.

 I believe the XXX.blogspot.com origins belong to Google but I'm not an
 authority on this.

 [4] These are also different top level domains:
* https://www.google.nl
* https://www.google.co.jp

 Wikipedia lists about 200 of these [5] but since users tend to stick to one
 I'm ignoring them for this posting.

 I'm also ignoring http vs https (eg, http://www.google.com) and with/without
 leading www (eg, https://google.com) since they redirect.

 [5] http://en.wikipedia.org/wiki/List_of_Google_domains
 [6] http://wiki.answers.com/Q/What_is_gstatic
 [7] http://w3c.github.io/filesystem-api/Overview.html



-- 
If you want to travel around the world and be invited to speak in a
bunch of different places, just write a Unix operating system.
– Linus Torvalds, creator of the Linux operating system



Re: CfC: publish WD of Streams API; deadline Nov 3

2013-10-30 Thread Dean Landolt
I really like the general concepts of this proposal, but I'm confused by
what seems like an unnecessarily limiting assumption: why assume all streams
are byte streams? This is a mistake node recently made in its streams
refactor, one that has led to an objectMode and added cruft.

Forgive me if this has been discussed -- I just learned of this today. But
as someone who's been slinging streams in javascript for years I'd really
hate to see the standard stream hampered by this bytes-only limitation.
The node ecosystem clearly demonstrates that streams are for more than
bytes and (byte-encoded) strings.

In my perfect world any arbitrary iterator could be used to characterize
stream chunks -- this would have some really interesting benefits -- but I
suspect this kind of flexibility would be overkill for now. But there's
no good reason bytes should be the only thing people can chunk up in
streams. And if we're defining streams for the whole platform they
shouldn't *just* be tied to a few very specific file-like use cases.

If streams could also consist of chunks of strings (real, native strings) a
huge swath of the API could disappear. All of readType, readEncoding and
charset could be eliminated, replaced with simple, composable transforms
that turn byte streams (of, say, utf-8) into string streams. And vice versa.
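
For example, a single-chunk sketch of such a transform (Stream and the shape
of read()'s result are assumptions about the draft; TextDecoder is the
Encoding API's decoder):

  function utf8ToStringStream(byteStream) {
    var decoder = new TextDecoder('utf-8');
    var out = new Stream();                    // assumed string-chunk stream
    byteStream.read().then(function (result) {
      out.write(decoder.decode(result.data));  // bytes in, native string out
      out.close();
    });
    return out;
  }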

The `size` of a stream (if it exists) would be specified as the total
`length` of all chunks concatenated together. So if chunks were in bytes,
`size` would be the total bytes (as currently specified). But if chunks
consisted of real strings, `size` would be the total length of all string
chunks. Interestingly, if your source stream is in utf-8 the total bytes
wouldn't be meaningful, and the total string size couldn't be known without
iterating the whole stream. But if the source stream is utf-16 and the
`size` is known, the new `size` could also be known ahead of time -- `bytes
/ 2` (thanks to javascript's ucs-2 strings).

Of course the real draw of this approach would be when chunks are neither
blobs nor strings. Why couldn't chunks be arrays? The arrays could contain
anything (no need to reserve any value as a sigil). Regardless of the chunk
type, the zero object for any given type wouldn't be `null` (it would be
something like '' or []). That means we can use null to distinguish EOF,
and `chunk == null` would make a perfectly nice (and unambiguous) EOF
sigil, eliminating yet more API surface. This would give us a clean object
mode streams for free, and without node's arbitrary limitations.
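
A sketch of how that would read in practice (object-mode semantics assumed):

  function drain(stream, onChunk) {
    return stream.read().then(function (chunk) {
      if (chunk == null) return;      // eof: no in-band sentinel reserved
      onChunk(chunk);                 // chunk may be '', [], or any array
      return drain(stream, onChunk);
    });
  }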

The `size` of an array stream would be the total length of all array
chunks. As I hinted before, we could also leave the door open to specifying
chunks as any iterable, where `size` (if known) would just be the `length`
of each chunk (assuming chunks even have a `length`). This would also allow
individual chunks to be built of generators, which could be particularly
interesting if the `size` argument to `read` was specified as a maximum
number of bytes rather than the total to return -- completely sensible
considering it has to behave this way near the end of the stream anyway...

This would lead to a pattern like `stream.read(Infinity)`, which would
essentially say *give me everything you've got soon as you can*. This is
closer to node's semantics (where read is async, for added scheduling
flexibility). It would drain streams faster rather than pseudo-blocking for
a specific (and arbitrary) size chunk which ultimately can't be guaranteed
anyway, so you'll always have to do length checks.

(On a somewhat related note: why is a 0-sized stream specified to throw?
And why a SyntaxError of all things? A 0-sized stream seems perfectly
reasonable to me.)

What's particularly appealing to me about the chunk-as-generator idea is
that these chunks could still be quite large -- hundreds megabytes, even.
Just because a potentially large amount of data has become available since
the last chunk was processed doesn't mean you should have to bring it all
into memory at once.

I know this is a long email and it may sound like a lot of suggestions, but
I think it's actually a relatively minor tweak (and simplification) that
would unlock the real power of streams for their many other use cases. I've
been thinking about streams and promises (and streams with promises) for
years now, and this is the first approach that really feels right to me.






On Mon, Oct 28, 2013 at 11:29 AM, Arthur Barstow art.bars...@nokia.com wrote:

 Feras and Takeshi have begun merging their Streams proposal and this is a
 Call for Consensus to publish a new WD of Streams API using the updated ED
 as the basis:

 https://dvcs.w3.org/hg/streams-api/raw-file/tip/Overview.htm

 Please note the Editors may update the ED before the TR is published (but
 they do not intend to make major changes during the CfC).

 Agreement to this proposal: a) indicates support for publishing a new WD;
 and b) 

Splitting Stream into InputStream and OutputStream (was Re: CfC: publish WD of Streams API; deadline Nov 3)

2013-10-30 Thread Takeshi Yoshino
Hi François

On Thu, Oct 31, 2013 at 6:16 AM, François REMY 
francois.remy@outlook.com wrote:

 - Streams should exist in at least two fashions: InputStream and
 OutputStream. Both of them serve different purposes and, while some stream
 may actually be both, this remains an exceptional behavior worth being
 noted. Finally, a Stream is not equal to a InMemoryStream as the
 constructor may seem to indicate. A stream is a much lower-level concept,
 which may actually have nothing to do with InMemory operations.


Yes. I initially thought it would be clearer to split the in/out interfaces.
E.g. a Stream obtained from XHR to receive a response should not be writable.
It's reasonable to make the network-to-Stream transfer happen in the
background, asynchronously to JavaScript, and then it doesn't make much sense
to keep the stream writable from JavaScript.
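
A sketch of the intended restriction (InputStream is an assumed name):

  var input = xhr.response;          // InputStream: exposes read()/pipe()
  input.read().then(processChunk);   // OK: reading is the only JS-side role
  // input.write(...) would simply not exist; only the UA fills this stream.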

It has a unified IDL now, but I'm designing the write side and the read side
independently. We could decouple it into two separate IDLs (or concepts?) if
that makes sense. Stream would inherit from both and provide a constructor.


Defining generic Stream than considering only bytes (Re: CfC: publish WD of Streams API; deadline Nov 3)

2013-10-30 Thread Takeshi Yoshino
Hi Dean,

On Thu, Oct 31, 2013 at 11:30 AM, Dean Landolt d...@deanlandolt.com wrote:

 I really like the general concepts of this proposal, but I'm confused by
 what seems like an unnecessarily limiting assumption: why assume all streams
 are byte streams? This is a mistake node recently made in its streams
 refactor, one that has led to an objectMode and added cruft.

 Forgive me if this has been discussed -- I just learned of this today. But
 as someone who's been slinging streams in javascript for years I'd really
 hate to see the standard stream hampered by this bytes-only limitation.
 The node ecosystem clearly demonstrates that streams are for more than
 bytes and (byte-encoded) strings.


To glue Streams to existing binary handling infrastructure such as
ArrayBuffer and Blob, we should have some specialization of Stream for
handling bytes, rather than a generalized Stream that would accept/output an
array or a single object of the given type. Maybe we can rename the Streams
API to ByteStream so as not to occupy the name Stream, which sounds more
generic, and start standardizing a generic Stream.


 In my perfect world any arbitrary iterator could be used to characterize
 stream chunks -- this would have some really interesting benefits -- but I
 suspect this kind of flexibility would be overkill for now. But there's
 no good reason bytes should be the only thing people can chunk up in
 streams. And if we're defining streams for the whole platform they
 shouldn't *just* be tied to a few very specific file-like use cases.

 If streams could also consist of chunks of strings (real, native strings)
 a huge swath of the API could disappear. All of readType, readEncoding and
 charset could be eliminated, replaced with simple, composable transforms
 that turn byte streams (of, say, utf-8) into string streams. And vice versa.


So, for example, XHR would be the point of decoding, and it would return a
Stream of DOMStrings?


 Of course the real draw of this approach would be when chunks are neither
 blobs nor strings. Why couldn't chunks be arrays? The arrays could contain
 anything (no need to reserve any value as a sigil). Regardless of the chunk
 type, the zero object for any given type wouldn't be `null` (it would be
 something like '' or []). That means we can use null to distinguish EOF,
 and `chunk == null` would make a perfectly nice (and unambiguous) EOF
 sigil, eliminating yet more API surface. This would give us a clean object
 mode streams for free, and without node's arbitrary limitations.


For several reasons, I chose to use .eof rather than null. One of them is to
allow a non-empty final chunk to signal EOF, rather than requiring one more
read() call.

This point can be re-discussed.


 The `size` of an array stream would be the total length of all array
 chunks. As I hinted before, we could also leave the door open to specifying
 chunks as any iterable, where `size` (if known) would just be the `length`
 of each chunk (assuming chunks even have a `length`). This would also allow
 individual chunks to be built of generators, which could be particularly
 interesting if the `size` argument to `read` was specified as a maximum
 number of bytes rather than the total to return -- completely sensible
 considering it has to behave this way near the end of the stream anyway...


I don't really understand the last point. Could you please elaborate on the
story and the benefit?

IIRC, it's considered to be useful and important to be able to cut an exact
requested size of data into an ArrayBuffer object and get notified (the
returned Promise gets resolved) only when it's ready.
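
A sketch of that behavior with the draft's promise-returning read() (the
shape of the result is assumed):

  stream.read(1024).then(function (result) {
    // Resolves only once 1024 bytes are available (or eof arrives first);
    // result.data would be an ArrayBuffer of the requested size.
    process(result.data);
  });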


 This would lead to a pattern like `stream.read(Infinity)`, which would
 essentially say *give me everything you've got soon as you can*.


In the current proposal, read(), i.e. read() with no argument, does this.


  This is closer to node's semantics (where read is async, for added
 scheduling flexibility). It would drain streams faster rather than
 pseudo-blocking for a specific (and arbitrary) size chunk which ultimately
 can't be guaranteed anyway, so you'll always have to do length checks.

 (On a somewhat related note: why is a 0-sized stream specified to throw?
 And why a SyntaxError of all things? A 0-sized stream seems perfectly
 reasonable to me.)


A 0-sized Stream is not prohibited.

Do you mean a 0-sized read()/pipe()/skip()? I don't think they make much
sense. They're useful only when you want to sense EOF, and that can be done
with read(1).


 What's particularly appealing to me about the chunk-as-generator idea is
 that these chunks could still be quite large -- hundreds megabytes, even.
 Just because a potentially large amount of data has become available since
 the last chunk was processed doesn't mean you should have to bring it all
 into memory at once.


It's interesting. Could you please list some concrete examples of such a
generator?