Update on Streams API Status

2014-02-06 Thread Feras Moussa
Hi All,
 
I wanted to update everyone on the latest plan for moving forward on the 
Streams spec.
 
For a variety of reasons, there are currently two Streams specs being worked on 
- one in the W3C, and one in the WHATWG. Each of these specs has its strengths 
and weaknesses, and each looks at the problem from a different perspective.
 
After meeting with the WHATWG folks and discussing the various scenarios being 
targeted by the Streams specs as well as other considerations, we all agreed 
that we have the same goals and should work together to get alignment and avoid 
having different implementations.
 
This is an opportunity to get a strong, consistent API which behaves similarly 
across the various platforms, from browsers to servers. We are excited about 
the potential here, because it lets us tell one story.
 
Moving forward, we've agreed to revise the approach to working on the Streams 
spec as follows:
 
Create a 'base' Stream spec, which we will work together on. This will be 
seeded with the base of the WHATWG spec, and we will incorporate various pieces 
from either spec as needed.
This base Stream should:
1. Be the lowest primitive that is independent of any platform
2. Be a layer that could make it into the JS language/ES
3. Be something that could be prototyped directly in JavaScript to showcase it
4. Support the various Stream goals we discussed, such as creation, 
backpressure, read/write behaviors, etc.
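As a rough illustration of those goals (this sketch is not from either spec; all names are hypothetical), a minimal pull-based primitive with a creation path, read/write behaviors, and a simple backpressure signal might look like:

```javascript
// Hypothetical sketch of a base stream primitive - not from either spec.
// Backpressure is modeled as write() returning false once an internal
// queue reaches its high-water mark.
class MiniStream {
  constructor(highWaterMark = 2) {
    this.queue = [];
    this.highWaterMark = highWaterMark;
    this.closed = false;
  }
  // Producer side: returns false when the consumer should back off.
  write(chunk) {
    if (this.closed) throw new Error("stream is closed");
    this.queue.push(chunk);
    return this.queue.length < this.highWaterMark;
  }
  close() { this.closed = true; }
  // Consumer side: an iterator-style { value, done } result.
  read() {
    if (this.queue.length > 0) {
      return { value: this.queue.shift(), done: false };
    }
    return { value: undefined, done: this.closed };
  }
}
```

Because a primitive of this shape is plain JavaScript with no platform dependencies, it could be prototyped and iterated on directly in script, per goals 1-3.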
 
In addition to the base Stream spec, the remaining platform-specific pieces 
which do not fit into the shared-base spec will live in an independent spec. 
This includes things such as support in other APIs (XHR, MediaStreaming, etc) 
or DOM specific scenarios - (createObjectURL()). The current W3C Streams API 
will focus on this aspect of the API surface, while leaving the core 
functionality to be defined in the base spec.
 
Once we've reorganized the components as defined above, we will share further 
details on the locations of the specs and solicit review.
 
Thanks,
Feras
  

RE: Request for feedback: Streams API

2013-12-04 Thread Feras Moussa
Thanks Art.

We've also had Rob (cc'd) interested from the FOMS (Open Media Standards) 
group. I'll follow up with Rob for further feedback from that group.


In the spec, we tried to capture all the various areas we think this spec can 
affect - this is the stream consumers/producers section 
(http://dvcs.w3.org/hg/streams-api/raw-file/tip/Overview.htm#producers-consumers)

In addition to the ones you've outlined, the one that comes to mind from the 
list in the spec would be the web-crypto group.

-Feras


 Date: Wed, 4 Dec 2013 12:57:50 -0500
 From: art.bars...@nokia.com
 To: feras.mou...@hotmail.com; dome...@domenicdenicola.com; 
 vitteayme...@gmail.com
 CC: public-webapps@w3.org
 Subject: Re: Request for feedback: Streams API

 Thanks for the update Feras.

 Re getting "wide review" of the latest [ED], which groups, lists and
 individuals should be asked to review the spec?

 In IRC just now, jgraham mentioned TC39, WHATWG and Domenic. Would
 someone please ask these two groups to review the latest ED?

 Aymeric - would you please ask the WebRTC list(s) to review the latest
 ED or provide the list name(s) and I'll ask them.

 -Thanks, ArtB

 [ED] https://dvcs.w3.org/hg/streams-api/raw-file/tip/Overview.htm

 On 12/4/13 11:27 AM, ext Feras Moussa wrote:
 The editors of the Streams API have reached a milestone where we feel
 many of the major issues that have been identified thus far are now
 resolved and incorporated in the editors' draft.

 The editors' draft [1] has been heavily updated and reviewed over the past
 few weeks to address all concerns raised, including:
 1. Separation into two distinct types - ReadableByteStream and
 WritableByteStream
 2. Explicit support for back pressure management
 3. Improvements to help with pipe() and flow-control management
 4. Updated spec text and diagrams for further clarifications

 There are still a set of bugs being tracked in bugzilla. We would like
 others to please review the updated proposal, and provide any feedback
 they may have (or file bugs).

 Thanks.
 -Feras


 [1] https://dvcs.w3.org/hg/streams-api/raw-file/tip/Overview.htm

 


RE: Comments on version web-apps specs from 2013-10-31

2013-11-20 Thread Feras Moussa
Hi Francois,

Thanks for the feedback.

 From: francois-xavier.kowal...@hp.com
 To: public-webapps@w3.org
 Date: Wed, 20 Nov 2013 20:30:47 +
 Subject: Comments on version web-apps specs from 2013-10-31
 
 Hello
 
 I have a few comments on: 
 https://dvcs.w3.org/hg/streams-api/raw-file/tip/Overview.htm from 2013-10-31. 
 Apologies if it is not the latest version: it took me some time to 
 figure out where the right forum was to send these comments to.
 
 Section 2.1.3:
 1. Close(): For writeable streams, the close() method does not provide a 
 data-completion hook (all-data-flushed-to-destination), unlike the close 
 method that resolved the Promise returned by read().

The version of the spec you linked doesn't differentiate writeable/readable 
streams, but that is something we are considering adding in a future version. 
I don't quite understand what you're referring to here - close is independent 
of future reads. You can call a read after close, and once EOF is reached, the 
promise is resolved and you get a result with eof=true.

 2. Pipe(): the readable Stream (the one that provides the pipe() method) 
 is neutered in case of I/O error, but the state change of the writeable 
 stream is not indicated. What if the write operation failed?

Are you asking what the chain of error propagation is when multiple streams 
are chained?
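A toy model of the close/read interaction described in the reply above (assumed semantics; the draft returns promises from read(), which are omitted here to keep the sketch synchronous and illustrative):

```javascript
// Toy model: read() stays callable after close(); buffered data drains
// first, and only then does a read report eof=true.
function makeStream(chunks) {
  const queue = chunks.slice();
  let closed = false;
  return {
    close() { closed = true; },
    read() {
      if (queue.length > 0) return { data: queue.shift(), eof: false };
      return { data: undefined, eof: closed };
    },
  };
}
```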
 
 Section 3.2:
 1. Shouldn't a FileWrite also be capable of consuming a Stream? (Like 
 XHR-pipe-to-FileWriter)

Yes, I think so - this is a use case we can add.

 2. Shouldn't an XMLHttpRequest also be capable of consuming a Stream? 
 (eg. chunked PUT/POST)?

Section 5.4 covers this - support for posting a Stream. That said, this is a 
section I think will need to be fleshed out more.
 
 br.
 
 —FiX
 
 PS: My request to join this group is pending, so please Cc me in any 
 reply/comment you may have until membership is fixed.
 
 
  

RE: CfC: publish WD of Streams API; deadline Nov 3

2013-11-03 Thread Feras Moussa
 Streams instantiations somewhere make me think to the structured clone
 algorithm, as I proposed before there should be a method like a
 createStream so you just need to say for a given API that it supports
 this method and you don't have to modify the API except for specific
 cases (xhr,ws,etc), like for the structured clone algo, and this is missing.

This is an interesting idea. But I'm not entirely clear on your proposal. Is 
[1] where you mentioned it, or is there another thread I've missed?

You're not proposing changing the stream constructor, but rather also defining 
a generic way an API can add support for streams by implementing a 
strongly-defined createStream method?

Is your thinking to have this in order to give users a consistent way to obtain 
a stream from various APIs? 
On first thought I like the idea, but I think once we settle on a definition of 
'Stream', we can assess what is really required for other APIs to begin 
supporting it. If so, I can create a bug to track this concept.
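For illustration only, the generic opt-in pattern being discussed might look something like this (createStream and toStream are hypothetical names, not from any spec):

```javascript
// Hypothetical opt-in pattern: an API object advertises streaming
// support by exposing a createStream() factory, so callers obtain a
// stream the same way from any supporting API.
function toStream(source) {
  if (typeof source.createStream === "function") {
    return source.createStream();
  }
  throw new TypeError("source does not support streaming");
}

// A stand-in API object opting in:
const fakeXhrLike = {
  createStream() { return { kind: "stream" }; },
};
```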

[1] http://lists.w3.org/Archives/Public/public-webapps/2013OctDec/0246.html



 Date: Sun, 3 Nov 2013 23:16:12 +0100
 From: vitteayme...@gmail.com
 To: art.bars...@nokia.com
 CC: public-webapps@w3.org
 Subject: Re: CfC: publish WD of Streams API; deadline Nov 3

 Yes, with good results, groups are throwing the ball to others... I
 don't know right now all the groups that might need to be involved,
 that's the reason for my question.

 4 days out without internet connection, usually one email every two
 weeks on the subject and suddenly tons of emails, looks like a
 conspiracy...

 I will reread the threads (still perplexed about some issues, a txt stream
 is a binary stream that should be piped to textEncoder/Decoder from my
 standpoint, making it a special case just complicates everything, maybe
 it's too late to revert this) but it looks like the consensus is to wait
 for Domenic's proposal, OK but as I mentioned he missed some points in
 the current proposal and it's interesting to read carefully the Overlap
 thread, and I find it important to have a simple way to handle
 ArrayBuffer, View, Blob without converting all the time.

 Streams instantiations somewhere make me think to the structured clone
 algorithm, as I proposed before there should be a method like a
 createStream so you just need to say for a given API that it supports
 this method and you don't have to modify the API except for specific
 cases (xhr,ws,etc), like for the structured clone algo, and this is missing.

 Regards

 Aymeric

 Le 03/11/2013 19:02, Arthur Barstow a écrit :
 Hi Aymeric,

 On 10/29/13 7:22 AM, ext Aymeric Vitte wrote:
 Who is coordinating each group that should get involved?

 I thought you agreed to do that ;).

 MediaStream for example should be based on the Stream interface and
 all related streams proposals.

 More seriously though, this is good to know, and if there is
 additional coordination that needs to be done, please let us know.

 -Thanks, ArtB



 --
 Peersm : http://www.peersm.com
 node-Tor : https://www.github.com/Ayms/node-Tor
 GitHub : https://www.github.com/Ayms

 


RE: Defining generic Stream than considering only bytes (Re: CfC: publish WD of Streams API; deadline Nov 3)

2013-10-31 Thread Feras Moussa
A few comments inline below -


 From: tyosh...@google.com 
 Date: Thu, 31 Oct 2013 13:23:26 +0900 
 To: d...@deanlandolt.com 
 CC: art.bars...@nokia.com; public-webapps@w3.org 
 Subject: Defining generic Stream than considering only bytes (Re: CfC: 
 publish WD of Streams API; deadline Nov 3) 
 
 Hi Dean, 
 
 On Thu, Oct 31, 2013 at 11:30 AM, Dean Landolt 
 d...@deanlandolt.commailto:d...@deanlandolt.com wrote: 
 I really like the general concepts of this proposal, but I'm confused 
 by what seems like an unnecessary limiting assumption: why assume all 
 streams are byte streams? This is a mistake node recently made in its 
 streams refactor that has led to an objectMode and added cruft. 
 
 Forgive me if this has been discussed -- I just learned of this today. 
 But as someone who's been slinging streams in javascript for years I'd 
 really hate to see the standard stream hampered by this bytes-only 
 limitation. The node ecosystem clearly demonstrates that streams are 
 for more than bytes and (byte-encoded) strings. 
 
 
 To glue Streams with existing binary handling infrastructure such as 
 ArrayBuffer, Blob, we should have some specialization for Stream 
 handling bytes rather than using generalized Stream that would 
 accept/output an array or single object of the type. Maybe we can 
 rename Streams API to ByteStream not to occupy the name Stream that 
 sounds like more generic, and start standardizing generic Stream. 

Dean, it sounds like your concern isn't just around the naming, but rather how 
data is read out of a stream. I've reviewed both the Node Streams and Buffer 
APIs previously, and from my understanding the data is provided as either a 
Buffer or String. This is on-par with ArrayBuffer/String. What data do you want 
to obtain that is missing, and for what scenario? Are these data types that 
already exist in the web platform, or new types you think are missing?

 
 In my perfect world any arbitrary iterator could be used to 
 characterize stream chunks -- this would have some really interesting 
 benefits -- but I suspect this kind of flexibility would be overkill 
 for now. But there's no good reason bytes should be the only thing 
 people can chunk up in streams. And if we're defining streams for the 
 whole platform they shouldn't just be tied to a few very specific 
 file-like use cases. 
 If streams could also consist of chunks of strings (real, native 
 strings) a huge swath of the API could disappear. All of readType, 
 readEncoding and charset could be eliminated, replaced with simple, 
 composable transforms that turn byte streams (of, say, utf-8) into 
 string streams. And vice versa. 
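The kind of composable byte-to-string transform being described could be sketched like this (illustrative only; plain arrays of chunks stand in for actual streams):

```javascript
// Sketch of a composable byte-to-string transform: utf-8 byte chunks in,
// native string chunks out. Arrays stand in for actual streams here.
const decoder = new TextDecoder("utf-8");
function toStringChunks(byteChunks) {
  // { stream: true } lets multi-byte sequences span chunk boundaries.
  return byteChunks.map(bytes => decoder.decode(bytes, { stream: true }));
}
```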
 
 
 So, for example, XHR would be the point of decoding and it returns a 
 Stream of DOMStrings? 
 
 Of course the real draw of this approach would be when chunks are 
 neither blobs nor strings. Why couldn't chunks be arrays? The arrays 
 could contain anything (no need to reserve any value as a sigil). 
 Regardless of the chunk type, the zero object for any given type 
 wouldn't be `null` (it would be something like '' or []). That means we 
 can use null to distinguish EOF, and `chunk == null` would make a 
 perfectly nice (and unambiguous) EOF sigil, eliminating yet more API 
 surface. This would give us clean object-mode streams for free, and 
 without node's arbitrary limitations. 
 
 For several reasons, I chose to use .eof rather than using null. One of them 
 is to allow the non-empty final chunk to signal EOF rather than requiring one 
 more read() call. 
 
 This point can be re-discussed. 

I thought EOF made sense here as well, but it's something that can be changed. 
Your proposal is interesting - is something like this currently implemented 
anywhere? This behavior feels like it'd require several changes elsewhere, 
since some APIs and libraries may explicitly look for an EOF.
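The two signaling styles under discussion can be contrasted in miniature (illustrative only; neither snippet is from a spec):

```javascript
// Style A (the draft's .eof flag): a non-empty final chunk can carry
// data and signal EOF in the same read.
const finalRead = { data: "last-bytes", eof: true };

// Style B (null sentinel): EOF is its own read returning null, so
// draining always costs one extra read() call.
const pending = ["last-bytes", null];
function readB() { return pending.shift(); }
```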

 
 The `size` of an array stream would be the total length of all array 
 chunks. As I hinted before, we could also leave the door open to 
 specifying chunks as any iterable, where `size` (if known) would just 
 be the `length` of each chunk (assuming chunks even have a `length`). 
 This would also allow individual chunks to be built of generators, 
 which could be particularly interesting if the `size` argument to 
 `read` was specified as a maximum number of bytes rather than the total 
 to return -- completely sensible considering it has to behave this way 
 near the end of the stream anyway... 
 
 I don't really understand the last point. Could you please elaborate 
 the story and benefit? 
 
 IIRC, it's considered to be useful and important to be able to cut an 
 exact requested size of data into an ArrayBuffer object and get 
 notified (the returned Promise gets resolved) only when it's ready. 
 
 This would lead to a pattern like `stream.read(Infinity)`, which would 
 essentially say "give me everything you've got as soon as you can". 
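Treating the size argument as a maximum can be sketched like this (illustrative; strings stand in for byte buffers):

```javascript
// Sketch: read(size) returns *up to* size units - exactly the behavior
// any reader already needs near the end of the stream. Passing Infinity
// then means "give me everything you've got".
function readUpTo(buffered, size) {
  const n = Math.min(size, buffered.length);
  return { chunk: buffered.slice(0, n), rest: buffered.slice(n) };
}
```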
 
 In current proposal, read() i.e. read() with 

RE: publish WD of Streams API; deadline Nov 3

2013-10-31 Thread Feras Moussa
Agreed. Some of the points listed appear to be things already addressed. 
Takeshi and I have some feedback on the initial mail, but will wait and provide 
thoughts on the proposal instead. Looking forward to seeing it.


 From: tyosh...@google.com 
 Date: Fri, 1 Nov 2013 12:18:47 +0900 
 To: ann...@annevk.nl 
 CC: dome...@domenicdenicola.com; art.bars...@nokia.com; public-webapps@w3.org 
 Subject: Re: publish WD of Streams API; deadline Nov 3 
 
 OK. There seems to be some disconnect, but I'm fine with waiting for 
 Domenic's proposal first. 
 
 Takeshi 
 
 
 On Thu, Oct 31, 2013 at 7:41 PM, Anne van Kesteren 
 ann...@annevk.nlmailto:ann...@annevk.nl wrote: 
 On Wed, Oct 30, 2013 at 6:04 PM, Domenic Denicola 
 dome...@domenicdenicola.commailto:dome...@domenicdenicola.com wrote: 
 I have concrete suggestions as to what such an API could look 
 like—and, more importantly, how its semantics would significantly 
 differ from this one—which I hope to flesh out and share more broadly 
 by the end of this weekend. However, since the call for comments phase 
 has commenced, I thought it important to voice these objections as soon 
 as possible. 
 
 Given how long we have been trying to figure out streams, waiting a 
 little longer to see Domenic's proposal should be fine I think. No 
 need to start rushing things through the process now. (Although on the 
 flipside at some point we will need to start shipping something.) 
 
 
 -- 
 http://annevankesteren.nl/ 
 
 


RE: CfC: publish WD of Streams API; deadline Nov 3

2013-10-31 Thread Feras Moussa
Yes, WebSockets was missing - I've gone ahead and updated the spec to include 
it.

Thanks for sharing the links, the content is well thought out. In particular, 
your diagram does a good job summarizing some of the key consumers and 
producers that come into play regarding Streams. I'll review it in detail.

DataChannels also seem like a possible candidate, although I'm not yet very 
familiar with them. This can be something reviewed and thought through, and 
added accordingly.

 Who is coordinating each group that should get involved? MediaStream for 
example should be based on the Stream interface and all related streams 
proposals.

Once we come to a consensus in the WG on what Streams look like and their 
role, we can begin to coordinate what the impact is on other groups.

Thanks,
Feras



 Date: Fri, 1 Nov 2013 16:05:22 +1100
 From: rob...@mob-labs.com
 To: public-webapps@w3.org
 Subject: Re: CfC: publish WD of Streams API; deadline Nov 3

 Along with WebSockets as Aymeric mentioned...WebRTC DataChannels are
 also missing.

 And I think Aymeric's point about MediaStream is important too...but
 there is very strong push-back from within the Media Capture & Streams
 TF that they don't think this is relevant 8/

 Also, here's a couple of links for things I've shared/proposed recently
 related to this.

 public message
 http://lists.w3.org/Archives/Public/public-media-capture/2013Sep/0229.html

 presentation
 http://www.slideshare.net/robman/web-standards-for-ar-workshop-at-ismar13

 code
 https://github.com/buildar/getting_started_with_webrtc#image_processing_pipelinehtml


 All thoughts and feedback welcome.


 roBman


 On 29/10/13 10:22 PM, Aymeric Vitte wrote:
 I have suggested some additions/changes in my latest reply to the
 Overlap thread.

 The list of streams producers/consumers is not final but obviously
 WebSockets are missing.

 Who is coordinating each group that should get involved? MediaStream
 for example should be based on the Stream interface and all related
 streams proposals.

 Regards,

 Aymeric

 Le 28/10/2013 16:29, Arthur Barstow a écrit :
 Feras and Takeshi have begun merging their Streams proposal and this
 is a Call for Consensus to publish a new WD of Streams API using the
 updated ED as the basis:

 https://dvcs.w3.org/hg/streams-api/raw-file/tip/Overview.htm

 Please note the Editors may update the ED before the TR is published
 (but they do not intend to make major changes during the CfC).

 Agreement to this proposal: a) indicates support for publishing a new
 WD; and b) does not necessarily indicate support of the contents of
 the WD.

 If you have any comments or concerns about this proposal, please
 reply to this e-mail by November 3 at the latest. Positive response
 to this CfC is preferred and encouraged and silence will be assumed
 to mean agreement with the proposal.

 -Thanks, ArtB



 


RE: [streams-api] Seeking status and plans

2013-10-12 Thread Feras Moussa

 Date: Fri, 11 Oct 2013 08:47:23 -0400
 From: art.bars...@nokia.com
 To: tyosh...@google.com; feras.mou...@hotmail.com
 CC: public-webapps@w3.org
 Subject: Re: [streams-api] Seeking status and plans

 On 10/11/13 8:05 AM, ext Takeshi Yoshino wrote:
 On Thu, Oct 10, 2013 at 11:34 PM, Feras Moussa
 feras.mou...@hotmail.com mailto:feras.mou...@hotmail.com wrote:

 Apologies for the delay, I had broken one of my mail rules and
 didn't see this initially.

 Aymeric is correct - there have been a few threads and revisions.
 A more up-to-date version is the one Aymeric linked -
 http://htmlpreview.github.io/?https://github.com/tyoshino/stream/blob/master/streams.html
 The above version incorporates both promises and streams and is a
 more refined version of Streams.

 From other threads on Stream, it became apparent that there were a
 few pieces of the current Streams API ED that were designed around
 older paradigms and needed refining to be better aligned with
 current APIs. I think it does not make sense to have two
 different specs, and instead have a combined one that we move
 forward.

 I can work with Takeshi on getting his version incorporated into
 the Streams ED, which we can then move forward with.


 I'm happy to.

 OK, thanks Feras and Takeshi. Re PubStatus, I added Takeshi as an Editor
 and updated the Plan to reflect the impending integration.

 I think it would be helpful if Feras' spec was updated as soon as
 possible to clearly state the intent to integrate Takeshi's spec.

 -Thanks, Art





The spec has been updated with a note clarifying the state. I will work with 
Takeshi on getting the ED updated accordingly.

Thanks, 
Feras 


RE: [streams-api] Seeking status and plans

2013-10-10 Thread Feras Moussa
Apologies for the delay, I had broken one of my mail rules and didn't see this 
initially.

Aymeric is correct - there have been a few threads and revisions. A more 
up-to-date version is the one Aymeric linked - 
http://htmlpreview.github.io/?https://github.com/tyoshino/stream/blob/master/streams.html
The above version incorporates both promises and streams and is a more refined 
version of Streams. 

From other threads on Stream, it became apparent that there were a few pieces 
of the current Streams API ED that were designed around older paradigms and 
needed refining to be better aligned with current APIs. I think it does not 
make sense to have two different specs; instead we should have a combined one 
that we move forward with. 

I can work with Takeshi on getting his version incorporated into the Streams 
ED, which we can then move forward with.

Thanks,
Feras


 Date: Thu, 10 Oct 2013 09:32:20 -0400
 From: art.bars...@nokia.com
 To: vitteayme...@gmail.com; feras.mou...@hotmail.com; tyosh...@google.com
 CC: public-webapps@w3.org
 Subject: Re: [streams-api] Seeking status and plans
 
 On 10/10/13 6:26 AM, ext Aymeric Vitte wrote:
 I think the plan should be more here now: 
 http://lists.w3.org/Archives/Public/public-webapps/2013OctDec/0049.html
 
 There are indeed at least two specs here:
 
 [1] Feras' https://dvcs.w3.org/hg/streams-api/raw-file/tip/Overview.htm
 
 [2] Takeshi's 
 http://htmlpreview.github.io/?https://github.com/tyoshino/stream/blob/master/streams.html
 
 Among the Qs I have here are ...
 
 * What is the implementation status of these specs?
 
 * Would it make sense or be useful to merge or layer the specs, or 
 should we only work on one of these specs?
 
 * Who favors WebApps stopping work on [1] and starting work on [2]?
 
 * Would anyone object to WebApps stopped working on [1]? If yes, are you 
 willing to help lead the effort to move Feras' spec forward?
 
 * Takeshi - I noticed you are not a member of WebApps. Are you willing 
 to work on [2] within the context of WebApps?
 
 -Thanks, AB
 
 

 Regards

 Aymeric

 Le 02/10/2013 18:32, Arthur Barstow a écrit :
 Hi Feras,

 If any of the data for the Streams API spec in [PubStatus] is not 
 accurate, please provide corrections.

 Also, please see the following thread and let us know your plan for 
 this spec 
 http://lists.w3.org/Archives/Public/public-webapps/2013JulSep/0599.html.

 -Thanks, ArtB

 [PubStatus] http://www.w3.org/2008/webapps/wiki/PubStatus


 
 


RE: Overlap between StreamReader and FileReader

2013-05-16 Thread Feras Moussa
Can you please go into a bit more detail? I've read through the thread, and it 
mostly focuses on the details of how a Stream is received from XHR and what 
behaviors can be expected - it only lightly touches on how you can operate on a 
stream after it is received.
The StreamReader by design mimics the FileReader, in order to give a consistent 
experience to developers. If we agree the FileReader has some flaws and we want 
to take an opportunity to address them with StreamReader, or an alternative, 
then I think that is reasonable. I do agree the API should allow for scenarios 
where data can be discarded, given that is an advantage of a Stream over a Blob.
That said, Anne, what is your suggestion for how Streams can be consumed?
Also, apologies for being a bit late to the conversation - I missed the 
conversations the past month. I'm now hoping to solicit more feedback and 
update the Streams spec accordingly.
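The discard advantage mentioned above can be shown schematically (illustrative sketch, not spec API):

```javascript
// Sketch: a stream consumer processes each chunk and then drops it, so
// memory stays bounded by chunk size. A Blob, by contrast, keeps every
// byte addressable for its whole lifetime.
function consume(chunks, process) {
  let total = 0;
  for (const chunk of chunks) {
    total += process(chunk); // use the chunk...
    // ...then fall through; nothing retains a reference to it.
  }
  return total;
}
```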

 Date: Thu, 16 May 2013 18:41:21 +0100
 From: ann...@annevk.nl
 To: travis.leith...@microsoft.com
 CC: tyosh...@google.com; slightly...@google.com; public-webapps@w3.org
 Subject: Re: Overlap between StreamReader and FileReader
 
 On Thu, May 16, 2013 at 6:31 PM, Travis Leithead
 travis.leith...@microsoft.com wrote:
  Since we have Streams implemented to some degree, I'd love to hear 
  suggestions to improve it relative to IO. Anne can you summarize the points 
  you've made on the other various threads?
 
 I recommend reading through
 http://lists.w3.org/Archives/Public/public-webapps/2013JanMar/thread.html#msg569
 
 Problems:
 
 * Too much complexity for being a Blob without synchronous size.
 * The API is bad. The API for File is bad too, but we cannot change
 it, this however is new.
 
 And I think we really want an IO API that's not just about incremental 
 reads, but can actively discard incoming data once it's processed.
 
 
 --
 http://annevankesteren.nl/
 
  

RE: [Streams API] typo

2013-01-17 Thread Feras Moussa
Thanks, I've gone ahead and updated the Streams API spec accordingly.

 Date: Thu, 17 Jan 2013 10:46:16 +0100
 From: cyril.concol...@telecom-paristech.fr
 To: public-webapps@w3.org
 Subject: [Streams API] typo
 
 Hi all,
 
 I noticed a typo in the W3C Editor's Draft 25 October 2012 of Streams 
 API (section 2.2): "not allow and future reads" should be "not allow 
 *any* future reads".
 
 Cyril
 
 -- 
 Cyril Concolato
 Maître de Conférences/Associate Professor
 Groupe Multimedia/Multimedia Group
 Telecom ParisTech
 46 rue Barrault
 75 013 Paris, France
 http://concolato.wp.mines-telecom.fr/
 
 
  

Affiliation change and Streams API status

2013-01-02 Thread Feras Moussa
Hi all,

This is to announce that my affiliation has changed and I no longer work 
for Microsoft. I will remain part of the WG as an invited expert, and will 
continue on as editor of the Streams API spec.
I've recently made a set of changes to the Streams API[1] and believe the spec 
is feature complete in breadth and will be ready for FPWD. I expect to start 
moving the spec towards FPWD soon.
Thanks, Feras
[1]. http://dvcs.w3.org/hg/streams-api/raw-file/tip/Overview.htm
  

RE: [File API] Blob URI creation

2012-08-14 Thread Feras Moussa
In general we are OK with changing it to the autoRevoke behavior below, but 
have some concerns around changing the default behavior. 
Changing the default behavior is a breaking change, and any apps which expect 
the URL to work multiple times will now be broken. In Windows 8 we also 
implemented the oneTimeOnly behavior and it was very widely used; these 
consumers will be broken as well.

We would like to support autoRevoke as the default as it helps reduce the 
chance of leaking unintentionally, but we think developers should have a way 
to feature-detect the new change. If a developer can feature-detect which 
default behavior is present, then they can reason about what to expect from 
the API.

If a way is provided for developers to detect the change, then we support this 
change. Additionally, we support the changes outlined in the bug at 
https://www.w3.org/Bugs/Public/show_bug.cgi?id=17765, and feel it resolves 
several of the edge cases we saw when implementing the File API.


From: Arun Ranganathan [mailto:aranganat...@mozilla.com] 
Sent: Wednesday, July 11, 2012 1:20 PM
To: Glenn Maynard
Cc: Rich Tibbett; public-webapps; Arun Ranganathan; Jonas Sicking
Subject: Re: [File API] Blob URI creation

On May 30, 2012, at 6:48 PM, Glenn Maynard wrote:


On your main question, I've had the same thought in the past--a url property 
on Blob which simply creates a new auto-revoking blob URL.  I didn't bring it 
up since I'm not sure if creating a URL for a blob is actually something you do 
so often that it's worth having a shortcut.  If so, a function is probably 
better than a property--more future-proof, and it'd be unusual on the platform 
to have a property that returns a different value every time you read it.

On Wed, May 30, 2012 at 1:50 PM, Rich Tibbett ri...@opera.com wrote:
Yes, this might be a better solution. I was working on what was available in 
the editor's draft and looking for a way to remove the need to ever call 
revokeObjectUrl.

This is part of what's wrong with oneTimeOnly--it *doesn't* actually completely 
remove the need to call revokeObjectUrl.  For example:

function f(blob) {
    var url = URL.createObjectURL(blob, {oneTimeOnly: true});
    if (showImage)
        img.src = url;
    else
        URL.revokeObjectURL(url);
}

Without the revoke case, the URL (and so the whole blob) is leaked as it's 
never actually used.  autoRevoke doesn't have this problem.

Arun/Jonas: Can we hide this feature in the spec before more people implement 
it, or at least tag it with "not ready for implementations" or something?


I'll do one better, and introduce autoRevoke semantics:

http://www.w3.org/TR/2012/WD-FileAPI-20120712/#creating-revoking

By default, this does not need a corresponding revokeObjectURL() call.  In 
order for Blob URLs to persist past a stable state (for that unit of script) 
createObjectURL has to be invoked with autoRevoke set to false.



That is, you shouldn't ever have to pass a Blob URI obtained via Blob.getURL 
through revokeObjectUrl because it assumes some auto-revocation behavior. Using 
microtasks to release at the next stable state does seem ok as long as 
developers have a very good understanding of when a Blob URI will be implicitly 
revoked. Saying that you can use a Blob URI exactly once, as per oneTimeOnly, 
could still end up being easier to understand though.

(s/microtasks/stable states/; they're not quite the same)
It's actually a bit hard to understand (or easy to misunderstand), since 
there's no clear concept of using a URL.  For example, if you start two XHR's 
on the same URL one after the other in the same script, the order in which the 
fetches actually begin is undefined (they happen in different task queues), so 
which would succeed is undefined.  (Some work would also be needed here for the 
autoRevoke approach, but it's much simpler.)

autoRevoke is pretty simple from the user's perspective: the URL is revoked 
when your script returns to the browser.
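A toy model of that lifetime rule (not the File API itself; all names here are made up):

```javascript
// Toy model of autoRevoke: URLs minted during a script "turn" are all
// revoked when the turn ends (the stable state), with no explicit
// revokeObjectURL() calls needed.
const liveUrls = new Set();
let counter = 0;
function mintUrl() {
  const url = "blob:toy-" + counter++;
  liveUrls.add(url);
  return url;
}
function endOfScriptTurn() { liveUrls.clear(); } // stable state reached
```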


In fact, I think this addresses the lion's share of use cases, and if a 
developer wants to explicitly create longer lasting Blob URLs, they have that 
option (just not by default).

-- A*





RE: File API oneTimeOnly is too poorly defined

2012-04-09 Thread Feras Moussa
We agree that the spec text should be updated to more clearly define what 
dereference means. 
When we were trying to solve this problem, we looked for a simple and 
consistent way that a developer can understand what dereferencing is. 
What we came up with was the following definition: revoking should happen at 
the first time that any of the bits of the Blob are accessed. 

This is a simple concept for a developer to understand, and is not complex to 
spec or implement. This also helps avoid having to explicitly spec out in the 
File API spec the various edge cases that different APIs exhibit – such as 
XHR open/send versus img.src load versus CSS link href versus when a URL gets 
resolved or not. Instead those behaviors will continue to be documented in 
their respective specs.

The definition above would imply that some cases, such as a cross-origin 
request to a Blob URL, do not revoke, but we think that is OK since it 
implies a developer error. If we use the above definition for dereferencing, 
then in the XHR example you provided, xhr.send would be responsible for 
revoking the URL.
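The "revoke at first access to the bits" rule can be modeled like so (toy model, not the File API; names are made up):

```javascript
// Toy model of oneTimeOnly dereferencing: the first access to the
// blob's bits revokes the URL; any later lookup fails.
const registry = new Map();
let nextId = 0;
function createUrl(bits) {
  const url = "blob:toy-" + nextId++;
  registry.set(url, bits);
  return url;
}
function dereference(url) {
  const bits = registry.get(url);
  registry.delete(url); // revoked at first access
  return bits;
}
```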

Thanks,
Feras

From: Charles Pritchard [mailto:ch...@jumis.com] 
Sent: Thursday, March 29, 2012 1:03 AM
To: Glenn Maynard
Cc: Jonas Sicking; public-webapps WG
Subject: Re: File API oneTimeOnly is too poorly defined

Any feedback on what exists in modern implementations? MS seems to have the 
most hard-line stance when talking about this API.

When it comes to it, we ought to look at what happened in the latest harvest. 
IE10, O12, C19, and so forth.


On Mar 28, 2012, at 6:12 PM, Glenn Maynard gl...@zewt.org wrote:
On Wed, Mar 28, 2012 at 7:49 PM, Jonas Sicking jo...@sicking.cc wrote:
 This would still require work in each URL-consuming spec, to define taking a
 reference to the underlying blob's data when it receives an object URL.  I
 think this is inherent to the feature.
This is an interesting idea for sure. It doesn't solve any of the
issues I brought up, so we still need to define when dereferencing
happens. But it does solve the problem of the URL leaking if it never
gets dereferenced, which is nice.

Right, that's what I meant above.  The dereferencing step needs to be 
defined no matter what you do.  This just makes it easier to define 
(eliminating task ordering as a source of problems).

Also, I still think that all APIs should consistently do that as soon as they 
first see the URL.  For example, XHR should do it in open(), not in send().  
That makes it easy for developers to understand when the dereferencing 
actually happens (in the general case, for all APIs).

One other thing: dereferencing should take a reference to the underlying 
data of the Blob, not the Blob itself, so it's unaffected by neutering 
(transfers and Blob.close).  That avoids a whole category of problems.




RE: [FileAPI] Deterministic release of Blob proposal

2012-03-07 Thread Feras Moussa
 Then let's try this again.

 var a = new Image();
 a.onerror = function() { console.log('Oh no, my parent was neutered!'); }; 
 a.src = URL.createObjectURL(blob); blob.close();

 Is that error going to hit?
I documented this in my proposal, but in this case the URI would have 
been minted prior to calling close. The Blob URI would still resolve 
until it has been revoked, so in your example onerror would not be hit 
due to calling close.

 var a = new Worker('#');
 a.postMessage(blob);
 blob.close();

 Is that blob going to make it to the worker?
The structured clone algorithm (SCA) runs synchronously (so that subsequent 
changes to mutable values in the object don't impact the message), so the 
blob will have been cloned prior to close. 
The above would work as expected.
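Why that is safe can be shown with a small model. Everything here (structuredCloneLike, FakeBlob, fakePostMessage) is an invented stand-in for illustration – the point is only that the clone is taken synchronously, before close() runs:

```javascript
// Stand-in for the structured clone algorithm: a synchronous deep copy.
function structuredCloneLike(value) {
  return JSON.parse(JSON.stringify(value));
}

// Invented blob model: close() releases the backing data.
class FakeBlob {
  constructor(data) { this.data = data; this.closed = false; }
  close() { this.data = null; this.closed = true; }
}

const delivered = [];
function fakePostMessage(message) {
  // The snapshot happens synchronously, like the SCA; only delivery is async.
  delivered.push(structuredCloneLike(message));
}

const blob = new FakeBlob('pixels');
fakePostMessage({ blob: { data: blob.data } });
blob.close(); // too late to affect the already-taken clone
// delivered[0].blob.data is still 'pixels' even though blob is now closed
```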


RE: [FileAPI] Deterministic release of Blob proposal

2012-03-07 Thread Feras Moussa
 -Original Message-
 From: Anne van Kesteren [mailto:ann...@opera.com] 
 Sent: Wednesday, March 07, 2012 12:49 AM
 To: Arun Ranganathan; Feras Moussa
 Cc: Adrian Bateman; public-webapps@w3.org; Ian Hickson
 Subject: Re: [FileAPI] Deterministic release of Blob proposal

 On Wed, 07 Mar 2012 02:12:39 +0100, Feras Moussa fer...@microsoft.com
 wrote:
  xhr.send(blob);
  blob.close(); // method name TBD
 
  In our implementation, this case would fail. We think this is 
  reasonable because the need for having a close() method is to allow 
  deterministic release of the resource.

 Reasonable or not, "would fail" is not something we can put in a standard.  
 What happens exactly? What if a connection is established and data is being 
 transmitted already?
In the case where close was called on a Blob that is being used in a 
pending request, then the request should be canceled. The expected 
result is the same as if abort() was called.


RE: FileReader abort, again

2012-03-06 Thread Feras Moussa
 Anne confirmed that a new open in onerror or onload /would/ suppress the 
 loadend of the first send.  
So we would want a new read/write in onload or onerror to do the same, not 
just those in onabort.
We encountered this same ambiguity in the spec, and we agree with the above. 
This is also what we 
have implemented and shipped in the consumer preview.

 Hack 2: Add a virtual generation counter/timestamp, not exposed to 
 script.  Increment it in read*, check it in abort before sending 
 loadend.  This is kind of complex, but works [and might be how I end 
 up implementing this in Chrome].
Of the list of 'hacks', we also implemented hack #2 to get the correct behavior.
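Hack #2 amounts to a few lines of bookkeeping. FakeFileReader below is an invented sketch of the counter check, not the real FileReader interface:

```javascript
// Invented sketch of "hack 2": each read* bumps a generation counter that
// is not exposed to script; abort checks it before firing loadend, so a
// read started from inside the abort handler suppresses abort's loadend.
class FakeFileReader {
  constructor() { this.generation = 0; this.fired = []; this.onabort = null; }
  readAsText() {
    this.generation++; // a new read: any stale abort must stay quiet
  }
  abort() {
    const gen = this.generation;      // remember which read we are aborting
    this.fired.push('abort');
    if (this.onabort) this.onabort(); // handler may start a new read here
    // Fire loadend only if no new read started during the abort event.
    if (gen === this.generation) this.fired.push('loadend');
  }
}

const reader = new FakeFileReader();
reader.readAsText();
reader.onabort = () => reader.readAsText(); // restart from within onabort
reader.abort();
// reader.fired is ['abort'] only -- the new read suppressed abort's loadend
```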



 -Original Message-
 From: Eric U [mailto:er...@google.com]
 Sent: Tuesday, March 06, 2012 12:16 PM
 To: Arun Ranganathan
 Cc: Web Applications Working Group WG
 Subject: Re: FileReader abort, again
 
 On Mon, Mar 5, 2012 at 2:01 PM, Eric U er...@google.com wrote:
  On Thu, Mar 1, 2012 at 11:20 AM, Arun Ranganathan
  aranganat...@mozilla.com wrote:
  Eric,
 
So we could:
1. Say not to fire a loadend if onloadend or onabort
 
  Do you mean if onload, onerror, or onabort...?
 
 
  No, actually.  I'm looking for the right sequence of steps that results in
 abort's loadend not firing if terminated by another read*.  Since abort will 
 fire
 an abort event and a loadend event as spec'd
 (http://dev.w3.org/2006/webapi/FileAPI/#dfn-abort), if *those* event
 handlers initiate a readAs*, we could then suppress abort's loadend.  This
 seems messy.
 
  Ah, right--so a new read initiated from onload or onerror would NOT
  suppress the loadend of the first read.  And I believe that this
  matches XHR2, so we're good.  Nevermind.
 
 No, I retract that.  In
 http://lists.w3.org/Archives/Public/public-webapps/2011OctDec/1627.html
 Anne confirmed that a new open in onerror or onload /would/ suppress the
 loadend of the first send.  So we would want a new read/write in onload or
 onerror to do the same, not just those in onabort.
 
  Actually, if we really want to match XHR2, we should qualify all the
  places that we fire loadend.  If the user calls XHR2's open in
  onerror or onload, that cancels its loadend.  However, a simple
  check on readyState at step 6 won't do it.  Because the user could
  call readAsText in onerror, then call abort in the second read's
  onloadstart, and we'd see readyState as DONE and fire loadend twice.
 
  To emulate XHR2 entirely, we'd need to have read methods dequeue
 any
  leftover tasks for previous read methods AND terminate the abort
  algorithm AND terminate the error algorithm of any previous read
  method.  What a mess.
 
 
  This may be the way to do it.
 
  The problem with emulating XHR2 is that open() and send() are distinct
 concepts in XHR2, but in FileAPI, they are the same.  So in XHR2 an open()
 canceling abort does make sense; abort() cancels a send(), and thus an
 open() should cancel an abort().  But in FileAPI, our readAs* methods are
 equivalent to *both* open() and send().  In FileAPI, an abort() cancels a
 readAs*; we now have a scenario where a readAs* may cancel an
 abort().  How to make that clear?
 
  I'm not sure why it's any more confusing that read* is open+send.
  read* can cancel abort, and abort can cancel read*.  OK.
 
 
  Perhaps there's a simpler way to say successfully calling a read
  method inhibits any previous read's loadend?
 
  I'm in favor of any shorthand :)  But this may not do justice to each
 readAs* algorithm being better defined.
 
  Hack 1: Don't call loadend synchronously.  Enqueue it, and let read*
  methods clear the queues when they start up.  This differs from XHR,
  though, and is a little odd.
 
 Still works, but needs to be applied in multiple places.
 
  Hack 2: Add a virtual generation counter/timestamp, not exposed to
  script.  Increment it in read*, check it in abort before sending
  loadend.  This is kind of complex, but works [and might be how I end
  up implementing this in Chrome].
 
 
 Still works, but needs to be applied in multiple places.
 
  But really, I don't think either of those is better than just saying,
  in read*, something like terminate the algorithm for any abort
  sequence being processed.
 
 ...or any previously-initiated read being processed.
 





RE: [FileAPI] Deterministic release of Blob proposal

2012-03-06 Thread Feras Moussa
 From: Arun Ranganathan [mailto:aranganat...@mozilla.com] 
 Sent: Tuesday, March 06, 2012 1:27 PM
 To: Feras Moussa
 Cc: Adrian Bateman; public-webapps@w3.org; Ian Hickson; Anne van Kesteren
 Subject: Re: [FileAPI] Deterministic release of Blob proposal

 Feras,

 In practice, I think this is important enough and manageable enough to 
 include in the spec., and I'm willing to slow the train down if necessary, 
 but I'd like to understand a few things first.  Below:
 
  At TPAC we discussed the ability to deterministically close blobs with a 
  few 
  others.
  
  As we’ve discussed in the createObjectURL thread[1], a Blob may represent 
  an expensive resource (eg. expensive in terms of memory, battery, or disk 
  space). At present there is no way for an application to deterministically 
  release the resource backing the Blob. Instead, an application must rely on 
  the resource being cleaned up through a non-deterministic garbage collector 
  once all references have been released. We have found that not having a way 
  to deterministically release the resource causes a performance impact for a 
  certain class of applications, and is especially important for mobile 
  applications 
  or devices with more limited resources.
  
  In particular, we’ve seen this become a problem for media intensive 
  applications 
  which interact with a large number of expensive blobs. For example, a 
  gallery 
  application may want to cycle through displaying many large images 
  downloaded 
  through websockets, and without a deterministic way to immediately release 
  the reference to each image Blob, can easily begin to consume vast amounts 
  of 
  resources before the garbage collector is executed. 
  
  To address this issue, we propose that a close method be added to the Blob 
  interface.
  When called, the close method should release the underlying resource of the 
  Blob, and future operations on the Blob will return a new error, a 
  ClosedError. 
  This allows an application to signal when it's finished using the Blob.
  

 Do you agree that Transferable 
 (http://dev.w3.org/html5/spec/Overview.html#transferable-objects) seems to be 
 what 
 we're looking for, and that Blob should implement Transferable?  

 Transferable addresses the use case of copying across threads, and neuters 
 the source 
 object (though honestly, the word neuter makes me wince -- naming is a 
 problem on the 
 web).  We can have a more generic method on Transferable that serves our 
 purpose here, 
 rather than *.close(), and Blob can avail of that.  This is something we can 
 work out with HTML, 
 and might be the right thing to do for the platform (although this creates 
 something to think 
 about for MessagePort and for ArrayBuffer, which also implement Transferable).

 I agree with your changes, but am confused by some edge cases:
 To support this change, the following changes in the File API spec are 
 needed:
 
 * In section 6 (The Blob Interface)
  - Addition of a close method. When called, the close method releases the 
 underlying resource of the Blob. Close renders the blob invalid, and further 
 operations such as URL.createObjectURL or the FileReader read methods on 
 the closed blob will fail and return a ClosedError.  If there are any 
 non-revoked 
 URLs to the Blob, these URLs will continue to resolve until they have been 
 revoked. 
  - For the slice method, state that the returned Blob is a new Blob with its 
own 
 lifetime semantics – calling close on the new Blob is independent of calling 
 close 
 on the original Blob.
 
 * In section 8 (The FileReader Interface)
 - State the FileReader reads directly over the given Blob, and not a copy 
 with 
 an independent lifetime.
 
 * In section 10 (Errors and Exceptions)
 - Addition of a ClosedError. If the File or Blob has had the close method 
 called, 
 then for asynchronous read methods the error attribute MUST return a 
 “ClosedError” DOMError and synchronous read methods MUST throw a 
 ClosedError exception.
 
 * In section 11.8 (Creating and Revoking a Blob URI)
 - For createObjectURL – If this method is called with a closed Blob 
 argument, 
 then user agents must throw a ClosedError exception.
 
 Similarly to how slice() clones the initial Blob to return one with its own 
 independent lifetime, the same notion will be needed in other APIs which 
 conceptually clone the data – namely FormData, any place the Structured 
 Clone 
 Algorithm is used, and BlobBuilder.
 Similarly to how FileReader must act directly on the Blob’s data, the same 
 notion 
 will be needed in other APIs which must act on the data - namely XHR.send 
 and 
 WebSocket. These APIs will need to throw an error if called on a Blob that 
 was 
 closed and the resources are released.

 So Blob.slice() already presumes a new Blob, but I can certainly make this 
 clearer.  
 And I agree with the changes above, including the addition of something liked

RE: Transferable and structured clones, was: Re: [FileAPI] Deterministic release of Blob proposal

2012-03-06 Thread Feras Moussa
 -Original Message-
 From: Arun Ranganathan [mailto:aranganat...@mozilla.com]
 Sent: Tuesday, March 06, 2012 1:32 PM
 To: Kenneth Russell
 Cc: public-webapps@w3.org; Charles Pritchard; Glenn Maynard; Feras
 Moussa; Adrian Bateman; Greg Billock
 Subject: Re: Transferable and structured clones, was: Re: [FileAPI]
 Deterministic release of Blob proposal
 
 Ken,
 
  I'm not sure that adding close() to Transferable is a good idea. Not
  all Transferable types may want to support that explicit operation.
  What about adding close() to Blob, and having the neutering operation
  on Blob be defined to call close() on it?
 
 
 Specifically, you think this is not something ArrayBuffer should inherit?  If 
 it's
 also a bad idea for MessagePort, then those are really our only two use cases
 of Transferable right now.  I'm happy to create something like a close() on
 Blob.
 
 -- A*
We agree Blobs do not need to be transferable, and thus it makes sense to have 
close directly on Blob, independent of being transferable.


RE: [FileAPI] Deterministic release of Blob proposal

2012-03-05 Thread Feras Moussa
The feedback is implementation feedback that we have refined in the past few 
weeks as we've updated our implementation. 
We're happy for it to be treated as an LC comment, but we'd also give this 
feedback in CR too, since in recent weeks we've found it to be a problem in apps 
which make extensive use of the APIs.

 -Original Message-
 From: Arthur Barstow [mailto:art.bars...@nokia.com]
 Sent: Monday, March 05, 2012 12:52 PM
 To: Feras Moussa; Arun Ranganathan; Jonas Sicking
 Cc: public-webapps@w3.org; Adrian Bateman
 Subject: Re: [FileAPI] Deterministic release of Blob proposal
 
 Feras - this seems kinda' late, especially since the two-week pre-LC comment
 period for File API ended Feb 24.
 
 Is this a feature that can be postponed to v.next?
 
 On 3/2/12 7:54 PM, ext Feras Moussa wrote:
 
  At TPAC we discussed the ability to deterministically close blobs with a few
  others.
 
  As we've discussed in the createObjectURL thread[1], a Blob may represent
  an expensive resource (eg. expensive in terms of memory, battery, or disk
  space). At present there is no way for an application to deterministically
  release the resource backing the Blob. Instead, an application must rely on
  the resource being cleaned up through a non-deterministic garbage collector
  once all references have been released. We have found that not having a way
  to deterministically release the resource causes a performance impact for a
  certain class of applications, and is especially important for mobile
  applications or devices with more limited resources.
 
  In particular, we've seen this become a problem for media intensive
  applications which interact with a large number of expensive blobs. For
  example, a gallery application may want to cycle through displaying many
  large images downloaded through websockets, and without a deterministic way
  to immediately release the reference to each image Blob, can easily begin
  to consume vast amounts of resources before the garbage collector is
  executed.
 
  To address this issue, we propose that a close method be added to the Blob
  interface. When called, the close method should release the underlying
  resource of the Blob, and future operations on the Blob will return a new
  error, a ClosedError. This allows an application to signal when it's
  finished using the Blob.
 
  To support this change, the following changes in the File API spec are
  needed:
 
  * In section 6 (The Blob Interface)
  - Addition of a close method. When called, the close method releases the
  underlying resource of the Blob. Close renders the blob invalid, and
  further operations such as URL.createObjectURL or the FileReader read
  methods on the closed blob will fail and return a ClosedError. If there are
  any non-revoked URLs to the Blob, these URLs will continue to resolve until
  they have been revoked.
  - For the slice method, state that the returned Blob is a new Blob with its
  own lifetime semantics - calling close on the new Blob is independent of
  calling close on the original Blob.
 
  * In section 8 (The FileReader Interface)
  - State the FileReader reads directly over the given Blob, and not a copy
  with an independent lifetime.
 
  * In section 10 (Errors and Exceptions)
  - Addition of a ClosedError. If the File or Blob has had the close method
  called, then for asynchronous read methods the error attribute MUST return
  a ClosedError DOMError and synchronous read methods MUST throw a
  ClosedError exception.
 
  * In section 11.8 (Creating and Revoking a Blob URI)
  - For createObjectURL - If this method is called with a closed Blob
  argument, then user agents must throw a ClosedError exception.
 
  Similarly to how slice() clones the initial Blob to return one with its own
  independent lifetime, the same notion will be needed in other APIs which
  conceptually clone the data - namely FormData, any place the Structured
  Clone Algorithm is used, and BlobBuilder.
  Similarly to how FileReader must act directly on the Blob's data, the same
  notion will be needed in other APIs which must act on the data - namely
  XHR.send and WebSocket. These APIs will need to throw an error if called on
  a Blob that was closed and the resources are released.
 
  We've recently implemented this in experimental builds and have seen
  measurable performance improvements.
 
  The feedback we heard from our discussions with others at TPAC regarding
  our proposal to add a close() method to the Blob interface was that objects
  in the web platform potentially backed by expensive resources should have a
  deterministic way to be released.
 
  Thanks,
 
  Feras
 
  [1] http://lists.w3.org/Archives/Public/public-webapps/2011OctDec/1499.html
 





[FileAPI] Deterministic release of Blob proposal

2012-03-02 Thread Feras Moussa
At TPAC we discussed the ability to deterministically close blobs with a few
others.

As we've discussed in the createObjectURL thread[1], a Blob may represent
an expensive resource (eg. expensive in terms of memory, battery, or disk
space). At present there is no way for an application to deterministically
release the resource backing the Blob. Instead, an application must rely on
the resource being cleaned up through a non-deterministic garbage collector
once all references have been released. We have found that not having a way
to deterministically release the resource causes a performance impact for a
certain class of applications, and is especially important for mobile 
applications
or devices with more limited resources.

In particular, we've seen this become a problem for media intensive applications
which interact with a large number of expensive blobs. For example, a gallery
application may want to cycle through displaying many large images downloaded
through websockets, and without a deterministic way to immediately release
the reference to each image Blob, can easily begin to consume vast amounts of
resources before the garbage collector is executed.

To address this issue, we propose that a close method be added to the Blob
interface.
When called, the close method should release the underlying resource of the
Blob, and future operations on the Blob will return a new error, a ClosedError.
This allows an application to signal when it's finished using the Blob.

To support this change, the following changes in the File API spec are needed:

* In section 6 (The Blob Interface)
  - Addition of a close method. When called, the close method releases the
underlying resource of the Blob. Close renders the blob invalid, and further
operations such as URL.createObjectURL or the FileReader read methods on
the closed blob will fail and return a ClosedError.  If there are any 
non-revoked
URLs to the Blob, these URLs will continue to resolve until they have been
revoked.
  - For the slice method, state that the returned Blob is a new Blob with its 
own
lifetime semantics - calling close on the new Blob is independent of calling 
close
on the original Blob.

* In section 8 (The FileReader Interface)
- State the FileReader reads directly over the given Blob, and not a copy with
an independent lifetime.

* In section 10 (Errors and Exceptions)
- Addition of a ClosedError. If the File or Blob has had the close method 
called,
then for asynchronous read methods the error attribute MUST return a
ClosedError DOMError and synchronous read methods MUST throw a
ClosedError exception.

* In section 11.8 (Creating and Revoking a Blob URI)
- For createObjectURL - If this method is called with a closed Blob argument,
then user agents must throw a ClosedError exception.

Similarly to how slice() clones the initial Blob to return one with its own
independent lifetime, the same notion will be needed in other APIs which
conceptually clone the data - namely FormData, any place the Structured Clone
Algorithm is used, and BlobBuilder.
Similarly to how FileReader must act directly on the Blob's data, the same 
notion
will be needed in other APIs which must act on the data - namely XHR.send and
WebSocket. These APIs will need to throw an error if called on a Blob that was
closed and the resources are released.
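The proposed lifetime rules can be sketched in a few lines. FakeBlob and this ClosedError class are illustrative stand-ins, not the interfaces the spec would define:

```javascript
// Invented sketch of the proposed semantics: close() releases the backing
// data and later operations throw; slice() returns a Blob with its own
// independent lifetime, unaffected by closing the original.
class ClosedError extends Error {}

class FakeBlob {
  constructor(data) { this.data = data; this.closed = false; }
  close() { this.data = null; this.closed = true; }
  slice(start, end) {
    if (this.closed) throw new ClosedError('blob is closed');
    // New blob over its own copy of the bytes: independent lifetime.
    return new FakeBlob(this.data.slice(start, end));
  }
  readAsText() {
    if (this.closed) throw new ClosedError('blob is closed');
    return this.data;
  }
}

const original = new FakeBlob('abcdef');
const part = original.slice(0, 3);
original.close();            // releases the original only
// part.readAsText() still returns 'abc';
// original.readAsText() now throws ClosedError
```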

We've recently implemented this in experimental builds and have seen measurable
performance improvements.

The feedback we heard from our discussions with others at TPAC regarding our
proposal to add a close() method to the Blob interface was that objects in the 
web
platform potentially backed by expensive resources should have a deterministic
way to be released.

Thanks,
Feras

[1] http://lists.w3.org/Archives/Public/public-webapps/2011OctDec/1499.html


RE: [FileAPI] createObjectURL isReusable proposal

2012-02-29 Thread Feras Moussa
We think the new property bag (objectURLOptions) semantics in the latest 
editor's draft are very reasonable. We have an implementation of this and 
from our experience have found it very widely used internally with app 
developers - many leverage it as an easy-to-use one-time URL and a way to 
avoid leaks in their applications. We've also noticed many 
developers easily overlook the URL.revokeObjectURL API, thus failing to 
realize they are pinning the resource behind the blob, which further 
validates the usefulness of this feature.
 
To address a few of the implementation questions that were raised in 
this thread:
 Something else that needs to be defined: does xhr.open('GET', url) 
 consume the URL, or does that only happen when xhr.send() is called?
We think a URL does not get consumed until the data has been accessed. As 
XHR does not begin accessing the data until send has been called, we expect 
Blob URLs to be no different. The URL should get revoked after xhr.send() 
is called. This is also what we've done in our implementation, and we have 
not noticed any confusion from developers.
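This timing is easy to model. FakeXHR, the toy createObjectURL, and the registry below are invented stand-ins showing "consumed at send(), not at open()":

```javascript
// Invented model: open() only records the URL (no data access), while
// send() reads the bits and thereby revokes the one-time URL.
const registry = new Map(); // url -> data, standing in for the blob store

function createObjectURL(data) { // toy version, not the real API
  const url = 'blob:demo/' + registry.size;
  registry.set(url, data);
  return url;
}

class FakeXHR {
  open(method, url) { this.url = url; }       // no data access: URL survives
  send() {
    this.response = registry.get(this.url);   // first access to the bits...
    registry.delete(this.url);                // ...revokes the one-time URL
  }
}

const url = createObjectURL('payload');
const xhr = new FakeXHR();
xhr.open('GET', url);
const aliveAfterOpen = registry.has(url);  // true: open() does not consume
xhr.send();
const aliveAfterSend = registry.has(url);  // false: send() consumed it
```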
 
Another case: whether loading a one-shot URL from a different origin, 
where you aren't allowed to load the content, still causes the URL to 
be revoked.  (My first impression was that it shouldn't affect it at 
all, but my second impression is that in practice that error mode would 
probably always result in the URL never being revoked and ending up 
leaked, so it's probably best to free it anyway.)
Similar to the above case, the URL is not revoked until after the data is 
accessed. If a URL is used from a different origin, the download fails 
and the data is not accessed, so the URL is not revoked. Developers can 
detect this condition from the onerror handler for an img tag, where they 
can revoke the URL if it did not resolve correctly.
 
 What do you think of a global release mechanism? Such as 
 URL.revokeAllObjectUrls();
This wouldn't solve any of the problems previously listed in this thread, 
and would only be useful as a convenience API. That said, I'd question 
the trade-off of adding another API versus a developer writing their 
own version of this, which should be fairly trivial.
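Such a wrapper might look like the following. makeURLTracker and mockURL are invented names; the mock stands in for the browser's URL object so the sketch is self-contained:

```javascript
// Invented sketch of an application-level "revoke everything" helper:
// track every URL handed out, then revoke them all in one call.
function makeURLTracker(urlImpl) {
  const live = new Set();
  return {
    create(blob) {
      const url = urlImpl.createObjectURL(blob);
      live.add(url);
      return url;
    },
    revoke(url) {
      urlImpl.revokeObjectURL(url);
      live.delete(url);
    },
    revokeAll() {
      for (const url of live) urlImpl.revokeObjectURL(url);
      live.clear();
    },
    get size() { return live.size; },
  };
}

// Mock in place of the browser's URL object, so the sketch runs anywhere.
const mockURL = {
  next: 0,
  revoked: [],
  createObjectURL() { return 'blob:demo/' + this.next++; },
  revokeObjectURL(url) { this.revoked.push(url); },
};

const tracker = makeURLTracker(mockURL);
tracker.create('blob-a');
tracker.create('blob-b');
tracker.revokeAll();
// mockURL.revoked now lists both URLs; tracker.size is 0
```

A real page would pass the browser's URL object in place of the mock.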
 
We also think the spec should clarify what the expected behavior is for 
a revoked URL when accounting for the image cache. The concept of 
revoking URLs is to give the developer a way to say they are done with 
the object. If a user agent still has the bits in memory, it should not be 
in the business of blocking the URL from loading, even if it is revoked.
 
We’d like to see the spec updated to clarify the points listed above 
and I'd be happy to help with these changes in any way possible.
 
Thanks,
Feras

 -Original Message-
 From: Bronislav Klučka [mailto:bronislav.klu...@bauglir.com]
 Sent: Friday, February 24, 2012 1:10 PM
 To: public-webapps@w3.org
 Cc: public-webapps@w3.org
 Subject: Re: [FileAPI] createObjectURL isReusable proposal
 
 
 
 On 24.2.2012 20:49, Arun Ranganathan wrote:
  On 24.2.2012 20:12, Arun Ranganathan wrote:
  Bronislav,
 
 
  I could also go with reverse approach, with createObjectURL being
  oneTimeOnly by default createObjectURL(Blob aBlob, boolean?
  isPermanent) instead of current createObjectURL(Blob aBlob,
  boolean? isOneTime) the fact that the user would have to explicitly
  specify that such a URL is permanent should limit cases of I forgot
  to release something somewhere... and I think it could be easier to
  understand that an explicit request for permanent = explicit release.
  Would break current implementations, sure, but if we are
  considering changes
  So, having these URLs be oneTimeOnly by default itself has issues,
  as Glenn (and Darin) point out:
 
  http://lists.w3.org/Archives/Public/public-webapps/2012JanMar/0377.h
  tml
 
  The existing model makes that scenario false by default, trading off
  anything racy against culling strings.
  We are back in an issue of someone using oneTimeOnly or permanent in
  an inappropriate case.  Programmers should be aware of what they are
  doing.
  I actually have no problem with the current specification (permanent as
  default, explicit release), I'm just trying to prevent changes like
  assigning object to string attribute (e.g. src), returning innerHTML
  with empty string attribute (e.g. src)
 
  My solution is that src should be modified to take both a string and a URL
 object, which makes innerHTML behavior easier; I'm less sure of it taking Blob
 directly.
 
  -- A*
 What change would it make compared to current scenario? URL as string or URL
 as stringifiable object? What's the difference?
 
 
 B.
 
 



RE: StreamBuilder threshold

2012-02-22 Thread Feras Moussa
 -Original Message-
 From: Stefan Hakansson LK [mailto:stefan.lk.hakans...@ericsson.com]
 Sent: Sunday, February 05, 2012 4:50 AM
 To: Feras Moussa
 Cc: Travis Leithead; public-webapps@w3.org
 Subject: Re: StreamBuilder threshold
 
 On 01/26/2012 07:05 PM, Feras Moussa wrote:
  Can you please clarify what scenario you are looking at regarding
  multiple consumers? When designing the StreamBuilder API, we looked at
  it as a more primitive API which other abstractions (such as multiple
  consumers) can be built upon.
 
 (Please forgive me if I am making stupid input - I am in a learning phase). A
 very simple scenario would be the example in the draft that demonstrates
 how to use StreamBuilder to load a stream into the audio tag. In this
 example the consumer is an audio tag, and new data is appended to the
 stream each time the buffer falls below 1024 bytes. Fine so far, but what
 happens if the same stream (via createObjectURL) is connected to one more
 audio tag, but at T ms later.
 
 In this case the first audio tag would have consumed down to the threshold
 (1024 bytes) T ms before the second.

This isn't clear from the spec (and I've made a note to clarify it), but URLs 
for streams should be one-time-use URLs (once used, a URL is automatically 
revoked). Thus a scenario like the one you describe (connecting the same 
stream to multiple tags) isn't possible. There will only be one event to 
notify that the threshold has been reached, so there should not be multiple 
consumers 'racing' for this event.
 
 Another example could be that one Stream is uploaded using two parallel
 xhr's; one of them could have a couple of packet losses and then consume
 slower than the other (and if WS could take send(stream) the same would
 apply).
 
  If you can please let me know what issue you're trying to address, I'm
  happy to discuss the possibilities.
 
 I hope the above input explained the issue.

However, I'm still not clear on what scenario you'd like to accomplish - can 
you please explain it in more detail? If you're looking for a way to reuse 
data from a Stream, then you should use StreamReader to read the data as a 
Blob (or another type), which will then provide you with all the Blob 
semantics, including multiple reads on the data.

Thanks,
Feras

 
 
  Also, For future reference, the latest draft is now located on the W3
  site at http://dvcs.w3.org/hg/streams-api/raw-file/tip/Overview.htm
 
 Thanks for updating me!
 
 Stefan
 
 
  Thanks, Feras
 
  -Original Message- From: Stefan Hakansson LK
  [mailto:stefan.lk.hakans...@ericsson.com] Sent: Tuesday, January 17,
  2012 12:28 AM To: Feras Moussa; Travis Leithead Cc:
  public-webapps@w3.org Subject: StreamBuilder threshold
 
  I'm looking at
  http://html5labs.interoperabilitybridges.com/streamsapi/, and
  specifically at the StreamBuilder.
 
  It has the possibility to generate an event if the data available
  falls below a threshold. How is this supposed to work if there is
  more than one consumer, and those consumers either don't start
  consuming at exactly the same time or consume at different rates, of
  the Stream?
 
  --Stefan
 
 
 





RE: StreamBuilder threshold

2012-01-26 Thread Feras Moussa
Can you please clarify what scenario you are looking at regarding multiple 
consumers?
When designing the StreamBuilder API, we looked at it as a more primitive API 
which other abstractions (such as multiple consumers) can be built upon. 

If you can please let me know what issue you're trying to address, I'm happy to 
discuss the possibilities.

Also, For future reference, the latest draft is now located on the W3 site at 
http://dvcs.w3.org/hg/streams-api/raw-file/tip/Overview.htm

Thanks,
Feras

 -Original Message-
 From: Stefan Hakansson LK [mailto:stefan.lk.hakans...@ericsson.com]
 Sent: Tuesday, January 17, 2012 12:28 AM
 To: Feras Moussa; Travis Leithead
 Cc: public-webapps@w3.org
 Subject: StreamBuilder threshold
 
 I'm looking at http://html5labs.interoperabilitybridges.com/streamsapi/,
 and specifically at the StreamBuilder.
 
 It has the possibility to generate an event if the data available falls below 
 a
 threshold. How is this supposed to work if there is more than one consumer,
 and those consumers either don't start consuming at exactly the same time
 or consume at different rates, of the Stream?
 
 --Stefan





RE: [XHR] chunked requests

2011-12-14 Thread Feras Moussa
We've only recently uploaded the draft for the Streams API and shared it with 
the Working Group [1], which also has the new location.

There is a conversation currently taking place in the media capture WG [2] 
discussing how to better align Stream and MediaStream. I think it's an 
interesting topic worth investigating, and there are a few ideas that have been 
proposed. We'll have to see where the conversation lands.

Thanks,
Feras

[1] http://lists.w3.org/Archives/Public/public-webapps/2011OctDec/1494.html
[2] http://lists.w3.org/Archives/Public/public-media-capture/2011Dec/0037.html


 -Original Message-
 From: Anne van Kesteren [mailto:ann...@opera.com]
 Sent: Friday, December 09, 2011 5:15 AM
 To: Wenbo Zhu; Jonas Sicking; Robert O'Callahan; Feras Moussa
 Cc: WebApps WG
 Subject: Re: [XHR] chunked requests
 
 On Thu, 08 Dec 2011 23:16:37 +0100, Jonas Sicking jo...@sicking.cc wrote:
  I think Microsoft's stream proposal would address this use case.
 
 So that would be: http://html5labs.interoperabilitybridges.com/streamsapi/
 
 How does that relate to the various APIs for streaming media?
 
 (I added roc and Feras Moussa.)
 
 
 --
 Anne van Kesteren
 http://annevankesteren.nl/