Re: File API: reading a Blob

2014-07-03 Thread Anne van Kesteren
On Wed, Jul 2, 2014 at 7:06 PM, Arun Ranganathan a...@mozilla.com wrote:
 For instance, I thought the idea was that within Fetch to read /blob/ we’d
 do something like:

 1. Let /s/ be a new body.  Return /s/ and perform the rest of the steps
 async.
 2. Perform a read operation [File API] on /blob/.
 3. To process read…
 4. To process read data, transfer each byte read to /s/ and set /s/’s
 transmitted to the number of bytes read.

 // Chunked byte transfer is possible within the 50ms delta for process read
 data. We could specify that here better.//

 5. To process read EOF ...
 6. Otherwise, to process read error with a failure reason on /s/ ….

 Why is something like that unworkable and why do we need another variant of
 the read operation exactly?

It's unclear to me why we'd want to use the event loop for this,
basically. Also, the current set of synchronous steps (which I could
opt not to use, granted, but other APIs already might, and I'd like us
to be consistent) simply return failure when something bad happens
rather than returning the bytes read so far. It seems like that is a
problem, perhaps following from not having this lower-level
description.
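
The read-operation steps Arun quotes can be modeled in plain JavaScript. This is a sketch only, not spec text: `Body`, `readBlob`, and the chunk size are hypothetical names, and the blob is modeled as a byte array so the shape of steps 1-5 is visible.

```javascript
// Model of the quoted read operation: a "body" accumulates bytes and
// tracks how many have been transmitted, as in steps 1-6 above.
class Body {
  constructor() {
    this.bytes = [];        // bytes transferred so far
    this.transmitted = 0;   // step 4: set to the number of bytes read
    this.done = false;      // step 5: process read EOF
    this.error = null;      // step 6: process read error
  }
}

// Hypothetical async read: feeds chunks of `blob` into a fresh body.
async function readBlob(blob, chunkSize = 3) {
  const s = new Body();              // step 1: let s be a new body
  for (let i = 0; i < blob.length; i += chunkSize) {
    // step 4: process read data - transfer each byte read to s and
    // set s's transmitted to the number of bytes read so far
    const chunk = blob.slice(i, i + chunkSize);
    s.bytes.push(...chunk);
    s.transmitted = s.bytes.length;
  }
  s.done = true;                     // step 5: process read EOF
  return s;
}
```

In this model the chunk size is the stand-in for the "one byte vs. 50ms" question discussed below: the body's observable state only updates once per chunk, not per byte.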


On Wed, Jul 2, 2014 at 8:51 PM, Domenic Denicola
dome...@domenicdenicola.com wrote:
 Or are you trying to transfer from a vague conceptual model of /blob/'s
 underlying data stream to a different conceptual model of "A body is a byte
 stream", only to later actually reify the byte stream as a stream object?
 That seems potentially problematic, but perhaps it could be made to work...

Yes.

E.g. for <img> there would not be an observable stream here. Just some
of the effects (JPEG progression) might be observable. Fetch is the
model of the networking layer; there are no concrete JavaScript
objects there.


-- 
http://annevankesteren.nl/



Re: [editing] Use Cases (was: Leading with ContentEditable=Minimal)

2014-07-03 Thread Johannes Wilm
Do you mean browser developers or editor developers? Wouldn't this task
force group be a good place? I've notified the Substance.io team about it.
Some others that might be interested, that I haven't seen here yet:
TinyMCE, Aloha, Wikimedia WYSIWYG editor, the WYSIWYG editor project of
PLOS, HalloJS, Codemirror, Booktype (another project to write books in the
browser), Firepad, ShareLatex, WriteLatex, WebODF, ICE (I have contributed
most of the Chrome code to it, but I'm not part of the core developer
team). Others?

I don't know what your procedures are for such things, but maybe send them
all an email? We may have to accept that a lot of projects are slightly
tired of contenteditable/caret-moving fixing efforts, given that so many
attempts to fix it have failed in the past. Let's make sure this doesn't
happen this time. :)

From my experience, the issues/use cases mentioned will cover just about
all the main problems. I would personally move the DTP out of this, at
least if DTP is to be understood mainly as fragmentation stuff. That is very
important, but the debates on fragmentation are there already and we should
be able to find an agreement on cursor movement even if the issues around
fragmentation cannot be resolved.




On Thu, Jul 3, 2014 at 12:46 AM, Ryosuke Niwa rn...@apple.com wrote:

 Thank you Ben!

 It would be great if we could get more use cases from developers who are
 interested in improving editing API.

 Is there any communication channel we can use for that?

 - R. Niwa

 On Jul 2, 2014, at 3:12 PM, Ben Peters ben.pet...@microsoft.com wrote:

  Great discussion on this! I have added these to the Explainer Doc[1].
 Please let me know what you think of the list so far. I would also like to
 discuss this in the meeting next Friday[2], so it would be great if anyone
 who can come does!
 
  Ben
 
  [1] http://w3c.github.io/editing-explainer/commands-explainer.html
  [2]
 http://lists.w3.org/Archives/Public/public-webapps/2014JulSep/0011.html
 
  On Mon, Jun 30, 2014 at 10:33 PM, Johannes Wilm 
 johan...@fiduswriter.org wrote:
 
 
 
  On Tue, Jul 1, 2014 at 4:39 AM, Ryosuke Niwa rn...@apple.com wrote:
 
  On Jun 30, 2014, at 1:43 PM, Johannes Wilm johan...@fiduswriter.org
  wrote:
 
  On Mon, Jun 30, 2014 at 10:01 PM, Ryosuke Niwa rn...@apple.com
 wrote:
 
  snip
 
 
  Web-based DTP application: The app can create a document that contains
  pagination, columns, and complex illustrations.  It needs to let users
  edit any text that appears in the document.  Each editable text needs to be
  accessible, and styles applied to text need to be backed by the
  application's internal model of the document.
 
  Yes, but wouldn't this require some fragmentation logic? To create
  something like what QuarkXPress was in the 1990s, I think you'd need CSS
  Regions or equivalent. It seems as if that would be a little outside of the
  scope of the caret-moving logic, or how do you mean? I would find it great
  if this ever happens, but as I understand it the whole fragmentation debate
  was left aside for a while.
 
 
  CSS Regions is still shipped in iOS/OS X Safari, and I don't expect it to
  go away anytime soon.  CSS columns support is still in Blink as well.
  Furthermore, I don't think the CSS WG is halting the work on fragmentation
  either.  We should still keep it as a use case for the new editing API.  In
  addition, I would suggest that you go talk with people in the CSS WG and
  add your use case.
 
 
  Yes, I am aware of that. I spent a year creating a CSS Regions based book
  layout engine ( http://fiduswriter.github.io/pagination.js/ ), so I am
  absolutely interested in fragmentation coming back one day. In the meantime
  I have created an ad-hoc solution using CSS columns (
  http://fiduswriter.github.io/simplePagination.js/simplePagination.html )
 
  The main difference is that the CSS fragmentation based version allows
  combining it with contenteditable (text flowing from page to page while
  editing/writing it). Using JavaScript/CSS multi-columns to create the same
  design means cutting up the DOM, so it has to be done after the text is
  written. We switched to this second approach when it became clear that CSS
  Regions would be removed from Chrome.
 
  The way we handle it is to let the user write the text in one large page
  with the footnotes off to the right. When the user hits CTRL+P, the current
  contents of the edited doc are copied, the original contents are hidden and
  the copied version is cut up into individual pages. By the time the user
  gets to the print preview, page numbers, headers, table of contents,
  footnotes, etc. have all been put in place. Fragmentation would be great to
  have, but for now I would already sleep much better if we had a more solid
  selection/caret-moving base to build upon.
 
 
 
  Semantic HTML WYSIWYG editor for a blogging platform: The editor needs to
  be able to add both semantic and visual annotation to the document as it
  will

Re: File API: reading a Blob

2014-07-03 Thread Arun Ranganathan

On Jul 3, 2014, at 4:14 AM, Anne van Kesteren ann...@annevk.nl wrote:

 It's unclear to me why we'd want to use the event loop for this,
 basically.


The FileReader object uses the event loop; your initial request for Fetch was 
to have a reusable “abstract” read operation which used tasks. You’ve since 
changed your mind, which is totally fine: we could have a different read 
operation that doesn’t use the event loop that’s put in place for Fetch, but 
FileReader needs the event loop. 

We’re talking about an abstract model in specification land on which to pin 
higher level concepts that culminate eventually in JS objects. It’s useful (I 
agreed long ago), but overhauling the current read operation for a change of 
mind/model is a lot of pain without elegant gain. 

Also, since there isn’t an observable stream or an object, but merely things 
like load progression (JPEG progression), tasks do seem useful for that. The 
one thing I suggested which we could do better is the “one byte vs. 50ms” 
model, and use the “chunk” concept of bytes that the streams folks use. The one 
thing I’m not clear on here is how to get to a pull-chunk size for files, but I 
think we could do this better.


 Also, the current set of synchronous steps (which I could
 opt not to use, granted, but other APIs already might, and I'd like us
 to be consistent) simply return failure when something bad happens
 rather than returning the bytes read so far. It seems like that is a
 problem, perhaps following from not having this lower-level
 description.



OK, this could be a problem. But this is really immediately usable by the 
FileReaderSync object on threads, for which a use case for partial data didn’t 
materialize (and in general, the spec. shunned partial data — references to 
those threads date way back in time, but they’re there). It seems that for a 
synchronous block of file i/o, all or nothing catered to most use cases.

But if it IS a problem — that is, if you think synchronous I/O has implications 
outside of FileReaderSync, OR that FileReaderSync’s return itself should be 
partial if a failure occurs, then let’s file bugs and solve them.
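
The two error policies being contrasted can be sketched as follows. This is a model, not spec text: `readSyncAllOrNothing` and `readSyncPartial` are hypothetical names, and the midstream I/O failure is simulated with a generator.

```javascript
// A byte source that fails midstream after yielding some bytes,
// simulating an i/o error partway through a synchronous read.
function* failingSource() {
  yield 1; yield 2; yield 3;
  throw new Error("i/o error");
}

// All-or-nothing (the current synchronous behavior as described):
// a midstream failure discards everything and surfaces the error.
function readSyncAllOrNothing(source) {
  const bytes = [];
  for (const b of source) bytes.push(b); // throws on failure
  return bytes;
}

// Partial-return (the alternative under discussion): a midstream
// failure still hands back the bytes read so far.
function readSyncPartial(source) {
  const bytes = [];
  try {
    for (const b of source) bytes.push(b);
    return { bytes, error: null };
  } catch (error) {
    return { bytes, error }; // bytes read so far survive the failure
  }
}
```

The asynchronous path naturally resembles the second shape, since progress events have already delivered partial data by the time an error fires; that asymmetry is the consistency problem being discussed.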

I’d REALLY like to have solid abstract models in place for Fetch, since I buy 
into the cross-purposability of it. But I’d also like shipping implementations 
to be defined (99% done in File API), with the small delta remaining — Blob URL 
autorevoking and Blob closing — to be nailed down.

— A*

Re: File API: reading a Blob

2014-07-03 Thread Anne van Kesteren
On Thu, Jul 3, 2014 at 3:58 PM, Arun Ranganathan a...@mozilla.com wrote:
 You’ve since changed your mind, which is totally fine

I wish I could foresee all problems before starting to write things
out, that'd be great! :-)


 but overhauling the current read operation for a change
 of mind/model is a lot of pain without elegant gain.

Well, it does seem like a problem that synchronous and asynchronous
operations have different error handling as far as what bytes they
return is concerned.


 But if it IS a problem — that is, if you think synchronous I/O has
 implications outside of FileReaderSync, OR that FileReaderSync’s return
 itself should be partial if a failure occurs, then let’s file bugs and solve
 them.

So most of Fetch is asynchronous. If it's invoked with the synchronous
flag set it's just that it waits for the entire response before
returning. That's why I'd like to use the asynchronous path of reading
a blob. But I'd like that not to be observably different from using
the synchronous path of reading a blob. That seems wrong.


-- 
http://annevankesteren.nl/



Re: File API: reading a Blob

2014-07-03 Thread Arun Ranganathan

On Jul 3, 2014, at 10:17 AM, Anne van Kesteren ann...@annevk.nl wrote:

 So most of Fetch is asynchronous. If it's invoked with the synchronous
 flag set it's just that it waits for the entire response before
 returning. That's why I'd like to use the asynchronous path of reading
 a blob. But I'd like that not to be observably different from using
 the synchronous path of reading a blob. That seems wrong.



OK, this is fixable. I’ll ensure that the read operation’s synchronous 
component does return the results thus far, but that FileReaderSync itself 
ignores them in case of a midstream error, unless we collectively agree that it 
SHOULD return partial instead of “all or nothing” as an API. My understanding 
of the FileReaderSync requirement is all or nothing, but I’m open to being 
wrong via bug filing.

Are you agreed (as far as it is possible for you to agree with me on anything) 
that the event loop async read might allow us to address cases like JPEG 
progression? It seems eminently usable here.

— A*



Re: File API: reading a Blob

2014-07-03 Thread Anne van Kesteren
On Thu, Jul 3, 2014 at 4:29 PM, Arun Ranganathan a...@mozilla.com wrote:
 OK, this is fixable. I’ll ensure that the read operation’s synchronous
 component does return the results thus far, but that FileReaderSync itself
 ignores them in case of a midstream error, unless we collectively agree that
 it SHOULD return partial instead of “all or nothing” as an API. My
 understanding of the FileReaderSync requirement is all or nothing, but I’m
 open to being wrong via bug filing.

That would mean you would get different results between using
FileReaderSync and XMLHttpRequest. That does not seem ideal.


 Are you agreed (as far as it is possible for you to agree with me on
 anything) that the event loop async read might allow us to address cases
 like JPEG progression? It seems eminently usable here.

The tasks are still a bit of a concern as a normal implementation
would not queue tasks. E.g. it's not even clear which event loop Fetch
would queue these tasks on, as fetch itself runs in a different process.


-- 
http://annevankesteren.nl/



[service-workers] SW event syntax and Cache API

2014-07-03 Thread Tobie Langel
Hi folks,

Couple of issues I've bumped into recently while looking at Service Workers
more closely.

1. e.respondWith + e.waitUntil.
I feel like those are strong code smells we haven't found the right design
for yet.
I have a suggestion for waitUntil[1]. None yet for respondWith, but plan to
tinker with it in the upcoming weeks. I like Anne's express.js-inspired
suggestions.

2. Cache.add feels misnamed and/or heavily magic.
- I'm not sure handling atomic fetches should happen at this layer.
- Doing so shouldn't be too difficult to build given the right primitives
(fetch + Promise.all)
- It's unclear whether the promise returned by Cache.add is resolved with
the responseArray or with void (the WebIDL reads Promise<any>, but the
algorithm doesn't mention resolving with responseArray). Same issue for
Cache.put.

3. It should be a lot easier to prime the cache after a cache miss.
Currently the solutions I've tried are less than desirable[2]. Would like
something like this[3] to work.
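
For point 2, the "right primitives" version could look roughly like this. It's a sketch under stated assumptions: `addAll` is a hypothetical helper, and `cache`/`fetchFn` are injected so nothing here depends on a real Service Worker environment.

```javascript
// Sketch of an atomic Cache.add-style operation built from
// fetch + Promise.all: fetch every request first, and only write to
// the cache once all fetches have succeeded, so a single failure
// caches nothing.
function addAll(cache, requests, fetchFn) {
  return Promise.all(requests.map(fetchFn)).then(responses => {
    responses.forEach((response, i) => cache.put(requests[i], response));
    return responses; // resolve with the response array, not void
  });
}
```

In a real Service Worker, `cache` would come from `caches.open(...)` and `fetchFn` would be the global `fetch`; the point is that the atomicity lives in ordinary promise composition rather than in a magic cache method.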

Not sure what the best medium is to discuss/help with these issues. Maybe
in person?

LMK.

--tobie
---
[1]:
https://github.com/slightlyoff/ServiceWorker/issues/256#issuecomment-47878042
[2]:
https://gist.github.com/tobie/0689c5dda8f6d49d500d#file-gistfile2-js-L25-L32
[3]:
https://gist.github.com/tobie/113280d3b3db714dc199#file-gistfile1-js-L27