Re: [webcomponents]: Changing names of custom element callbacks

2013-08-08 Thread Anne van Kesteren
On Wed, Aug 7, 2013 at 11:41 PM, Dimitri Glazkov dglaz...@google.com wrote:
 The only thing peeps asked for consistently was the knowledge of when
 the in-a-document state changes. I haven't heard any other requests
 for callbacks.

I'm having a hard time following how we're designing this thing. Are
we just following whatever people happen to be asking for, or are we
actually trying to expose the functionality browsers use to implement
elements, so pages have the same amount of power (minus the powers we
do not wish to grant, such as synchronous creation)?


-- 
http://annevankesteren.nl/



RE: Overlap between StreamReader and FileReader

2013-08-08 Thread Domenic Denicola
From: Takeshi Yoshino [mailto:tyosh...@google.com] 

 On Thu, Aug 1, 2013 at 12:54 AM, Domenic Denicola 
 dome...@domenicdenicola.com wrote:
 Hey all, I was directed here by Anne helpfully posting to 
 public-script-coord and es-discuss. I would love a summary of what 
 proposal is currently under discussion: is it [1]? Or maybe some form of [2]?

 [1]: https://rawgithub.com/tyoshino/stream/master/streams.html
 [2]: http://lists.w3.org/Archives/Public/public-webapps/2013AprJun/0727.html

 I'm drafting [1] based on [2] and summarizing comments on this list in order 
 to build up a concrete algorithm and get consensus on it.
 
Great! Can you explain why this needs to return an AbortableProgressPromise, 
instead of simply a Promise? All existing stream APIs (as prototyped in Node.js 
and in other environments, such as in js-git's multi-platform implementation) 
do not signal progress or allow aborting at the "during a chunk" level, but 
instead count on you recording progress by yourself depending on what you've 
seen come in so far, and aborting on your own between chunks. This allows 
better pipelining and backpressure down to the network and file descriptor 
layer, from what I understand.
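
To illustrate the pattern, here is a rough Node.js-style sketch (the file
name and byte threshold are made up, and this is not any proposed API): the
consumer keeps its own counter for progress and tears the stream down
between chunks when it decides to stop.

    var fs = require('fs');

    var stream = fs.createReadStream('big-file.bin'); // made-up file name
    var bytesSoFar = 0;
    var limit = 10 * 1024 * 1024; // made-up threshold: stop after 10 MiB

    stream.on('data', function (chunk) {
      bytesSoFar += chunk.length; // "progress" is just our own counter
      if (bytesSoFar > limit) {
        stream.destroy();         // "aborting" happens between chunks
      }
    });

    stream.on('end', function () {
      console.log('read ' + bytesSoFar + ' bytes in total');
    });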


RE: Overlap between StreamReader and FileReader

2013-08-08 Thread Domenic Denicola
From: Takeshi Yoshino [tyosh...@google.com]

 Sorry, which one? stream.Readable's readable event and read method?

Exactly.

 I agree flow control is an issue that hasn't been addressed well yet and needs to be fixed.

I would definitely suggest thinking about it as soon as possible, since it will 
likely have a significant effect on the overall API. For example, all this talk 
of standardizing ProgressPromise (much less AbortableProgressPromise) will 
likely fall by the wayside once you consider how it hurts flow control.
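
For reference, a rough sketch of the pull model I have in mind from Node.js
streams2 (the file name and handleChunk are made up): the consumer calls
read() only when it is ready for more, which is what lets backpressure
propagate down to the underlying source.

    var fs = require('fs');

    var stream = fs.createReadStream('big-file.bin'); // made-up file name

    stream.on('readable', function () {
      var chunk;
      // Pull chunks out of the internal buffer at our own pace; when we
      // stop calling read(), the stream stops refilling from the file.
      while ((chunk = stream.read()) !== null) {
        handleChunk(chunk);
      }
    });

    function handleChunk(chunk) {
      // hypothetical consumer; do something with the bytes
    }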




Re: Overlap between StreamReader and FileReader

2013-08-08 Thread Jonas Sicking
On Thu, Aug 8, 2013 at 6:42 AM, Domenic Denicola
dome...@domenicdenicola.com wrote:
 From: Takeshi Yoshino [mailto:tyosh...@google.com]

 On Thu, Aug 1, 2013 at 12:54 AM, Domenic Denicola 
 dome...@domenicdenicola.com wrote:
 Hey all, I was directed here by Anne helpfully posting to 
 public-script-coord and es-discuss. I would love a summary of what 
 proposal is currently under discussion: is it [1]? Or maybe some form of 
 [2]?

 [1]: https://rawgithub.com/tyoshino/stream/master/streams.html
 [2]: http://lists.w3.org/Archives/Public/public-webapps/2013AprJun/0727.html

 I'm drafting [1] based on [2] and summarizing comments on this list in order 
 to build up a concrete algorithm and get consensus on it.

 Great! Can you explain why this needs to return an AbortableProgressPromise, 
 instead of simply a Promise? All existing stream APIs (as prototyped in 
 Node.js and in other environments, such as in js-git's multi-platform 
 implementation) do not signal progress or allow aborting at the "during a 
 chunk" level, but instead count on you recording progress by yourself 
 depending on what you've seen come in so far, and aborting on your own 
 between chunks. This allows better pipelining and backpressure down to the 
 network and file descriptor layer, from what I understand.

Can you explain what you mean by "This allows better pipelining and
backpressure down to the network and file descriptor layer"?

I definitely agree that we don't want to introduce too much performance
overhead. But it's not obvious to me how performance is affected by
putting progress and/or aborting functionality on the returned Promise
instance, rather than on a separate object (which you suggested in
another thread).

We should absolutely learn from Node.js and other environments. Do you
have any pointers to discussions about why they didn't end up with
progress in their "read a chunk" API?

/ Jonas



Re: Overlap between StreamReader and FileReader

2013-08-08 Thread Austin William Wright
On Thu, Aug 8, 2013 at 2:56 PM, Jonas Sicking jo...@sicking.cc wrote:

 On Thu, Aug 8, 2013 at 6:42 AM, Domenic Denicola
 dome...@domenicdenicola.com wrote:
  From: Takeshi Yoshino [mailto:tyosh...@google.com]
 
  On Thu, Aug 1, 2013 at 12:54 AM, Domenic Denicola 
 dome...@domenicdenicola.com wrote:
  Hey all, I was directed here by Anne helpfully posting to
 public-script-coord and es-discuss. I would love a summary of what
 proposal is currently under discussion: is it [1]? Or maybe some form of
 [2]?
 
  [1]: https://rawgithub.com/tyoshino/stream/master/streams.html
  [2]:
 http://lists.w3.org/Archives/Public/public-webapps/2013AprJun/0727.html
 
  I'm drafting [1] based on [2] and summarizing comments on this list in
 order to build up a concrete algorithm and get consensus on it.
 
  Great! Can you explain why this needs to return an
 AbortableProgressPromise, instead of simply a Promise? All existing stream
 APIs (as prototyped in Node.js and in other environments, such as in
 js-git's multi-platform implementation) do not signal progress or allow
 aborting at the "during a chunk" level, but instead count on you recording
 progress by yourself depending on what you've seen come in so far, and
 aborting on your own between chunks. This allows better pipelining and
 backpressure down to the network and file descriptor layer, from what I
 understand.

 Can you explain what you mean by "This allows better pipelining and
 backpressure down to the network and file descriptor layer"?


I believe the term is "congestion control", as in the TCP congestion
control algorithm. That is, don't send data to the application faster than
it can parse it or pass it off; or, more generally, provide some mechanism
that allows the application to throttle down the incoming flow, which is
essential to any networked application like the Web.
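
As a rough sketch of what I mean by throttling (Node.js-style, with made-up
names; not a proposal for the API under discussion), the consumer pauses the
source while it is busy, so data is never pushed faster than it can be
handled:

    var fs = require('fs');

    var stream = fs.createReadStream('incoming.bin'); // made-up source

    stream.on('data', function (chunk) {
      stream.pause();                    // throttle: stop the flow while busy
      handleChunkAsync(chunk, function () {
        stream.resume();                 // caught up, ask for more
      });
    });

    function handleChunkAsync(chunk, done) {
      // hypothetical async consumer; pretend processing takes some time
      setTimeout(done, 10);
    }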

I think there's some confusion as to what the abort() call is going to do
exactly.