Re: Proposal for a DOM L3 Events Telecon

2013-05-01 Thread Wez
Hi guys, mind if I tag along with Gary on the call?


On 30 April 2013 13:46, Gary Kačmarčík (Кошмарчик) gary...@google.com wrote:

 On Mon, Apr 29, 2013 at 12:59 PM, Travis Leithead 
 travis.leith...@microsoft.com wrote:

  I’d like to propose we start a call to begin to work toward resolving
 the final bugs in the spec and for other business related to getting DOM L3
 Events to CR. On the call we can work out what subsequent meetings we should
 also arrange.


 Does next Tuesday (May 7th), at 11 am PST work for you?


 Note that 11am PDT = 3am JST.  If Masayuki is interested in joining, we
 should pick a late afternoon PDT time:
4pm PDT (Tue) = 8am JST (Wed)
5pm PDT (Tue) = 9am JST (Wed)




Re: Proposal for a DOM L3 Events Telecon

2013-05-01 Thread 河内 隆仁
If Masayuki-san is joining and the time is JST-friendly, I would also like
to join,
but feel free to ignore me if not.


On Wed, May 1, 2013 at 6:30 PM, Wez w...@google.com wrote:

 Hi guys, mind if I tag along with Gary on the call?


 On 30 April 2013 13:46, Gary Kačmarčík (Кошмарчик) gary...@google.com wrote:

 On Mon, Apr 29, 2013 at 12:59 PM, Travis Leithead 
 travis.leith...@microsoft.com wrote:

  I’d like to propose we start a call to begin to work toward resolving
 the final bugs in the spec and for other business related to getting DOM L3
 Events to CR. On the call we can work out what subsequent meetings we should
 also arrange.


 Does next Tuesday (May 7th), at 11 am PST work for you?


 Note that 11am PDT = 3am JST.  If Masayuki is interested in joining, we
 should pick a late afternoon PDT time:
4pm PDT (Tue) = 8am JST (Wed)
5pm PDT (Tue) = 9am JST (Wed)





-- 
Takayoshi Kochi


Re: URL comparison

2013-05-01 Thread Anne van Kesteren
On Sun, Apr 28, 2013 at 12:56 PM, Brian Kardell bkard...@gmail.com wrote:
 We created a prollyfill for this about a year ago (called :-link-local
 instead of :local-link for forward compatibility):

 http://hitchjs.wordpress.com/2012/05/18/content-based-css-link/

Cool!


 If you can specify the workings, we (public-nextweb community group) can rev
 the prollyfill, help create tests, collect feedback, etc so that when it
 comes time for implementation and rec there are few surprises.

Did you get any feedback thus far about desired functionality,
problems that are difficult to overcome, ..?


--
http://annevankesteren.nl/



Re: [PointerLock] Should there be onpointerlockchange/onpointerlockerror properties defined in the spec

2013-05-01 Thread Anne van Kesteren
On Thu, Apr 11, 2013 at 5:59 PM, Vincent Scheib sch...@google.com wrote:
 I argue on that issue that we should not bubble the event and have the
 handler on document only. Pointer lock doesn't have as much legacy spec
 churn as fullscreen, but I think we're in a position to have both of them be
 cleaned up before un-prefixing.

Okay. Let's kill the bubbling and define the event handlers inline.
Sorry for the delay.

Fullscreen still depends on HTML for iframe allowfullscreen and for
HTML terminating fullscreen when navigating. Does pointer lock require
something similar?


--
http://annevankesteren.nl/



Re: Collecting real world use cases (Was: Fixing appcache: a proposal to get us started)

2013-05-01 Thread Alec Flett
I think there are some good use cases for not-quite-offline as well. Sort
of a combination of your twitter and wikipedia use cases:

Community-content site: Logged-out users have content cached aggressively
offline - meaning every page visited should be cached until told otherwise.
Intermediate caches / proxies should be able to cache the latest version of
a URL. As soon as a user logs in, the same URLs they just used should now
have editing controls. (Note that the actual page contents *may* not have
changed, just the UI.) Pages now need to be fresh, meaning that users
should never edit stale content. In an ideal world, once a logged in user
has edited a page, that page is pushed to users or proxies who have
previously cached that page and will likely visit it again soon.

I know this example in particular seems like it could be accomplished with
a series of If-Modified-Since / 304's, but connection latency is the killer
here, especially for mobile - the fact that you have a white screen while
you wait to see if the page has changed. The idea that you could visit a
cached page (i.e. avoid hitting the network) and then a few seconds later
be told "there is a newer version of this page available" after the fact
(or even just silently update the page so the next visit delivers a fresh
but network-free page) would be pretty huge. Especially if you could then
proactively fetch a select set of pages - i.e. imagine an in-browser
process that says "for each link on this page, if I have a stale copy of
the URL, go fetch it in the background so it is ready in the cache".
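
A rough sketch of that flow, assuming hypothetical getCachedCopy / storeCopy /
renderPage / notifyNewerVersion helpers around whatever local store the app
uses, with a plain conditional XHR for the background check (not any
particular appcache proposal):

// Hypothetical helpers: getCachedCopy()/storeCopy() wrap the app's local store,
// renderPage() paints the HTML, notifyNewerVersion() shows an "update available" bar.
function showPossiblyStale(url) {
  var cached = getCachedCopy(url);            // { html: ..., lastModified: ... } or null
  if (cached) {
    renderPage(cached.html);                  // no white screen: paint the stale copy first
  }

  var xhr = new XMLHttpRequest();
  xhr.open('GET', url);
  if (cached) {
    // Conditional request: the server can answer 304 if nothing changed.
    xhr.setRequestHeader('If-Modified-Since', cached.lastModified);
  }
  xhr.onload = function () {
    if (xhr.status === 200) {
      storeCopy(url, xhr.responseText, xhr.getResponseHeader('Last-Modified'));
      if (cached) {
        notifyNewerVersion(url);              // or silently swap in the fresh copy
      } else {
        renderPage(xhr.responseText);
      }
    }
    // 304: the cached copy is already current; nothing to do.
  };
  xhr.send();
}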

(On this note it would probably be worth reaching out to the Wikimedia
Foundation to learn about the hassle they've gone through over the years
trying to distribute the load of wikipedia traffic given the constraints of
HTTP caching, broken proxies, CDNs, ISPs, etc)

Alec

On Tue, Apr 30, 2013 at 9:06 PM, Jonas Sicking jo...@sicking.cc wrote:

 On Apr 18, 2013 6:19 PM, Paul Bakaus pbak...@zynga.com wrote:
 
  Hi Jonas,
 
  Thanks for this - I feel this is heading somewhere, finally! I still need
  to work on submitting my full feedback, but I'd like to mention this: Why
  did nobody so far in this thread include real world use cases?
 
  For a highly complex topic like this in particular, I would think that
  collecting a large number of user use cases, not only requirements, and
  furthermore finding the lowest common denominator based on them, would
  prove very helpful, even if it's just about validation and making people
  understand your lengthy proposal. E.g. a news reader that needs to sync
  content but has an offline UI.
 
  Do you have a list collected somewhere?

 Sorry for not including the list in the initial email. It was long
 enough as it was so I decided to stop.

 Some of the use cases we discussed were:

 Small simple game
 The game consists of a set of static resources. A few HTML pages, like
 high score page, start page, in-game page, etc. A larger number of
 media resources. A few data resources which contain level metadata.
 Small amount of dynamic data being generated, such as progress on each
 level, high score, user info.
 In-game performance is critical; all resources must be guaranteed to
 be available locally once the game starts.
 Little need for network connectivity other than to update game
 resources whenever an update is available.

 Advanced game
 Same as simple game, but also downloads additional levels dynamically.
 Also wants to store game progress on servers so that it can be synced
 across devices.

 Wikipedia
 Top level page and its resources are made available offline.
 Application logic can enable additional pages to be made available
 offline. When such a page is made available offline, both the page and
 any media resources that it uses need to be cached.
 Doesn't need to be updated very aggressively, maybe only upon user request.

 Twitter
 A set of HTML templates that are used to create a UI for a database of
 tweets. The same data is visualized in several different ways, for
 example in the user's default tweet stream, in the page for an
 individual tweet, and in the conversation thread view.
 Downloading the actual tweet contents and metadata shouldn't need to
 happen multiple times in order to support the separate views.
 The URLs for watching individual tweets need to be the same whether
 the user is using appcache or not so that linking to a tweet always
 works.
 It is very important that users are upgraded to the latest version of
 scripts and templates very quickly after they become available. The
 website likely will want to be able to check for updates on demand
 rather than relying on implementation logic.
 If the user is online but has appcached the website it should be able
 to use the cached version. This should be the case even if the user
 navigates to a tweet page for a tweet for which the user hasn't yet
 cached the tweet content or metadata. In this case only the tweet
 content and metadata 

Re: [Shadow DOM] Simplifying level 1 of Shadow DOM

2013-05-01 Thread Tab Atkins Jr.
On Tue, Apr 30, 2013 at 11:10 AM, Ryosuke Niwa rn...@apple.com wrote:
 I'm concerned that we're over engineering here.

A valid concern in general, but I'm confident that we're actually
hitting the right notes here, precisely because we've developed these
more complicated features due to direct requests and complaints from
actual users of the API internally, who needed them to do fairly basic
things that we expect will be common.

It's difficult to understand without working through examples
yourself, but removing these abilities does not make Shadow DOM
simpler, it just makes it much, much weaker.

~TJ



Re: [PointerLock] Should there be onpointerlockchange/onpointerlockerror properties defined in the spec

2013-05-01 Thread Vincent Scheib
On Wed, May 1, 2013 at 7:00 AM, Anne van Kesteren ann...@annevk.nl wrote:

 On Thu, Apr 11, 2013 at 5:59 PM, Vincent Scheib sch...@google.com wrote:
  I argue on that issue that we should not bubble the event and have the
  handler on document only. Pointer lock doesn't have as much legacy spec
  churn as fullscreen, but I think we're in a position to have both of
 them be
  cleaned up before un-prefixing.

  Okay. Let's kill the bubbling and define the event handlers inline.
 Sorry for the delay.


Thanks, sounds good to me. I see and agree with your changes listed here
[1].

I have modified the pointer lock spec to not bubble the events.
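
For reference, a minimal sketch of what author code looks like under that
non-bubbling model, with the listeners attached directly to the document
(vendor prefixes that current implementations still require are omitted, and
the canvas element is just an example target):

// The events fire on the document and do not bubble from any element,
// so this is the one place to listen.
document.addEventListener('pointerlockchange', function () {
  if (document.pointerLockElement) {
    console.log('Pointer locked to', document.pointerLockElement);
  } else {
    console.log('Pointer lock exited');
  }
});

document.addEventListener('pointerlockerror', function () {
  console.log('Pointer lock request failed');
});

// A lock is still requested on a specific element, e.g. in response to a click:
canvas.addEventListener('click', function () {
  canvas.requestPointerLock();
});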


 Fullscreen still depends on HTML for iframe allowfullscreen and for
 HTML terminating fullscreen when navigating. Does pointer lock require
 something similar?


Pointer lock does use the concept of the "sandboxed pointer lock browsing
context flag", allow-pointer-lock [2], which has been included in the WHATWG
HTML Living Standard.
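
As a small illustration (the frame URL is a placeholder), an iframe only gets
pointer lock if its sandbox list includes that keyword:

// Without "allow-pointer-lock" in the sandbox list, the sandboxed pointer lock
// browsing context flag is set and requestPointerLock() inside the frame fails.
var frame = document.createElement('iframe');
frame.setAttribute('sandbox', 'allow-scripts allow-pointer-lock');
frame.src = 'https://example.com/game.html';
document.body.appendChild(frame);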

I'm neutral on the potential argument of defining the sandbox flag inline
as well; it is mentioned and referenced in the pointer lock specification.


This should conclude this thread on whether onpointerlockchange/onpointerlockerror
properties should be defined in the spec.


[1] https://www.w3.org/Bugs/Public/show_bug.cgi?id=20637#c10
[2]
http://www.whatwg.org/specs/web-apps/current-work/multipage/origin-0.html#sandboxed-pointer-lock-browsing-context-flag


Re: URL comparison

2013-05-01 Thread Brian Kardell
+ the public-nextweb list...

On Wed, May 1, 2013 at 9:00 AM, Anne van Kesteren ann...@annevk.nl wrote:
 On Sun, Apr 28, 2013 at 12:56 PM, Brian Kardell bkard...@gmail.com wrote:
 We created a prollyfill for this about a year ago (called :-link-local
 instead of :local-link for forward compatibility):

 http://hitchjs.wordpress.com/2012/05/18/content-based-css-link/

 Cool!


 If you can specify the workings, we (public-nextweb community group) can rev
 the prollyfill, help create tests, collect feedback, etc so that when it
 comes time for implementation and rec there are few surprises.

 Did you get any feedback thus far about desired functionality,
 problems that are difficult to overcome, ..?


 --
 http://annevankesteren.nl/

We have not uncovered much on this one other than that the few people
who commented were confused by what it meant - but we didn't really
make a huge effort to push it out there... By comparison to some
others it isn't a very 'exciting' fill (our :has() for example had
lots of comments, as did our mathematical attribute selectors) - but we
definitely can.  I'd like to open it up to these groups: where/how do you
think would be an effective means of collecting the necessary data?
Should we ask people to contribute comments to the list? Set up a git
project where people can pull/create issues, register tests, track fork
suggestions, etc.?  Most of our stuff for collecting information has
been admittedly all over the place (twitter, HN, reddit, blog
comments, etc), but this predates the nextweb group and larger
coordination, so I'm *very happy* if we can begin to change that.



--
Brian Kardell :: @briankardell :: hitchjs.com



RE: Proposal for a DOM L3 Events Telecon

2013-05-01 Thread Travis Leithead
I’d like to be sure we get Masayuki in our discussions as this represents at 
least a trio of implementors on the call.

The 4pm PDT (Tue) / 8am JST (Wed) will work for me. I’ll set up the telecon 
details and send them out.

From: Takayoshi Kochi (河内 隆仁) [mailto:ko...@google.com]
Sent: Wednesday, May 1, 2013 3:23 AM
To: Wez
Cc: Gary Kačmarčík (Кошмарчик); Travis Leithead; masay...@d-toybox.com; 
public-webapps; www-dom
Subject: Re: Proposal for a DOM L3 Events Telecon

If Masayuki-san is joining and the time is JST-friendly, I would also like to 
join,
but feel free to ignore me if not.

On Wed, May 1, 2013 at 6:30 PM, Wez w...@google.com 
wrote:
Hi guys, mind if I tag along with Gary on the call?

On 30 April 2013 13:46, Gary Kačmarčík (Кошмарчик) 
gary...@google.com wrote:
On Mon, Apr 29, 2013 at 12:59 PM, Travis Leithead 
travis.leith...@microsoft.com wrote:
I’d like to propose we start a call to begin to work toward resolving the final 
bugs in the spec and for other business related to getting DOM L3 Events to CR. 
On the call we can work out what subsequent meetings we should also arrange.

Does next Tuesday (May 7th), at 11 am PST work for you?

Note that 11am PDT = 3am JST.  If Masayuki is interested in joining, we should 
pick a late afternoon PDT time:
   4pm PDT (Tue) = 8am JST (Wed)
   5pm PDT (Tue) = 9am JST (Wed)





--
Takayoshi Kochi


Re: [Shadow DOM] Simplifying level 1 of Shadow DOM

2013-05-01 Thread Ryosuke Niwa
On Apr 30, 2013, at 12:07 PM, Daniel Freedman dfre...@google.com wrote:

 I'm concerned that if the spec shipped as you described, that it would not be 
 useful enough to developers to bother using it at all.

I'm concerned that we can never ship this feature due to the performance 
penalties it imposes.

 Without useful redistributions, authors can't use composition of web 
 components very well without scripting.
 At that point, it's not much better than just leaving it all in the document 
 tree.

I don't think having to inspect the light DOM manually is terrible, and we had 
been using shadow DOM to implement textarea, input, and other elements years 
before we introduced node redistributions.

On May 1, 2013, at 8:57 AM, Tab Atkins Jr. jackalm...@gmail.com wrote:

 It's difficult to understand without working through examples
 yourself, but removing these abilities does not make Shadow DOM
 simpler, it just makes it much, much weaker.

It does make shadow DOM significantly simpler at least in the areas we're 
concerned about.

- R. Niwa




Re: [Shadow DOM] Simplifying level 1 of Shadow DOM

2013-05-01 Thread Dimitri Glazkov
On Wed, May 1, 2013 at 11:49 AM, Ryosuke Niwa rn...@apple.com wrote:
 On Apr 30, 2013, at 12:07 PM, Daniel Freedman dfre...@google.com wrote:

 I'm concerned that if the spec shipped as you described, that it would not 
 be useful enough to developers to bother using it at all.

 I'm concerned that we can never ship this feature due to the performance 
 penalties it imposes.

Can you tell me more about this concern? I am pretty sure the current
implementation in WebKit/Blink does not regress performance for the
Web-at-large.

:DG



Re: [Shadow DOM] Simplifying level 1 of Shadow DOM

2013-05-01 Thread Scott Miles
 I'm concerned that we can never ship this feature due to the performance
penalties it imposes.

Can you be more explicit about the penalty to which you refer? I understand
there are concerns about whether the features can be made fast, but I
wasn't aware of an overall penalty on code that is not actually using said
features. Can you elucidate?

 It does make shadow DOM significantly simpler at least in the areas we're
concerned about.

Certainly there is no argument there. I believe the point that Tab was
making is that at some point it becomes so simple it's only useful for very
basic problems, and developers at large no longer care. This question is at
least worthy of discussion, yes?

Scott


On Wed, May 1, 2013 at 11:49 AM, Ryosuke Niwa rn...@apple.com wrote:

 On Apr 30, 2013, at 12:07 PM, Daniel Freedman dfre...@google.com wrote:

  I'm concerned that if the spec shipped as you described, that it would
 not be useful enough to developers to bother using it at all.

 I'm concerned that we can never ship this feature due to the performance
 penalties it imposes.

  Without useful redistributions, authors can't use composition of web
 components very well without scripting.
  At that point, it's not much better than just leaving it all in the
 document tree.

 I don't think having to inspect the light DOM manually is terrible, and we
 had been using shadow DOM to implement textarea, input, and other elements
 years before we introduced node redistributions.

 On May 1, 2013, at 8:57 AM, Tab Atkins Jr. jackalm...@gmail.com wrote:

  It's difficult to understand without working through examples
  yourself, but removing these abilities does not make Shadow DOM
  simpler, it just makes it much, much weaker.

 It does make shadow DOM significantly simpler at least in the areas we're
 concerned about.

 - R. Niwa





Re: [Shadow DOM] Simplifying level 1 of Shadow DOM

2013-05-01 Thread Jonas Sicking
My proposal is to allow for multiple insertion points, and use
selectors to filter the insertion points.

However, restrict the set of selectors such that only an element's
intrinsic state affects which insertion point it is inserted in. I.e.
only an element's name, attributes, and possibly a few states like
:target and :visited affect which insertion point it is inserted
into.

That way when an element is inserted or modified, you don't have to
worry about having to check any descendants or any siblings to see if
the selectors that they match suddenly changed.

Only when the attributes on an element change do you have to re-test
which insertion point it should be inserted into. And then you only
have to recheck the node itself, no siblings or other nodes. And you
only have to do so if the parent has a binding.
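
To make that concrete, here is a rough sketch using the <content select>
insertion points from the current draft (the host markup, attribute names,
and the prefixed createShadowRoot call are illustrative only); every selector
tests only the child element's own name or attributes, so distribution never
depends on siblings or descendants:

var host = document.querySelector('#host');
var root = host.webkitCreateShadowRoot ? host.webkitCreateShadowRoot()
                                       : host.createShadowRoot();

// Each insertion point filters the host's children by intrinsic state only:
// tag name or attributes on the child itself, never positional selectors
// like :first-child that would depend on other nodes.
root.innerHTML =
  '<header><content select="h1, [data-slot=title]"></content></header>' +
  '<footer><content select="[data-slot=footer]"></content></footer>' +
  '<div class="body"><content></content></div>';  // catch-all goes last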

/ Jonas

On Wed, May 1, 2013 at 11:49 AM, Ryosuke Niwa rn...@apple.com wrote:
 On Apr 30, 2013, at 12:07 PM, Daniel Freedman dfre...@google.com wrote:

 I'm concerned that if the spec shipped as you described, that it would not 
 be useful enough to developers to bother using it at all.

 I'm concerned that we can never ship this feature due to the performance 
 penalties it imposes.

 Without useful redistributions, authors can't use composition of web 
 components very well without scripting.
 At that point, it's not much better than just leaving it all in the document 
 tree.

 I don't think having to inspect the light DOM manually is terrible, and we 
 had been using shadow DOM to implement textarea, input, and other elements 
 years before we introduced node redistributions.

 On May 1, 2013, at 8:57 AM, Tab Atkins Jr. jackalm...@gmail.com wrote:

 It's difficult to understand without working through examples
 yourself, but removing these abilities does not make Shadow DOM
 simpler, it just makes it much, much weaker.

 It does make shadow DOM significantly simpler at least in the areas we're 
 concerned about.

 - R. Niwa





Re: [Shadow DOM] Simplifying level 1 of Shadow DOM

2013-05-01 Thread Scott Miles
 Note that the interesting restriction isn't that it shouldn't regress 
 performance
for the web-at-large.

No argument, but afaict, the implication of R. Niwa's statement was in
fact that there was a penalty for these features merely existing.

 The restriction is that it shouldn't be slow when there is heavy usage
of Shadow DOM on the page.

Again, no argument. But as a developer happily coding away against Canary's
Shadow DOM implementation, it's hard for me to accept the prima facie
case that it must be simplified to achieve this goal.

Scott

P.S. No footguns!


On Wed, May 1, 2013 at 12:41 PM, Jonas Sicking jo...@sicking.cc wrote:

 On Wed, May 1, 2013 at 12:15 PM, Dimitri Glazkov dglaz...@chromium.org
 wrote:
  On Wed, May 1, 2013 at 11:49 AM, Ryosuke Niwa rn...@apple.com wrote:
  On Apr 30, 2013, at 12:07 PM, Daniel Freedman dfre...@google.com
 wrote:
 
  I'm concerned that if the spec shipped as you described, that it would
 not be useful enough to developers to bother using it at all.
 
  I'm concerned that we can never ship this feature due to the
 performance penalties it imposes.
 
  Can you tell me more about this concern? I am pretty sure the current
  implementation in WebKit/Blink does not regress performance for the
  Web-at-large.

 Note that the interesting restriction isn't that it shouldn't regress
 performance for the web-at-large. The restriction is that it
 shouldn't be slow when there is heavy usage of Shadow DOM on the
 page.

 Otherwise we recreate one of the problems of Mutation Events. Gecko
 was able to make them not regress performance as long as they weren't
 used. But that meant that we had to go around telling everyone to not
 use them. And creating features and then telling people not to use
 them is a pretty boring exercise.

 Or, to put it another way: Don't create footguns.

 / Jonas




Re: [Shadow DOM] Simplifying level 1 of Shadow DOM

2013-05-01 Thread Daniel Freedman
On Wed, May 1, 2013 at 11:49 AM, Ryosuke Niwa rn...@apple.com wrote:

 On Apr 30, 2013, at 12:07 PM, Daniel Freedman dfre...@google.com wrote:

  I'm concerned that if the spec shipped as you described, that it would
 not be useful enough to developers to bother using it at all.

 I'm concerned that we can never ship this feature due to the performance
 penalties it imposes.

  Without useful redistributions, authors can't use composition of web
 components very well without scripting.
  At that point, it's not much better than just leaving it all in the
 document tree.

 I don't think having to inspect the light DOM manually is terrible


I'm surprised to hear you say this. The complexity of the DOM and CSS
styling that modern web applications demand is mind-numbing.
Having to create possibly hundreds of unique CSS selectors applied to
possibly thousands of DOM nodes, hoping that no properties conflict, and
that no bizarre corner cases arise as nodes move in and out of the document.

Inspecting that DOM is a nightmare.

Just looking at Twitter, a Tweet UI element is very complicated.
It seems like they embed parts of the UI into data attributes (like
data-expanded-footer).
That to me looks like a prime candidate for placement in a ShadowRoot.
The nested structure of it also suggests that they would benefit from node
distribution through composition.

That's why ShadowDOM is so important. It has the ability to scope
complexity into things that normal web developers can understand, compose,
and reuse.


 , and we had been using shadow DOM to implement textarea, input, and other
 elements years before we introduced node redistributions.


Things like input and textarea are trivial compared to a YouTube video
player, or a threaded email list with reply buttons and formatting toolbars.
These are the real candidates for ShadowDOM: the UI controls that are
complicated.



 On May 1, 2013, at 8:57 AM, Tab Atkins Jr. jackalm...@gmail.com wrote:

  It's difficult to understand without working through examples
  yourself, but removing these abilities does not make Shadow DOM
  simpler, it just makes it much, much weaker.

 It does make shadow DOM significantly simpler at least in the areas we're
 concerned about.

 - R. Niwa





Re: [Shadow DOM] Simplifying level 1 of Shadow DOM

2013-05-01 Thread Dimitri Glazkov
FWIW, I don't mind revisiting and even tightening selectors on
insertion points. I don't want this to be a sticking point.

:DG

On Wed, May 1, 2013 at 12:46 PM, Scott Miles sjmi...@google.com wrote:
 Note that the interesting restriction isn't that it shouldn't regress
 performance for the web-at-large.

 No argument, but afaict, the implication of R. Niwa's statement was in
 fact that there was a penalty for these features merely existing.

 The restriction is that it shouldn't be slow when there is heavy usage
 of Shadow DOM on the page.

 Again, no argument. But as a developer happily coding away against Canary's
 Shadow DOM implementation, it's hard for me to accept the prima facie
 case that it must be simplified to achieve this goal.

 Scott

 P.S. No footguns!


 On Wed, May 1, 2013 at 12:41 PM, Jonas Sicking jo...@sicking.cc wrote:

 On Wed, May 1, 2013 at 12:15 PM, Dimitri Glazkov dglaz...@chromium.org
 wrote:
  On Wed, May 1, 2013 at 11:49 AM, Ryosuke Niwa rn...@apple.com wrote:
  On Apr 30, 2013, at 12:07 PM, Daniel Freedman dfre...@google.com
  wrote:
 
  I'm concerned that if the spec shipped as you described, that it would
  not be useful enough to developers to bother using it at all.
 
  I'm concerned that we can never ship this feature due to the
  performance penalties it imposes.
 
  Can you tell me more about this concern? I am pretty sure the current
  implementation in WebKit/Blink does not regress performance for the
  Web-at-large.

 Note that the interesting restriction isn't that it shouldn't regress
 performance for the web-at-large. The restriction is that it
 shouldn't be slow when there is heavy usage of Shadow DOM on the
 page.

 Otherwise we recreate one of the problems of Mutation Events. Gecko
 was able to make them not regress performance as long as they weren't
 used. But that meant that we had to go around telling everyone to not
 use them. And creating features and then telling people not to use
 them is a pretty boring exercise.

 Or, to put it another way: Don't create footguns.

 / Jonas





Re: [Shadow DOM] Simplifying level 1 of Shadow DOM

2013-05-01 Thread Scott Miles
Sorry it got lost in other messages, but fwiw, I also don't have a problem
with

 revisiting and even tightening selectors

Scott


On Wed, May 1, 2013 at 12:55 PM, Dimitri Glazkov dglaz...@chromium.org wrote:

 FWIW, I don't mind revisiting and even tightening selectors on
 insertion points. I don't want this to be a sticking point.

 :DG

 On Wed, May 1, 2013 at 12:46 PM, Scott Miles sjmi...@google.com wrote:
  Note that the interesting restriction isn't that it shouldn't regress
  performance for the web-at-large.
 
  No argument, but afaict, the implication of R. Niwa's statement was
 in
  fact that there was a penalty for these features merely existing.
 
  The restriction is that it shouldn't be slow when there is heavy usage
  of Shadow DOM on the page.
 
  Again, no argument. But as a developer happily coding away against
 Canary's
  Shadow DOM implementation, it's hard for me to accept the prima facie
  case that it must be simplified to achieve this goal.
 
  Scott
 
  P.S. No footguns!
 
 
  On Wed, May 1, 2013 at 12:41 PM, Jonas Sicking jo...@sicking.cc wrote:
 
  On Wed, May 1, 2013 at 12:15 PM, Dimitri Glazkov dglaz...@chromium.org
 
  wrote:
   On Wed, May 1, 2013 at 11:49 AM, Ryosuke Niwa rn...@apple.com
 wrote:
   On Apr 30, 2013, at 12:07 PM, Daniel Freedman dfre...@google.com
   wrote:
  
   I'm concerned that if the spec shipped as you described, that it
 would
   not be useful enough to developers to bother using it at all.
  
   I'm concerned that we can never ship this feature due to the
   performance penalties it imposes.
  
   Can you tell me more about this concern? I am pretty sure the current
   implementation in WebKit/Blink does not regress performance for the
   Web-at-large.
 
  Note that the interesting restriction isn't that it shouldn't regress
  performance for the web-at-large. The restriction is that it
  shouldn't be slow when there is heavy usage of Shadow DOM on the
  page.
 
  Otherwise we recreate one of the problems of Mutation Events. Gecko
  was able to make them not regress performance as long as they weren't
  used. But that meant that we had to go around telling everyone to not
  use them. And creating features and then telling people not to use
  them is a pretty boring exercise.
 
  Or, to put it another way: Don't create footguns.
 
  / Jonas
 
 



Re: URL comparison

2013-05-01 Thread Clint Hill
I'd like to add to what Brian is saying below: this is effectively the
goal of the public-nextweb group as I see it. We want to provide a way for
developers to build these types of prollyfills and, more importantly,
share and publicize them to get the necessary exposure and feedback.

Without that second part there is no point in the first. I think
prollyfill.org could be more useful if it acted as a registry for these
things. In fact, Brian and I did this with Hitch (leveraging GitHub as the
source host) and I tend to believe this is the right way. Right now there
is a wiki listing the known prollyfills out in the wild, but maybe it is
not easy to find and not easy to provide feedback on.

It's my opinion that prollyfill.org should turn into a registry that is
simply a pointer to GitHub projects.

Thoughts?


On 5/1/13 10:03 AM, Brian Kardell bkard...@gmail.com wrote:

+ the public-nextweb list...

On Wed, May 1, 2013 at 9:00 AM, Anne van Kesteren ann...@annevk.nl
wrote:
 On Sun, Apr 28, 2013 at 12:56 PM, Brian Kardell bkard...@gmail.com
wrote:
 We created a prollyfill for this about a year ago (called :-link-local
 instead of :local-link for forward compatibility):

 http://hitchjs.wordpress.com/2012/05/18/content-based-css-link/

 Cool!


 If you can specify the workings, we (public-nextweb community group)
can rev
 the prollyfill, help create tests, collect feedback, etc so that when
it
 comes time for implementation and rec there are few surprises.

 Did you get any feedback thus far about desired functionality,
 problems that are difficult to overcome, ..?


 --
 http://annevankesteren.nl/

We have not uncovered much on this one other than that the few people
who commented were confused by what it meant - but we didn't really
make a huge effort to push it out there... By comparison to some
others it isn't a very 'exciting' fill (our :has() for example had
lots of comment as did our mathematical attribute selectors) - but we
definitely can.  I'd like to open it up to these groups where/how you
think might be an effective means of collecting necessary data -
should we ask people to contribute comments to the list? set up a git
project where people can pull/create issues, register tests/track fork
suggestions, etc?  Most of our stuff for collecting information has
been admittedly all over the place (twitter, HN, reddit, blog
comments, etc), but this predates the nextweb group and larger
coordination, so I'm *very happy* if we can begin to change that.



--
Brian Kardell :: @briankardell :: hitchjs.com






Blob URLs | autoRevoke, defaults, and resolutions

2013-05-01 Thread Arun Ranganathan
At the recent TPAC for Working Groups held in San Jose, Adrian Bateman, Jonas 
Sicking and I spent some time taking a look at how to remedy what the spec. 
says today about Blob URLs, both from the perspective of default behavior and 
in terms of what correct autoRevoke behavior should be.  This email is to 
summarize those discussions.

Blob URLs are used in different parts of the platform today, and are expected 
to work on the platform wherever URLs do.  This includes CSS, MediaStream and 
MediaSource use cases [1], along with use of 'src='.   

(Separate discussions about a v2 of the File API spec, including use of a 
Futures-based model in lieu of the event model, took place, but submitting a 
LCWD with major interoperability amongst all browsers is a good goal for this 
draft.)

Here's a summary of the Blob URL issues:

1. There's the relatively easy question of defaults.  While the spec says that 
URL.createObjectURL should create a Blob URL which has autoRevoke: true by 
default [2], there isn't any implementation that supports this, whether that's 
IE's oneTimeOnly behavior (which is related but different), or Firefox's 
autoRevoke implementation.  Chrome doesn't touch this yet :)

The spec. will roll back the default from true to false.  At least this 
matches what implementations do; there's been resistance to changing the 
default due to shipping applications relying on autoRevoke being false by 
default, or at least implementor reluctance [1].

Switching the default to false would enable IE, Chrome, and Firefox to have 
interoperability with URL.createObjectURL(blobArg), though such a default 
places burdens on web developers to couple create* calls with revoke* calls to 
not leak Blobs.  Jonas proposes a separate method, 
URL.createAutoRevokeObjectURL, which creates an autoRevoke URL.  I'm lukewarm 
on that :-\
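
For context, the coupling that a false default demands looks roughly like
this (a minimal sketch; the load/error handler is just one way to pick a
revocation point):

// With autoRevoke false by default, every create* needs a matching revoke*
// once the consumer is done with the URL, or the Blob stays pinned.
var url = URL.createObjectURL(blob);
var img = document.createElement('img');
img.onload = img.onerror = function () {
  URL.revokeObjectURL(url);  // release the Blob whether the load succeeded or not
};
img.src = url;
document.body.appendChild(img);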

2. Regardless of the default, there's the hard question of what to do with Blob 
URL revocation.  Glenn / zewt points out that this applies, though perhaps less 
dramatically, to *manually* revoked Blob URLs, and provides some test cases 
[3].  

Options are:

2a. To meticulously special-case Blob URLs, per Bug 17765 [4].  This calls for 
a synchronous step attached to wherever URLs are used to peg Blob URL data at 
fetch, so that the chance of a concurrent revocation doesn't cause things to 
behave unpredictably.  Firefox does a variation of this with keeping channels 
open, but solving this bug interoperably is going to be very hard, and has to 
be done in different places across the platform.  And even within CSS.  This is 
hard to move forward with.

2b. To adopt an 80-20 rule, and only specify what happens for some cases that 
seem common, but expressly disallow other cases.  This might be a more muted 
version of Bug 17765, especially if it can't be done within fetch [5].  

This could mean that the blob clause for basic fetch[5] only defines some 
cases where a synchronous fetch can be run (TBD) but expressly disallows others 
where synchronous fetching is not feasible.  This would limit the use of Blob 
URLs pretty drastically, but might be the only solution.  For instance, 
asynchronous calls accompanying embed, defer etc. might have to be 
expressly disallowed.  It would be great if we do this in fetch [5] :-)

Essentially, this might be to do what Firefox does but document what 
"dereference" means [6], and be clear about what might break.  Most 
implementors acknowledge that use of Blob URLs simply won't work in some cases 
(e.g. CSS cases, etc.).  We should formalize that; it would involve listing 
what works explicitly.  Anne?

2c. Re-use oneTimeOnly as in IE's behavior for autoRevoke (but call it 
autoRevoke).  But we jettisoned this for race conditions e.g.

// This is in IE only
 
img2.src = URL.createObjectURL(fileBlob, {oneTimeOnly: true});

// race now! then fail in IE only
img1.src = img2.src;

will fail in IE with oneTimeOnly.  It appears to fail reliably, but again, 
"dereference URL" may not be interoperable here.  This is probably not what we 
should do, but it was worth listing, since it carries the brute force of a 
shipping implementation, and shows how some % of the market has actively solved 
this problem :)

3. We can lift origin restrictions in v2 on Blob URL; currently, one shipping 
implementation (IE) actively relies on origin restrictions, but expressed 
willingness to phase this out.  Most use cases needing Blob data across origins 
can be met without needing Blob URLs to not be origin restricted.  Blob URLs 
must be unguessable for that to happen, and today, they aren't unguessable in 
some implementations.

-- A*


[1] https://www.w3.org/Bugs/Public/show_bug.cgi?id=19594
[2] http://dev.w3.org/2006/webapi/FileAPI/#creating-revoking
[3] http://lists.w3.org/Archives/Public/public-webapps/2013AprJun/0294.html
[4] https://www.w3.org/Bugs/Public/show_bug.cgi?id=17765
[5] http://fetch.spec.whatwg.org/#concept-fetch
[6] 

Re: Blob URLs | autoRevoke, defaults, and resolutions

2013-05-01 Thread Eric U
On Wed, May 1, 2013 at 3:36 PM, Arun Ranganathan a...@mozilla.com wrote:
 At the recent TPAC for Working Groups held in San Jose, Adrian Bateman, Jonas 
 Sicking and I spent some time taking a look at how to remedy what the spec. 
 says today about Blob URLs, both from the perspective of default behavior and 
 in terms of what correct autoRevoke behavior should be.  This email is to 
 summarize those discussions.

 Blob URLs are used in different parts of the platform today, and are expected 
 to work on the platform wherever URLs do.  This includes CSS, MediaStream and 
 MediaSource use cases [1], along with use of 'src='.

 (Separate discussions about a v2 of the File API spec, including use of a 
 Futures-based model in lieu of the event model, took place, but submitting a 
 LCWD with major interoperability amongst all browsers is a good goal for this 
 draft.)

 Here's a summary of the Blob URL issues:

 1. There's the relatively easy question of defaults.  While the spec says 
 that URL.createObjectURL should create a Blob URL which has autoRevoke: true 
 by default [2], there isn't any implementation that supports this, whether 
 that's IE's oneTimeOnly behavior (which is related but different), or 
 Firefox's autoRevoke implementation.  Chrome doesn't touch this yet :)

 The spec. will roll back the default from true to false.  At least this 
 matches what implementations do; there's been resistance to changing the 
 default due to shipping applications relying on autoRevoke being false by 
 default, or at least implementor reluctance [1].

Sounds good.  Let's just be consistent.

 Switching the default to false would enable IE, Chrome, and Firefox to have 
 interoperability with URL.createObjectURL(blobArg), though such a default 
 places burdens on web developers to couple create* calls with revoke* calls 
 to not leak Blobs.  Jonas proposes a separate method, 
 URL.createAutoRevokeObjectURL, which creates an autoRevoke URL.  I'm lukewarm 
 on that :-\

I'd support a new method with a different default, if we could figure
out a reasonable thing for that new method to do.

 2. Regardless of the default, there's the hard question of what to do with 
 Blob URL revocation.  Glenn / zewt points out that this applies, though 
 perhaps less dramatically, to *manually* revoked Blob URLs, and provides some 
 test cases [3].

 Options are:

 2a. To meticulously special-case Blob URLs, per Bug 17765 [4].  This calls 
 for a synchronous step attached to wherever URLs are used to peg Blob URL 
 data at fetch, so that the chance of a concurrent revocation doesn't cause 
 things to behave unpredictably.  Firefox does a variation of this with 
 keeping channels open, but solving this bug interoperably is going to be very 
 hard, and has to be done in different places across the platform.  And even 
 within CSS.  This is hard to move forward with.

Hard.

 2b.To adopt an 80-20 rule, and only specify what happens for some cases that 
 seem common, but expressly disallow other cases.  This might be a more muted 
 version of Bug 17765, especially if it can't be done within fetch [5].

Ugly.

 This could mean that the blob clause for basic fetch[5] only defines some 
 cases where a synchronous fetch can be run (TBD) but expressly disallows 
 others where synchronous fetching is not feasible.  This would limit the use 
 of Blob URLs pretty drastically, but might be the only solution.  For 
 instance, asynchronous calls accompanying embed, defer etc. might have to 
 be expressly disallowed.  It would be great if we do this in fetch [5] :-)

Just to be clear, this would limit the use of *autoRevoke* Blob URLs,
not all Blob URLs, yes?

 Essentially, this might be to do what Firefox does but document what 
 dereference means [6], and be clear about what might break.  Most 
 implementors acknowledge that use of Blob URLs simply won't work in some 
 cases (e.g. CSS cases, etc.).  We should formalize that; it would involve 
 listing what works explicitly.  Anne?

 2c. Re-use oneTimeOnly as in IE's behavior for autoRevoke (but call it 
 autoRevoke).  But we jettisoned this for race conditions e.g.

 // This is in IE only

 img2.src = URL.createObjectURL(fileBlob, {oneTimeOnly: true});

 // race now! then fail in IE only
 img1.src = img2.src;

 will fail in IE with oneTimeOnly.  It appears to fail reliably, but again, 
 dereference URL may not be interoperable here.  This is probably not what 
 we should do, but it was worth listing, since it carries the brute force of a 
 shipping implementation, and shows how some % of the market has actively 
 solved this problem :)

I'm not really sure this is so bad.  I know it's the case I brought
up, and I must admit that I disliked the oneTimeOnly when I first
heard about it, but all other proposals [including not having
automatic revocation at all] now seem worse.  Here you've set
something to be oneTimeOnly and used it twice; if that fails in IE,
that's correct.  If it works some of 

Re: Blob URLs | autoRevoke, defaults, and resolutions

2013-05-01 Thread Jonas Sicking
On Wed, May 1, 2013 at 4:25 PM, Eric U er...@google.com wrote:
 On Wed, May 1, 2013 at 3:36 PM, Arun Ranganathan a...@mozilla.com wrote:
 Switching the default to false would enable IE, Chrome, and Firefox to have 
 interoperability with URL.createObjectURL(blobArg), though such a default 
 places burdens on web developers to couple create* calls with revoke* calls 
 to not leak Blobs.  Jonas proposes a separate method, 
 URL.createAutoRevokeObjectURL, which creates an autoRevoke URL.  I'm 
 lukewarm on that :-\

 I'd support a new method with a different default, if we could figure
 out a reasonable thing for that new method to do.

Yeah, the if-condition here is quite important.

But if we can figure out this problem, then my proposal would be to
add a new method which has a nicer name than createObjectURL as to
encourage authors to use that and have fewer leaks.

 2. Regardless of the default, there's the hard question of what to do with 
 Blob URL revocation.  Glenn / zewt points out that this applies, though 
 perhaps less dramatically, to *manually* revoked Blob URLs, and provides 
 some test cases [3].

 Options are:

 2a. To meticulously special-case Blob URLs, per Bug 17765 [4].  This calls 
 for a synchronous step attached to wherever URLs are used to peg Blob URL 
 data at fetch, so that the chance of a concurrent revocation doesn't cause 
 things to behave unpredictably.  Firefox does a variation of this with 
 keeping channels open, but solving this bug interoperably is going to be 
 very hard, and has to be done in different places across the platform.  And 
 even within CSS.  This is hard to move forward with.

 Hard.

It actually has turned out to be surprisingly easy in Gecko. But I
realize the same might not be true everywhere.

 2b.To adopt an 80-20 rule, and only specify what happens for some cases that 
 seem common, but expressly disallow other cases.  This might be a more muted 
 version of Bug 17765, especially if it can't be done within fetch [5].

 Ugly.

 This could mean that the blob clause for basic fetch[5] only defines 
 some cases where a synchronous fetch can be run (TBD) but expressly 
 disallows others where synchronous fetching is not feasible.  This would 
 limit the use of Blob URLs pretty drastically, but might be the only 
 solution.  For instance, asynchronous calls accompanying embed, defer 
 etc. might have to be expressly disallowed.  It would be great if we do this 
 in fetch [5] :-)

 Just to be clear, this would limit the use of *autoRevoke* Blob URLs,
 not all Blob URLs, yes?

No, it would limit the use of all *revokable* Blob URLs. Since you get
exactly the same issues when the page calls revokeObjectURL manually.
So that means that it applies to all Blob URLs.

 2c. Re-use oneTimeOnly as in IE's behavior for autoRevoke (but call it 
 autoRevoke).  But we jettisoned this for race conditions e.g.

 // This is in IE only

 img2.src = URL.createObjectURL(fileBlob, {oneTimeOnly: true});

 // race now! then fail in IE only
 img1.src = img2.src;

 will fail in IE with oneTimeOnly.  It appears to fail reliably, but again, 
 dereference URL may not be interoperable here.  This is probably not what 
 we should do, but it was worth listing, since it carries the brute force of 
 a shipping implementation, and shows how some % of the market has actively 
 solved this problem :)

 I'm not really sure this is so bad.  I know it's the case I brought
 up, and I must admit that I disliked the oneTimeOnly when I first
 heard about it, but all other proposals [including not having
 automatic revocation at all] now seem worse.  Here you've set
 something to be oneTimeOnly and used it twice; if that fails in IE,
 that's correct.  If it works some of the time in other browsers [after
 they implement oneTimeOnly], that's not good, but you did pretty much
 aim at your own foot.  Developers that actively try to do the right
 thing will have consistent good results without extra code, at least.
 I realize that img1.src = img2.src failing is odd, but as [IIRC]
 Adrian pointed out, if it's an uncacheable image on a server that's
 gone away, couldn't that already happen, depending on your network
 stack implementation?

I'm more worried that if implementations don't initiate the load
synchronously, which is hard per your comment above, then it can
easily be random which of the two loads succeeds and which fails. If
the revoking happens at the end of the load, both loads could even
succeed depending on timing and implementation details.

/ Jonas



Re: Blob URLs | autoRevoke, defaults, and resolutions

2013-05-01 Thread Glenn Maynard
On Wed, May 1, 2013 at 5:36 PM, Arun Ranganathan a...@mozilla.com wrote:

 2a. To meticulously special-case Blob URLs, per Bug 17765 [4].  This calls
 for a synchronous step attached to wherever URLs are used to peg Blob URL
 data at fetch, so that the chance of a concurrent revocation doesn't cause
 things to behave unpredictably.  Firefox does a variation of this with
 keeping channels open, but solving this bug interoperably is going to be
 very hard, and has to be done in different places across the platform.  And
 even within CSS.  This is hard to move forward with.

 2b.To adopt an 80-20 rule, and only specify what happens for some cases
 that seem common, but expressly disallow other cases.  This might be a more
 muted version of Bug 17765, especially if it can't be done within fetch [5].


I'm okay with limiting this in cases where it's particularly hard to
define.  In particular, it seems like placing a hook in CSS in any
deterministic way is hard, at least today: from what I understand, the time
CSS parsing happens is unspecified.

However, we probably can't break non-autorevoke blob URLs with CSS.  So,
I'd propose:

- Start by defining that auto-revoke blob URLs may only be used with APIs
that explicitly capture the blob (putting aside the mechanics of how we do
that, for now).  Blob capture would still affect non-autorevoke blob URLs,
since it fixes race conditions, but an uncaptured blob URL would continue
to work with non-autorevoke URLs.
- Apply blob capture to one or two test cases.  I think XHR is a good place
for this, because it's easy to test, due to the xhr.open() and xhr.send()
split.  xhr.open() is where blob capture should happen, and xhr.send() is
where the fetch happens (see the sketch after this list).
- Once people are comfortable with how it works, start applying it to other
major blob URL cases (e.g. img).  Whether to apply it broadly to all APIs
next or not is something that could be decided at this point.
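
A sketch of how the XHR case could behave under that proposal (the capture
semantics shown in the comments are the proposed behavior, not what any
engine currently specifies):

var url = URL.createObjectURL(blob);

var xhr = new XMLHttpRequest();
xhr.open('GET', url);        // proposal: the blob is captured here...
URL.revokeObjectURL(url);    // ...so revoking (or auto-revoking) now is safe
xhr.onload = function () {
  // the response still reflects the captured blob's data
};
xhr.send();                  // ...and the actual fetch happens here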

This will make autorevoke blob URLs work, gradually fix manual-revoke blob
URLs as a side-effect, and leave manual-revoke URLs unspecified but
functional for the remaining cases.  It also doesn't require us to dive in
head-first and try to apply this to every API on the platform all at once,
which nobody wants to do; it lets us test it out, then apply it to more
APIs at whatever pace makes sense.

(I don't know any way to deal with the CSS case.)



 2c. Re-use oneTimeOnly as in IE's behavior for autoRevoke (but call it
 autoRevoke).  But we jettisoned this for race conditions e.g.

 // This is in IE only

 img2.src = URL.createObjectURL(fileBlob, {oneTimeOnly: true});

 // race now! then fail in IE only
 img1.src = img2.src;

 will fail in IE with oneTimeOnly.  It appears to fail reliably, but again,
 dereference URL may not be interoperable here.  This is probably not what
 we should do, but it was worth listing, since it carries the brute force of
 a shipping implementation, and shows how some % of the market has actively
 solved this problem :)


There are a lot of problems with oneTimeOnly.  It's very easy for the URL
to never actually be used, which results in a subtle and expensive blob
leak.  For example, this:

setInterval(function() {
    img.src = URL.createObjectURL(createBlob(), {oneTimeOnly: true});
}, 100);

might leak 10 blobs per second, since a browser that obtains images on
demand might not fetch the blob at all, while a browser that obtains
images immediately wouldn't.

-- 
Glenn Maynard


Re: Blob URLs | autoRevoke, defaults, and resolutions

2013-05-01 Thread Eric U
On Wed, May 1, 2013 at 4:53 PM, Jonas Sicking jo...@sicking.cc wrote:
 On Wed, May 1, 2013 at 4:25 PM, Eric U er...@google.com wrote:
 On Wed, May 1, 2013 at 3:36 PM, Arun Ranganathan a...@mozilla.com wrote:
 Switching the default to false would enable IE, Chrome, and Firefox to 
 have interoperability with URL.createObjectURL(blobArg), though such a 
 default places burdens on web developers to couple create* calls with 
 revoke* calls to not leak Blobs.  Jonas proposes a separate method, 
 URL.createAutoRevokeObjectURL, which creates an autoRevoke URL.  I'm 
 lukewarm on that :-\

 I'd support a new method with a different default, if we could figure
 out a reasonable thing for that new method to do.

 Yeah, the if-condition here is quite important.

 But if we can figure out this problem, then my proposal would be to
 add a new method which has a nicer name than createObjectURL as to
 encourage authors to use that and have fewer leaks.

Heh; I wasn't even going to mention the name.

 2. Regardless of the default, there's the hard question of what to do with 
 Blob URL revocation.  Glenn / zewt points out that this applies, though 
 perhaps less dramatically, to *manually* revoked Blob URLs, and provides 
 some test cases [3].

 Options are:

 2a. To meticulously special-case Blob URLs, per Bug 17765 [4].  This calls 
 for a synchronous step attached to wherever URLs are used to peg Blob URL 
 data at fetch, so that the chance of a concurrent revocation doesn't cause 
 things to behave unpredictably.  Firefox does a variation of this with 
 keeping channels open, but solving this bug interoperably is going to be 
 very hard, and has to be done in different places across the platform.  And 
 even within CSS.  This is hard to move forward with.

 Hard.

 It actually has turned out to be surprisingly easy in Gecko. But I
 realize the same might not be true everywhere.

Right, and defining just when it happens, across browsers, may also be hard.

 2b.To adopt an 80-20 rule, and only specify what happens for some cases 
 that seem common, but expressly disallow other cases.  This might be a more 
 muted version of Bug 17765, especially if it can't be done within fetch [5].

 Ugly.

 This could mean that the blob clause for basic fetch[5] only defines 
 some cases where a synchronous fetch can be run (TBD) but expressly 
 disallows others where synchronous fetching is not feasible.  This would 
 limit the use of Blob URLs pretty drastically, but might be the only 
 solution.  For instance, asynchronous calls accompanying embed, defer 
 etc. might have to be expressly disallowed.  It would be great if we do 
 this in fetch [5] :-)

 Just to be clear, this would limit the use of *autoRevoke* Blob URLs,
 not all Blob URLs, yes?

 No, it would limit the use of all *revokable* Blob URLs. Since you get
 exactly the same issues when the page calls revokeObjectURL manually.
 So that means that it applies to all Blob URLs.

Ah, right; all revoked Blob URLs.

 2c. Re-use oneTimeOnly as in IE's behavior for autoRevoke (but call it 
 autoRevoke).  But we jettisoned this for race conditions e.g.

 // This is in IE only

 img2.src = URL.createObjectURL(fileBlob, {oneTimeOnly: true});

 // race now! then fail in IE only
 img1.src = img2.src;

 will fail in IE with oneTimeOnly.  It appears to fail reliably, but again, 
 dereference URL may not be interoperable here.  This is probably not what 
 we should do, but it was worth listing, since it carries the brute force of 
 a shipping implementation, and shows how some % of the market has actively 
 solved this problem :)

 I'm not really sure this is so bad.  I know it's the case I brought
 up, and I must admit that I disliked the oneTimeOnly when I first
 heard about it, but all other proposals [including not having
 automatic revocation at all] now seem worse.  Here you've set
 something to be oneTimeOnly and used it twice; if that fails in IE,
 that's correct.  If it works some of the time in other browsers [after
 they implement oneTimeOnly], that's not good, but you did pretty much
 aim at your own foot.  Developers that actively try to do the right
 thing will have consistent good results without extra code, at least.
 I realize that img1.src = img2.src failing is odd, but as [IIRC]
 Adrian pointed out, if it's an uncacheable image on a server that's
 gone away, couldn't that already happen, depending on your network
 stack implementation?

 I'm more worried that if implementations don't initiate the load
 synchronously, which is hard per your comment above, then it can
 easily be random which of the two loads succeeds and which fails. If
 the revoking happens at the end of the load, both loads could even
 succeed depending on timing and implementation details.

Yup; I'm just saying that if you get a failure here, you shouldn't be
surprised, no matter which img gets it.  You did something explicitly
wrong.  Ideally we'd give predictable behavior, but if we can't do

Re: Blob URLs | autoRevoke, defaults, and resolutions

2013-05-01 Thread Glenn Maynard
On Wed, May 1, 2013 at 7:01 PM, Eric U er...@google.com wrote:

 Hmm...now Glenn points out another problem: if you /never/ load the
 image, for whatever reason, you can still leak it.  How likely is that
 in good code, though?  And is it worse than the current state in good
 or bad code?


I think it's much too easy for well-meaning developers to mess this up.
The example I gave is code that *does* use the URL, but the browser may or
may not actually do anything with it.  (I wouldn't even call that author
error--it's an interoperability failure.)  Also, the failures are both
expensive and subtle (e.g. lots of big blobs being silently leaked to disk),
which is a pretty nasty failure mode.

Another problem is that APIs should be able to receive a URL, then use it
multiple times.  For example, srcset can change the image being displayed
when the environment changes.  oneTimeOnly would be weird in that case.
For example, it would work when you load your page on a tablet, then work
again when your browser outputs the display to a TV and changes the srcset
image.  (The image was never used, so the URL is still valid.)  But then
when you go back to the tablet screen and reconfigure back to the original
configuration, it suddenly breaks, since the first URL was already used and
discarded.  The blob capture approach can be made to work with srcset, so
this would work reliably.
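
A sketch of the kind of breakage described above (oneTimeOnly is the IE-style
option discussed earlier; the srcset candidates are placeholders):

var url = URL.createObjectURL(blob, {oneTimeOnly: true});
img.setAttribute('srcset', url + ' 1x, big-screen.png 2x');
// Tablet display: the 1x candidate is chosen and the blob URL is consumed.
// Switch to a 2x TV: big-screen.png is chosen instead.
// Switch back to the tablet: srcset is re-evaluated, but the blob URL was
// already used and discarded, so the image may now silently fail to load.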

-- 
Glenn Maynard


Re: ZIP archive API?

2013-05-01 Thread Paul Bakaus
Still waiting for it as well. I think it'd be very useful to transfer sets of 
assets etc.

From: Florian Bösch pya...@gmail.com
Date: Tue, 30 Apr 2013 15:58:22 +0200
To: Anne van Kesteren ann...@annevk.nl
Cc: Charles McCathie Nevile cha...@yandex-team.ru, public-webapps WG 
public-webapps@w3.org, Andrea Marchesini amarches...@mozilla.com, Paul Bakaus 
pbak...@zynga.com
Subject: Re: ZIP archive API?

I am very interested in working with archives. I'm currently using it as a 
delivery from server (like quake packs), import and export format for WebGL 
apps.


On Tue, Apr 30, 2013 at 3:18 PM, Anne van Kesteren 
ann...@annevk.nl wrote:
On Tue, Apr 30, 2013 at 1:07 PM, Charles McCathie Nevile
cha...@yandex-team.rumailto:cha...@yandex-team.ru wrote:
 Hi all, at the last TPAC there was discussion of a ZIP archive proposal.
 This has come and gone in various guises.

 Are there people currently interested in being able to work with ZIP in a
 web app? Are there implementors and is there an editor?

We have https://wiki.mozilla.org/WebAPI/ArchiveAPI which is
implemented as well (cc'd Andrea, who implemented it).

There's also https://bugzilla.mozilla.org/show_bug.cgi?id=681967 about
a somewhat-related proposal by Paul.


--
http://annevankesteren.nl/
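
(For context, a rough sketch of what reading a ZIP might look like with an
ArchiveReader along the lines of the Mozilla proposal linked above, assuming
DOMRequest-style results; method names are approximate and the entry name is
invented:)

    // zipBlob is assumed to be a Blob/File containing a ZIP archive.
    var reader = new ArchiveReader(zipBlob);

    var listing = reader.getFilenames();
    listing.onsuccess = function () {
      console.log('entries:', this.result);            // array of entry names
    };

    var entry = reader.getFile('levels/level1.json');  // invented entry name
    entry.onsuccess = function () {
      console.log(this.result.name, this.result.size); // this.result is a File
    };
    entry.onerror = function () {
      console.log('no such entry in the archive');
    };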




Re: Collecting real world use cases (Was: Fixing appcache: a proposal to get us started)

2013-05-01 Thread Paul Bakaus
Hi Jonas, hi Alec,

I think all of this is great stuff that helps a ton – thanks! Let's definitely 
put them onto a wiki somewhere.

Charles: Where would be the right place to put that list? I'm imagining a big 
table with one column for the use cases and then each of the AppCache 2.0 
proposals validated against them in the other columns.

Thanks,
Paul

From: Alec Flett alecfl...@chromium.org
Date: Wed, 1 May 2013 08:50:33 -0700
To: Jonas Sicking jo...@sicking.cc
Cc: Paul Bakaus pbak...@zynga.com, Webapps WG public-webapps@w3.org
Subject: Re: Collecting real world use cases (Was: Fixing appcache: a proposal 
to get us started)

I think there are some good use cases for not-quite-offline as well. Sort of a 
combination of your twitter and wikipedia use cases:

Community-content site: Logged-out users have content cached aggressively 
offline - meaning every page visited should be cached until told otherwise. 
Intermediate caches / proxies should be able to cache the latest version of a 
URL. As soon as a user logs in, the same URLs they just used should now have 
editing controls (note that the actual page contents *may* not have changed, 
just the UI). Pages now need to be fresh, meaning that users should never edit 
stale content. In an ideal world, once a logged-in user has edited a page, that 
page is pushed to users or proxies who have previously cached that page and 
will likely visit it again soon.

I know this example in particular seems like it could be accomplished with a 
series of If-Modified-Since / 304s, but connection latency is the killer here, 
especially for mobile - the fact that you have a white screen while you wait to 
see if the page has changed. The idea that you could visit a cached page (i.e. 
avoid hitting the network) and then a few seconds later be told there is a 
newer version of the page available (or even just silently update the page so 
the next visit delivers a fresh but network-free page) would be pretty huge. 
Especially if you could then proactively fetch a select set of pages - i.e. 
imagine an in-browser process that says: for each link on this page, if I have 
a stale copy of the URL, go fetch it in the background so it is ready in the 
cache.
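
(A rough sketch of that last idea, using plain XHR and a hypothetical 
pageCache object standing in for wherever the app keeps its cached pages; 
nothing in the platform exposes this directly today:)

    // For every link on the page, refresh any copy we already hold so the
    // next visit is fresh without touching the network at click time.
    Array.prototype.forEach.call(document.links, function (link) {
      if (!pageCache.has(link.href)) return;  // only refresh copies we already have
      var xhr = new XMLHttpRequest();
      xhr.open('GET', link.href);
      xhr.onload = function () {
        pageCache.put(link.href, xhr.responseText);
      };
      xhr.send();
    });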

(On this note, it would probably be worth reaching out to the Wikimedia 
Foundation to learn about the hassle they've gone through over the years trying 
to distribute the load of Wikipedia traffic given the constraints of HTTP 
caching, broken proxies, CDNs, ISPs, etc.)

Alec

On Tue, Apr 30, 2013 at 9:06 PM, Jonas Sicking jo...@sicking.cc wrote:
On Apr 18, 2013 6:19 PM, Paul Bakaus pbak...@zynga.com wrote:

 Hi Jonas,

 Thanks for this – I feel this is heading somewhere, finally! I still need
 to work on submitting my full feedback, but I'd like to mention this: Why
 did nobody so far in this thread include real world use cases?

 For a highly complex topic like this in particular, I would think that
 collecting a large number of user use cases, not only requirements, and
 furthermore finding the lowest common denominator based on them, would
 prove very helpful, even if it's just about validation and making people
 understand your lengthy proposal. E.g. a news reader that needs to sync
 content, but has an offline UI.

 Do you have a list collected somewhere?

Sorry for not including the list in the initial email. It was long
enough as it was so I decided to stop.

Some of the use cases we discussed were:

Small simple game
The game consists of a set of static resources. A few HTML pages, like
high score page, start page, in-game page, etc. A larger number of
media resources. A few data resources which contain level metadata.
Small amount of dynamic data being generated, such as progress on each
level, high score, user info.
In-game performance is critical, all resources must be guaranteed to
be available locally once the game starts.
Little need for network connectivity other than to update game
resources whenever an update is available.

Advanced game
Same as simple game, but also downloads additional levels dynamically.
Also wants to store game progress on servers so that it can be synced
across devices.

Wikipedia
Top level page and its resources are made available offline.
Application logic can enable additional pages to be made available
offline. When such a page is made available offline, both the page and
any media resources that it uses need to be cached.
Doesn't need to be updated very aggressively, maybe only upon user request.

Twitter
A set of HTML templates that are used to create a UI for a database of
tweets. The same data is visualized in several different ways, for
example in the user's default tweet stream, in the page for an
individual tweet, and in the conversation thread view.
Downloading the actual tweet contents and metadata shouldn't need to
happen 
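
(For comparison, the Small simple game case above is roughly the scenario 
today's AppCache manifest was designed for; a sketch, with invented file names:)

    CACHE MANIFEST
    # v3 -- bump this comment to make clients refetch the game resources
    index.html
    highscore.html
    game.js
    sprites.png
    levels/metadata.json

    NETWORK:
    # dynamic data (level progress, high scores, user info) stays online
    /api/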

Re: [Shadow DOM] Simplifying level 1 of Shadow DOM

2013-05-01 Thread Ryosuke Niwa
On May 1, 2013, at 12:46 PM, Scott Miles sjmi...@google.com wrote:

  Note that the interesting restriction isn't that it shouldn't regress 
  performance for the web-at-large.
 
 No argument, but afaict, the implication of R. Niwa's statement was in 
 fact that there was a penalty for these features merely existing.

Node redistribution restricts the kinds of performance optimizations we can 
implement and negatively affects our code maintainability.

  The restriction is that it shouldn't be slow when there is heavy usage of 
  Shadow DOM on the page.
 
 Again, no argument. But as a developer happily coding away against Canary's 
 Shadow DOM implementation, it's hard for me to accept the prima facie 
 case that it must be simplified to achieve this goal.

I'm sure Web developers are happy to have more features, but I don't want to 
introduce a feature that imposes such a high maintenance cost without knowing 
for sure that it's absolutely necessary.

On May 1, 2013, at 12:46 PM, Daniel Freedman dfre...@google.com wrote:
 I'm surprised to hear you say this. The complexity of the DOM and CSS styling 
 that modern web applications demand is mind-numbing.
 Having to create possibly hundreds of unique CSS selectors applied to 
 possibly thousands of DOM nodes, hoping that no properties conflict, and that 
 no bizarre corner cases arise as nodes move in and out of the document.

I'm not sure why you're talking about CSS selectors here, because that problem 
has been solved by the scoped style element regardless of whether we have a 
shadow DOM or not.

 Just looking at Twitter, a Tweet UI element is very complicated.
 It seems like they embed parts of the UI into data attributes (like 
 data-expanded-footer).
 That to me looks like a prime candidate for placement in a ShadowRoot.
 The nested structure of it also suggests that they would benefit from node 
 distribution through composition.

Whether Twitter uses data attributes or text nodes and custom elements is 
completely orthogonal to node redistribution. They can write one line of 
JavaScript to extract the data from either embedding mechanism.
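
(For example, with made-up markup, either of these would be that one line; the 
tweet variable and the .expanded-footer class are hypothetical:)

    var fromAttribute = tweet.dataset.expandedFooter;                        // data-expanded-footer="..."
    var fromElement   = tweet.querySelector('.expanded-footer').textContent; // nested element/text node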

 That's why ShadowDOM is so important. It has the ability to scope complexity 
 into things that normal web developers can understand, compose, and reuse.

I'm not objecting to the usefulness of shadow DOM.  I'm objecting to the 
usefulness of node redistributions.

 Things like input and textarea are trivial compared to a youtube video 
 player, or a threaded email list with reply buttons and formatting toolbars.
 These are the real candidates for ShadowDOM: the UI controls that are 
 complicated.


FYI, input and textarea elements aren't trivial.


On May 1, 2013, at 12:41 PM, Jonas Sicking jo...@sicking.cc wrote:

 On Wed, May 1, 2013 at 12:15 PM, Dimitri Glazkov dglaz...@chromium.org 
 wrote:
 On Wed, May 1, 2013 at 11:49 AM, Ryosuke Niwa rn...@apple.com wrote:
 On Apr 30, 2013, at 12:07 PM, Daniel Freedman dfre...@google.com wrote:
 
 I'm concerned that if the spec shipped as you described, that it would not 
 be useful enough to developers to bother using it at all.
 
 I'm concerned that we can never ship this feature due to the performance 
 penalties it imposes.
 
 Can you tell me more about this concern? I am pretty sure the current
 implementation in WebKit/Blink does not regress performance for the
 Web-at-large.
 
 Note that the interesting restriction isn't that it shouldn't regress
 performance for the web-at-large. The restriction is that it
 shouldn't be slow when there is heavy usage of Shadow DOM on the
 page.

Exactly.

 Otherwise we recreate one of the problems of Mutation Events. Gecko
 was able to make them not regress performance as long as they weren't
 used. But that meant that we had to go around telling everyone to not
 use them. And creating features and then telling people not to use
 them is a pretty boring exercise.


Agreed.


On May 1, 2013, at 12:37 PM, Jonas Sicking jo...@sicking.cc wrote:

 However restrict the set of selectors such that only an element's
 intrinsic state affects which insertion point it is inserted in.

Wouldn't that be confusing? How can an author tell which selector is allowed 
and which one isn't?

 That way when an element is inserted or modified, you don't have to
 worry about having to check any descendants or any siblings to see if
 the selectors that they match suddenly changed.

Yeah, that's a much saner requirement.  However, even seemingly intrinsic 
information of an element may not really be intrinsic, depending on the browser 
engine. For example, a selector that depends on the style attribute is expensive 
to compute in WebKit because we have to serialize the style attribute in order 
to match it.

 Only when the attributes on an element change do you have to re-test
 which insertion point it should be inserted into. And then you only
 have to recheck the node itself, no siblings or other nodes. And you
 only have to do so if the parent has a binding.
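
(A sketch of the distinction being debated, using the v0-era content element 
and the prefixed shadow root API available in Canary at the time; the host id 
and the class/attribute names are invented:)

    var host = document.querySelector('#tweet');
    var root = host.webkitCreateShadowRoot();  // prefixed in Canary circa 2013
    root.innerHTML =
      // "Intrinsic" selectors only inspect the distributed node itself:
      '<content select=".avatar"></content>' +
      '<content select="[data-promoted]"></content>' +
      '<content></content>';
    // By contrast, a selector such as "span:first-child" depends on siblings,
    // so sibling insertions and removals would force distribution to be redone
    // -- exactly the kind of selector the proposed restriction would rule out.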


Re: [Shadow DOM] Simplifying level 1 of Shadow DOM

2013-05-01 Thread Scott Miles
 I'm sure Web developers are happy to have more features, but I don't want
to introduce a feature that imposes such a high maintenance cost without
knowing for sure that it's absolutely necessary.

You are not taking 'yes' for an answer. :) I don't really disagree with you
here.

With respect to my statement to which you refer, I'm only saying that we
haven't had a discussion about the costs or the features. The discussion
jumped straight to mitigation.


