[Bug 24349] [imports]: Import documents should always be in no-quirks mode

2014-02-14 Thread bugzilla
https://www.w3.org/Bugs/Public/show_bug.cgi?id=24349

Anne ann...@annevk.nl changed:

   What|Removed |Added

 Status|RESOLVED|REOPENED
 CC||i...@hixie.ch
 Resolution|FIXED   |---

--- Comment #5 from Anne ann...@annevk.nl ---
Sorry, I should have done some more research.

What I think we want is for the HTML parser to accept an override for quirks
mode just as it has for encoding. HTML can then use that override for iframe
srcdoc (rather than special casing that in the parser) and then HTML imports
and other specifications that need to parse HTML can use it too.

-- 
You are receiving this mail because:
You are on the CC list for the bug.



Re: [webcomponents] Encapsulation and defaulting to open vs closed (was in www-style)

2014-02-14 Thread Erik Arvidsson
On Thu, Feb 13, 2014 at 9:00 PM, Maciej Stachowiak m...@apple.com wrote:


 On Feb 13, 2014, at 4:01 PM, Alex Russell slightly...@google.com wrote:

 A closure is an iron-clad isolation mechanism for object ownership with
 regards to the closing-over function object. There's absolutely no
 iteration of the closed-over state of a function object; any such
 enumeration would be a security hole (as with the old Mozilla
 object-as-param-to-eval bug). You can't get the value of foo in this
 example except with the consent of the returned function:


 var maybeVendFoo = function() {
   var foo = 1;
   return function(willMaybeCall) {
     if (/* some test */) { willMaybeCall(foo); }
   };
 };
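A minimal sketch of this claim (the guard condition elided, names otherwise as in the example above): enumerating the returned function object exposes nothing about `foo`; only the callback path can observe it.

```javascript
// The closed-over `foo` is unreachable by any form of enumeration over
// the returned function object.
var maybeVendFoo = function () {
  var foo = 1;
  return function (willMaybeCall) {
    willMaybeCall(foo); // consent: the only path to `foo`
  };
};

var vend = maybeVendFoo();

// Enumerating own properties of the function reveals nothing about `foo`.
var leaked = Object.getOwnPropertyNames(vend)
  .concat(Object.keys(vend))
  .filter(function (name) { return name === 'foo'; });

// The only way to observe the value is via the callback the closure invokes.
var observed;
vend(function (value) { observed = value; });
```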

 Leakage via other methods can be locked down by the first code to run in
 an environment (caja does this, and nothing prevents it from doing this for
 SD as it can pre-process/filter scripts that might try to access internals).


 Caja is effective for protecting a page from code it embeds, since the
 page can have a guarantee that its code is the first to run. But it cannot
 be used to protect embedded code from a page, so for example a JS library
 cannot guarantee that objects it holds only in closure variables will not
 leak to the surrounding page...


That is incorrect. It is definitely possible to write code that does not
leak to the environment. It is painful to do because, as Ryosuke wrote, you
cannot use any of the built-in functions or objects. You can only use
primitives and literals. But with a compile-to-JS language this can be made
less painful, and in the days of LLVM-to-JS compilers this seems like a
trivial problem.
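Erik's argument can be made concrete with a toy sketch (all names hypothetical): a closure that touches only primitives and literals keeps working, and leaks nothing observable, even when surrounding code has replaced a built-in.

```javascript
// A closure that uses only primitives cannot be observed through
// poisoned built-ins, because it never calls into them.
function makeCounter() {
  var count = 0;                        // primitive state only
  return function () { return count += 1; };
}

var tick = makeCounter();

// Hostile page code: replace a built-in to try to observe internals.
var intercepted = [];
var realPush = Array.prototype.push;
Array.prototype.push = function (v) {
  intercepted[intercepted.length] = v;  // record whatever flows through
  return realPush.call(this, v);
};

// The counter never touches the (now hostile) built-ins, so nothing
// about `count` is intercepted.
tick();
tick();
var current = tick();

Array.prototype.push = realPush;        // undo the poisoning
```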

-- 
erik


Re: Why can't we just use constructor instead of createdCallback?

2014-02-14 Thread Dimitri Glazkov
On Thu, Feb 13, 2014 at 6:50 PM, Jonas Sicking jo...@sicking.cc wrote:

 Dimitri, I'd still love to hear feedback from you on the idea above.
 Seems like it could fix one of the design issues that a lot of people
 have reacted to.


I am not sure I fully understand how this will work. Let me try to repeat
it back and see if I got this right.

Basically, we are modifying the tree construction algorithm to be a 3-pass
system:

1) Build a meta tree (each node in the tree is a meta object that
represents an element that will be constructed)
2) Instantiate all elements by calling constructors on them
3) Build the tree of elements from the meta tree.

Right?

:DG


Re: Why can't we just use constructor instead of createdCallback?

2014-02-14 Thread Jonas Sicking
On Fri, Feb 14, 2014 at 9:25 AM, Dimitri Glazkov dglaz...@google.com wrote:
 On Thu, Feb 13, 2014 at 6:50 PM, Jonas Sicking jo...@sicking.cc wrote:

 Dimitri, I'd still love to hear feedback from you on the idea above.
 Seems like it could fix one of the design issues that a lot of people
 have reacted to.


 I am not sure I fully understand how this will work. Let me try to repeat it
 back and see if I got this right.

 Basically, we are modifying the tree construction algorithm to be a 3-pass
 system:

 1) Build a meta tree (each node in the tree is a meta object that represents
 an element that will be constructed)
 2) Instantiate all elements by calling constructors on them
 3) Build the tree of elements from the meta tree.

 Right?

I'd rather put it as:

1) Construct the objects, but rather than inserting them in their
parents, remember which parent they should be inserted in.
2) Call constructors on all elements
3) Insert elements in their parent

So no need to construct any meta objects.

You can further optimize by only doing this for custom elements with a
constructor.

/ Jonas
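The three passes Jonas describes can be sketched with plain objects standing in for DOM nodes (all names hypothetical; this illustrates the idea, not the parser algorithm itself):

```javascript
// Pass 1 records node+parent pairs instead of inserting; pass 2 runs
// constructors while every node is still parentless and childless;
// pass 3 performs the remembered insertions.
function buildTree(descriptors) {
  var pending = [];   // array of { node, parentIndex } tuples
  var log = [];

  // Pass 1: create objects, remember intended parents.
  var nodes = descriptors.map(function (d) {
    var node = { name: d.name, children: [], parent: null, ctor: d.ctor };
    pending.push({ node: node, parentIndex: d.parentIndex });
    return node;
  });

  // Pass 2: call constructors; no node has a parent or children yet.
  nodes.forEach(function (node) {
    if (node.ctor) node.ctor.call(node, log);
  });

  // Pass 3: insert every node into its remembered parent.
  pending.forEach(function (entry) {
    if (entry.parentIndex !== null) {
      var parent = nodes[entry.parentIndex];
      entry.node.parent = parent;
      parent.children.push(entry.node);
    }
  });

  return { nodes: nodes, log: log };
}

var result = buildTree([
  { name: 'root', parentIndex: null, ctor: null },
  { name: 'x-widget', parentIndex: 0,
    ctor: function (log) {
      // During construction the element is detached, as Jonas describes.
      log.push(this.name + ' parent=' + this.parent);
    } }
]);
```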



Re: Why can't we just use constructor instead of createdCallback?

2014-02-14 Thread Dimitri Glazkov
On Fri, Feb 14, 2014 at 10:36 AM, Jonas Sicking jo...@sicking.cc wrote:

 On Fri, Feb 14, 2014 at 9:25 AM, Dimitri Glazkov dglaz...@google.com
 wrote:
  On Thu, Feb 13, 2014 at 6:50 PM, Jonas Sicking jo...@sicking.cc wrote:
 
  Dimitri, I'd still love to hear feedback from you on the idea above.
  Seems like it could fix one of the design issues that a lot of people
  have reacted to.
 
 
  I am not sure I fully understand how this will work. Let me try to
 repeat it
  back and see if I got this right.
 
  Basically, we are modifying the tree construction algorithm to be a
 3-pass
  system:
 
  1) Build a meta tree (each node in the tree is a meta object that
 represents
  an element that will be constructed)
  2) Instantiate all elements by calling constructors on them
  3) Build the tree of elements from the meta tree.
 
  Right?

 I'd rather put it as:

 1) Construct the objects, but rather than inserting them in their
 parents, remember which parent they should be inserted in.


Sure, this is the meta tree construction. At the limit, if every element is
a custom element, then you're effectively building a tree of things that
remember where their respective elements need to be.


 2) Call constructors on all elements


Yup.


 3) Insert elements in their parent


Yup.



 So no need to construct any meta objects.


Okay, we don't have to call them meta objects, but we need some storage to
remember where the element should go :)



 You can further optimize by only doing this for custom elements with a
 constructor.


Interesting. What if the element's constructor decides to walk the DOM tree
or mutate it? What does it see? Are there holes for elements that haven't
yet been inserted, or are the elements just appended regardless of their
initial position in the tree?

:DG


Re: Why can't we just use constructor instead of createdCallback?

2014-02-14 Thread Erik Arvidsson
Another alternative is to disallow DOM traversal and DOM mutation inside
these constructors. By disallow I mean throw an error! Here is a rough
outline of what the algorithm might look like.

Let there be a global counter CustomElementConstructionCounter which is
initially set to 0.

1. Parse and build the DOM tree as usual. Keep track of all custom elements
we encounter.
2. At some later point, before any script is run, for each pending custom
element (in tree order):
  1. Create the instance object for the custom element.
  2. Increment CustomElementConstructionCounter.
  3. Call the constructor for the custom element, passing the object
instance as `this`.
  4. Decrement CustomElementConstructionCounter.

Then we need to guard all DOM traversal and DOM mutation methods and throw
if the counter is non-zero.

The point is that the timing of the constructor invocation is mostly not
observable. If an implementation wants to invoke it as it builds the DOM
that also works since there is no way to traverse the tree at that time.
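A rough sketch of the counter-guard idea (hypothetical names, plain objects standing in for DOM nodes): guarded helpers throw while any custom element constructor is on the stack.

```javascript
// Global counter tracking how many custom element constructors are
// currently executing.
var customElementConstructionCounter = 0;

function guard(name) {
  if (customElementConstructionCounter !== 0) {
    throw new Error(name + ' is not allowed during custom element construction');
  }
}

// Stand-in for a guarded DOM mutation method.
function appendChild(parent, child) {
  guard('appendChild');
  parent.children.push(child);
}

function runConstructor(element, ctor) {
  customElementConstructionCounter += 1;
  try {
    ctor.call(element);
  } finally {
    customElementConstructionCounter -= 1;
  }
}

// Mutation from inside a constructor throws...
var threw = false;
runConstructor({ children: [] }, function () {
  try {
    appendChild({ children: [] }, this);
  } catch (e) {
    threw = true;
  }
});

// ...but the same call is fine once construction has finished.
var parent = { children: [] };
appendChild(parent, { children: [] });
```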



On Fri, Feb 14, 2014 at 1:50 PM, Dimitri Glazkov dglaz...@google.com wrote:




 On Fri, Feb 14, 2014 at 10:36 AM, Jonas Sicking jo...@sicking.cc wrote:

 On Fri, Feb 14, 2014 at 9:25 AM, Dimitri Glazkov dglaz...@google.com
 wrote:
  On Thu, Feb 13, 2014 at 6:50 PM, Jonas Sicking jo...@sicking.cc
 wrote:
 
  Dimitri, I'd still love to hear feedback from you on the idea above.
  Seems like it could fix one of the design issues that a lot of people
  have reacted to.
 
 
  I am not sure I fully understand how this will work. Let me try to
 repeat it
  back and see if I got this right.
 
  Basically, we are modifying the tree construction algorithm to be a
 3-pass
  system:
 
  1) Build a meta tree (each node in the tree is a meta object that
 represents
  an element that will be constructed)
  2) Instantiate all elements by calling constructors on them
  3) Build the tree of elements from the meta tree.
 
  Right?

 I'd rather put it as:

 1) Construct the objects, but rather than inserting them in their
 parents, remember which parent they should be inserted in.


 Sure, this is the meta tree construction. At the limit, if every element
 is a custom element, then you're effectively building a tree of things that
 remember where their respective elements need to be.


 2) Call constructors on all elements


 Yup.


 3) Insert elements in their parent


 Yup.



 So no need to construct any meta objects.


 Okay, we don't have to call them meta objects, but we need some storage to
 remember where the element should go :)



 You can further optimize by only doing this for custom elements with a
 constructor.


 Interesting. What if the element's constructor decides to walk the DOM
 tree or mutate it? What does it see? Are there holes for elements that
 haven't yet been inserted, or are the elements just appended regardless of
 their initial position in the tree?

 :DG




-- 
erik


Re: Why can't we just use constructor instead of createdCallback?

2014-02-14 Thread Boris Zbarsky

On 2/14/14 2:03 PM, Erik Arvidsson wrote:

Then we need to guard all DOM traversal and DOM mutation methods and
throw if the counter is non zero.


This is a fairly nontrivial whack-a-mole exercise, sadly (starting with 
defining traversal).


-Boris



Re: [webcomponents] Encapsulation and defaulting to open vs closed (was in www-style)

2014-02-14 Thread Ryosuke Niwa
On Feb 14, 2014, at 9:00 AM, Erik Arvidsson a...@chromium.org wrote:
 On Thu, Feb 13, 2014 at 9:00 PM, Maciej Stachowiak m...@apple.com wrote:
 On Feb 13, 2014, at 4:01 PM, Alex Russell slightly...@google.com wrote:
 A closure is an iron-clad isolation mechanism for object ownership with 
 regards to the closing-over function object. There's absolutely no iteration 
 of the closed-over state of a function object; any such enumeration would be 
 a security hole (as with the old Mozilla object-as-param-to-eval bug). You 
 can't get the value of foo in this example except with the consent of the 
 returned function:
 
 
 var maybeVendFoo = function() {
   var foo = 1;
   return function(willMaybeCall) {
     if (/* some test */) { willMaybeCall(foo); }
   };
 };
 
 Leakage via other methods can be locked down by the first code to run in an 
 environment (caja does this, and nothing prevents it from doing this for SD 
 as it can pre-process/filter scripts that might try to access internals).
 
 Caja is effective for protecting a page from code it embeds, since the page 
 can have a guarantee that its code is the first to run. But it cannot be used 
 to protect embedded code from a page, so for example a JS library cannot 
 guarantee that objects it holds only in closure variables will not leak to 
 the surrounding page...
 
 That is incorrect. It is definitely possible to write code that does not leak 
 to the environment. It is painful to do because like Ryosuke wrote you cannot 
 use any of the built in functions or objects. You can only use primitives and 
 literals. But with a compile to JS language this can be made less painful and 
 in the days of LLVM to JS compilers this seems like a trivial problem.

While it’s technically the case that one could write a Turing-complete closure 
that doesn’t leak any information, I think we all agree it’s so painful that 
nobody can do this successfully by hand without relying on heavyweight tools 
such as Caja or an LLVM-to-JS compiler.

Instead of accepting this as the status quo, we should strive to improve the 
Web platform to provide better encapsulation.

- R. Niwa



Re: [webcomponents] Encapsulation and defaulting to open vs closed (was in www-style)

2014-02-14 Thread Jonas Sicking
On Fri, Feb 14, 2014 at 2:02 PM, Ryosuke Niwa rn...@apple.com wrote:
 On Feb 14, 2014, at 9:00 AM, Erik Arvidsson a...@chromium.org wrote:

 On Thu, Feb 13, 2014 at 9:00 PM, Maciej Stachowiak m...@apple.com wrote:

 On Feb 13, 2014, at 4:01 PM, Alex Russell slightly...@google.com wrote:

 A closure is an iron-clad isolation mechanism for object ownership with
 regards to the closing-over function object. There's absolutely no iteration
 of the closed-over state of a function object; any such enumeration would be
 a security hole (as with the old Mozilla object-as-param-to-eval bug). You
 can't get the value of foo in this example except with the consent of the
 returned function:


 var maybeVendFoo = function() {
   var foo = 1;
   return function(willMaybeCall) {
     if (/* some test */) { willMaybeCall(foo); }
   };
 };

 Leakage via other methods can be locked down by the first code to run in
 an environment (caja does this, and nothing prevents it from doing this for
 SD as it can pre-process/filter scripts that might try to access internals).


 Caja is effective for protecting a page from code it embeds, since the
 page can have a guarantee that its code is the first to run. But it cannot
 be used to protect embedded code from a page, so for example a JS library
 cannot guarantee that objects it holds only in closure variables will not
 leak to the surrounding page...


 That is incorrect. It is definitely possible to write code that does not
 leak to the environment. It is painful to do because like Ryosuke wrote you
 cannot use any of the built in functions or objects. You can only use
 primitives and literals. But with a compile to JS language this can be made
 less painful and in the days of LLVM to JS compilers this seems like a
 trivial problem.


 While it's technically the case that one could write a Turing-complete
 closure that doesn't leak any information, I think we all agree it's so
 painful that nobody can do this successfully by hand without relying on
 heavyweight tools such as Caja or an LLVM-to-JS compiler.

 Instead of accepting this as the status quo, we should strive to improve
 the Web platform to provide better encapsulation.

Also, I think that the Type 2 encapsulation has the same
characteristics. If the component author does things perfectly and
doesn't depend on any outside code, a Type 2 encapsulation might very
well be equivalent to Type 4.

In practice, I'm not sure that this is an interesting debate though.
In practice everyone does depend on outside code. Even people using
closures. Trying to use closures to enforce security is too brittle.

/ Jonas



Re: Why can't we just use constructor instead of createdCallback?

2014-02-14 Thread Jonas Sicking
On Fri, Feb 14, 2014 at 10:50 AM, Dimitri Glazkov dglaz...@google.com wrote:



 On Fri, Feb 14, 2014 at 10:36 AM, Jonas Sicking jo...@sicking.cc wrote:

 On Fri, Feb 14, 2014 at 9:25 AM, Dimitri Glazkov dglaz...@google.com
 wrote:
  On Thu, Feb 13, 2014 at 6:50 PM, Jonas Sicking jo...@sicking.cc wrote:
 
  Dimitri, I'd still love to hear feedback from you on the idea above.
  Seems like it could fix one of the design issues that a lot of people
  have reacted to.
 
 
  I am not sure I fully understand how this will work. Let me try to
  repeat it
  back and see if I got this right.
 
  Basically, we are modifying the tree construction algorithm to be a
  3-pass
  system:
 
  1) Build a meta tree (each node in the tree is a meta object that
  represents
  an element that will be constructed)
  2) Instantiate all elements by calling constructors on them
  3) Build the tree of elements from the meta tree.
 
  Right?

 I'd rather put it as:

 1) Construct the objects, but rather than inserting them in their
 parents, remember which parent they should be inserted in.

 Sure, this is the meta tree construction. At the limit, if every element is
 a custom element, then you're effectively building a tree of things that
 remember where their respective elements need to be.

I don't think that you need a tree of things. What you need is an
array of objects that need to be inserted, and an array of parents
that they need to be inserted into. That's it.

 So no need to construct any meta objects.

 Okay, we don't have to call them meta objects, but we need some storage to
 remember where the element should go :)

Sure. You'll need two arrays. Or really, you'll need one array of
node+parent tuples.

 You can further optimize by only doing this for custom elements with a
 constructor.

 Interesting. What if the element's constructor decides to walk the DOM tree
 or mutate it? What does it see? Are there holes for elements that haven't
 yet been inserted, or are the elements just appended regardless of their
 initial position in the tree?

What I mean is that for nodes that don't have a constructor, and
whose parent doesn't have a constructor, there's no need to add them to
the above arrays. Just insert them into their parent. That means that when
the constructor of an element runs, the element doesn't have any
parents or children.

So no need to hide parents or children anywhere.

/ Jonas



Re: [webcomponents] Encapsulation and defaulting to open vs closed (was in www-style)

2014-02-14 Thread Alex Russell
On Fri, Feb 14, 2014 at 3:56 PM, Ryosuke Niwa rn...@apple.com wrote:

 On Feb 14, 2014, at 2:50 PM, Elliott Sprehn espr...@chromium.org wrote:

 On Fri, Feb 14, 2014 at 2:39 PM, Boris Zbarsky bzbar...@mit.edu wrote:

 On 2/14/14 5:31 PM, Jonas Sicking wrote:

 Also, I think that the Type 2 encapsulation has the same
 characteristics. If the component author does things perfectly and
 doesn't depend on any outside code


 And never invokes any DOM methods on the nodes in the component's
 anonymous content.  Which is a pretty strong restriction; I'm having a bit
 of trouble thinking of a useful component with this property.


 I think my biggest issue with Type-2 is that unlike the languages cited
 for providing private that it's trying to mimic, it provides no backdoor for
 tools and frameworks to get at private state, and at the same time it
 doesn't add any security benefits.


 Except that JavaScript doesn’t have “private”.


Right, it only has the stronger form (closures) and the weaker form (_
prefixing properties and marking them non-enumerable using defineProperty).
SD as currently defined is the second.
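The weaker form Alex mentions can be sketched as follows (hypothetical names): an underscore-prefixed property made non-enumerable via defineProperty is hidden from enumeration but remains reachable by anyone who knows its name.

```javascript
// "Weak" privacy: hidden from for-in / Object.keys, but not from
// direct property access.
function Widget() {
  Object.defineProperty(this, '_secret', {
    value: 42,
    enumerable: false,   // invisible to enumeration
    writable: true
  });
  this.visible = 'hello';
}

var w = new Widget();
var enumerated = Object.keys(w);   // only the enumerable keys
var stillReachable = w._secret;    // the "backdoor": direct access
```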

  Ruby, Python, Java, C# and almost all other modern languages that
 provide a private facility for interfaces (as advocated by the Type-2
 design) provide a backdoor through reflection to get at the variables and
 methods anyway. This allowed innovation like AOP, dependency injection,
 convention based frameworks and more.

 So if we provide Type-2 I'd argue we _must_ provide some kind of escape
 hatch to still get into the ShadowRoot from script. I'm fine providing some
 kind of don't let CSS styles enter me feature, but hiding the shadowRoot
 property from the Element makes no sense.


  I don’t see how the above two sentences lead to the conclusion that we must
  provide an escape hatch to get shadow root from script given that such an
  escape hatch already exists if the component authors end up using builtin
  DOM functions.


It's the difference between using legit methods and hacking around the
platform. If it's desirable to allow continued access in these situations,
why isn't .shadowRoot an acceptable speed bump? If it's not desirable,
isn't the ability to get around the restriction *at all* a bug to be fixed
(arguing, implicitly, that we should be investigating stronger primitives
that Maciej and I were discussing to enable Type 4)?

 We all agree it's not a security boundary and you can go through great
 lengths to get into the ShadowRoot if you really wanted, all we've done by
 not exposing it is make sure that users include some crazy
 jquery-make-shadows-visible.js library so they can build tools like Google
 Feedback or use a new framework or polyfill.


 I don’t think Google Feedback is a compelling use case since all
 components on Google properties could simply expose “shadow” property
 themselves.


So you've written off the massive coordination costs of adding a uniform
convention to all code across all of Google and, on that basis, have
suggested there isn't really a problem? ISTM that it would be a multi-month
(year?) project to go patch every project in google3 and then wait for them
to all deploy new code.

Perhaps you can imagine a simpler/faster way to do it that doesn't include
getting owners-LGTMs from nearly every part of google3 and submitting tests
in nearly every part of the tree??


  Since you have previously claimed that instantiating a template element
  may not be a common pattern for custom elements / web components, I have a
  hard time accepting the claim that you’re certain accessing shadow root is
  a common coding pattern.


Surely as the person asking for the more restricted form, the onus falls to
*you* to make the argument that the added restrictions show their value.

  So given that we should have ShadowRoot.getPrivateType2Root(element) to
 provide a sensible modern api like other languages, is providing the
 shadowRoot property on the Element any different?


 We’re disagreeing on the premise that we should have
 ShadowRoot.getPrivateType2Root.

 I think we need to steer this conversation back to CSS's ability to style
 the ShadowRoot. There's no reason we can't provide a no styles can enter
 me flag while still having the shadowRoot property and the node
 distribution APIs.


 That might be an interesting topic to discuss but www-style discussion
 appears to indicate that we need to settle encapsulation discussion in
 public-webaps regardless.

 - R. Niwa




Re: [manifest] V1 ready for wider review

2014-02-14 Thread Alex Russell
On Wed, Feb 12, 2014 at 5:21 PM, Jonas Sicking jo...@sicking.cc wrote:

 On Wed, Feb 12, 2014 at 12:06 PM, Marcos Caceres mar...@marcosc.com
 wrote:
  The editors of the [manifest] spec have now closed all substantive
 issues for  v1.
 
  The spec defines the following:
 
  * A link relationship for manifests (so they can be used with <link
 rel=manifest>).
 
  * A standard file name for a manifest resource
 (/.well-known/manifest.json). Works the same as /favicon.ico for when
 <link rel=manifest> is missing.
 
  * The ability to point to a start-url.
 
  * Basic screen orientation hinting for when launching a web app.
 
  * Launch the app in different display modes: fullscreen, minimal-ui,
 open in browser, etc.
 
  * A way for scripts to check if the application was launched from a
 bookmark (i.e., similar to Safari's navigator.standalone).
 
  * requestBookmark(), which is a way for a top-level document to request
 it be bookmarked by the user. To not piss off users, it requires explicit user
 action to actually work. Expect <button>install my app</button> everywhere
 on the Web now :)
 
  If you are wondering where some missing feature is, it's probably slated
 for [v2]. The reason v1 is so small is that it's all we could get agreement
 on amongst implementers (it's a small set, but it's a good set to kick
 things off and get us moving... and it's a small spec, so easy to quickly
 read over).
 
  We would appreciate your feedback on this set of features - please file
 [bugs] on GitHub. We know it doesn't fully realize *the dream* of
 installable web apps - but it gets us a few steps closer.
 
  If we don't get any significant objections, we will request to
 transition to LC in a week or so.

 I still think that leaving out name and icons from a manifest about
 bookmarks is a big mistake. I just made my case here

 http://lists.w3.org/Archives/Public/www-tag/2014Feb/0039.html

 Basically I think we need to make the manifest more self-sufficient. I
 think that we're getting Ruby's postulate the wrong way around by
 making the file that describes the bookmark not contain all the data
 about the bookmark. Instead the two most important pieces about the
 bookmark, name and icons, will live in a completely separate HTML
 file, often with no way to find your way from the manifest to that
 separate HTML file.


I agree. I further think that the marginal utility in bookmarking something
to the homescreen (sorry, yes, I'm focusing on mobile first) is low if it
doesn't have a Service Worker / Appcache associated. It's strictly
second-class-citizen territory to have web bookmarks that routinely don't
do anything meaningful when offline.


Re: [webcomponents] Encapsulation and defaulting to open vs closed (was in www-style)

2014-02-14 Thread Ryosuke Niwa
On Feb 14, 2014, at 5:17 PM, Alex Russell slightly...@google.com wrote:

 On Fri, Feb 14, 2014 at 3:56 PM, Ryosuke Niwa rn...@apple.com wrote:
 On Feb 14, 2014, at 2:50 PM, Elliott Sprehn espr...@chromium.org wrote:
 On Fri, Feb 14, 2014 at 2:39 PM, Boris Zbarsky bzbar...@mit.edu wrote:
 On 2/14/14 5:31 PM, Jonas Sicking wrote:
 Also, I think that the Type 2 encapsulation has the same
 characteristics. If the component author does things perfectly and
 doesn't depend on any outside code
 
 And never invokes any DOM methods on the nodes in the component's anonymous 
 content.  Which is a pretty strong restriction; I'm having a bit of trouble 
 thinking of a useful component with this property.
 
 
 I think my biggest issue with Type-2 is that unlike the languages cited for 
 providing private it's trying to mimic it provides no backdoor for tools 
 and frameworks to get at private state and at the same time it doesn't add 
 any security benefits.
 
 Except that JavaScript doesn’t have “private”.
 
 Right, it only has the stronger form (closures)

I don’t think we have the stronger form, in that using any built-in objects or 
their functions would result in leaking information from inside the closure.

 Ruby, Python, Java, C# and almost all other modern languages that provide a 
 private facility for interfaces (as advocated by the Type-2 design) provide 
 a backdoor through reflection to get at the variables and methods anyway. 
 This allowed innovation like AOP, dependency injection, convention based 
 frameworks and more.
 
 So if we provide Type-2 I'd argue we _must_ provide some kind of escape 
 hatch to still get into the ShadowRoot from script. I'm fine providing some 
 kind of don't let CSS styles enter me feature, but hiding the shadowRoot 
 property from the Element makes no sense.
 
 I don’t see how the above two sentences lead to the conclusion that we must 
 provide an escape hatch to get shadow root from script given that such an 
 escape hatch already exists if the component authors end up using builtin DOM 
 functions.
 
 It's the difference between using legit methods and hacking around the 
 platform. If it's desirable to allow continued access in these situations, 
 why isn't .shadowRoot an acceptable speed bump?

The point is that it’s NOT ALWAYS desirable to allow continued access.  We’re 
saying that components should have a choice.

 If it's not desirable, isn't the ability to get around the restriction at all 
 a bug to be fixed (arguing, implicitly, that we should be investigating 
 stronger primitives that Maciej and I were discussing to enable Type 4)?

Are you also arguing that we should “fix” closures so that you can safely call 
builtin objects and their methods without leaking information?  If not, I don’t 
see why we need to fix this problem only for web components.

 We all agree it's not a security boundary and you can go through great 
 lengths to get into the ShadowRoot if you really wanted, all we've done by 
 not exposing it is make sure that users include some crazy 
 jquery-make-shadows-visible.js library so they can build tools like Google 
 Feedback or use a new framework or polyfill.
 
 I don’t think Google Feedback is a compelling use case since all components 
 on Google properties could simply expose “shadow” property themselves.
 
 So you've written off the massive coordination costs of adding a uniform to 
 all code across all of Google and, on that basis, have suggested there isn't 
 really a problem? ISTM that it would be a multi-month (year?) project to go 
 patch every project in google3 and then wait for them to all deploy new code.

On the other hand, Google representatives have previously argued that adding 
a template instantiation mechanism into the browser isn’t helping anyone, 
because framework authors would figure that out better than we can.

I have a hard time understanding why anyone would come to the conclusion that 
forcing every single web component that uses a template to have:

this.createShadowRoot().appendChild(document.importNode(template.content, true));

is any less desirable than having components that want to expose shadowRoot to 
write:

this.shadowRoot = this.createShadowRoot();

 Since you have previously claimed that instantiating a template element may 
 not be a common pattern for custom elements / web components, I have a hard 
 time accepting the claim that you’re certain accessing shadow root is a 
 common coding pattern.
 
 Surely as the person asking for the more restricted form, the onus falls to 
 you to make the argument that the added restrictions show their value. 

I don’t think it’s fair to say that we’re asking for the more restricted form 
since Apple has never agreed to support the more open form (Type I 
encapsulation) in the first place.

- R. Niwa



Re: [webcomponents] Encapsulation and defaulting to open vs closed (was in www-style)

2014-02-14 Thread Elliott Sprehn
On Fri, Feb 14, 2014 at 5:17 PM, Alex Russell slightly...@google.com wrote:

 On Fri, Feb 14, 2014 at 3:56 PM, Ryosuke Niwa rn...@apple.com wrote:

 [...]

  We all agree it's not a security boundary and you can go through great
 lengths to get into the ShadowRoot if you really wanted, all we've done by
 not exposing it is make sure that users include some crazy
 jquery-make-shadows-visible.js library so they can build tools like Google
 Feedback or use a new framework or polyfill.


 I don’t think Google Feedback is a compelling use case since all
 components on Google properties could simply expose “shadow” property
 themselves.


 So you've written off the massive coordination costs of adding a uniform
 to all code across all of Google and, on that basis, have suggested there
 isn't really a problem? ISTM that it would be a multi-month (year?) project
 to go patch every project in google3 and then wait for them to all deploy
 new code.

 Perhaps you can imagine a simpler/faster way to do it that doesn't include
 getting owners-LGTMs from nearly every part of google3 and submitting tests
 in nearly every part of the tree??



Please also note that Google Feedback's screenshot technology works fine on
many non-Google web pages and is used in situations that are not on Google
controlled properties. If we're going to ask the entire web to expose
.shadow by convention so things like Google Feedback or Readability can
work we might as well just expose it in the platform.

- E


Re: [webcomponents] Encapsulation and defaulting to open vs closed (was in www-style)

2014-02-14 Thread Daniel Freedman
On Fri, Feb 14, 2014 at 5:39 PM, Ryosuke Niwa rn...@apple.com wrote:

 On Feb 14, 2014, at 5:17 PM, Alex Russell slightly...@google.com wrote:

 On Fri, Feb 14, 2014 at 3:56 PM, Ryosuke Niwa rn...@apple.com wrote:

 On Feb 14, 2014, at 2:50 PM, Elliott Sprehn espr...@chromium.org wrote:

 On Fri, Feb 14, 2014 at 2:39 PM, Boris Zbarsky bzbar...@mit.edu wrote:

 On 2/14/14 5:31 PM, Jonas Sicking wrote:

 Also, I think that the Type 2 encapsulation has the same
 characteristics. If the component author does things perfectly and
 doesn't depend on any outside code


 And never invokes any DOM methods on the nodes in the component's
 anonymous content.  Which is a pretty strong restriction; I'm having a bit
 of trouble thinking of a useful component with this property.


 I think my biggest issue with Type-2 is that, unlike the languages cited
 for the private it's trying to mimic, it provides no backdoor for
 tools and frameworks to get at private state, and at the same time it
 doesn't add any security benefits.


 Except that JavaScript doesn’t have “private”.


 Right, it only has the stronger form (closures)


 I don’t think we have the stronger form in that using any builtin objects
 and their functions would result in leaking information inside the closure.

  Ruby, Python, Java, C# and almost all other modern languages that
 provide a private facility for interfaces (as advocated by the Type-2
 design) provide a backdoor through reflection to get at the variables and
 methods anyway. This allowed innovation like AOP, dependency injection,
 convention based frameworks and more.

 So if we provide Type-2 I'd argue we _must_ provide some kind of escape
 hatch to still get into the ShadowRoot from script. I'm fine providing some
 kind of "don't let CSS styles enter me" feature, but hiding the shadowRoot
 property from the Element makes no sense.


 I don’t see how the above two sentences lead to the conclusion that we
 must provide an escape hatch to get shadow root from script given that such
 an escape hatch already exists if the component authors end up using
 builtin DOM functions.


 It's the difference between using legit methods and hacking around the
 platform. If it's desirable to allow continued access in these situations,
 why isn't .shadowRoot an acceptable speed bump?


 The point is that it’s NOT ALWAYS desirable to allow continued access.  We’re
 saying that components should have a choice.

  If it's not desirable, isn't the ability to get around the restriction *at
 all* a bug to be fixed (arguing, implicitly, that we should be
 investigating stronger primitives that Maciej and I were discussing to
 enable Type 4)?


 Are you also arguing that we should “fix” closures so that you can safely
 call builtin objects and their methods without leaking information?  If
 not, I don’t see why we need to fix this problem only for web components.
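[Editorial aside: Ryosuke's point that builtins pierce closure privacy can be demonstrated without any DOM at all. The following is a minimal sketch — the patched `Array.prototype.push`, `makeCounter`, and `captured` are all invented for illustration, not anything from the thread. A value held "private" by a closure leaks as soon as the closure passes it to a builtin that other code on the page has replaced.]

```javascript
// A hostile (or merely curious) script patches a builtin before the
// component runs; anything the closure hands to that builtin leaks.
const captured = [];
const originalPush = Array.prototype.push;
Array.prototype.push = function (...args) {
  originalPush.apply(captured, args); // siphon off the arguments
  return originalPush.apply(this, args);
};

function makeCounter() {
  let count = 0;      // intended to be private to this closure
  const log = [];
  return function tick() {
    count += 1;
    log.push(count);  // innocently calls the (patched) builtin
    return count;
  };
}

const tick = makeCounter();
tick();
tick();
Array.prototype.push = originalPush; // restore the builtin

console.log(captured); // the "private" counter values leaked: [ 1, 2 ]
```

This is why the thread distinguishes closures in principle from closures as used in practice: a real component calls builtin methods, and each such call is a potential leak unless the component defensively captures the originals before any other script runs.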

  We all agree it's not a security boundary and you can go through great
 lengths to get into the ShadowRoot if you really wanted, all we've done by
 not exposing it is make sure that users include some crazy
 jquery-make-shadows-visible.js library so they can build tools like Google
 Feedback or use a new framework or polyfill.


 I don’t think Google Feedback is a compelling use case since all
 components on Google properties could simply expose “shadow” property
 themselves.


 So you've written off the massive coordination costs of adding a uniform
 property to all code across all of Google and, on that basis, have suggested there
 isn't really a problem? ISTM that it would be a multi-month (year?) project
 to go patch every project in google3 and then wait for them to all deploy
 new code.


 On the other hand, Google representatives have previously argued that
 adding template instantiation mechanism into browser isn’t helping anyone,
 because framework authors would figure that out better than we can.

 I have a hard time understanding why anyone would come to the conclusion
 that forcing every single web component that uses a template to have:


 this.createShadowRoot().appendChild(document.importNode(template.content, true));


I don't understand how this pertains to encapsulation. Could you elaborate?



 is any less desirable than having components that want to expose
 shadowRoot to write:

 this.shadowRoot = this.createShadowRoot();


The other hand of this argument is that components that wish to lock
themselves down could write:

this.shadowRoot = undefined;

Of course, this would not change the outcome of the Shadow Selector
spec, which is why a flag for createShadowRoot or something would be
necessary to configure the CSS engine (unless you're ok with having the
existence of a property on some DOM object control CSS parsing rules).

(Also, your example would not handle multiple shadow roots correctly; here's
one that would:)
var sr = this.shadowRoot;
var newSr = this.createShadowRoot();
newSr.olderShadowRoot = sr;
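[Editorial aside: the opt-in exposure pattern being debated here can be sketched with plain objects — no real `ShadowRoot` is involved, and `createComponent`, `exposeShadow`, and `addToShadow` are hypothetical stand-ins invented for this sketch. A component that wants to be open assigns its internal root to a public property; one that wants to be closed simply never does.]

```javascript
// Hypothetical stand-in for the "expose your shadow root by convention"
// pattern: the internal root object is only reachable from outside if
// the component opts in.
function createComponent({ exposeShadow }) {
  const shadow = { nodes: [] };   // stands in for the ShadowRoot
  const el = {
    addToShadow(node) { shadow.nodes.push(node); },
  };
  if (exposeShadow) {
    el.shadowRoot = shadow;       // opt-in: expose internals to tooling
  }
  return el;
}

const openComponent = createComponent({ exposeShadow: true });
const closedComponent = createComponent({ exposeShadow: false });

openComponent.addToShadow("x");
closedComponent.addToShadow("y");

console.log(openComponent.shadowRoot.nodes); // [ 'x' ]
console.log(closedComponent.shadowRoot);     // undefined
```

Either way the internals still exist and still work; the only question — the crux of the thread — is whether outside code (frameworks, Google Feedback-style tools) gets a sanctioned handle to them, or has to resort to hacks.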

Re: [webcomponents] Encapsulation and defaulting to open vs closed (was in www-style)

2014-02-14 Thread Tab Atkins Jr.
On Fri, Feb 14, 2014 at 6:12 PM, Daniel Freedman dfre...@google.com wrote:
 The other hand of this argument is that components that wish to lock
 themselves down could write:

 this.shadowRoot = undefined;

 Of course, this would not change the outcome of the Shadow Selector
 spec, which is why a flag for createShadowRoot or something would be
 necessary to configure the CSS engine (unless you're ok with having the
 existence of a property on some DOM object control CSS parsing rules).

There's nothing wrong with doing that, by the way.  The Selectors data
model is already based on DOM, for DOM-based documents.  I don't
currently specify how you know when an element in the selectors tree
has shadow trees, but I can easily say that it's whatever's
reachable via the DOM properties in DOM-based documents.

~TJ