Re: [WebComponents] Seeking status and plans [Was: [TPAC2014] Creating focused agenda]

2014-10-24 Thread Hajime Morrita
And for HTML Imports:

# Since Last TPAC:
* Minor bug fixes
* Stylesheet cascading order clarification

# Next 6 months:
* Script execution order clarification
* ES6 modules integration



On Fri, Oct 24, 2014 at 1:08 PM, Dimitri Glazkov dglaz...@chromium.org
wrote:

 Here's an update on Custom Elements Spec.

 Since the last TPAC:
 * Added informative ES6 section:
 http://w3c.github.io/webcomponents/spec/custom/#es6
 * Minor bug fixes

 Next 6 months:
 * P1: fix bugs, identified by Mozilla folks when implementing Custom
 Elements:
 https://www.w3.org/Bugs/Public/show_bug.cgi?id=27017
 https://www.w3.org/Bugs/Public/show_bug.cgi?id=27016
 * P1: integrate with new microtask processing:
 https://www.w3.org/Bugs/Public/show_bug.cgi?id=25714
 * P2: other bug fixes as time permits.

 :DG




-- 
morrita


Re: [imports] Spec. polishing

2014-08-27 Thread Hajime Morrita
Thanks for the feedback!
I addressed some of them. I aim to address all of them, but some are hard to
fix instantly.

On Tue, Aug 19, 2014 at 6:51 AM, Gabor Krizsanits gkrizsan...@mozilla.com
wrote:

 I've heard complaints about the readability of the current import draft,
 and I think the best way to improve it is if we all take some time and point
 out the parts that could benefit from some polishing. Instead of filing a
 dozen tiny bugs, I just went through the spec again and took some
 notes. Some of these nits are just personal opinion, so I don't expect all
 of them to be addressed, but I guess it helps if I mention them. I'm not a
 native English speaker so I have not tried fixing grammar mistakes.

 - the import referrer section does not reflect the fact that there can be more
 than one referrer for an import (the referrer - one of the referrers)

Added some explanation to clarify this and amended some wording around it.



 - the master document might be easier to define as the one and only root
 node of the import graph


Right. Redone that way.


 - what's up with the defaultView these days? is it shared? is it null? is
 it decided?


Updated to make it null. There is no rational way to explain it being
non-null. Closed https://www.w3.org/Bugs/Public/show_bug.cgi?id=23170.


 - imported documents don't have a browsing context - isn't it more
 precise to say that they use the master document's browsing context?


Maybe. I'm wondering what the best way is to clarify that the import isn't
rendered, which is what that section is meant to say.
I agree that it isn't clear what it implies. Filed a bug for that.
https://www.w3.org/Bugs/Public/show_bug.cgi?id=26682

- import dependent is used before defined


Reordered some definitions to avoid this.


 - import parent/ancestor: I would define parent first and then extend it
 to ancestor. It's also worth mentioning, for clarification, that the import
 link list is the sub-imports list


This makes sense. Rewrote the sentence in this way.


 - it's extremely hard to see that script execution order is really defined,
 even when I know how it is defined... figuring it out from the current spec
 without any prior knowledge is... challenging, to say the least. I think a
 detailed walk-through of the graph would be a HUGE help. By that I mean
 explicitly defining the execution order for the example, and also maybe
 illustrating, at some stages, what is blocking what.


I agree that it is hard to see what should happen. As script
execution is defined as part of HTML parsing, it isn't trivial to define the
import-specific part in an isolated, clear way. As you mentioned, some
more example-driven, informal illustration would be worth having here. Filed
a bug to track this:
https://www.w3.org/Bugs/Public/show_bug.cgi?id=26681



 - missing link to 'simple event'


Added a link.


 Gabor




-- 
morrita


Re: [imports] credentials flag bits need to be updated to current fetch terminology

2014-07-28 Thread Hajime Morrita
I encountered a pre-release site that uses credentials to protect it from the
public.
Imports on that site failed to load because the UA didn't send credentials.
The current behavior solves this problem.

There are a couple of options that I didn't take:

- Always send credentials: We clearly shouldn't do this, for the same reason
XHR doesn't.

- Introduce a @crossorigin attribute: This seemed plausible, but I worried
that it could be redundant and hurt brevity
  if credential-protected sites are the mainstream.
  Once a popular FAQ site recommends putting it on everything, that would
become bad news.

Sending credentials only to the same origin then looked like a promising way
to go. I think following XHR's behavior makes sense because it is well
understood, having been there for a long time, and both imports and XHR load
documents. I'm not super confident about this, though.
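The same-origin default described here can be sketched as a tiny predicate. This is an illustrative sketch, not spec text; `sendsCredentials` is an invented helper name:

```javascript
// Hedged sketch of the behavior discussed above: an import request
// carries credentials only when the import URL shares the master
// document's origin, mirroring XHR's long-standing default.
// `sendsCredentials` is an invented name, not a spec term.
function sendsCredentials(masterOrigin, importUrl) {
  return new URL(importUrl).origin === masterOrigin;
}

console.log(sendsCredentials('https://staging.example.com',
                             'https://staging.example.com/widgets/menu.html')); // true
console.log(sendsCredentials('https://staging.example.com',
                             'https://cdn.example.net/menu.html')); // false
```

Under this rule, imports on a credential-protected staging site load again: they come from the same origin, so cookies ride along, while nothing is sent cross-origin.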


On Sun, Jul 27, 2014 at 4:18 AM, Anne van Kesteren ann...@annevk.nl wrote:

 On Tue, Jul 22, 2014 at 12:36 AM, Hajime Morrita morr...@google.com
 wrote:
  It behaved like that before. I changed it to current one so that it works
  with credential-protected in-house or staged apps.

 You'll need to elaborate a bit, I'm not sure I understand. In any
 event, I think XMLHttpRequest's default behavior of only sending
 credentials same-origin is somewhat confusing. If we only offer one
 mode for rel=import we should either always include credentials (and
 thus require more complicated CORS headers) or never.





 --
 http://annevankesteren.nl/




-- 
morrita


Re: [imports] credentials flag bits need to be updated to current fetch terminology

2014-07-21 Thread Hajime Morrita
It behaved like that before. I changed it to the current behavior so that it
works with credential-protected in-house or staged apps.


Re: [imports] credentials flag bits need to be updated to current fetch terminology

2014-07-16 Thread Hajime Morrita
That's right. Thanks for the catch!
Fixed:
https://github.com/w3c/webcomponents/commit/90da4809a207916486bc7af83a568f3762e780a0


On Tue, Jul 15, 2014 at 10:00 AM, Boris Zbarsky bzbar...@mit.edu wrote:

 In http://w3c.github.io/webcomponents/spec/imports/#fetching-import the
 spec says:

   Fetch a resource from LOCATION with request's origin set to the
   origin of the master document, the mode to CORS and the omit
   credentials mode to CORS.

 There is no omit credentials mode in the current Fetch draft, and the
 mode that _is_ there, credentials mode, doesn't have CORS as a value.
  Presumably this is asking for same-origin?

 -Boris




-- 
morrita


Re: HTML imports: new XSS hole?

2014-06-03 Thread Hajime Morrita
A clarification to make sure people are on the same page:

On Mon, Jun 2, 2014 at 5:54 AM, James M Snell jasn...@gmail.com wrote:

 So long as they're handled with the same policy and restrictions as the
 script tag, it shouldn't be any worse.

HTML Imports are a bit stricter: for cross-origin imports, they check the
CORS header and decline to load if there is none.
Also, requests for imports don't send any credentials to other origins.
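A minimal sketch of that check follows. It is illustrative only: `importAllowed` is an invented name, and real CORS involves more headers and modes than shown here:

```javascript
// Hedged sketch: a cross-origin import must carry an
// Access-Control-Allow-Origin header covering the importing page's
// origin, or the UA declines to load it. Same-origin needs no CORS.
function importAllowed(documentOrigin, importOrigin, responseHeaders) {
  if (importOrigin === documentOrigin) return true;
  const allow = responseHeaders['access-control-allow-origin'];
  return allow === '*' || allow === documentOrigin;
}

console.log(importAllowed('https://app.example', 'https://cdn.example',
    { 'access-control-allow-origin': 'https://app.example' })); // true
console.log(importAllowed('https://app.example', 'https://cdn.example', {})); // false
```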



 On Jun 2, 2014 2:35 AM, Anne van Kesteren ann...@annevk.nl wrote:

 How big of a problem is it that we're making <link> as dangerous as
 <script>? HTML imports can point to any origin, which will then be able
 to execute scripts with the authority of the same origin.


 --
 http://annevankesteren.nl/




-- 
morrita


[webcomponents] HTML Imports notes

2014-04-07 Thread Hajime Morrita
Hi,

Since the last HTML Imports WD was published, I have heard some feedback.
Most of it is about the loading order and its sync/async nature. Thanks for
sharing your thoughts!

At the same time, I found there is some confusion about how it works. As
it's hard for me to capture the underlying thinking in the standards
document, I thought it might be helpful to informally sketch what's
behind the current design and what we're thinking. So here it is [1]. I
hope this clarifies some of your questions and/or concerns.

If you have any comments, let's talk at the coming F2F.

Bests,

[1] https://gist.github.com/omo/9986103

-- 
morrita


Re: [announce] Intent to publish a new WD of HTML Imports on March 6

2014-02-28 Thread Hajime Morrita
On Fri, Feb 28, 2014 at 3:40 PM, Arthur Barstow art.bars...@nokia.com wrote:

 Hajime proposes WebApps publish a new WD of HTML Imports on March 6, based
 on [ED].

 If you have any comments about this proposal, please reply to this thread
 by March 3 at the latest.

 -Thanks, AB

 [ED] https://dvcs.w3.org/hg/webcomponents/raw-file/tip/
 spec/imports/index.html


I'm sorry for my lack of update. The latest ED is now hosted on GitHub:
http://w3c.github.io/webcomponents/spec/imports/



-- 
morrita


Re: Decoupling style scoped from Shadow DOM

2014-02-20 Thread Hajime Morrita
Here is my understanding:

Firefox has already shipped <style scoped> without Shadow DOM, and I guess
there is no dependency from scoped styles on Shadow DOM, as the former was
done before the latter was started.

The WebKit situation was similar: <style scoped> was done before Shadow DOM,
and style scoping for Shadow DOM was done on top of the <style scoped>
internals. There was no dependency from <style scoped> on Shadow DOM. And
it seems both <style scoped> and the non-built-in parts of Shadow DOM have
been removed since then.

In Blink, Shadow DOM styling and <style scoped> kind of share the
underlying plumbing. But it is more that both depend on the same
lower-level mechanism for style scoping of DOM subtrees. There is no direct
dependency between the two.

So these two are almost orthogonal from the implementation perspective, as
are their specs.

--
morrita



On Thu, Feb 20, 2014 at 6:05 PM, Erik Bryn e...@erikbryn.com wrote:

 Hi everyone,

 First time caller, long time listener.

 From what I understand, the browser vendors seem to be bundling <style
 scoped> with the Shadow DOM spec. I'd like to start a discussion around
 decoupling the two and asking that vendors prioritize shipping <style
 scoped> over Shadow DOM as a whole. As a web developer and JS framework
 author, the single most important feature that I could use immediately, and
 that I believe is totally uncontroversial, is <style scoped>.

 Thoughts?

 Thanks,
 - Erik




-- 
morrita


Re: [webcomponents] Copying and Pasting a Web Component

2014-02-06 Thread Hajime Morrita
This seems related to discussion around selection [1].

My claim there was that the selection shouldn't cross the shadow boundary;
at the very least, the boundary crossing shouldn't be visible to script.

If this invariant is kept, we can model copy-pasting in DOM land, without
thinking about the Shadow DOM or the composed tree, because the selection
sees only one level of (shadow) subtree.

This means copying/pasting drops the shadow tree. This might look bad from a
pure Shadow DOM perspective. But it isn't that bad in practice, where Shadow
DOM is expected to be used with Custom Elements. Through callbacks, the
custom element definition rebuilds the shadow trees of copied elements. This
is similar to what built-in elements like <input> do.

This also means that:

- Each custom element needs to keep serializable state in the non-shadow DOM
if it wants to be copy-paste ready. If you build <x-menu> and want to make
it copy-pasteable, you will have to hold <item> elements or something
similar in your (non-shadow) DOM. <input> is a good example: it holds state
in its @type and @value attributes.
- Copying works only between documents which provide appropriate custom
element definitions. This might sound bad but is actually a reasonable
consequence. Custom elements are useless without their definitions anyway.
Defining cross-document copying of custom elements is too complex to have,
at least for the initial version of these standards.
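The first point above can be sketched with plain objects standing in for DOM nodes. Everything here, including the `XMenuLike` name and the `items` attribute, is invented for illustration:

```javascript
// Illustrative sketch: a copy-paste-ready element keeps its canonical
// state in attributes (which survive serialization), so a pasted copy
// can rebuild its shadow tree from attributes alone, the way <input>
// rebuilds from @type/@value. Names here are invented, not spec terms.
class XMenuLike {
  constructor(attributes = {}) {
    this.attributes = { ...attributes }; // survives serialization
    this.shadow = null;                  // dropped on copy
    this.rebuildShadow();                // what a created callback would do
  }
  rebuildShadow() {
    const items = (this.attributes.items || '').split(',').filter(Boolean);
    this.shadow = items.map(label => ({ tag: 'li', label }));
  }
  // "Copy" keeps only what serializes: the attributes.
  copy() { return new XMenuLike(this.attributes); }
}

const menu = new XMenuLike({ items: 'Open,Save,Quit' });
const pasted = menu.copy();
console.log(pasted.shadow.length); // 3
```

The pasted instance has a freshly rebuilt shadow tree, even though the copy carried no shadow state at all.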

Even though there are limitations, this allows in-place copying of
well-made, shadow-backed custom elements, and it aligns with how built-in
elements behave (in many browsers, I believe).

That being said, composed-tree-based copying might make sense for
inter-document copying and copying into non-browser environments like
mailers and note-taking apps. In those cases, people won't expect the copied
elements to be live, and it will be OK to use the composed tree without
scripting, which is essentially a frozen snapshot of the elements. I'm not
sure if the spec should cover these cases, though. It seems more like an
optimization that each UA could offer.

[1] http://lists.w3.org/Archives/Public/public-webapps/2013OctDec/0097.html


On Thu, Feb 6, 2014 at 11:34 AM, Ryosuke Niwa rn...@apple.com wrote:

 Okay, one significant implication of serializing the composed tree is that
 cutting & pasting a component would result in breaking all components
 within it, even when cutting and pasting them in place (i.e. cmd/ctl-x + v
 at the same location).

 This would mean that web components are pretty much unusable inside
 content-editable regions unless the author adds code to fix up and revive
 serialized components.

 But I can't think of a way to work around this issue, given we can't tell,
 at the time of copy/cut, whether the content will be pasted in a document
 with a given custom element.

 The most devastating requirement here is that the pasted content can't run
 any script for security reasons.

 - R. Niwa

 On Feb 6, 2014, at 5:03 AM, Hayato Ito hay...@google.com wrote:

 I remember that there was a session to discuss this topic at last year's
 BlinkOn conference.

 -
 https://docs.google.com/a/chromium.org/document/d/1SDBS1BUJHdXQCvcDoXT2-8o6ATYMjEe9PI9U2VmoSlc/edit?pli=1#heading=h.ywom0phsxcmo
   Session: 'Deep Dive on editing/selection'

 However, I couldn't find any non-normative notes attached there. I guess no
 one has a clear answer for this topic yet, unless there has been progress.



 On Thu, Feb 6, 2014 at 6:57 PM, Ryosuke Niwa rn...@apple.com wrote:

 Hi,

 What is expected to happen if a custom element or an element with shadow
 DOM is copied and pasted from one contenteditable area to another?

 Are we expected to serialize the composed tree and paste that?

 We can't keep the shadow DOM structure as there is no serialized form for
 it, and no script could run when it's pasted.

 I understand that there is no normative documentation on how copy and
 paste work to begin with, but I'd like to see at least a non-normative note
 as to what UAs are expected to do, since this would surely have huge
 compatibility and usability implications down the road.

 Any thoughts?

 - R. Niwa




 --
 Hayato




-- 
morrita


Re: [webcomponents] Copying and Pasting a Web Component

2014-02-06 Thread Hajime Morrita
On Thu, Feb 6, 2014 at 2:54 PM, Erik Arvidsson a...@google.com wrote:

 All good points. One issue that we should track...

 On Thu, Feb 6, 2014 at 5:20 PM, Hajime Morrita morr...@google.com wrote:
  This seems related to discussion around selection [1].
 
  My claim there was that the selection shouldn't cross shadow boundary, at
  least from the boundary crossing shouldn't be visible script.
 
  If this invariant is kept, we can model copy-pasting in DOM land, without
  thinking about Shadow DOM nor composed tree because the selection sees
 only
  one level of (shadow) subtree.
 
  This means copying/pasting does drop Shadow tree. This might look bad
 from
  pure Shadow-DOM perspective. But it isn't that bad in practice where
 Shadow
  DOM is expected to be used with Custom Elements. Though callbacks, custom
  element definition rebuilds shadow trees of copied elements. This is
 similar
  to what built-in elements like input are doing.
 
  This also means that:
 
  - Each Custom Element needs to keep serializing states in non-Shadow DOM
 if
  it wants to be copy-paste ready. If you build x-menu and and want to
 make
  it copy-pasteable, you will have to hold item or something in your
  (non-shadow) DOM. input is good example. It holds state in @type and
  @value attributes.

 <input> is actually a bad example because the value attribute maps to the
 defaultValue property. The value property is not reflected as an
 attribute.

 We should add a hook to custom elements to handle cloning of internal data.

 https://www.w3.org/Bugs/Public/show_bug.cgi?id=24570


I think this isn't about cloning but about serialization.
Copying HTML to the clipboard means it needs to be serialized into a byte
stream and read back as another tree.
Still, such hooks could be used to ensure the state is serialized into the
DOM. It'd be great if the hook supported the serialization scenario as well.



  - Copying does work only between the document which give appropriate
 custom
  element definitions. This might sound bad but actually is reasonable
  consequence. Custom elements are useless without its definitions anyway.
  Defining cross-document copying of custom element is too complex to
 have, at
  least for initial version of these standards.
 
  Even though there are limitations, this allows in-place copying of
  well-made, shadow-backed custom elements, and it aligns how built-in
  elements behave (in man browsers I believe).
 
  That being said, composed-tree based copying might make sense for
  inter-document copying and copying into non-browser environments like
  mailers and note-taking apps. In this case, people won't expect copied
  elements live and it will be OK to use composed-tree without scripting,
  that is essentially a frozen snapshot of the elements. I'm not sure if
 the
  spec should cover these though. It seems more like optimization that
 each UA
  possibly offers.
 
  [1]
 http://lists.w3.org/Archives/Public/public-webapps/2013OctDec/0097.html
 
 
  On Thu, Feb 6, 2014 at 11:34 AM, Ryosuke Niwa rn...@apple.com wrote:
 
  Okay, one significant implication of serializing the composed tree is
 that
  cutting  pasting a component would result in breaking all components
  within cutting and pasting them in place (i.e. cmd/ctl-x + v at the same
  location).
 
  This would mean that web components are pretty much unusable inside
  content editable regions unless the author add code to fix up and revive
  serialized components.
 
  But I can't think of a way to work around this issue given we can't
 tell,
  at the time of copy/cut, whether the content will be pasted in a
 document
  with a give custom element.
 
  The most devastating requirement here is that the pasted content can't
 run
  any script for security reasons.
 
  - R. Niwa
 
  On Feb 6, 2014, at 5:03 AM, Hayato Ito hay...@google.com wrote:
 
  I remember that there was a session to discuss this topic last year's
  blinkon conference.
 
  -
 
 https://docs.google.com/a/chromium.org/document/d/1SDBS1BUJHdXQCvcDoXT2-8o6ATYMjEe9PI9U2VmoSlc/edit?pli=1#heading=h.ywom0phsxcmo
Session: 'Deep Dive on editing/selection'
 
  However, I couldn't find non-normative notes attached there. I guess no
  one has clear answer for this topic yet unless there is a progress.
 
 
 
  On Thu, Feb 6, 2014 at 6:57 PM, Ryosuke Niwa rn...@apple.com wrote:
 
  Hi,
 
  What is expected to happen if a custom element or an element with
 shadow
  DOM is copied and pasted from one contenteditable area to another?
 
  Are we expected to serialize the composed tree and paste that?
 
  We can't keep the shadow DOM structure as there is no serialized form
 for
  it, and no script could run when it's pasted.
 
  I understand that there is no normative documentation on how copy and
  paste work to begin with but I'd like to see at least a non-normative
 note
  as to what UAs are expected to do since this would surely have a huge
  compatibility and usability implications down the road.
 
  Any thoughts

Re: [HTML imports]: Imports and Content Security Policy

2014-02-04 Thread Hajime Morrita
Good point.

My thinking is that I somehow want to make HTML Imports more like script
than HTML. What we might need is a separate content type from text/html for
HTML imports. It would prevent accidental inclusion of non-import HTML,
which is more likely to have an XSS hole.

We already have CORS to prevent that kind of thing, but owning a different
content type would be stronger protection.





On Tue, Feb 4, 2014 at 12:22 AM, Frederik Braun fbr...@mozilla.com wrote:

 On 03.02.2014 21:58, Hajime Morrita wrote:
  Parser-made script means the script tags and its contents that are
  written in HTML bytestream, not given by DOM mutation calls from
  scripts.  As HTML Imports doesn't allow document.write(), it seems safe
  to assume that these scripts are statically given by the author, not an
  attacker.
 

 I don't see how this mitigates XSS concerns. If we allow inline script
 there's no way to tell if the imported document has intended or injected
 inline scripts.

 Imagine an import that includes something like
 import.php?userName=<script>alert(1)</script>.




-- 
morrita


Re: [HTML imports]: Imports and Content Security Policy

2014-02-03 Thread Hajime Morrita
On Mon, Feb 3, 2014 at 2:23 AM, Frederik Braun fbr...@mozilla.com wrote:

 On 31.01.2014 06:43, Hajime Morrita wrote:
  Generally I prefer master-CSP model than the own CSP model due to its
  simplicity but I agree that unsafe-script kills the conciseness of
 Imports.
 
  To make inline scripts work with imports, we might want another CSP
  directive like safe-script, which allows parser-made script but
  doesn't allow dynamic ones. There is some room to talk what should be
  allowed as safe-script though. My gut feeling is A) script: Allowed,
  but B) inline event handlers: Not allowed.

 What is a safe script? What do you mean by parser-made script tags?
 We must be careful not to allow bypassing CSP with a simple XSS.


Forget about "safe"; I was just trying to give the notion a name.

"Parser-made script" means <script> tags and their contents that are
written in the HTML bytestream, not given by DOM mutation calls from
scripts. As HTML Imports doesn't allow document.write(), it seems safe to
assume that these scripts are statically given by the author, not an
attacker.

I agree that we should be careful here, though. We need to take care of
innerHTML somehow, for example.


 
  Does this make sense?




-- 
morrita


Re: [HTML imports]: Imports and Content Security Policy

2014-01-30 Thread Hajime Morrita
Generally I prefer the master-CSP model over the own-CSP model due to its
simplicity, but I agree that unsafe-script kills the conciseness of Imports.

To make inline scripts work with imports, we might want another CSP
directive like safe-script, which allows parser-made scripts but doesn't
allow dynamic ones. There is some room to discuss what should be allowed
under safe-script, though. My gut feeling is A) <script>: allowed, but B)
inline event handlers: not allowed.
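As a toy model of that proposal (safe-script is hypothetical, not a real CSP directive, and the flag names below are invented):

```javascript
// Toy model of the hypothetical safe-script directive sketched above:
// parser-inserted scripts are allowed, while dynamically injected
// scripts and inline event handlers are not. Flag names are invented.
function allowedUnderSafeScript(script) {
  return script.parserInserted === true && script.isEventHandler !== true;
}

console.log(allowedUnderSafeScript({ parserInserted: true,  isEventHandler: false })); // true
console.log(allowedUnderSafeScript({ parserInserted: false, isEventHandler: false })); // false
console.log(allowedUnderSafeScript({ parserInserted: true,  isEventHandler: true  })); // false
```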

Does this make sense?




On Fri, Jan 31, 2014 at 4:32 AM, Gabor Krizsanits
gkrizsan...@mozilla.com wrote:

  The security objection to the original own CSP design was never fully
  developed - I'm not sure it's necessarily a show-stopper.
 
  Nick

 Well, consider the case when we have the following import tree:

   I1
  |  |
 I2  I3
  |  |
   I4


 Respectively CSP1, CSP2, CSP3. CSP2 allows I4 to be loaded but
 CSP3 does not. So what should we do with I4? If I2 comes first
 it will allow I4 to be loaded and then I3 will get it as well,
 even though it should not. If I3 comes first then it won't be
 loaded...

 But let's say we eliminate the ordering problem by loading I4
 and for I3 we just return null for the import something.
 What about:

   I1
  |  |
 I2  I3
  |  |
   I4
  |  |
 I5  I6


 Now let's say CSP2 allows I5 but not I6 and CSP3 allows both
 I5 and I6 (or, even worse, allows I6 but not I5). Now if we look
 at I5 from I2 we should get a different imported document than
 looking at it from I3... To fix this problem we could just completely
 ignore the parents' CSP when we determine if a sub-import should be
 loaded or not. But I think that would kind of defeat the purpose
 of having CSP in the first place...

 Anyway, maybe I'm missing something but I don't see how the original
 own CSP could work.




-- 
morrita


Re: [HTML imports]: Removing imports

2014-01-30 Thread Hajime Morrita
I don't want to make imports removable from the cache/manager. Once an
import is loaded, it should be there until the page ends. Allowing
removal/cancellation has big implications that would introduce many
complicated race conditions and edge cases. For example, what happens if a
<link> is removed before or while its imports are loaded?

Also, dependency resolution would become much more tricky. What happens when
the author swaps the order of successive <link>s? I'd rather think of <link
rel=import> as a one-shot directive passed through the parser to the engine
than as a reflection of internal representation. This mental model isn't
perfect either (think about styles in imports), but it's much simpler than
supporting fully dynamic behavior.

I agree that the network error and retry issue is a valid concern. However,
I don't think it is good layering design to handle it at the import level.
It should be addressed by lower-level primitives like Service Worker, IMO.
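A rough sketch of that layering, under stated assumptions: `fetchWithRetry` is an invented helper, and in a real Service Worker it would be called from a `fetch` event handler rather than directly:

```javascript
// Hedged sketch: retry logic lives in a Service Worker (or any
// lower-level fetch wrapper), not in the imports spec itself.
// `fetchWithRetry` is an invented name.
async function fetchWithRetry(doFetch, attempts = 3) {
  let lastError;
  for (let i = 0; i < attempts; i++) {
    try {
      return await doFetch();   // e.g. () => fetch(importUrl)
    } catch (err) {
      lastError = err;          // transient failure: try again
    }
  }
  throw lastError;
}

// Example with a stub that fails twice, then succeeds:
let calls = 0;
const flaky = () => (++calls < 3)
  ? Promise.reject(new Error('network'))
  : Promise.resolve('import body');
fetchWithRetry(flaky).then(body => console.log(body)); // "import body"
```

With this layering, the import machinery stays a one-shot directive, and retry policy is entirely in the page author's (or SW author's) hands.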






On Fri, Jan 31, 2014 at 2:13 AM, Gabor Krizsanits
gkrizsan...@mozilla.com wrote:

 I've already opened a bug that import removal is not clear to me
 (https://www.w3.org/Bugs/Public/show_bug.cgi?id=24003), but there
 is more...

 So in one way or another imports are cached per master documents
 so if another link refers to the same import in the import tree it
 does not have to be reloaded. Question is, when to remove the import
 from the cache (let's call it import manager).

 One version is to release import documents when the master document
 is released. Another would be to wait until all the link elements
 that refers to it are released. But maybe it should be released the
 moment the last referring link element is detached from the
 tree... this way users could force re-try import loading. Because
 right now, if import loading failed, there is no way to re-try it.
 Any thoughts?




-- 
morrita


Re: [HTML imports]: Imports and Content Security Policy

2014-01-10 Thread Hajime Morrita
On Fri, Jan 10, 2014 at 5:30 PM, Frederik Braun fbr...@mozilla.com wrote:

 On 10.01.2014 03:52, Hajime Morrita wrote:
  Hi Frederik,
  Thanks for bringing it up!
 
  As you pointed out, CSP of imported documents essentially extends the
  set of allowed domains. I thought I was useful for component authors to
  specify their own domains, like one of their own CDN.

 Well the loss of convenience is indeed unfortunate.
 
  I'm not sure how it is threatening because components won't have any
  sensitive state in it
  because HTML Imports doesn't have any isolation mechanism after all. It
  however might be an optimistic view.
 

 I'm not concerned about state, but it shouldn't be allowed to bypass a
 CSP (which is stated in a header, after all) by a simple content
 injection that triggers an HTML Import (XSS is very prevalent and the
 main reason we're pushing for CSP is to prevent XSS :))

  Being conservative, it could be better to apply master document's CSP to
  whole import tree
  and ignore CSPs on imports. It is less flexible and page authors need to
  list all domains for
  possibly imported resources, but this flat model looks what Web is
  relying today.
 
 Yes, just to re-emphasize: I think this is the way to go.


Filed: https://www.w3.org/Bugs/Public/show_bug.cgi?id=24268

Although we might come up with a better idea,
I agree that we should start from the safer option.


  I'd appreciate any feedback and/or suggestions here. It seems there is
  some progress on CSP side.
  It would be great if there is some new mechanism to handle CSP of
  subresources.
  Things like ES6 modules might get benefit from it as well.





-- 
morrita


Re: [HTML imports]: Imports and Content Security Policy

2014-01-09 Thread Hajime Morrita
Hi Frederik,
Thanks for bringing it up!

As you pointed out, the CSP of imported documents essentially extends the
set of allowed domains. I thought it was useful for component authors to
specify their own domains, like one of their own CDNs.

I'm not sure how threatening this is, because components won't have any
sensitive state in them;
HTML Imports doesn't have any isolation mechanism after all. That might be
an optimistic view, however.

Being conservative, it could be better to apply the master document's CSP to
the whole import tree
and ignore CSPs on imports. It is less flexible, and page authors need to
list all domains for
possibly imported resources, but this flat model looks like what the Web
relies on today.

I'd appreciate any feedback and/or suggestions here. It seems there is some
progress on the CSP side.
It would be great if there were some new mechanism to handle the CSP of
subresources.
Things like ES6 modules might benefit from it as well.



On Fri, Jan 10, 2014 at 12:19 AM, Frederik Braun fbr...@mozilla.com wrote:

 Hi,

 I have subscribed to this list because my colleague Gabor (CC) and I
 found a few issues with Content Security Policies applied to HTML imports.

 The current draft
 (http://w3c.github.io/webcomponents/spec/imports/#imports-and-csp, Jan
 9) suggests that import loading is restricted through a script-src
 attribute, which is probably fine.

 Our issue, however is with the next layer: Each import is restricted by
 its own Content Security Policy.

 Let's discuss how this could work: The document subtree of the imported
 document has its own CSP, just like the parent document. Hence, there
 are two CSPs guarding one browsing context (i.e. the window object).

 This brings the issue that a restricted imported document can ask the
 non-restricted parent document to do its dirty work:
 Imagine a CSP on the parent document (example.com) that only allows
 example.com and imported.com. The CSP on imported.com allows only
 imported.com ('self'). The imported document could easily contain a
 script that attaches a script node to the parent document's body (other
 subtree) which loads further resources from example.com. The restriction
 against imported.com has been subverted, hasn't it?

 If we interpreted this more freely, we could also consider combining the
 two CSPs somehow, let's see how this could work:

 Scenario A) The CSP of the main and the imported document are loosely
 combined, i.e. the union set is built. The whole document that formerly
 had a policy of default-src: example.com gets a new policy of
 default-src: imported.com, because the import has this policy specified.

 This case brings multiple issues: Allowing resource loads
 depends on whether the web server at imported.com serves its http
 response quickly. This issue is only getting worse with nested imports
 in which imported documents A and B both import another document C -
 which chain up to the parent is the correct one? Via A or B?


 Scenario B) The CSP of the main and the imported document are forming a
 strict combination (intersection): The CSP on the imported document
 (imported.com) would then have to explicitly allow example.com,
 otherwise including it would break the importing site. This is unlikely
 to be intended. The previous example, with imported documents A and B
 that both import C, applies here as well and makes it impossible to solve.


 Now, what seems to make sense instead is that the CSP of the imported
 website is completely *disregarded*. This would lead to the sad fact
 that the importer (example.com) cannot treat imported.com entirely as a
 black box, as it has to inspect which resources it intends to load and
 selectively allow them in their own CSP. On the other hand, this
 wouldn't bring any of the previously mentioned side-effects. The actual
 owner of the document is fully in charge.



 What do you think?
 Frederik





-- 
morrita


Re: [webcomponents] Inheritance in Custom Elements (Was Proposal for Cross Origin Use Case and Declarative Syntax)

2013-12-05 Thread Hajime Morrita
On inheritance around the HTMLElement family, there seems to be confusion
between interface-side inheritance and implementation-side inheritance.

In WebIDL, the interfaces of HTML elements inherit only from the HTMLElement
interface. This is fine because it concerns only the interface (the API
signature), not the implementation. Even though there is some duplication in
the API surface of these elements, we can live with it, as it is relatively
stable and we have other reuse mechanisms like
partial interfaces.

For Custom Elements, the inheritance is not only about interface, but also
about implementation. The implementation is more complex and more in flux in
its details, and thus worth sharing between different elements. Actually, the
implementation of built-in HTML elements, which is usually done in C++,
uses inheritance heavily, at least in Blink and (I believe) WebKit.




On Fri, Dec 6, 2013 at 2:53 PM, Ryosuke Niwa rn...@apple.com wrote:

 On Dec 5, 2013, at 9:30 PM, Dimitri Glazkov dglaz...@chromium.org wrote:

 On Thu, Dec 5, 2013 at 8:50 PM, Ryosuke Niwa rn...@apple.com wrote:

 On Dec 5, 2013, at 8:30 PM, Dimitri Glazkov dglaz...@chromium.org
 wrote:

 On Thu, Dec 5, 2013 at 7:55 PM, Ryosuke Niwa rn...@apple.com wrote:

 On Nov 11, 2013, at 4:12 PM, Dimitri Glazkov dglaz...@chromium.org
 wrote:
  3) The approach pollutes global name space with constructors. This had
 been voiced many times as unacceptable by developers.
 
  4) How does one build a custom element that uses <name-card> as its base
 element? What about <div> or any other HTML element?
 
  The last one remains to be the hardest. The tortured inheritance
 support is what killed <element> in the first place. We can't ignore the
 inheritance, since it is clearly present, in both DOM and JS. If we attempt
 to punt on supporting it, our decisions cut off the opportunities to evolve
 this right in the future, and will likely leave us with boogers like
 multiple syntaxes for inheritance vs. non-inheritance use cases.

 What exactly are you referring to by inheritance support?


 Inheritance from all builtin elements (e.g. subclasses of HTMLElement)?

 Or inheritance from custom elements authors have defined?


 Sure, both are fine. Why should we treat them differently?


 For the following reasons to list a few:

1. We don't have any subclass of HTMLElement that can be instantiated
and has subclasses.

 Not sure why this matters?


 It means that inheritance is NOT an integral part of the platform; at least
 for builtin elements.


    2. Methods and properties of builtin elements are not designed to be
    overridden by subclasses.

 Okay, that seems like an implementation detail of HTML. Again, doesn't
 preclude seeking consistency between inheritance from custom elements or
 HTML elements.


 That's not an implementation detail.   The key observation is that HTML
 elements aren't designed to be subclassed.


    3. Behaviors of builtin elements will be changed by UAs in the future
    without author-defined subclasses being upgraded accordingly, thereby
    violating the Liskov substitution principle.

 Sure, it's probably a bad idea to inherit from any HTML element (other
 than HTMElement).


 Great, then let's not support that.

 The Custom Elements spec is deeply steeped in the idea that we bring new
 functionality by explaining how the platform works (see
 http://extensiblewebmanifesto.org/). If we just side-load some new
 platform magic to accommodate a few specific use cases, we're not doing the
 right thing.


 Without getting into a philosophical debate, supporting inheritance from
 subclasses of HTML elements doesn't explain how the Web platform works at
 all so let's not add that.

 If we removed this, we don't have to worry about attaching shadow DOM to
 builtin elements (other than HTMLElement itself) for custom elements.

 - R. Niwa




-- 
morrita


Re: [webcomponents] Inheritance in Custom Elements (Was Proposal for Cross Origin Use Case and Declarative Syntax)

2013-12-05 Thread Hajime Morrita
I agree that it isn't trivial to inherit from a built-in element as if it
were an author-defined element.
My point was that mentioning the relationship between HTMLElement and built-in
elements in WebIDL doesn't matter in this discussion and we should focus on
other reasoning.


On Fri, Dec 6, 2013 at 3:34 PM, Ryosuke Niwa rn...@apple.com wrote:

 On Dec 5, 2013, at 10:09 PM, Hajime Morrita morr...@google.com wrote:
  On inheritance around HTMLElement family, there seems to be a confusion
 between interface side inheritance and implementation side inheritance.

 Right.  Differentiating the two is very important.

  For Custom Elements, the inheritance is not only about interface, but
 also about implementation. The implementation is more complex and flux in
 detail, thus worth being shared between different elements. Actually, the
 implementation of built-in HTML elements, which are usually done in C++,
 uses inheritance heavily, at least Blink and (I believe) WebKit.

 The reason we can use inheritance heavily is because we have control over
 both parent and child classes, and there are careful distinctions between
 public, protected, and private member functions and member variables in
 those classes.

 Unfortunately, custom elements can't easily inherit from these builtin
 HTML elements because builtin elements do not provide protected
 functions, and they don't expose necessary hooks to override internal
 member functions to modify behaviors.

 Evidently, in the current Custom Elements specification, the interface is
 inherited but the implementation (DOM) is composed via the <shadow> element.

 In fact, we've come to a conclusion that the <shadow> element is an unnecessary
 complexity because composition is the preferred mechanism to build complex
 objects on the Web, and if someone is using inheritance, then the subclass
 should have access to the shadow DOM of the superclass directly instead of
 it being magically projected into the <shadow> element.  In fact, this is how
 inheritance works in other GUI frameworks such as Cocoa, .net, etc…

 Additionally, we don't have something like the <shadow> element inside Blink or
 WebKit for our builtin elements.  If our goal is to explain and describe
 the existing features on the Web platform, then we sure shouldn't be
 introducing this magical mechanism.

 If we had a subclass that truly wanted to contain its superclass in some
 non-trivial fashion, then we should be using composition instead.  If the
 container class needed to forward some method calls and property accesses
 to the contained element, then it should just do.  And providing a
 convenient mechanism to forward method calls and property access in
 JavaScript is an orthogonal problem we should be solving anyway.

 - R. Niwa




-- 
morrita


Re: [HTML Imports]: what scope to run in

2013-11-20 Thread Hajime Morrita
I'd frame the problem in a slightly different way.

It seems like almost everyone agrees that we need a better way to
modularize JavaScript, and ES6 modules are one of the most promising
ways to go. We also agree (I think) that we need a way to connect
ES6 modules and the browser.

What we don't agree on is the best way to do it. One option
is to introduce a new primitive like jorendorff's <module> element.
People also see that HTML Imports could be another option. So
the conversation could be about which is better, or whether we need
both or not.

If you just want to namespace the script, HTML Imports might be overkill because

 * An import is HTML and HTML isn't JS developer friendly in some
cases. Think about editor integration for example. Application
developers will prefer .js rather than .html as the container of their
code.
 * Given the above, HTML Imports introduce an indirection with <script
src=...> and will be slower than directly loading .js files.
 * HTML Imports will work well with something module-ish, and that keeps
the spec small as it off-loads the module-loading responsibility.
This seems like good modularization of the feature.

HTML Imports make sense only if you need HTML fragments and/or
stylesheets, but people need modularization regardless of whether they
develop Web Components or plain JS pieces. I think the web standard should
help both cases, and <module> or something similar serves better for
that purpose.

Does this make sense?

--
morrita



On Thu, Nov 21, 2013 at 7:00 AM, Rick Waldron waldron.r...@gmail.com wrote:



 On Wed, Nov 20, 2013 at 12:38 PM, Brian Di Palma off...@gmail.com wrote:

 On Tue, Nov 19, 2013 at 10:16 PM, Rick Waldron waldron.r...@gmail.com
 wrote:
 
 
 
  On Mon, Nov 18, 2013 at 11:26 PM, Ryosuke Niwa rn...@apple.com wrote:
 
  We share the concern Jonas expressed here as I've repeatedly mentioned
  on
  another threads.
 
  On Nov 18, 2013, at 4:14 PM, Jonas Sicking jo...@sicking.cc wrote:
 
  This has several downsides:
  * Libraries can easily collide with each other by trying to insert
  themselves into the global using the same property name.
  * It means that the library is forced to hardcode the property name
  that it's accessed through, rather allowing the page importing the
  library to control this.
  * It makes it harder for the library to expose multiple entry points
  since it multiplies the problems above.
  * It means that the library is more fragile since it doesn't know what
  the global object that it runs in looks like. I.e. it can't depend on
  the global object having or not having any particular properties.
 
 
  Or for that matter, prototypes of any builtin type such as Array.
 
  * Internal functions that the library does not want to expose require
  ugly anonymous-function tricks to create a hidden scope.
 
 
  IMO, this is the biggest problem.
 
  Many platforms, including Node.js and ES6 introduces modules as a way
  to address these problems.
 
 
  Indeed.
 
  At the very least, I would like to see a way to write your
  HTML-importable document as a module. So that it runs in a separate
  global and that the caller can access exported symbols and grab the
  ones that it wants.
 
  Though I would even be interested in having that be the default way of
  accessing HTML imports.
 
 
  Yes!  I support that.
 
  I don't know exactly what the syntax would be. I could imagine
  something
  like
 
  In markup:
  <link rel=import href=... id=mylib>
 
  Once imported, in script:
  new $('mylib').import.MyCommentElement;
  $('mylib').import.doStuff(12);
 
  or
 
  In markup:
  <link rel=import href=... id=mylib import=MyCommentElement
  doStuff>
 
  Once imported, in script:
  new MyCommentElement;
  doStuff(12);
 
 
  How about this?
 
  In the host document:
  <link ref=import href=foo.js import=foo1 foo2>
  <script>
  foo1.bar();
  foo2();
  </script>
 
  In foo.js:
  module foo1 {
  export function bar() {}
  }
  function foo2() {}
 
 
 
  Inline module syntax was removed and will not be included in the ES6
  module
  specification. Furthermore, the example you've illustrated here isn't
  entirely valid, but the valid parts are already covered within the scope
  of
  ES6 modules:
 
  // in some HTML...
  <script>
  import {foo1, foo2} from "foo";
 
  foo1();
  foo2();
  </script>
 
  // foo.js
  export function foo1() {}
  export function foo2() {}
 
 
  (note that foo2 isn't exported in your example, so would be undefined)
 
  Rick
 

 I thought if an identifier wasn't exported trying to import it would
 result in fast failure (SyntaxError)?


 Yes, my statement above is incorrect.

 Rick






-- 
morrita



Re: [webcomponents] Cross origin HTML imports

2013-10-24 Thread Hajime Morrita
Oh I see. Thanks for the clarification.


On Thu, Oct 24, 2013 at 5:54 PM, Anne van Kesteren ann...@annevk.nl wrote:

 On Thu, Oct 24, 2013 at 1:38 AM, Hajime Morrita morr...@google.com
 wrote:
  OK, I will refer the fetch section in HTML spec then.

 I think you misunderstood, http://fetch.spec.whatwg.org/#fetching is
 the entry point.


 --
 http://annevankesteren.nl/




-- 
morrita


Re: [webcomponents] Cross origin HTML imports

2013-10-23 Thread Hajime Morrita
Hi Joe,

Thanks for trying HTML Imports and looking into the spec!

It's a spec bug. The intention of the spec is to allow CORS-aware cross
origin resources. It seems that something wrong happened during editing. I
filed a bug [1] for revising it.

I've been trying to define the import loading behavior on top of the basic
fetch algorithm of the Fetch standard, so feedback like yours is really
appreciated.

Thanks,

[1] https://www.w3.org/Bugs/Public/show_bug.cgi?id=23606

--
morrita


On Wed, Oct 23, 2013 at 4:10 AM, Joe Walnes j...@walnes.com wrote:

 Hi

 I'm experimenting with HTML Imports to simplify a collection of
 complicated web-apps. I'm really impressed with the functionality -
 it's greatly simplified things. I'm currently using a polyfill but
 looking forward to being able to use this natively.

 I've hit a limitation though - I'd really like to be able to share
 imports across origins. This makes it easy to share components across
 web-apps and I see this as a powerful way to piece together web-apps
 from different service providers.

 The current HTML Imports draft says: On getting, the import attribute
 must return null, if: ... the resource is CORS-cross-origin.. If I
 understand correctly, that means HTML Imports will not work cross
 origin.

 Importing CSS, JavaScript and other media do not have this constraint.
 What is the reason behind having this constraint in HTML imports?

 From the container page, it seems no riskier than linking to
 JavaScript on another domain. From the contained page, appropriate use
 of CORS headers should be able to prevent malicious pages grabbing
 their content. In fact the polyfill I'm using already allows cross
 origin imports, so even if the spec forbids it, the polyfills can get
 around it.

 Is this a deliberate design decision, or just something that hasn't
 been discussed in the draft yet?

 Thanks
 -Joe Walnes

 p.s. As I was writing this I just saw the Fetch spec proposal. This
 looks great and I hope it will help address the issue.





-- 
morrita


Re: [webcomponents] Cross origin HTML imports

2013-10-23 Thread Hajime Morrita
OK, I will refer the fetch section in HTML spec then.



On Wed, Oct 23, 2013 at 9:29 PM, Anne van Kesteren ann...@annevk.nl wrote:

 On Wed, Oct 23, 2013 at 1:16 PM, Hajime Morrita morr...@google.com
 wrote:
  I've been trying to define the import loading behavior on top of the
 basic
  fetch algorithm of the fetch standard. So feedback like yours are really
  appreciated.

 basic fetch is not an entry point. It's an internal algorithm. You
 should only ever invoke fetch from other specifications.


 --
 http://annevankesteren.nl/




-- 
morrita


Re: Shadow DOM and Fallback contents for images

2013-10-17 Thread Hajime Morrita
On Thu, Oct 17, 2013 at 5:01 PM, Ryosuke Niwa rn...@apple.com wrote:

 On Oct 16, 2013, at 9:47 PM, Hajime Morrita morr...@google.com wrote:

 D. H[ello Shado]w World - selection spans outside to inside.


 Suppose we allowed this selection. Then how does one go about pasting this
 content elsewhere?

 Most likely, whatever other content editable area, mail client, or some
 random app (e.g. MS Word) into which the user is going to paste wouldn’t
 support shadow DOM and most certainly wouldn’t have the component the
 original page happened to have.


We have to define how Range.cloneContents() or extractContents() works
against a shadow-selecting range, assuming selection is built on top of DOM
Range. This doesn't seem that complicated. As a cloned element drops its
shadow, these range-generated nodes won't have a shadow either.
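
A toy model (not real DOM code) of the claim above: cloning copies a node and its children, but the shadow tree attached to a host is not carried over, so range-generated nodes end up shadow-free. Node shapes and names here are invented for the illustration.

```javascript
// Model a node as { name, children, shadow? }. Cloning deliberately
// copies only name and children, mirroring how a clone drops the
// host's shadow tree.
function cloneNode(node) {
  return { name: node.name, children: node.children.map(cloneNode) };
}

const host = {
  name: 'span',
  children: [{ name: '#text', children: [] }],
  shadow: { name: '#shadow-root' }, // dropped by cloning
};

const clone = cloneNode(host);
console.log('shadow' in clone); // false
```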



 Or are you suggesting to copy/serialize the composed DOM tree?



No, it's the same as copying in the document tree we have today. The only
difference is that its tree scope is a ShadowRoot instead of a Document. I
don't think this difference is significant.

Actually, <textarea> can be (and is) implemented on top of a UA shadow DOM.
What we need is a way to explain this.


 What is unclear to me is:

 - For case of C, should we consider Shadow being selected? Naively
 thinking yes, but the implication isn't that clear.

 - Should we have a way to forbid B to ensure the selection atomicity?
 Probably yes, but I don't think we have one. The spec could clarify this
 point. My feeling is that this is tangential to Shadow DOM and is more
 about generic selection atomicity concept. It could be covered by a CSS
 property for example.


 Doesn't -moz-user-select/-ms-user-select: element do that already?


Oh right. This is exactly what I wanted. Thanks for pointing this out!



 On Oct 16, 2013, at 9:25 PM, Jonas Sicking jo...@sicking.cc wrote:

 On Oct 16, 2013 8:04 PM, Ryosuke Niwa rn...@apple.com wrote:
  In that case, the entire Note will be selected as an atomic unit.  Is
 there a use case in which you want to select a part of Note and a part of
 Text here?

 What if the text Note was longer? Is there ever a use case for selecting
 part of a word? If not, why does all browsers allow it?

 Then the user can select a part of “Long Note Title”.

 - R. Niwa




-- 
morrita


Re: Shadow DOM and Fallback contents for images

2013-10-17 Thread Hajime Morrita
On Fri, Oct 18, 2013 at 3:56 AM, Jonas Sicking jo...@sicking.cc wrote:

 On Thu, Oct 17, 2013 at 1:55 AM, Hajime Morrita morr...@google.com
 wrote:
 
 
 
  On Thu, Oct 17, 2013 at 5:01 PM, Ryosuke Niwa rn...@apple.com wrote:
 
  On Oct 16, 2013, at 9:47 PM, Hajime Morrita morr...@google.com wrote:
 
  D. H[ello Shado]w World - selection spans outside to inside.
 
 
  Suppose we allowed this selection. Then how does one go about pasting
 this
  content elsewhere?
 
  Most likely, whatever other content editable area, mail client, or some
  random app (e.g. MS Word) into which the user is going to paste wouldn’t
  support shadow DOM and most certainly wouldn’t have the component the
  original page happened to have.
 
 
  We have to define how Range.cloneContents() or extractContents() works
  against shadow-selecting range, assuming selection is built on top of DOM
  Range. This doesn't seem that complicated. As cloned element drops its
  shadow, these range-generated nodes won't have shadow either.

 Oh, sounds like you are suggesting that we expand the Range API such
 that it can have awareness of spanning parts of a shadow DOM tree.
 While still keeping the existing endpoints in the same root?

 That like that idea!


No, that isn't what I meant. I just want to make it explicit that Range API
works well with shadow trees.
However it happened to be taken by you in some interesting way, and I'm
fine with this  ;-)

So what does this mean?

The subranges property exposes the selections of containing ShadowRoots?
That sounds reasonable.

Although I don't think Blink will support this in the short term, because it
only has a per-document selection and doesn't have per-tree ones,
it is rather a lack of features which can be improved eventually than
something totally incompatible that hurts the Web.

Let me confirm what we're talking about.

1. Each shadow tree has its own selection. Each selection spans nodes in
the owning tree only.
2. A user-initiated selection operation (like dragging) can operate multiple
selections in different trees seamlessly,
as if it were a single selection crossing boundaries.
3. Range and/or Selection API could expose sub-selections/sub-ranges of
containing trees.

Going back to the example,

 C. H[ello Shadow Worl]d - selection stays outside shadow, but its range
contains a node with shadow.

This turns into something like

C2: H[ello (Shadow) Worl]d - There are two selections - [] for the outside
tree and () for the inside tree.

And this one

 D. H[ello Shado]w World - selection spans outside to inside.

turns into

D2: H[ello] (Shado)w World

Is this correct?
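
The C2/D2 splitting above can be sketched as pure logic: given the composed text "Hello Shadow World" carved into per-tree segments, a flat user selection over the composed text decomposes into one sub-range per tree. This is only a conceptual model of the proposal, not the Selection API.

```javascript
// Segments stand in for the outer tree and the shadow tree; everything
// here is illustrative.
const segments = [
  { tree: 'outer', text: 'Hello ' },
  { tree: 'shadow', text: 'Shadow' },
  { tree: 'outer', text: ' World' },
];

// Split a flat [start, end) selection over the composed text into
// per-segment parts, each expressed in its own segment's offsets.
function splitSelection(segments, start, end) {
  const parts = [];
  let offset = 0;
  for (const seg of segments) {
    const segStart = offset;
    const segEnd = offset + seg.text.length;
    const s = Math.max(start, segStart);
    const e = Math.min(end, segEnd);
    if (s < e) {
      parts.push({ tree: seg.tree, text: seg.text.slice(s - segStart, e - segStart) });
    }
    offset = segEnd;
  }
  return parts;
}

// Selection D, H[ello Shado]w World, becomes two sub-selections (D2).
console.log(splitSelection(segments, 1, 11));
// [ { tree: 'outer', text: 'ello ' }, { tree: 'shadow', text: 'Shado' } ]
```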




 Though I'd also be interested to hear how other implementations feel
 about the Gecko solution of allowing selection to be comprised of
 multiple DOM Ranges.

 / Jonas




-- 
morrita


Re: Shadow DOM and Fallback contents for images

2013-10-16 Thread Hajime Morrita
Talking over an example might help...


<div>
Hello 
<span>
#shadowroot
Shadow
</span>
 World
</div>


This renders the text Hello Shadow World.

In my understanding, following patterns of selections are allowed:

A. H[ell]o Shadow World - selection stays outside shadow
B. Hello S[had]ow world - selection stays inside shadow
C. H[ello Shadow Worl]d - selection stays outside shadow, but its range
contains a node with shadow.

This is not allowed:

D. H[ello Shado]w World - selection spans outside to inside.

What is unclear to me is:

- For case of C, should we consider Shadow being selected? Naively
thinking yes, but the implication isn't that clear.

- Should we have a way to forbid B to ensure the selection atomicity?
Probably yes, but I don't think we have one. The spec could clarify this
point. My feeling is that this is tangential to Shadow DOM and is more
about generic selection atomicity concept. It could be covered by a CSS
property for example.




On Thu, Oct 17, 2013 at 1:25 PM, Jonas Sicking jo...@sicking.cc wrote:


 On Oct 16, 2013 8:04 PM, Ryosuke Niwa rn...@apple.com wrote:
 
  On Oct 16, 2013, at 5:14 PM, Jonas Sicking jo...@sicking.cc wrote:
 
   On Tue, Oct 15, 2013 at 12:04 PM, Ryosuke Niwa rn...@apple.com
 wrote:
   On Oct 13, 2013, at 9:19 PM, Jonas Sicking jo...@sicking.cc wrote:
  
   On Sun, Oct 13, 2013 at 9:11 PM, Ryosuke Niwa rn...@apple.com
 wrote:
   On Oct 11, 2013, at 10:52 PM, Jonas Sicking jo...@sicking.cc
 wrote:
  
   On Fri, Oct 11, 2013 at 10:23 PM, Ryosuke Niwa rn...@apple.com
 wrote:
   On Oct 7, 2013, at 1:38 PM, Jonas Sicking jo...@sicking.cc
 wrote:
  
   On Oct 7, 2013 6:56 AM, Hajime Morrita morr...@google.com
 wrote:
  
   Hi,
  
   I'm sorry that I didn't notice that you were talking about UA
 shadow DOM.
   It's an implementation detail and the standard won't care about
 that.
  
   That being said, it might be a good exercise to think about
 feasibility to
   implement img-like custom element which supports alternate
 text.
  
   1. Selections – Which the specification is clear. It will not
 allow the
   selection range cannot span across different node trees. This
 again would
   hinder seamless experience with selection.
  
  
   Is this needed to implement alt text? Try this on Firefox:
   http://jsfiddle.net/8gkAV/2/ .
   The caret just skips the alt-text.
  
   I think we generally consider that a bug.
  
   I think this is a desirable behavior since the img element is
 atomic.  I
   don't think we want to let the user start editing the alt text
 since the
   user can't stylize the alt anyway.
  
   Note that this part is about selection, not editing. I don't see
 any
   reason to treat the alt text different from any other text. I.e.
 that
   the user can select character-by-character by default, but that
 this
   can be overridden by the author if desired.
  
   If I'm not mistaken, how alternative text is presented is up to UA
 vendors.  Given that, I don't think we should mandate one way or another
 with respect to this behavior.
  
   A more important question is what happens to selection inside a
 shadow DOM created by the author.
  
   2. Editing – The spec says that the contenteditable attribute
 should not
   be propagated from the shadow host to the shadow root. Does
 this mean that
   and Shadow DOM cannot participate in editing? This I find is
 limiting to use
   shadow DOM to represent fallback content
  
   This is same as 1) above. The caret skips the alt-text.
  
   I think these are desirable behavior because, if Shadow DOM is
 editable in
   @contentEditable subtree by default, the component author (who
 added the
   shadow DOM) has to make the element definition ready for editing.
  
   Selection and editing are related but different.
  
   Text displayed on screen should by default always be selectable.
 The fact
   that it isn't in canvas for example is a bad thing.
  
   It is fine to enable the author to opt out of making the selection
   selectable, but the default should be that it is selectable
  
   Ugh, my text here is gibberish :)
  
   I think I intended to say:
  
   It is fine to enable the author to opt out of making the shadow
   content selectable, but the default should be that it is
 selectable.
  
   I don't think that's what the spec currently says.  The way I
 interpret it,
   selection should work as if it's in a separate iframe. So the
 text will be
   selectable but the selection can only extend within the shadow
 DOM inside
   the shadow DOM, and selection will treat its shadow host as an
 atomic unit
   outside of it.
  
   Sounds like we should change the spec. Unless we have a good
 reason to
   treat selection as atomic?
  
   One good reason is that the editing specification needs to be aware
 of shadow DOM and various operations such as deletion and copy needs to
 take care of the shadow DOM boundaries.  e.g. what should UA copy if
 selection ends points are in two

Re: Shadow DOM and Fallback contents for images

2013-10-15 Thread Hajime Morrita
The text in shadows is selectable, and it behaves kind of like iframes, as
Ryosuke mentioned.

I'm not sure if these are clear from the current draft, but the points are:

- Users can select a part of the shadow tree.

- If the selected range is in a shadow tree, it isn't accessible (in an API
sense) outside the shadow. For code outside the shadow, that is,
document.getSelection(), the selection is represented as a range which
selects the shadow host. Inside the shadow, that is, ShadowRoot.getSelection(),
the selection points to the node inside the shadow. This is how encapsulation
works with Shadow DOM.
  This is just about the API. The UA knows what part of the text is selected,
so things like clipboards should work as expected.

- The selection cannot cross the shadow boundary. Both ends of the
selection should be in the same tree. I think this restriction is mainly for
simplicity.

Does this make sense?




On Mon, Oct 14, 2013 at 1:19 PM, Jonas Sicking jo...@sicking.cc wrote:

 On Sun, Oct 13, 2013 at 9:11 PM, Ryosuke Niwa rn...@apple.com wrote:
  On Oct 11, 2013, at 10:52 PM, Jonas Sicking jo...@sicking.cc wrote:
 
  On Fri, Oct 11, 2013 at 10:23 PM, Ryosuke Niwa rn...@apple.com wrote:
  On Oct 7, 2013, at 1:38 PM, Jonas Sicking jo...@sicking.cc wrote:
 
  On Oct 7, 2013 6:56 AM, Hajime Morrita morr...@google.com wrote:
 
  Hi,
 
  I'm sorry that I didn't notice that you were talking about UA shadow
 DOM.
  It's an implementation detail and the standard won't care about that.
 
  That being said, it might be a good exercise to think about
 feasibility to
  implement img-like custom element which supports alternate text.
 
  1. Selections – Which the specification is clear. It will not allow
 the
  selection range cannot span across different node trees. This again
 would
  hinder seamless experience with selection.
 
 
  Is this needed to implement alt text? Try this on Firefox:
  http://jsfiddle.net/8gkAV/2/ .
  The caret just skips the alt-text.
 
  I think we generally consider that a bug.
 
  I think this is a desirable behavior since the img element is
 atomic.  I
  don't think we want to let the user start editing the alt text since
 the
  user can't stylize the alt anyway.
 
  Note that this part is about selection, not editing. I don't see any
  reason to treat the alt text different from any other text. I.e. that
  the user can select character-by-character by default, but that this
  can be overridden by the author if desired.
 
  If I'm not mistaken, how alternative text is presented is up to UA
 vendors.  Given that, I don't think we should mandate one way or another
 with respect to this behavior.
 
  A more important question is what happens to selection inside a shadow
 DOM created by the author.
 
  2. Editing – The spec says that the contenteditable attribute should
 not
  be propagated from the shadow host to the shadow root. Does this
 mean that
  and Shadow DOM cannot participate in editing? This I find is
 limiting to use
  shadow DOM to represent fallback content
 
  This is same as 1) above. The caret skips the alt-text.
 
  I think these are desirable behavior because, if Shadow DOM is
 editable in
  @contentEditable subtree by default, the component author (who added
 the
  shadow DOM) has to make the element definition ready for editing.
 
  Selection and editing are related but different.
 
  Text displayed on screen should by default always be selectable. The
 fact
  that it isn't in canvas for example is a bad thing.
 
  It is fine to enable the author to opt out of making the selection
  selectable, but the default should be that it is selectable
 
  Ugh, my text here is gibberish :)
 
  I think I intended to say:
 
  It is fine to enable the author to opt out of making the shadow
  content selectable, but the default should be that it is selectable.
 
  I don't think that's what the spec currently says.  The way I
 interpret it,
  selection should work as if it's in a separate iframe. So the text
 will be
  selectable but the selection can only extend within the shadow DOM
 inside
  the shadow DOM, and selection will treat its shadow host as an
 atomic unit
  outside of it.
 
  Sounds like we should change the spec. Unless we have a good reason to
  treat selection as atomic?
 
  One good reason is that the editing specification needs to be aware of
 shadow DOM and various operations such as deletion and copy needs to take
 care of the shadow DOM boundaries.  e.g. what should UA copy if selection
 ends points are in two different shadow trees.
 
  Requiring each selection end to be in the same shadow tree solves this
 problem.

 Again, most selections are not related to editing.

 I think that if we made all text inside shadow content only selectable
 as a whole, that would make shadow content a second class citizen in
 the page. The result would be that authors actively avoid putting text
 inside shadow content since the user experience would be so
 surprising.

 As a user, I'm always annoyed

Re: Shadow DOM and Fallback contents for images

2013-10-07 Thread Hajime Morrita
Hi,

I'm sorry that I didn't notice that you were talking about UA shadow DOM.
It's an implementation detail and the standard won't care about that.

That being said, it might be a good exercise to think about the feasibility
of implementing an img-like custom element which supports alternate text.

1. Selections – Which the specification is clear. It will not allow the
 selection range cannot span across different node trees. This again would
 hinder seamless experience with selection.


Is this needed to implement alt text? Try this on Firefox:
http://jsfiddle.net/8gkAV/2/ .
The caret just skips the alt-text.

 2. Editing – The spec says that the contenteditable attribute should not
 be propagated from the shadow host to the shadow root. Does this mean that
 and Shadow DOM cannot participate in editing? This I find is limiting to
 use shadow DOM to represent fallback content

This is same as 1) above. The caret skips the alt-text.

I think these are desirable behaviors because, if the Shadow DOM is editable
in a @contentEditable subtree by default, the component author (who added the
shadow DOM) has to make the element definition ready for editing.
Otherwise, the editing breaks invariants of the element. In this x-img
case, the author needs to update the @alt attribute for each Shadow DOM
modification. This is a big burden for authors. Generally speaking, seamless
editing will easily break the Shadow DOM encapsulation.

In an ideal world, each custom element can optionally provide its own way
to edit it. For example, x-img might want to have a draggable handle to
change the image size. But such a facility would be complex and is far beyond
the current spec.


  

 3. Accessibility – Since we will be changing the final composition of the
 DOM, should accessibility be affected by it?

This should just work. And I believe it works on Blink. It implements
accessibility in terms of the rendering tree, where the shadow DOM disappears.

-- 
morrita


webcomponents: import instead of link

2013-05-14 Thread Hajime Morrita
Just after I started prototyping HTML Imports on Blink, this idea came to my
mind: why not have an <import> element for HTML Imports?

The good old <link> has its own semantics which allow users to change its
attributes dynamically. For example, you can change @href to load other
stylesheets. @type can be dynamically changed as well.

In contrast, importing an HTML document is a one-shot action. We don't allow
updating @href to load another HTML document. (We couldn't do that anyway,
since there is no way to de-register custom elements.) This difference will
puzzle page authors.

And an implementer (me) is also having trouble... The current link
implementation is all about dynamic attribute changes; disabling its
dynamic nature only for @rel=import seems tricky.

Well, yes, I could just refactor the code. But this complication implies that
making it interoperable will be a headache. There will be many hidden
assumptions which come from the underlying link implementation. For example,
what happens if we update @rel from import to style after the element has
imported a document, or vice versa? We would need to clarify all these cases
if we chose link as our vehicle. That seems a burden to me.

Using a new element like import doesn't have such issues: we can just
define what we need. Also,
we'll be able to introduce import-specific attributes like @defer, @async,
or even something like @sandbox without polluting the link vocabulary.

One downside is that we'd lose the familiarity of link. But needing an
indirection like the Import interface smells like we're just abusing link.

What do you think? Is this a reasonable change, or am I just restarting
something that was discussed before?

-- 
morrita


Re: webcomponents: import instead of link

2013-05-14 Thread Hajime Morrita
Thanks for your feedback, folks.

I presumed that link rel=import is one-shot just because that is how the
element works and I felt both were analogous, but apparently this is not a
common perception.

It seems that making link rel=import dynamically updatable isn't that
controversial. I'll try that approach.
Filed https://www.w3.org/Bugs/Public/show_bug.cgi?id=20683


Re: [webcomponents]: Invocation order of custom element readyCallback

2013-03-23 Thread Hajime Morrita
On Sat, Mar 23, 2013 at 3:25 AM, Dimitri Glazkov dglaz...@chromium.orgwrote:

 On Fri, Mar 22, 2013 at 9:35 AM, Scott Miles sjmi...@google.com wrote:
  In our work, we adopt a composition rule that a node knows about its own
  children and can have expectations of them, but can make no assumptions
  about its parent or siblings. As a coding model we've found it to be
  deterministic enough to build sane constructions. For example, you can use
  methods/properties on your children, but should only fire events upward.


Yup. My assumption was that for a custom element, its children matter more
than its parent or siblings.


  Therefore, our polyfills have an invariant that custom elements are
 prepared
  bottom-up (i.e. my children are all upgraded before I am).

 Intuitively, I agree with Scott. Which means that the callback queue
 is LIFO, right?


Ah right. This sounds simple enough to be explained... and to be
implemented :-)
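A minimal sketch of that LIFO queue in plain JavaScript (no DOM involved; the
names here are illustrative, not the spec's): elements are enqueued top-down
as a parser would create them, and draining the queue last-in-first-out
upgrades children before their parents.

```javascript
// Illustrative LIFO callback queue (not the spec algorithm).
const queue = [];
const upgraded = [];

// The "parser" records elements in creation order: parent, then child.
function createElement(name) {
  const el = { name, readyCallback() { upgraded.push(name); } };
  queue.push(el);
  return el;
}

// Draining LIFO yields bottom-up order: my children are all upgraded
// before I am.
function drainQueue() {
  while (queue.length) queue.pop().readyCallback();
}

createElement('x-parent');
createElement('x-child');
drainQueue();
// upgraded is now ['x-child', 'x-parent']
```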


 :DG




-- 
morrita


[webcomponents]: Invocation order of custom element readyCallback

2013-03-22 Thread Hajime Morrita
Hi folks,

I'm implementing the readyCallback of custom elements in WebKit.
I'm following what MutationObserver does, with a small modification: making
such calls as lazily as possible, and doing them in batches. After some
trial, I started wondering what the most desirable invocation order for such
delayed callbacks is.

For example, think about following snippet:

div.innerHTML = "<x-parent><x-child></x-child></x-parent>";

In our current implementation, the readyCallbacks of x-parent and x-child
are called (1) just before returning from the native innerHTML implementation
to the JS world, instead of (2) exactly when the C++ backing objects are
created.

We don't want (2): running scripts while parsing is a well-known disaster
pattern.

However, (1) seems to have its own problem:

What if we (A) call x-parent first and x-child next? Does it work? This
looks straightforward and matches the order of native constructor
invocations.

One problem here is that the x-parent constructor can see the
uninitialized, or un-readified, x-child, because x-child has already been
inserted into x-parent. This breaks the illusion that readyCallback is
called just after the element is created.
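This hazard can be simulated without any DOM (plain objects stand in for
elements; all names here are illustrative): with parent-first ordering,
x-parent's callback observes a child that already exists in the tree but is
not yet readified.

```javascript
// Simulate ordering (A): the parent's callback runs before the child's.
function makeElement(name) {
  return { name, ready: false, children: [] };
}

const parent = makeElement('x-parent');
const child = makeElement('x-child');
parent.children.push(child); // child is already inserted when callbacks run

const observed = [];
for (const el of [parent, child]) {      // ordering (A): parent first
  if (el === parent) {
    // The parent's callback can see its un-readified child:
    observed.push(el.children[0].ready); // false
  }
  el.ready = true;                       // the element's readyCallback
}
// observed[0] === false: the "called just after creation" illusion breaks
```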

So what about (B): call x-child first and x-parent second? This solves the
un-readified x-child problem.

But is this what we want? To me, it seems that readyCallback with (B)
unintentionally becomes more powerful than originally expected: it gets to be
more than just a constructor alternative, and this fact could easily be
abused. Also, the order is no longer simple enough to be explained in a few
lines.

For implementation simplicity, I prefer to pick (1) and to ask people not
to assume the children are available in readyCallback. This might make
sense; when an element is created through createElement(), we don't have
any children in readyCallback anyway. But I'm not super confident about
this.

So I would like to hear your opinion.

-- 
morrita


Re: [webcomponents]: Moving custom element callbacks to prototype/instance

2013-03-06 Thread Hajime Morrita
From an implementor's perspective:

Putting such callbacks on the prototype object means that they can be
changed, or even added after the element's registration. This is different
from a function parameter, which we can consider a snapshot.

One consequence is that it becomes harder to cache (including negative
caching) these values. We would need to traverse the prototype chain in
C++, which is typically slower than doing it in JS, on every lifecycle
event. Or we would need to invent something cool to make it fast.
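The difference can be sketched in plain JavaScript (no engine internals; the
names are illustrative). A callback passed as a registration parameter is a
snapshot, while one read from the prototype must be looked up on every
lifecycle event, because it may be added or replaced after registration:

```javascript
// Registration-time snapshot vs. call-time prototype lookup.
const proto = {};                     // prototype given at registration
const snapshot = proto.readyCallback; // snapshot taken early: undefined

// The author adds the callback *after* registration:
proto.readyCallback = function () { return 'added later'; };

// What the engine must do per lifecycle event: Get(), then Call().
function invokeLifecycle(instance) {
  const cb = Object.getPrototypeOf(instance).readyCallback;        // Get()
  return typeof cb === 'function' ? cb.call(instance) : undefined; // Call()
}

const el = Object.create(proto);
invokeLifecycle(el); // 'added later' -- the call-time lookup sees the change
snapshot;            // undefined -- a value cached at registration misses it
```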




On Thu, Mar 7, 2013 at 7:29 AM, Dimitri Glazkov dglaz...@google.com wrote:

 On Wed, Mar 6, 2013 at 2:20 PM, Scott Miles sjmi...@google.com wrote:

  That's the ultimate goal IMO, and when I channel Alex Russell (without
  permission). =P

 Don't we already have Fake Alex for that (
 https://twitter.com/FakeAlexRussell)?

 :DG




-- 
morrita


Re: [webcomponents]: Moving custom element callbacks to prototype/instance

2013-03-06 Thread Hajime Morrita
On Thu, Mar 7, 2013 at 9:13 AM, Erik Arvidsson a...@chromium.org wrote:

 Inline

 On Wed, Mar 6, 2013 at 7:03 PM, Hajime Morrita morr...@google.com wrote:

 One consequence is that it will become harder to cache (including
 negative cache) these values. We need to traverse the prototype chain in
 C++, which is typically slower than doing it in JS, on every lifecycle
 event. Or we need to invent something cool to make it fast.


 There is no reason to walk the prototype chain from C++ (speaking from
 WebCore+V8/JS experience). You can invoke the method using the V8/JSC APIs.


Right. We can just Get() and Call() it. The possible saving from caching
these functions on the C++ side is then only the Get() call.



 --
 erik





-- 
morrita


Re: HTML element content models vs. components

2011-09-28 Thread Hajime Morrita
On Wed, Sep 28, 2011 at 3:39 PM, Roland Steiner
rolandstei...@chromium.orgwrote:

 Expanding on the general web component discussion, one area that hasn't
 been touched on AFAIK is how components fit within the content model of HTML
 elements.
 Take for example a list (
 http://www.whatwg.org/specs/web-apps/current-work/multipage/grouping-content.html#the-ul-element
 ):

 ol and ul have Zero or more li elements as content model, while
 li is specified to only be usable within ol, ul and menu.

 Now it is not inconceivable that someone would like to create a component
 x-li that acts as a list item, but expands on it. In order to allow this,
 the content model for ol, ul, menu would need to be changed to
 accommodate this. I can see this happening in a few ways:


In my understanding, this is something that styling should handle.
If the element (in the component case, the host element) is styled as
display: list-item,
it can behave as a part of the list.

In general, we can interpret the rendering result by thinking about the
flattened DOM, in XBL terms.
After flattening, there is (should be) nothing special about component
rendering.

Does this make sense?
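For example (a sketch only; x-li is the hypothetical component from the
question above), the host element needs nothing more than list-item styling
to take part in list rendering:

```css
/* Style the hypothetical component's host as a list item; after
   flattening, its box is an ordinary list item as far as rendering
   is concerned. */
x-li {
  display: list-item;
}
```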




 A.) allow elements derived from a certain element to always take their
 place within element content models.

 In this case, only components whose host element is derived from li would
 be allowed within ol, ul, menu, whether or not it is rendered (q.v.
 the Should the shadow host element be rendered? thread on this ML).


 B.) allow all components within all elements.

 While quite broad, this may be necessary in case the host element isn't
 rendered and perhaps derivation isn't used. Presumably the shadow DOM in
 this case contains one - or even several - li elements as topmost elements
 in the tree.


 C.) Just don't allow components to be used in places that have a special
 content model.


 Thoughts?

 - Roland




-- 
morrita