[Bug 28086] New: [Shadow] (assuming iframes should work inside shadow DOM) Should the contentWindow objects of iframes in shadow DOM show up in window.frames?

2015-02-23 Thread bugzilla
https://www.w3.org/Bugs/Public/show_bug.cgi?id=28086

Bug ID: 28086
   Summary: [Shadow] (assuming iframes should work inside shadow
DOM) Should the contentWindow objects of iframes in
shadow DOM show up in window.frames?
   Product: WebAppsWG
   Version: unspecified
  Hardware: PC
OS: Linux
Status: NEW
  Severity: normal
  Priority: P2
 Component: Component Model
  Assignee: dglaz...@chromium.org
  Reporter: b...@pettay.fi
QA Contact: public-webapps-bugzi...@w3.org
CC: m...@w3.org, public-webapps@w3.org


-- 
You are receiving this mail because:
You are on the CC list for the bug.



Re: Custom elements: synchronous constructors and cloning

2015-02-23 Thread Ryosuke Niwa

> On Feb 23, 2015, at 6:42 AM, Boris Zbarsky  wrote:
> 
> On 2/23/15 4:27 AM, Anne van Kesteren wrote:
> 
>> 1) If we run the constructor synchronously, even during cloning. If
>> the constructor did something unexpected, is that actually
>> problematic? It is not immediately clear to me what invariants we
>> might want to preserve. Possibly it's just that the code would get
>> more complicated when not self-hosted? Similar to mutation events? If
>> someone has a list of reasons in mind that would be appreciated. This
>> type of question keeps popping up.
> 
> So these are the things that come to mind offhand for me, which may or may 
> not be problems:
> 
> 1)  If cloning can have sync side-effects, then we need to either accept that 
> cloneNode can go into an infinite loop or ... I'm not sure what. And yes, 
> non-self-hosted implementation gets more complicated.
> 
> 2)  There are various non-obvious cloning operations UAs can perform right 
> now because cloning is side-effect free.  For example, when you print, 
> Gecko clones the document and then does the printing work asynchronously 
> on the cloned document instead of blocking the UI thread while the (fairly 
> long-running) print operation completes.  If cloning became observable, 
> we'd need to figure out what to do here internally (e.g. introduce a new 
> sort of cloning that doesn't run the constructors?).

It seems like this would be an issue regardless of whether callbacks are 
synchronous or not.  Even if the created callback/constructor were to run 
asynchronously, it would still be observable.

In that regard, perhaps what we need is another option (although 4 might be a 
developer-friendly superset of this):
5) Don't do anything.  Custom elements will be broken upon cloning if they 
have internal state other than attributes, just as cloning a canvas element 
loses its context.
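The canvas comparison can be modeled outside the DOM. The sketch below is purely illustrative (the FakeElement class and its fields are invented for this example and are not real DOM behavior): cloning copies attributes, while internal state is deliberately dropped.

```javascript
// Hypothetical model of option 5: cloning copies attributes only,
// so internal state (like a canvas's drawing context) is lost.
class FakeElement {
  constructor(attributes = {}) {
    this.attributes = { ...attributes }; // serializable, cloneable state
    this.internalState = null;           // e.g. a 2D context, not cloneable
  }
  cloneNode() {
    // Only attributes travel; internalState deliberately does not.
    return new FakeElement(this.attributes);
  }
}

const original = new FakeElement({ width: "300" });
original.internalState = { fillStyle: "red" }; // stands in for getContext("2d")

const copy = original.cloneNode();
console.log(copy.attributes.width);  // "300": attributes survive
console.log(copy.internalState);     // null: internal state is gone
```

This is exactly the trade-off of option 5: no constructor or callback runs during cloning, so nothing beyond attributes survives.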

- R. Niwa




Re: CORS performance

2015-02-23 Thread Jonas Sicking
On Mon, Feb 23, 2015 at 11:06 AM, Anne van Kesteren  wrote:
> On Mon, Feb 23, 2015 at 7:55 PM, Jonas Sicking  wrote:
>> A lot of websites accidentally enabled cross-origin requests with
>> cookies, not realizing that this enabled attackers to make requests
>> with side effects as well as read personal user data without the
>> user's permission.
>>
>> In short, it was very easy to misconfigure a server, and people did.
>>
>> This is why I would feel dramatically more comfortable if we only
>> enabled server-wide opt-in for credential-less requests. Those are
>> many orders of magnitude easier to make secure.
>
> Why is that not served by requiring an additional header that
> explicitly opts into that case?

I don't think an extra header is that much harder to deploy than
crossdomain.xml is. That is, I don't see strong reasons to think that
people won't misconfigure.

> That combined with requiring to list
> the explicit origin has worked well for CORS so far.

This could potentially help.

I don't remember the details of how/why people screwed up with
crossdomain.xml. But if the problem was that people hosted multiple
services on the same server and only thought of one of them when
writing a policy, then this won't really help very much.

Do we have any data on how common it is for people to use CORS with
credentials? My impression is that it's far less common than CORS
without credentials.

If that's the case then I think we'd get most of the functionality,
with essentially none of the risk, by only allowing server-wide
cookie-less preflights.

But data would help for sure.

/ Jonas



Re: CORS performance

2015-02-23 Thread Anne van Kesteren
On Mon, Feb 23, 2015 at 7:55 PM, Jonas Sicking  wrote:
> A lot of websites accidentally enabled cross-origin requests with
> cookies, not realizing that this enabled attackers to make requests
> with side effects as well as read personal user data without the
> user's permission.
>
> In short, it was very easy to misconfigure a server, and people did.
>
> This is why I would feel dramatically more comfortable if we only
> enabled server-wide opt-in for credential-less requests. Those are
> many orders of magnitude easier to make secure.

Why is that not served by requiring an additional header that
explicitly opts into that case? That combined with requiring to list
the explicit origin has worked well for CORS so far.
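Concretely, the opt-in described here might look like the following preflight response. The `Access-Control-Server-Wide` header name is invented purely for illustration (no such header exists or was proposed in this thread); the other three are real CORS response headers.

```
HTTP/1.1 200 OK
Access-Control-Allow-Origin: https://app.example.com
Access-Control-Allow-Credentials: true
Access-Control-Server-Wide: true
Access-Control-Max-Age: 86400
```

The point is that the server must both name the explicit origin (no `*`) and send a second, deliberate header before a server-wide credentialed grant takes effect, making accidental opt-in much less likely.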


-- 
https://annevankesteren.nl/



Re: CORS performance proposal

2015-02-23 Thread Jonas Sicking
On Fri, Feb 20, 2015 at 11:43 PM, Anne van Kesteren  wrote:
> On Fri, Feb 20, 2015 at 9:38 PM, Jonas Sicking  wrote:
>> On Fri, Feb 20, 2015 at 1:05 AM, Anne van Kesteren  wrote:
>>> An alternative is that we attempt to introduce
>>> Access-Control-Policy-Path again from 2008. The problems you raised
>>> https://lists.w3.org/Archives/Public/public-appformats/2008May/0037.html
>>> seem surmountable. URL parsing is defined in more detail these days
>>> and we could simply ban URLs containing escaped \ and /.
>>
>> I do remember that another issue that came up back then was that
>> servers would treat more than just '\', or the escaped version
>> thereof, as a '/': also any character whose low byte was equal to
>> the ASCII code for '\' or '/'. I.e. the server would just cut the
>> high byte when doing some internal 2-byte-string to 1-byte-string
>> conversion. Potentially this conversion is affected by what character
>> encodings the server is configured for too, but I'm less sure about
>> that.
>
> High-byte of what? A URL is within ASCII range when it reaches the
> server. This is the first time I hear of this.

I really don't remember the details. I'd recommend talking to
Microsoft, since I believe they had done the most research into this
at the time.

Keep in mind, though, that just because URL parsing is defined a
particular way doesn't mean that software implements it that way.
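The hazard under discussion can be made concrete. Suppose a hypothetical path-scoped policy like Access-Control-Policy-Path were checked against the raw request path (the policy check below is invented for illustration): the check passes, but a server that percent-decodes before routing, or that truncates wide characters to their low byte, sees a different path.

```javascript
// A naive path-prefix check of the kind a path-scoped CORS policy
// would need (illustrative only, not a real algorithm from the thread).
function allowedByPolicy(rawPath, prefix) {
  return rawPath.startsWith(prefix);
}

const raw = "/public/..%2F..%2Fprivate/secrets";
console.log(allowedByPolicy(raw, "/public/")); // true: the raw path matches

// But a server that percent-decodes before routing sees:
console.log(decodeURIComponent(raw)); // "/public/../../private/secrets"

// The low-byte issue: U+212F has char code 0x212F; a server that drops
// the high byte during 2-byte-to-1-byte conversion turns it into 0x2F,
// i.e. '/'.
const ch = String.fromCharCode(0x212f);
console.log((ch.charCodeAt(0) & 0xff) === 0x2f); // true
```

So a policy enforced on the encoded path and a server routing on the decoded (or byte-truncated) path disagree about which resource is being addressed, which is the class of bug being described.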

/ Jonas



Re: CORS performance proposal

2015-02-23 Thread Jonas Sicking
On Sat, Feb 21, 2015 at 11:18 PM, Anne van Kesteren  wrote:
> On Sat, Feb 21, 2015 at 10:17 AM, Martin Thomson
>  wrote:
>> On 21 February 2015 at 20:43, Anne van Kesteren  wrote:
>>> High-byte of what? A URL is within ASCII range when it reaches the
>>> server. This is the first time I hear of this.
>>
>> Apparently, all sorts of muck floats around the Internet.  When we did
>> HTTP/2 we were forced to accept that header field values (URLs in
>> particular) were a sequence of octets.  Those are often interpreted as
>> strings in various interesting ways.
>
> But in this particular case it must be the browser that generates said
> muck, no? Other than Internet Explorer (and that's a couple versions
> ago, so wouldn't support this protocol anyway), there's no browser
> that does this as far as I know.

All browsers support sending %xx escapes to the server. Decoding those
is still, more often than not, happening in a server-specific way,
despite specs defining how it should be done.

/ Jonas



Re: CORS performance

2015-02-23 Thread Jonas Sicking
On Mon, Feb 23, 2015 at 7:15 AM, Henri Sivonen  wrote:
> On Tue, Feb 17, 2015 at 9:31 PM, Brad Hill  wrote:
>> I think it is at least worth discussing the relative merits of using a
>> resource published under /.well-known for such use cases, vs. sending
>> "pinned" headers with every single resource.
>
> FWIW, when CORS was designed, the Flash crossdomain.xml design (which
> uses a well-known URL though not under /.well-known) already existed
> and CORS deliberately opted for a different design.
>
> It's been a while, so I don't recall what the reasons against adopting
> crossdomain.xml or something very similar to it were, but considering
> that the crossdomain.xml design was knowingly rejected, it's probably
> worthwhile to pay attention to why.

A lot of websites accidentally enabled cross-origin requests with
cookies, not realizing that this enabled attackers to make requests
with side effects as well as read personal user data without the
user's permission.

In short, it was very easy to misconfigure a server, and people did.

This is why I would feel dramatically more comfortable if we only
enabled server-wide opt-in for credential-less requests. Those are
many orders of magnitude easier to make secure.
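For context, the misconfiguration described usually took the form of a wildcard Flash policy. A crossdomain.xml like the following (the syntax is real; the file shown is a generic example, not one from a specific site) granted every origin access, and Flash sent the user's cookies along with such requests:

```xml
<?xml version="1.0"?>
<cross-domain-policy>
  <!-- Grants ALL origins cross-domain access; combined with ambient
       cookies, this exposes authenticated responses to any site. -->
  <allow-access-from domain="*"/>
</cross-domain-policy>
```

A single file like this at the server root opted in every service hosted on that origin, which is why server-wide opt-in is being argued as safe only for credential-less requests.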

/ Jonas



Re: CORS performance

2015-02-23 Thread Henri Sivonen
On Tue, Feb 17, 2015 at 9:31 PM, Brad Hill  wrote:
> I think it is at least worth discussing the relative merits of using a
> resource published under /.well-known for such use cases, vs. sending
> "pinned" headers with every single resource.

FWIW, when CORS was designed, the Flash crossdomain.xml design (which
uses a well-known URL though not under /.well-known) already existed
and CORS deliberately opted for a different design.

It's been a while, so I don't recall what the reasons against adopting
crossdomain.xml or something very similar to it were, but considering
that the crossdomain.xml design was knowingly rejected, it's probably
worthwhile to pay attention to why.

-- 
Henri Sivonen
hsivo...@hsivonen.fi
https://hsivonen.fi/



Re: Custom elements: synchronous constructors and cloning

2015-02-23 Thread Boris Zbarsky

On 2/23/15 4:27 AM, Anne van Kesteren wrote:

> 1) If we run the constructor synchronously, even during cloning. If
> the constructor did something unexpected, is that actually
> problematic? It is not immediately clear to me what invariants we
> might want to preserve. Possibly it's just that the code would get
> more complicated when not self-hosted? Similar to mutation events? If
> someone has a list of reasons in mind that would be appreciated. This
> type of question keeps popping up.


So these are the things that come to mind offhand for me, which may or 
may not be problems:


1)  If cloning can have sync side-effects, then we need to either accept 
that cloneNode can go into an infinite loop or ... I'm not sure what. 
And yes, non-self-hosted implementation gets more complicated.
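The re-entrancy hazard in point 1 can be modeled outside the DOM with plain objects. This simulation is purely illustrative (FakeCustomElement and its cloneNode are invented, not real cloneNode semantics), with a depth cap standing in for the unbounded loop:

```javascript
// Illustrative model: with synchronous constructors, cloning re-enters
// the constructor, so a constructor that clones recurses forever.
let constructorRuns = 0;

class FakeCustomElement {
  constructor() {
    constructorRuns++;
    // A badly behaved constructor that clones during construction;
    // the cap stands in for "forever" so this sketch terminates.
    if (constructorRuns < 5) {
      this.cloneNode();
    }
  }
  cloneNode() {
    // Synchronous constructor means this runs author code immediately.
    return new FakeCustomElement();
  }
}

new FakeCustomElement();
console.log(constructorRuns); // 5 (unbounded without the cap)
```

Nothing in the cloning algorithm itself would terminate this; either the platform accepts the possibility of the loop or some form of constructor-free cloning is introduced.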


2)  There are various non-obvious cloning operations UAs can perform 
right now because cloning is side-effect free.  For example, when you 
print, Gecko clones the document and then does the printing work 
asynchronously on the cloned document instead of blocking the UI thread 
while the (fairly long-running) print operation completes.  If cloning 
became observable, we'd need to figure out what to do here internally 
(e.g. introduce a new sort of cloning that doesn't run the constructors?).


3)  As you note, we'd need to figure out what to do with current clone 
consumers.  This includes not just range stuff but things built on top 
of said range stuff, or on top of cloning directly: things like editing 
functionality, for example.  Not that we have a real spec for that 
anyway.


-Boris



Custom elements: synchronous constructors and cloning

2015-02-23 Thread Anne van Kesteren
I've been continuing to explore synchronous constructors for custom
elements, as they best explain the behavior of the parser. After reading
https://speakerdeck.com/vjeux/oscon-react-architecture I thought there
might be a performance concern, but Yehuda tells me that innerHTML
being faster than DOM methods is no longer true.

Dimitri pointed out that cloning might be problematic. The
https://dom.spec.whatwg.org/#concept-node-clone operation is invoked
from numerous range operations that might not expect side effects.
Here are the alternatives to consider:

1) If we run the constructor synchronously, even during cloning. If
the constructor did something unexpected, is that actually
problematic? It is not immediately clear to me what invariants we
might want to preserve. Possibly it's just that the code would get
more complicated when not self-hosted? Similar to mutation events? If
someone has a list of reasons in mind that would be appreciated. This
type of question keeps popping up.

2) While the constructor runs, DOM methods that cause mutations throw;
they are locked. This might not work well due to shadow DOM.

3) We use structured cloning, with custom state stored behind an
Element.state symbol.

4) We use structured cloning, but for custom state a callback is
invoked with timing similar to the callbacks that are part of custom
elements today.

Both 3 and 4 seem the least invasive to the current state of play, but
it would be good to know why we want to preserve the status quo for 1
and 2.
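Option 4 can be sketched outside the DOM with plain objects. Everything here is illustrative: the class, the `clonedCallback` name, and the explicit callback queue are invented stand-ins for the structured-clone step and custom-element callback timing, not spec'd behavior.

```javascript
// Hypothetical model of option 4: cloning does a side-effect-free
// structured copy of attributes, and author code transfers custom
// state later, via a queued callback (names illustrative).
class FakeCustomElement {
  constructor(attributes = {}) {
    this.attributes = { ...attributes };
    this.customState = null;
  }
  clonedCallback(source) {
    // Author-defined hook: runs with callback timing, not during cloning.
    this.customState = { ...source.customState };
  }
}

function cloneWithCallback(source, callbackQueue) {
  const copy = new FakeCustomElement(source.attributes); // structured part
  callbackQueue.push(() => copy.clonedCallback(source)); // deferred part
  return copy;
}

const queue = [];
const original = new FakeCustomElement({ id: "a" });
original.customState = { count: 42 };

const copy = cloneWithCallback(original, queue);
console.log(copy.customState);       // null: no author code ran during cloning
queue.forEach(cb => cb());           // drain the queued callbacks
console.log(copy.customState.count); // 42
```

The cloning step itself stays side-effect free (range operations and print-style cloning remain safe), while custom state still gets transferred once the callbacks run.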


-- 
https://annevankesteren.nl/