Re: WebApp installation via the browser

2014-06-02 Thread Jonas Sicking
On Fri, May 30, 2014 at 5:40 PM, Jeffrey Walton  wrote:
> Are there any platforms providing the feature? Has the feature gained
> any traction among the platform vendors?

The webapps platform that we use in FirefoxOS and Firefox Desktop
allows any website to be an app store. I *think*, though I'm not 100%
sure, that this works in Firefox for Android as well.

I'm not sure what you mean by "side loaded", but we're definitely
trying to allow normal websites to provide the same experience as the
firefox marketplace. The user doesn't have to turn on any "developer
mode" or otherwise do anything otherwise "special" to use such a
marketplace. The user simply needs to browse to the website/webstore
and start using it.

The manifest spec that is being developed in this WG is the first step
towards standardizing the same capability set. It doesn't yet have the
concept of an "app store", instead any website can self-host itself as
an app.
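
For illustration, a manifest in the direction the spec is heading could look
something like the sketch below (field names are indicative only; the exact
vocabulary is still being worked out in the WG):

  {
    "name": "Example App",
    "short_name": "Example",
    "start_url": "/index.html",
    "display": "standalone",
    "icons": [
      { "src": "/icon-128.png", "sizes": "128x128" }
    ]
  }

A site points at it from its pages, and a browser (or a site acting as a
store) can then offer to "install" the app it describes.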

It's not clear to me if there's interest from other browser vendors
for allowing websites to act as app stores, for now we're focusing the
standard on simpler use cases.

/ Jonas



Re: Data URL Origin (Was: Blob URL Origin)

2014-06-02 Thread Anne van Kesteren
On Fri, May 30, 2014 at 2:07 AM, Jonas Sicking  wrote:
> On Thu, May 29, 2014 at 9:21 AM, Anne van Kesteren  wrote:
>> Given that workers execute script in a fairly contained way, it might be 
>> okay?
>
> Worker scripts aren't going to be very contained as we add more APIs
> to workers. They can already read any data from the server (through
> XHR) and much local data (through IDB).
>
> I'd definitely want them not to inherit the origin, the question is if
> that's web compatible at this point. Maybe we can allow them to
> execute but as a sandboxed origin?

Good point. We'll have to investigate how much we can do there. I
followed up on the WHATWG list with regards to aligning Fetch and HTML
with the new policy. I also filed a bug on Gecko.

*  http://lists.w3.org/Archives/Public/public-whatwg-archive/2014Jun/0002.html
* https://bugzilla.mozilla.org/show_bug.cgi?id=1018872
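
For concreteness, the case under discussion is a worker whose script comes
from a data: URL, along these lines (sketch only; whether it executes at all,
and as which origin, is exactly the open question):

  // Created by a page on, say, https://example.org
  var code = 'postMessage("running");';
  var worker = new Worker('data:application/javascript,' +
                          encodeURIComponent(code));
  worker.onmessage = function (e) {
    // Under the new policy: does this script run as https://example.org,
    // or in a unique, sandboxed origin?
    console.log(e.data);
  };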


-- 
http://annevankesteren.nl/



HTML imports: new XSS hole?

2014-06-02 Thread Anne van Kesteren
How big of a problem is it that we're making <link> as dangerous as <script>?

Re: WebApp installation via the browser

2014-06-02 Thread David Rajchenbach-Teller
On 02/06/14 11:06, Jonas Sicking wrote:
> On Fri, May 30, 2014 at 5:40 PM, Jeffrey Walton  wrote:
>> Are there any platforms providing the feature? Has the feature gained
>> any traction among the platform vendors?
> 
> The webapps platform that we use in FirefoxOS and Firefox Desktop
> allows any website to be an app store. I *think*, though I'm not 100%
> sure, that this works in Firefox for Android as well.

I confirm that it works on Android.

Best regards,
 David

-- 
David Rajchenbach-Teller, PhD
 Performance Team, Mozilla





Re: Fetch API

2014-06-02 Thread Anne van Kesteren
On Thu, May 29, 2014 at 4:25 PM, Takeshi Yoshino  wrote:
> http://fetch.spec.whatwg.org/#dom-request
> Add steps to set client and context?

That happens as part of the "restricted copy". However, that might
still change around a bit.


> http://fetch.spec.whatwg.org/#cors-preflight-fetch-0
> Add steps to set client and context?

That's an internal algorithm never directly used. You can only get
there from http://fetch.spec.whatwg.org/#concept-fetch and that can
only be reached through an API such as fetch().


> http://fetch.spec.whatwg.org/#concept-legacy-fetch
> http://fetch.spec.whatwg.org/#concept-legacy-potentially-cors-enabled-fetch
> Steps to set url, client and context are missing here too. But not worth
> updating this section anymore?

Yeah. I need to work with Ian at some point to rework HTML.


> Termination reason is not handled intentionally (only supposed to be used by
> XHR's functionalities and nothing would be introduced for Fetch API?)?

It's a bit unclear yet how we are supposed to deal with that.


> The promise is rejected with a TypeError which seems inconsistent with XHR.
> Is this intentional?

Yes. I wanted to stick to JavaScript exceptions. However, I suspect at
some point once we have FormData integration and such there might be
quite a few dependencies on DOM in general, so maybe that is moot.
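
For what it's worth, from script this looks roughly like the following
(assuming the fetch() shape as currently drafted, which may still move around):

  fetch('https://example.com/data.json')
    .then(function (response) {
      return response.json();
    })
    .then(function (data) {
      console.log('got', data);
    })
    .catch(function (err) {
      // Network errors (including CORS failures) reject with a TypeError,
      // rather than the DOMException-style errors XHR users may expect.
      console.log(err instanceof TypeError, err.message);
    });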


-- 
http://annevankesteren.nl/



File API - Writer suspension

2014-06-02 Thread Julian Ladbury
I fail to understand why work on this API has been suspended. HTML5,
JavaScript and CSS together are becoming a natural platform of choice on
which to write portable applications. Indeed, I have just started work on
just such a project, welcoming the chance it gives to break away from
proprietary solutions.

An essential part of any mature application is the ability to share data
with other applications. As an example, JSON provides an ideal way to do
this: but to be useful, it has to be possible to save a JSON file on a
local system (internet access cannot, and should not, be taken for granted)
which can be transmitted by email or other simple means.


Re: HTML imports: new XSS hole?

2014-06-02 Thread James M Snell
So long as they're handled with the same policy and restrictions as the
script tag, it shouldn't be any worse.
On Jun 2, 2014 2:35 AM, "Anne van Kesteren"  wrote:

> How big of a problem is it that we're making <link> as dangerous as <script>?

Re: HTML imports: new XSS hole?

2014-06-02 Thread Anne van Kesteren
On Mon, Jun 2, 2014 at 2:54 PM, James M Snell  wrote:
> So long as they're handled with the same policy and restrictions as the
> script tag, it shouldn't be any worse.

Well, 

Re: HTML imports: new XSS hole?

2014-06-02 Thread Boris Zbarsky

On 6/2/14, 8:54 AM, James M Snell wrote:

So long as they're handled with the same policy and restrictions as the
script tag, it shouldn't be any worse.


It's worse for sites that have some sort of filtering on user-provided 
content but don't catch this case right now, no?


-Boris



Re: HTML imports: new XSS hole?

2014-06-02 Thread James M Snell
Yup, like I said, it shouldn't be any worse. From what I've seen with
Chrome, at the very least, import links are handled with the same CSP as
script tags, which is certainly a good thing. I suppose that if you needed
the ability to sandbox them further, just wrap them inside a sandboxed
iframe. It's a bit ugly but it works.
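
Roughly like this (a sketch; how much of the imported component still works
usefully from inside the sandbox is a separate question, and the URL is just
a placeholder):

  // Instead of importing the untrusted content directly into the host
  // page, isolate it in a sandboxed iframe.
  var frame = document.createElement('iframe');
  // No allow-same-origin: the framed content gets a unique origin and
  // its scripts cannot reach into the embedding document.
  frame.setAttribute('sandbox', 'allow-scripts');
  frame.src = 'untrusted-component.html';
  document.body.appendChild(frame);
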
On Jun 2, 2014 5:56 AM, "Anne van Kesteren"  wrote:

> On Mon, Jun 2, 2014 at 2:54 PM, James M Snell  wrote:
> > So long as they're handled with the same policy and restrictions as the
> > script tag, it shouldn't be any worse.
>
> Well, 

Re: HTML imports: new XSS hole?

2014-06-02 Thread Boris Zbarsky

On 6/2/14, 9:02 AM, James M Snell wrote:

I suppose that if you
needed the ability to sandbox them further, just wrap them inside a
sandboxed iframe.


The worry here is sites that currently have HTML filters for 
user-provided content that don't know about <link> being able to run 
scripts.  Clearly once a site knows about this they can adopt various 
mitigation strategies.  The question is whether we're creating XSS 
vulnerabilities in sites that are currently not vulnerable by adding 
this functionality.


-Boris

P.S. A correctly written whitelist filter will filter these things out. 
 Are we confident this is standard practice now?
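
(By a whitelist filter I mean, very roughly, the following kind of sketch,
shown client-side for brevity even though real filters run on the server:
anything not explicitly listed, <link> included, is dropped, so the filter
does not need to know about new dangerous elements in advance.)

  var ALLOWED = {
    'B': [], 'I': [], 'EM': [], 'STRONG': [], 'P': [], 'BR': [],
    'A': ['href']
  };

  function sanitize(dirtyHtml) {
    var doc = new DOMParser().parseFromString(dirtyHtml, 'text/html');
    var all = doc.body.querySelectorAll('*');
    for (var i = 0; i < all.length; i++) {
      var el = all[i];
      if (!doc.body.contains(el)) continue;   // already removed with an ancestor
      var allowedAttrs = ALLOWED[el.tagName];
      if (!allowedAttrs) {
        // Not whitelisted: drop it (a real filter would typically unwrap
        // the children rather than discard them wholesale).
        el.parentNode.removeChild(el);
        continue;
      }
      for (var j = el.attributes.length - 1; j >= 0; j--) {
        var name = el.attributes[j].name;
        if (allowedAttrs.indexOf(name) === -1) el.removeAttribute(name);
      }
    }
    return doc.body.innerHTML;
  }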




Re: HTML imports: new XSS hole?

2014-06-02 Thread James M Snell
Yes, that's true. Content filters are likely to miss the links themselves.
Hopefully, the imported documents themselves get filtered, but there's no
guarantee. One assumption we can possibly make is that any implementation
that knows how to follow import links ought to know that they need to be
filtered. I'm not aware of any current user agents that are not import-aware
yet automatically follow and execute link tags.
On Jun 2, 2014 6:12 AM, "Boris Zbarsky"  wrote:

> On 6/2/14, 9:02 AM, James M Snell wrote:
>
>> I suppose that if you
>> needed the ability to sandbox them further, just wrap them inside a
>> sandboxed iframe.
>>
>
> The worry here is sites that currently have HTML filters for user-provided
> content that don't know about <link> being able to run scripts.  Clearly
> once a site knows about this they can adopt various mitigation strategies.
>  The question is whether we're creating XSS vulnerabilities in sites that
> are currently not vulnerable by adding this functionality.
>
> -Boris
>
> P.S. A correctly written whitelist filter will filter these things out.
>  Are we confident this is standard practice now?
>
>


Re: HTML imports: new XSS hole?

2014-06-02 Thread Boris Zbarsky

On 6/2/14, 9:22 AM, James M Snell wrote:

Yes, that's true. Content filters are likely to miss the links
themselves. Hopefully, the imported documents themselves get filtered


By what, exactly?  I mean, CSP will apply to them, but not website 
content filters...



One assumption we can possibly make is that
any implementation that knows how to follow import links ought to know
that they need to be filtered.


Sure, but that assumes the filtering we're talking about is being done 
by the UA to start with.


-Boris



Re: HTML imports: new XSS hole?

2014-06-02 Thread James M Snell
I'm not saying it's perfect. Not by any stretch. I'm saying it shouldn't be
worse. Any impl that supports the mechanism will need to be aware of the
risk and content filters will need to evolve. Perhaps an additional
strongly worded warning in the spec would be helpful.
On Jun 2, 2014 6:43 AM, "Boris Zbarsky"  wrote:

> On 6/2/14, 9:22 AM, James M Snell wrote:
>
>> Yes, that's true. Content filters are likely to miss the links
>> themselves. Hopefully, the imported documents themselves get filtered
>>
>
> By what, exactly?  I mean, CSP will apply to them, but not website content
> filters...
>
>  One assumption we can possibly make is that
>> any implementation that knows how to follow import links ought to know
>> that they need to be filtered.
>>
>
> Sure, but that assumes the filtering we're talking about is being done by
> the UA to start with.
>
> -Boris
>


Re: HTML imports: new XSS hole?

2014-06-02 Thread Boris Zbarsky

On 6/2/14, 9:54 AM, James M Snell wrote:

I'm not saying it's perfect. Not by any stretch. I'm saying it shouldn't
be worse.


I don't understand why you think it's not worse.


and content filters will need to evolve.


And until they do, we may have vulnerable pages, right?  How is that not 
worse?


Say an OS added some new functionality that meant software running on 
that OS would be insecure unless it got patched.  Would you consider 
that acceptable?  This is a pretty similar situation.


The only thing that might make this OK is if good whitelist-based 
filters are overwhelmingly used in practice.



Perhaps an additional
strongly worded warning in the spec would be helpful.


By what mechanism would someone who created a web page a year ago see 
this warning and go update their page?


-Boris



Re: File API - Writer suspension

2014-06-02 Thread Arun Ranganathan
On Jun 1, 2014, at 1:22 PM, Julian Ladbury wrote:

> I fail to understand why work on this API has been suspended.
> 


Just to be clear, by “this API” I think you mean: 
http://dev.w3.org/2009/dap/file-system/file-writer.html



> HTML5, JavaScript and CSS together are becoming a natural platform of choice 
> on which to write portable applications. Indeed, I have just started work on 
> just such a project, welcoming the chance it gives to break away from 
> proprietary solutions.
> 
> An essential part of any mature application is the ability to share data with 
> other applications. As an example, JSON provides an ideal way to do this: but 
> to be useful, it has to be possible to save a JSON file on a local system 
> (internet access cannot, and should not, be taken for granted) which can be 
> transmitted by email or other simple means.
> 



I think this is the primary family of use cases around the FileSystem API:

http://w3c.github.io/filesystem-api/Overview.html

which is a successor specification.

A few things:

1. You can already create Blobs without using the BlobBuilder API (which has 
been deprecated), and you can already save them, but through a user prompt (the 
“File Save As” dialog).

This Specifiction discourse thread is useful for other workarounds, and a 
discussion of their shortcomings: 
http://discourse.specifiction.org/t/saving-files-api/63

Yes, this area of the platform is currently underspecified. Hopefully not for 
long though :)
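
A minimal sketch of that workaround (an object URL handed to an <a download>
click; the exact behaviour of the download attribute varies across browsers,
which is part of what is underspecified):

  var data = { hello: 'world' };
  var blob = new Blob([JSON.stringify(data, null, 2)],
                      { type: 'application/json' });
  var url = URL.createObjectURL(blob);

  var a = document.createElement('a');
  a.href = url;
  a.download = 'data.json';      // a filename hint; support varies
  document.body.appendChild(a);
  a.click();                     // triggers the save/download UI
  // revoke the object URL once the download has been kicked off
  setTimeout(function () { URL.revokeObjectURL(url); }, 1000);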

2. In the interim, you can probably use IndexedDB to address the immediate use 
case. 
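
For example, something along these lines (sketch only) keeps the JSON inside
the origin's IndexedDB store until a proper save-to-disk path exists:

  var open = indexedDB.open('app-data', 1);
  open.onupgradeneeded = function () {
    open.result.createObjectStore('documents');
  };
  open.onsuccess = function () {
    var db = open.result;
    var tx = db.transaction('documents', 'readwrite');
    tx.objectStore('documents').put({ hello: 'world' }, 'current');
    tx.oncomplete = function () { db.close(); };
  };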

— A*




[webcomponents]: Semi-regular telcon tomorrow

2014-06-02 Thread Dimitri Glazkov
We will be having our second Web Components telcon tomorrow (June 3).
If you'd like to suggest specific agenda items, please reply to this
mail.

Potential agenda items:
* Understanding the Shadow DOM theming problem, brainstorming primitives,
maybe even filing bugs (who knows!).
* Reduce the frequency of the call to once every 2 weeks (or once a month?)

Details (also on https://www.w3.org/wiki/WebComponents/):

* IRC: #webapps (irc://irc.w3.org) -- please join if you participate
in the call, since logging/minuting/attendance logistics happen there

* Phone Bridge: W3C's Zakim Bridge +1.617.761.6200; PIN = 9274#; SIP
information 

* Time and Day:
- Tuesday: San Francisco 09:00; Boston 12:00; UTC: 16:00; Paris:
18:00; Helsinki: 19:00;
- Wednesday: Tokyo: 01:00

* Duration: 30-45 min

:DG<



[Bug 25914] No definition of parsing blob's scheme data

2014-06-02 Thread bugzilla
https://www.w3.org/Bugs/Public/show_bug.cgi?id=25914

Arun  changed:

   What|Removed |Added

 Status|REOPENED|RESOLVED
 Resolution|--- |FIXED

--- Comment #6 from Arun  ---
(In reply to Anne from comment #5)
> This algorithm should operate on a parsed URL, not a fresh URL. You want to
> hand this algorithm a scheme data component.


Done! And revisited the original intent of this bug.

So:

1. Specified extracting the identifier from a fresh URL:
http://dev.w3.org/2006/webapi/FileAPI/#extractionOfIdentifierFromBlobURL 

2. Specified extracting origin from a scheme data component / identifier:
http://dev.w3.org/2006/webapi/FileAPI/#extractionOfOriginFromIdentifier

Clarified what emitting methods do.
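
For concreteness, the two extractions operate on a URL of roughly this shape
(sketch only, assuming the serialization "blob:" + origin + "/" + UUID and
that "identifier" means the full scheme data; the spec text above is what's
normative):

  // blob:https://example.com/550e8400-e29b-41d4-a716-446655440000
  function identifierFromBlobURL(blobURL) {
    return blobURL.slice('blob:'.length);          // the scheme data
  }
  function originFromIdentifier(identifier) {
    return identifier.slice(0, identifier.lastIndexOf('/'));
  }
  // originFromIdentifier(identifierFromBlobURL(
  //   'blob:https://example.com/550e8400-e29b-41d4-a716-446655440000'))
  //   === 'https://example.com'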

-- 
You are receiving this mail because:
You are on the CC list for the bug.



[Bug 25915] Cross-origin requests

2014-06-02 Thread bugzilla
https://www.w3.org/Bugs/Public/show_bug.cgi?id=25915

Arun  changed:

   What|Removed |Added

 Status|REOPENED|RESOLVED
 Resolution|--- |FIXED

--- Comment #3 from Arun  ---
Fixed!
http://dev.w3.org/2006/webapi/FileAPI/#NetworkError

-- 
You are receiving this mail because:
You are on the CC list for the bug.



Re: HTML imports: new XSS hole?

2014-06-02 Thread Giorgio Maone
On 02/06/2014 15:01, Boris Zbarsky wrote:
> On 6/2/14, 8:54 AM, James M Snell wrote:
>> So long as they're handled with the same policy and restrictions as the
>> script tag, it shouldn't be any worse.
>
> It's worse for sites that have some sort of filtering on user-provided
> content but don't catch this case right now, no?
>
> -Boris
>

I do hope any filter already blocked out <link> elements, as CSS has
been an XSS vector for a long time, courtesy of MSIE expressions and XBL
bindings.
-- G




Re: WebApp installation via the browser

2014-06-02 Thread Alex Russell
On Mon, Jun 2, 2014 at 2:06 AM, Jonas Sicking  wrote:

> On Fri, May 30, 2014 at 5:40 PM, Jeffrey Walton 
> wrote:
> > Are there any platforms providing the feature? Has the feature gained
> > any traction among the platform vendors?
>
> The webapps platform that we use in FirefoxOS and Firefox Desktop
> allows any website to be an app store. I *think*, though I'm not 100%
> sure, that this works in Firefox for Android as well.
>
> I'm not sure what you mean by "side loaded", but we're definitely
> trying to allow normal websites to provide the same experience as the
> firefox marketplace. The user doesn't have to turn on any "developer
> mode" or otherwise do anything otherwise "special" to use such a
> marketplace. The user simply needs to browse to the website/webstore
> and start using it.
>
> The manifest spec that is being developed in this WG is the first step
> towards standardizing the same capability set. It doesn't yet have the
> concept of an "app store", instead any website can self-host itself as
> an app.
>

The Chrome team is excited about this direction and is collaborating on the
manifest format in order to help make aspects of this real. In particular
we're excited to see a Service Worker entry added to the format in a future
version as well as controls for window decorations and exit extents.


> It's not clear to me if there's interest from other browser vendors
> for allowing websites to act as app stores, for now we're focusing the
> standard on simpler use cases.


I can only speak for the Chrome team, but the idea of a page as an
app-store seems less important than the concept of the page *as* an app.


Re: HTML imports: new XSS hole?

2014-06-02 Thread Boris Zbarsky

On 6/2/14, 4:21 PM, Giorgio Maone wrote:

I do hope any filter already blocked out <link> elements, as CSS has
been an XSS vector for a long time


<link> elements without "stylesheet" in rel don't load CSS, though.

Hence the worries about blacklist vs whitelist...

-Boris



RE: contentEditable=minimal

2014-06-02 Thread Ben Peters
> From: Robin Berjon [mailto:ro...@w3.org]
> 
> I think we agree at the high level but might disagree over smaller details. 
> You
> seem to want something that would roughly resemble the
> following:
> 
> BeforeSelectionChange
> {
>   direction: "forward"
> , step:      "word"
> }
> 
> whereas I would see something capturing information more along those lines:
> 
> BeforeSelectionChange
> {
>   oldRange: [startNode, startOffset, endNode, endOffset]
> , newRange: [startNode, startOffset, endNode, endOffset]
> }
> 
> I think that the latter is better because it gives the library the computed
> range that matches the operation, which as far as I can imagine is what you
> actually want to check (e.g. check that the newRange does not contain
> something unselectable, isn't outside a given boundary, etc.).
> 
> The former requires getting a lot of details right in the spec, and those 
> would
> become hard to handle at the script level. On some platforms a triple click 
> (or
> some key binding) can select the whole line. This not only means that you
> need direction: "both" but also that the script needs a notion of line that it
> has no access to (unless the Selection API grants it). What makes up a "word"
> as a step also varies a lot (e.g.
> I tend to get confused by what Office apps think a word is as it doesn't match
> the platform's idea) and there can be interesting interactions with language
> (e.g. is "passive-aggressive" one word or two? What about "co-operation"?).
> 
> But maybe you have a use case for providing the information in that way that
> I am not thinking of?

This seems like it's getting pretty close to the browser just doing the 
selection. A browser would still have to figure out what the selection should 
look like in the version you suggest. Instead, maybe each site could decide 
what it thinks is a word ("passive" or "passive-aggressive"). The line example 
is good, so maybe we should have a 'line' level selection just like the 'word' 
level?

> >> Not all of those are separate, though. Voice input is just an input
> >> (or beforeinput) that's more than one character long. There's nothing
> >> wrong with that. So is pasting (though you need cleaning up).
> >> Composition you need to handle, but I would really, really hope that
> >> the platform gives you a delete event with a range that matches what
> >> it is expected to delete rather than have you support all the
> >> modifiers (which you'll get wrong for the user as they are platform
> >> specific). As seen in the code gist I posted, given such a delete
> >> event the scripting is pretty simple.
> >
> > I agree, except that I don't know why we want paste to fire two
> > 'intention' events (paste and input). Seems like we should make it
> > clear that the intention is insert text (type, voice, whatever),
> > remove text (delete, including what text to remove), or paste (so you
> > can clean it up).
> 
> I don't think we want to fire both paste and input, but if my reading is 
> correct
> that is the case today (or expected to be — this isn't exactly an area of high
> interop).

Yes my understanding is that today you get both. I'm not arguing against that 
as the events stand today, but when we talk about 'Intention Events' as an 
abstract type with certain properties like commandName, I think you should only 
get one of those (paste or command or beforeinput), and I'm suggesting that it 
should be paste in this case.
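
To make that concrete, page-side handling of a single intention event could
look roughly like this (names are illustrative only; a 'beforeinput' event
carrying an inputType-style discriminator is one possible shape, not a
settled design, and insertSanitized is a hypothetical page helper):

  var editor = document.querySelector('[contenteditable]');
  editor.addEventListener('beforeinput', function (e) {
    switch (e.inputType) {            // illustrative discriminator
      case 'insertText':              // typing, voice input, etc.
        break;                        // let the default edit happen
      case 'insertFromPaste':         // paste: clean the payload up first
        e.preventDefault();
        insertSanitized(e);           // hypothetical helper
        break;
      case 'deleteContentBackward':   // delete, with a known target range
        break;
    }
  });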

Ben


RE: contentEditable=minimal

2014-06-02 Thread Ben Peters
Great context. Thanks! Let me ask my question another way: should 
CompositionEvents be used when there isn't a composition? Should typing 'a' 
fire CompositionEnd? If not, we still need a CommandEvent of type insertText, 
and it seems inconsistent not to fire it for all typing, doesn't it?
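
(For reference, a sketch of how the current events behave: composition events
only bracket composed input, while plain typing produces key and input events
with no composition bracketing at all, which is the inconsistency I mean.)

  var editor = document.querySelector('[contenteditable]');

  editor.addEventListener('compositionstart', function (e) {
    console.log('composition started:', e.data);
  });
  editor.addEventListener('compositionupdate', function (e) {
    console.log('composition so far:', e.data);   // e.g. "´", then "é"
  });
  editor.addEventListener('compositionend', function (e) {
    console.log('composition committed:', e.data);
  });

  // Typing a plain "a" fires keydown/keypress/input but, today,
  // no composition events at all.
  editor.addEventListener('input', function () {
    console.log('content changed');
  });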

> From: Robin Berjon [mailto:ro...@w3.org]
> 
> On 27/05/2014 01:52 , Ben Peters wrote:
> >> From: Robin Berjon [mailto:ro...@w3.org] On 23/05/2014 01:23 , Ben
> >> Peters wrote:
>  As I said I am unsure that the way in which composition events are
>  described in DOM 3 Events is perfect, but that's only because I
>  haven't used them in anger and they aren't supported much.
> >>>
> >>> My thought is that we can use CommandEvent with type="insertText".
> >>> This would be the corollary to execComamnd("insertText"), and the
> >>> data would be the ñ that is about to be inserted.
> >>
> >> But if you only get one event you can't render the composition as it
> >> is carrying out.
> >
> > I believe Composition Events are very important for IME input, but we
> > should fire CommandEvent with Insert text for all text input,
> > including IME. Are you saying we should use Composition Events even
> > for non-IME input?
> 
> I am not using an IME, and yet I could not type in French on my keyboard
> without composition.
> 
> Obviously, if I switch to Kotoeri input, I'll get composition *and* an IME
> popup. But for regular French input (in a US keyboard) I need:
> 
>é -> Alt-E, E
>è -> Alt-`, E
>à -> Alt-`, A
>ô -> Alt-I, O
>ü -> Alt-U, U
>ñ -> Alt-˜, N (for the occasional Spanish)
>(and a bunch more)
> 
> Some older apps (you pretty much can't find them anymore) used to not
> display the composition as it was ongoing and only show the text after
> composition had terminated. That was survivable but annoying, and it only
> worked because composition in Latin-script languages is pretty trivial (except
> perhaps for all you Livonian speakers out there!), but I don't think it would 
> be
> viable for more complex compositions. And even in simple cases it would
> confuse users to be typing characters with no rendering feedback.
> 
> Without composition events you can't render the ongoing composition. See
> what's going on at:
> 
> 
> https://gist.github.com/darobin/8a128f05106d0e02717b#file-twitter-html-
> L81
> 
> That is basically inserting text in a range that's decorated to be underlined 
> to
> show composition in progress. Composition updates
> *replace* the text in the range. And at the end the range is removed and
> text is inserted.
> 
> The above is for Mac, but I have distant memories of using something similar
> on Windows called the "US International Keyboard" where you could have
> apostrophes compose as accents, etc.. I don't recall how it was rendered
> though.
> 
> --
> Robin Berjon - http://berjon.com/ - @robinberjon


Re: HTML imports: new XSS hole?

2014-06-02 Thread James M Snell
Some initial informal testing shows that import links do make it through
the filters I have handy. It was quick work to write up some custom
filters, however.
On Jun 2, 2014 1:52 PM, "Boris Zbarsky"  wrote:

> On 6/2/14, 4:21 PM, Giorgio Maone wrote:
>
>> I do hope any filter already blocked out <link> elements, as CSS has
>> been an XSS vector for a long time
>>
>
> <link> elements without "stylesheet" in rel don't load CSS, though.
>
> Hence the worries about blacklist vs whitelist...
>
> -Boris
>
>


Re: HTML imports: new XSS hole?

2014-06-02 Thread Eduardo' Vela"
As with any new feature, there's the risk of introducing new security bugs
on applications that otherwise wouldn't have them. The usual argument goes
as follows:

Browser vendors have a lot of undocumented functionality, and it would be
foolish to take a blacklist approach to content filtering, since you
can't possibly know all the things the browser does.

There is a common counter-argument: in some cases, you might
certainly ask browser vendors "is doing X safe?" And they will be able
to say yes.

This is the perfect example of that.

Say you have a website, and you have a whitelist-based content filter. You
want to allow your users to run arbitrary CSS, so you HTML-sanitize their
content, and allow <link> tags. You thoroughly check that the user's browser
is the latest version of Chrome or Firefox, and even, before load, you do a
runtime check to ensure that they are up-to-date and safe.

Now, CSS in modern browsers (even Opera) is nowadays relatively safe
against JavaScript injection attacks. Sure, there are bugs every once in a
while, but browsers have been killing those features slowly and steadily.

So, this guy (let's call him Mark) comes to Blackhat and finds the "security
guy" from Firefox and the "security guy" from Chrome, and hey, why not,
even a "web security guy" from Internet Explorer. Let's call them Dave,
Chris and Jesse. And he asks them during the Microsoft party: "hey guys,
I want to make the internet more fun and allow people to run arbitrary CSS.
If I make sure to strip all PII from the document I'm injecting the CSS into,
there shouldn't be any way for the user to attack other parts of my web app,
right???". And they all look at each other, think "what is this guy doing?
and why doesn't he have a wristband?" and eventually say "you should use
seamless iframe sandboxes". And he goes home, and makes a big company based
on that promise.

Now, fast-forward to 2014. Turns out, he used iframe sandbox="allow-scripts
allow-same-origin" because he wanted to append an event handler to the
sandboxed iframe content from the outer document, and until today, that
would have been safe.. because there was no PII to leak from that site
(perhaps.. visited state for links in some browsers?). He also foolishly
assumed that since <link> tags can only *really* be used for CSS, he didn't
have to check for "rel", since, well, you know, CSS was "the worst thing
you could do" from an iframe. He knew to remove secret data from the
document, since he read the existing literature and he learnt that you can
exfiltrate data with CSS, but he saw, as mentioned online many times, that
CSS-based XSS doesn't yield JavaScript execution anymore in modern browsers.

Now, fast-forward to 2015. Some guy, let's call him Mario, documents this
feature in a website, say, html5sec.org, and another guy, let's call him
Alex, is bored one weekend, and decides to, well, go on a rampage and bring
Mark's website to its knees. Mark, baffled, can't understand what
happened. He literally followed the security advice from 3 of the top
browser vendors. WHYYY!?!?

He would be right to be upset. I think. We can't really expect Mark to know
about all obscure browser features and how the rest of the internet has to
evolve around them.

Well, turns out he is NOT right. Mark made three mistakes:
 1. He went to BlackHat seeking security advice. BlackHat isn't really the
place you go for learning about secure coding practices. Also, you
shouldn't go to a party that requires you to use a wrist band.
 2. He misused iframe@sandbox. allow-same-origin and allow-scripts probably
shouldn't be allowed together.. they make little sense (or if they are
allowed together, they should be making it clearer that all security
benefits went down the toilet).
 3. Finally, and most importantly, he designed a security feature by
himself, and decided not to be kept up to date (said in a different way, he
should be subscribed to some of these mailing lists).

Now, I'm not sure how many have tried to implement an HTML sanitizer. Even
a whitelist-based one has the following problems:
 1. You have to write a parser OR you have to use a third-party parser.
  1.1 The problem here is that writing your own is a headache, and you will
get it wrong. If you use a third-party parser, it'll most likely try to be
as lenient and flexible as possible, accepting malformed input (for
"compatibility" yay!).
 2. You have to get a serializer.
  2.1 This is way harder than the parser. Even browsers get it wrong (and
the FSM shall bless you if you need to write a serializer for CSS).
 3. You need a sane whitelist.
  3.1 And the whitelist, apparently, needs to be aware of not just
element/attribute pairs, but also attribute values... geez!
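
In other words, the whitelist ends up needing a shape roughly like this
(sketch; the rel case is exactly the one imports add):

  // Value-aware whitelist: element + attribute + allowed values,
  // not just element/attribute pairs.
  var WHITELIST = {
    'a':    { 'href': /^https?:/ },
    'img':  { 'src':  /^https?:/ },
    // Allowing <link> at all means pinning rel to known-safe values,
    // otherwise rel="import" (or rel="stylesheet") slips through.
    'link': { 'rel': /^(alternate|author)$/i, 'href': /^https?:/ }
  };

  function attributeAllowed(tag, attr, value) {
    var attrs = WHITELIST[tag.toLowerCase()];
    if (!attrs) return false;
    var pattern = attrs[attr.toLowerCase()];
    return !!pattern && pattern.test(value);
  }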

That is to say, no one really has a good HTML sanitizer. Everyone either
over-protects, or has XSS. Possibly a mixture of both. So what can poor
Mark do?

I personally think that HTML imports are a nice feature (and I mean, I'm
legitimately happy about it, they sound pretty cool), and I think they

Re: HTML imports: new XSS hole?

2014-06-02 Thread Boris Zbarsky

On 6/2/14, 11:17 PM, Eduardo' Vela"  wrote:

Now, I'm not sure how many have tried to implement an HTML sanitizer.


I've reviewed Gecko's implementation of one, if that counts...


  1. You have to write a parser OR You have to use a third-party parser.


Wasn't an issue for us obviously.


  2. You have to get a serializer.


Likewise.


  3. You need a sane whitelist.


This was a pain.


   3.1 And the whitelist, apparently, needs to be aware of not just
element/attribute pairs, but also attribute values... geez!


And this.  We actually rip out all @rel values specifically on <link> 
elements, because we in fact do not want to allow rel="stylesheet" (but 
we do allow @rel on other elements).


I agree with your general point, though, which is that writing a good 
sanitizer is pretty nontrivial.


-Boris



Re: HTML imports: new XSS hole?

2014-06-02 Thread Simon Pieters
On Mon, 02 Jun 2014 11:32:45 +0200, Anne van Kesteren   
wrote:



How big of a problem is it that we're making <link> as dangerous as <script>?