Re: CSRF Module (was Re: Comments on the Content Security Policy specification)

2009-10-28 Thread Gervase Markham
On 28/10/09 16:23, Gervase Markham wrote:
> On 27/10/09 09:33, Adam Barth wrote:
>> My technical argument is as follows.  I think that CSP would be better
>> off with a policy language where each directive was purely subtractive
>> because that design would have a number of simplifying effects:
> 
> CSP's precursor, Content Restrictions
> http://www.gerv.net/security/content-restrictions/
> was designed to be purely subtractive, for many of the technical reasons
> you state. And I do continue to think that it's a better choice.

Having said that, it doesn't preclude the very presence of the header
implying some restrictions. It just means that if the presence of the
header implies some restrictions, you shouldn't be able to remove those
restrictions by adding tokens to the header.

Gerv
___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


Re: CSRF Module (was Re: Comments on the Content Security Policy specification)

2009-10-28 Thread Gervase Markham
On 27/10/09 09:33, Adam Barth wrote:
> My technical argument is as follows.  I think that CSP would be better
> off with a policy language where each directive was purely subtractive
> because that design would have a number of simplifying effects:

CSP's precursor, Content Restrictions
http://www.gerv.net/security/content-restrictions/
was designed to be purely subtractive, for many of the technical reasons
you state. And I do continue to think that it's a better choice.


Why write the spec in terms of "restrictions" rather than "capabilities"?

Backwards-compatibility. Current user agents are fully capable. Any
restrictions we can place on content to possibly mitigate XSS is
therefore a bonus. Also, if it were in terms of capabilities, you might
require UI if the capabilities the page wanted conflicted with the
desires of the user. This is a UI-free specification, which is a feature.


Gerv

___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


Re: CSRF Module (was Re: Comments on the Content Security Policy specification)

2009-10-27 Thread Brandon Sterne

On 10/27/2009 02:33 AM, Adam Barth wrote:

My technical argument is as follows.  I think that CSP would be better
off with a policy language where each directive was purely subtractive
because that design would have a number of simplifying effects:


I couldn't find a comment that summarizes the model you are proposing so 
I'll try to recreate your position from memory of our last phone 
conversation.  Please correct me where I'm wrong.


I believe you advocate a model where a site specifies the directives it 
knows/cares about, and everything else is allowed.  This model would 
make the default "allow" directive unnecessary.  The main idea is to 
allow sites to restrict the things it knows about and not have to worry 
about inadvertently blocking things it doesn't consider a risk.


My main objection to this approach is that it turns the whitelist 
approach we started with into a hybrid whitelist/blacklist.  The 
proposal doesn't support the simple use case of a site saying:
"I only want the following things (e.g. script and images from myself). 
 Disallow everything else."


Under your proposal, this site needs to explicitly opt-out of every 
directive, including any new directives that get added in the future. 
We're essentially forcing sites to maintain an exhaustive blacklist for 
all time in order to avoid us (browsers) accidentally blocking things in 
the future that the site forgot to whitelist.



1) Forward and backward compatibility.  As long as sites did not use
the features blocked by their CSP directives, their sites would
function correctly in partial / future implementations of CSP.


Under your proposed model, a site will continue to "function correctly" 
only in the sense that nothing will be blocked in newer implementations 
of CSP that wouldn't also have been blocked in a legacy implementation. 
 From my perspective, the blocking occurs when something unexpected by 
the site was included in the page.  In our model, the newer 
implementation, while potentially creating an inconsistency with the 
older version, has also potentially blocked an attack.


Are you suggesting that a blocked resource is more likely to have come 
from a web developer who forgot to update the CSP when s/he added new 
content than it is to have been injected by an attacker?  This seems 
like a dangerous assumption.  All we are getting, in this case, is 
better consistency in behavior from CSP 
implementation-to-implementation, but not better security.



2) Modularity.  We would be free to group the directives into whatever
modules we liked because there would be no technical interdependence.


I actually don't see how opt-in vs. opt-out has any bearing at all on 
module interdependence.  Maybe you can provide an example?


Let's also not forget that CSP modularity really only helps browser 
vendors.  From the perspective of websites, CSP modules are just one 
more thing that they have to keep track of in terms of which browsers 
support which modules.  I support the idea of making it easier for other 
browser vendors to implement CSP piecemeal, but our primary motivation 
should remain making the lives of websites and their users better.



3) Trivial Combination.  Instead of the current elaborate algorithm
for combining policies, we could simply concatenate the directives.
An attacker who could inject a Content-Security-Policy header could
then only further reduce his/her privileges.


In the case of an injected header, this is already the case now.  We 
intersect both policy sets, resulting in a combined policy more 
restrictive than either of the two separate policies.


If we are talking about an attacker who can inject an additional 
directive into an existing CSP header then, yes, the attacker could 
"relax" the policy intended to be set by the site.  I'm not sure how 
much I care about this case.



4) Syntactic Simplicity.  Instead of two combination operators, ";"
for union and "," for intersection, we could simply use "," and match
standard HTTP header syntax.


Okay, sure.


Balancing against these pros, the con seem to be that we hope the
additive, opt-out syntax will prod web developers into realizing that
adding "script-src inline" to the tutorial code they copy-and-paste is
more dangerous than removing "block-xss".


Those seem equivalent to me, so I'm not sure which model your example 
favors.


In general, I'm slightly skeptical of the view that we need to base our 
design around the fact that admins will copy-paste from tutorials. 
Sure, this will happen in practice, but what is the probability that 
such a site is a high value target for an attacker, and by extension how 
important is it that such a site gets CSP right?  Remember, a site 
cannot make their security profile any worse with CSP than without it.


I do want CSP to be easy to get right.  I should do some homework and 
collect some stats on real world websites to support the following 
claim, but I still maintain that a HUGE number 

Re: CSRF Module (was Re: Comments on the Content Security Policy specification)

2009-10-27 Thread Adam Barth
On Tue, Oct 27, 2009 at 12:39 PM, Daniel Veditz  wrote:
> I don't think we're having a technical argument, and we're not getting
> the feedback we need to break the impasse in this limited forum.

I agree that we're not making progress in this discussion.

At a high level, the approach of letting sites to restrict the
privileges of their own content is a rich space for security
mechanisms.  My opinion is that the current CSP design is overly
complex for the use cases it supports and insufficiently flexible as a
platform for addressing future use cases.  If I find the time, I'll
send along a full design that tries to improve these aspects along the
lines I've suggested in the foregoing discussion.

Adam
___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


Re: CSRF Module (was Re: Comments on the Content Security Policy specification)

2009-10-27 Thread Daniel Veditz
On 10/27/09 2:33 AM, Adam Barth wrote:
> I understand the seductive power of "secure-by-default" here.

If only she loved me back.

> This statement basically forecloses further discussion because it does
> not advance a technical argument that I can respond to.  In this
> forum, you are the king and I am but a guest.

I don't think we're having a technical argument, and we're not getting
the feedback we need to break the impasse in this limited forum. Either
syntax can be made to express the same set of current restrictions.
You're arguing for extensible syntax, and I'm arguing for what will best
encourage the most web authors to "do the right thing".

An argument about whether your syntax is or is not more extensible can
at least be made on technical merits, but what I really want is feedback
from potential web app authors about which approach is more intuitive
and useful to them. Those folks aren't here, and I don't know how to
reach them.

At a technical level your approach appears to be a blacklist. If I'm
understanding you correctly, if there's an empty CSP header then there's
no restriction whatsoever on the page. In our version it'd be a
locked-down page with a default inability to load source from anywhere.
If the web author has left something out they will know because the page
will not work. I'd rather have that than a web author thinking they're
safe when CSP isn't actually turned on for their page.

The bottom line, though, is I'm in favor of anything that gets more web
sites and more browsers to support the concept.

-Dan
___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


Re: CSRF Module (was Re: Comments on the Content Security Policy specification)

2009-10-27 Thread Devdatta
Hi

There are two threads running in parallel here:

1) Should blocking XSS be default behaviour of adding a
X-Content-Security-Policy? (instead of the straw man proposal where a
additional 'block-xss' would be required )
2) Should the result of blocking XSS also cause eval and inline
scripts to be disabled?

If 1 is the case, then blocking eval and inline scripts by default is
imho unacceptable. The reasons are the same as Adam succinctly pointed
out in his ' Forward and backward compatibility ' bullet in the
previous mail.

But if to enable XSS protection, the user types in block-xss, then I
think Brandon argument makes sense. block-xss should block XSS , which
requires us to disable eval and inline scripts. But if for
compatibility the user wants to continue supporting them , he should
explicity add support for them with say 'allow-eval'. With a
block-eval directive, the correct policy would always be 'block-xss
block-eval' which doesn't make sense to me if we are hoping that eval
support would just be a stop gap while the web admins figure out how
to get by without it.


Regards
Devdatta

2009/10/27 Adam Barth :
> On Mon, Oct 26, 2009 at 6:11 PM, Daniel Veditz  wrote:
>> They have already opted in by adding the CSP header. Once they've
>> opted-in to our web-as-we-wish-it-were they have to opt-out of the
>> restrictions that are too onerous for their site.
>
> I understand the seductive power of "secure-by-default" here.  It's
> important to understand what we're giving up in terms of complexity
> and extensibility.
>
>> We feel
>> extraordinarily strongly that sites should have to explicitly say they
>> want to run inline-script, like signing a waiver that you're going
>> against medical advice. The only thing that is likely to deter us is
>> releasing a test implementation and then crashing and burning while
>> trying to implement a reasonable test site like AMO or MDC or the
>> experiences of other web developers doing the same.
>
> This statement basically forecloses further discussion because it does
> not advance a technical argument that I can respond to.  In this
> forum, you are the king and I am but a guest.
>
> My technical argument is as follows.  I think that CSP would be better
> off with a policy language where each directive was purely subtractive
> because that design would have a number of simplifying effects:
>
> 1) Forward and backward compatibility.  As long as sites did not use
> the features blocked by their CSP directives, their sites would
> function correctly in partial / future implementations of CSP.
>
> 2) Modularity.  We would be free to group the directives into whatever
> modules we liked because there would be no technical interdependence.
>
> 3) Trivial Combination.  Instead of the current elaborate algorithm
> for combining policies, we could simply concatenate the directives.
> An attacker who could inject a Content-Security-Policy header could
> then only further reduce his/her privileges.
>
> 4) Syntactic Simplicity.  Instead of two combination operators, ";"
> for union and "," for intersection, we could simply use "," and match
> standard HTTP header syntax.
>
> Balancing against these pros, the con seem to be that we hope the
> additive, opt-out syntax will prod web developers into realizing that
> adding "script-src inline" to the tutorial code they copy-and-paste is
> more dangerous than removing "block-xss".
>
> Adam
> ___
> dev-security mailing list
> dev-security@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-security
>
___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


Re: CSRF Module (was Re: Comments on the Content Security Policy specification)

2009-10-27 Thread Adam Barth
On Mon, Oct 26, 2009 at 6:11 PM, Daniel Veditz  wrote:
> They have already opted in by adding the CSP header. Once they've
> opted-in to our web-as-we-wish-it-were they have to opt-out of the
> restrictions that are too onerous for their site.

I understand the seductive power of "secure-by-default" here.  It's
important to understand what we're giving up in terms of complexity
and extensibility.

> We feel
> extraordinarily strongly that sites should have to explicitly say they
> want to run inline-script, like signing a waiver that you're going
> against medical advice. The only thing that is likely to deter us is
> releasing a test implementation and then crashing and burning while
> trying to implement a reasonable test site like AMO or MDC or the
> experiences of other web developers doing the same.

This statement basically forecloses further discussion because it does
not advance a technical argument that I can respond to.  In this
forum, you are the king and I am but a guest.

My technical argument is as follows.  I think that CSP would be better
off with a policy language where each directive was purely subtractive
because that design would have a number of simplifying effects:

1) Forward and backward compatibility.  As long as sites did not use
the features blocked by their CSP directives, their sites would
function correctly in partial / future implementations of CSP.

2) Modularity.  We would be free to group the directives into whatever
modules we liked because there would be no technical interdependence.

3) Trivial Combination.  Instead of the current elaborate algorithm
for combining policies, we could simply concatenate the directives.
An attacker who could inject a Content-Security-Policy header could
then only further reduce his/her privileges.

4) Syntactic Simplicity.  Instead of two combination operators, ";"
for union and "," for intersection, we could simply use "," and match
standard HTTP header syntax.

Balancing against these pros, the con seem to be that we hope the
additive, opt-out syntax will prod web developers into realizing that
adding "script-src inline" to the tutorial code they copy-and-paste is
more dangerous than removing "block-xss".

Adam
___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


Re: CSRF Module (was Re: Comments on the Content Security Policy specification)

2009-10-26 Thread Devdatta
> It seems reasonable to mitigate both of those without using CSP at all.

+1.

But the current spec was trying to address them. For e.g all the
img-src, frame-src , frame-ancestor, font-src, style-src isn't really
needed for preventing XSS (afaik). My view is that there is not
problem with including them. The word 'content-security-policy' is
very generic. If it is only going to apply for XSS then you should
rename it to something more specific.

> clickjacking. NoScript's "ClearClick" seems to do a pretty good job
> (after a rough start) and gets to the heart of the issue without
> requiring site changes.

Agreed. I am nott sure if it would be easy for browser vendors to
actually implement something like ClearClick. Ideally ClearClick is
the correct way to solve the threat (over frame ancestors).

Cheers
Devdatta
___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


Re: CSRF Module (was Re: Comments on the Content Security Policy specification)

2009-10-26 Thread Daniel Veditz
On 10/22/09 6:09 PM, Adam Barth wrote:
> I agree, but if you think sites should be explicit, doesn't that mean
> they should explicitly opt-in to changing the normal (i.e., non-CSP)
> behavior?

They have already opted in by adding the CSP header. Once they've
opted-in to our web-as-we-wish-it-were they have to opt-out of the
restrictions that are too onerous for their site.

> It seems very reasonable to mitigate history stealing and ClickJacking
> without using CSP to mitigate XSS.

It seems reasonable to mitigate both of those without using CSP at all.
History stealing is going to come from attacker.com where they aren't
going to add headers anyway. The proposed CSP frame-ancestors could just
as easily go into an extended X-Frame-Options (and be a better fit). And
it's really only a partial clickjacking defense anyway so maybe that
aspect should go into whatever defense feature prevents the rest of
clickjacking. NoScript's "ClearClick" seems to do a pretty good job
(after a rough start) and gets to the heart of the issue without
requiring site changes.

> I think we're all agreed on this point.  Our current disagreements appear to 
> be:
> 
> 1) Whether frame-src should be in the resources module or in the same
> module as frame-ancestor.
> 2) Whether sites should have to opt-in or opt-out to disabling inline
> script and/or eval-like APIs.

I don't think this is the right venue for deciding the latter, the
audience here just doesn't have enough of the right people. We feel
extraordinarily strongly that sites should have to explicitly say they
want to run inline-script, like signing a waiver that you're going
against medical advice. The only thing that is likely to deter us is
releasing a test implementation and then crashing and burning while
trying to implement a reasonable test site like AMO or MDC or the
experiences of other web developers doing the same.

The eval stuff I feel a lot less strongly about the default, but feel
there's value in consistency of having site authors loosen restrictions
rather than have some tighten and some loosen.

-Dan
___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


Re: CSRF Module (was Re: Comments on the Content Security Policy specification)

2009-10-26 Thread Brandon Sterne
On 10/22/2009 06:09 PM, Adam Barth wrote:
> On Thu, Oct 22, 2009 at 5:22 PM, Brandon Sterne  wrote:
>> But the point is that the threat of
>> history stealing is not fully mitigated by changes to CSS for
>> cross-origin links.  A complete mitigation of the threat requires both
>> altering the behavior of getComputedStyle as well as disabling
>> non-trusted scripts in the document.
>
> I don't think this argument makes sense.  When people complain about
> history stealing, e.g. on
> https://bugzilla.mozilla.org/show_bug.cgi?id=14, they're not
> worried about the case when their site has XSS.  They're worried about
> a much weaker attacker who simply operates a web site.

Granted, it was a contrived example.  Here's another try: say we have
the theoretical CSRF module that does a perfect job of stopping the
browser from sending spurious requests to a site that originate from
some other site.  The site would still be vulnerable to forged same-site
requests if an attacker is able to inject script or other content into
the site itself.  You might say "well, *cross-site* request forgery is
mitigated so our obligation is met", but the broader forged request
threat isn't completely sealed off.  A site will still need additional
policy to be secured against request forgery.

It's challenging to make this case for capability-based modules given
only the current set of known web app threats.  We know that future
attacks will be based on the combination of browser capabilities.  If we
can do a good job of enumerating those capabilities and providing policy
"levers" to restrict them we'll be able to address new threats that
arise in the future with new policies, which can be deployed overnight
by websites, rather than new modules which take at least one browser
release cycle plus a policy change by the websites to take advantage of
the module.

FWIW, I'm actually not hearing you object strongly to the
capability-based module system but rather that you're pointing out
(admitted) weakness in my earlier example.  Do I have that right?  Are
others still preferring a threat model based approach to the modules, or
can we close this issue?

>>   Why, though, would we ever want to
>> change from an opt-in to an opt-out model?
>
> I don't think we'll want to change in the future.  We should pick the
> better design now and stick with it (whichever design we decide is
> better).

Well, I personally think that safe-by-default (opt-in to inline scripts,
etc.) is a better design because it forces sites to be more explicit
with what they are permitting and it is consistent with the white list
approach used throughout the model.  I've stated it elsewhere, but we
definitely want to avoid sites thinking they are protected when they are
not, in fact.

>> I think it's better to have sites be explicit with their policies, as it
>> forces them to understand the implications of each part of the policy.
>> If we provide pre-canned policies, sites may wind up with incorrect
>> assumptions about what is being restricted.
>
> I agree, but if you think sites should be explicit, doesn't that mean
> they should explicitly opt-in to changing the normal (i.e., non-CSP)
> behavior?

I apologize, but I don't understand this question.  What is the normal
behavior we are talking about changing in this example?

>>   The situation I
>> want to avoid is having browsers advertise (partial) CSP support and
>> have websites incorrectly assume that they are getting XSS protection
>> from those browsers.
>
> I don't understand.  There is no advertisement mechanism in CSP.  Do
> you mean in the press?

Yes, in the press e.g. some table on a web developer site showing "CSP
support in all major browsers" but only a subset supporting the core XSS
part.

> What's actually going to happen is that thought leaders will write
> blog posts with sample code and non-experts will copy/paste it into
> their web sites.  Experts (e.g., PayPal) will read the spec and test
> various implementations.
>
> As for the press, I doubt anything we write in the spec will have much
> impact on how the press spins the story.  Personally, I don't care
> about what the press says.  We should design the best mechanism on a
> technical level.

We're in total agreement here.

>>   Also, it seems unlikely to me that successful
>> mitigations can be put in place for the other threats if XSS is still
>> possible  (I can provide examples if people are interested, but I have
>> to run to catch a train, unfortunately).
>
> It seems very reasonable to mitigate history stealing and ClickJacking
> without using CSP to mitigate XSS.  As a web developer, I can't do
> anything about history stealing myself.  I need help from the browser.
>   On the the other hand, I can do something about XSS myself.
>
>>   If we can agree that XSS is
>> the main threat that we want to address with CSP, then I think we can
>> also agree to make it a required module.
>
> I think we're all agreed on this point.

Awesom

Re: Comments on the Content Security Policy specification

2009-10-23 Thread Gervase Markham

On 23/10/09 01:50, Daniel Veditz wrote:

blocking inline-script is key to stopping XSS. We added the ability to
turn that bit of CSP off as an interim crutch for complex sites trying
to convert, but if our proof-of-concept site has to rely on it we've
clearly failed and will be setting a bad example to boot.


What I was doing in my message was creating a policy for the site as it 
is now exactly - i.e. one you could use without any modifications. So as 
the site had inline-script, I had to add the inline-script directive. 
What else would you have me do? :-)


If we are doing a proof-of-concept conversion, then let's actually do 
some conversion work. That would mean moving the one line of JS which 
kicks off Urchin into an external file.


Gerv
___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


Re: CSRF Module (was Re: Comments on the Content Security Policy specification)

2009-10-22 Thread Adam Barth
On Thu, Oct 22, 2009 at 5:22 PM, Brandon Sterne  wrote:
> Take XSS and history stealing for example.  Assume these are seperate
> modules and each is responsible for mitigating its respective threat.
> Presumably the safe history module will prevent a site from being able
> to do getComputedStyle (or equivalent) on a link from a different
> origin.  But an attacker could still steal history from any site that he
> can inject script into by document.writing the list of URLs into the
> page, testing if they are visited, and sending the results back to the
> attacker's site.  Granted, this is a contrived example and the attacker
> could probably do worse than history stealing if we're allowing that he
> can inject arbitrary script.  But the point is that the threat of
> history stealing is not fully mitigated by changes to CSS for
> cross-origin links.  A complete mitigation of the threat requires both
> altering the behavior of getComputedStyle as well as disabling
> non-trusted scripts in the document.

I don't think this argument makes sense.  When people complain about
history stealing, e.g. on
https://bugzilla.mozilla.org/show_bug.cgi?id=14, they're not
worried about the case when their site has XSS.  They're worried about
a much weaker attacker who simply operates a web site.

> Why, though, would we ever want to
> change from an opt-in to an opt-out model?

I don't think we'll want to change in the future.  We should pick the
better design now and stick with it (whichever design we decide is
better).

> I think it's better to have sites be explicit with their policies, as it
> forces them to understand the implications of each part of the policy.
> If we provide pre-canned policies, sites may wind up with incorrect
> assumptions about what is being restricted.

I agree, but if you think sites should be explicit, doesn't that mean
they should explicitly opt-in to changing the normal (i.e., non-CSP)
behavior?

> The situation I
> want to avoid is having browsers advertise (partial) CSP support and
> have websites incorrectly assume that they are getting XSS protection
> from those browsers.

I don't understand.  There is no advertisement mechanism in CSP.  Do
you mean in the press?

What's actually going to happen is that thought leaders will write
blog posts with sample code and non-experts will copy/paste it into
their web sites.  Experts (e.g., PayPal) will read the spec and test
various implementations.

As for the press, I doubt anything we write in the spec will have much
impact on how the press spins the story.  Personally, I don't care
about what the press says.  We should design the best mechanism on a
technical level.

> Also, it seems unlikely to me that successful
> mitigations can be put in place for the other threats if XSS is still
> possible  (I can provide examples if people are interested, but I have
> to run to catch a train, unfortunately).

It seems very reasonable to mitigate history stealing and ClickJacking
without using CSP to mitigate XSS.  As a web developer, I can't do
anything about history stealing myself.  I need help from the browser.
 On the the other hand, I can do something about XSS myself.

> If we can agree that XSS is
> the main threat that we want to address with CSP, then I think we can
> also agree to make it a required module.

I think we're all agreed on this point.  Our current disagreements appear to be:

1) Whether frame-src should be in the resources module or in the same
module as frame-ancestor.
2) Whether sites should have to opt-in or opt-out to disabling inline
script and/or eval-like APIs.

I have a few more minor points, but we can get to those after we
settle the above two.

I think the way forward is for me (or someone else if they're
interested) to write up our current thinking on the wiki.

Adam
___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


Re: CSRF Module (was Re: Comments on the Content Security Policy specification)

2009-10-22 Thread Daniel Veditz

On 10/22/09 10:31 AM, Mike Ter Louw wrote:

Any ideas for how best to address the redirect problem?


In the existing parts of CSP the restrictions apply to redirects. That 
is, if you only allow images from foo.com then try to load an image from 
a redirector on foo.com it will fail if the redirection is to some other 
site. (This has turned out to be an annoying part of CSP to implement as 
redirects happen deep in the network library far from the places that 
have the context to enforce this rule)


Likewise your anti-csrf rules should propagate through redirects for 
consistency.

___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


Re: Comments on the Content Security Policy specification

2009-10-22 Thread Daniel Veditz

On 8/11/09 3:19 AM, Gervase Markham wrote:

Here's some possibilities for www.mozilla.org, based on the home page -
which does repost RSS headlines, so there's at least the theoretical
possibility of an injection. To begin with:

allow self; options inline-script;


blocking inline-script is key to stopping XSS. We added the ability to 
turn that bit of CSP off as an interim crutch for complex sites trying 
to convert, but if our proof-of-concept site has to rely on it we've 
clearly failed and will be setting a bad example to boot.

___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


Re: CSRF Module (was Re: Comments on the Content Security Policy specification)

2009-10-22 Thread Mike Ter Louw

Brandon Sterne wrote:
I'd like to take a quick step back before we proceed further with the 
modularization discussion.  I think it is fine to split CSP into 
modules, but with the following caveats:


1. Splitting the modules based upon different threat models doesn't seem 
to be the right approach.  There are many areas where the threats we 
want to mitigate overlap in terms of browser functionality.  A better 
approach, IMHO, is to create the modules based upon browser 
capabilities.  With those capability building blocks, sites can then 
construct policy sets to address any given threat model (including ones 
we haven't thought of yet).


Part of the value of the threat-centric module approach is it 
facilitates analysis of the defensive efficacy of CSP directives.  This 
can point us to additional policies that are needed for more complete 
coverage, and reveal policies that are superfluous (I'm not saying any 
existing proposed policy is useless) and browser vendors need not 
implement.  However, as Lucas rightly pointed out, the correctness of 
this analysis is dependent on our awareness and understanding of threats.


If browser implementers are to pick and choose among CSP policies to 
support (besides XSS related ones, we agree), there should ideally be 
some reference that indicates the combined set of policies that are 
needed to mitigate each threat.  This can aid browser implementers in 
deciding which policies to implement.  For instance, if some browser 
vendor wants to support CSP protection against CSRF attacks, the vendor 
should know that it's of limited use to only strip cookies from form 
submissions; form action URIs must also be constrained to a set of 
trusted origins.


Perhaps the spec can have an appendix recommending sets of directives 
for several significant threats, based on some thorough analysis of each 
threat, citing known capabilities and limitations of each set.  This can 
benefit the spec writers, browser implementors and web developers.


Mike
___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


Re: CSRF Module (was Re: Comments on the Content Security Policy specification)

2009-10-22 Thread Brandon Sterne
On 10/22/09 3:02 PM, Collin Jackson wrote:
> On Thu, Oct 22, 2009 at 2:22 PM, Brandon Sterne  wrote:
>> 1. Splitting the modules based upon different threat models doesn't seem to
>> be the right approach.  There are many areas where the threats we want to
>> mitigate overlap in terms of browser functionality.  A better approach,
>> IMHO, is to create the modules based upon browser capabilities.  With those
>> capability building blocks, sites can then construct policy sets to address
>> any given threat model (including ones we haven't thought of yet).
> 
> Would that mean that each module would have multiple directives, with
> a separate threat model for each one? It seems like the directives
> should be granular to the level of threat models, or else a site will
> be forced to give up functionality to defend against threats it's not
> concerned about.

I imagine each module would have its own directives, though not
necessarily more than one.  If the modules are based on browser
capabilities, then it is possible to map these capabilities to threat
mitigations.  If we try to do the reverse, that is, map threat
mitigations to CSP modules, we run the risk of having particular browser
capabilities governed by multiple, potentially conflicting, modules.

Take XSS and history stealing for example.  Assume these are seperate
modules and each is responsible for mitigating its respective threat.
Presumably the safe history module will prevent a site from being able
to do getComputedStyle (or equivalent) on a link from a different
origin.  But an attacker could still steal history from any site that he
can inject script into by document.writing the list of URLs into the
page, testing if they are visited, and sending the results back to the
attacker's site.  Granted, this is a contrived example and the attacker
could probably do worse than history stealing if we're allowing that he
can inject arbitrary script.  But the point is that the threat of
history stealing is not fully mitigated by changes to CSS for
cross-origin links.  A complete mitigation of the threat requires both
altering the behavior of getComputedStyle as well as disabling
non-trusted scripts in the document.

Given sufficient granularity in browser capabilities, it is fairly easy
to build a policy to address any particular threat model.  Starting from
the threat and working backwards seems to me to force sites to accept
restrictions which may be unexpected or non-intuitive.

>> 2. The original goal of CSP was to mitigate XSS attacks.  The scope of the
>> proposal has grown substantially, which is fine, but I'm not at all
>> comfortable with a product that does not require the XSS protections as the
>> fundamental core of the model. I think if we go with the module approach,
>> the XSS protection needs to be required, and any additional modules can be
>> optionally implemented.
> 
> I think it makes sense to have modules that are required for browser
> vendos to implement, but are not required for web authors to enable.
> Is that what you mean? We could make the XSSModule "required" for
> browser vendors to implement instead of just "recommended." I don't,
> however, think that a web author should be required to use the
> XSSModule in order to benefit from the ClickJackingModule (for
> example).

I agree completely with this.  "Required" module would apply to browser
vendors only.  Sites would not be required to utilize any particular
module, but they would be guaranteed that any required module will be
present in every CSP implementation.

>> I propose that the default behavior for CSP (no
>> optional modules implemented) is to block all inline scripts (opt-in still
>> possible) and to use a white list for all sources of external script files.
> 
> I understand the desire to have "by-default" security, but one problem
> with opt-out CSP rules is that they're hard to change. You can't add
> new opt-out rules in the future because it will break web sites that
> didn't know they were supposed to opt out, so we'd be stuck with an
> initial set of opt-out rules and any rules added in future versions of
> the spec would have to be opt-in. Also, it's tricky to change an
> opt-out rule to be an opt-in rule in the future because web sites may
> be relying on the opt-out behavior.

"By-default" security is definitely the motivation for the opt-in
mechanism currently proposed.  I see how a change from opt-in to opt-out
in the future would be impossible, because lots of sites who were once
safe suddenly lose protection.  I also see that changing from opt-out to
opt-in would be slightly better: you would have some sites intending to
use inline scripts suddenly break, but they would likely be no less
secure by turning off the scripts.  Why, though, would we ever want to
change from an opt-in to an opt-out model?  If we agree now (and I'm not
assuming we all do) that opting-in to potentially dangerous features is
a better model, what could change in the Web envir

Re: CSRF Module (was Re: Comments on the Content Security Policy specification)

2009-10-22 Thread Collin Jackson
On Thu, Oct 22, 2009 at 2:22 PM, Brandon Sterne  wrote:
> 1. Splitting the modules based upon different threat models doesn't seem to
> be the right approach.  There are many areas where the threats we want to
> mitigate overlap in terms of browser functionality.  A better approach,
> IMHO, is to create the modules based upon browser capabilities.  With those
> capability building blocks, sites can then construct policy sets to address
> any given threat model (including ones we haven't thought of yet).

Would that mean that each module would have multiple directives, with
a separate threat model for each one? It seems like the directives
should be granular to the level of threat models, or else a site will
be forced to give up functionality to defend against threats it's not
concerned about.

> 2. The original goal of CSP was to mitigate XSS attacks.  The scope of the
> proposal has grown substantially, which is fine, but I'm not at all
> comfortable with a product that does not require the XSS protections as the
> fundamental core of the model. I think if we go with the module approach,
> the XSS protection needs to be required, and any additional modules can be
> optionally implemented.

I think it makes sense to have modules that are required for browser
vendos to implement, but are not required for web authors to enable.
Is that what you mean? We could make the XSSModule "required" for
browser vendors to implement instead of just "recommended." I don't,
however, think that a web author should be required to use the
XSSModule in order to benefit from the ClickJackingModule (for
example).

> I propose that the default behavior for CSP (no
> optional modules implemented) is to block all inline scripts (opt-in still
> possible) and to use a white list for all sources of external script files.

I understand the desire to have "by-default" security, but one problem
with opt-out CSP rules is that they're hard to change. You can't add
new opt-out rules in the future because it will break web sites that
didn't know they were supposed to opt out, so we'd be stuck with an
initial set of opt-out rules and any rules added in future versions of
the spec would have to be opt-in. Also, it's tricky to change an
opt-out rule to be an opt-in rule in the future because web sites may
be relying on the opt-out behavior.

If there are a set of behaviors that make sense when used together,
then maybe providing a concise opt-in directive that enables them all
would be easier, e.g. "core-xss".

> I'm definitely not opposed to splitting apart the spec into modules,
> especially if it helps other browser implementers move forward with CSP.  I
> REALLY think, though, that the XSS protections need to be part of the base
> module.

Could you elaborate a little more on why you feel this way? This seems
like a major extensibility limitation that would be impossible to
change in the future.
___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


Re: CSRF Module (was Re: Comments on the Content Security Policy specification)

2009-10-22 Thread Brandon Sterne
I'd like to take a quick step back before we proceed further with the 
modularization discussion.  I think it is fine to split CSP into 
modules, but with the following caveats:


1. Splitting the modules based upon different threat models doesn't seem 
to be the right approach.  There are many areas where the threats we 
want to mitigate overlap in terms of browser functionality.  A better 
approach, IMHO, is to create the modules based upon browser 
capabilities.  With those capability building blocks, sites can then 
construct policy sets to address any given threat model (including ones 
we haven't thought of yet).


2. The original goal of CSP was to mitigate XSS attacks.  The scope of 
the proposal has grown substantially, which is fine, but I'm not at all 
comfortable with a product that does not require the XSS protections as 
the fundamental core of the model.  I think if we go with the module 
approach, the XSS protection needs to be required, and any additional 
modules can be optionally implemented.  I propose that the default 
behavior for CSP (no optional modules implemented) is to block all 
inline scripts (opt-in still possible) and to use a white list for all 
sources of external script files.  The script-src directive under the 
current model serves this function perfectly and doesn't need to be 
modified.  (We can discuss how plugin content and CSS, which can be 
vectors for script, should be governed by this core XSS module.)


As a straw man, the optional modules could be:
  * content loading (e.g. img-src, media-src, etc.)
  * framing (e.g. frame-src, frame-ancestors)
  * form action restriction
  * reporting (e.g. report-uri)
  * others?

I'm definitely not opposed to splitting apart the spec into modules, 
especially if it helps other browser implementers move forward with CSP. 
 I REALLY think, though, that the XSS protections need to be part of 
the base module.


Thoughts?

-Brandon


On 10/22/2009 09:37 AM, Adam Barth wrote:

On Thu, Oct 22, 2009 at 8:58 AM, Mike Ter Louw  wrote:

I've added a CSRF straw-man:

https://wiki.mozilla.org/Security/CSP/CSRFModule

This page borrows liberally from XSSModule.  Comments are welcome!


Two comments:

1) The attacker goal is very syntactic.  It would be better to explain
what the attacker is trying to achieve instead of how we imagine the
attack taking place.

2) It seems like an attacker can easily circumvent this module by
submitting a form to attacker.com and then generating the forged
request (which will be sent with cookies because attacker.com doesn't
enables the anti-csrf directive).

Adam

___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


Re: CSRF Module (was Re: Comments on the Content Security Policy specification)

2009-10-22 Thread Adam Barth
On Thu, Oct 22, 2009 at 12:36 PM, Mike Ter Louw  wrote:
> In this case, this boils down to: should CSP directives be threat-centric or
> content-type-centric?  Alternatively, this may be an example of CSP being
> too granular.

I suspect we'll need to experiment with different approaches before we
have a good idea how to answer this question.  In intuition tells me
that we'd be better off with a threat-centric design, but it's hard to
know ahead of time.

On Thu, Oct 22, 2009 at 12:53 PM, Mike Ter Louw  wrote:
> Is it acceptable (not too strict) to block all form submission to non-self
> and non-whitelisted action URIs when the anti-csrf directive is given?  If
> so, then the above usability issue may be moot: we can have anti-csrf imply
> an as-yet-undefined directive that blocks form submission.

Instead of bundling everything together into "anti-csrf", we might be
better off with a directive to control where you can submit forms,
e.g., "form-action", but we seem to be getting far afield of the
problem you're trying to solve.

At a high level, I'm glad that you took the time to add your ideas to
the wiki, and I hope that other folks will do the same.  My personal
opinion is that the current design has room for improvement,
particularly around clarifying precisely what problem the module is
trying to solve, but my opinion is just one among many.  I'd like to
encourage more people to contribute their ideas in the form of
experimental modules, and hopefully the best ideas will rise to the
top.

Adam
___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


Re: CSRF Module (was Re: Comments on the Content Security Policy specification)

2009-10-22 Thread Mike Ter Louw

Mike Ter Louw wrote:
There is a usability issue here: is it more usable (w.r.t. the web 
developer) to:


(1) support a declaration of "anti-csrf" and enable the widest default 
set of protections that could be offered against CSRF (without being too 
strict as to break the most common use cases), but possibly having 
multiple modules specifying (complementary) form policies, or


(2) group all form-related policies in a single module, even if the 
policies address fundamentally different attacks?


Is it acceptable (not too strict) to block all form submission to 
non-self and non-whitelisted action URIs when the anti-csrf directive is 
given?  If so, then the above usability issue may be moot: we can have 
anti-csrf imply an as-yet-undefined directive that blocks form submission.


Mike
___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


Re: CSRF Module (was Re: Comments on the Content Security Policy specification)

2009-10-22 Thread Mike Ter Louw

Adam Barth wrote:

I think it might be better to focus this module on the "forum poster"
threat model.  Instead of assuming the attacker can inject arbitrary
content, we should limit the attacker to injecting content that is
allowed by popular form sites (e.g., bbcode).  At a first guess, I
would limit the attacker to text, hyperlinks, and images.  (And maybe
bold / italics, if that matters.)


There should be room for each directive to address slightly different 
threat scenarios.  For the forum threat you've described, the attack 
mechanics (i.e., CSRF) and basic remediation strategy (disallow sending 
cookies) are common to other threats the module aims to defend against. 
 Additionally, cookieless-images is complementary to anti-csrf because 
it defines an additional constraint to images loaded from |self|.  So 
perhaps the module needs to be better positioned and each directive 
better motivated.



I think we should assume that the attacker cannot inject form elements
because this is uncommon in forum web sites.


That is fine for motivating cookieless-images, but this assumption could 
prove inadequate for other scenarios where the threat exists.  It may be 
OK to remove the language governing form actions from CSRFModule if the 
issue is further deferred to another module (as does [1]), where this 
(currently hypothetical) module entirely blocks form submission if the 
action URI is not in a whitelist of trusted origins.  (That would target 
the form-based password theft threat, as well as the CSRF threat.)


There is a usability issue here: is it more usable (w.r.t. the web 
developer) to:


(1) support a declaration of "anti-csrf" and enable the widest default 
set of protections that could be offered against CSRF (without being too 
strict as to break the most common use cases), but possibly having 
multiple modules specifying (complementary) form policies, or


(2) group all form-related policies in a single module, even if the 
policies address fundamentally different attacks?


In this case, this boils down to: should CSP directives be 
threat-centric or content-type-centric?  Alternatively, this may be an 
example of CSP being too granular.


Mike


[1] https://wiki.mozilla.org/Security/CSP/XSSModule#Open_Issues
___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


Re: CSRF Module (was Re: Comments on the Content Security Policy specification)

2009-10-22 Thread Mike Ter Louw

Devdatta wrote:

Maybe we should focus the module on this threat more specifically.  My
understanding is that this is a big source of pain for folks who
operate forums, especially for user-supplied images that point back to
the forum itself.  What if the directive was something like
"cookieless-images" and affected all images, regardless of where they
were loaded from?


requiring it to implement this policy regardless of the running script
context would require the UA to maintain a cache of policies for each
site the user has visited. This is against the requirements of the
base module. And I for one am against any such type of caching
requirement in the UA.


I think what Adam is intending is for the image resource to be requested 
without cookies being sent, regardless of the image URI origin (i.e., 
the no-cookies policy applies even if the image URI is contained in 
|self|).  This would apply for all images requested in the context of a 
page that has cookieless-images enabled.  To enforce this policy, there 
wouldn't be a need to cache policies for sites the user has previously 
visited.


Mike
___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


Re: CSRF Module (was Re: Comments on the Content Security Policy specification)

2009-10-22 Thread Adam Barth
On Thu, Oct 22, 2009 at 10:15 AM, Mike Ter Louw  wrote:
> I think this is a good start, and should be an option for sites that don't
> want CSP to provide any other CSRF restrictions.  I've added an additional
> directive to the wiki, but it needs further definition.

I think it might be better to focus this module on the "forum poster"
threat model.  Instead of assuming the attacker can inject arbitrary
content, we should limit the attacker to injecting content that is
allowed by popular form sites (e.g., bbcode).  At a first guess, I
would limit the attacker to text, hyperlinks, and images.  (And maybe
bold / italics, if that matters.)

On Thu, Oct 22, 2009 at 10:16 AM, Devdatta  wrote:
> I don't understand. In each of the cases above, the attacker site will
> not enable the directives and img requests or form requests from his
> page will cause a CSRF to occur.

We might decide to concern ourselves only with "zero click" attacks.
Meaning that once the user has clicked on the attacker's content, all
bets are off.  If we imagine a 1% click-through rate, they we've
mitigated 99% of the problem.

On Thu, Oct 22, 2009 at 10:19 AM, Devdatta  wrote:
> requiring it to implement this policy regardless of the running script
> context would require the UA to maintain a cache of policies for each
> site the user has visited. This is against the requirements of the
> base module. And I for one am against any such type of caching
> requirement in the UA.

I agree that directives should affect only the current page.

On Thu, Oct 22, 2009 at 10:31 AM, Mike Ter Louw  wrote:
> For image CSRF, some protection would be required against redirection.
> Either redirection must be disallowed, or anti-csrf needs to be enforced
> for all redirections until the resource is located.  But I'm not sure if
> the latter is going to work if CSP policies are not composeable, and any
> of the redirections or the image itself defines a CSP policy.

I agree that cookieless-images should affect all redirects involved in
loading the image.

> Form requests to attacker.com would presumably be blocked, as
> attacker.com isn't in |self| nor the whitelist.  So the attacker won't
> be able to direct the user to a page without anti-csrf protection using
> forms.  But again this requires some enforcement of the whitelist during
> any redirects.

I think we should assume that the attacker cannot inject form elements
because this is uncommon in forum web sites.

Adam
___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


Re: CSRF Module (was Re: Comments on the Content Security Policy specification)

2009-10-22 Thread Mike Ter Louw

Devdatta wrote:

I agree.  It seems anti-csrf (as currently defined) would be most beneficial
for defending against CSRF attacks that don't require any user action beyond
simply viewing the page (e.g., ).

Form actions would perhaps require some additional constraints, such as only
allowing submission to |self| or other whitelisted URIs.


I don't understand. In each of the cases above, the attacker site will
not enable the directives and img requests or form requests from his
page will cause a CSRF to occur.


For image CSRF, some protection would be required against redirection.
Either redirection must be disallowed, or anti-csrf needs to be enforced
for all redirections until the resource is located.  But I'm not sure if
the latter is going to work if CSP policies are not composeable, and any
of the redirections or the image itself defines a CSP policy.

Form requests to attacker.com would presumably be blocked, as
attacker.com isn't in |self| nor the whitelist.  So the attacker won't
be able to direct the user to a page without anti-csrf protection using
forms.  But again this requires some enforcement of the whitelist during
any redirects.

Any ideas for how best to address the redirect problem?

Mike

___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


Re: CSRF Module (was Re: Comments on the Content Security Policy specification)

2009-10-22 Thread Devdatta
>
> Maybe we should focus the module on this threat more specifically.  My
> understanding is that this is a big source of pain for folks who
> operate forums, especially for user-supplied images that point back to
> the forum itself.  What if the directive was something like
> "cookieless-images" and affected all images, regardless of where they
> were loaded from?
>

requiring it to implement this policy regardless of the running script
context would require the UA to maintain a cache of policies for each
site the user has visited. This is against the requirements of the
base module. And I for one am against any such type of caching
requirement in the UA.

cheers
devdatta

2009/10/22 Adam Barth :
> On Thu, Oct 22, 2009 at 9:52 AM, Mike Ter Louw  wrote:
>> I agree.  It seems anti-csrf (as currently defined) would be most beneficial
>> for defending against CSRF attacks that don't require any user action beyond
>> simply viewing the page (e.g., ).
>
> Maybe we should focus the module on this threat more specifically.  My
> understanding is that this is a big source of pain for folks who
> operate forums, especially for user-supplied images that point back to
> the forum itself.  What if the directive was something like
> "cookieless-images" and affected all images, regardless of where they
> were loaded from?
>
> Adam
> ___
> dev-security mailing list
> dev-security@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-security
>
___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


Re: CSRF Module (was Re: Comments on the Content Security Policy specification)

2009-10-22 Thread Devdatta
>
> I agree.  It seems anti-csrf (as currently defined) would be most beneficial
> for defending against CSRF attacks that don't require any user action beyond
> simply viewing the page (e.g., ).
>
> Form actions would perhaps require some additional constraints, such as only
> allowing submission to |self| or other whitelisted URIs.
>

I don't understand. In each of the cases above, the attacker site will
not enable the directives and img requests or form requests from his
page will cause a CSRF to occur.

-devdatta

2009/10/22 Mike Ter Louw :
> Adam Barth wrote:
>>
>> 2) It seems like an attacker can easily circumvent this module by
>> submitting a form to attacker.com and then generating the forged
>> request (which will be sent with cookies because attacker.com doesn't
>> enables the anti-csrf directive).
>
> I agree.  It seems anti-csrf (as currently defined) would be most beneficial
> for defending against CSRF attacks that don't require any user action beyond
> simply viewing the page (e.g., ).
>
> Form actions would perhaps require some additional constraints, such as only
> allowing submission to |self| or other whitelisted URIs.
>
> Link activation is harder, because (I would assume) most websites want to
> allow links to different-origin URIs.  And as you stated, not sending
> cookies here doesn't help because the link could go to attacker.com, and the
> page can contain an image based CSRF (thus the threshold for successful
> attack is still 1 click).
>
> Thanks for the feedback,
>
> Mike
> ___
> dev-security mailing list
> dev-security@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-security
>
___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


Re: CSRF Module (was Re: Comments on the Content Security Policy specification)

2009-10-22 Thread Mike Ter Louw

Adam Barth wrote:

On Thu, Oct 22, 2009 at 9:52 AM, Mike Ter Louw  wrote:

I agree.  It seems anti-csrf (as currently defined) would be most beneficial
for defending against CSRF attacks that don't require any user action beyond
simply viewing the page (e.g., ).


Maybe we should focus the module on this threat more specifically.  My
understanding is that this is a big source of pain for folks who
operate forums, especially for user-supplied images that point back to
the forum itself.  What if the directive was something like
"cookieless-images" and affected all images, regardless of where they
were loaded from?


I think this is a good start, and should be an option for sites that 
don't want CSP to provide any other CSRF restrictions.  I've added an 
additional directive to the wiki, but it needs further definition.


Mike
___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


Re: CSRF Module (was Re: Comments on the Content Security Policy specification)

2009-10-22 Thread Adam Barth
On Thu, Oct 22, 2009 at 9:52 AM, Mike Ter Louw  wrote:
> I agree.  It seems anti-csrf (as currently defined) would be most beneficial
> for defending against CSRF attacks that don't require any user action beyond
> simply viewing the page (e.g., ).

Maybe we should focus the module on this threat more specifically.  My
understanding is that this is a big source of pain for folks who
operate forums, especially for user-supplied images that point back to
the forum itself.  What if the directive was something like
"cookieless-images" and affected all images, regardless of where they
were loaded from?

Adam
___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


Re: CSRF Module (was Re: Comments on the Content Security Policy specification)

2009-10-22 Thread Mike Ter Louw

Adam Barth wrote:

2) It seems like an attacker can easily circumvent this module by
submitting a form to attacker.com and then generating the forged
request (which will be sent with cookies because attacker.com doesn't
enables the anti-csrf directive).


I agree.  It seems anti-csrf (as currently defined) would be most 
beneficial for defending against CSRF attacks that don't require any 
user action beyond simply viewing the page (e.g., ).


Form actions would perhaps require some additional constraints, such as 
only allowing submission to |self| or other whitelisted URIs.


Link activation is harder, because (I would assume) most websites want 
to allow links to different-origin URIs.  And as you stated, not sending 
cookies here doesn't help because the link could go to attacker.com, and 
the page can contain an image based CSRF (thus the threshold for 
successful attack is still 1 click).


Thanks for the feedback,

Mike
___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


CSRF Module (was Re: Comments on the Content Security Policy specification)

2009-10-22 Thread Adam Barth
On Thu, Oct 22, 2009 at 8:58 AM, Mike Ter Louw  wrote:
> I've added a CSRF straw-man:
>
> https://wiki.mozilla.org/Security/CSP/CSRFModule
>
> This page borrows liberally from XSSModule.  Comments are welcome!

Two comments:

1) The attacker goal is very syntactic.  It would be better to explain
what the attacker is trying to achieve instead of how we imagine the
attack taking place.

2) It seems like an attacker can easily circumvent this module by
submitting a form to attacker.com and then generating the forged
request (which will be sent with cookies because attacker.com doesn't
enables the anti-csrf directive).

Adam
___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


Re: Comments on the Content Security Policy specification

2009-10-22 Thread Mike Ter Louw

Gervase Markham wrote:
I think it would be good if we didn't have to invent a new header for 
each idea of ways to lock down content. I think it would be great if 
people could experiment with Content-Security-Policy: x-my-cool-idea, 
and see if it was useful before standardization. Any idea which is a 
policy for content security should be in scope for experimentation.


I've added a CSRF straw-man:

https://wiki.mozilla.org/Security/CSP/CSRFModule

This page borrows liberally from XSSModule.  Comments are welcome!

Mike
___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


Re: Comments on the Content Security Policy specification

2009-10-22 Thread Gervase Markham

On 21/10/09 17:25, Sid Stamm wrote:

Additional Directives are not a problem either, unless they're mandatory
for all policies (which is not the case ... yet).  I'm still more in
favor of extension via new directives than extension by modifying
existing ones: this seems more obviously backward compatible and in
reality probably more forward compatible too.


Ideally, this would always be the case. And the thinking that's going 
into the modularization should help us to correctly separate concerns.



Right.  This was proposed a while back (I don't recall the thread off
hand) as one header to convey all relevant security policies.  Something
like Accept-Policies I think.  If we want to turn CSP into that, we
could, but it surely wasn't designed from the ground up with that in mind.


I think the name "Content Security Policy" is generic enough already :-)

Gerv
___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


Re: Comments on the Content Security Policy specification

2009-10-21 Thread Sid Stamm
On 10/21/09 2:49 AM, Gervase Markham wrote:
> I think we need to differentiate between added complexity in syntax and
> added complexity in implementation.
> 
> If we design the syntax right, there is no need for additional CSP
> directives to make the syntax more complicated for those who neither
> wish to know nor care about them.

Additional Directives are not a problem either, unless they're mandatory
for all policies (which is not the case ... yet).  I'm still more in
favor of extension via new directives than extension by modifying
existing ones: this seems more obviously backward compatible and in
reality probably more forward compatible too.

> If we modularise CSP correctly, there is no necessity that additional
> ideas lead to greater implementation complexity for those browsers who
> don't want to adopt those ideas (yet).

Agreed.   I'm not against modularization at all, I just want to be
careful so that it is specked out that way -- we just need to keep this
in mind.

> I think it would be good if we didn't have to invent a new header for
> each idea of ways to lock down content. I think it would be great if
> people could experiment with Content-Security-Policy: x-my-cool-idea,
> and see if it was useful before standardization. Any idea which is a
> policy for content security should be in scope for experimentation.

Right.  This was proposed a while back (I don't recall the thread off
hand) as one header to convey all relevant security policies.  Something
like Accept-Policies I think.  If we want to turn CSP into that, we
could, but it surely wasn't designed from the ground up with that in mind.

> I agree with your concerns about scope creep, but I don't think making
> sure the syntax is forwards-compatible requires a fundamental redesign.
> And I don't think allowing the possibility of other things means we are
> on the hook to implement them, either for Firefox 3.6 or for any other
> release.

Point taken.  I'm on board for modularization so long as we don't have
to completely redesign the policy syntax.

I'm also a bit worried that we might lose sight of the original goals of
CSP and so I wanted to bring up the fact that we have wandered far far
away from where CSP started.  If everyone is okay with the diversion, I
see no cause for concern.

> We may wish to say "OK, CSP 1.0 is these 3 modules", so that a browser
> could say "I support CSP 1.0" without having to be more specific and
> detailed. But given that CSP support is unlikely to be a major marketing
> sell, I don't think that's a big factor.

What?  No "CSP 1.0 Compatible!" stickers for my laptop?  Or "CSP
inside"?  :)

-Sid
___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


Re: Comments on the Content Security Policy specification

2009-10-21 Thread Gervase Markham

On 20/10/09 21:20, Sid Stamm wrote:

While I agree with your points enumerated above, we should be really
careful about scope creep and stuffing new goals into an old idea.  The
original point of CSP was not to provide a global security
infrastructure for web sites, but to provide content restrictions and
help stop XSS (mostly content restrictions).  Rolling all sorts of extra
threats like history sniffing into CSP will make it huge and complex,
and for not what was initially desired.  (A complex CSP isn't so bad if
it were modular, but I don't think 'wide-reaching' was the original aim
for CSP).


I think we need to differentiate between added complexity in syntax and 
added complexity in implementation.


If we design the syntax right, there is no need for additional CSP 
directives to make the syntax more complicated for those who neither 
wish to know nor care about them.


If we modularise CSP correctly, there is no necessity that additional 
ideas lead to greater implementation complexity for those browsers who 
don't want to adopt those ideas (yet).


I think it would be good if we didn't have to invent a new header for 
each idea of ways to lock down content. I think it would be great if 
people could experiment with Content-Security-Policy: x-my-cool-idea, 
and see if it was useful before standardization. Any idea which is a 
policy for content security should be in scope for experimentation.


I agree with your concerns about scope creep, but I don't think making 
sure the syntax is forwards-compatible requires a fundamental redesign. 
And I don't think allowing the possibility of other things means we are 
on the hook to implement them, either for Firefox 3.6 or for any other 
release.


We may wish to say "OK, CSP 1.0 is these 3 modules", so that a browser 
could say "I support CSP 1.0" without having to be more specific and 
detailed. But given that CSP support is unlikely to be a major marketing 
sell, I don't think that's a big factor.


Gerv
___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


Re: ClickJackingModule (was Re: Comments on the Content Security Policy specification)

2009-10-20 Thread Lucas Adamski
Note that the XSS mitigations can be opted out of, so we shouldn't  
assume that mitigating something specific like clickjacking requires  
XSS mitigations in the current proposal.

  Lucas.

On Oct 20, 2009, at 6:50 PM, Adam Barth wrote:


Thanks Devdatta.  One of the nice thing about separating the
clickjacking concerns from the XSS concerns is that developers can
deploy a policy like

X-Content-Security-Policy: frame-ancestors self

without having to make sure that all the setTimeout calls in their web
app use function objects instead of strings.

Adam


On Tue, Oct 20, 2009 at 6:05 PM, Devdatta   
wrote:
On a related note, just to have one more example (and for my  
learning)

, I went ahead and wrote a draft for ClickJackingModule.
https://wiki.mozilla.org/Security/CSP/ClickJackingModule

In general I like how short and simple each individual module is.

Cheers
Devdatta


___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


ClickJackingModule (was Re: Comments on the Content Security Policy specification)

2009-10-20 Thread Adam Barth
Thanks Devdatta.  One of the nice thing about separating the
clickjacking concerns from the XSS concerns is that developers can
deploy a policy like

X-Content-Security-Policy: frame-ancestors self

without having to make sure that all the setTimeout calls in their web
app use function objects instead of strings.

Adam


On Tue, Oct 20, 2009 at 6:05 PM, Devdatta  wrote:
> On a related note, just to have one more example (and for my learning)
> , I went ahead and wrote a draft for ClickJackingModule.
> https://wiki.mozilla.org/Security/CSP/ClickJackingModule
>
> In general I like how short and simple each individual module is.
>
> Cheers
> Devdatta
___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


Re: Comments on the Content Security Policy specification

2009-10-20 Thread Devdatta
On a related note, just to have one more example (and for my learning)
, I went ahead and wrote a draft for ClickJackingModule.
https://wiki.mozilla.org/Security/CSP/ClickJackingModule

In general I like how short and simple each individual module is.

Cheers
Devdatta

2009/10/20 Lucas Adamski :
> I'm confident we can figure out how best to communicate CSP use cases to
> developers independent of implementation.  What we should have are
> documentation modules that walk a web dev through specific goal-driven
> examples, for example.
>
> The problem with modules I see is they will complicate the model in the long
> run, as the APIs they govern will not be mutually exlusive.  What if 3
> different modules dictate image loading behaviors?  What if the given user
> agent in a scenario does not implement the module where the most restrictive
> of the 3 policies is specified?
>  Lucas
>
> On Oct 20, 2009, at 15:07 Devdatta  wrote:
>
>> I actually think the modular approach is better for the web developer
>> as the policy is easier to write and understand.
>>
>> But I do share your concern, Atleast right now, it is pretty easy to
>> say -- user agents that support XSSModule are protected against XSS
>> and user agents that support history module are protected against
>> history enumeration attacks.  Going forward, we want to keep the
>> separation just as clear and simple.
>>
>> * This would require very clear and simply stated threat models for
>> each module. Each module's threats should be (ideally) disjoint.
>> * A module should be small and complete. We should make it clear why
>> every part of the module is important for the given threat model. This
>> would hopefully ensure that browser vendors either implement the whole
>> module or none of it. (I.E implementing half of a module will give no
>> security)
>>
>> I think this breakup of the spec into modules is useful to the
>> webdevelopers (making it easier to understand) and easier for the
>> browser vendors to implement.
>>
>> Regards
>> Devdatta
>> ___
>> dev-security mailing list
>> dev-security@lists.mozilla.org
>> https://lists.mozilla.org/listinfo/dev-security
>
___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


Re: Module granularity (Re: Comments on the Content Security Policy specification)

2009-10-20 Thread Lucas Adamski
The reporting infrastructure does seem pretty easy to modularize but  
it's also a bit exceptional as it doesn't drive any actual content  
behaviors. I'm going to have to chew on this some more but my primary  
concern remains that this approach could increase complexity and  
reduce reliability in the long run (esp. when combined with fragmented  
implementation by user agents).

  Lucas.

On Oct 20, 2009, at 15:49, Adam Barth   
wrote:
On Tue, Oct 20, 2009 at 3:35 PM, Lucas Adamski   
wrote:
The problem with modules I see is they will complicate the model in  
the long
run, as the APIs they govern will not be mutually exlusive.  What  
if 3
different modules dictate image loading behaviors?  What if the  
given user
agent in a scenario does not implement the module where the most  
restrictive

of the 3 policies is specified?


This seems like a question of granularity.  Presumably a decomposition
that has three modules competing to control image loads is too
granular.  There seem to be some clear wins to modularizing the
current spec.  For example, the reporting infrastructure seems
independent of whether you can block XMLHttpRequest targets.

Adam

___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


Module granularity (Re: Comments on the Content Security Policy specification)

2009-10-20 Thread Adam Barth
On Tue, Oct 20, 2009 at 3:35 PM, Lucas Adamski  wrote:
> The problem with modules I see is they will complicate the model in the long
> run, as the APIs they govern will not be mutually exlusive.  What if 3
> different modules dictate image loading behaviors?  What if the given user
> agent in a scenario does not implement the module where the most restrictive
> of the 3 policies is specified?

This seems like a question of granularity.  Presumably a decomposition
that has three modules competing to control image loads is too
granular.  There seem to be some clear wins to modularizing the
current spec.  For example, the reporting infrastructure seems
independent of whether you can block XMLHttpRequest targets.

Adam
___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


Re: Versioning vs. Modularity (was Re: Comments on the Content Security Policy specification)

2009-10-20 Thread Lucas Adamski
I'm not a fan of it but it's unavoidable for a security mechanism. We  
already had bugs filed against CSP that would result in content  
impacting behavioral changes. Not to mention that even module-centric  
functionality would have to be revised to govern new APIs and new  
types of attacks against existing APIs.  Other option I guess is not  
versioning and just breaking content periodically.

  Lucas

On Oct 20, 2009, at 15:27, Adam Barth   
wrote:


On Tue, Oct 20, 2009 at 3:21 PM, Lucas Adamski   
wrote:
I've been a firm believer that CSP will evolve over time but that's  
an
argument for versioning though, not modularity. We are as likely to  
have to
modify existing behaviors as introduce whole new sets.  It's also  
not a

reason to split the existing functionality into modules.


I'm not sure versioning is the best approach for web technologies.
For example, versioning has been explicitly rejected for HTML,
ECMAScript, and cookies.  In fact, I can't really think of a
successful web technology that uses versioning instead of
extensibility.  Maybe SSL/TLS?  Even there, the modern approach is to
advance the protocol with extensions (e.g., SNI).

Adam

___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


Re: Comments on the Content Security Policy specification

2009-10-20 Thread Lucas Adamski
I'm confident we can figure out how best to communicate CSP use cases  
to developers independent of implementation.  What we should have are  
documentation modules that walk a web dev through specific goal-driven  
examples, for example.


The problem with modules I see is they will complicate the model in  
the long run, as the APIs they govern will not be mutually exlusive.   
What if 3 different modules dictate image loading behaviors?  What if  
the given user agent in a scenario does not implement the module where  
the most restrictive of the 3 policies is specified?

  Lucas

On Oct 20, 2009, at 15:07 Devdatta  wrote:


I actually think the modular approach is better for the web developer
as the policy is easier to write and understand.

But I do share your concern, Atleast right now, it is pretty easy to
say -- user agents that support XSSModule are protected against XSS
and user agents that support history module are protected against
history enumeration attacks.  Going forward, we want to keep the
separation just as clear and simple.

* This would require very clear and simply stated threat models for
each module. Each module's threats should be (ideally) disjoint.
* A module should be small and complete. We should make it clear why
every part of the module is important for the given threat model. This
would hopefully ensure that browser vendors either implement the whole
module or none of it. (I.E implementing half of a module will give no
security)

I think this breakup of the spec into modules is useful to the
webdevelopers (making it easier to understand) and easier for the
browser vendors to implement.

Regards
Devdatta
___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security

___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


Versioning vs. Modularity (was Re: Comments on the Content Security Policy specification)

2009-10-20 Thread Adam Barth
On Tue, Oct 20, 2009 at 3:21 PM, Lucas Adamski  wrote:
> I've been a firm believer that CSP will evolve over time but that's an
> argument for versioning though, not modularity. We are as likely to have to
> modify existing behaviors as introduce whole new sets.  It's also not a
> reason to split the existing functionality into modules.

I'm not sure versioning is the best approach for web technologies.
For example, versioning has been explicitly rejected for HTML,
ECMAScript, and cookies.  In fact, I can't really think of a
successful web technology that uses versioning instead of
extensibility.  Maybe SSL/TLS?  Even there, the modern approach is to
advance the protocol with extensions (e.g., SNI).

Adam
___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


Re: Comments on the Content Security Policy specification

2009-10-20 Thread Lucas Adamski
I've been a firm believer that CSP will evolve over time but that's an  
argument for versioning though, not modularity. We are as likely to  
have to modify existing behaviors as introduce whole new sets.  It's  
also not a reason to split the existing functionality into modules.

  Lucas

On Oct 20, 2009, at 14:53, Collin Jackson   
wrote:



It seems to me that thinking ahead would tend to favor the modular
approach, since we're unlikely to guess the most compelling use cases
on the first try, and modules will provide a backwards-compatible
means of evolving the spec to what web authors actually need.

On Tue, Oct 20, 2009 at 2:49 PM, Lucas Adamski   
wrote:
We should think ahead, not just a year or two but to the point that  
all
current browsers will be EOL and (just like every other feature  
that is

currently in HTML5) this will be widely adopted and reliable.
 Lucas.

On Oct 20, 2009, at 2:30 PM, Collin Jackson wrote:

Why do web developers need to keep track of which user agents  
support
CSP? I thought CSP was a defense in depth. I really hope people  
don't

use this as their only XSS defense. :)

On Tue, Oct 20, 2009 at 2:25 PM, Lucas Adamski   
wrote:


I'm not sure that providing a modular approach for vendors to  
implemented

pieces of CSP is really valuable to our intended audience (web
developers).
 It will be hard enough for developers to keep track of which  
user agents
support CSP, without requiring a matrix to understand which  
particular
versions of which agents support the mix of CSP features they  
want to

use,
and what it means if a given browser only supports 2 of the 3  
modules

they
want to use.  If this means some more up-front pain for vendors in
implementation costs vs. pushing more complexity to web  
developers, the

former approach seems to be a lot less expensive in the net.
 Lucas.

On Oct 20, 2009, at 1:42 PM, Collin Jackson wrote:

On Tue, Oct 20, 2009 at 1:20 PM, Sid Stamm   
wrote:


While I agree with your points enumerated above, we should be  
really
careful about scope creep and stuffing new goals into an old  
idea.  The

original point of CSP was not to provide a global security
infrastructure for web sites, but to provide content  
restrictions and
help stop XSS (mostly content restrictions).  Rolling all sorts  
of

extra
threats like history sniffing into CSP will make it huge and  
complex,
and for not what was initially desired.  (A complex CSP isn't  
so bad if
it were modular, but I don't think 'wide-reaching' was the  
original aim

for CSP).


I think we're completely in agreement, except that I don't think
making CSP modular is particularly hard. In fact, I think it  
makes the

proposal much more approachable because vendors can implement just
BaseModule (the CSP header syntax) and other modules they like  
such as
XSSModule without feeling like they have to implement the ones  
they

think aren't interesting. And they can experiment with their own
modules without feeling like they're breaking the spec.

One idea that might make a module CSP more approachable for  
vendors is

to have a status page that shows the various modules, like this:
https://wiki.mozilla.org/Security/CSP/Modules
___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security







___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


Re: Comments on the Content Security Policy specification

2009-10-20 Thread Devdatta
I actually think the modular approach is better for the web developer
as the policy is easier to write and understand.

But I do share your concern, Atleast right now, it is pretty easy to
say -- user agents that support XSSModule are protected against XSS
and user agents that support history module are protected against
history enumeration attacks.  Going forward, we want to keep the
separation just as clear and simple.

* This would require very clear and simply stated threat models for
each module. Each module's threats should be (ideally) disjoint.
* A module should be small and complete. We should make it clear why
every part of the module is important for the given threat model. This
would hopefully ensure that browser vendors either implement the whole
module or none of it. (I.E implementing half of a module will give no
security)

I think this breakup of the spec into modules is useful to the
webdevelopers (making it easier to understand) and easier for the
browser vendors to implement.

Regards
Devdatta
___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


Re: Comments on the Content Security Policy specification

2009-10-20 Thread Collin Jackson
It seems to me that thinking ahead would tend to favor the modular
approach, since we're unlikely to guess the most compelling use cases
on the first try, and modules will provide a backwards-compatible
means of evolving the spec to what web authors actually need.

On Tue, Oct 20, 2009 at 2:49 PM, Lucas Adamski  wrote:
> We should think ahead, not just a year or two but to the point that all
> current browsers will be EOL and (just like every other feature that is
> currently in HTML5) this will be widely adopted and reliable.
>  Lucas.
>
> On Oct 20, 2009, at 2:30 PM, Collin Jackson wrote:
>
>> Why do web developers need to keep track of which user agents support
>> CSP? I thought CSP was a defense in depth. I really hope people don't
>> use this as their only XSS defense. :)
>>
>> On Tue, Oct 20, 2009 at 2:25 PM, Lucas Adamski  wrote:
>>>
>>> I'm not sure that providing a modular approach for vendors to implemented
>>> pieces of CSP is really valuable to our intended audience (web
>>> developers).
>>>  It will be hard enough for developers to keep track of which user agents
>>> support CSP, without requiring a matrix to understand which particular
>>> versions of which agents support the mix of CSP features they want to
>>> use,
>>> and what it means if a given browser only supports 2 of the 3 modules
>>> they
>>> want to use.  If this means some more up-front pain for vendors in
>>> implementation costs vs. pushing more complexity to web developers, the
>>> former approach seems to be a lot less expensive in the net.
>>>  Lucas.
>>>
>>> On Oct 20, 2009, at 1:42 PM, Collin Jackson wrote:
>>>
 On Tue, Oct 20, 2009 at 1:20 PM, Sid Stamm  wrote:
>
> While I agree with your points enumerated above, we should be really
> careful about scope creep and stuffing new goals into an old idea.  The
> original point of CSP was not to provide a global security
> infrastructure for web sites, but to provide content restrictions and
> help stop XSS (mostly content restrictions).  Rolling all sorts of
> extra
> threats like history sniffing into CSP will make it huge and complex,
> and for not what was initially desired.  (A complex CSP isn't so bad if
> it were modular, but I don't think 'wide-reaching' was the original aim
> for CSP).

 I think we're completely in agreement, except that I don't think
 making CSP modular is particularly hard. In fact, I think it makes the
 proposal much more approachable because vendors can implement just
 BaseModule (the CSP header syntax) and other modules they like such as
 XSSModule without feeling like they have to implement the ones they
 think aren't interesting. And they can experiment with their own
 modules without feeling like they're breaking the spec.

 One idea that might make a module CSP more approachable for vendors is
 to have a status page that shows the various modules, like this:
 https://wiki.mozilla.org/Security/CSP/Modules
 ___
 dev-security mailing list
 dev-security@lists.mozilla.org
 https://lists.mozilla.org/listinfo/dev-security
>>>
>>>
>
>
___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


Re: Comments on the Content Security Policy specification

2009-10-20 Thread Lucas Adamski
We should think ahead, not just a year or two but to the point that  
all current browsers will be EOL and (just like every other feature  
that is currently in HTML5) this will be widely adopted and reliable.

  Lucas.

On Oct 20, 2009, at 2:30 PM, Collin Jackson wrote:


Why do web developers need to keep track of which user agents support
CSP? I thought CSP was a defense in depth. I really hope people don't
use this as their only XSS defense. :)

On Tue, Oct 20, 2009 at 2:25 PM, Lucas Adamski   
wrote:
I'm not sure that providing a modular approach for vendors to  
implemented
pieces of CSP is really valuable to our intended audience (web  
developers).
 It will be hard enough for developers to keep track of which user  
agents
support CSP, without requiring a matrix to understand which  
particular
versions of which agents support the mix of CSP features they want  
to use,
and what it means if a given browser only supports 2 of the 3  
modules they

want to use.  If this means some more up-front pain for vendors in
implementation costs vs. pushing more complexity to web developers,  
the

former approach seems to be a lot less expensive in the net.
 Lucas.

On Oct 20, 2009, at 1:42 PM, Collin Jackson wrote:


On Tue, Oct 20, 2009 at 1:20 PM, Sid Stamm  wrote:


While I agree with your points enumerated above, we should be  
really
careful about scope creep and stuffing new goals into an old  
idea.  The

original point of CSP was not to provide a global security
infrastructure for web sites, but to provide content restrictions  
and
help stop XSS (mostly content restrictions).  Rolling all sorts  
of extra
threats like history sniffing into CSP will make it huge and  
complex,
and for not what was initially desired.  (A complex CSP isn't so  
bad if
it were modular, but I don't think 'wide-reaching' was the  
original aim

for CSP).


I think we're completely in agreement, except that I don't think
making CSP modular is particularly hard. In fact, I think it makes  
the

proposal much more approachable because vendors can implement just
BaseModule (the CSP header syntax) and other modules they like  
such as

XSSModule without feeling like they have to implement the ones they
think aren't interesting. And they can experiment with their own
modules without feeling like they're breaking the spec.

One idea that might make a module CSP more approachable for  
vendors is

to have a status page that shows the various modules, like this:
https://wiki.mozilla.org/Security/CSP/Modules
___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security





___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


Re: Comments on the Content Security Policy specification

2009-10-20 Thread Collin Jackson
Why do web developers need to keep track of which user agents support
CSP? I thought CSP was a defense in depth. I really hope people don't
use this as their only XSS defense. :)

On Tue, Oct 20, 2009 at 2:25 PM, Lucas Adamski  wrote:
> I'm not sure that providing a modular approach for vendors to implemented
> pieces of CSP is really valuable to our intended audience (web developers).
>  It will be hard enough for developers to keep track of which user agents
> support CSP, without requiring a matrix to understand which particular
> versions of which agents support the mix of CSP features they want to use,
> and what it means if a given browser only supports 2 of the 3 modules they
> want to use.  If this means some more up-front pain for vendors in
> implementation costs vs. pushing more complexity to web developers, the
> former approach seems to be a lot less expensive in the net.
>  Lucas.
>
> On Oct 20, 2009, at 1:42 PM, Collin Jackson wrote:
>
>> On Tue, Oct 20, 2009 at 1:20 PM, Sid Stamm  wrote:
>>>
>>> While I agree with your points enumerated above, we should be really
>>> careful about scope creep and stuffing new goals into an old idea.  The
>>> original point of CSP was not to provide a global security
>>> infrastructure for web sites, but to provide content restrictions and
>>> help stop XSS (mostly content restrictions).  Rolling all sorts of extra
>>> threats like history sniffing into CSP will make it huge and complex,
>>> and for not what was initially desired.  (A complex CSP isn't so bad if
>>> it were modular, but I don't think 'wide-reaching' was the original aim
>>> for CSP).
>>
>> I think we're completely in agreement, except that I don't think
>> making CSP modular is particularly hard. In fact, I think it makes the
>> proposal much more approachable because vendors can implement just
>> BaseModule (the CSP header syntax) and other modules they like such as
>> XSSModule without feeling like they have to implement the ones they
>> think aren't interesting. And they can experiment with their own
>> modules without feeling like they're breaking the spec.
>>
>> One idea that might make a module CSP more approachable for vendors is
>> to have a status page that shows the various modules, like this:
>> https://wiki.mozilla.org/Security/CSP/Modules
>> ___
>> dev-security mailing list
>> dev-security@lists.mozilla.org
>> https://lists.mozilla.org/listinfo/dev-security
>
>
___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


Re: Comments on the Content Security Policy specification

2009-10-20 Thread Lucas Adamski
I'm not sure that providing a modular approach for vendors to  
implemented pieces of CSP is really valuable to our intended audience  
(web developers).  It will be hard enough for developers to keep track  
of which user agents support CSP, without requiring a matrix to  
understand which particular versions of which agents support the mix  
of CSP features they want to use, and what it means if a given browser  
only supports 2 of the 3 modules they want to use.  If this means some  
more up-front pain for vendors in implementation costs vs. pushing  
more complexity to web developers, the former approach seems to be a  
lot less expensive in the net.

  Lucas.

On Oct 20, 2009, at 1:42 PM, Collin Jackson wrote:


On Tue, Oct 20, 2009 at 1:20 PM, Sid Stamm  wrote:

While I agree with your points enumerated above, we should be really
careful about scope creep and stuffing new goals into an old idea.   
The

original point of CSP was not to provide a global security
infrastructure for web sites, but to provide content restrictions and
help stop XSS (mostly content restrictions).  Rolling all sorts of  
extra

threats like history sniffing into CSP will make it huge and complex,
and for not what was initially desired.  (A complex CSP isn't so  
bad if
it were modular, but I don't think 'wide-reaching' was the original  
aim

for CSP).


I think we're completely in agreement, except that I don't think
making CSP modular is particularly hard. In fact, I think it makes the
proposal much more approachable because vendors can implement just
BaseModule (the CSP header syntax) and other modules they like such as
XSSModule without feeling like they have to implement the ones they
think aren't interesting. And they can experiment with their own
modules without feeling like they're breaking the spec.

One idea that might make a module CSP more approachable for vendors is
to have a status page that shows the various modules, like this:
https://wiki.mozilla.org/Security/CSP/Modules
___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


Re: Comments on the Content Security Policy specification

2009-10-20 Thread Devdatta
Hi

Sorry, I didn't read your modular approach proposal before sending the
email.

Cheers
Devdatta

On Oct 20, 2:03 pm, Adam Barth  wrote:
> In the modular approach, this is not true.  You simply send this header:
>
> X-Content-Security-Policy: safe-history
>
> The requirements to remove inline script, eval, etc aren't present
> because you haven't opted into the XSSModule.  You can, of course,
> combine them using this sort of policy:
>
> X-Content-Security-Policy: safe-history, block-xss
>
> but you certainly don't have to.
>
> Adam
>
> On Tue, Oct 20, 2009 at 1:59 PM, Devdatta  wrote:
> > The history enumeration threat is a simple threat with a simple
> > solution. Opting into Safe History protection shouldn't require me to
> > do all the work of opting into CSP. In addition, I don't see any
> > infrastructure that is needed by this feature that is in common with
> > CSP.
>
> > Lets say I am a website adminstrator, and I am concerned about this
> > particular threat . Opting into CSP involves a lot of work -
> > understanding the spec, noting down all the domains that interact
> > everywhere on my site, removing inline scripts and evals and
> > javascript URLs to corrected code etc. etc. My fear is that this will
> > make admins write policies that are too lenient (say with allow-eval)
> > , just to get the safe history feature.
>
> > Cheers
> > Devdatta
>
> > 2009/10/20 Adam Barth :
> >> On Tue, Oct 20, 2009 at 12:50 PM, Devdatta  wrote:
> >>> Regarding , History enumeration -- I don't see why it should be part
> >>> of CSP. A separate header - X-Safe-History can be used.
>
> >> I think one of the goals of CSP is to avoid having one-off HTTP
> >> headers for each threat we'd like to mitigate.  Combining different
> >> directives into a single policy mechanism has advantages:
>
> >> 1) It's easier for web site operators to manage one policy.
> >> 2) The directives can share common infrastructure, like the reporting
> >> facilities.
>
> >> Adam

___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


Re: Comments on the Content Security Policy specification

2009-10-20 Thread Adam Barth
On Tue, Oct 20, 2009 at 1:42 PM, Collin Jackson
 wrote:
> I think we're completely in agreement, except that I don't think
> making CSP modular is particularly hard. In fact, I think it makes the
> proposal much more approachable because vendors can implement just
> BaseModule (the CSP header syntax) and other modules they like such as
> XSSModule without feeling like they have to implement the ones they
> think aren't interesting. And they can experiment with their own
> modules without feeling like they're breaking the spec.

I've factored the BaseModule out of the XSSModule, so it's clear that
you could implement the HistoryModule without the XSSModule.  I'd be
happy to take a crack at breaking up the main CSP spec into modules on
the wiki if you'd like to see what that would look like.  I don't
think it would be that hard.

Adam
___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


Re: Comments on the Content Security Policy specification

2009-10-20 Thread Adam Barth
In the modular approach, this is not true.  You simply send this header:

X-Content-Security-Policy: safe-history

The requirements to remove inline script, eval, etc aren't present
because you haven't opted into the XSSModule.  You can, of course,
combine them using this sort of policy:

X-Content-Security-Policy: safe-history, block-xss

but you certainly don't have to.

Adam


On Tue, Oct 20, 2009 at 1:59 PM, Devdatta  wrote:
> The history enumeration threat is a simple threat with a simple
> solution. Opting into Safe History protection shouldn't require me to
> do all the work of opting into CSP. In addition, I don't see any
> infrastructure that is needed by this feature that is in common with
> CSP.
>
> Lets say I am a website adminstrator, and I am concerned about this
> particular threat . Opting into CSP involves a lot of work -
> understanding the spec, noting down all the domains that interact
> everywhere on my site, removing inline scripts and evals and
> javascript URLs to corrected code etc. etc. My fear is that this will
> make admins write policies that are too lenient (say with allow-eval)
> , just to get the safe history feature.
>
> Cheers
> Devdatta
>
> 2009/10/20 Adam Barth :
>> On Tue, Oct 20, 2009 at 12:50 PM, Devdatta  wrote:
>>> Regarding , History enumeration -- I don't see why it should be part
>>> of CSP. A separate header - X-Safe-History can be used.
>>
>> I think one of the goals of CSP is to avoid having one-off HTTP
>> headers for each threat we'd like to mitigate.  Combining different
>> directives into a single policy mechanism has advantages:
>>
>> 1) It's easier for web site operators to manage one policy.
>> 2) The directives can share common infrastructure, like the reporting
>> facilities.
>>
>> Adam
>>
>
___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


Re: Comments on the Content Security Policy specification

2009-10-20 Thread Collin Jackson
On Tue, Oct 20, 2009 at 1:20 PM, Sid Stamm  wrote:
> While I agree with your points enumerated above, we should be really
> careful about scope creep and stuffing new goals into an old idea.  The
> original point of CSP was not to provide a global security
> infrastructure for web sites, but to provide content restrictions and
> help stop XSS (mostly content restrictions).  Rolling all sorts of extra
> threats like history sniffing into CSP will make it huge and complex,
> and for not what was initially desired.  (A complex CSP isn't so bad if
> it were modular, but I don't think 'wide-reaching' was the original aim
> for CSP).

I think we're completely in agreement, except that I don't think
making CSP modular is particularly hard. In fact, I think it makes the
proposal much more approachable because vendors can implement just
BaseModule (the CSP header syntax) and other modules they like such as
XSSModule without feeling like they have to implement the ones they
think aren't interesting. And they can experiment with their own
modules without feeling like they're breaking the spec.

One idea that might make a module CSP more approachable for vendors is
to have a status page that shows the various modules, like this:
https://wiki.mozilla.org/Security/CSP/Modules
___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


Re: Comments on the Content Security Policy specification

2009-10-20 Thread Sid Stamm
On 10/20/09 12:58 PM, Adam Barth wrote:
> I think one of the goals of CSP is to avoid having one-off HTTP
> headers for each threat we'd like to mitigate.  Combining different
> directives into a single policy mechanism has advantages:
> 
> 1) It's easier for web site operators to manage one policy.
> 2) The directives can share common infrastructure, like the reporting
> facilities.

While I agree with your points enumerated above, we should be really
careful about scope creep and stuffing new goals into an old idea.  The
original point of CSP was not to provide a global security
infrastructure for web sites, but to provide content restrictions and
help stop XSS (mostly content restrictions).  Rolling all sorts of extra
threats like history sniffing into CSP will make it huge and complex,
and for not what was initially desired.  (A complex CSP isn't so bad if
it were modular, but I don't think 'wide-reaching' was the original aim
for CSP).

Brandon, Gerv, step in and correct me if I'm wrong -- you were working
on this long before me -- but I want to be really careful if we're going
to start changing the goals of this project.  If we want to come up with
something extensible and wide-reaching, we should probably step back and
seriously overhaul the design.

-Sid
___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


Re: Comments on the Content Security Policy specification

2009-10-20 Thread Adam Barth
On Tue, Oct 20, 2009 at 12:50 PM, Devdatta  wrote:
> Regarding , History enumeration -- I don't see why it should be part
> of CSP. A separate header - X-Safe-History can be used.

I think one of the goals of CSP is to avoid having one-off HTTP
headers for each threat we'd like to mitigate.  Combining different
directives into a single policy mechanism has advantages:

1) It's easier for web site operators to manage one policy.
2) The directives can share common infrastructure, like the reporting
facilities.

Adam
___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


Re: Comments on the Content Security Policy specification

2009-10-20 Thread Adam Barth
On Tue, Oct 20, 2009 at 12:47 PM, Mike Ter Louw  wrote:
> The threat model of HistoryModule, as currently defined, seems to be
> precisely the threat model that would be addressed by a similar module
> implementing a per-origin cache partitioning scheme to defeat history timing
> attacks.

Good point.  I've added cache timing as an open issue at the bottom of
the HistoryModule wiki page.

> If these are to be kept as separate modules, then perhaps the threat model
> should be more tightly scoped, and directive names should be specific to the
> features they enable?

It's somewhat unclear when to break things into separate modules, but
having one module per threat seems to make sense.  The visited link
issue and the cache timing issue seem related enough (i.e., both about
history stealing) to be in the same module.

Adam
___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


Re: Comments on the Content Security Policy specification

2009-10-20 Thread Devdatta
> class) can give people power to do surprising things (e.g. internal
> network ping sweeping, user history enumeration respectively).

Isn't the ping sweeping threat already taken care of by CSP? No
requests to internal networks will be honored as they won't be allowed
by the policy. (although its not a threat present in the threat model
for CSP )

Regarding , History enumeration -- I don't see why it should be part
of CSP. A separate header - X-Safe-History can be used.

Cheers
Devdatta

On Oct 19, 6:43 am, Johnathan Nightingale  wrote:
> On 19-Oct-09, at 7:34 AM, Gervase Markham wrote:
>
> > On 15/10/09 22:20, Brandon Sterne wrote:
> >> IOW, we need to decide if webpage defacement via injected style is in
> >> the treat model for CSP and, if so, then we need to do B.
>
> > Is it just about defacement, or is it also about the fact that CSS  
> > can bring in behaviours etc?
>
> > If it's about defacement, then there's no set of "non-dangerous  
> > stylesheet constructs", and you can ignore my C. I think that,  
> > without executing JS code support, the successful attacks you could  
> > mount using CSS are limited. I guess you might put a notice on the  
> > bank website: "Urgent! Call this number and give them all your  
> > personal info!"...
>
> Not as limited as you might like. Remember that even apparently non-
> dangerous constructs (e.g. background-image, the :visited pseudo  
> class) can give people power to do surprising things (e.g. internal  
> network ping sweeping, user history enumeration respectively).
>
> J
>
> ---
> Johnathan Nightingale
> Human Shield
> john...@mozilla.com

___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


Re: Comments on the Content Security Policy specification

2009-10-20 Thread Mike Ter Louw

Collin Jackson wrote:

If you want to make a module that prevents history sniffing completely
against specific sites and avoids assuming the user never interacts
with a bad site, you could have a CSP module that allows a server to
specify whether its history entries can be treated as visited by other
origins. Sites concerned about user privacy would then have control
over whether other sites could detect that they've been visited. A
similar module could be used for cross-origin cache loads to address
timing attacks.


Collin Jackson wrote:

I put together a brief description of the history module proposal on the wiki:

https://wiki.mozilla.org/Security/CSP/HistoryModule


The threat model of HistoryModule, as currently defined, seems to be 
precisely the threat model that would be addressed by a similar module 
implementing a per-origin cache partitioning scheme to defeat history 
timing attacks.


If these are to be kept as separate modules, then perhaps the threat 
model should be more tightly scoped, and directive names should be 
specific to the features they enable?


I like the idea of modularizing CSP.

Mike
___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


Re: Comments on the Content Security Policy specification

2009-10-20 Thread Collin Jackson
I put together a brief description of the history module proposal on the wiki:

https://wiki.mozilla.org/Security/CSP/HistoryModule

On Tue, Oct 20, 2009 at 10:03 AM, Collin Jackson
 wrote:
> If you want to make a module that prevents history sniffing completely
> against specific sites and avoids assuming the user never interacts
> with a bad site, you could have a CSP module that allows a server to
> specify whether its history entries can be treated as visited by other
> origins. Sites concerned about user privacy would then have control
> over whether other sites could detect that they've been visited. A
> similar module could be used for cross-origin cache loads to address
> timing attacks.
>
> On Tue, Oct 20, 2009 at 6:26 AM, Johnathan Nightingale
>  wrote:
>> On 19-Oct-09, at 5:39 PM, Adam Barth wrote:
>>>
>>> On Mon, Oct 19, 2009 at 6:43 AM, Johnathan Nightingale
>>>  wrote:

 Not as limited as you might like. Remember that even apparently
 non-dangerous constructs (e.g. background-image, the :visited pseudo
 class)
 can give people power to do surprising things (e.g. internal network ping
 sweeping, user history enumeration respectively).
>>>
>>> I'm not arguing for or against providing the ability to
>>> block-inline-css, but keep in mind that an attacker can do all those
>>> things as soon as you visit attacker.com.
>>
>> Yeah, I think you're absolutely right that CSP is primarily about preventing
>> attackers from exploiting your browser's trust relationship with victim.com,
>> and the examples I offered are (for lack of a better term), victim-agnostic.
>> They don't steal victim.com credentials or cause unwanted changes to, or
>> transactions with, your victim.com presence.
>>
>> I do think, though, that a helpful secondary effect of CSP is that it
>> reduces attackers' ability to amplify the effect of their attacks. You're
>> right that it doesn't take much to get users to click on a link, but I think
>> it is nevertheless the case that a good history enumerator or ping sweep
>> which happens in the background while you're reading a NYTimes article will
>> have a substantially higher success rate than a link in the comment section
>> that says "Click here for free goodies." Basically by definition,
>> link-clickers are a subset of your total prospective victim pool.
>>
>> I think this is more specifically what makes me feel like there's still
>> value to locking down all inline styling, or at least providing that
>> facility, but I appreciate you forcing me to refine my thinking a little
>> more.
>>
>>>  In the past, I've found it helpful to simply assume the
>>> user is always visiting attacker.com in some background tab.  After
>>> all, Firefox is supposed to let you view untrusted web sites securely.
>>
>> Yes, absolutely so. We should continue to try to bend smarts towards fixing
>> :visited and other nasty sleights-of-hand. But the one course of work
>> doesn't preclude the other (and I don't think you were saying that it did).
>>
>> Johnathan
>>
>> ---
>> Johnathan Nightingale
>> Human Shield
>> john...@mozilla.com
>>
>>
>>
>> ___
>> dev-security mailing list
>> dev-security@lists.mozilla.org
>> https://lists.mozilla.org/listinfo/dev-security
>>
>
___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


Re: Comments on the Content Security Policy specification

2009-10-20 Thread Collin Jackson
If you want to make a module that prevents history sniffing completely
against specific sites and avoids assuming the user never interacts
with a bad site, you could have a CSP module that allows a server to
specify whether its history entries can be treated as visited by other
origins. Sites concerned about user privacy would then have control
over whether other sites could detect that they've been visited. A
similar module could be used for cross-origin cache loads to address
timing attacks.

On Tue, Oct 20, 2009 at 6:26 AM, Johnathan Nightingale
 wrote:
> On 19-Oct-09, at 5:39 PM, Adam Barth wrote:
>>
>> On Mon, Oct 19, 2009 at 6:43 AM, Johnathan Nightingale
>>  wrote:
>>>
>>> Not as limited as you might like. Remember that even apparently
>>> non-dangerous constructs (e.g. background-image, the :visited pseudo
>>> class)
>>> can give people power to do surprising things (e.g. internal network ping
>>> sweeping, user history enumeration respectively).
>>
>> I'm not arguing for or against providing the ability to
>> block-inline-css, but keep in mind that an attacker can do all those
>> things as soon as you visit attacker.com.
>
> Yeah, I think you're absolutely right that CSP is primarily about preventing
> attackers from exploiting your browser's trust relationship with victim.com,
> and the examples I offered are (for lack of a better term), victim-agnostic.
> They don't steal victim.com credentials or cause unwanted changes to, or
> transactions with, your victim.com presence.
>
> I do think, though, that a helpful secondary effect of CSP is that it
> reduces attackers' ability to amplify the effect of their attacks. You're
> right that it doesn't take much to get users to click on a link, but I think
> it is nevertheless the case that a good history enumerator or ping sweep
> which happens in the background while you're reading a NYTimes article will
> have a substantially higher success rate than a link in the comment section
> that says "Click here for free goodies." Basically by definition,
> link-clickers are a subset of your total prospective victim pool.
>
> I think this is more specifically what makes me feel like there's still
> value to locking down all inline styling, or at least providing that
> facility, but I appreciate you forcing me to refine my thinking a little
> more.
>
>>  In the past, I've found it helpful to simply assume the
>> user is always visiting attacker.com in some background tab.  After
>> all, Firefox is supposed to let you view untrusted web sites securely.
>
> Yes, absolutely so. We should continue to try to bend smarts towards fixing
> :visited and other nasty sleights-of-hand. But the one course of work
> doesn't preclude the other (and I don't think you were saying that it did).
>
> Johnathan
>
> ---
> Johnathan Nightingale
> Human Shield
> john...@mozilla.com
>
>
>
> ___
> dev-security mailing list
> dev-security@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-security
>
___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


Re: Comments on the Content Security Policy specification

2009-10-20 Thread Johnathan Nightingale

On 19-Oct-09, at 5:39 PM, Adam Barth wrote:

On Mon, Oct 19, 2009 at 6:43 AM, Johnathan Nightingale
 wrote:

Not as limited as you might like. Remember that even apparently
non-dangerous constructs (e.g. background-image, the :visited  
pseudo class)
can give people power to do surprising things (e.g. internal  
network ping

sweeping, user history enumeration respectively).


I'm not arguing for or against providing the ability to
block-inline-css, but keep in mind that an attacker can do all those
things as soon as you visit attacker.com.


Yeah, I think you're absolutely right that CSP is primarily about  
preventing attackers from exploiting your browser's trust relationship  
with victim.com, and the examples I offered are (for lack of a better  
term), victim-agnostic. They don't steal victim.com credentials or  
cause unwanted changes to, or transactions with, your victim.com  
presence.


I do think, though, that a helpful secondary effect of CSP is that it  
reduces attackers' ability to amplify the effect of their attacks.  
You're right that it doesn't take much to get users to click on a  
link, but I think it is nevertheless the case that a good history  
enumerator or ping sweep which happens in the background while you're  
reading a NYTimes article will have a substantially higher success  
rate than a link in the comment section that says "Click here for free  
goodies." Basically by definition, link-clickers are a subset of your  
total prospective victim pool.


I think this is more specifically what makes me feel like there's  
still value to locking down all inline styling, or at least providing  
that facility, but I appreciate you forcing me to refine my thinking a  
little more.



  In the past, I've found it helpful to simply assume the
user is always visiting attacker.com in some background tab.  After
all, Firefox is supposed to let you view untrusted web sites securely.


Yes, absolutely so. We should continue to try to bend smarts towards  
fixing :visited and other nasty sleights-of-hand. But the one course  
of work doesn't preclude the other (and I don't think you were saying  
that it did).


Johnathan

---
Johnathan Nightingale
Human Shield
john...@mozilla.com



___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


Re: Comments on the Content Security Policy specification

2009-10-19 Thread Adam Barth
On Mon, Oct 19, 2009 at 6:43 AM, Johnathan Nightingale
 wrote:
> Not as limited as you might like. Remember that even apparently
> non-dangerous constructs (e.g. background-image, the :visited pseudo class)
> can give people power to do surprising things (e.g. internal network ping
> sweeping, user history enumeration respectively).

I'm not arguing for or against providing the ability to
block-inline-css, but keep in mind that an attacker can do all those
things as soon as you visit attacker.com.

There are many ways for the attacker to convince the user to visit
attacker.com.  In the past, I've found it helpful to simply assume the
user is always visiting attacker.com in some background tab.  After
all, Firefox is supposed to let you view untrusted web sites securely.

Adam
___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


Re: Comments on the Content Security Policy specification

2009-10-19 Thread Johnathan Nightingale

On 19-Oct-09, at 7:34 AM, Gervase Markham wrote:

On 15/10/09 22:20, Brandon Sterne wrote:

IOW, we need to decide if webpage defacement via injected style is in
the treat model for CSP and, if so, then we need to do B.


Is it just about defacement, or is it also about the fact that CSS  
can bring in behaviours etc?


If it's about defacement, then there's no set of "non-dangerous  
stylesheet constructs", and you can ignore my C. I think that,  
without executing JS code support, the successful attacks you could  
mount using CSS are limited. I guess you might put a notice on the  
bank website: "Urgent! Call this number and give them all your  
personal info!"...



Not as limited as you might like. Remember that even apparently non- 
dangerous constructs (e.g. background-image, the :visited pseudo  
class) can give people power to do surprising things (e.g. internal  
network ping sweeping, user history enumeration respectively).


J

---
Johnathan Nightingale
Human Shield
john...@mozilla.com



___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


Re: Comments on the Content Security Policy specification

2009-10-19 Thread Gervase Markham

On 15/10/09 22:20, Brandon Sterne wrote:

I think we face a decision:
A) we continue to allow inline styles and make external stylesheet loads
be subject to the "allow" policy, or
B) we disallow inline style and create an opt-in mechanism similar to
the inline-script option [2]


C) We do A, but disallow entirely some dangerous stylesheet constructs.


IOW, we need to decide if webpage defacement via injected style is in
the treat model for CSP and, if so, then we need to do B.


Is it just about defacement, or is it also about the fact that CSS can 
bring in behaviours etc?


If it's about defacement, then there's no set of "non-dangerous 
stylesheet constructs", and you can ignore my C. I think that, without 
executing JS code support, the successful attacks you could mount using 
CSS are limited. I guess you might put a notice on the bank website: 
"Urgent! Call this number and give them all your personal info!"...


Gerv
___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


Re: Comments on the Content Security Policy specification

2009-10-15 Thread Brandon Sterne
On 07/30/2009 07:06 AM, Gervase Markham wrote:
> On 29/07/09 23:23, Ian Hickson wrote:
>>   * Combine style-src and font-src
> 
> That makes sense.

I agree.  @font-face has to come from CSS which is already subject to
style-src restrictions.  I don't think there are any practical attacks
we are preventing by allowing a site to say "style can come from 
but not fonts".  I propose we combine the two directives and will do so
if there aren't objections.

Separately, there is another style-src related problem with the current
model [1]:

style-src restricts which sources are valid for externally linked
stylesheets, but all inline style is still allowed.  The current model
offers no real protection against style injected by an attacker.  If
anything, it provides a way for sites to prevent outbound requests
(CSRF) via injected  tags.  But if this is the
only protection we are providing, we could easily have stylesheets be
restricted to the "allow" list.

I think we face a decision:
A) we continue to allow inline styles and make external stylesheet loads
be subject to the "allow" policy, or
B) we disallow inline style and create an opt-in mechanism similar to
the inline-script option [2]

IOW, we need to decide if webpage defacement via injected style is in
the treat model for CSP and, if so, then we need to do B.

Thoughts?

-Brandon

[1] https://wiki.mozilla.org/Security/CSP/Spec#style-src
[2] https://wiki.mozilla.org/Security/CSP/Spec#options
___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


Re: Comments on the Content Security Policy specification

2009-09-03 Thread Gervase Markham

On 12/08/09 00:11, Ian Hickson wrote:

I think in almost all cases, multiple headers will be a sign of an attack
or error, not the sign of cooperation.


OK. I think that's a fair challenge. Can someone come up with a 
plausible and specific scenario where multiple headers would be useful?


The ones that come immediately to my mind are where the ISP would want a 
strict general policy but might allow customers to loosen it on a 
site-by-site basis (e.g. allowing media from a particular site). But 
that can't be achieved by multiple headers anyway, because you get the 
permissions intersection, not the union.



How do you expect them to do it?


Copy-and-paste from sites that didn't understand the spec, for example
copying from w3schools.com, and then modifying it more or less at random.
Or copy-and-paste from some other site, without understanding what they're
doing.


The fix for that seems to me to be good error reporting, both to the 
server and in the browser. If their site doesn't work, we want them to 
know why. If it does work, but it's because their policy is far too lax, 
then they've gained little benefit - but if you try and deploy 
technologies you don't understand, the best you can hope for is not to 
shoot yourself in the foot.



Making the spec shorter is a pretty important part of simplifying the
language. The simpler the spec, the more people will be able to understand
it, the fewer mistakes will occur.


I don't think people should be writing policies based on reading the 
spec. People don't write HTML based on reading the HTML 4.01 spec - do 
they? A spec has to give as much space to error conditions and corner 
cases as it does the important, mainstream stuff. Whereas, a "How to 
write a CSP policy" document can just talk about best practice and 
common situations.


Gerv
___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


Re: Comments on the Content Security Policy specification

2009-08-11 Thread Ian Hickson
On Thu, 30 Jul 2009, Gervase Markham wrote:
> On 29/07/09 23:23, Ian Hickson wrote:
> >   * Remove external policy files.
> 
> I'm not sure how that's a significant simplification; the syntax is 
> exactly the same just with an extra level of indirection, and if that 
> makes things too complicated for you, don't use them.

Complexity affects everyone, not just those who use it.


> >   * If there are multiple headers, fail to fully closed.
> 
> How is this a simplification? It means that if there are multiple people 
> (e.g. an ISP and their customer) who want input into the policy, the ISP 
> or the customer has to manually merge and intersect the policies to make 
> one header, rather than the browser doing it. In other words, the 
> intersection code gets written 1000 times, often badly, rather than 
> once, hopefully right.

I think in almost all cases, multiple headers will be a sign of an attack 
or error, not the sign of cooperation.


> >   * Combine img-src, media-src, object-src, frame-src
> 
> But then the combined lotsofthings-src would have to be set to the 
> intersection of all the above, which means e.g. far more potential 
> sources of objects (in particular) than you might otherwise want. 
> "object-src: none" sounds to me like a great idea for a load of sites 
> which also want to display images.
> 
> OTOH, "lotsofthings-src: host1.com host2.com host3.com" would still be a 
> big improvement over now, where we effectively have "lotsofthings-src: 
> all".

I think simplification is a win here, even if it makes the language less 
expressive. Obviously, it's a judgement call. I'm just letting you know 
what I think is needed to make this good.


> > I'm concerned that people will eventually do something that causes the 
> > entire policy to be ignored, and not realise it ("yay, I fixed the 
> > problem") or will do something that other people will then copy and 
> > paste without understanding ("well this policy worked for that site... 
> > yay, now I'm secure").
> 
> These would be issues with any possible formulation.

It's dramatically reduced if the format fails safe and is of minimal 
expressiveness.


> > > I imagine sites starting with the simplest policy, e.g. "allow 
> > > self", and then progressively adding policy as required to let the 
> > > site function properly.  This will result in more-or-less minimal 
> > > policies being developed, which is obviously best from a security 
> > > perspective.
> > 
> > This is maybe how competentely written sites will do it. It's not how 
> > most sites will do it.
> 
> How do you expect them to do it?

Copy-and-paste from sites that didn't understand the spec, for example 
copying from w3schools.com, and then modifying it more or less at random. 
Or copy-and-paste from some other site, without understanding what they're 
doing.


> That's like saying "some people will start their Ruby on Rails web 
> application by writing it full of XSS holes, and then try and remove 
> them later". This may be true, but we don't blame Ruby on Rails. Do we?

Ruby on Rails isn't purporting to be a standard.


> > You are assuming the person reading all this is familiar with security 
> > concepts, with Web technologies, with "whitelists" and wildcards and 
> > so on. This is a fundamentally flawed assumption.
> 
> I don't see how we could change CSP to make it understandable to people 
> unfamiliar with Web technologies and wildcards. I think almost everyone 
> is familiar with the concept of a whitelist, but perhaps under a 
> different name. Any suggestions?

I think the dramatic simplification I described would be a good start. I'd 
have to look at the result before I could really say what else could be 
done to make the language safer for novices.


On Thu, 30 Jul 2009, Daniel Veditz wrote:
> > 
> >  * Drop the "allow" directive, default all the directives to "self"
> 
> "allow" is an important simplification.

I don't think that making policies shorter is the same as simplification. 
In fact, when it comes to security policies, I think simplicity 
corresponds almost directly to how explicit the language is. Any 
implications can end up tripping up authors, IMHO.


> > We could remove many of the directives, for example. That would make 
> > it much shorter.
> 
> Make what shorter? The spec, certainly, but probably not the typical 
> policy. And if a complex policy needed those directives then eliminating 
> them hasn't really helped.

Making the spec shorter is a pretty important part of simplifying the 
language. The simpler the spec, the more people will be able to understand 
it, the fewer mistakes will occur.

-- 
Ian Hickson   U+1047E)\._.,--,'``.fL
http://ln.hixie.ch/   U+263A/,   _.. \   _\  ;`._ ,.
Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'
___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.

Re: Comments on the Content Security Policy specification

2009-08-11 Thread Gervase Markham

On 10/08/09 22:56, Sid Stamm wrote:

I tried to find in my notes and email archives how exactly we decided to
move the keywords out, and couldn't find anything specific. Anyway, I
added an "options" directive to the spec[0] that captures this change. I
also added a thread on the wiki discussion page[1].


I think we agreed to make them standalone top-level directives. 
"Options" is a vague word and it doesn't make it clear that these are 
script-related.


Gerv
___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


Re: Comments on the Content Security Policy specification

2009-08-11 Thread Gervase Markham

On 10/08/09 19:50, Brandon Sterne wrote:

Working examples will be forthcoming as soon as we have Firefox builds
available which contain CSP.


We shouldn't need to wait for working builds to try and work out the 
policies, should we? Although perhaps it would be a lot easier if you 
could test them via trial and error.


Here's some possibilities for www.mozilla.org, based on the home page - 
which does repost RSS headlines, so there's at least the theoretical 
possibility of an injection. To begin with:


allow self; options inline-script;

would be a perfectly reasonable policy. The inline-script is required 
because the Urchin tracker script appears to need kicking off using a 
single line of inline script. If this could be avoided, you could remove 
that second directive.


A tighter alternative would be:

allow none; options inline-script; img-src self; script-src self; 
style-src self;


I used the Page Info tab on the home page to get lists of image URLs in 
some categories. An add-on which did this for all CSP categories and 
provided other help would definitely be useful.


(Note that mozilla.org is going through a redesign, so the new version 
might require a different policy.)


I must say I do find myself automatically wanting to use colons (like 
CSS) or equals signs in these directives...


Gerv
___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


Re: Comments on the Content Security Policy specification

2009-08-10 Thread Sid Stamm

On 8/10/09 5:00 AM, Gervase Markham wrote:

On 30/07/09 18:51, Daniel Veditz wrote:

* Move "inline" and "eval" keywords from "script-src" to a separate
directive, so that all the -src directives have the same syntax


I've argued that too and I think we agreed, although I don't see that
reflected in the spec or on the talk page.


Yes, we did agree this.


I tried to find in my notes and email archives how exactly we decided to 
move the keywords out, and couldn't find anything specific.  Anyway, I 
added an "options" directive to the spec[0] that captures this change. 
I also added a thread on the wiki discussion page[1].


Cheers,
Sid

[0]https://wiki.mozilla.org/Security/CSP/Spec#options
[1]https://wiki.mozilla.org/Talk:Security/CSP/Spec#Option_.28not_source.29_Keywords_.28OPEN.29
___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


Re: Comments on the Content Security Policy specification

2009-08-10 Thread Brandon Sterne
On 8/10/09 10:27 AM, TO wrote:
> I'd like to ask again to
> see some real-world policy examples.  I suggested CNN last time, but
> if something like Twitter would be an easier place to start, maybe we
> could see that one?  Or see the example for mozilla.org, maybe?  Or
> even just some toy problems to start, working up to real-world stuff
> later.

Working examples will be forthcoming as soon as we have Firefox builds
available which contain CSP.  Absent the working builds, do you think
it's valuable for people to compare page source for an existing popular
site and a CSP-converted version?

> I'm asking for a reason: I think the process of trying to determine
> good policy for some real sites will give a lot of insight into where
> CSP may be too complex, or equally, where it's unable to be
> sufficiently precise.  And it provides a bit of a usability test:
> remember that initially, many people wanting to use CSP will be
> applying it to existing sites as opposed to designing sites such that
> they work well with CSP.
> 
> People will want examples eventually as part of the documentation for
> CSP because, as has been pointed out, they're more likely to cut and
> paste from these examples than to generate policy from scratch.  So
> let's see what sort of examples people will be cutting and pasting
> from!
> 
>  Terri
> 
> PS - Full Disclosure: I'm one of the authors of a much simpler system
> with similar goals, called SOMA: http://www.ccsl.carleton.ca/software/soma/
> so obviously I'm a big believer in simpler policies.  We presented
> SOMA last year at ACM CCS, so I promise this isn't just another system
> from some random internet denizen -- This is peer-reviewed work from
> professional security researchers.

I read through your ACM CCS slides and the project whitepaper and SOMA
doesn't appear to address the XSS vector of inline scripts in any way.
Have I overlooked some major aspect of SOMA, or does the model only
provide controls for remotely-included content?

-Brandon
___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


Re: Comments on the Content Security Policy specification

2009-08-10 Thread TO
On a related note (to Ian's initial message), I'd like to ask again to
see some real-world policy examples.  I suggested CNN last time, but
if something like Twitter would be an easier place to start, maybe we
could see that one?  Or see the example for mozilla.org, maybe?  Or
even just some toy problems to start, working up to real-world stuff
later.

I'm asking for a reason: I think the process of trying to determine
good policy for some real sites will give a lot of insight into where
CSP may be too complex, or equally, where it's unable to be
sufficiently precise.  And it provides a bit of a usability test:
remember that initially, many people wanting to use CSP will be
applying it to existing sites as opposed to designing sites such that
they work well with CSP.

People will want examples eventually as part of the documentation for
CSP because, as has been pointed out, they're more likely to cut and
paste from these examples than to generate policy from scratch.  So
let's see what sort of examples people will be cutting and pasting
from!

 Terri

PS - Full Disclosure: I'm one of the authors of a much simpler system
with similar goals, called SOMA: http://www.ccsl.carleton.ca/software/soma/
so obviously I'm a big believer in simpler policies.  We presented
SOMA last year at ACM CCS, so I promise this isn't just another system
from some random internet denizen -- This is peer-reviewed work from
professional security researchers.
___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


Re: Comments on the Content Security Policy specification

2009-08-10 Thread Gervase Markham

On 30/07/09 18:51, Daniel Veditz wrote:

  * Remove external policy files.


I'd be happy to drop those, personally. Some people have expressed
bandwidth concerns that would be solved by a cacheable policy file.


Can we quantify that? At this stage, it's looking like most policies 
won't be significantly longer than a URL. And the extra RTT on first 
load, as Hixie says, means that big sites may well choose not to use 
them. So if removing it reduces implementation and spec complexity, why 
don't we do that? At least for the first "X-" version.



  * Move "inline" and "eval" keywords from "script-src" to a separate
directive, so that all the -src directives have the same syntax


I've argued that too and I think we agreed, although I don't see that
reflected in the spec or on the talk page.


Yes, we did agree this.

Gerv
___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


Re: Comments on the Content Security Policy specification

2009-07-30 Thread Anne van Kesteren
On Thu, 30 Jul 2009 19:51:45 +0200, Daniel Veditz  wrote:
> Ian Hickson wrote:
>>> If a large site such as Twitter were to implement it,
>>> that's millions of users protected that otherwise wouldn't be.
>>
>> Assuming they got it right.
>
> If they don't some researcher gets an easy conference talk out of
> bypassing the restrictions and poking fun at them, and then it gets
> fixed. The sites most likely to use and benefit from CSP are the ones
> most likely to be closely watched.

I seriously doubt that. I was at a conference in Portugal where a major ISP got 
pointed out the enormous amounts of holes they had which makes me think that 
given the severity of the problem (that and Rasmus Lerdorf indicating this was 
nothing new) it needs a rather simple solution because authors will not get it. 
They are not informed about all the various attacks that can happen on sites. 
Not at all. And this is not surprising given the vast complexity of the Web 
platform.

(Tne conference was a few months ago.)


-- 
Anne van Kesteren
http://annevankesteren.nl/
___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


Re: Comments on the Content Security Policy specification

2009-07-30 Thread Daniel Veditz
Ian Hickson wrote:
>> If a large site such as Twitter were to implement it, 
>> that's millions of users protected that otherwise wouldn't be.
> 
> Assuming they got it right.

If they don't some researcher gets an easy conference talk out of
bypassing the restrictions and poking fun at them, and then it gets
fixed. The sites most likely to use and benefit from CSP are the ones
most likely to be closely watched.

> I think that something like CSP can definitely be useful. I just think it 
> has to be orders of magnitude simpler.

That's what stalled "Content-Restrictions", and nothing simpler came out
of it. As-is, with all its complications, it has gotten favorable
interest from enough of the right people that it's worth continuing the
experiment.

> Here are some suggestions for simplification:
> 
>  * Remove external policy files.

I'd be happy to drop those, personally. Some people have expressed
bandwidth concerns that would be solved by a cacheable policy file.

>  * Combine style-src and font-src

Ian, meet Eric; Eric meet Ian -- you two work it out. If we don't get
agreement I'd tend to go the way that makes it more likely other browser
vendors will adopt the spec.

>  * Drop the "allow" directive, default all the directives to "self"

"allow" is an important simplification. A fairly simple site (but not
single host) could well use a policy like "CSP: allow *.mydomain.com"
wheras with your proposed simplification they would have to enter each
term of each web technology they use just so they can duplicate
"*.mydomain.com".

I strongly encourage sites to use simple "allow " policies, and
only get into the others if they want to disable certain technologies or
specifically make relax the restrictions on something relatively safe
like img-src.

>  * Move "inline" and "eval" keywords from "script-src" to a separate 
>directive, so that all the -src directives have the same syntax

I've argued that too and I think we agreed, although I don't see that
reflected in the spec or on the talk page.

>> Or should we do nothing and expect site authors to write correct and 
>> safe PHP+HTML+JavaScript as it stands. CSP seems far less complicated 
>> than the things authors already are expected to understand.
> 
> Authors get the things authors already are expected to understand wrong 
> all the time.

Whatever we create those guys are going to get wrong. I'd rather focus
on what features are useful and necessary for the ones who are able to
get it right.

> I'm concerned that people will eventually do something that causes the 
> entire policy to be ignored, and not realise it ("yay, I fixed the 
> problem") or will do something that other people will then copy and paste 
> without understanding ("well this policy worked for that site... yay, now 
> I'm secure").

Don't know that any CSP formulation would help prevent that. Not even if
we simplified it to the point of uselessness for complex sites.

 We are not creating this tool for naive, untrained people.
>>> Naive, untrained people are who is going to use it.
>> Yes, but we're really trying to protect the millions of users who visit 
>> Google, Yahoo, PayPal, banks, etc, and hopefully those kinds of 
>> high-traffic sites are run by smart people (yes, I am being naive).
> 
> It doesn't matter who you are trying to protect. This _will_ be used by 
> naive, untrained people, and so we have to make sure it works for them.

"If you make something idiot-proof they'll just make a better idiot"
comes to mind. Or perhaps "Build a system that even a fool can use, and
only a fool will want to use it." George Bernard Shaw (Shaw? Really?)

> We could remove many of the directives, for example. That would make it 
> much shorter.

Make what shorter? The spec, certainly, but probably not the typical
policy. And if a complex policy needed those directives then eliminating
them hasn't really helped.

Frankly we're going to resolve this as a mental exercise. Feedback from
people trying to use a working X-CSP implementation will be more
valuable than our guesses about how people will use it. That feedback
will go into the non-X- version of the spec.

>> Using a policy file and having a different one for every page would be 
>> horrid, but what would be the problem with having a cachable policy file 
>> per service? Only the user's initial visit would suffer.
> 
> Making the user's initial visit suffer wouldn't be acceptable to Google, 
> for several reasons; first, it seems that far more visits than just the 
> "initial" visit involve cache misses, and second, the first visit is the 
> most important one in terms of having a good (= fast) user experience.

That's good feedback. However, the ability to use a policy file doesn't
mean you'd have to.

>> If a site hosts all its own
>> content then a policy of "X-Content-Security-Policy: allow self" will
>> suffice and will provide all the XSS protection out of the box.
> 
> It will also break inline scripts, analytics, an

Re: Comments on the Content Security Policy specification

2009-07-30 Thread Bil Corry
Gervase Markham wrote on 7/30/2009 9:06 AM: 
> On 29/07/09 23:23, Ian Hickson wrote:
>>   * Remove external policy files.
> 
> I'm not sure how that's a significant simplification; the syntax is
> exactly the same just with an extra level of indirection, and if that
> makes things too complicated for you, don't use them.

If both a policy definition and a policy-uri field are present, CSP fails 
closed.  Not allowing external policy files means avoiding this issue entirely 
-- one less point of potential failure.

That said, the external policy file may actually make CSP easier to deploy for 
some organizations.  If authors are responsible for including the CSP header 
via a dynamic language, but another person is responsible for 
creating/maintaining the actual CSP policy definitions, then having them in 
multiple external policy files may make it easier to separate the duties.


>>   * If there are multiple headers, fail to fully closed.
> 
> How is this a simplification? It means that if there are multiple people
> (e.g. an ISP and their customer) who want input into the policy, the ISP
> or the customer has to manually merge and intersect the policies to make
> one header, rather than the browser doing it. In other words, the
> intersection code gets written 1000 times, often badly, rather than
> once, hopefully right.

Wouldn't an ISP have to leave all the restrictions wide-open?  Since 
intersecting policies can only result in a more restrictive policy, I don't 
think an ISP could lock down anything as it would disallow it for all of their 
client sites.  The only feature of intersecting policies that I see them taking 
advantage of is the report-uri, so that they get a report too.  Or maybe I just 
need to see a practical example of a policy that an ISP would implement.

 
>>   * Combine img-src, media-src, object-src, frame-src
> 
> But then the combined lotsofthings-src would have to be set to the
> intersection of all the above, which means e.g. far more potential
> sources of objects (in particular) than you might otherwise want.
> "object-src: none" sounds to me like a great idea for a load of sites
> which also want to display images.
> 
> OTOH, "lotsofthings-src: host1.com host2.com host3.com" would still be a
> big improvement over now, where we effectively have "lotsofthings-src:
> all".

I like the granular control of img-src, media-src, etc, but wouldn't be opposed 
to a single directive that still achieves that:

X-Content-Security-Policy: allow self; source host1.tld host2.tld 
object host3.tld image host4.tld;

Or maybe it's still too confusing?


>>   * Drop the "allow" directive, default all the directives to "self"
> 
> That's an interesting idea.

I like this idea.



- Bil

___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


Re: Comments on the Content Security Policy specification

2009-07-30 Thread Gervase Markham

On 29/07/09 23:23, Ian Hickson wrote:

  * Remove external policy files.


I'm not sure how that's a significant simplification; the syntax is 
exactly the same just with an extra level of indirection, and if that 
makes things too complicated for you, don't use them.



  * Remove  policies.


Done.


  * If there are multiple headers, fail to fully closed.


How is this a simplification? It means that if there are multiple people 
(e.g. an ISP and their customer) who want input into the policy, the ISP 
or the customer has to manually merge and intersect the policies to make 
one header, rather than the browser doing it. In other words, the 
intersection code gets written 1000 times, often badly, rather than 
once, hopefully right.



  * Combine img-src, media-src, object-src, frame-src


But then the combined lotsofthings-src would have to be set to the 
intersection of all the above, which means e.g. far more potential 
sources of objects (in particular) than you might otherwise want. 
"object-src: none" sounds to me like a great idea for a load of sites 
which also want to display images.


OTOH, "lotsofthings-src: host1.com host2.com host3.com" would still be a 
big improvement over now, where we effectively have "lotsofthings-src: all".



  * Combine style-src and font-src


That makes sense.


  * Drop the "allow" directive, default all the directives to "self"


That's an interesting idea.


  * Move "inline" and "eval" keywords from "script-src" to a separate
directive, so that all the -src directives have the same syntax


Yes, we've done this.


I'm concerned that people will eventually do something that causes the
entire policy to be ignored, and not realise it ("yay, I fixed the
problem") or will do something that other people will then copy and paste
without understanding ("well this policy worked for that site... yay, now
I'm secure").


These would be issues with any possible formulation.


I imagine sites starting with the simplest policy, e.g. "allow self",
and then progressively adding policy as required to let the site
function properly.  This will result in more-or-less minimal policies
being developed, which is obviously best from a security perspective.


This is maybe how competentely written sites will do it. It's not how most
sites will do it.


How do you expect them to do it? Start with "allow all"? That's like 
saying "some people will start their Ruby on Rails web application by 
writing it full of XSS holes, and then try and remove them later". This 
may be true, but we don't blame Ruby on Rails. Do we?



You are assuming the person reading all this is familiar with security
concepts, with Web technologies, with "whitelists" and wildcards and so
on. This is a fundamentally flawed assumption.


I don't see how we could change CSP to make it understandable to people 
unfamiliar with Web technologies and wildcards. I think almost everyone 
is familiar with the concept of a whitelist, but perhaps under a 
different name. Any suggestions?



Seatbelts are simple to understand. Make CSP as simple as seatbelts and
I'll agree.


Ah, the magic "fix my security problems" header. Why didn't we think of 
implementing that before?



Make the BNF that defines the syntax be something that matches all
possible strings.



This is great. We should do this.

Gerv
___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


Re: Comments on the Content Security Policy specification

2009-07-29 Thread Ian Hickson
On Thu, 16 Jul 2009, Bil Corry wrote:
> Ian Hickson wrote on 7/16/2009 5:51 AM: 
> > I think that this complexity, combined with the tendency for authors 
> > to rely on features they think are solvign their problems, would 
> > actually lead to authors writing policy files in what would externally 
> > appear to be a random fashion, changing them until their sites worked, 
> > and would then assume their site is safe. This would then likely make 
> > them _less_ paranoid about XSS problems, which would further increase 
> > the possibility of them being attacked, with a good chance of the 
> > policy not actually being effective.
> 
> I think your point that CSP may be too complex and/or too much work for 
> some developers is spot on.  Even getting developers to use something as 
> simple as the Secure flag for cookies on HTTPS sites is still a 
> challenge.  And if we can't get developers to use the Secure flag, the 
> chances of getting sites configured with CSP is daunting at best.

I agree. I think many people will try, will think they got it right 
(because their site works), and will then assume that they therefore don't 
have to worry about (e.g.) people inserting scripts into their pages, when 
in fact they just allowed anything.


> At first glance, it may seem like a waste of time to implement CSP if 
> the best we can achieve is only partial coverage, but instead of looking 
> at it from the number of sites covered, look at it from the number of 
> users covered.  If a large site such as Twitter were to implement it, 
> that's millions of users protected that otherwise wouldn't be.

Assuming they got it right.

I think that something like CSP can definitely be useful. I just think it 
has to be orders of magnitude simpler.


> > I think CSP should be more consistent about what happens with multiple 
> > policies. Right now, two headers will mean the second is ignored, and 
> > two s will mean the second is ignored; but a header and a  
> > will cause the intersection to be used. Similarly, a header with both 
> > a policy and a URL will cause the most restrictive mode to be used 
> > (and both policies to be ignored), but a misplaced  will cause 
> > no CSP to be applied.
> 
> I agree.  There's been some discussion about removing  support 
> entirely and/or allowing multiple headers with a intersection algorithm, 
> so depending on how those ideas are adopted, it makes sense to ensure 
> consistency across the spec.

Removing  altogether would be one good step towards simplification.


On Fri, 17 Jul 2009, Daniel Veditz wrote:
> Ian Hickson wrote:
> > This isn't intended to be a "gotcha" question. My point is just that 
> > CSP is too complicated, too powerful, to be understood by many authors 
> > on the Web, and that because this is a security technology, this will 
> > directly lead to security bugs on sites (and worse, on sites that 
> > think they are safe because they are using a Security Policy).
> 
> So do you have a simpler syntax to suggest? A different approach 
> entirely?

Here are some suggestions for simplification:

 * Remove external policy files.
 * Remove  policies.
 * If there are multiple headers, fail to fully closed.
 * Combine img-src, media-src, object-src, frame-src
 * Combine style-src and font-src
 * Drop the "allow" directive, default all the directives to "self"
 * Move "inline" and "eval" keywords from "script-src" to a separate 
   directive, so that all the -src directives have the same syntax



> Or should we do nothing and expect site authors to write correct and 
> safe PHP+HTML+JavaScript as it stands. CSP seems far less complicated 
> than the things authors already are expected to understand.

Authors get the things authors already are expected to understand wrong 
all the time.


> >>>X-Content-Security-Policy: allow https://self:443
> >> Using "self" for anything other than a keyword is a botch and I will 
> >> continue to argue against it.
> > 
> > The examples I gave in the previous e-mail were all directly from the 
> > spec itself.
> 
> The spec is a group effort and I'm sure there are things in it each of
> us would prefer to be different. It's also not set in stone, which is
> why I mention things like this (but I don't hear a lot of agreement so
> maybe everyone else likes using "self" as a pseudo-host).

My point is that these are not things I made up -- they are policies that 
have been put forward by people as examples. If they demonstrate problems, 
then it's not just me making up edge cases that show problems.


> >> I'll admit that the default "no inline" behavior is not at all 
> >> obvious and people will just have to learn that
> > 
> > This strategy has not worked in the past.
> 
> But in this case they will learn rather quickly if their site doesn't 
> work.

I'm concerned that people will eventually do something that causes the 
entire policy to be ignored, and not realise it ("yay, I fixed the 
problem") or will do something tha

Re: Comments on the Content Security Policy specification

2009-07-17 Thread Daniel Veditz
Jean-Marc Desperrier wrote:
> In fact a solution could be that everytime the browser reject
> downloading a ressource due to CSP rules, it spits out a warning on the
> javascript console together with the minimal CSP authorization that
> would be required to obtain that ressource.
> This could help authors to write the right declarations without
> understanding much to CSP.

Announcing rejected resources is an important part of the plan. The spec
has a reportURI for just this reason, and the Mozilla implementation
will also echo errors to the Error Console.
___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


Re: Comments on the Content Security Policy specification

2009-07-17 Thread Brandon Sterne
On 7/16/09 8:17 PM, Ian Hickson wrote:
> On Thu, 16 Jul 2009, Daniel Veditz wrote:
>> Ian Hickson wrote:
>>> * The more complicated something is, the more mistakes people will 
>>> make.
>> We encourage people to use the simplest policy possible. The additional 
>> options are there for the edge cases.
> 
> It doesn't matter what we encourage. Most authors are going to be using 
> this through copy-and-paste from tutorials that were written by people who 
> made up anything they didn't work out from trial and error themselves.

Dan's point is absolutely true.  The majority of sites will be able to
benefit from simple, minimal policies.  If a site hosts all its own
content then a policy of "X-Content-Security-Policy: allow self" will
suffice and will provide all the XSS protection out of the box.  I tend
to think this will be the common example that gets cut-and-pasted the
majority of the time.  Only more sophisticated sites will need to delve
into the other features of CSP.

Content Security Policy has admittedly grown more complex since it's
earliest design but only out of necessity.  As we talked through the
model we have realized that a certain about of complexity is in fact
necessary to support various use cases which might not common on the
Web, but need to be supported.

>>> I believe that if one were to take a typical Web developer, show him 
>>> this:
>>>
>>>X-Content-Security-Policy: allow self; img-src *;
>>>   object-src media1.com media2.com;
>>>   script-src trustedscripts.example.com
>>>
>>> ...and ask him "does this enable or disable data: URLs in " or 
>>> "would an onclick='' handler work with this policy" or "are framesets 
>>> enabled or disabled by this set of directives", the odds of them 
>>> getting the answers right are about 50:50.
>> Sure, if you confuse them first by asking about "disabling". 
>> _everything_ is disabled; the default policy is "allow none". If you ask 
>> "What does this policy enable?" the answers are easier.
> 
> I was trying to make the questions neutral ("enable or disable"). The 
> authors, though, aren't going to actually ask these questions explicitly, 
> they'll just subconsciously form decisions about what the answers are 
> without really knowing that's what they're doing.

I don't think it makes sense for sites to work backwards from a complex
policy example as the best way to understand CSP.  I imagine sites
starting with the simplest policy, e.g. "allow self", and then
progressively adding policy as required to let the site function
properly.  This will result in more-or-less minimal policies being
developed, which is obviously best from a security perspective.

>> data URLs? nope, not mentioned
>> inline handlers? nope, not mentioned
> 
> How is an author supposed to know that anything not mentioned won't work?
> 
> And is that really true?
> 
>X-Content-Security-Policy: allow *; img-src self;
> 
> Are cross-origin scripts enabled? They're not mentioned, so the answer 
> must be no, right?
> 
> This isn't intended to be a "gotcha" question. My point is just that CSP 
> is too complicated, too powerful, to be understood by many authors on the 
> Web, and that because this is a security technology, this will directly 
> lead to security bugs on sites (and worse, on sites that think they are 
> safe because they are using a Security Policy).

I don't think your example is proof at all that CSP is too complex.  If
I were writing that policy, my spidey senses would start tingling as
soon as I wrote "allow *".  I would expect everything to be in-bounds at
that point.  This is a whitelist mechanism after all.

>>>X-Content-Security-Policy: allow https://self:443
>> Using "self" for anything other than a keyword is a botch and I will 
>> continue to argue against it. If you mean "myhost at some other scheme" 
>> then it's not too much to ask you to spell it out. I kind of liked 
>> Gerv's suggestion to syntactically distinguish keywords from host names, 
>> too.
> 
> The examples I gave in the previous e-mail were all directly from the 
> spec itself.

I also agree that this example is awkward.  In fact, the scheme and port
are inherited from the protected document if they are not specified in
the policy, so this policy would only make sense if it were a non-https
page which wanted to load all its resources over https.

I don't feel strongly about keeping that feature.  Perhaps we should
allow self to be used not-in-conjunction with scheme or port as Dan says.

>>> ...I don't think a random Web developer would be able to correctly 
>>> guess whether or not inline scripts on the page would work, or whether 
>>> Google Analytics would be disabled or not.
>> Are inline scripts mentioned in that policy? Is Google Analytics? No, so 
>> they are disabled.
> 
> _I_ know the answer. I read the spec. My point is that it isn't intuitive 
> and that authors _will_ guess wrong.

Sorry, but I think this

Re: Comments on the Content Security Policy specification

2009-07-17 Thread Bil Corry
Jean-Marc Desperrier wrote on 7/17/2009 11:18 AM: 
> Bil Corry wrote:
>> CSP is non-trivial; it takes a bit of work to configure it properly
>> and requires on-going maintenance as the site evolves.  It's not
>> targeted to the uninformed author, it simply isn't possible to
>> achieve that kind of coverage -- I suspect in the pool of all
>> authors, the majority of them don't even know what XSS is, let alone
>> ways to code against it and using CSP to augment defense.
> 
> But did you try to get feedback, not from the average site author, but
> from those who have experience at successfully protecting against XSS
> large sites that evolve frequently ?

It's my opinion that anyone with experience configuring rules for firewalls and 
WAFs to protect large sites by will find CSP very understandable and 
approachable.  In fact, when compared to the syntax for iptables[1] or 
modsecurity[2], CSP is actually much simpler to understand and implement and is 
on par with the syntax of a similar technology, ABE[3].


> If the syntax has to be ugly,

It has to be functional; do you have specific suggestions on how the syntax 
should look?


> then there should be a tool that takes a
> site and calculates the appropriate CSP declarations.

I agree that a browser plug-in to do this would be helpful.


> In fact a solution could be that everytime the browser reject
> downloading a ressource due to CSP rules, it spits out a warning on the
> javascript console together with the minimal CSP authorization that
> would be required to obtain that ressource.
> This could help authors to write the right declarations without
> understanding much to CSP.

This could work too.  Or a tool that imports the Violation Report and allows an 
author to generate rules to allow the violation in the future.


- Bil

[1] http://iptables-tutorial.frozentux.net/iptables-tutorial.html
[2] 
http://www.modsecurity.org/documentation/modsecurity-apache/2.5.9/html-multipage/
[3] http://noscript.net/abe/abe_rules-0.5.pdf

___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


Re: Comments on the Content Security Policy specification

2009-07-17 Thread Daniel Veditz
Ian Hickson wrote:
> This isn't intended to be a "gotcha" question. My point is just that CSP 
> is too complicated, too powerful, to be understood by many authors on the 
> Web, and that because this is a security technology, this will directly 
> lead to security bugs on sites (and worse, on sites that think they are 
> safe because they are using a Security Policy).

So do you have a simpler syntax to suggest? A different approach
entirely? Or should we do nothing and expect site authors to write
correct and safe PHP+HTML+JavaScript as it stands. CSP seems far less
complicated than the things authors already are expected to understand.

>>>X-Content-Security-Policy: allow https://self:443
>> Using "self" for anything other than a keyword is a botch and I will 
>> continue to argue against it.
> 
> The examples I gave in the previous e-mail were all directly from the 
> spec itself.

The spec is a group effort and I'm sure there are things in it each of
us would prefer to be different. It's also not set in stone, which is
why I mention things like this (but I don't hear a lot of agreement so
maybe everyone else likes using "self" as a pseudo-host).

>> I'll admit that the default "no inline" behavior is not at all obvious 
>> and people will just have to learn that
> 
> This strategy has not worked in the past.

But in this case they will learn rather quickly if their site doesn't work.

>> We are not creating this tool for naive, untrained people.
> 
> Naive, untrained people are who is going to use it.

Yes, but we're really trying to protect the millions of users who visit
Google, Yahoo, PayPal, banks, etc, and hopefully those kinds of
high-traffic sites are run by smart people (yes, I am being naive).

> I agree entirely. But we don't get to require that people pass a test 
> before they use a technology. They'll use it because they heard of it on 
> w3schools, or because someone on digg linked to it, or because their 
> friend at the local gym heard his sysadmin team is using it.
> 
> We know that people do this. We have to take that into account.

I don't know what to do with this feedback. Are you saying "don't do
CSP"? Do you have suggestions on how to make it safer or simpler to use?
An alternate technology that will address the XSS problem?

> I would recommend making the entire policy language signficantly simpler, 
> such that it can be expressed in less space than a URL's length, which 
> would solve this problem as well as the above issues.

Since the policy is mostly a list of hosts or domains it would seem
difficult to shorten it much. We could make the directives terse or even
cryptic, but that doesn't gain much in length nor would it help
understandability.

>> It will block page _parsing_, just as a 

Re: Comments on the Content Security Policy specification

2009-07-17 Thread Sid Stamm

On 7/17/09 8:40 AM, Bil Corry wrote:

An external validation tool could help authors understand

> what their CSP rules are actually allowing/preventing (maybe
> something similar to validator.w3.org).  To compliment it,
> another handy tool would be a browser plug-in that could help
> create CSP rules based on how the site actually works.
These are great ideas.  We are currently working on some "how to" 
documents with the spec for CSP that cover things such as "how to create 
a policy for my site", and would love to see such tools come out of all 
this.


-Sid
___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


Re: Comments on the Content Security Policy specification

2009-07-17 Thread Jean-Marc Desperrier

Bil Corry wrote:

CSP is non-trivial; it takes a bit of work to configure it properly
and requires on-going maintenance as the site evolves.  It's not
targeted to the uninformed author, it simply isn't possible to
achieve that kind of coverage -- I suspect in the pool of all
authors, the majority of them don't even know what XSS is, let alone
ways to code against it and using CSP to augment defense.


But did you try to get feedback, not from the average site author, but 
from those who have experience at successfully protecting against XSS 
large sites that evolve frequently ?


If the syntax has to be ugly, then there should be a tool that takes a 
site and calculates the appropriate CSP declarations.


In fact a solution could be that everytime the browser reject 
downloading a ressource due to CSP rules, it spits out a warning on the 
javascript console together with the minimal CSP authorization that 
would be required to obtain that ressource.
This could help authors to write the right declarations without 
understanding much to CSP.


PS : Sorry for the multi-posting earlier, I was trying to cross-post to 
www-arch...@w3.org but it didn't work and I did not know it had sent the 
message to the group.

___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


Re: Comments on the Content Security Policy specification

2009-07-17 Thread Bil Corry
Jean-Marc Desperrier wrote on 7/17/2009 2:26 AM: 
> Daniel Veditz wrote:
>> CSP is designed so that mistakes of omission tend to break the site
>> break. This won't introduce subtle bugs, rudimentary content testing
>> will quickly reveal problems.
> 
> But won't authors fail to understand how to solve the problem, and open
> everything wide ? From experience, that's what happens with technologies
> that are too complex.

If authors believe it's too complex, I would imagine they wouldn't implement it 
at all; but if they do configure it wide open, it's the equivalent of not using 
it -- the net result is identical, except perhaps Ian's suggestion that an 
uninformed author would mistakenly believe they were protected.

An external validation tool could help authors understand what their CSP rules 
are actually allowing/preventing (maybe something similar to validator.w3.org). 
 To compliment it, another handy tool would be a browser plug-in that could 
help create CSP rules based on how the site actually works.


> A simpler syntax for simple case really would help, it's just that Ian
> is coming a bit late for this.

What specific changes do you recommend that would make it easier to use, but 
still function properly?

There appears to be a disconnect between the audience CSP is actually targeting 
vs. the general audience some believe it is targeting.  CSP is non-trivial; it 
takes a bit of work to configure it properly and requires on-going maintenance 
as the site evolves.  It's not targeted to the uninformed author, it simply 
isn't possible to achieve that kind of coverage -- I suspect in the pool of all 
authors, the majority of them don't even know what XSS is, let alone ways to 
code against it and using CSP to augment defense.


- Bil

___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


Re: Comments on the Content Security Policy specification

2009-07-17 Thread Jean-Marc Desperrier

Daniel Veditz wrote:

CSP is designed so that mistakes of omission tend to break the site
break. This won't introduce subtle bugs, rudimentary content testing
will quickly reveal problems.


But won't authors fail to understand how to solve the problem, and open 
everything wide ? From experience, that's what happens with technologies 
that are too complex.


A simpler syntax for simple case really would help, it's just that Ian 
is coming a bit late for this.

___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


Re: Comments on the Content Security Policy specification

2009-07-17 Thread Jean-Marc Desperrier

Daniel Veditz wrote:

CSP is designed so that mistakes of omission tend to break the site
break. This won't introduce subtle bugs, rudimentary content testing
will quickly reveal problems.


But won't authors fail to understand how to solve the problem, and open 
everything wide ? From experience, that's what happens with technologies 
that are too complex.


A simpler syntax for simple case really would help, it's just that Ian 
is coming a bit late for this.

___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


Re: Comments on the Content Security Policy specification

2009-07-17 Thread Jean-Marc Desperrier

Daniel Veditz wrote:

CSP is designed so that mistakes of omission tend to break the site
break. This won't introduce subtle bugs, rudimentary content testing
will quickly reveal problems.


But won't authors fail to understand how to solve the problem, and open 
everything wide ? From experience, that's what happens with technologies 
that are too complex.


A simpler syntax for simple case really would help, it's just that Ian 
is coming a bit late for this.

___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


Re: Comments on the Content Security Policy specification

2009-07-16 Thread Daniel Veditz
Ian Hickson wrote:
> * Authors will rely on technologies that they perceive are solving their 
>   problems,

XSS is a huge and persistent problem on the web. If this solves that
problem authors will use it.

> * Authors will invariably make mistakes, primarily mistakes of omission,

CSP is designed so that mistakes of omission tend to break the site
break. This won't introduce subtle bugs, rudimentary content testing
will quickly reveal problems.

> * The more complicated something is, the more mistakes people will make.

We encourage people to use the simplest policy possible. The additional
options are there for the edge cases.

> I believe that if one were to take a typical Web developer, show him this:
> 
>X-Content-Security-Policy: allow self; img-src *;
>   object-src media1.com media2.com;
>   script-src trustedscripts.example.com
> 
> ...and ask him "does this enable or disable data: URLs in " or 
> "would an onclick='' handler work with this policy" or "are framesets 
> enabled or disabled by this set of directives", the odds of them getting 
> the answers right are about 50:50.

Sure, if you confuse them first by asking about "disabling".
_everything_ is disabled; the default policy is "allow none". If you ask
"What does this policy enable?" the answers are easier.

data URLs? nope, not mentioned
inline handlers? nope, not mentioned

>X-Content-Security-Policy: allow https://self:443

Using "self" for anything other than a keyword is a botch and I will
continue to argue against it. If you mean "myhost at some other scheme"
then it's not too much to ask you to spell it out. I kind of liked
Gerv's suggestion to syntactically distinguish keywords from host names,
too.

> ...I don't think a random Web developer would be able to correctly guess 
> whether or not inline scripts on the page would work, or whether Google 
> Analytics would be disabled or not.

Are inline scripts mentioned in that policy? Is Google Analytics? No, so
they are disabled. I'll admit that the default "no inline" behavior is
not at all obvious and people will just have to learn that, but when it
comes to domains it should be pretty clear from the syntax that anything
not explicitly "allowed" is, in fact, not allowed.

> lead to authors writing policy files in what would externally appear to be 
> a random fashion, changing them until their sites worked, and would then 
> assume their site is safe.

We are not creating this tool for naive, untrained people. We don't
expect every site to use it. Taking that approach to any security
technology is going to get you into trouble.

> This would then likely make them _less_ paranoid about XSS problems,

I hope not, since it does nothing to help their visitors using legacy
browsers that don't support CSP. CSP is a back-up insurance policy,
defense-in-depth and not the defense itself.

> I'm concerned about the round-trip latency of fetching an external policy

Us too. We don't like the complexity added by the external policy file,
but it was a popular request. It could reduce bandwidth for a site with
a complex policy since it would be cachable.

> or would it block page loading?

It will block page _parsing_, just as a 

Re: Comments on the Content Security Policy specification

2009-07-16 Thread Bil Corry
Ian Hickson wrote on 7/16/2009 5:51 AM: 
> I think that this complexity, combined with the tendency for authors to 
> rely on features they think are solvign their problems, would actually 
> lead to authors writing policy files in what would externally appear to be 
> a random fashion, changing them until their sites worked, and would then 
> assume their site is safe. This would then likely make them _less_ 
> paranoid about XSS problems, which would further increase the possibility 
> of them being attacked, with a good chance of the policy not actually 
> being effective.

I think your point that CSP may be too complex and/or too much work for some 
developers is spot on.  Even getting developers to use something as simple as 
the Secure flag for cookies on HTTPS sites is still a challenge.  And if we 
can't get developers to use the Secure flag, the chances of getting sites 
configured with CSP is daunting at best.  More to my point, getting developers 
to use *any* security feature is daunting, so any solution to a security issue 
that doesn't involve protection by default is going to lack coverage, either 
due to lack of deployment, or misconfigured deployment.  And since protection 
by default (in this case) would mean broken web sites, we're left with an 
opt-in model that achieves only partial coverage.

At first glance, it may seem like a waste of time to implement CSP if the best 
we can achieve is only partial coverage, but instead of looking at it from the 
number of sites covered, look at it from the number of users covered.  If a 
large site such as Twitter were to implement it, that's millions of users 
protected that otherwise wouldn't be.



> I think CSP should be more consistent about what happens with multiple 
> policies. Right now, two headers will mean the second is ignored, and two 
> s will mean the second is ignored; but a header and a  will 
> cause the intersection to be used. Similarly, a header with both a policy 
> and a URL will cause the most restrictive mode to be used (and both 
> policies to be ignored), but a misplaced  will cause no CSP to be 
> applied.

I agree.  There's been some discussion about removing  support entirely 
and/or allowing multiple headers with a intersection algorithm, so depending on 
how those ideas are adopted, it makes sense to ensure consistency across the 
spec.



> I don't think UAs should advertise support for this feature in their HTTP 
> requests. Doing this for each feature doesn't scale. Also, browsers are 
> notoriously bad at claiming support accurately; since bugs will be present 
> whatever happens, servers are likely to need to do regular browser 
> sniffing anyway, even if support _is_ advertised. On the long term, all 
> browsers would support this, and during the transition period, browser 
> sniffing would be fine. (If we do add the advertisment, we can never 
> remove it, even if all browsers support it -- just like we can't remove 
> the "Mozilla/4.0" part of every browser's UA string now.)

This is under discussion too; if you have an interest, here's the most recent 
thread where it's being discussed:

http://groups.google.com/group/mozilla.dev.security/browse_thread/thread/571f1495e6ccf822#anchor_1880c3647a49d3e7



- Bil

___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


Comments on the Content Security Policy specification

2009-07-16 Thread Ian Hickson

First, let me state up front some assumptions I'm making:

* Authors will rely on technologies that they perceive are solving their 
  problems,

* Authors will invariably make mistakes, primarily mistakes of omission,

* The more complicated something is, the more mistakes people will make.


I think CSP is orders of magnitude too complicated to be a successful 
security mechanism on the Web.

I believe that if one were to take a typical Web developer, show him this:

   X-Content-Security-Policy: allow self; img-src *;
  object-src media1.com media2.com;
  script-src trustedscripts.example.com

...and ask him "does this enable or disable data: URLs in " or 
"would an onclick='' handler work with this policy" or "are framesets 
enabled or disabled by this set of directives", the odds of them getting 
the answers right are about 50:50.

Similarly, given the following:

   X-Content-Security-Policy: allow https://self:443

...I don't think a random Web developer would be able to correctly guess 
whether or not inline scripts on the page would work, or whether Google 
Analytics would be disabled or not.

I think that this complexity, combined with the tendency for authors to 
rely on features they think are solvign their problems, would actually 
lead to authors writing policy files in what would externally appear to be 
a random fashion, changing them until their sites worked, and would then 
assume their site is safe. This would then likely make them _less_ 
paranoid about XSS problems, which would further increase the possibility 
of them being attacked, with a good chance of the policy not actually 
being effective.



Other comments:

I'm concerned about the round-trip latency of fetching an external policy 
file. Would the policy only be enforced after it is downloaded, or would 
it block page loading? The former seems like a big security problem (you 
would be vulnerable to an XSS if the attacker can DOS the connection). The 
latter would be unacceptable from a performance perspective. Applying a 
lockdown policy in the meantime would likely break the page (e.g. no 
scripts or images could be fetched).

I think CSP should be more consistent about what happens with multiple 
policies. Right now, two headers will mean the second is ignored, and two 
s will mean the second is ignored; but a header and a  will 
cause the intersection to be used. Similarly, a header with both a policy 
and a URL will cause the most restrictive mode to be used (and both 
policies to be ignored), but a misplaced  will cause no CSP to be 
applied.

A policy-uri to a third-party domain is blocked supposedly to prevent an 
XSS from being able to run a separate policy, but then the policy can be 
inclued inline, so that particular hole doesn't seem to be actually 
blocked.

I don't think UAs should advertise support for this feature in their HTTP 
requests. Doing this for each feature doesn't scale. Also, browsers are 
notoriously bad at claiming support accurately; since bugs will be present 
whatever happens, servers are likely to need to do regular browser 
sniffing anyway, even if support _is_ advertised. On the long term, all 
browsers would support this, and during the transition period, browser 
sniffing would be fine. (If we do add the advertisment, we can never 
remove it, even if all browsers support it -- just like we can't remove 
the "Mozilla/4.0" part of every browser's UA string now.)

-- 
Ian Hickson   U+1047E)\._.,--,'``.fL
http://ln.hixie.ch/   U+263A/,   _.. \   _\  ;`._ ,.
Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'
___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security