On Mon, 12 May 2014, Eduardo Vela Nava wrote:
> On Mon, May 12, 2014 at 4:17 PM, Ian Hickson <i...@hixie.ch> wrote:
> >
> > Note that there _is_ still a content type check with appcache, it's 
> > just done on the first few bytes of the file instead of on the 
> > metadata. (This is IMHO how all file typing should work.)
> This seems to imply MIME types should always be ignored, is that 
> actually the case?

Only for Appcache and other formats that have unambiguous signatures, like 
PNG, GIF, etc.

> I mean, it's clearly possible to have a file that is both valid CSS and 
> valid JS.

CSS, JS, HTML, etc. are IMHO poorly designed text formats, since they 
don't have a defined fixed signature. So for those, we need Content-Type.
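The fixed-signature check described here can be sketched as follows (the signature list, function name, and return convention are illustrative, not from any spec):

```python
# Sketch of signature-based typing: formats with unambiguous magic bytes
# can be identified from the first few bytes alone. The appcache
# signature is the literal "CACHE MANIFEST" at the start of the file.
from typing import Optional

SIGNATURES = [
    (b"\x89PNG\r\n\x1a\n", "image/png"),         # PNG
    (b"GIF87a", "image/gif"),                     # GIF (1987)
    (b"GIF89a", "image/gif"),                     # GIF (1989)
    (b"CACHE MANIFEST", "text/cache-manifest"),   # appcache manifest
]

def sniff_type(data: bytes) -> Optional[str]:
    """Return a type only for formats with an unambiguous signature."""
    for magic, mime in SIGNATURES:
        if data.startswith(magic):
            return mime
    return None  # CSS/JS/HTML lack a signature; Content-Type still needed
```

For signatureless text formats the function deliberately returns nothing, which is exactly why those formats still depend on the Content-Type header.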

> > (The Content-Type is a bit of a red herring here. If you can prevent 
> > the attacker from overriding the Content-Type, you can prevent them 
> > from sending the appcache signature also.)
> The author feedback that led to the CT check being dropped was that it 
> was too hard for authors to set the CT, but easy to set the appcache 
> signature.
> The "attacker" will usually have the same (and more) constraints as 
> the author (although attackers might be more tech-savvy than their 
> victim site owners in some cases).
> In usual (and complex) web applications, it's not as common to be able 
> to set the Content-Type as it is to control the first few bytes (think 
> of JSONP endpoints, for example - but exporting data in text format is 
> another example).

I agree that you're less likely to be able to control the headers. But I 
don't think that's enough. A big part of the reason that authors find it 
hard to set HTTP headers is that doing so is technically complicated, not 
that it's impossible. If an attacker is putting files on an Apache server 
because there's some upload vulnerability, it becomes trivial to set the 
HTTP headers: just upload a .htaccess file.
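For instance, on a stock Apache setup where the upload directory honours `.htaccess` overrides, a few lines are enough to control the Content-Type of neighbouring files (a sketch; whether these directives are honoured depends on the server's `AllowOverride` configuration):

```apache
# Serve any uploaded .appcache file as a cache manifest (mod_mime)
AddType text/cache-manifest .appcache

# Or force the type of one specific uploaded file (core ForceType)
<Files "data.txt">
    ForceType text/cache-manifest
</Files>
```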

My point, though, was from the other angle. If you can _prevent_ someone 
from setting HTTP headers somehow, you can equally prevent them from 
uploading files that match a signature.

In any case, requiring HTTP headers for appcache was very poorly received. 
I don't think we should return to requiring them. Aside from the fact that 
it would break a lot of sites, it would also mean ignoring pretty clear 
author feedback.

> > "Cookie Bombing" (causing the user agent to send an HTTP request 
> > that's bigger than the server accepts) should IMHO be resolved by 
> > setting an upper limit on what clients can send in cookies, and having 
> > user agent enforce this limit. Server would then know that they need 
> > to support that much and no more for cookies.
> Yes, I agree. This was an example of a simple client DoS attack.
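A client-enforced limit of the kind proposed above might look roughly like this (a sketch; the cap value and function name are made up for illustration, not taken from any spec):

```python
# Sketch: refuse to emit a Cookie header larger than a fixed cap, so
# servers can size their request buffers accordingly and reject nothing.
# The cap value here is illustrative only.
MAX_COOKIE_HEADER_BYTES = 8192

def build_cookie_header(cookies: dict) -> str:
    """Serialize cookies, enforcing a client-side size limit."""
    header = "; ".join(f"{k}={v}" for k, v in cookies.items())
    if len(header.encode("utf-8")) > MAX_COOKIE_HEADER_BYTES:
        # A real user agent would evict cookies rather than raise,
        # but the point is the same: the header never exceeds the cap.
        raise ValueError("cookie header exceeds client-enforced limit")
    return header
```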

Is fixing Cookie Bombing being tracked by anyone?

> > On Mon, 12 May 2014, Eduardo Vela Nava wrote:
> > >
> > > One idea is with a special CSP policy that forbids manifest files 
> > > from working without the right CT
> >
> > I don't think limiting it to Content-Types is a real fix, but we could 
> > mark, with CSP, that the domain doesn't support appcache at all.
> The problem I see with the CSP approach is that only pages that have CSP 
> will be protected.

Well, that's true of anything involving CSP.

> > We could, though, say that a manifest can only do fallbacks for URLs 
> > that are within the subpath that the manifest finds itself in. That 
> > would be an interesting way of scoping manifests on shared domains.
> This is how crossdomain.xml works, so it might make sense. But I'm not 
> sure if that would be sufficient.

Well, all of this is defense in depth, essentially. So strictly speaking 
none of it is necessary. Obviously the deeper the defense, the better; but 
we shouldn't go so deep as to make the feature unusable. (I mean, we could 
have really _good_ defense by just dropping the feature entirely.)

So, moving forward, what do we want to do? Should I add the path 
restriction to fallback? (Is that compatible enough? Do we have data on 
that?) Should we add CSP directives for this? Something else?

Are the lessons learnt here being reported to the Service Worker team?

On Mon, 12 May 2014, Michal Zalewski wrote:
> Yup, from the perspective of a significant proportion of modern
> websites, MIME sniffing would be almost certainly a disaster.

I'm not suggesting sniffing, I'm suggesting having a single well-defined 
algorithm with well-defined fixed signatures.

For formats that don't have signatures, this doesn't work, obviously.

Ian Hickson               U+1047E                )\._.,--....,'``.    fL
http://ln.hixie.ch/       U+263A                /,   _.. \   _\  ;`._ ,.
Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'
