Re: [whatwg] Proposal: Write-only submittable form-associated controls.

2014-10-17 Thread Eduardo' Vela Nava
I would be happy to be proven wrong, but it's unlikely that the effort this
will require will be justified by the small number of sites that will use it
(large sites probably won't, and small sites, as usual, won't even know of
its existence). In addition, it's going to be such a fragile security
control that I suspect it will become the next XSS Auditor, in the sense
that a bypass is always an hour away.


Re: [whatwg] Proposal: Write-only submittable form-associated controls.

2014-10-16 Thread Eduardo' Vela Nava
1. How are keyup/down/press restrictions useful for password protection?
Actually they seem more useful for CSRF instead.
2. How is the tainting problem simplified by focusing on write only?
3. How does tagging the credential as write-only help with the secure
deployment of a site-wide CSP policy?
4. Why are sites that echo passwords in password fields shooting themselves
in the foot?

Also, since it seems I didn't explain myself correctly with what I meant
with Channel ID, I'll explain it differently.

Imagine if the password manager, instead of just syncing passwords around,
also synced an httpOnly cookie, and whenever it detects the password going
by, it appends the httpOnly cookie.

If the server detects such a cookie in the request, it concatenates it
after the password and uses the result as the auth credential.

On the server, this only requires a one-line change (appending the cookie
if present); on the client, the APIs already exist.

The same can be done with Channel ID, with the further advantage that the
OBC can't be copied. The advantage of the cookie approach is that it can be
changed and generated more easily.
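The server-side half of this scheme is small enough to sketch. The following is a minimal illustration, not anything from the thread; the cookie name, function names, and plaintext stored credential are all invented for the example:

```python
import hmac

# Hypothetical cookie name the password manager would sync and attach.
PM_SECRET_COOKIE = "pm_secret"

def effective_credential(password: str, cookies: dict) -> str:
    # The "one-line change" described above: append the synced cookie
    # value to the password when the cookie is present.
    return password + cookies.get(PM_SECRET_COOKIE, "")

def check_login(password: str, cookies: dict, stored_credential: str) -> bool:
    # Compare in constant time; a real system would store a salted hash
    # rather than the plaintext credential used here for illustration.
    candidate = effective_credential(password, cookies)
    return hmac.compare_digest(candidate, stored_credential)
```

Note that a user on a machine without the synced cookie can still log in by typing the full concatenated value, which matters for the borrowed-computer scenario discussed later in the thread.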

Also, as a point of reference, we've redone authentication many more times,
in a lot less time and with a lot fewer resources, than it took to deploy
CSP across all of Google. So yes, it's easier.


Re: [whatwg] Proposal: Write-only submittable form-associated controls.

2014-10-16 Thread Eduardo' Vela Nava
On Thu, Oct 16, 2014 at 11:59 AM, Mike West mk...@google.com wrote:

 On Thu, Oct 16, 2014 at 10:36 AM, Eduardo' Vela Nava e...@google.com
 wrote:

 1. How are keyup/down/press restrictions useful for password protection?
 Actually they seem more useful for CSRF instead.

 These events are some of the many ways in which the data being typed into
 a particular field is exposed to script. If we're going to call the field
 writeonly, we need to lock those down. That said, they only protect that
 particular field: attackers can easily bypass this restriction by doing
 more work (overlaying a new form, etc).

OK, so it's just being locked down as a formality; it serves no real
security purpose.

How are they useful for CSRF? I don't see the link.

Let's not go into it. But if you lock down an origin as described in the
spec and send a FormData via XHR, then one could verify that an action came
as a result of a user action. Anyway, I shouldn't have said that; the
solution is incomplete, since value= prefilling would also have to be
disabled for that to work.

 2. How is the tainting problem simplified by focusing on write only?

  Focusing on autofill means that we're moving the flag up to the
 credential, so we can ensure that the browser only autofills into writeonly
 form fields. To your specific example, it means we don't have to care about
 blobs in iframes, because (I'm pretty sure...) we don't autofill blobs in
 iframes.


The blob: URL would simply steal the FormData from the other page (it's
same-origin), and submit it via XHR to another origin.

 3. How does tagging the credential as write-only help with the secure
 deployment of a site-wide CSP policy?

 It doesn't. The two are completely unrelated.

 My point was simply that if we tag a credential as writeonly, we wouldn't
 fill it into a form that lacked the writeonly flag in a page that lacked
 the writeonly CSP. Solving CSP deployment isn't a precondition of this
 scheme.


I see, so it wouldn't be sufficient to make a field writeonly; you would
also need to declare that in the CSP. The whole website would have to have
connect-src policies strictly restricting the exfiltration of data from the
domain. Is there any (relevant/important) website where locking down
connect-src for the whole origin is possible or feasible? Or are we
assuming every website has its login form in a unique origin? (Are you
expecting to have per-page suborigins implemented before this is feasible?)
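For concreteness, the kind of site-wide policy this implies would be something along these lines (the directives are real CSP syntax; the host name is illustrative):

```
Content-Security-Policy: connect-src 'self'; form-action https://login.example.com
```

Every response on the origin would need to carry such a policy for the anti-exfiltration argument to hold, which is exactly the deployment burden being questioned here.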

4. Why are sites that echo passwords in password fields shooting themselves
 in the foot?

Echoing sensitive data in a place where an injected XHR can read it is a
 general problem. This proposal doesn't make that worse.


Sensitive data is echoed in places where XHRs can read all the time. Your
email, your authentication credentials (OAuth, etc), your bank statements.
This isn't a general problem.

 Imagine if the password manager, instead of just syncing passwords around
 also moved an httpOnly cookie. And whenever it detects the password going
 by it appends the httpOnly cookie.

 If the server detects such a cookie in the request, it concatenates it
 after the password and uses the result as the auth credential.

 On the server, this only requires a one-line change (appending the cookie
 if present); on the client, the APIs already exist.

 Same can be done with Channel ID with the further advantage that the OBC
 can't be copied. The advantage of the cookie approach is that it can be
 morphed and generated more easily.

 Sounds like a great way of hardening sign-in. It's not clear to me how
 that negates the problem of password theft, as I'd still need to be able to
 sign in on my friend's computer when I accidentally spill coke on my
 laptop, right?


How is the password manager able to sync your password to your friend's
computer?

 Also, as a point of reference we've redone authentication many more times
 in a lot less time with a lot less resources than deployed CSP across all
 of Google. So yes, it's easier.

 That surprises me.

 Still, I suspect both are significantly more work than adding an attribute
 to a form field.

 --
 Mike West mk...@google.com
 Google+: https://mkw.st/+, Twitter: @mikewest, Cell: +49 162 10 255 91

 Google Germany GmbH, Dienerstrasse 12, 80331 München, Germany
 Registergericht und -nummer: Hamburg, HRB 86891
 Sitz der Gesellschaft: Hamburg
 Geschäftsführer: Graham Law, Christine Elizabeth Flores
 (Sorry; I'm legally required to add this exciting detail to emails. Bleh.)




Re: [whatwg] Proposal: Write-only submittable form-associated controls.

2014-10-16 Thread Eduardo' Vela Nava
On Thu, Oct 16, 2014 at 3:07 PM, Mike West mk...@google.com wrote:

 On Thu, Oct 16, 2014 at 12:16 PM, Eduardo' Vela Nava e...@google.com
 wrote:

 On Thu, Oct 16, 2014 at 11:59 AM, Mike West mk...@google.com wrote:

 On Thu, Oct 16, 2014 at 10:36 AM, Eduardo' Vela Nava e...@google.com
 wrote:

 OK, so it's just being locked down as a formality; it serves no real
 security purpose.


 Well, no. It doesn't solve the problem, but it introduces a hurdle. Making
 an attacker do more work is good.

 2. How is the tainting problem simplified by focusing on write only?

  Focusing on autofill means that we're moving the flag up to the
 credential, so we can ensure that the browser only autofills into writeonly
 form fields. To your specific example, it means we don't have to care about
 blobs in iframes, because (I'm pretty sure...) we don't autofill blobs in
 iframes.


 The blob: URL would simply steal the FormData from the other page (it's
 same-origin), and submit it via XHR to another origin.


 Ah, that's clever! However: blobs inherit the CSP of their parent frame,
 so the same mitigations apply.

Well, it doesn't today. But maybe you mean in the future.

But the point is that there are many ways to exfiltrate data; these are
just the first ones that come to mind. Others, like input pattern, are also
an info leak. These are just ideas off the top of my mind; the browser
wasn't designed to solve this problem, and it's dubious whether it's really
feasible to do so without reinventing a lot of things.


 Moreover, if we're doing the network-stack replacement thing, and we know
 details about the credential, then we can refuse to do the replacement in
 the browser process if the destination doesn't match the details we have
 stored for the credential, CSP or not.

  3. How does tagging the credential as write-only help with the secure
  deployment of a site-wide CSP policy?

 It doesn't. The two are completely unrelated.

 My point was simply that if we tag a credential as writeonly, we
 wouldn't fill it into a form that lacked the writeonly flag in a page that
 lacked the writeonly CSP. Solving CSP deployment isn't a precondition of
 this scheme.


 I see, so it wouldn't be sufficient to make a field writeonly, you would
 also need to declare that in the CSP.


 Let me rephrase: if we tag a credential as writeonly, we wouldn't fill it
 into a form field that wasn't writeonly. Form fields are tagged writeonly
 either by virtue of an inlined attribute, or a CSP on the page.

 Either individually signals to the browser that the site does not intend
 to make programmatic use of the field's value. We wouldn't need both in
 order to decide when to tag a credential as writeonly.

The whole website would have to have connect-src policies strictly
 restricting the exfiltration of data from the domain. Is there any
 (relevant/important) website where locking down connect-src for the whole
 origin is possible or feasible? Or are we assuming every website has its
 login form in a unique origin? (Are you expecting to have per-page
 suborigins implemented before this is feasible?).

 4. Why are sites that echo passwords in password fields shooting
 themselves in the foot?

 Echoing sensitive data in a place where an injected XHR can read it is a
 general problem. This proposal doesn't make that worse.


 Sensitive data is echoed in places where XHRs can read all the time. Your
 email, your authentication credentials (OAuth, etc), your bank statements.
 This isn't a general problem.


 It is a general problem, given the attack vector you're proposing: if I
 can inject a same-origin XHR, I can read sensitive data. That includes
 passwords, if the passwords are echo'd out in the page's contents. I'm
 agreeing with you that writeonly doesn't solve this problem.

 I'd suggest that sites themselves could solve it by echoing a nonce rather
 than the original password, but that's obviously up to them and not
 something we could spec.

 Sounds like a great way of hardening sign-in. It's not clear to me how
 that negates the problem of password theft, as I'd still need to be able to
 sign in on my friend's computer when I accidentally spill coke on my
 laptop, right?


 How is the password manager able to sync your password to your friend's
 computer?


 I wasn't clear: the server still needs to accept a bare username/password
 pair, as it's not the case that users only log in a) using a password
 manager, b) on a machine that's syncing. As long as that's the case, I
 don't see how tying passwords to a cookie makes password theft less
 problematic.


The user would just have to type the full password (including the extra
cookie random value that the password manager backed up). The caveat, of
course, is that the user has to write this suffix down somewhere, but that
is what makes the password strong anyway.

And anyway, to be clear, this is just a discussion on an alternate design

Re: [whatwg] Proposal: Write-only submittable form-associated controls.

2014-10-15 Thread Eduardo' Vela Nava
Yeah, the keyup/down/press restrictions are definitely not useful, at least
for password protection, since the user clearly has no way to know whether
the field is safe or not.

The tainting is never gonna work reliably and consistently, as Michal
hinted (say, a blob: URL would run in the same origin but not be bound to
CSP, which could be used to exfiltrate the data).

The dependency on CSP also makes this a bit aspirational. I can agree that
the login form could be CSP protected, but the whole origin is not as
likely, especially for the connect- and form- directives. And even that
might be insufficient (it's a common practice to echo back typed passwords
when the user doesn't answer a captcha or gets it wrong, for example; a
same-origin XHR will leak the password).

There are other solutions that seem more likely to solve the password
manager problem (something like a synced Channel ID, for example, or even
FIDO-style solutions). This proposal seems to try to achieve a mixture of
replacing canary values at the network layer based on destination with an
httpOnly-style model for user-supplied data.

If we have a password manager and are gonna ask authors to modify their
site, we should just use it to transfer real credentials, not passwords.
Passwords need to die anyway.


Re: [whatwg] AppCache Content-Type Security Considerations

2014-05-13 Thread Eduardo' Vela Nava
On Tue, May 13, 2014 at 9:38 AM, Ian Hickson i...@hixie.ch wrote:

 On Mon, 12 May 2014, Eduardo' Vela\ Nava wrote:
  On Mon, May 12, 2014 at 4:17 PM, Ian Hickson i...@hixie.ch wrote:
  
   Note that there _is_ still a content type check with appcache, it's
   just done on the first few bytes of the file instead of on the
   metadata. (This is IMHO how all file typing should work.)
 
  This seems to imply MIME types should always be ignored, is that
  actually the case?

 Only for Appcache and other formats that have unambiguous signatures, like
 PNG, GIF, etc.


  I mean, it's clearly possible to have a file that is both, valid CSS and
  valid JS.

 CSS, JS, HTML, etc, are IMHO poorly designed text formats, since they
 don't have a defined fixed signature. So for those, we need Content-Type.


   (The Content-Type is a bit of a red herring here. If you can prevent
   the attacker from overriding the Content-Type, you can prevent them
   from sending the appcache signature also.)
 
  The author feedback that led to the CT check being dropped was that it
  was too hard for authors to set CT, but easy to set the appcache
  signature.
 
  The attacker will usually have the same constraints as the author (and
  more), although attackers might be more tech savvy than their victim
  site owners in some cases.
 
  In usual (and complex) web applications, it's not as common to be able
  to set the Content Type as it is to be able to control the first few
  bytes (think JSONP endpoints for example - but exporting data in text
  format is another example).

 I agree that you're less likely to be able to control the headers. But I
 don't think that's enough. A big part of the reason that authors find it
 hard to set HTTP headers is that doing so is technically complicated, not
 that it's impossible. If an attacker is putting files on an Apache server
 because there's some upload vulnerability, it becomes trivial to set the
 HTTP headers: just upload a .htaccess file.


Uploading a .htaccess file is a significantly greater vulnerability than
XSS, as it allows RCE, and we are concerned here about vulnerabilities that
don't just allow the user to upload files, but rather to serve files from a
web service. The latter are more common than the former.


 My point, though, was from the other angle. If you can _prevent_ someone
 from setting HTTP headers somehow, you can equally prevent them from
 uploading files that match a signature.

It's not that simple: it's not just about uploading files, it's about
serving content that looks like a manifest file but actually isn't.
Similar to XSS (note the docs attached to my OP, like
http://www.cse.chalmers.se/~andrei/ccs13.pdf, for examples).

In any case, requiring HTTP headers for appcache was very poorly received.
 I don't think we should return to requiring them. Aside from the fact that
 it would break a lot of sites, it would also mean ignoring pretty clear
 author feedback.

I totally agree, I apologize if the email seemed to imply I was asking for
the decision to be revoked (I tried to ensure it didn't, but it did).

   Cookie Bombing (causing the user agent to send an HTTP request
   that's bigger than the server accepts) should IMHO be resolved by
   setting an upper limit on what clients can send in cookies, and having
   user agent enforce this limit. Server would then know that they need
   to support that much and no more for cookies.
 
  Yes, I agree. This was an example of a simple client DoS attack.

 Is fixing Cookie Bombing being tracked by anyone?

This would be in IETF I assume? I don't know how that process works, we can
follow up offline.


On Mon, 12 May 2014, Eduardo' Vela\ Nava wrote:
   
One idea is with a special CSP policy that forbids manifest files
from working without the right CT
  
   I don't think limiting it to Content-Types is a real fix, but we could
   mark, with CSP, that the domain doesn't support appcache at all.
 
  The problem I see with the CSP approach is that only pages that have CSP
  will be protected.

 Well, that's true of anything involving CSP.

Yes, but we need to protect all of Google services (in an origin =/).

   We could, though, say that a manifest can only do fallbacks for URLs
   that are within the subpath that the manifest finds itself in. That
   would be an interesting way of scoping manifests on shared domains.
 
  This is how crossdomain.xml works, so it might make sense. But I'm not
  sure if that would be sufficient.

 Well, all of this is defense in depth, essentially. So strictly speaking
 none of it is necessary. Obviously the deeper the defense, the better; but
 we shouldn't go so deep as to make the feature unusable. (I mean, we could
 have really _good_ defense by just dropping the feature entirely.)


 So, moving forward, what do we want to do? Should I add the path
 restriction to fallback? (Is that compatible enough? Do we have data on
 that?) Should we add CSP directives

Re: [whatwg] AppCache Content-Type Security Considerations

2014-05-13 Thread Eduardo' Vela Nava
Thanks!

Just to ensure this wasn't lost in the thread.

What about X-Content-Type-Options: nosniff?

Could we formalize it, remove the X, and disable sniffing altogether?
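For reference, the header in question is set per response, e.g.:

```
HTTP/1.1 200 OK
Content-Type: application/json; charset=utf-8
X-Content-Type-Options: nosniff
```

A browser honoring nosniff is meant to take the declared Content-Type at face value rather than second-guessing it from the body's leading bytes.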


On Tue, May 13, 2014 at 12:06 PM, Ian Hickson i...@hixie.ch wrote:

 On Tue, 13 May 2014, Eduardo' Vela\ Nava wrote:
  
   I agree that you're less likely to be able to control the headers. But
   I don't think that's enough. A big part of the reason that authors
   find it hard to set HTTP headers is that doing so is technically
   complicated, not that it's impossible. If an attacker is putting files
   on an Apache server because there's some upload vulnerability, it
   becomes trivial to set the HTTP headers: just upload a .htaccess file.
 
  Uploading a .htaccess file is a significantly greater vulnerability than
  XSS, as it allows RCE, and we are concerned here about vulnerabilities
  that don't just allow the user to upload files, but rather to serve
  files from a web service. The latter are more common than the former.

 It doesn't necessarily allow RCE, but sure.


   My point, though, was from the other angle. If you can _prevent_
   someone from setting HTTP headers somehow, you can equally prevent
   them from uploading files that match a signature.
 
  It's not that simple: it's not just about uploading files, it's about
  serving content that looks like a manifest file but actually isn't.
  Similar to XSS (note the docs attached to my OP like
  http://www.cse.chalmers.se/~andrei/ccs13.pdf for examples).

 Well, you have to upload two files for this vulnerability, right: an
 HTML file with a manifest= that points to the manifest, and the manifest
 itself. Both are things that could be detected. Of course, the assumption
 is that there's a vulnerability in the first place, so we can just assume
 that any mitigations to detect these uploads are also broken...

 The reason I was initially talking about detecting these files is that I
 was imagining a situation where there was a site used for shared static
 hosting, where one of the people legitimately uploading files was
 (intentionally or not) causing the whole domain to get caught in their
 manifest's fallback rule. In that situation, one can just block all
 manifests by scanning for them (or for HTML files with manifest=
 attributes). Also, in those situations, MIME type checks are less likely
 to be helpful since you'd need to give these users the ability to set MIME
 types. For this kind of situation, path restrictions would work well, I
 think, assuming you isolate each user to a different path.

 But in the case of arbitrary upload vulnerabilities, I agree that these
 mitigations are moot.

 In the case of arbitrary upload vulnerabilities, I don't really think any
 solution is going to be convincing short of dropping the FALLBACK feature,
 because fundamentally being able to capture the entire domain to persist
 the existence of content into the future when it's no longer being served
 is the entire point of the feature.


 Cookie Bombing (causing the user agent to send an HTTP request
 that's bigger than the server accepts) should IMHO be resolved by
 setting an upper limit on what clients can send in cookies, and
 having user agent enforce this limit. Server would then know that
 they need to support that much and no more for cookies.
   
Yes, I agree. This was an example of a simple client DoS attack.
  
   Is fixing Cookie Bombing being tracked by anyone?
 
  This would be in IETF I assume? I don't know how that process works, we
  can follow up offline.

 We should make sure this is indeed followed up upon.


  Before we add changes, let's figure out what's the best path forward. I
  think that's a reasonable proposal, but I am hoping we can come up with
  a better one, as I don't feel it's yet sufficient.

 Ok. Let me know when you want me to change the spec. :-)


   Are the lessons learnt here being reported to the Service Worker team?
 
  Yup, who is that?

 I believe Alex Russell is the point person on that technology.

 --
 Ian Hickson   U+1047E)\._.,--,'``.fL
 http://ln.hixie.ch/   U+263A/,   _.. \   _\  ;`._ ,.
 Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'



Re: [whatwg] AppCache Content-Type Security Considerations

2014-05-13 Thread Eduardo' Vela Nava
(for context [tests]
http://philip.html5.org/tests/ie8/cases/content-type-nosniff.html)


Re: [whatwg] AppCache Content-Type Security Considerations

2014-05-13 Thread Eduardo' Vela Nava
On Tue, May 13, 2014 at 1:06 PM, Ian Hickson i...@hixie.ch wrote:

 On Tue, 13 May 2014, Eduardo' Vela\ Nava wrote:
 
  Thanks!
 
  Just to ensure this wasn't lost in the thread.
 
  What about X-Content-Type-Options: nosniff?
 
  Could we formalize it and remove the X and disable sniffing all
  together?

 Do you mean for manifests specifically, or more generally?

I agree it's wrong to do it as a one-off, so I was hoping to make it more
general (since there seems to be a move away from the CT model).

If that's not OK, then CSP is probably a reasonable way forward (I'll take
a look at the Service Worker thread to ensure we have a similar mitigation
in place).

For manifests specifically, it seems like a very odd feature. Manifests
 don't have a MIME type normally, but if served with this header, then you
 should also change how you determine if a manifest is a manifest?

 If we just want a way to prevent pages that aren't supposed to be
 manifests from being treated as manifests, I think it'd be better to have
 a CSP directive that disables manifests. Then you would apply it to any
 resource you know you don't want cached, don't want to be treated as being
 able to declare a manifests, and don't want treated as a manifest.




Re: [whatwg] AppCache Content-Type Security Considerations

2014-05-13 Thread Eduardo' Vela Nava
(for the sake of completeness)


On Tue, May 13, 2014 at 12:06 PM, Ian Hickson i...@hixie.ch wrote:

 On Tue, 13 May 2014, Eduardo' Vela\ Nava wrote:
  
   I agree that you're less likely to be able to control the headers. But
   I don't think that's enough. A big part of the reason that authors
   find it hard to set HTTP headers is that doing so is technically
   complicated, not that it's impossible. If an attacker is putting files
   on an Apache server because there's some upload vulnerability, it
   becomes trivial to set the HTTP headers: just upload a .htaccess file.
 
  Uploading a .htaccess file is a significantly greater vulnerability than
  XSS, as it allows RCE, and we are concerned here about vulnerabilities
  that don't just allow the user to upload files, but rather to serve
  files from a web service. The latter are more common than the former.

 It doesn't necessarily allow RCE, but sure.


Yes, not in all situations, but in some of them it does:
https://github.com/wireghoul/htshells/tree/master/shell


Re: [whatwg] AppCache Content-Type Security Considerations

2014-05-13 Thread Eduardo' Vela Nava
If CSS, JS and plugins had magic numbers at the beginning of the file, that
would prevent the issues you are discussing, right?

I think that's Ian's point: for those file types we need CT, but for
others, like manifest files, images and plugins, we shouldn't need it.

PDFs and JARs are a special case here, since they scan the content (the
first N bytes, or the last N bytes) for the signature, but if the content
match were anchored at the exact first byte, then this would help prevent
security issues, I think?
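A sketch of the first-bytes detection being discussed, anchored strictly at byte zero. The appcache, PNG, GIF and PDF signatures are the standard ones (an appcache manifest must begin with the line "CACHE MANIFEST", modulo an optional BOM, which this sketch ignores); the function name is invented:

```python
# (magic prefix, detected type) pairs; matching is anchored at byte 0.
SIGNATURES = [
    (b"CACHE MANIFEST", "appcache-manifest"),
    (b"\x89PNG\r\n\x1a\n", "image/png"),
    (b"GIF87a", "image/gif"),
    (b"GIF89a", "image/gif"),
    (b"%PDF-", "application/pdf"),
]

def sniff(first_bytes: bytes):
    # Scanning deeper into the file (as historic PDF/JAR handling did)
    # reintroduces ambiguity, so only an exact prefix match counts.
    for magic, kind in SIGNATURES:
        if first_bytes.startswith(magic):
            return kind
    return None  # unknown: fall back to the declared Content-Type
```

Under this rule a JSONP response such as `callback({...})` matches no prefix, so it keeps whatever type its Content-Type header declares.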




On Tue, May 13, 2014 at 8:00 PM, Michal Zalewski lcam...@coredump.cx wrote:

  I disagree. Much of the Web actually relies on this today, and for the
  most part it works. For example, when you do:
 
 <img src=foo ...>
 
  ...the Content-Type is ignored except for SVG.

 Well, <img> is actually a fairly special case of content that is
 difficult for attackers to spoof and that can't be easily read back
 across domains without additional CORS headers. But I believe that in
 Chrome and in Firefox, C-T checks or other mitigations have been
 recently added for at least <script>, <link rel=stylesheet>, and
 <object> / <embed>, all of which lead to interesting security problems
 when they are used to load other types of documents across origins.
 Similar changes are being made also for a couple of other cases, such
 as <a download>.

 /mz



Re: [whatwg] AppCache Content-Type Security Considerations

2014-05-13 Thread Eduardo' Vela Nava
So today we need CT for JSONP and CSV; those are the formats for which we
*need* CT.

The idea is to train the browser to recognize the CTs of formats that are
ambiguous.



On Tue, May 13, 2014 at 8:26 PM, Michal Zalewski lcam...@coredump.cx wrote:

  I think that's Ian's point, that for those file types, we need CT, but
  for others, like manifest files, images and plugins, we shouldn't need it.

 If we take this route, I think we'd be essentially making sure that
 many web applications that are safe today will gradually acquire new
 security bugs out of the blue as the UA magic signature detection
 logic is extended in the future (as it inevitably will - to account
 for new plugins, new formats with scripting capabilities or other
 interesting side effects, etc).

 An out-of-band signalling mechanism has far superior security
 properties compared to an in-band one, given how many if not most web
 apps are designed today. It may be that they are designed the wrong
 way, but the security rules were never particularly clear, and serving
 content off-domain added a lot of complexity around topics such as
 auth, so I think it's best to be forgiving and accommodate that. The
 examples of CSV exports, text documents, and several more exotic
 things aside, most JSONP APIs give the attacker broad control over the
 first few bytes of the response.

 /mz



Re: [whatwg] AppCache Content-Type Security Considerations

2014-05-13 Thread Eduardo' Vela Nava
@Ian, is there a way to find out what Content-Type the authors that
complained were getting?

Hopefully we can figure out a list of Content-Types that are unlikely to
cause security problems?


On Tue, May 13, 2014 at 8:32 PM, Eduardo' Vela Nava e...@google.com wrote:

 So today, we need CT for JSONP and CSV. Those are the ones we *need* CT.

 The idea is to train the browser to recognize the CTs of formats that are
 ambiguous.



 On Tue, May 13, 2014 at 8:26 PM, Michal Zalewski lcam...@coredump.cx wrote:

   I think that's Ian's point, that for those file types, we need CT, but
   for others, like manifest files, images and plugins, we shouldn't need it.

 If we take this route, I think we'd be essentially making sure that
 many web applications that are safe today will gradually acquire new
 security bugs out of the blue as the UA magic signature detection
 logic is extended in the future (as it inevitably will - to account
 for new plugins, new formats with scripting capabilities or other
 interesting side effects, etc).

 An out-of-band signalling mechanism has far superior security
  properties compared to an in-band one, given how many if not most web
 apps are designed today. It may be that they are designed the wrong
 way, but the security rules were never particularly clear, and serving
 content off-domain added a lot of complexity around topics such as
 auth, so I think it's best to be forgiving and accommodate that. The
 examples of CSV exports, text documents, and several more exotic
 things aside, most JSONP APIs give the attacker broad control over the
 first few bytes of the response.

 /mz





[whatwg] AppCache Content-Type Security Considerations

2014-05-12 Thread Eduardo' Vela Nava
Hi!

In the following bug:
https://www.w3.org/Bugs/Public/show_bug.cgi?id=14701 the Content-Type
requirement for AppCache manifest files was dropped. The security
implications of that change probably weren't fully understood at the time,
and we want to start a discussion on this topic to ensure we aren't doing
something that might put some websites in danger.

First of all, to provide some background and establish common ground: the
only security mechanism effectively provided by the browser is the web
origin[1], and those protections are commonly used across the industry to
isolate and protect user-provided content[2]. This allows, for example,
users to have blogs and websites well isolated from each other. We can
easily justify creating an isolated origin for active content like HTML
files and so on, and every web author should do just that.

However, some files can be created in a safe format (such as a CSV file, or
a JSON response, or even an HTML file), and the Content-Type can be set
correctly (instructing the browser how it intends the data to be rendered)
and give the impression the file hosting is safe, when it actually isn't.
We used to see this problem with Content-Type sniffing[3], in which the
browser would disregard the Content-Type of an HTTP response, but new
mechanisms such as X-Content-Type-Options: nosniff came to save the day.
Today we see it, for instance, with TDC files[4] and PDF files[5]; before
that we had JAR files[6], and we've seen similar problems arise from SWF
files[7] and crossdomain.xml files[8], with similar consequences.
Historically it has been browser plugins and extensions that created these
problems. We worked with the plugin maintainers to respect Content-Type, or
behavior[9] (which is arguably very bad and wrong, as anyone that doesn't
know about this will be easily vulnerable =/, but I digress). There are
also arbitrary rules that make development somewhat safer against future
problems (thou shall not let the user control the first byte of an HTTP
response, thou shall not let the user inject NULL bytes or malformed UTF-8
sequences, thou shall not let the user control the content-type of an HTTP
response).

Now, with appcache manifest files, we are introducing a security-sensitive
change based on a file with special powers (more on this later), and while
before they were guarded by a Content-Type check, this isn't the case
anymore. We understand a lot of author feedback came into play when making
this decision of ignoring Content-Type, and we are not asking to revert it.
But rather, we want to ensure the risk is clearly understood, and the
solution codified in the standard balances the security risk with the
usability of the feature by web authors.

To clearly explain the risk, I'll lay out two use cases where this is
going to make things somewhat worse:

 1. When there is an XSS in your favorite website, and you are infected
with it. Thanks to FALLBACK the attacker can make it so that you can never
recover from the XSS (and be infected forever). The way to do this is by
finding an endpoint in the vulnerable web application that lets you
generate a file that looks like a manifest file (this isn't hard - really),
and then force the browser to use it via your XSS. The way you make sure
FALLBACK triggers every time (and not just when the user is offline) is by
means of Cookie Bombing [10]. While similar attacks were already possible
before via other mechanisms [11] such as localStorage and the FileSystem
API and such, they were always kept at bay, and this makes things worse.
 2. CDNs and similar sites can be completely taken over for all the HTML
hosted in them. They will effectively break and reach an irrecoverable
state. While this isn't the case for resources like JS files, HTML files
alone are already a concern.
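To make the first scenario concrete, a forged manifest needs nothing more than the signature line and a FALLBACK section (the URLs below are hypothetical; an attacker would get the victim origin to emit this text via some reflective endpoint):

```
CACHE MANIFEST
# comment lines are allowed, so trailing page output is harmless here
FALLBACK:
/ /attacker-controlled-page
```

Once that is cached, any failed fetch under / is answered with the attacker-controlled page, even after the original XSS is fixed.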

These attacks are particularly scary because they allow attackers to do
things that weren't possible before; specifically, to persist access to
their victims' web applications. We can clearly mitigate the risk in many ways.
One idea is with a special CSP policy that forbids manifest files from
working without the right CT, or by means of per-page suborigins[12] (which
reaches a similar deal-with-the-devil as crossdomain.xml files and Adobe),
but in an ideal world, we wouldn't have this problem in the first place.
Another alternative is to enforce and codify the semantics of
X-Content-Type-Options: nosniff for AppCache manifest files.
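For example, if the nosniff semantics were codified for AppCache as suggested, a host serving user-generated files could opt out of manifest sniffing with an ordinary header pair (hypothetical enforcement; browsers do not currently apply nosniff to manifest selection):

```
Content-Type: text/plain; charset=utf-8
X-Content-Type-Options: nosniff
```

Under that rule, the response could never be treated as a manifest, since its declared type isn't text/cache-manifest.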

If we do nothing, it will significantly change the way we all have been
treating secure web content hosting, so let's make sure all the security
implications are understood now, rather than later.

Thanks!!

[1] http://tools.ietf.org/html/rfc6454
[2] https://plus.sandbox.google.com/+EduardoVelaNava/posts/4riSfqjx2bj
[3] http://msdn.microsoft.com/en-us/library/ms775147.aspx
[4]

Re: [whatwg] AppCache Content-Type Security Considerations

2014-05-12 Thread Eduardo' Vela Nava
On Mon, May 12, 2014 at 4:17 PM, Ian Hickson i...@hixie.ch wrote:

 On Mon, 12 May 2014, Eduardo' Vela\ Nava wrote:
 
  Now, with appcache manifest files, we are introducing a
  security-sensitive change based on a file with special powers (more on
  this later), and while before they were guarded by a Content-Type check,
  this isn't the case anymore.

 Note that there _is_ still a content type check with appcache, it's just
 done on the first few bytes of the file instead of on the metadata. (This
 is IMHO how all file typing should work.)


This seems to imply MIME types should always be ignored; is that actually
the case? I mean, it's clearly possible to have a file that is both valid
CSS and valid JS.
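A minimal illustration of such a polyglot (read as CSS, it's an empty rule for `a` elements; read as JS, it's the expression statement `a`, terminated by automatic semicolon insertion, followed by an empty block):

```
a
{}
```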

I think what you actually mean is that the browser (with the context it
has, such as, being included in a script or a @import) should be able to
make decisions re the way to parse the data it receives.
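For reference, the "first few bytes" check being discussed can be sketched roughly as follows (a simplification; the real rules, including BOM handling, live in the HTML spec):

```python
def looks_like_appcache_manifest(body: bytes) -> bool:
    """Signature-based detection: the file type is decided by the first
    line of the response body, not by the Content-Type header
    (simplified sketch of the spec's behavior)."""
    first_line = body.split(b"\n", 1)[0].rstrip(b"\r")
    # The signature is "CACHE MANIFEST", alone or followed by whitespace.
    return (first_line == b"CACHE MANIFEST"
            or first_line.startswith((b"CACHE MANIFEST ", b"CACHE MANIFEST\t")))
```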

 1. When there is an XSS in your favorite website, and you are infected
  with it. Thanks to FALLBACK the attacker can make it so that you can
  never recover from the XSS (and be infected for ever). The way to do
  this is by finding an endpoint in the vulnerable web application that
  lets you generate a file that looks like a manifest file (this isn't
  hard - really), and then force the browser to use it via your XSS.

 I don't really see a way to really fix this short of dropping FALLBACK
 support entirely.


I agree it's a tough problem, and I was hoping we could find another
solution :)

(The Content-Type is a bit of a red herring here. If you can prevent the
 attacker from overriding the Content-Type, you can prevent them from
 sending the appcache signature also.)


The author feedback that got the CT check dropped was that it was too
hard for authors to set the Content-Type, but easy to set the appcache
signature.

The attacker will usually have the same constraints as the author (and
more), although attackers might be more tech-savvy than their victim
site owners in some cases.

In typical (and complex) web applications, it's not as common to be able
to set the Content-Type as it is to control the first few bytes of a
response (think of JSONP endpoints, for example; exporting data in text
format is another).
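As a sketch of why first-byte control is so common, consider a hypothetical JSONP endpoint that echoes its callback parameter verbatim (names invented for illustration):

```python
def jsonp_response(callback: str, payload: str) -> str:
    # Hypothetical vulnerable endpoint: no allow-list on the callback name.
    return f"{callback}({payload})"

# The attacker picks the callback so the response *starts* with the
# appcache signature; the JSON that follows lands on a comment line.
evil = jsonp_response("CACHE MANIFEST\n#", '{"user": "bob"}')
print(evil.splitlines()[0])  # → CACHE MANIFEST
```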

 The way you make sure FALLBACK triggers every time (and not just when
  the user is offline) is by means of Cookie Bombing. While similar
  attacks were already possible before via other mechanisms such as
  localStorage and the FileSystem API and such, they were always kept at
  bay, and this makes things worse.

 Cookie Bombing (causing the user agent to send an HTTP request that's
 bigger than the server accepts) should IMHO be resolved by setting an
 upper limit on what clients can send in cookies, and having user agent
 enforce this limit. Server would then know that they need to support that
 much and no more for cookies.


Yes, I agree. This was an example of a simple client DoS attack.
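For a sense of scale (the numbers are illustrative, not normative): browsers commonly allow cookies of about 4 KB each, while many servers cap total request-header size around 8 KB, so a couple of oversized cookies are enough to make every request to the victim fail:

```python
COOKIE_SIZE = 4096          # typical per-cookie limit in browsers
SERVER_HEADER_LIMIT = 8192  # illustrative server-side request-header cap

# Smallest number of max-size cookies that pushes the Cookie header
# over the server's limit, forcing an error response (and FALLBACK).
cookies_needed = SERVER_HEADER_LIMIT // COOKIE_SIZE + 1
print(cookies_needed)  # → 3
```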

  2. CDNs and similar sites can be completely taken over for all the HTML
  hosted in them. They will effectively break and reach an irrecoverable
state. While this isn't the case for resources like JS files, HTML files
  alone are already a concern.

 When you're online, only resources that actually specify the given
 manifest can be taken over.

  One idea is with a special CSP policy that forbids manifest files from
  working without the right CT

 I don't think limiting it to Content-Types is a real fix, but we could
 mark, with CSP, that the domain doesn't support appcache at all.

The problem I see with the CSP approach is that only pages that have CSP
will be protected.

  or by means of per-page suborigins

 I don't know that we need to go to that length.

 We could, though, say that a manifest can only do fallbacks for URLs that
 are within the subpath that the manifest finds itself in. That would be an
 interesting way of scoping manifests on shared domains.


This is how crossdomain.xml works, so it might make sense. But I'm not sure
if that would be sufficient.
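A sketch of the subpath scoping Ian proposes (a hypothetical rule, not current behavior): a manifest may only declare FALLBACK namespaces under the directory it lives in.

```python
from urllib.parse import urlparse

def fallback_allowed(manifest_url: str, fallback_namespace: str) -> bool:
    # The manifest's scope is the directory it lives in.
    manifest_dir = urlparse(manifest_url).path.rsplit("/", 1)[0] + "/"
    return urlparse(fallback_namespace).path.startswith(manifest_dir)

# On a shared host, alice's manifest couldn't hijack bob's pages:
print(fallback_allowed("https://cdn.example/alice/app.manifest",
                       "https://cdn.example/alice/home"))   # → True
print(fallback_allowed("https://cdn.example/alice/app.manifest",
                       "https://cdn.example/bob/home"))     # → False
```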

  --
 Ian Hickson   U+1047E)\._.,--,'``.fL
 http://ln.hixie.ch/   U+263A/,   _.. \   _\  ;`._ ,.
 Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'