Re: breach attack
Stefan Fritsch wrote:
> On Tuesday, 6 August 2013, 10:24:15, Paul Querna wrote:
>> 1) Disabling HTTP compression
>> 2) Separating secrets from user input
>> 3) Randomizing secrets per request
>> 4) Masking secrets (effectively randomizing by XORing with a random secret per request)
>> 5) Protecting vulnerable pages with CSRF
>> 6) Length hiding (by adding a random amount of bytes to the responses)
>> 7) Rate-limiting the requests
>>
>> Many of these are firmly in the domain of an application developer, but I think we should work out if there are ways we can either change default configurations or add new features to help application developers be successful:
>
> IMNSHO, we are way past the point where we should patch up even more issues that are caused by the broken security model of web browsers. Instead, browser vendors should fix this issue, for example by offering a way to easily opt out of sending credentials with cross-domain requests (maybe analogous to Strict Transport Security).
>
> I am against putting any mitigation measures into httpd that adversely affect normal use, like adding chunk extensions that will likely break lots of clients, or making mod_deflate much less efficient. Though if somebody comes up with a clever scheme that has no negative side effects, that would of course be fine. Or if we add some rate-limiting facility that would be useful for many purposes.

+1. Well put.

Regards

Rüdiger
Re: breach attack
On Fri, Aug 9, 2013 at 12:11 AM, Ruediger Pluem rpl...@apache.org wrote:
> Stefan Fritsch wrote:
>> On Tuesday, 6 August 2013, 10:24:15, Paul Querna wrote:
>>> 1) Disabling HTTP compression
>>> [...]
>>> 7) Rate-limiting the requests
>>>
>>> Many of these are firmly in the domain of an application developer, but I think we should work out if there are ways we can either change default configurations or add new features to help application developers be successful:
>>
>> IMNSHO, we are way past the point where we should patch up even more issues that are caused by the broken security model of web browsers. Instead, browser vendors should fix this issue, for example by offering a way to easily opt out of sending credentials with cross-domain requests (maybe analogous to Strict Transport Security).
>>
>> I am against putting any mitigation measures into httpd that adversely affect normal use, like adding chunk extensions that will likely break lots of clients, or making mod_deflate much less efficient. Though if somebody comes up with a clever scheme that has no negative side effects, that would of course be fine. Or if we add some rate-limiting facility that would be useful for many purposes.
>
> +1. Well put.

I strongly disagree. We are a component in an ecosystem that consists of browsers and servers. We are part of that broken security model, even if the root cause is in the browsers.

In this case, I don't know if any of the proposed mitigations help; I'd love to have an easy way to validate that, so we could bring data to the discussion. If it increases the cost of the attack by multiple hours and causes a 1% performance drop, isn't that the kind of thing that is useful?

We should strive as a community to help, not just throw the browsers under the bus.
Re: breach attack
On Fri, Aug 09, 2013 at 09:14:51AM -0700, Paul Querna wrote:
> In this case, I don't know if any of the proposed mitigations help; I'd love to have an easy way to validate that, so we could bring data to the discussion. If it increases the cost of the attack by multiple hours and causes a 1% performance drop, isn't that the kind of thing that is useful?

I sympathise with Stefan, but I agree we should do something if we can find something cheap, effective and reliable.

Length hiding seems the most promising avenue. The paper notes that simply adding rand(0..n) bytes to the response only increases the cost (time/requests) of executing the attack.

Adding a random number of leading zeroes to the chunk-size line would perhaps be reliable (i.e. least likely to have interop issues), though we could only introduce relatively small variability in the total response length. We could maybe add 0-5 leading zeroes per chunk safely? Possibly that breaks some client already. It's probably not effective.

We could randomly vary the maximum bytes of application data per TLS message using the 2.4 mod_ssl coalesce filter too. I'm not sure that actually produces length hiding at the right level, though, and it hurts performance. (Crypto experts listening?)

It's really kind of a TLS problem. Crypto experts should solve this in TLS! :)

Regards, Joe
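[The leading-zeroes idea above can be sketched in a few lines. This is a stand-alone Python illustration, not httpd code; the 0-5 pad range is just the guess from the mail, and all names here are made up for the example. Leading zeroes are permitted by the chunk-size grammar (1*HEXDIG), so a conforming parser reads the same size while the on-the-wire length varies per chunk.]

```python
import os

def chunk_header(size: int, max_pad: int = 5) -> bytes:
    """Format an HTTP/1.1 chunk-size line, padded with a random
    number (0..max_pad) of leading zeroes. The hex value is
    unchanged, so the parsed chunk size is identical; only the
    framing length differs between otherwise equal responses."""
    pad = os.urandom(1)[0] % (max_pad + 1)
    return ("0" * pad + format(size, "x")).encode("ascii") + b"\r\n"

def encode_chunked(body: bytes, chunk_size: int = 1024) -> bytes:
    """Apply chunked transfer coding using padded size lines."""
    out = bytearray()
    for i in range(0, len(body), chunk_size):
        piece = body[i:i + chunk_size]
        out += chunk_header(len(piece)) + piece + b"\r\n"
    out += b"0\r\n\r\n"  # last-chunk, left unpadded here
    return bytes(out)
```

As Joe notes, the variability this buys is small (a few bytes per chunk), so it raises the attack cost only modestly.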
Re: breach attack
On Tue, Aug 6, 2013 at 1:32 PM, Eric Covener cove...@gmail.com wrote:
> On Tue, Aug 6, 2013 at 1:24 PM, Paul Querna p...@querna.org wrote:
>> Hiya,
>>
>> Has anyone given much thought to changes in httpd to help mitigate the recently publicized BREACH attack: http://breachattack.com/
>> [...]
>> 1) Has anyone given any thought to changing how we do chunked encoding? Chunking is kinda like arbitrary padding we can insert into a response, without having to know anything about the actual content of the response. What if we increased the number of chunks we create, and randomly placed them -- this wouldn't completely ruin some of the attacks, but could potentially increase the number of requests needed significantly. (We should figure out the math here? How many random chunks, of what size, are an effective mitigation?)
>
> Another option in this neighborhood is small/varying deflate blocks. But that probably limits the usefulness of deflate to the same extent that it helps. The idea is to make it less likely that the user input and secret get compressed together.

A filter could vary the size of what goes down the filter chain, impacting deflate blocks or chunk sizes or SSL compression, at some extra expense. Some rule-based introspection into the response could provide guidance in some situations??? Isn't that a strength of mod_security?

>> 2) Disabling TLS Compression by default, even in older versions. Currently we changed the default to SSLCompression off in >=2.4.3; I'd like to see us back port this to 2.2.x.
>
> I think it recently made it to 2.2.x, post the last release.
>
>> 3) Disable mod_deflate compression for X content types by default on encrypted connections. Likely HTML, maybe JSON, maybe Javascript content types?
>
> In the code, or default conf / manual? It's the cautious thing to do, but I'm not yet sure if making everyone opt back in would really mean much (e.g. what number would give their app the required scrutiny before opting back in)

--
Born in Roswell... married an alien...
http://emptyhammock.com/
Re: breach attack
On Tuesday, 6 August 2013, 10:24:15, Paul Querna wrote:
> 1) Disabling HTTP compression
> 2) Separating secrets from user input
> 3) Randomizing secrets per request
> 4) Masking secrets (effectively randomizing by XORing with a random secret per request)
> 5) Protecting vulnerable pages with CSRF
> 6) Length hiding (by adding a random amount of bytes to the responses)
> 7) Rate-limiting the requests
>
> Many of these are firmly in the domain of an application developer, but I think we should work out if there are ways we can either change default configurations or add new features to help application developers be successful:

IMNSHO, we are way past the point where we should patch up even more issues that are caused by the broken security model of web browsers. Instead, browser vendors should fix this issue, for example by offering a way to easily opt out of sending credentials with cross-domain requests (maybe analogous to Strict Transport Security).

I am against putting any mitigation measures into httpd that adversely affect normal use, like adding chunk extensions that will likely break lots of clients, or making mod_deflate much less efficient. Though if somebody comes up with a clever scheme that has no negative side effects, that would of course be fine. Or if we add some rate-limiting facility that would be useful for many purposes.
breach attack
Hiya,

Has anyone given much thought to changes in httpd to help mitigate the recently publicized BREACH attack: http://breachattack.com/

From an httpd perspective, looking at the mitigations http://breachattack.com/#mitigations

1) Disabling HTTP compression
2) Separating secrets from user input
3) Randomizing secrets per request
4) Masking secrets (effectively randomizing by XORing with a random secret per request)
5) Protecting vulnerable pages with CSRF
6) Length hiding (by adding a random amount of bytes to the responses)
7) Rate-limiting the requests

Many of these are firmly in the domain of an application developer, but I think we should work out if there are ways we can either change default configurations or add new features to help application developers be successful:

1) Has anyone given any thought to changing how we do chunked encoding? Chunking is kinda like arbitrary padding we can insert into a response, without having to know anything about the actual content of the response. What if we increased the number of chunks we create, and randomly placed them -- this wouldn't completely ruin some of the attacks, but could potentially increase the number of requests needed significantly. (We should figure out the math here? How many random chunks, of what size, are an effective mitigation?)

2) Disabling TLS Compression by default, even in older versions. Currently we changed the default to SSLCompression off in >=2.4.3; I'd like to see us back port this to 2.2.x.

3) Disable mod_deflate compression for X content types by default on encrypted connections. Likely HTML, maybe JSON, maybe Javascript content types?

Thoughts? Other Ideas?

Thanks,
Paul
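[The random-chunking idea in point 1 can be sketched as a helper that picks chunk boundaries at random. A stand-alone Python illustration, not httpd filter code; the 64-512 byte bounds are arbitrary choices for the example.]

```python
import os

def random_chunk_sizes(total: int, min_chunk: int = 64, max_chunk: int = 512):
    """Split `total` body bytes into randomly sized pieces. The
    chunk boundaries, the number of chunks, and therefore the
    per-chunk framing overhead all differ between otherwise
    identical responses, adding noise to the observed length."""
    sizes = []
    remaining = total
    while remaining > 0:
        span = max_chunk - min_chunk + 1
        n = min_chunk + int.from_bytes(os.urandom(2), "big") % span
        n = min(n, remaining)
        sizes.append(n)
        remaining -= n
    return sizes
```

Note the "math here" question still stands: each chunk adds only a handful of framing bytes, so the total-length noise grows roughly with the chunk count, at the cost of more framing overhead on every response.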
Re: breach attack
On Tue, Aug 6, 2013 at 1:24 PM, Paul Querna p...@querna.org wrote:
> Hiya,
>
> Has anyone given much thought to changes in httpd to help mitigate the recently publicized BREACH attack: http://breachattack.com/
> [...]
> 1) Has anyone given any thought to changing how we do chunked encoding? Chunking is kinda like arbitrary padding we can insert into a response, without having to know anything about the actual content of the response. What if we increased the number of chunks we create, and randomly placed them -- this wouldn't completely ruin some of the attacks, but could potentially increase the number of requests needed significantly. (We should figure out the math here? How many random chunks, of what size, are an effective mitigation?)

Another option in this neighborhood is small/varying deflate blocks. But that probably limits the usefulness of deflate to the same extent that it helps. The idea is to make it less likely that the user input and secret get compressed together.

> 2) Disabling TLS Compression by default, even in older versions. Currently we changed the default to SSLCompression off in >=2.4.3; I'd like to see us back port this to 2.2.x.

I think it recently made it to 2.2.x, post the last release.

> 3) Disable mod_deflate compression for X content types by default on encrypted connections. Likely HTML, maybe JSON, maybe Javascript content types?

In the code, or default conf / manual? It's the cautious thing to do, but I'm not yet sure if making everyone opt back in would really mean much (e.g. what number would give their app the required scrutiny before opting back in)
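[The "small/varying deflate blocks" idea can be illustrated with zlib's full-flush mechanism: a Z_FULL_FLUSH empties the compressor's history window, so no back-reference (the thing BREACH measures) can span the flush boundary. A stand-alone Python sketch, not mod_deflate code; the interval bounds are arbitrary.]

```python
import os
import zlib

def deflate_with_random_flushes(data: bytes,
                                min_block: int = 256,
                                max_block: int = 1024) -> bytes:
    """Compress `data`, issuing a Z_FULL_FLUSH at randomly chosen
    intervals. A full flush resets the compressor's window, so
    reflected user input and a secret landing in different blocks
    cannot be matched against each other -- at the cost of a worse
    compression ratio, as noted above."""
    comp = zlib.compressobj()
    out = bytearray()
    i = 0
    while i < len(data):
        span = max_block - min_block + 1
        n = min_block + int.from_bytes(os.urandom(2), "big") % span
        out += comp.compress(data[i:i + n])
        out += comp.flush(zlib.Z_FULL_FLUSH)
        i += n
    out += comp.flush(zlib.Z_FINISH)
    return bytes(out)
```

The output is still a valid zlib stream; decompressors need no changes. The trade-off Eric points out is visible directly: more flushes mean fewer cross-block matches and a larger output.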
Re: breach attack
On Tue, Aug 6, 2013 at 10:32 AM, Eric Covener cove...@gmail.com wrote:
> On Tue, Aug 6, 2013 at 1:24 PM, Paul Querna p...@querna.org wrote:
>> Hiya,
>>
>> Has anyone given much thought to changes in httpd to help mitigate the recently publicized BREACH attack: http://breachattack.com/
>> [...]
>> 3) Disable mod_deflate compression for X content types by default on encrypted connections. Likely HTML, maybe JSON, maybe Javascript content types?
>
> In the code, or default conf / manual? It's the cautious thing to do, but I'm not yet sure if making everyone opt back in would really mean much (e.g. what number would give their app the required scrutiny before opting back in)

In the code. Configurations take an order of magnitude more years to trickle down to vendors compared to a code change... :-)
Re: breach attack
Good instructive and advisable config: https://community.qualys.com/blogs/securitylabs/2013/08/05/configuring-apache-nginx-and-openssl-for-forward-secrecy

On Tuesday 06/08/2013 at 19:24, Paul Querna wrote:
> Hiya,
>
> Has anyone given much thought to changes in httpd to help mitigate the recently publicized BREACH attack: http://breachattack.com/
>
> From an httpd perspective, looking at the mitigations http://breachattack.com/#mitigations
> [...]
> Thoughts? Other Ideas?
>
> Thanks,
> Paul
Re: breach attack
On Tue, Aug 6, 2013 at 10:38 AM, Steffen i...@apachelounge.com wrote:
> Good instructive and advisable config: https://community.qualys.com/blogs/securitylabs/2013/08/05/configuring-apache-nginx-and-openssl-for-forward-secrecy

Well, forward secrecy is really about the NSA capturing your traffic and decrypting it later; the BREACH attack stuff is about a chosen-plaintext attack on compressed response bodies -- AFAIK they have no overlapping mitigations?

But in general, we should rev our defaults in configuration to help with all of the above :)

> On Tuesday 06/08/2013 at 19:24, Paul Querna wrote:
>> Hiya,
>>
>> Has anyone given much thought to changes in httpd to help mitigate the recently publicized BREACH attack: http://breachattack.com/
>> [...]
>> Thoughts? Other Ideas?
>>
>> Thanks,
>> Paul
Re: breach attack
On 06.08.2013 19:36, Paul Querna wrote:
> On Tue, Aug 6, 2013 at 10:32 AM, Eric Covener cove...@gmail.com wrote:
>> On Tue, Aug 6, 2013 at 1:24 PM, Paul Querna p...@querna.org wrote:
>>> Hiya,
>>>
>>> Has anyone given much thought to changes in httpd to help mitigate the recently publicized BREACH attack: http://breachattack.com/
>>>
>>> From an httpd perspective, looking at the mitigations http://breachattack.com/#mitigations
>>>
>>> 1) Disabling HTTP compression
>>> 2) Separating secrets from user input
>>> 3) Randomizing secrets per request
>>> 4) Masking secrets (effectively randomizing by XORing with a random secret per request)
>>> 5) Protecting vulnerable pages with CSRF
>>> 6) Length hiding (by adding a random amount of bytes to the responses)
>>> 7) Rate-limiting the requests

The attack seems to focus on response bodies, so secrets transferred via cookies seem to be out of scope here. Any one-time tokens should be OK too, e.g. using nonces to protect against CSRF like we do in Apache or Tomcat for some of the pages that need protection. The page is protected itself, and by reducing the rate with which one can probe a site it should also protect the rest of the page against the attack.

The examples I saw were attacking a user id that was included in app pages after login. The user id itself was not sufficient for a login, so you would still need to attack the password. Of course you are now down to one unknown token instead of two, but the login name is often pretty easy to get by other means.

I think there are two different cases:

a) Secrets in the response body which are not displayed to the user by the user agent. These should typically be secrets related to credentials, like session IDs. An example is URL-encoded session IDs, as e.g. Java apps support them when cookies are off (;jsessionid=...). They are part of the response bodies and you can misuse them for session takeover. Or some continuation id contained in the body of a data request that's not meant to get rendered. Or hidden fields in forms.

This kind of information could in principle be twisted by the server, as long as it can undo the twist when the data is sent back (URI, query string, form param) and before it is handed over to the application. As far as I understood the attack, this twist doesn't have to be cryptographically secure; it would only be used to increase the combinatorial complexity of the attack. E.g. a session id could be prefixed by a random token that is used to XOR the rest of the session id. I didn't yet think deeper about that, though.

b) Private data that gets rendered and displayed. Here, twisting without cooperation by the user agent would break the application display. Examples could be the login name of a user displayed on every page, or simply some private data like the account number or the amount of money in your bank account when doing online banking. Either you would need a cooperating piece of JavaScript etc., or we could only try to apply some twist to the full response that's transparent to the UA but not transparent to the exact compressed byte content. The answer could be something generic -- for that I don't know enough about the exact algorithm in deflate -- or it could be something depending on MIME type, adding stuff like random comments etc.

From your list I find the rate limiting interesting. It would also have more uses than only making this attack less feasible. We would need to find a good pattern here, though: e.g. would it be enough to focus on the client IP, with people using proxies needing to switch the protection off behind a proxy?

>>> Many of these are firmly in the domain of an application developer, but I think we should work out if there are ways we can either change default configurations or add new features to help application developers be successful:
>>>
>>> 1) Has anyone given any thought to changing how we do chunked encoding? [...]
>>
>> Another option in this neighborhood is small/varying deflate blocks. But that probably limits the usefulness of deflate to the same extent that it helps. The idea is to make it less likely that the user input and secret get compressed together.
>>
>>> 2) Disabling TLS Compression by default, even in older versions. Currently we changed the default to SSLCompression off in >=2.4.3; I'd like to see us back port this to 2.2.x.
>>
>> I think it recently made it to 2.2.x, post the last release.

Yes, last Saturday but after 2.2.25: http://svn.apache.org/viewvc?view=revision&revision=1510043

>>> 3) Disable mod_deflate compression for X content types by default
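[The per-request "twist" described in case (a) can be sketched concretely: prefix the secret with a one-time random pad and XOR the secret with it, so the bytes the compressor sees differ on every response while the server can still recover the original value. A stand-alone Python illustration with made-up helper names, not anything in httpd or a particular app framework.]

```python
import os

def mask_token(secret: bytes) -> bytes:
    """Mask a secret for inclusion in a response body: generate a
    fresh random pad, XOR the secret with it, and send pad.masked
    (both hex-encoded). The underlying secret never appears as a
    stable byte string, so a BREACH-style guess cannot converge on
    it across requests."""
    pad = os.urandom(len(secret))
    masked = bytes(s ^ p for s, p in zip(secret, pad))
    return pad.hex().encode("ascii") + b"." + masked.hex().encode("ascii")

def unmask_token(wire: bytes) -> bytes:
    """Undo the mask when the value comes back from the client
    (URI, query string, form param)."""
    pad_hex, _, masked_hex = wire.partition(b".")
    pad = bytes.fromhex(pad_hex.decode("ascii"))
    masked = bytes.fromhex(masked_hex.decode("ascii"))
    return bytes(m ^ p for m, p in zip(masked, pad))
```

As the mail notes, the pad need not be cryptographically tied to the secret; its job is only to make every response's plaintext unique so the compression oracle stops leaking.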
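[The rate-limiting idea raised above can be sketched as a token bucket keyed by client IP; probing attacks that need thousands of requests get slowed proportionally, and the facility is useful beyond this one attack. A stand-alone Python illustration with arbitrary parameter values, not an httpd module; as noted in the mail, keying on client IP is problematic behind proxies.]

```python
import time
from collections import defaultdict

class IpRateLimiter:
    """Token-bucket limiter keyed by client IP: each IP may make
    `rate` requests per second on average, with bursts of up to
    `burst` requests."""

    def __init__(self, rate: float = 10.0, burst: float = 20.0):
        self.rate = rate
        self.burst = burst
        # Per-IP state: (tokens remaining, time of last refill).
        self.state = defaultdict(lambda: (burst, time.monotonic()))

    def allow(self, ip: str) -> bool:
        tokens, last = self.state[ip]
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at burst.
        tokens = min(self.burst, tokens + (now - last) * self.rate)
        if tokens >= 1.0:
            self.state[ip] = (tokens - 1.0, now)
            return True
        self.state[ip] = (tokens, now)
        return False
```

Even a generous limit changes the economics: an attack needing a few thousand probes per secret byte stretches from seconds to hours without affecting normal browsing.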