Re: [TLS] Last Call: draft-kanno-tls-camellia-00.txt (Addition

2011-03-15 Thread Marsh Ray

On 03/14/2011 05:49 PM, Martin Rex wrote:


The MD5 output is 128 bits = 16 bytes, and the input is *MUCH* larger
than 128 bits.  The master_secret alone is 48 bytes.  Even if one is
successful at inverting MD5, one can not undo the collisions from
the Finished computation caused by the compression of a much larger
input into a 128 bit output value.


You could accumulate multiple samples, perhaps even with session 
resumption where the Finished message is sent by the server without the 
chance to authenticate the client first.


Normally you don't even get to see the Finished.verify_data without 
breaking the encryption or downgrading to no encryption. But 40-bit 
encryption and integrity-only connections were fully supported use 
cases back in those days.


If they had really wanted to leverage the 16 or 20 byte bottleneck of 
MD5 and SHA-1, they should have padded the master_secret from 384 to 512 
bits (the input block size) before putting it into the hash function.
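
Roughly, the idea would be something like this (a hypothetical sketch in
Python; the padding byte, label, and inputs are made up for illustration):

# Hypothetical sketch of the padding suggested above: extend the 48-byte
# master_secret to one full 64-byte (512-bit) MD5/SHA-1 input block before
# hashing, so the compression function sees no other attacker-influenced
# input in that block. Values and label are illustrative only.
import hashlib
import os

BLOCK_SIZE = 64                      # MD5 and SHA-1 both use 512-bit blocks
master_secret = os.urandom(48)       # stand-in for a real TLS master_secret

padded = master_secret + b"\x00" * (BLOCK_SIZE - len(master_secret))
digest = hashlib.md5(padded + b"handshake messages would go here").hexdigest()
print(digest)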


- Marsh


Re: [TLS] Last Call: draft-kanno-tls-camellia-00.txt (Addition

2011-03-09 Thread Marsh Ray

On 03/08/2011 09:59 AM, Martin Rex wrote:


To me, truncating the output of a SHA-384 PRF to 12 octets looks like an
unreasonable cutdown of the security margin for the Finished messages.


I agree.

Last I looked into it, I came to the conclusion that collisions of any 
efficient 96 bit hash function are likely within range of today's 
supercomputers and botnets.


But the logistics of it probably make it impractical for an actual 
attack. You need the master secret to manipulate the verify_data in any 
valid way (and if the attacker had that there'd be no security left to 
attack anyway). Otherwise, a useful attack on the finished message 
probably has to involve 2^48 or so live network connections to collide 
among.
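
That 2^48 figure is just the birthday bound on a 96-bit value; a quick
back-of-the-envelope check:

# Birthday bound on a 96-bit value: a collision becomes likely once roughly
# sqrt(2^96) = 2^48 samples have been collected.
import math

bits = 96
samples_for_collision = 2 ** (bits / 2)
print(f"~2^{bits/2:.0f} = {samples_for_collision:.3e} samples")

# Probability of at least one collision among n random 96-bit values
# (standard birthday approximation: 1 - exp(-n^2 / 2^(bits+1))).
n = 2 ** 48
p = 1 - math.exp(-(n * n) / 2 ** (bits + 1))
print(f"collision probability with 2^48 samples: {p:.2f}")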


- Marsh


Re: [TLS] Last Call: draft-kanno-tls-camellia-00.txt (Addition

2011-03-09 Thread Marsh Ray

On 03/08/2011 11:33 AM, Eric Rescorla wrote:

On Tue, Mar 8, 2011 at 9:20 AM, Martin Rex <m...@sap.com> wrote:

Eric Rescorla wrote:


I don't understand this reasoning. Why does the output size of the
pre-truncated PRF
influence the desirable length of the verify_data (provided that the
output size is >= than
the length of the verify_data of course).


One of the purposes of a cryptographic hash function is to protect
from collisions (both random and fabricated collisions).


As I mentioned, that appears not to be the case here, but I have found 
no written rationale for this design subtlety. I could be missing something.


It's also possible that some cipher suite or mode could be added in the 
future that makes a collision on the verify_data significant.



Cutting down the SHA-384 output from 48 to 12 octets significantly impairs
its ability to protect from collisions.  It's comparable to
truncating the SHA-1 output from 20 to 5 octets.


I think it's more comparable to truncating SHA-1 down to 12 octets.


I don't understand this analysis. Consider two ideal PRFs:

* R-160 with a 160-bit output
* R-256 with a  256-bit output

Now, consider the function R-256-Reduced, which takes the first 160
bits of R-256.
Are you arguing that R-256-Reduced is weaker than R-160? If so, why?


I think he's arguing that anything cut down to 96 bits represents a 
lousy hash function allowing practical collisions on today's hardware.


The implication for TLS protocol evolution (and anything else 
interpreting bytes-on-the-wire) is to keep in mind that the 
Finished.verify_data is simply not designed to resist offline collision 
attacks.


Remember to bring this up the next time somebody suggests permitting 
endpoints to use lousy RNGs, particularly in conjunction with 
certificates with fixed DH parameters.


- Marsh


Re: [TLS] Last Call: draft-kanno-tls-camellia-00.txt (Addition

2011-03-09 Thread Marsh Ray

On 03/08/2011 12:45 PM, Martin Rex wrote:


RFC-5746 TLS extension Renegotiation indication


Yes, it would be bad if those could be collided.


I'm sorry, but I think it is a bad idea to use a flawed design for
the TLS finished message by subverting the collision resistance
of stronger secure hash functions that are used for the PRF.


Here's how my reasoning goes. I could be wrong.

The Finished.verify_data represents a 96-bit MAC, not a general-purpose 
cryptographic hash function.
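
For concreteness, here's a rough sketch of the TLS 1.2 computation as I read
RFC 5246 (verify_data = PRF(master_secret, finished_label,
Hash(handshake_messages)) truncated to 12 octets); the inputs below are
placeholders:

# Sketch of the TLS 1.2 Finished.verify_data computation (RFC 5246, 7.4.9):
# verify_data = PRF(master_secret, finished_label, Hash(handshake_messages))[0:12]
# The PRF is P_SHA256(secret, label + seed). Inputs below are placeholders.
import hashlib
import hmac
import os

def p_sha256(secret: bytes, seed: bytes, length: int) -> bytes:
    out, a = b"", seed
    while len(out) < length:
        a = hmac.new(secret, a, hashlib.sha256).digest()          # A(i)
        out += hmac.new(secret, a + seed, hashlib.sha256).digest()
    return out[:length]

def prf(secret: bytes, label: bytes, seed: bytes, length: int) -> bytes:
    return p_sha256(secret, label + seed, length)

master_secret = os.urandom(48)                 # placeholder master_secret
handshake_messages = b"...all handshake messages so far..."
verify_data = prf(master_secret, b"client finished",
                  hashlib.sha256(handshake_messages).digest(), 12)
print(verify_data.hex())   # a 96-bit MAC keyed by the master_secret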


The attacker is presumed to have the ability to modify data at some 
points in the protocol stream. The protocol relies on correct 
validation of the verify_data to detect this condition and abort.


Master_secret is not known to the attacker, if the connection is 
properly authenticated. The authentication only depends on the 
verify_data to prevent downgrade attacks. But if the authentication is 
broken, the attacker can simply issue the proper MAC and has no need to 
collide them.


Master_secret is very long, changes every full handshake, and is 
required to compute the MAC.


Any offline precomputation the attacker could do in support of collision 
finding requires the ability to compute the MAC function for the actual 
master_secret (as computed by the client or the server including 
locally-generated entropy not received from the attacker).


Therefore, collisions on the Finished.verify_data require active sessions 
to be useful, and the attacker will need to amass a pool of about 2^48 of 
them to collide among.


- Marsh


Re: [TLS] Last Call: draft-ietf-tls-ssl2-must-not-03.txt

2010-12-13 Thread Marsh Ray

On 12/10/2010 03:35 PM, Sean Turner wrote:

On 12/3/10 3:27 PM, Sean Turner wrote:


"negotiate" means returning a ServerHello handshake message with that
version number (neither an SSL 2.0 SERVER-HELLO, nor an SSLv3
ServerHello with a server version of { 0x02,0x00 }).

"use" means to successfully complete the handshake and start exchanging
application data protected under protocol version {0x02,0x00}.


How could you ever use it without negotiating it first?
It seems like a distinction without a difference in this document.


So it's been proposed that I better integrate the text after the bullets
into the bullets and better explain negotiate and use. I'm game.

For the client, all I've ever wanted to do is change the "TLS 1.2
clients SHOULD NOT support SSL 2.0" to a "MUST NOT".


Sounds good to me.


For me, client
support meant sending SSL 2.0 CLIENT-HELLO, accepting SSL 2.0
SERVER-HELLO, and then proceeding using SSL 2.0 records. I can see that
people might not make the leap that client support meant accepting the
SSL 2.0 SERVER-HELLO and using SSL 2.0 records because the above-quoted
sentence was in a warning that discussed 2.0 CLIENT-HELLOs.


RFC2246: TLS 1.0 clients that support SSL Version 2.0 servers
RFC2246: must send SSL Version 2.0 client hello messages [SSL2].

IMHO, this statement on its own is ambiguous to the point of being 
wrong. The problem is that it muddies up the essential distinction:

The term SSL Version 2.0 refers to a completely different thing
than an SSLv2-format compatible Client Hello message.

But it is cleared up in subsequent wording:
RFC 2246: TLS servers should accept either client hello format if

This uses the word "format" to make that critical distinction.

RFC2246: they wish to support SSL 2.0 clients on the same
RFC2246: connection port. The only deviations from the Version 2.0
RFC2246: specification are the ability to specify a version with a
RFC2246: value of three and the support for more ciphering types
RFC2246: in the CipherSpec.

It's really imprecise to talk about a Version 2.0 Client Hello message 
because one of the primary purposes of the initial hello message 
exchange is to negotiate exactly which protocol will be used for all 
subsequent messages (i.e., the protocol version).


The Client Hello, in effect, has its own protocol version which has 
evolved somewhat independently over the years:


SSLv2
SSLv2 with SCSV
SSLv3/TLS
SSLv3/TLS with SCSV
SSLv3/TLS with extensions
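
For illustration, a rough sketch of how the two hello wire formats are
usually told apart from the first few bytes received (simplified; real code
has to deal with fragmentation and other record types):

# Rough sketch: classify the first bytes of a connection as an SSLv2-format
# CLIENT-HELLO or an SSLv3/TLS ClientHello. Simplified; real implementations
# handle more cases.
def classify_hello(data: bytes) -> str:
    if len(data) < 3:
        return "need more data"
    # SSLv2 record: 2-byte length with the high bit set, then msg_type 0x01
    # (CLIENT-HELLO); the version offered comes next and may be 3.x.
    if data[0] & 0x80 and data[2] == 0x01:
        return "SSLv2-compatible CLIENT-HELLO"
    # SSLv3/TLS record layer: content type 22 (handshake), then version 3.x.
    if data[0] == 0x16 and data[1] == 0x03:
        return "SSLv3/TLS ClientHello"
    return "not a recognized hello"

print(classify_hello(bytes([0x80, 0x2e, 0x01, 0x03, 0x01])))  # v2 format, offering 3.1
print(classify_hello(bytes([0x16, 0x03, 0x01, 0x00, 0x2f])))  # v3+ record layer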


For servers, my thinking has evolved. I'm now at the point where I'd
like to have TLS servers not have to support SSL 2.0 SERVER-HELLOs and
SSL 2.0 records after the handshake (RFC 5246 is clear that TLS
1.*/SSLv3 servers can accept a SSL 2.0 CLIENT-HELLO).


Isn't an SSL 2.0 SERVER-HELLO equivalent to negotiating the use of 
the SSL 2.0 protocol? Isn't that the thing for which we're trying to put 
the final nail in the coffin?



If we prohibited TLS client's sending SSL 2.0 CLIENT-HELLO and TLS
servers sending SSL 2.0 SERVER-HELLO, then I'd be happy because clients
won't send SSL 2.0, servers won't respond with SSL 2.0, and therefore
clients won't end up agreeing to send SSL 2.0 records and the server
won't have to do SSL 2.0 records.


Really all you have to do is prohibit the client from sending an 
SSLv2-compatible client hello.


It may not hurt to reiterate that this implies that the server will not 
be subsequently negotiating the use of SSL 2.0 either. But the client 
hello messages and the server hello messages have completely different 
considerations, so there's no point in trying to make the language 
symmetric.



How about the following replacement text for Section 3:

Because of the deficiencies noted in the previous section:

* TLS Clients MUST NOT propose SSL 2.0 to a TLS server by sending
an initial SSL 2.0 CLIENT-HELLO handshake message with protocol
version { 0x02, 0x00 }.


How about

TLS clients MUST NOT send the SSL version 2.0 compatible CLIENT-HELLO 
message format. Clients MUST NOT send any client hello message which 
specifies a protocol version less than { 0x03, 0x00 }. As previously 
stated by the definitions of all previous versions of TLS, the client 
SHOULD specify the highest protocol version it supports.



* According to [RFC5246] Appendix E.2, TLS Servers, even when not
implementing SSL 2.0, MAY accept an initial SSL 2.0 CLIENT-HELLO
as the first handshake message of a SSLv3 or TLS handshake.


TLS servers MAY continue to accept client hello messages in the version 
2 client hello format as specified in TLS [RFC2246]. Note that this does 
not contradict the prohibition against actually negotiating the use of 
SSL 2.0.



[RFC5246] allowed TLS servers to respond with a SSL 2.0
SERVER-HELLO message.


I don't see where it says that explicitly.

I would have interpreted RFC5246 as being mostly irrelevant once each 
endpoint learns that the highest mutually-supported version was 
something less than TLS 1.2.


AFAICT, this is the first document in the SSL/TLS 

Re: [TLS] Last Call: draft-ietf-tls-ssl2-must-not-03.txt (Prohibiting SSL Version 2.0) to Proposed Standard

2010-12-02 Thread Marsh Ray

On 12/02/2010 08:01 AM, Glen Zorn wrote:


Maybe I just don't understand the word "use".  It seems like if a
server accepts a protocol message it's using the protocol...


Hard to argue with that logic...but... :-)

The Client Hello message is the first message sent in the protocol. Its
format changed completely from SSLv2 to what is used by SSLv3 through
TLS 1.2.

The number one job of the Hello messages (in any protocol) is generally 
to negotiate the version of the protocol used for all subsequent 
messages. Everything else in the Hello message is, in a sense, put there 
as an optimization to save some round trips.


Since the ancient v2 servers obviously couldn't understand one bit of a
v3 or later Client Hello (some would break quite badly), there was an 
option in the SSLv3 spec to lead off the handshake with a v2-compatible 
Client Hello. A direct transliteration of the v3 Client Hello message to 
a v2 was defined. The only reason that worked was because the v3 Hello 
didn't introduce any significant new information (when it was first 
defined).


So it is, in fact, possible to use a SSLv2 Client Hello message with
later protocols, even if neither endpoint is willing to speak SSLv2 for
the actual connection. There is a significant percentage of handshakes 
today which lead off with a v2 Hello, even though the vast majority of 
servers support TLS 1.0.


The renegotiation problem required a means to signal at least one new
flag (roughly, "I'm patched") in the initial Hello message. This is
probably still fresh on everyone's minds, so the fact that a means was
found to signal this bit in the SSLv2-format Client Hello message still
feels relevant. If it had not been possible to squeeze that extra flag
in, the text we have been discussing would be much different.

On 12/01/2010 08:31 PM, Glen Zorn wrote:

Section 3 says "TLS clients MUST NOT send SSL 2.0 CLIENT-HELLO
messages." and "TLS servers MUST NOT negotiate or use SSL 2.0" and
later "TLS servers that do not support SSL 2.0 MAY accept version
2.0 CLIENT-HELLO messages as the first message of a TLS handshake for
interoperability with old clients." Taken together, I find these
statements quite confusing, if not outright self-contradictory.


I don't see any problem with them. Sometimes the wording in RFCs reads a 
bit like a bullet-point list of standalone requirements that got 
formatted into a paragraph. I find this style to actually be quite 
comforting when you go to implement something. You can turn it into an 
implementation checklist with less chance that you might lose something 
written between the lines.



Maybe a "However" might fix the problem, though:

TLS servers MUST NOT negotiate or use SSL 2.0; however, TLS servers
MAY accept SSL 2.0 CLIENT-HELLO messages as the first message of a
TLS handshake in order to maintain interoperability with legacy
clients.


I do like your wording better. But I don't think it's enough of a 
technical improvement to necessitate change during last call.


- Marsh


Re: [TLS] [certid] [secdir] secdir review of draft-saintandre-tls-server-id-check-09

2010-09-24 Thread Marsh Ray

On 09/23/2010 01:10 PM, Richard L. Barnes wrote:

There is no black magic here, only the magic of the TLS server_name
extension. If the client provides server_name=gmail.com, the server
provides a gmail.com cert, otherwise it defaults to mail.google.com.
 Your browser is following two secure delegations before it lands at
 www.google.com (gmail.com -> mail.google.com -> www.google.com).


I'd not even considered SNI.


My guess based on the anecdotes in the thread is that IE8 doesn't
support it.


Not IE8, but the pre-Vista Windows I was testing it on that doesn't do
extensions by default.

Which is why I'd not considered that gmail would depend on SNI for
its operation. I'd forgotten that this is Google we were talking about 
and not any other company in the world that would put support for MSIE 
on Windows XP ahead of protocol standards. :-)
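
If anyone wants to reproduce the observation, something like this should do
it (a sketch; it assumes a Python build with SNI support, and of course
Google's behavior may have changed since then):

# Quick sketch: connect with and without SNI and print which certificate
# subject the server presents. Server behavior may differ from the thread.
import socket
import ssl

def peer_cert_subject(host: str, sni_name):
    ctx = ssl.create_default_context()
    if sni_name is None:
        ctx.check_hostname = False   # no SNI sent, so no hostname to check
    with socket.create_connection((host, 443), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=sni_name) as tls:
            return tls.getpeercert().get("subject")

print(peer_cert_subject("gmail.com", "gmail.com"))   # with SNI
print(peer_cert_subject("gmail.com", None))          # without SNI (like old stacks)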



(You should also be more careful about your HTTP emulation! "A client
MUST include a Host header field in all HTTP/1.1 request messages.")


Yep, that's why I requested HTTP/1.0.

- Marsh


Re: [TLS] [certid] [secdir] secdir review of draft-saintandre-tls-server-id-check-09

2010-09-23 Thread Marsh Ray

On 09/22/2010 01:31 PM, ArkanoiD wrote:

BTW, slightly offtopic here: whenever i connect to gmail.com, i get certificate
for mail.google.com. But i've yet to see any web browser to complain! Where is 
the magic?


Seems totally relevant to me.

Going to https://gmail.com/ I get some kind of redirection to 
https://www.google.com/accounts/ServiceLogin...


I can confirm the silent redirect behavior on FF; an associate reports 
it on IE9. I tried IE8 but get the expected "cert was issued for a 
different website's address" error.


Hopefully I'm overlooking something simple, but at first glance it would 
seem like one of these two conditions is true:


1. Multiple vendors are putting some kind of override table in their 
browsers with an entry for gmail.com.


2. Browsers are running script from badly authenticated sources.

So what does gmail.com have in this situation that an attacker couldn't 
obtain for phonygmail.com?


- Marsh


ma...@lamb:/tmp$ dig -t any gmail.com

; <<>> DiG 9.7.0-P1 <<>> -t any gmail.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 44091
;; flags: qr rd ra; QUERY: 1, ANSWER: 11, AUTHORITY: 0, ADDITIONAL: 2

;; QUESTION SECTION:
;gmail.com. IN  ANY

;; ANSWER SECTION:
gmail.com.  300 IN  A   74.125.227.22
gmail.com.  300 IN  A   74.125.227.21
gmail.com.  300 IN  A   74.125.227.24
gmail.com.  300 IN  A   74.125.227.23
gmail.com.  86400   IN  NS  ns4.google.com.
gmail.com.  86400   IN  NS  ns1.google.com.
gmail.com.		86400	IN	SOA	ns1.google.com. dns-admin.google.com. 1427981 
21600 3600 1209600 300

gmail.com.  3600IN  MX  40 
alt4.gmail-smtp-in.l.google.com.
gmail.com.  3600IN  MX  5 gmail-smtp-in.l.google.com.
gmail.com.  3600IN  MX  20 
alt2.gmail-smtp-in.l.google.com.
gmail.com.  300 IN  TXT v=spf1 
redirect=_spf.google.com

;; ADDITIONAL SECTION:
ns4.google.com. 85092   IN  A   216.239.38.10
ns1.google.com. 85092   IN  A   216.239.32.10

;; Query time: 54 msec
;; SERVER: 192.168.1.3#53(192.168.1.3)
;; WHEN: Wed Sep 22 14:26:29 2010
;; MSG SIZE  rcvd: 330



ma...@lamb:/tmp$ openssl s_client -connect gmail.com:443
...
subject=/C=US/ST=California/L=Mountain View/O=Google Inc/CN=mail.google.com
issuer=/C=ZA/O=Thawte Consulting (Pty) Ltd./CN=Thawte SGC CA
...
---
GET / HTTP/1.0

HTTP/1.0 200 OK
Date: Wed, 22 Sep 2010 19:31:43 GMT
Expires: -1
Cache-Control: private, max-age=0
Content-Type: text/html; charset=ISO-8859-1
Set-Cookie: 
PREF=ID=8614650b9dda6802:TM=1285183903:LM=1285183903:S=B88jR4IHVEMJ7oJ7; 
expires=Fri, 21-Sep-2012 19:31:43 GMT; path=/; domain=.google.com
Set-Cookie: 
NID=39=nR1SfxSCd9I9frwdHUXGHtOKWCI2yKMLaVWVnRZk50jDJv4InnuJPuhruGHy2j8hWeKdBfO18SCZzEm6N0qMW_flPF6tF6i-CvhRU1DrDDYvExygPnpew69GRLaWZeI0; 
expires=Thu, 24-Mar-2011 19:31:43 GMT; path=/; domain=.google.com; HttpOnly

Server: gws
X-XSS-Protection: 1; mode=block

[minified Google homepage HTML/JavaScript elided; it is mangled in the
archive and the message is truncated here]
Re: [TLS] [certid] [secdir] secdir review of draft-saintandre-tls-server-id-check-09

2010-09-23 Thread Marsh Ray

On 09/22/2010 01:31 PM, ArkanoiD wrote:
 BTW, slightly offtopic here: whenever i connect to gmail.com, i get
 certificate for mail.google.com. But i've yet to see any web browser
 to complain! Where is the magic?

On 09/22/2010 02:37 PM, Marsh Ray wrote:


Hopefully I'm overlooking something simple, but at first glance it would
seem like one of these two conditions is true:

1. Multiple vendors are putting some kind of override table in their
browsers with an entry for gmail.com.


This search
http://mxr.mozilla.org/mozilla1.9.2/search?string=[^%40]gmail.com&regexp=1&hitlimit=&tree=mozilla1.9.2

doesn't return any hits. That search page is a little tricky though. It 
kept wanting to change my ^@ to \0! :-)


Which suggests that:

2. Browsers are running script from badly authenticated sources.


- Marsh


Re: [TLS] [certid] review of draft-saintandre-tls-server-id-check-09

2010-09-23 Thread Marsh Ray

On 09/22/2010 08:48 PM, Martin Rex wrote:

Henry B. Hotz wrote:


[...] For example the user may trust a dedicated discovery service
or identity service that securely redirects requests from the source
to a target domain.


Thinking about it, I feel slightly uneasy about some redirects, such as
https://gmail.com -> 301 -> https://mail.google.com/mail

I think these should never go without a warning.


That bugs me too. Lots of sites do it though, usually with Javascript.


If my bank's online-banking portal (https://www.mybank.de)
would suddenly redirect me to https://www.mybank.com before
asking me for credentials and transaction authorization codes,
that would be a real security problem, because www.mybank.com
is not leased by my bank (it is apparently not currently leased to anyone).


Yep. There are often ways to make that happen with just a blind 
plaintext injection capability, too.



A hacker that breaks into a web-site in order to trap
victims


The site is now 100% (to use the technical term) pwned.

It's not possible for a network security protocol to survive the 
compromise of one of the endpoints. We can no longer reason about Alice 
and Bob if Bob is allowed to be under the hypnotic control of Eve.


I think it's dangerous to try. We're likely to optimize for cases of 
dubious security at the expense of some properly functioning cases.



might be less easily detected if he doesn't subvert
the entire site and tries to send collected data to external
places, but instead puts redirects into place that browsers
will blindly and silently follow, maybe additionally filtering
the clients that will be redirected based on their origin,
so that the helpdesk and security guys can not immediately
repro it with their browsers.

Should a user's decision to trust a particular service with
a particular issue always imply that this particular service
is a fully trusted naming service (i.e. one that performs
secure name transformations)?


There are lots of ways a site can delegate its security like that. They 
could load Javascript or HTML from external sites, for example. They can 
use headers and script to broaden the origin which the browser trusts.


http://code.google.com/p/browsersec/wiki/Part2

Not to mention that they could simply proxy the requests on the server 
side, outsource their servers to someone untrustworthy, add insecure 3rd 
party tracking cookies, redirect via non-SSL HTTP, and so on.


Once your browser trusts a server to serve some domain/origin, there's 
almost nothing that server can't do with its identity, including 
delegate it to someone else (intentionally or not).


- Marsh


Re: [TLS] Last Call: draft-hoffman-tls-additional-random-ext (Additional Random

2010-04-27 Thread Marsh Ray
On 4/23/2010 12:12 PM, Nicolas Williams wrote:
 
 Irrelevant: if the random octets being sent don't add entropy (because
 they are sent in cleartext) then this extension is completely orthogonal
 to PRNG failures.

Even though they are sent in-the-clear, the random data do serve the
same useful purpose as the existing [cs]_random data.

(Mathematicians and professional cryptographers should probably avert
their eyes from the fast-and-loose reasoning which follows.)

Because they are unpredictable they make offline precomputation harder.
I think of it as adding entropy into offline computation, without adding
any to the online computation.

I would think that the current 224-256 bits is enough to thwart offline
attacks. The attacker would need something proportional to 2**224
storage to store the results of his precomputation, no?

Assume attacker can knock off 2**42 using rainbow table techniques (he
has a 1024 unit cluster of CPUs which can each compute one result online
every clock at 4GHz). So he needs to store something like 2**182 results
from his precomputation. Assuming 1 bit per result, probably you'd need
more. Raw HDDs are the cheapest form of mass storage today at $75/TB
(10**12 bytes?). Such a system would cost
 $ 57468582782470832188438013518137000.00
today. Of course those costs are likely to decline over time.

Again, this is the cost you impose on the attacker today by simply
ensuring you use the current protocol as intended.

 I do believe it's mostly harmless; I am concerned that 2^16 max octets
 seems like a bit much, possibly a source of DoS attacks.  I believe it's
 also useless.  As such I'm not opposed to it as an Experimental or even
 Informational RFC.

There is a danger with this proposal. In no way do I mean to suggest
that Paul has any unstated motivations here.

One aspect of saying that a data area is random is saying that the RFCs
can impose no restrictions on it. Allowing arbitrary unstructured
random data in the protocol opens the door for private extensions to
be added by various parties.

For example, it appears that 4 of the 32 bytes originally specified for
random data got repurposed for GMT, leaving "this is GMT but the clock is
not required to be right" in the spec.
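
For reference, the layout in question is just a 4-byte timestamp plus 28
random bytes (the struct Random of RFC 5246); a sketch:

# The 32-byte Random field: a 4-byte gmt_unix_time (not required to be
# accurate) followed by 28 random bytes.
import os
import struct
import time

client_random = struct.pack(">I", int(time.time())) + os.urandom(28)
assert len(client_random) == 32
print(client_random.hex())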

Once a few more of these accumulate in the protocol without central
coordination we end up with incompatibilities that the IETF process can
no longer prevent.

- Marsh


Re: [TLS] Last Call: draft-hoffman-tls-additional-random-ext (Additional Random

2010-04-27 Thread Marsh Ray
On 4/26/2010 4:36 PM, Nicolas Williams wrote:
 On Mon, Apr 26, 2010 at 04:18:33PM -0500, Marsh Ray wrote:
 Taking ietf@ietf.org off of CC list as this seems to be very TLS specific.
 
 This is an IETF LC, not a WG LC; IETF LC comments should be sent to
 i...@ietf.org.  If anything, we might want to drop t...@ietf.org.

That makes sense.

 Thus ISTM that we should first consider either whether the client_random
 and server_random fields are sufficient _assuming_ compliant [P]RNGs or
 consider how draft-hoffman-tls-additional-random-ext can ameliorate TLS
 implementations that have poor [P]RNGs.

I think the current space in the protocol of 224-256 bits in each
direction is sufficient. Well-known techniques exist for compressing
whatever format of entropy is available into that space.
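
For example, one such well-known technique is simply to hash whatever entropy
material is available down to the size of the field (a sketch; the pool
contents here are stand-ins for whatever sources an implementation really has):

# Sketch: condition an arbitrary-format entropy pool down to the 28 random
# bytes of the Random field with a cryptographic hash.
import hashlib
import os
import time

pool = b"".join([
    os.urandom(64),                      # OS entropy, if available
    str(time.time_ns()).encode(),        # timing jitter
    str(os.getpid()).encode(),           # other environment noise
])
compressed = hashlib.sha256(pool).digest()[:28]   # fits the 28-byte random field
print(compressed.hex())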

 Ah!  Perhaps what's happening here is that Paul intends for the
 additional random inputs to be provided by the _application_, from
 outside the TLS implementation.  In that case an application could make
 secure use of TLS even when the underlying TLS implementation has a poor
 [P]RNG.  That would make draft-hoffman-tls-additional-random-ext much
 more interesting (combined with some editing I'd drop my objections).

But that facility could be provided by the implementation API without
any need to extend the TLS protocol. Indeed, OpenSSL provides a function
to contribute entropy into its RNG.
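
For example (RAND_add(), which is also exposed through Python's ssl module):

# OpenSSL's RAND_add() mixes caller-supplied entropy into its RNG.
import os
import ssl

seed = os.urandom(32)          # stand-in for application-gathered entropy
ssl.RAND_add(seed, 32.0)       # second argument: estimated entropy in bytes
print(ssl.RAND_status())       # True once the PRNG is sufficiently seeded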

Thus I do not think draft-hoffman-tls-additional-random-ext should be
advanced as a standard.

- Marsh


Re: [TLS] Metadiscussion on changes in draft-ietf-tls-renegotiation

2010-02-01 Thread Marsh Ray
Stefan Santesson wrote:
 
 I totally agree in principle. However, if a renegotiating client sends a
 full RI extension with valid data AND the client also sends SCSV, then what
 security problem is introduced by allowing the renegotiation (i.e. Not
 aborting)
 
 No matter how hard I try, I can't find the security problem and I can't find
 the interoperability advantage.

 Hence, the MUST abort requirement seems like an unmotivated restriction.
 I'm not saying that we have to change the current draft, I'm just curious to
 understand the real benefits of this requirement.

In a sense it allows a consistent definition of the semantics of SCSV:
The presence of SCSV is equivalent to an empty RI extension. Under such
a definition, the presence of multiple conflicting RIs (especially an
empty RI during a renegotiation) is clearly an abort-able offense!
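
In server pseudocode, that consistent definition looks roughly like this
(a sketch; the function and parameter names are made up):

# Sketch of the "SCSV is equivalent to an empty RI extension" rule on the
# server side. Illustrative only.
SCSV = 0x00FF   # TLS_EMPTY_RENEGOTIATION_INFO_SCSV

def check_client_hello(cipher_suites, ri_data, is_renegotiation,
                       expected_client_verify_data):
    scsv_seen = SCSV in cipher_suites
    if not is_renegotiation:
        # Initial handshake: SCSV or an empty RI both mean "I support RFC 5746".
        return scsv_seen or ri_data == b""
    # Renegotiation: an empty RI (and, under this definition, the SCSV)
    # conflicts with the required previous verify_data -> abort.
    if scsv_seen or ri_data == b"":
        raise ValueError("handshake_failure: conflicting renegotiation info")
    return ri_data == expected_client_verify_data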

This permitted, though NOT RECOMMENDED, practice of sending SCSV during
insecure renegotiation can be (charitably) thought of as not changing
the semantics of SCSV, but instead being a sneaky manipulation in order
to probe the target server to double-check that he's still not upgraded.
 This probably does not add any real security (MitM could just drop the
RI from the Server Hello and the client might not be able to detect it
until after he sent his Finished message). Seems like implementations
wishing to use this kind of trickery could do so without it having been
explicitly permitted by the spec.

 Again, based on what I have seen, I would not be surprised.
 I don't think being accountable to the market is a very strong threat

Seriously, got anything better in mind? Security researchers dropping
0-days maybe?

 if a
 safe adjustment to a more liberal processing helps customers to avoid
 interop problems while maintaining the security of the protocol.
 
 Hence, if there IS a security threat that motivates this, then it is
 extremely important to spell it out in the clear.

At one time there seemed to be some ambiguity in the specs (or at
least some interpreted it that way) as to whether or not extension data
on the Hellos was actually to be included in calculations for the Finished
message verify_data. Perhaps some vendors included it and some did not,
and no one noticed at the time because there were no extensions defined.
I vaguely recall somebody suggesting on this list that (for
interoperability of course) some implementation calculated the
verify_data both ways and accepted either one that matched.

If this is in fact the case, it seems plausible that there might be
implementations (of older specs like SSLv3) out there for which
extension data can be dropped or changed by MitM.

I wonder if considerations for these systems have anything to do with it?

- Marsh


Re: [TLS] Metadiscussion on changes in draft-ietf-tls-renegotiation

2010-01-29 Thread Marsh Ray
Stefan Santesson wrote:
 This makes no sense to me.
 
Developers tend to live by the rule "be liberal in what you accept" as it
tends to generate fewer interop problems.

Not in my experience.

HTML is perhaps the ultimate example of a liberal implementation of a
spec. To this day, many years later, it's common to find pages that
render badly with one or more implementations.

Whenever an application is actively being liberal in what it accepts,
the sending application is in fact relying on undocumented behavior.
This is what causes the interop problems, not strictness.

In practice, if protocol receiving applications are consistently strict
about what they accept, then mutant protocol sending applications do not
get out to pollute the ecosystem.

 It makes no sense to abort a TLS handshake just because it contains an SCSV
 if everything else is OK.

This is a cryptographic data security protocol for which
interoperability must be secondary to security. If anything is malformed
or suspicious, the handshake should be aborted.

 So This MUST NOT requirement will likely be
 ignored by some implementations.

They should expect the market to hold them accountable if problems result.

The implementers on this list have indicated they could produce
interoperable implementations of this spec. And they appear to have
proven it by demonstrating implementations which actually interoperate.

 03 gives SCSV somewhat double and conflicting semantics.
 
 1) Present in an initial handshake it signals client support for secure
 renegotiation.
 
 2) Present in a renegotiation handshake it signals that the client DOES NOT
 support secure renegotiation (Handshake MUST be aborted).
 
 I think this is asking for trouble.

Yep, it's an inelegant hack. None of this is how any of us would have
designed it from the beginning.

I would recommend that clients always send a proper RI extension and
don't even mess with sending SCSV. Extensions-intolerant servers should
just go away. Servers can just treat SCSV like an empty RI (clearly wrong 
for a renegotiation) without adding much complexity.

As for "[Upgraded] Clients which choose to renegotiate [...] with an
un-upgraded server", they deserve whatever they get.

SCSV is just for those implementers that want to make insecure
connections with old and broken peers. That task has always involved
added complexity and they know the work they're buying for themselves.

- Marsh


Re: [TLS] Chatter

2010-01-28 Thread Marsh Ray

Yngve Nysaeter Pettersen wrote:


I prefer publishing the specification as-is.


Additional comment:

The SCSV is a temporary fallback, one that will not be needed when 
clients enter strict mode, since when that happens servers have to 
support the RI extension.  Its use should therefore be kept to the 
minimum needed to provide the extra security it is designed to provide.


IMO the only situations in which the SCSV should be sent are:

   1) Initial connections when the server is known to not tolerate TLS 
Extensions, or this is not known


And you're willing to put the client at risk by possibly connecting to a 
vulnerable server which may have buffered input data from MitM and is 
willing to renegotiate.



, and
   2) During renegotiation with a server known not to support the RI 
extension _and_ that does not tolerate TLS Extensions


In that case, you were already pwned at the initial connection.

, just in case this 
is a MITM attack  and the real server actually supports RI, in which case
we want the connection shut down. If the server tolerates extensions 
then only the RI extension should be sent.


As a general-purpose TLS client:
1. Don't connect to a server without RI.
2. If you ignore (1), don't send SCSV when you renegotiate.

As a general-purpose TLS server:
1. Don't allow connections without RI.
2. If you ignore (1), don't renegotiate.

- Marsh



Re: [TLS] draft-ietf-tls-renegotation: next steps

2009-12-17 Thread Marsh Ray
Eric Rescorla wrote:

   It is possible that un-upgraded servers will request that the client
   renegotiate.  It is RECOMMENDED that clients refuse this
   renegotiation request.  Clients which do so MUST respond to such
   requests with a no_renegotiation alert [RFC 5246 requires this
   alert to be at the warning level.]  It is possible that the
   apparently un-upgraded server is in fact an attacker who is then
   allowing the client to renegotiate with a different, legitimate,
   upgraded server.  In order to detect this attack, clients which
   choose to renegotiate MUST provide either the
   TLS_RENEGO_PROTECTION_REQUEST SCSV or renegotiation_info in their
   ClientHello.  In a legitimate renegotiation with an un-upgraded
   server, either of these signals will be ignored by the server.
   However, if the server (incorrectly) fails to ignore extensions,
   sending the renegotiation_info extension may cause a handshake
   failure.


 Thus, it is permitted, though NOT RECOMMENDED, for the
   client to simply send the SCSV.  This is the only situation in which
   clients are permitted to not send the renegotiation_info extension
   in a ClientHello which is used for renegotiation.

This is bad because it greatly complicates the simple definition of the
SCSV as having exactly the same semantics as an empty 'renegotiation_info'
extension. In fact, it tends to convey exactly the message that the
attacker wants: "this is an initial handshake."

   Note that in the case of this downgrade attack above, if this
   is the initial handshake from the server's perspective, then use of
   the SCSV from the client precludes detection of this attack by the
   server.  However, the attack will be detected by the client when the
   server sends an empty renegotiation_info extension and the client
   is expecting one containing the previous verify data.

It doesn't seem very robust to me. A very long chain of things has to
go just so (some even at the option of the attacker) for the client to
detect this attack.

 By contrast,
   if the client sends the renegotiation_info extension, then the
   server will immediately detect the attack.
 
 After flip-flopping on this in my head a few times,

You're not the only one. :-)

however, my
personal view is that I think this goes too far in the direction of
accommodating broken servers.

Not so much that we want to specifically not be accommodating, but this
case introduces complexities and internal contradictions at the expense
of non-crusty clients and servers.

There's also something to be said for specifically declining to
accommodate things in a spec that are already broken twice or thrice
over (depending on which spec you read) in the first place.

It's not even fixing things half-way. It may be fixing things
one-third-way or not at all depending on your philosophy.

IMHO, the IETF shouldn't endorse the making of insecure TLS connections
by attempting to secure them poorly.

In this case, however, the specs should leave enough room for
applications to do that sort of thing on their own if they so choose.
But in doing so, application vendors, admins, and users alike should
know that they are engaging in risky behavior in public. It's visible on
the wire.

 Sending RI in this instance only creates
 an interop problem when a server (1) is doing something we know to be
 really unsafe and (2) can't even ignore extensions correctly. We've
 seen a number of suggestions that we actually forbid renegotiation in
 case (1) and while I suspect WG consensus doesn't go that far, it's not
 clear to me that we need to not only allow it but also compensate for
 servers which are broken in other respects.

 So, my preference would
 be to simply mandate RI with the previous verify_data here as in
 all other cases.

+1 for simplicity and consistency

- Marsh



Re: [TLS] Last Call: draft-ietf-tls-renegotiation (Transport Layer

2009-12-02 Thread Marsh Ray
Martin Rex wrote:
 Chris Newman wrote:
 Evaluation relative to draft-mrex-tls-secure-renegotiation-03:

 Kudos to Martin Rex for producing such a good alternate proposal.  The 
 introductory text up to and including section 4.1 is very good and would 
 improve draft-ietf-tls-renegotiation.  While I would support a consensus to 
 publish the mrex document as the solution, I presently prefer 
 draft-ietf-tls-renegotiation-01 for four reasons:

 1. Running code: multiple implementations and interop testing was
performed on an earlier version of draft-ietf-tls-renegotiation.
 
 Even EKR admitted that implementing the update is an insignificant
 amount of work.
 
 Pushing this point, that there were interoperable implementations
 when this proposal was made in the IETF smells very much like
 a request for rubber stamping. 

My goal had been to present a usable solution along with the disclosure
of the vulnerability. When it comes to closing exploitable security
holes, a usable solution today is better than a perfect solution next
year. If that's rubber stamping to you, well sorry.

 2. Impact to core protocol handshake: The mrex proposal alters the 
handshake to include data that is not exchanged in-protocol.
If this impacts PKCS#11 hardware tokens or other SSL accelerators
(an issue mentioned by Dr Stephen Henson on the TLS list),
that could severely impact deployment.
 
 Crypto hardware in general and PKCS#11 are not affected.  What I am
 changing is the plaintext input to a hash or hash-like function.

Agree.

 What might be affected is hardware implementations of SSL/TLS
 or hardware-support for SSL/TLS in TLS endpoints.  It might be
 a firmware rather than hardware issue there.
 
 So far, I haven't seen any vendor/implementor that is facing such
 difficulties raising such concerns.
 
 Which could mean two things:
  1. there is no problem
  2. those vendors/implementors have not looked at / evaluated
 the current proposals.
 
 (1) would make it a non-issue and (2) would indicate than a decision
 in the IETF is _premature_, we would have to ask some of those
 vendors/implementers for feedback before making a decision.

I am a bit disappointed that we have not had better participation from
the vendors, particularly hardware. My suspicion is that they're
reticent to discuss what they're up against either publicly or privately.

We can't delay the solution for the large percentage of affected
deployments that are represented by vendors who did choose to
participate in the discussion for the sake of those who did not.

 4. The mrex proposal requires use of TLS_RENEGO_PROTECTION_REQUEST in some 
circumstances.  That approach is untested in the field and I have
concerns it will negatively impact middleboxes.
 
 Huh?  That sounds extremely unbelievable.

I suspect that some inspecting firewall or IDS box somewhere is going to
take issue with the new cipher suite value and have to be reconfigured.
I also believe that it's a minimal concern and it's the least-breaking
of all the possible signaling methods.

 Do any of the browser vendors cut down on ciphersuites in their
 current reconnect fallbacks -- and remove ciphersuites like
 those with AES?
 
 A quick glance indicates that Firefox 3.0.15 happily sends
 a list of 18 ciphersuites, including Camellia, in an SSLv2 ClientHello
 on reconnect fallback.

My experiments were showing that some clients would advertise a reduced
list of cipher suites in the renegotiation hello.

As draft-ietf-tls-renegotiation allows use of either the cipher suite
value or the extension for C-S signaling, that mitigates the concern -- 
the field can choose the mechanism that works best.
 
 I consider this a defect in draft-ietf-tls-renegotiation-01.
 There should be exactly **ONE** signaling mechanism for the initial
 ClientHello on a connection so that extensions-tolerant but
extensions-free Servers will not be forced to wade through lists
 of extensions sent by clients.

It's a three-to-five line 'for' loop.
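
Something like this (a sketch; cipher suites represented as 16-bit integers):

# The loop in question, more or less: scan the ClientHello cipher suite list
# for TLS_EMPTY_RENEGOTIATION_INFO_SCSV ({0x00, 0xFF}).
SCSV = 0x00FF

def offers_secure_renego(cipher_suites):
    for suite in cipher_suites:
        if suite == SCSV:
            return True
    return False

print(offers_secure_renego([0x002F, 0x0035, 0x00FF]))   # True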

 The existing fallbacks or conservative approaches give you a hint
 where you may expect interop problems.  TLS extensions are a known
 interop problem.

Those servers need to be patched or you wouldn't want to trust
connecting to them anyway. Client implementations can use the MCSV
(magic cipher suite value) on the initial hello if they prefer.

 My take on some controversial issues:

 * There may not be a sufficient number of extension intolerant SSL/TLS 
 servers in operation to justify the added complexity of 
 TLS_RENEGO_PROTECTION_REQUEST.  However, I do not object to inclusion as 
 it's possibly helpful for some alleged extension intolerant servers 
 compliant with early drafts of SSLv3 and it helped move closer to WG 
 consensus.
 
 That cipher suite value is definitely not added complexity.
 On the contrary, it is significantly reduced complexity.

Well, there are now three ways to signal in an initial client hello. The

Re: [TLS] Last Call: draft-ietf-tls-renegotiation (Transport Layer Security (TLS) Renegotiation Indication Extension) to Proposed Standard

2009-12-01 Thread Marsh Ray
Bodo Moeller wrote:
 
 There is no good reason for this asymmetry of having just the client's
 verify_data in the client's message, but both the client's and the
 server's verify_data in the server's message -- the latter is pure
 overhead, both in the specification and in the on-the-wire protocol.  To
 remove the overhead from both, the second sentence should be saying,
"For ServerHellos which are renegotiating, this field contains the
verify_data sent by the server on the immediately previous handshake"
 (according changes to dependent parts of the specification then should
 be made).

Good point.

 For earlier Internet Drafts in the series preceding
 draft-ietf-tls-renegotiation-01.txt
 (draft-rescorla-tls-renegotiation.00.txt, etc.), a reason to keep the
 current text would have been to maintain compatibility with early
 implementations of those Internet Drafts.  However,
 draft-ietf-tls-renegotiation-01 has a newer
 TLS_RENEGO_PROTECTION_REQUEST cipher suite and thus breaks compatibility
 with those early implementations anyway.
 
 (This means that now, changing the ServerHello extension definition as
 suggested above gives the beneficial side-effect of exposing premature
 implementations, besides merely avoiding overhead.  Those incomplete
 implementations will be more obviously incompatible because they'd still
 be using the protocol variant with overhead, instead of just differing
 subtly in the behavior if faced with TLS_RENEGO_PROTECTION_REQUEST.)

I don't think any of those implementations have shipped, and I don't
think anyone has advertised that code as anything but experimental.

It might not hurt for the IANA to assign a different number to the
extension than what was used for testing.

 2. The Security Considerations section of
 draft-ietf-tls-renegotiation-01.txt (section 7) includes language that
 unnecessarily restricts the flexibility that the TLS protocol
 specifically provides for, regulating areas that the TLS standard
 intentionally does not specify (RFC 5246 section 1) -- the current
 Internet Draft says:
 
 By default, TLS implementations conforming to this document MUST verify
 that once the peer has been identified and authenticated within the TLS
 handshake, the identity does not change on subsequent renegotiations.
 For certificate based cipher suites, this means bitwise equality of the
 end-entity certificate. If the other end attempts to authenticate with a
 different identity, the renegotiation MUST fail. If the server_name
 extension is used, it MUST NOT change when doing renegotiation.
 
 There is no security reason for this restriction.

+1 Agree

 This sounds like a
 bad and incomplete workaround for certain problems with TLS that the
 updated protocol does not need a workaround for at all, because the
 protocol update is all about properly *solving* those problems.
 
 Keeping this language in the specification would have the weird
 implication that applications that *need* the flexibility provided by
 the TLS protocol (e.g., to allow renegotiation handshakes to switch to a
 different ciphersuite, which may require a different certificate) would
 have to to *skip* implementing the protocol update, and thus stay
 vulnerable.

Excellent point.

Although the sentence after the one you quoted seems to say that
implementations are allowed to do it anyway if they choose. Which raises
the question as to what this language actually means for the wire
protocol, anyway.

 The new restriction that draft-ietf-tls-renegotiation-01.txt tries to
 sneak in thus seems harmful both for interoperability and for security. 
 I haven't seen any signs of working group consensus on including this
 new restriction.

Details about which certificates endpoints should be allowed or
forbidden to negotiate are outside the scope of this bugfix.

We should remove it.

- Marsh
