Re: TLS break

2009-11-16 Thread Eric Rescorla
At Tue, 10 Nov 2009 20:11:50 -0500,
d...@geer.org wrote:
 
 
  | 
  | This is the first attack against TLS that I consider to be
  | the real deal. To really fix it is going to require a change to
  | all affected clients and servers. Fortunately, Eric Rescorla
  | has a protocol extension that appears to do the job.
  | 
 
 ...silicon...

Is the relevant silicon really this unprogrammable?

-Ekr

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majord...@metzdowd.com


Re: SHA-1 collisions now at 2^{52}?

2009-05-02 Thread Eric Rescorla
At Sat, 02 May 2009 21:53:40 +1200,
Peter Gutmann wrote:
 
 Perry E. Metzger pe...@piermont.com writes:
 Greg Rose g...@qualcomm.com writes:
  It already wasn't theoretical... if you know what I mean. The writing
  has been on the wall since Wang's attacks four years ago.
 
 Sure, but this should light a fire under people for things like TLS 1.2.
 
 Why?
 
 Seriously, what threat does this pose to TLS 1.1 (which uses HMAC-SHA1 and
 SHA-1/MD5 dual hashes)?  Do you think the phishers will even notice this as
 they sort their multi-gigabyte databases of stolen credentials?

Again, I don't want to get into a long argument with Peter about TLS 1.1 vs.
TLS 1.2, but TLS 1.2 also defines an extension that lets the client tell
the server that it would take a SHA-256 certificate. Absent that, it's
not clear how the server would know. 

Of course, you could use that extension with 1.1 and maybe that's what the
market will decide...

-Ekr







Re: SHA-1 collisions now at 2^{52}?

2009-05-02 Thread Eric Rescorla
At Sat, 2 May 2009 15:00:36 -0400,
Matt Blaze wrote:
 The serious concern here seems to me not to be that this particular
 weakness is a last straw wedge that enables some practical attack
 against some particular protocol -- maybe it is and maybe it isn't.
 What worries me is that SHA-1 has been demonstrated to not have a
 property -- infeasible to find collisions -- that protocol designers
 might have relied on it for.
 
 Security proofs become invalid when an underlying assumption is
 shown to be invalid, which is what has happened here to many
 fielded protocols that use SHA-1. Some of these protocols may well
 still be secure in practice even under degraded assumptions, but to
 find out, we'd have to analyze them again.  And that's a non-trivial
 task that as far as I know has not been done yet (perhaps I'm wrong
 and it has).  "They'll never figure out how to exploit it" is not,
 sadly, a security proof.

Without suggesting that collision-resistance isn't an important property,
I'd observe that we don't have anything like a reduction proof of
full TLS, or, AFAIK, any of the major security protocols in production
use. Really, we don't even have a good analysis of the implications
of relaxing any of the (soft) assumptions people have made about
the security of various primitives (though see [1] and [2] for some
handwaving analysis).

It's not clear this should make you feel any better when a primitive is
weakened, but then you probably shouldn't have felt that great to start
with.

-Ekr



[1] http://www.rtfm.com/dimacs.pdf 
[2] http://www.cs.columbia.edu/~smb/papers/new-hash.pdf




SHA-1 collisions now at 2^{52}?

2009-04-30 Thread Eric Rescorla
McDonald, Hawkes and Pieprzyk claim that they have reduced the collision
strength of SHA-1 to 2^{52}.

Slides here:
http://eurocrypt2009rump.cr.yp.to/837a0a8086fa6ca714249409ddfae43d.pdf

Thanks to Paul Hoffman for pointing me to this.

-Ekr



Re: MD5 considered harmful today, SHA-1 considered harmful tomorrow

2009-01-24 Thread Eric Rescorla
At Sat, 24 Jan 2009 14:55:15 +1300,
Peter Gutmann wrote:
 Yes, the changes between TLS 1.1 and TLS 1.2 are about as big as those
 between SSL and TLS. I'm not particularly happy about that either, but it's
 what we felt was necessary to do a principled job.
 
 It may have been a nicely principled job but what actual problem is the switch
 in hash algorithms actually solving?  Making changes of such magnitude to a
 very, very widely-deployed protocol is always a tradeoff between the necessity
 of the change and the pain of doing so.  In TLS 1.2 the pain is proportionate
 to the scale of the existing deployed base (i.e. very large) and the necessity
 of doing so appears to be zero.  I don't know of any attack or threat to the
 existing dual-hash mechanism that TLS 1.2 addresses, and it may even make
 things worse by switching from the redundant dual-hash (a testament to the
 original SSL designers) to a single algorithm.  This is why I called the
 changes gratuitous, there is no threat that they address - it can even be
 argued (no doubt endlessly) that they make the PRF weaker rather than stronger
 - but they come at considerable cost.

I agree that given the current set of attacks on SHA-1 and MD5,
there was no existing attack on the protocol. However, that doesn't
mean that improvements in analysis wouldn't lead to such attacks
and so the general feeling of the community was to err on the
side of safety. No doubt if we hadn't done so, there would be
people screaming about how TLS still used MD5. Indeed, that's
how this thread started. So, once again, I don't share your
opinions about these changes being gratuitous.

Moreover, the bulk of the changes weren't to the PRF (that's actually
quite a trivial change to implement) but rather to have accurate
signalling about what combinations of hashes and signatures
implementations could support--something that was painfully
non-orthogonal in SSLv3 and earlier versions of TLS. Again,
one could argue that we could have hacked around this and indeed 
the original Bellovin-Rescorla paper described a number of such
hacks, but the general feeling of the TLS WG was that it was
more important to get it right.


 SSL/TLS is (and has been for many years) part of the Internet infrastructure.
 You don't make significant, totally incompatible changes to the infrastructure
 without very carefully weighing the advantages and disadvantages. 

Which we did--including having the very discussion we are having
now--and concluded that the course of action we took was the right
one. You're of course free to weigh the evidence yourself and come to
a different conclusion, and even to hold the opinion that those who
agree with you are complete fools, but it's simply not accurate to
imply, as you do here, that we didn't think about it.

-Ekr



Re: MD5 considered harmful today, SHA-1 considered harmful tomorrow

2009-01-23 Thread Eric Rescorla
At Tue, 20 Jan 2009 17:57:09 +1300,
Peter Gutmann wrote:
 
 Steven M. Bellovin s...@cs.columbia.edu writes:
 
 So -- who supports TLS 1.2?
 
 Not a lot, I think.  The problem with 1.2 is that it introduces a pile of
 totally gratuitous incompatible changes to the protocol that require quite a
 bit of effort to implement (TLS 1.1 - 1.2 is at least as big a step, if not a
 bigger step, than the change from SSL to TLS), complicate an implementation,
 are difficult to test because of the general lack of implementations
 supporting it, and provide no visible benefit.  Why would anyone rush to
 implement this when what we've got now works[0] just fine?

Ordinarily I wouldn't bother to respond to Peter's curmudgeon act, but
since I was obviously heavily involved with TLS 1.2, I think a bit
of context is in order.

Nearly all the changes to TLS between 1.1 and 1.2 were specifically designed
to accommodate new digest algorithms throughout the protocol. For those
of you who aren't TLS experts, TLS had MD5 and SHA-1 wired all throughout
the protocol and we had to arrange to strip them out, plus find a way
to signal that you were willing to support the newer algorithms. To
avoid this becoming a huge pile of hacks, we had to restructure some of
the less orthogonal negotiation mechanisms. The other major (and totally
optional) change was the addition of combined cipher modes like GCM.
That change was made primarily because we were in there and there was
some demand for those modes. So, no, I don't consider these changes
gratuitous, though of course they are incompatible. Yes, there were
simpler things we could have done, such as just specifying a new set of
fixed digest algorithms to replace MD5 and SHA-1, but I and others felt
that this was unwise from a futureproofing perspective.

Yes, the changes between TLS 1.1 and TLS 1.2 are about as big as those
between SSL and TLS. I'm not particularly happy about that either, but
it's what we felt was necessary to do a principled job.

-Ekr









Re: once more, with feeling.

2008-09-21 Thread Eric Rescorla
At Sat, 20 Sep 2008 15:55:12 -0400,
Steven M. Bellovin wrote:
 
 On Thu, 18 Sep 2008 17:18:00 +1200
 [EMAIL PROTECTED] (Peter Gutmann) wrote:
 
  - Use TLS-PSK, which performs mutual auth of client and server
  without ever communicating the password.  This vastly complicated
  phishing since the phisher has to prove advance knowledge of your
  credentials in order to obtain your credentials (there are a pile of
  nitpicks that people will come up with for this, I can send you a
  link to a longer writeup that addresses them if you insist, I just
  don't want to type in pages of stuff here).
  
 Once upon a time, this would have been possible, I think.  Today,
 though, the problem is the user entering their key in a box that is (a)
 not remotely forgeable by a web site that isn't using the browser's
 TLS-PSK mechanism; and (b) will *always* be recognized by users, even
 dumb ones.  Today, sites want *pretty* login screens, with *friendly*
 ways to recover your (or Palin's) password, and not just generic grey
 boxes.  Then imagine the phishing page that displays an artistic but
 purely imaginary login screen, with a message about "NEW!  Better
 navigation on our login page!"

This is precisely the issue.

There are any number of cryptographic techniques that would allow
clients and servers to authenticate to each other in a phishing
resistant fashion, but they all depend on ensuring that the
*client* has access to the password and that the attacker can't
convince the user to type their password into some dialog
that the attacker controls. That's the challenging technical
issue, but it's UI, not cryptographic.

-Ekr



Re: [OpenID] rfc2817: https vs http

2008-09-01 Thread Eric Rescorla
At Mon, 1 Sep 2008 21:00:55 +0100,
Ben Laurie wrote:
 The core issue is that HTTPS is used to establish end-to-end security,
 meaning, in particular, authentication and secrecy. If the MitM can
 disable the upgrade to HTTPS then he defeats this aim. The fact that
 the server declines to serve an HTTP page is irrelevant: it is the
 phisher that will be serving the HTTP page, and he will have no such
 compunction.

 The traditional fix is to have the client require HTTPS, which the
 MitM is powerless to interfere with. Upgrades would work fine if the
 HTTPS protocol said "connect on port 80, ask for an upgrade, and if
 you don't get it, fail", however as it is upgrades work at the behest
 of the server. And therefore don't work.

Even without an active attacker, this is a problem if there is
sensitive information in the request, since that will generally
be transmitted prior to discovering the server can upgrade.


 Why don't we? Cost. It takes far more tin to serve HTTPS than HTTP.
 Even really serious modern processors can only handle a few thousand
 new SSL sessions per second. New plaintext sessions can be dealt with
 in their tens of thousands.
 
 Perhaps we should focus on this problem: we need cheap end-to-end
 encryption. HTTPS solves this problem partially through session
 caching, but it can't easily be shared across protocols, and sessions
 typically last on the order of five minutes, an insanely conservative
 figure.

Session caches are often dialed this low, but it's not really necessary
in most applications. First, a session cache entry isn't really that
big. It easily fits into 100 bytes on the server, so you can serve
a million concurrent user for a measly 100M. Second, you can use
CSSC/Tickets [RFC5077] to offload all the information onto the client.
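The offload-to-client idea can be sketched concretely. This is a toy illustration, not the RFC 5077 wire format: real tickets also encrypt the state (it contains the master secret), while this stdlib-only sketch just authenticates an invented JSON blob.

```python
# Toy sketch of the RFC 5077 "session ticket" idea: the server seals the
# session state into an opaque, self-authenticating blob that the client
# stores and presents on resumption, so the server keeps no per-session
# cache at all. Real tickets are also encrypted; this sketch only MACs.
import hashlib
import hmac
import json
import secrets

TICKET_KEY = secrets.token_bytes(32)   # long-lived server-side ticket key

def seal_ticket(state: dict) -> bytes:
    blob = json.dumps(state, sort_keys=True).encode()
    tag = hmac.new(TICKET_KEY, blob, hashlib.sha256).digest()
    return blob + tag                  # client stores this opaquely

def open_ticket(ticket: bytes):
    blob, tag = ticket[:-32], ticket[-32:]
    expected = hmac.new(TICKET_KEY, blob, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        return None                    # forged or corrupted ticket
    return json.loads(blob)
```

The server's only stored secret is the ticket key, so memory no longer scales with the number of concurrent sessions.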

-Ekr



Re: [OpenID] rfc2817: https vs http

2008-09-01 Thread Eric Rescorla
At Mon, 1 Sep 2008 21:56:52 +0100,
Ben Laurie wrote:
 
 On Mon, Sep 1, 2008 at 9:49 PM, Eric Rescorla [EMAIL PROTECTED] wrote:
  At Mon, 1 Sep 2008 21:00:55 +0100,
  Ben Laurie wrote:
  The core issue is that HTTPS is used to establish end-to-end security,
  meaning, in particular, authentication and secrecy. If the MitM can
  disable the upgrade to HTTPS then he defeats this aim. The fact that
  the server declines to serve an HTTP page is irrelevant: it is the
  phisher that will be serving the HTTP page, and he will have no such
  compunction.
 
  The traditional fix is to have the client require HTTPS, which the
  MitM is powerless to interfere with. Upgrades would work fine if the
  HTTPS protocol said "connect on port 80, ask for an upgrade, and if
  you don't get it, fail", however as it is upgrades work at the behest
  of the server. And therefore don't work.
 
  Even without an active attacker, this is a problem if there is
  sensitive information in the request, since that will generally
  be transmitted prior to discovering the server can upgrade.
 
 Obviously we can fix this at the protocol level.
 
  Why don't we? Cost. It takes far more tin to serve HTTPS than HTTP.
  Even really serious modern processors can only handle a few thousand
  new SSL sessions per second. New plaintext sessions can be dealt with
  in their tens of thousands.
 
  Perhaps we should focus on this problem: we need cheap end-to-end
  encryption. HTTPS solves this problem partially through session
  caching, but it can't easily be shared across protocols, and sessions
  typically last on the order of five minutes, an insanely conservative
  figure.
 
  Session caches are often dialed this low, but it's not really necessary
  in most applications. First, a session cache entry isn't really that
  big. It easily fits into 100 bytes on the server, so you can serve
  a million concurrent users for a measly 100M.
 
 But if the clients drop them after five minutes, this gets you
 nowhere.

Agreed. I thought we were contemplating protocol changes in
any case, so I figured having clients just use a longer session
cache (5 minutes is silly for a client anyway, since the amount
of memory consumed on the client is miniscule) wasn't much
of an obstacle.


 BTW, sessions are only that small if there are no client
 certs.

True enough, though that's the common case right now.


  Second, you can use
  CSSC/Tickets [RFC5077] to offload all the information onto the client.
 
 Likewise.

Except that CSSC actually looks better when client certs are used, since
you can offload the entire cert storage to the client, so you get
more memory savings.

-Ekr



Re: Decimal encryption

2008-08-28 Thread Eric Rescorla
At Thu, 28 Aug 2008 17:32:10 +1200,
Peter Gutmann wrote:
 
 Eric Rescorla [EMAIL PROTECTED] writes:
 
 There are a set of techniques that allow you to encrypt elements of arbitrary
 sets back onto that set.
 
 ... and most of them seem to be excessively complicated for what they end up
 achieving.  Just for reference the mechanism from the sci.crypt thread of more
 than a decade ago was:

[Description of reduced-range stream cipher elided]


 Another advantage of the KSG use is that you can precalculate the key stream
 offline, the implementation I used at the time pre-generated 4K of keystream
 and then used it to encrypt bursty text messages with real-time constraints
 that didn't allow for pauses to run the cipher.
 
 (The thread contains lots of tweaks and variations of this).

There's nothing inherently wrong with this mechanism, but like all
stream ciphers, it can't be used if you want to encrypt multiple
independent values, e.g., credit cards in a database--without
a randomizer (which implies expansion) you have the usual two-time
pad problems. A B-R style block cipher can, albeit with lookup
table issues.
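The two-time pad failure is easy to demonstrate. A minimal sketch (the card-like values are made up):

```python
# Why a fixed keystream can't protect multiple independent values:
# reusing the keystream turns the cipher into a two-time pad, and
# XORing any two ciphertexts cancels the keystream entirely.
import secrets

keystream = secrets.token_bytes(16)        # same keystream for every record

def ks_encrypt(plaintext: bytes) -> bytes:
    return bytes(p ^ k for p, k in zip(plaintext, keystream))

p1, p2 = b"4111111111111111", b"5500005555555559"   # made-up card numbers
c1, c2 = ks_encrypt(p1), ks_encrypt(p2)

# An eavesdropper who never sees the keystream still learns p1 XOR p2:
leaked = bytes(a ^ b for a, b in zip(c1, c2))
assert leaked == bytes(a ^ b for a, b in zip(p1, p2))
```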

-Ekr



Re: Decimal encryption

2008-08-27 Thread Eric Rescorla
At Wed, 27 Aug 2008 17:05:44 +0200,
Philipp Gühring wrote:
 
 Hi,
 
 I am searching for symmetric encryption algorithms for decimal strings.
 
 Let's say we have various 40-digit decimal numbers:
 2349823966232362361233845734628834823823
 3250920019325023523623692235235728239462
 0198230198519248209721383748374928601923
 
 As far as I calculated, a decimal digit has the equivalent of about 3.3219
 bits, so with 40 digits, we have about 132.877 bits.
 
 Now I would like to encrypt those numbers in a way that the result is a
 decimal number again (that's one of the basic rules of symmetric
 encryption algorithms as far as I remember).
 
 Since the 132.877 bits is similar to 128-bit encryption (like e.g. AES),
 I would like to use an algorithm with a somewhat comparable strength to AES.
 But the problem is that I have 132.877 bits, not 128 bits. And I can't
 cut it off or enhance it, since the result has to be a 40 digit decimal
 number again.
 
 Does anyone know a an algorithm that has reasonable strength and is able
 to operate on non-binary data? Preferrably on any chosen number-base?

There are a set of techniques that allow you to encrypt elements of
arbitrary sets back onto that set. 

The original paper on this is:
John Black and Phillip Rogaway. Ciphers with arbitrary finite domains. In 
CT-RSA, pages 114-130, 2002. 

For a modern proposal to make this a NIST mode, see:
http://csrc.nist.gov/groups/ST/toolkit/BCM/documents/proposedmodes/ffsem/ffsem-spec.pdf
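A toy sketch of the cycle-walking construction these papers describe may help: apply a keyed permutation on the smallest enclosing binary domain and re-apply it until the output lands back in the decimal set. The 6-digit domain and HMAC-based Feistel round function below are illustrative choices, not a vetted cipher.

```python
# Toy cycle-walking FPE over 6-digit decimals: a Feistel permutation on the
# 20-bit enclosing domain, iterated until the value falls below 10^6.
import hashlib
import hmac

DOMAIN = 10**6            # target set: 6-digit decimal numbers
HALF_BITS = 10            # 20-bit enclosing binary domain, split in half
MASK = (1 << HALF_BITS) - 1

def _round_fn(key: bytes, rnd: int, half: int) -> int:
    msg = bytes([rnd]) + half.to_bytes(2, "big")
    digest = hmac.new(key, msg, hashlib.sha256).digest()
    return int.from_bytes(digest[:2], "big") & MASK

def _feistel(x: int, key: bytes, rounds: int = 4) -> int:
    left, right = x >> HALF_BITS, x & MASK
    for r in range(rounds):
        left, right = right, left ^ _round_fn(key, r, right)
    return (left << HALF_BITS) | right

def _feistel_inv(x: int, key: bytes, rounds: int = 4) -> int:
    left, right = x >> HALF_BITS, x & MASK
    for r in reversed(range(rounds)):
        left, right = right ^ _round_fn(key, r, left), left
    return (left << HALF_BITS) | right

def fpe_encrypt(n: int, key: bytes) -> int:
    assert 0 <= n < DOMAIN
    n = _feistel(n, key)
    while n >= DOMAIN:    # cycle-walk until back in the decimal set
        n = _feistel(n, key)
    return n

def fpe_decrypt(n: int, key: bytes) -> int:
    n = _feistel_inv(n, key)
    while n >= DOMAIN:
        n = _feistel_inv(n, key)
    return n
```

Since the Feistel network is a permutation of the whole 20-bit domain, the walk always terminates, and decryption simply walks the same cycle backwards.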

-Ekr

Full Disclosure: Terence Spies, the author of the FFSEM proposal,
works for Voltage; Voltage has a product based on this technology,
and I'm on Voltage's TAB and have done some work for them.
 



Re: Decimal encryption

2008-08-27 Thread Eric Rescorla
At Wed, 27 Aug 2008 16:10:51 -0400 (EDT),
Jonathan Katz wrote:
 
 On Wed, 27 Aug 2008, Eric Rescorla wrote:
 
  At Wed, 27 Aug 2008 17:05:44 +0200,
  There are a set of techniques that allow you to encrypt elements of
  arbitrary sets back onto that set.
 
  The original paper on this is:
  John Black and Phillip Rogaway. Ciphers with arbitrary finite domains. In
  CT-RSA, pages 114-130, 2002.
 
 But he probably wants an encryption scheme, not a cipher.

Hmm... I'm not sure I recognize the difference between encryption
scheme and cipher. Can you elaborate?


 Also, correct me if I am wrong, but Black and Rogaway's approach is not 
 efficient for large domains. But if you use their approach for small 
 domains then you open yourself up to dictionary attacks.

I suppose it depends what you mean by small and large.

A lot of the relevant values are things like SSNs, CCNs, etc.
which fall in the 10-20 digit category, where the Luby-Rackoff
approach is efficient. As I understand the situation, the
cycle following approach is efficient as long as the set
is reasonably close to the L-R block size. 

As far as dictionary attacks go, for any small domain permutation
you have to worry about table construction attacks. The only 
defense I know of is randomized encryption which defeats the
non-expansion requirement.

WRT the security of the L-R construction,
I believe that Patarin's 2004 result [0] is relevant here, but
I'm not qualified to evaluate it. Anyway, the reference I provided
earlier [1] provides a summary of the claimed security properties
of L-R + Cycle Following.

-Ekr

[0] Jacques Patarin. Security of random feistel schemes with 5 or more rounds. 
In Matthew K. Franklin, editor, CRYPTO, volume 3152 of Lecture Notes in 
Computer Science, pages 106-122. Springer, 2004. 

[1] http://csrc.nist.gov/groups/ST/toolkit/BCM/documents/proposedmodes/ffsem/ffsem-spec.pdf



Some notes the Debian OpenSSL PRNG bug and DHE

2008-08-22 Thread Eric Rescorla
Some colleagues (Hovav Shacham, Brandon Enright, Scott Yilek, and
Stefan Savage) and I have been doing some followup work on the Debian
OpenSSL PRNG bug. Perry suggested that some cryptography readers
might be interested in our preliminary analysis of the DHE angle,
which can be found here:

http://www.educatedguesswork.org/2008/08/the_debian_openssl_prng_bug_an.html

Also, Hovav gave a WIP on this topic at USENIX Security. The slides are at:

http://cs.ucsd.edu/~hovav/dist/debianwip.pdf


-Ekr



Re: [p2p-hackers] IETF rejects Obfuscated TCP

2008-08-20 Thread Eric Rescorla
At Tue, 19 Aug 2008 20:57:33 -0700,
Alex Pankratov wrote:
 
 CC'ing cryptography mail list as it may be of some interest to the 
 folks over there.
 
  -Original Message-
  From: [EMAIL PROTECTED] [mailto:p2p-hackers-
  [EMAIL PROTECTED] On Behalf Of Lars Eggert
  Sent: August 19, 2008 5:34 PM
  To: David Barrett; theory and practice of decentralized computer
  networks
  Subject: Re: [p2p-hackers] IETF rejects Obfuscated TCP
  
  On 2008-8-19, at 17:20, ext David Barrett wrote:
   On Tue, 19 Aug 2008 4:19 pm, Lars Eggert wrote:
   Actually, in 1994, the IETF standardized Transactional TCP (T/TCP)
  in
   RFC1644, which allows just that. However, there are serious DDoS
   issues with T/TCP which have prevented it seeing significant
   deployment.
  
   Hm, I'm sorry I don't know the history there -- why is this more
   costly
   or abusive than SSL over standard TCP?  Is it due to something
   specific
   to SSL, or due to it a simple lack of congestion control on those
   first
   payloads?
  
  
  The issue is unrelated to a specific kind of SYN payload (SSL or
  otherwise.) The issue is that a SYN flood of SYNs with data consumes
  much more memory on the receiver than a regular SYN flood, because the
  receiver is obligated to cache the data if a T/TCP liveness check
  fails. You can't use SYN cookies with data SYNs, either.
 
 This is just a quick thought, but a variation of SYN cookies for TLS
 appears to be quite easy to do. It does require defining a new record 
 type, but the latter is permitted by the TLS spec as per Section 6, RFC 2246.
 
 The idea, obviously, is to include a copy of ClientHello message in a
 second batch of records sent by the client. This should allow server
 to generate ServerKeyExchange parameters from the original ClientHello
 message (ClientHello.random + IP/port quintet + server cookie secret),
 then discard ClientHello and delay creating the state .. exactly the
 same way SYN cookies mechanism does it.

May I ask what you're trying to accomplish? Recall that TLS doesn't
start until a TCP connection has been established, so there's
already a proof of the round trip.

That said, a mechanism of this type has already been described
for DTLS (RFC 4347), so no new invention would be needed.
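For reference, the stateless cookie idea can be sketched in a few lines: the server derives the cookie from a local secret and the client's claimed address, and commits no memory until the client echoes it back. The field choices below are illustrative, not the RFC 4347 wire format.

```python
# Sketch of a stateless DTLS-style cookie: an HMAC over the client's address
# and hello parameters under a server-local secret. Verification recomputes
# the cookie, so the server stores nothing per-client before the round trip.
import hashlib
import hmac
import secrets

COOKIE_SECRET = secrets.token_bytes(32)   # rotated periodically in practice

def make_cookie(client_ip: str, client_port: int, client_random: bytes) -> bytes:
    msg = f"{client_ip}:{client_port}".encode() + client_random
    return hmac.new(COOKIE_SECRET, msg, hashlib.sha256).digest()[:16]

def verify_cookie(cookie: bytes, client_ip: str, client_port: int,
                  client_random: bytes) -> bool:
    expected = make_cookie(client_ip, client_port, client_random)
    return hmac.compare_digest(cookie, expected)
```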

-Ekr



Re: OpenID/Debian PRNG/DNS Cache poisoning advisory

2008-08-08 Thread Eric Rescorla
At Fri, 8 Aug 2008 11:50:59 +0100,
Ben Laurie wrote:
 However, since the CRLs will almost certainly not be checked, this
 means the site will still be vulnerable to attack for the lifetime of
 the certificate (and perhaps beyond, depending on user
 behaviour). Note that shutting down the site DOES NOT prevent the
 attack.
 
 Therefore mitigation falls to other parties.
 
 1. Browsers must check CRLs by default.

Isn't this a good argument for blacklisting the keys on the client
side?

-Ekr



Re: OpenID/Debian PRNG/DNS Cache poisoning advisory

2008-08-08 Thread Eric Rescorla
At Fri, 8 Aug 2008 17:31:15 +0100,
Dave Korn wrote:
 
 Eric Rescorla wrote on 08 August 2008 16:06:
 
  At Fri, 8 Aug 2008 11:50:59 +0100,
  Ben Laurie wrote:
  However, since the CRLs will almost certainly not be checked, this
  means the site will still be vulnerable to attack for the lifetime of
  the certificate (and perhaps beyond, depending on user
  behaviour). Note that shutting down the site DOES NOT prevent the attack.
  
  Therefore mitigation falls to other parties.
  
  1. Browsers must check CRLs by default.
  
  Isn't this a good argument for blacklisting the keys on the client
  side?
 
   Isn't that exactly what Browsers must check CRLs means in this context
 anyway?  What alternative client-side blacklisting mechanism do you suggest?

It's easy to compute all the public keys that will be generated
by the broken PRNG. The clients could embed that list and refuse
to accept any certificate containing one of them. So, this
is distinct from CRLs in that it doesn't require knowing 
which servers have which cert...
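Such a check is straightforward to sketch (Debian's later openssl-blacklist package worked along these lines, shipping hashes of the enumerable weak keys; the key bytes and truncation length here are illustrative):

```python
# Client-side weak-key blacklist: precompute a truncated hash of every
# public key the broken PRNG can emit, then reject any certificate whose
# key fingerprint appears in the set.
import hashlib

def fingerprint(pubkey_der: bytes) -> bytes:
    return hashlib.sha1(pubkey_der).digest()[:10]   # 80-bit truncation

# In reality this set is enumerated from the broken PRNG's outputs;
# the single entry below is a stand-in.
WEAK_KEY_SET = {fingerprint(b"example-weak-key-der")}

def key_is_compromised(pubkey_der: bytes) -> bool:
    return fingerprint(pubkey_der) in WEAK_KEY_SET
```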

-Ekr



Re: OpenID/Debian PRNG/DNS Cache poisoning advisory

2008-08-08 Thread Eric Rescorla
At Fri, 08 Aug 2008 10:43:53 -0700,
Dan Kaminsky wrote:
 Eric Rescorla wrote:
  It's easy to compute all the public keys that will be generated
  by the broken PRNG. The clients could embed that list and refuse
  to accept any certificate containing one of them. So, this
  is distinct from CRLs in that it doesn't require knowing 
  which servers have which cert...
 Funnily enough I was just working on this -- and found that we'd end up 
 adding a couple megabytes to every browser.  #DEFINE NONSTARTER.  I am 
 curious about the feasibility of a large bloom filter that fails back to 
 online checking though.  This has side effects but perhaps they can be 
 made statistically very unlikely, without blowing out the size of a browser.

Why do you say a couple of megabytes? 99% of the value would be
1024-bit RSA keys. There are ~32,000 such keys. If you devote an
80-bit hash to each one (which is easily large enough to give you a
vanishingly small false positive probability; you could probably get
away with 64 bits), that's 320KB.  Given that the smallest Firefox
build (Windows) is 7.1 MB, this doesn't sound like a nonstarter to me
at all, especially since the browser could download it in the
background.


 Updating the filter could then be something we do on a 24 hour 
 autoupdate basis.  Doing either this, or doing revocation checking over 
 DNS (seriously), is not necessarily a bad idea.  We need to do better 
 than we've been.

Yes, there are a number of approaches to more efficient CRL
checking, I think that's a separate issue.

-Ekr



Re: OpenID/Debian PRNG/DNS Cache poisoning advisory

2008-08-08 Thread Eric Rescorla
At Fri, 8 Aug 2008 15:52:07 -0400 (EDT),
Leichter, Jerry wrote:
 
 |   Funnily enough I was just working on this -- and found that we'd
 |   end up adding a couple megabytes to every browser.  #DEFINE
 |   NONSTARTER.  I am curious about the feasibility of a large bloom
 |   filter that fails back to online checking though.  This has side
 |   effects but perhaps they can be made statistically very unlikely,
 |   without blowing out the size of a browser.
 |  Why do you say a couple of megabytes? 99% of the value would be
 |  1024-bit RSA keys. There are ~32,000 such keys. If you devote an
 |  80-bit hash to each one (which is easily large enough to give you a
 |  vanishingly small false positive probability; you could probably get
 |  away with 64 bits), that's 320KB.  Given that the smallest Firefox
 |  [...]
 You can get by with a lot less than 64 bits.  People see problems like
 this and immediately think birthday paradox, but there is no birthday
 paradox here:  You aren't look for pairs in an ever-growing set,
 you're looking for matches against a fixed set.  If you use 30-bit
 hashes - giving you about a 120KB table - the chance that any given
 key happens to hash to something in the table is one in a billion,
 now and forever.  (Of course, if you use a given key repeatedly, and
 it happens to be that 1 in a billion, it will hit every time.  So an
 additional table of known good keys that happen to collide is worth
 maintaining.  Even if you somehow built and maintained that table for
 all the keys across all the systems in the world - how big would it
 get, if only 1 in a billion keys world-wide got entered?)

I don't believe your math is correct here. Or rather, it would
be correct if there was only one bad key.

Remember, there are N bad keys and you're using a b-bit hash,
which has 2^b distinct values. If you put N' entries in the
hash table, the probability that a new key will have the
same digest as one of them is N'/(2^b). If b is sufficiently
large to make collisions rare, then N'=~N and we get 
N/(2^b).

To be concrete, we have 2^15 distinct keys, so, the
probability of a false positive becomes (2^15)/(2^b) = 2^(15-b).
To get that probability below 1 in a billion, b - 15 = 30, so
you need about 45 bits. I chose 64 because it seemed to me
that a false positive probability of 2^{-48} or so was better.
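The arithmetic is quick to check mechanically (assuming ~2^15 weak keys, as above):

```python
# Back-of-envelope check: with N weak keys in a table of b-bit truncated
# hashes, a fresh good key false-positives with probability about N / 2**b.
N = 2**15                             # ~32,000 weak 1024-bit RSA keys

def bits_for_fp_rate(target: float) -> int:
    """Smallest hash width b with N / 2**b <= target."""
    b = 15
    while N / 2**b > target:
        b += 1
    return b

assert bits_for_fp_rate(1e-9) == 45   # ~1 in a billion needs ~45 bits
assert N / 2**64 == 2**-49            # the 64-bit choice gives ~2^{-49}
```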

-Ekr






Re: The PKC-only application security model ...

2008-07-24 Thread Eric Rescorla
At Wed, 23 Jul 2008 17:32:02 -0500,
Thierry Moreau wrote:
 
 
 
 Anne  Lynn Wheeler wrote about various flavors of certificateless 
 public key operation in various standards, notably in the financial 
 industry.
 
 Thanks for reporting those.
 
 No doubt that certificateless public key operation is neither new nor 
 absence from today's scene.
 
 The document I published on my web site today is focused on fielding 
 certificateless public operations with the TLS protocol which does not 
 support client public keys without certificates - hence the meaningless 
 security certificate. Nothing fancy in this technique, just a small 
 contribution with the hope to facilitate the use of client-side PKC.

DTLS-SRTP 
(http://tools.ietf.org/html/draft-ietf-sip-dtls-srtp-framework-02,
http://tools.ietf.org/html/draft-ietf-avt-dtls-srtp)
uses a similar technique: certificates solely as a key 
carrier authenticated by an out-of-band exchange.

-Ekr



Re: how bad is IPETEE?

2008-07-16 Thread Eric Rescorla
At Tue, 15 Jul 2008 18:33:10 -0400 (EDT),
Leichter, Jerry wrote:
 For an interesting discussion of IPETEE, see:
 
 www.educatedguesswork.org/moveabletype/archives/2008/07/ipetee.html
 
 Brief summary:  This is an initial discussion - the results of a
 drinking session - that got leaked as an actual proposal.  The
 guys behind it are involved with The Pirate Bay.  The goal is
 to use some form of opportunistic encryption to make as much
 Internet traffic as possible encrypted as quickly as possible -
 which puts all kinds of constraints on a solution, which in
 turn also necessarily weakens the solution (e.g., without some
 required configuration, there's no way you can avoid MITM
 attacks) and forces odd compromises.

I also have a followup post at:
http://www.educatedguesswork.org/movabletype/archives/2008/07/more_on_ipetee.html

-Ekr

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: how bad is IPETEE?

2008-07-10 Thread Eric Rescorla
At Thu, 10 Jul 2008 18:10:27 +0200,
Eugen Leitl wrote:
 
 
 In case somebody missed it, 
 
 http://www.tfr.org/wiki/index.php?title=Technical_Proposal_(IPETEE)

 I'm not sure what the status of http://postel.org/anonsec/
 is, the mailing list traffic dried up a while back.

This is the first I have heard of this.

That said, some initial observations:

- It's worth asking why, if you're doing per-connection keying,
  it makes sense to do this at the IP layer rather than the
  TCP/UDP layer. 

- Why not simply use TLS or DTLS?

- The uh, novel nature of the cryptographic mechanisms is
  pretty scary. Salsa-20? AES-CBC with implicit IV?
  A completely new cryptographic handshake? Why not use
  IPsec?

- A related idea was proposed a while back (by Lars Eggert,
  I believe). See S 6.2.3.1 of:

  
https://svn.resiprocate.org/rep/ietf-drafts/ekr/draft-rescorla-tcp-auth-arch.txt

-Ekr



  

  

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Using a MAC in addition to symmetric encryption

2008-06-29 Thread Eric Rescorla
At Fri, 27 Jun 2008 07:52:59 -0700 (PDT),
Erik Ostermueller wrote:
 If I exchange messages with a system and the messages are encrypted
 with a symmetric key, what further benefit would we get by using a
 MAC (Message Authentication Code) along with the message encryption?
 Being new to all this, using the encrytpion and MAC together seem
 redundant.

Encryption doesn't necessarily provide integrity.

Consider the case of a stream cipher like RC4, where you have
a function RC4(K) which generates a string of bytes from the
key K.

The encryption function is then:

Ciphertext[i] = RC4(K)[i] XOR Plaintext[i]


It should be apparent that an attacker can make targeted
modifications to the plaintext. Say he knows that plaintext
byte i is 'A' and he wants it to be 'B': he just sets
Ciphertext[i]' = Ciphertext[i] XOR 'A' XOR 'B'. Mission
accomplished.
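The bit-flipping attack described above can be demonstrated in a few lines. This is a minimal sketch: the keystream generator below is a deterministic toy standing in for RC4(K), not real RC4, and the key and plaintext are made up for illustration.

```python
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    # Toy keystream generator standing in for RC4(K); NOT a real cipher.
    out = b""
    ctr = 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return out[:n]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

key = b"secret"
plaintext = b"PAY ALICE $100"
ct = xor(plaintext, keystream(key, len(plaintext)))

# Attacker knows plaintext byte 4 is 'A' (start of "ALICE") and wants 'B':
i = 4
forged = bytearray(ct)
forged[i] ^= ord('A') ^ ord('B')

# Receiver decrypts the tampered ciphertext without noticing anything:
recovered = xor(bytes(forged), keystream(key, len(plaintext)))
print(recovered)  # b'PAY BLICE $100'
```

The decryption succeeds and the targeted byte is silently changed, which is exactly why a MAC is needed even when the traffic is encrypted.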

-Ekr



-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: blacklisting the bad ssh keys?

2008-05-22 Thread Eric Rescorla
At Wed, 14 May 2008 19:52:58 -0400,
Steven M. Bellovin wrote:
 
 Given the published list of bad ssh keys due to the Debian mistake (see
 http://metasploit.com/users/hdm/tools/debian-openssl/), should sshd be
 updated to contain a blacklist of those keys?  I suspect that a Bloom
 filter would be quite compact and efficient.

I've been having a similar thought. This also probably applies to SSL
keys, given the rather lax attitude that most clients have about
checking CRLs.
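The Bloom filter Bellovin suggests is easy to sketch. This is a minimal illustration, not sshd code: the bit-array size and hash count are illustrative parameters, and the fingerprint strings are made up.

```python
import hashlib

class Bloom:
    """Compact probabilistic set: no false negatives, tunable false positives."""
    def __init__(self, m_bits: int = 1 << 20, k: int = 7):
        self.m, self.k = m_bits, k
        self.bits = bytearray(m_bits // 8)

    def _positions(self, item: bytes):
        # Derive k bit positions from k salted hashes of the item.
        for i in range(self.k):
            h = hashlib.sha256(i.to_bytes(1, "big") + item).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item: bytes) -> None:
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, item: bytes) -> bool:
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(item))

blacklist = Bloom()
blacklist.add(b"aa:bb:cc:dd")       # fingerprint of a hypothetical bad Debian key
print(b"aa:bb:cc:dd" in blacklist)  # True
print(b"11:22:33:44" in blacklist)  # False (with overwhelming probability)
```

A filter like this over the known-bad keyspace fits in a few hundred kilobytes; a hit can then be confirmed against the full list before rejecting the key, so false positives cost only an extra lookup.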

-Ekr

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: OpenSparc -- the open source chip (except for the crypto parts)

2008-05-05 Thread Eric Rescorla
At Sun, 04 May 2008 20:14:42 -0400,
Perry E. Metzger wrote:
 
 
 Marcos el Ruptor [EMAIL PROTECTED] writes:
  All this open-source promotion is a huge waste of time. Us crackers
  know exactly how all the executables we care about (especially all
  the crypto and security related programs) work.
 
 With respect, no, you don't. If you did, then all the flaws in Windows
 would have been found at once, instead of trickling out over the
 course of decades as people slowly figure out new unintended
 behaviors. Anything sufficiently complicated to be interesting simply
 cannot be fully understood by inspection, end of story.

Without taking a position on the security of open source vs. closed
source (which strikes me as an open question), I agree with Perry
that deciding whether a given piece of software has back doors is
not really possible for a nontrivial piece of software. Note that
this is a very different problem from finding a single vulnerability
or answering specific (small) questions about the code [0].

-Ekr

[0] That said, I don't think that determining whether a nontrivial
piece of software has security vulnerabilities is difficult. The
answer is yes.

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Gutmann Soundwave Therapy

2008-02-09 Thread Eric Rescorla
At Thu, 7 Feb 2008 10:34:42 -0500 (EST),
Leichter, Jerry wrote:
 | Since (by definition) you don't have a copy of the packet you've lost,
 | you need a MAC that survives that--and is still compact. This makes
 | life rather more complicated. I'm not up on the most recent lossy
 | MACing literature, but I'm unaware of any computationally efficient
 | technique which has a MAC of the same size with a similar security
 | level. (There's an inefficient technique of having the MAC cover all
 | 2^50 combinations of packet loss, but that's both prohibitively
 | expensive and loses you significant security.)
 My suggestion for a quick fix:  There's some bound on the packet loss
 rate beyond which your protocol will fail for other reasons.  If you
 maintain separate MAC's for each k'th packet sent, and then deliver k
 checksums periodically - with the collection of checksums itself MAC'ed,
 a receiver should be able to check most of the checksums, and can reset
 itself for the others (assuming you use a checksum with some kind of
 prefix-extension property; you may have to send redundant information
 to allow that, or allow the receiver to ask for more info to recover).

So, this issue has been addressed in the broadcast signature context,
where you do a two-stage hash-and-sign reduction (cf. [PG01]), but
this only really works because hashes are a lot more efficient
than signatures. I don't see why it helps with MACs.


 Obviously, if you *really* use every k'th packet to define what is in
 fact a substream, an attacker can arrange to knock out the substream he
 has chosen to attack.  So you use your encryptor to permute the
 substreams, so there's no way to tell from the outside which packet is
 part of which substream.  Also, you want to make sure that a packet
 containing checksums is externally indistinguishable from one containing
 data.  Finally, the checksum packet inherently has higher - and much
 longer-lived - semantic value, so you want to be able to request that
 *it* be resent.  Presumably protocols that are willing to survive data
 loss still have some mechanism for control information and such that
 *must* be delivered, even if delayed.

This basically doesn't work for VoIP, where latency is a real issue.


-Ekr

[PG01] Philippe Golle, Nagendra Modadugu: Authenticating Streamed Data in the 
Presence of
Random Packet Loss. NDSS 2001

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Gutmann Soundwave Therapy

2008-02-09 Thread Eric Rescorla
At Thu, 7 Feb 2008 14:42:36 -0500 (EST),
Leichter, Jerry wrote:
 |  Obviously, if you *really* use every k'th packet to define what is in
 |  fact a substream, an attacker can arrange to knock out the substream he
 |  has chosen to attack.  So you use your encryptor to permute the
 |  substreams, so there's no way to tell from the outside which packet is
 |  part of which substream.  Also, you want to make sure that a packet
 |  containing checksums is externally indistinguishable from one containing
 |  data.  Finally, the checksum packet inherently has higher - and much
 |  longer-lived - semantic value, so you want to be able to request that
 |  *it* be resent.  Presumably protocols that are willing to survive data
 |  loss still have some mechanism for control information and such that
 |  *must* be delivered, even if delayed.
 | 
 | This basically doesn't work for VoIP, where latency is a real issue.
 It lets the receiver make a choice:  Deliver the data immediately,
 avoiding the latency at the cost of possibly releasing bogus data (which
 we'll find out about, and report, later); or hold off on releasing the
 data until you know it's good, at the cost of introducing audible
 artifacts.  In non-latency-sensitive designs, the prudent approach is to
 never allow data out of the cryptographic envelope until you've
 authenticated it.  Here, you should probably be willing to do that, on
 the assumption that the application layer - a human being - will know
 how to react if you tell him authentication has failed, please
 disregard what you heard in the last 10 seconds.

Well, since there's a much simpler procedure (accept ~5-10% overhead), this 
doesn't seem like a particularly attractive design.

-Ekr

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Gutmann Soundwave Therapy

2008-02-06 Thread Eric Rescorla
At Mon, 4 Feb 2008 09:33:37 -0500 (EST),
Leichter, Jerry wrote:
 
 Commenting on just one portion:
 | 2. VoIP over DTLS
 | As Perry indicated in another message, you can certainly run VoIP
 | over DTLS, which removes the buffering and retransmit issues 
 | James is alluding to. Similarly, you could run VoIP over IPsec
 | (AH/ESP). However, for performance reasons, this is not the favored
 | approach inside IETF.
 | 
 | The relevant issue here is packet size. Say you're running a 
 | low bandwidth codec like G.729 at 8 kbps. If you're operating at
 | the commonly used 50 pps, then each packet is 160 bits == 20 bytes.
 | The total overhead of the IP, UDP, and RTP headers is 40 bytes,
 | so you're sending 60 byte packets. 
 | 
 | - If you use DTLS with AES in CBC mode, you have the 4 byte DTLS
 |   header, plus a 16 byte IV, plus 10 bytes of MAC (in truncated MAC
 |   mode), plus 2 bytes of padding to bring you up to the AES block
 |   boundary: DTLS adds 32 bytes of overhead, increasing packet
 |   size by over 50%. The IPsec situation is similar.
 | 
 | - If you use CTR mode and use the RTP header to form the initial
 |   CTR state, you can remove all the overhead but the MAC itself,
 |   reducing the overhead down to 10 bytes with only 17% packet
 |   expansion (this is how SRTP works)
 If efficiency is your goal - and realistically it has to be *a* goal -
 then you need to think about the semantics of what you're securing.  By
 the nature of VOIP, there's very little semantic content in any given
 packet, and because VOIP by its nature is a real-time protocol, that
 semantic content loses all value in a very short time.  Is it really
 worth 17% overhead to provide this level of authentication for data that
 isn't, in and of itself, so significant?  At least two alternative
 approach suggest themselves:

   - Truncate the MAC to, say, 4 bytes.  Yes, a simple brute
   force attack lets one forge so short a MAC - but
   is such an attack practically mountable in real
   time by attackers who concern you?

In fact, 32-bit authentication tags are a feature of
SRTP (RFC 3711). 



   - Even simpler, send only one MAC every second - i.e.,
   every 50 packets, for the assumed parameters.
   Yes, an attacker can insert a second's worth
   of false audio - after which he's caught.  I
   suppose one could come up with scenarios in
   which that matters - but they are very specialized.
   VOIP is for talking to human beings, and for
   human beings in all but extraordinary circumstances
   a second is a very short time.

Not sending a MAC on every packet has difficult interactions with
packet loss. If you do the naive thing and every N packets send a MAC
covering the previous N packets, then if you lose even one of those
packets you can't verify the MAC. But since some packet loss is
normal, an attacker can cover their tracks simply by removing one out
of every N packets.

Since (by definition) you don't have a copy of the packet you've lost,
you need a MAC that survives that--and is still compact. This makes
life rather more complicated. I'm not up on the most recent lossy
MACing literature, but I'm unaware of any computationally efficient
technique which has a MAC of the same size with a similar security
level. (There's an inefficient technique of having the MAC cover
all 2^50 combinations of packet loss, but that's both prohibitively
expensive and loses you significant security.)


 The NSA quote someone - Steve Bellovin? - has repeated comes to mind:
 Amateurs talk about algorithms.  Professionals talk about economics.
 Using DTLS for VOIP provides you with an extremely high level of
 security, but costs you 50% packet overhead.  Is that worth it to you?
 It really depends - and making an intelligent choice requires that
 various alternatives along the cost/safety curve actually be available.

Which there are, as indicated above and in my previous message. 

-Ekr



-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Gutmann Soundwave Therapy

2008-02-06 Thread Eric Rescorla
At Mon, 04 Feb 2008 14:29:50 +1000,
James A. Donald wrote:
 
 James A. Donald wrote:
   I have figured out a solution, which I may post here
   if you are interested.
 
 Ian G wrote:
   I'm interested.  FTR, zooko and I worked on part of
   the problem, documented briefly here:
   http://www.webfunds.org/guide/sdp/index.html
 
 I have posted How to do VPNs right at
 http://jim.com/security/how_to_do_VPNs.html
 
 It covers somewhat different ground to that which your
 page covers, focusing primarily on the problem of
 establishing the connection.
 
   humans are not going to carry around large
   strong secrets every time either end of the
   connection restarts.  In fact they are not going
   to transport large strong secrets any time ever,
   which is the flaw in SSL and its successors such
   as IPSec and DTLS

This paragraph sure is confused.

1. IPsec most certainly is not a successor to SSL. On
   the contrary, IPsec predates SSL.

2. TLS doesn't require you to carry around strong secrets.
   I refer you to TLS-SRP [RFC 5054]

3. For that matter, even if you ignore SRP, TLS supports
   usage models which never require you to carry around
   strong secrets: you preconfigure the server's public
   key and send a password over the TLS channel. Since
   this is the interface SSH uses, the claim that humans
   won't do it is manifestly untrue.


-Ekr

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Gutmann Soundwave Therapy

2008-02-03 Thread Eric Rescorla
At Sun, 03 Feb 2008 12:51:25 +1000,
James A. Donald wrote:
 
  --
 Ivan Krstic' wrote:
   The wider point of Peter's writeup -- and of the
   therapy -- is that developers working on security
   tools should _know_ they're working in a notoriously,
   infamously hard field where the odds are
   _overwhelmingly_ against them if they choose to
   engineer new solutions.
 
 That point is of course true.  But the developers wanted
 to transport IP and UDP.  Peter should have known that
 SSL is incapable of transporting IP and UDP, because it
 will introduce large, unpredictable, and variable
 delays.
 
 If, for example, VOIP goes over SSL, the speakers would
 become entirely unintelligible.

For those who haven't already made up their minds, the situation with
VoIP and TCP (SSL doesn't really change the situation) is actually a
bit more complicated than this.

1. VoIP over TCP
If you have a reasonably fast loss-free channel (this isn't that
uncommon) then it doesn't actually make an enormous amount of
difference whether you're running TCP or UDP, especially if you're
running a high-bandwidth codec like G.711. It helps to turn off the
Nagle algorithm, of course, since it reduces the amount of buffering
in the sending TCP stack.

That said, any significant amount of packet loss does tend to create
some pretty significant artifacts, since you need to stall the
receiving TCP while you wait for the retransmit.  So, as a practical
matter nearly all interactive VoIP systems use UDP and some kind of
packet loss concealment (interpolation, etc.).

That's not to say that SSL/TLS is totally innocent here. The designers
of SSL/TLS *could* have chosen to design a protocol which would work
over datagram transport as well as stream transport, but they didn't.
DTLS (RFC 4347) is such a protocol. That said, if you compare DTLS to
TLS, there is a small amount of additional complexity in DTLS, so it's
arguable that it was a good design choice to go for the sweet spot of
stream transport, since that's what SSL was really intended for.


2. VoIP over DTLS
As Perry indicated in another message, you can certainly run VoIP
over DTLS, which removes the buffering and retransmit issues 
James is alluding to. Similarly, you could run VoIP over IPsec
(AH/ESP). However, for performance reasons, this is not the favored
approach inside IETF.

The relevant issue here is packet size. Say you're running a 
low bandwidth codec like G.729 at 8 kbps. If you're operating at
the commonly used 50 pps, then each packet is 160 bits == 20 bytes.
The total overhead of the IP, UDP, and RTP headers is 40 bytes,
so you're sending 60 byte packets. 

- If you use DTLS with AES in CBC mode, you have the 4 byte DTLS
  header, plus a 16 byte IV, plus 10 bytes of MAC (in truncated MAC
  mode), plus 2 bytes of padding to bring you up to the AES block
  boundary: DTLS adds 32 bytes of overhead, increasing packet
  size by over 50%. The IPsec situation is similar.

- If you use CTR mode and use the RTP header to form the initial
  CTR state, you can remove all the overhead but the MAC itself,
  reducing the overhead down to 10 bytes with only 17% packet
  expansion (this is how SRTP works)

Note that some (but not all) of the gain from SRTP can be obtained
by swapping CTR for CBC. But you're still getting an advantage
from being willing to overload the RTP header and that's harder
to optimize out (though Nagendra Modadugu and I spent some time
thinking about this).
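The packet-size arithmetic above can be checked with a short script (the header and crypto-overhead byte counts are the ones given in the text):

```python
# Sanity check of the overhead figures for G.729 at 8 kbps, 50 pps.
codec_bps = 8000
pps = 50
payload = codec_bps // 8 // pps   # 20 bytes of audio per packet
ip_udp_rtp = 40                   # IP (20) + UDP (8) + RTP (12) headers
base = payload + ip_udp_rtp       # 60-byte packet before crypto

dtls_cbc = 4 + 16 + 10 + 2        # DTLS header + IV + truncated MAC + padding
srtp = 10                         # MAC only; CTR state derived from RTP header

print(payload, base)                 # 20 60
print(round(100 * dtls_cbc / base))  # 53 -- over 50% expansion with DTLS/CBC
print(round(100 * srtp / base))      # 17 -- SRTP's 17% expansion
```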

I don't propose to get into an extended debate about whether it is
better to use SRTP or to use generic DTLS. That debate has already
happened in IETF and SRTP is what the VoIP vendors are doing. However,
the good news here is that you can use DTLS to key SRTP
(draft-ietf-avt-dtls-srtp), so there's no need to invent a new
key management scheme.

-Ekr


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Gutmann Soundwave Therapy

2008-02-01 Thread Eric Rescorla
At Fri, 01 Feb 2008 18:42:03 +1000,
James A. Donald wrote:
 
 Guus Sliepen wrote:
  Peter's write-up was the reason I subscribed to this cryptography
  mailing list. After a while the anger/hurt feelings I had disappeared.
  I knew then that Peter was right in his arguments. Nowadays I can look
  at Peter's write-up more objectively and I can see that it is not as
  ad-hominem as it felt back then, although the whole soundwave paragraph
  still sounds very childish ;)
  
  When tinc 2.0 will ever come out (unfortunately I don't have a lot of
  time to work on it these days), it will probably use the GnuTLS library
  and authenticate and connect daemons with TLS. For performance reasons,
  you want to tunnel network packets via UDP instead of TCP, so hopefully
  there is a working DTLS implementation as well then.
 
 I have been considering the problem of encrypted channels over UDP or 
 IP.  TLS will not work for this, since it assumes and provides a 
 reliable, and therefore non timely channel, whereas what one wishes to 
 provide is a channel where timeliness may be required at the expense of 
 reliability.

DTLS: RFC 4347.

-Ekr

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Fixing SSL (was Re: Dutch Transport Card Broken)

2008-01-31 Thread Eric Rescorla
At Thu, 31 Jan 2008 03:04:00 +0100,
Philipp Gühring wrote:
 
 Hi,
 
  Huh? What are you claiming the problem with sending client certificates
  in plaintext is 
 
 * It's a privacy problem
 * It's a security problem for people with a security policy that requires 
 their identities to be kept secret, and only to be used to authenticate to 
 the particular server they need
 * It's an availability problem for people that need high-security 
 authentication mechanisms, combined with high-privacy demands
 * It's an identity theft problem in case the certificate contains personal 
 data that can be used for identity theft

I don't find this at all convincing. There are a variety of different
threat vectors here:

1. Phishing.
2. Pharming (DNS spoofing).
3. Passive attacks.

In the case of phishing, the fact that the client sends its certificates
in the clear is totally irrelevant, since the client would simply send
its identity encrypted under the server's certificate. The only
fix for this alleged privacy leak in the phishing context is for
the client to refuse to deliver his certificate to anyone but people
who present valid certs that he otherwise trusts.

Now, this is potentially an attack if the attacker is passive but
on-path, either via pharming or via subverting some router, but
I'm unaware of any evidence that this is used as a certificate
disclosure attack vector.


  (as if anyone uses client certificates anyway)? 
 
 Guess why so few people are using it ...
 If it were secure, more people would be able to use it.

No, if it were *convenient* people would use it. I know of absolutely
zero evidence (nor have you presented any) that people choose not
to use certs because of this kind of privacy issue--but I know
of plenty that they find getting certs way too inconvenient.


  That the phisher gets to see the client's identity?
 
 Validated email addresses for spamming. Spear-phishing perhaps, ...

Validated email addresses are not exactly hard to obtain.


  It doesn't let them impersonate the client to anyone. 
 
 It does let them impersonate the client to anyone who doesn't care about the 
 public key. (There are applications that just use the DN+Issuer information 
 that they normally extract out of the certificates, ...)

If those applications do not force the client to do proof of possession
of the private key, then they are fatally broken. It's not our job
to fix them.


   We have the paradox situation that I have to tell people that they should
   use HTTPS with server-certificates and username+password inside the HTTPS
   session, because that's more secure than client certificates ...
 
  No it isn't more secure.
 
 Using username+password inside HTTPS does not leak the client's identity in 
 cleartext on the line. (If I am wrong and HTTPS leaks usernames sent as HTTP 
 Forms or with HTTP Basic Authentication, please tell me)

No, it just leaks the password to the phishing server. Yeah, that's totally
a lot better.



  This gets discussed on the TLS mailing list occasionally, but the
  arguments for making this change aren't very convincing.
 
 Yes, there are regularly people popping up there that need it, but they 
 always 
 get ignored there, it seems.

Because the arguments they present are handwavy and unconvincing, just like
yours.



  If you have 
  an actual credible security argument you should post it to
  [EMAIL PROTECTED]
 
 Do you think the security arguments I summed up above qualify on the tls 
 list?

It's an open list. Feel free to make these arguments.


 Should I go into more detail? Present practical examples?

I would certainly find practical examples more convincing than the ones
you've presented.



 I see several possible options:
 * We fix SSL  
 Does anyone have a solution for SSL/TLS available that we could propose on 
 the 
 TLS list? 
 If not: Can anyone with enough protocol design experience please develop it?

There's already a solution: double handshake. You do an ordinary
handshake with server auth only and then you do a second handshake
with client auth. This hides the certificate perfectly well.  Yes, you
have to do two private key ops on the server, but if this issue is as
important as you say, this is a tradeoff you should be happy to make.
I've pointed this out on the TLS mailing list a number of times, but
maybe you missed it.


 * We change the rules of the market, and tell the people that they MUST NOT 
 ask for additional data in their certificates anymore

Fundamentally, this *is* the fix. Even if SSL guaranteed that nobody
but the person you were handshaking with got the certificate, this
is still incredibly brittle because any random server can ask you
for your cert and users can't be trusted not to hand them over.
The basic premise of certs is that they're public info. If you
want to carry private data around in them then you should encrypt
that data.



   TCP could need some stronger integrity protection. 8 Bits of checksum
   isn't 

Re: Dutch Transport Card Broken

2008-01-30 Thread Eric Rescorla
At Wed, 30 Jan 2008 09:04:37 +1000,
James A. Donald wrote:
 
 Ivan Krstic' wrote:
   Some number of these muppets approached me over the
   last couple of years offering to donate a free license
   for their excellent products. I used to be more polite
   about it, but nowadays I ask that they Google the
   famous Gutmann Sound Wave Therapy[0] and mail me
   afterwards.
 
   Gutmann Sound Wave Therapy: Gutmann recommends:
 : :   Whenever someone thinks that they can replace
 : :   SSL/SSH with something much better that they
 : :   designed this morning over coffee, their
 : :   computer speakers should generate some sort
 : :   of penis-shaped sound wave and plunge it
 : :   repeatedly into their skulls until they
 : :   achieve enlightenment.
 
 On SSL, Gutmann is half wrong:
 
 SSL key distribution and management is horribly broken,
 with the result that everyone winds up using plaintext
 when they should not.
 
 SSL is layered on top of TCP, and then one layers one's
 actual protocol on top of SSL, with the result that a
 transaction involves a painfully large number of round
 trips.

 We really do need to reinvent and replace SSL/TCP,
 though doing it right is a hard problem that takes more
 than morning coffee.

I can't believe I'm getting into this with James.

Ignoring the technical question of "broken", I know of no evidence
whatsoever that round trip latency is in any way a limiting factor for
people to use SSL/TLS.  I've heard of people resisting using SSL for
performance concerns, but they're almost always about the RSA
operation on the server (and hence the cost of server hardware).

If you have some evidence I'd be interested in hearing it.

-Ekr

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Fixing SSL (was Re: Dutch Transport Card Broken)

2008-01-30 Thread Eric Rescorla
At Wed, 30 Jan 2008 17:59:51 -,
Dave Korn wrote:
 
 On 30 January 2008 17:03, Eric Rescorla wrote:
 
 
  We really do need to reinvent and replace SSL/TCP,
  though doing it right is a hard problem that takes more
  than morning coffee.
  
   TCP could need some stronger integrity protection. 8 Bits of checksum isn't
  enough in reality. (1 out of 256 broken packets gets injected into your TCP
  stream)  Does IPv6 have a stronger TCP?
  
  Whether this is true or not depends critically on the base rate
  of errors in packets delivered to TCP by the IP layer, since
  the rate of errors delivered to SSL is 1/256th of those delivered
  to the TCP layer. 
 
   Out of curiosity, what kind of TCP are you guys using that has 8-bit
 checksums?

You're right. It's 16 bit, isn't it. I plead it being early in 
the morning. I think my point now applies even moreso :)



  Since link layer checksums are very common,
  as a practical matter errored packets getting delivered to protocols
  above TCP is quite rare.
 
   Is it not also worth mentioning that TCP has some added degree of protection
 in that if the ACK sequence num isn't right, the packet is likely to be
 dropped (or just break the stream altogether by desynchronising the seqnums)?

Right, so this now depends on the error model...

-Ekr

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Good to see the FBI follows procedures

2007-12-20 Thread Eric Rescorla
Ryan Singel reports that despite the rather lax standards required for
wiretaps, some FBI agents seem to have decided that they could skip
procedure:

The revelation is the second this year showing that FBI employees
bypassed court order requirements for phone records. In July, the
FBI and the Justice Department Inspector General revealed the
existence of a joint investigation into an FBI counter-terrorism
office, after an audit found that the Communications Analysis Unit
sent more than 700 fake emergency letters to phone companies
seeking call records. An Inspector General spokeswoman declined to
provide the status of that investigation, citing agency policy.

The June 2006 e-mail (.pdf) was buried in more than 600-pages of
FBI documents obtained by the Electronic Frontier Foundation, in a
Freedom of Information Act lawsuit.

The message was sent to an employee in the FBI's Operational
Technology Division by a technical surveillance specialist at the
FBI's Minneapolis field office -- both names were redacted from
the documents. The e-mail describes widespread attempts to bypass
court order requirements for cellphone data in the Minneapolis
office.

Remarkably, when the technical agent began refusing to cooperate,
other agents began calling telephone carriers directly, posing as
the technical agent to get customer cellphone records.

Federal law prohibits phone companies from revealing customer
information unless given a court order, or in the case of an
emergency involving physical danger.

Singel's report is at:
   http://www.wired.com/politics/onlinerights/news/2007/12/fbi_cell

You can read the actual document:
   http://blog.wired.com/27bstroke6/files/minneapolisemail.pdf

It's worth noting that a lot of what's going on here is device
and call tracking, not content capture, so even if you have end-to-end
crypto in your handset, it's only of modest value.

-Ekr

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: picking a hash function to be encrypted

2006-05-17 Thread Eric Rescorla
Travis H. [EMAIL PROTECTED] writes:

 On 5/14/06, Victor Duchovni [EMAIL PROTECTED] wrote:
 Security is fragile. Deviating from well understood primitives may be
 good research, but is not good engineering. Especially fragile are:

 Point taken.  This is not for a production system, it's a research thing.

 TLS (available via OpenSSL) provides integrity and authentication, any
 reason to re-invent the wheel? It took multiple iterations of design
 improvements to get TLS right, even though it was designed by experts.

 IIUC, protocol design _should_ be easy, you just perform some
 finite-state analysis and verify that, assuming your primitives are
 ideal, no protocol-level operations break it.  The 7th Usenix Security
 Symposium has a paper where the authors built up SSL 3.0 to find out
 what attack each datum was meant to prevent.  They used mur-phi, which
 has been used for VLSI verification (i.e. large numbers of states).
 AT&T published some code to do it too (called SPIN).  It's effective
 if the set of attacks you're protecting against is finite and
 enumerable (for protocol design, I think it should be; reflection,
 replay, reorder, suppress, inject, etc.).  I wouldn't consider
 fielding a protocol design without sanity-checking it using such a
 tool.  Was there an attack against TLS which got past FSA, or did the
 experts not know about FSA?

There have been a number of attacks on TLS since Mitchell et al's
paper was published in 1998. The most well known are the attacks
on CBC mode described in http://www.openssl.org/~bodo/tls-cbc.txt.

-Ekr



Re: hamachi p2p vpn nat-friendly protocol details

2006-02-28 Thread Eric Rescorla
Travis H. [EMAIL PROTECTED] writes:

 On 2/24/06, Alex Pankratov [EMAIL PROTECTED] wrote:
 Tero Kivinen wrote:
  Secondly I cannot find where it
  authenticates the crypto suite used at all (it is not included in the
  signature of the AUTH message).

 Crypto suite is essentially just a protocol number. It requires
 no authentication. If the server side responds with HELO.OK, it
 means that it can comprehend specified protocol revision. Similar
 to what happens during the SSH handshake.

 In SSL, the lack of authentication of the cryptosuite could be used to
 convince a v3 client that it is communicating with a v2 server, and
 the v3 server that it is communicating with a v2 client, causing them
 to communicate using SSL v2, which is called the version rollback
 attack.

This isn't quite accurate.

SSLv2 didn't do any kind of downgrade protection at all, for the
version number, cipher suite, or anything else. SSLv3 used a MAC
across the entire handshake. The tricky problem is to protect
downgrade from SSLv3 to SSLv2, which obviously can't be done with the
SSLv3 mechanisms. The trick that SSLv3 used was that when falling back
to SSLv2, SSLv3-capable clients would pad their RSA PKCS#1 blocks
in a special way that SSLv3 servers would detect. If they detected
it, that meant there had been a downgrade.

Unfortunately, not all clients correctly generate this padding
and the check wasn't universally implemented correctly:

http://www.openssl.org/news/secadv_20051011.txt
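In outline, the server-side check works like the sketch below (Python for concreteness; the 8-byte 0x03 sentinel at the end of the PKCS#1 padding is the value given in the SSLv3 specification, but `has_rollback_sentinel` and the surrounding scaffolding are hypothetical, not OpenSSL's actual code):

```python
def has_rollback_sentinel(pkcs1_block: bytes) -> bool:
    # A PKCS#1 v1.5 encryption block looks like:
    #   0x00 0x02 | >= 8 nonzero padding bytes | 0x00 | secret
    # An SSLv3-capable client falling back to SSLv2 sets the last
    # eight padding bytes to 0x03; a server that sees this while
    # speaking SSLv2 concludes a downgrade occurred.
    if len(pkcs1_block) < 11 or pkcs1_block[:2] != b"\x00\x02":
        return False
    sep = pkcs1_block.index(0x00, 2)  # separator that ends the padding
    return pkcs1_block[sep - 8:sep] == b"\x03" * 8

# Block from a downgraded (but SSLv3-capable) client:
flagged = b"\x00\x02" + b"\xaa" * 20 + b"\x03" * 8 + b"\x00" + b"secret"
# Block from a genuine SSLv2-only client:
plain = b"\x00\x02" + b"\xaa" * 28 + b"\x00" + b"secret"
assert has_rollback_sentinel(flagged) and not has_rollback_sentinel(plain)
```

The check only helps, of course, if both the client pads this way and the server actually looks, which is exactly what the advisory above is about.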


-Ekr



Re: EDP (entropy distribution protocol), userland PRNG design

2006-02-08 Thread Eric Rescorla
Travis H. [EMAIL PROTECTED] writes:

 On 2/4/06, Eric Rescorla [EMAIL PROTECTED] wrote:
 Look, this design just reduces to a standard cryptographic PRNG with
 some of the seed being random and periodically being reseeded by the
 random network stream you're sending around. There's no need to
 worry about the integrity or confidentiality of the random stream
 because anyone who controls the network already knows this input. The
 only information they don't have is your random private key.

 How do you figure?  If the random stream conveys 1kB/s, and I'm
 reading 1kB/s from /dev/random, and the network traffic is not
 observed, then I am not stretching the bits in any way, and the result
 should be equivalent to reading from the HWRNG, right?

Well, for starters the assumption that nobody is monitoring the
network traffic is in general unwarranted. 

However, the equivalence (or lack thereof) to a HWRNG depends entirely
on the details of the mixing function in /dev/random, network
buffering, etc. But since /dev/random is basically a PRNG, it's
not clear why you think there's any difference between your and
my designs.

-Ekr





Re: EDP (entropy distribution protocol), userland PRNG design

2006-02-04 Thread Eric Rescorla
Travis H. [EMAIL PROTECTED] writes:
 That leaves me with the following design:

 That random numbers be sent en clair from the system that can generate
 them to the system that needs them, where they are decrypted using a
 random key (generated locally by /dev/random) and fed into the system
 that needs them, in this case the pool used by /dev/random (where they
 will be hashed together with interrupt timings and other complex
 phenomena before being used).

 If the attacker has no access to the LAN traffic, then it gives the
 benefit of a local HWRNG.  If the attacker has access to all the
 network traffic and a great deal of the output of /dev/random on the
 receiving machine, he has, at best, a ciphertext and the hash of the
 (completely random) plaintext to work with.  In actuality it is
 liable to be less clear than that, as /dev/random will scramble it
 with a bunch of low-level stuff and give the hash of that.  State
 remains in the /dev/random pool, so that the next transmission will be
 mixed with the pool created by the first transmission and so on.  So
 in practice an attacker wouldn't even have the hash of the plaintext.

 Does anyone see any problem with the reasoning or resultant design? 
 I'd prefer to not argue over the assumptions.  Does anyone have any
 ideas about how to handle authentication/integrity?

Look, this design just reduces to a standard cryptographic PRNG with
some of the seed being random and periodically being reseeded by the
random network stream you're sending around. There's no need to
worry about the integrity or confidentiality of the random stream
because anyone who controls the network already knows this input. The
only information they don't have is your random private key.

That said, frankly, this is all rather silly. A good cryptographic
PRNG seeded with a few hundred bits of high-quality randomness is
good enough for practically any purpose. Practically the only thing
it's not useful for is
generating OTPs, which, as people have repeatedly told you on this
list, you shouldn't be doing anyway.

Note further that no CPRNG can be safely used to generate OTPs--except
for rather short ones--because the entropy of the resulting randomness
stream is bounded by the size of the CPRNG state no matter how many 
bits of entropy you feed into it. The technical term for this is a 
stream cipher.
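The reduction described above can be sketched as a toy keyed CPRNG whose reseed input may be attacker-known (`SketchPRNG` is a hypothetical illustration, not a vetted DRBG; use the platform's randomness source in practice):

```python
import hashlib

class SketchPRNG:
    """Toy hash-based CPRNG: state starts from a private random key and
    is mixed with (possibly publicly observable) network data. All the
    security rests on the private key, as argued above."""

    def __init__(self, private_key: bytes):
        self.state = hashlib.sha256(private_key).digest()

    def reseed(self, network_stream: bytes):
        # The attacker may know this input; it cannot reduce security.
        self.state = hashlib.sha256(self.state + network_stream).digest()

    def read(self, n: int) -> bytes:
        out, counter = b"", 0
        while len(out) < n:
            out += hashlib.sha256(
                self.state + counter.to_bytes(8, "big")).digest()
            counter += 1
        # Ratchet the state so earlier outputs can't be recomputed.
        self.state = hashlib.sha256(self.state + b"ratchet").digest()
        return out[:n]

rng = SketchPRNG(b"local private key")
rng.reseed(b"bytes broadcast over the LAN")
assert len(rng.read(64)) == 64
```

Note that the output entropy is bounded by the 256-bit state no matter how much input is mixed in, which is precisely the point made above about OTPs.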

-Ekr



Hey kids, come join the NSA!

2005-12-28 Thread Eric Rescorla
Hey boys and girls! Want to help your country defeat that mean old
Osama? Then check out the National Security Agency's CryptoKids web site
(http://www.nsa.gov/kids/):

On this site, you can learn all about codes and ciphers, play lots
of games and activities, and get to know each of us - Crypto Cat,
Decipher Dog, Rosetta Stone, Slate, Joules, T.Top, and, of course,
our leader CSS Sam.

You can also learn about the National Security Agency/Central
Security Service - they're America's real codemakers and
codebreakers. Our Nation's leaders and warfighters count on the
technology and information they get from NSA/CSS to get their jobs
done. Without NSA/CSS, they wouldn't be able to talk to one another
without the bad guys listening and they wouldn't be able to figure
out what the bad guys were planning.

We hope you have lots of fun learning about cryptology and
NSA/CSS. You might be part of the next generation of America's
codemakers and codebreakers.

The site comes complete with a bunch of material on making and breaking
simple codes (cool), resources to teach kids about crypto (also cool),
and detailed biographies of the CryptoKids characters (kind of
creepy). Here's some of what CryptoCat does for fun:

I'm usually hanging out with my friends at the mall or catching the
latest movie. I love helping people so I find different ways to help
out around the community. Right now, I volunteer as a swim coach for
children with special needs. It's a lot of fun AND I get to spend
extra time with my sister who has Down's Syndrome.

The NSA Gifted and Talented program looks pretty cool, though.

-Ekr




Re: browser vendors and CAs agreeing on high-assurance certificates

2005-12-24 Thread Eric Rescorla
Ben Laurie [EMAIL PROTECTED] writes:

 Ian G wrote:
 Ben Laurie wrote:
 ...
 Hopefully over the next year, the webserver (Apache)
 will be capable of doing the TLS extension for sharing
 certs so then it will be reasonable to upgrade.


 In fact, I'm told (I'll dig up the reference) that there's an X509v3
 extension that allows you to specify alternate names in the certificate.
 I'm also told that pretty much every browser supports it.
 
 The best info I know of on the subject is here:
 
 http://wiki.cacert.org/wiki/VhostTaskForce
 
 Philipp has a script which he claims automates
 the best method(s) described within to create
 the alt-names cert.
 
 (The big problem of course is that you can use
 one cert to describe many domains only if they
 are the same administrative entity.)

 If they share an IP address (which they must, otherwise there's no
 problem), then they must share a webserver, which means they can share a
 cert, surely?

Actually, the big problem if you run a virtual hosting server
is that every time you add a new virtual domain you need a new
cert with that domain in it. And that applies even if you put
all the names in one cert.

Really, the ServerHostName extension is better.


 What we really need is for the webservers to
 implement the TLS extension which I think is
 called server name indication.
 
 And we need SSL v2 to die so it doesn't interfere
 with the above.

 Actually, you just disable it in the server. I don't see why we need
 anything more than that.

The problem is that the ServerHostName extension that signals
which host the client is trying to contact is only available
in the TLS ClientHello.

-Ekr



Re: Session Key Negotiation

2005-11-30 Thread Eric Rescorla
Will Morton [EMAIL PROTECTED] writes:
 I am designing a transport-layer encryption protocol, and obviously wish
 to use as much existing knowledge as possible, in particular TLS, which
 AFAICT seems to be the state of the art.

 In TLS/SSL, the client and the server negotiate a 'master secret' value
 which is passed through a PRNG and used to create session keys.

May I ask why you don't just use TLS?


 My question is: why does this secret need to be negotiated?  Why can one
 side or another (preference for client) not just pick a secret key and
 use that?

Well, in TLS in RSA mode, the client picks the secret value (technical
term: PreMaster Secret) but both sides contribute randomness to ensure
that the Master Secret is unique. This is a clean way to
ensure key uniqueness and prevent replay attacks.

In DH mode, of course, both sides contribute shares, but that's
just how DH works.
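The way both sides' randomness enters the derivation can be sketched as follows, using the TLS 1.2-style P_SHA256 expansion for concreteness (TLS at the time used a combined MD5/SHA-1 PRF; the function names here are illustrative, not from any particular library):

```python
import hashlib
import hmac

def p_sha256(secret: bytes, seed: bytes, length: int) -> bytes:
    # TLS P_hash expansion: A(i) chaining, RFC 5246 style.
    out, a = b"", seed
    while len(out) < length:
        a = hmac.new(secret, a, hashlib.sha256).digest()
        out += hmac.new(secret, a + seed, hashlib.sha256).digest()
    return out[:length]

def master_secret(pre_master: bytes, client_random: bytes,
                  server_random: bytes) -> bytes:
    # Both nonces enter the derivation, so even a replayed
    # PreMasterSecret yields a fresh 48-byte MasterSecret.
    seed = b"master secret" + client_random + server_random
    return p_sha256(pre_master, seed, 48)

ms1 = master_secret(b"\x03\x03" + b"\x00" * 46, b"C" * 32, b"S" * 32)
ms2 = master_secret(b"\x03\x03" + b"\x00" * 46, b"X" * 32, b"S" * 32)
assert len(ms1) == 48 and ms1 != ms2
```

This is why neither side can unilaterally pick the session keys: a change in either nonce changes the whole derived secret.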

-Ekr



Re: Another entry in the internet security hall of shame....

2005-08-27 Thread Eric Rescorla
Dave Howe [EMAIL PROTECTED] writes:

 Ian G wrote:
 none of the above.  Using SSL is the wrong tool
 for the job.
 For the one task mentioned - transmitting the username/password pair
 to the server - TLS is completely appropriate.  However, hash based
 verification would seem to be more secure, require no encryption
 overhead on the channel at all, and really connections and crypto
 should be primarily P2P (and not server relayed) anyhow.

Well, it's still attractive to have channel security in order to
prevent hijacking. (Insert usual material about channel bindings 
here...)

-Ekr



Re: Another entry in the internet security hall of shame....

2005-08-25 Thread Eric Rescorla
Ian G [EMAIL PROTECTED] writes:

 Trei, Peter wrote:

 Self-signed certs are only useful for showing that a given
 set of messages are from the same source - they don't provide
 any trustworthy information as to the binding of that source
 to anything.

 Perfectly acceptable over chat, no?  That is,
 who else would you ask to confirm that your
 chatting to your buddy?

Most chat protocols (and Jabber in particular) are server-oriented
protocols. So, the SSL certificate in question isn't that of your
buddy but rather of your Jabber server. 

-Ekr




Menezes on HMQV

2005-07-01 Thread Eric Rescorla
There's an interesting paper up on eprint now:
http://eprint.iacr.org/2005/205

Another look at HMQV
Alfred Menezes

HMQV is a `hashed variant' of the MQV key agreement protocol. It
was recently introduced by Krawczyk, who claimed that HMQV has
very significant advantages over MQV: (i) a security proof under
reasonable assumptions in the (extended) Canetti-Krawczyk model
for key exchange; and (ii) superior performance in some
situations.

In this paper we demonstrate that HMQV is insecure by presenting
realistic attacks in the Canetti-Krawczyk model that recover a
victim's static private key. We propose HMQV-1, a patched
version of HMQV that resists our attacks (but does not have any
performance advantages over MQV). We also identify the fallacies
in the security proof for HMQV, critique the security model, and
raise some questions about the assurances that proofs in this
model can provide.

Obviously, this is of inherent interest, but it also plays a part
in the ongoing debate about the importance of proof as a technique
for evaluating cryptographic protocols.

-Ekr



Re: expanding a password into many keys

2005-06-14 Thread Eric Rescorla
Ian G [EMAIL PROTECTED] writes:

 I'd like to take a password and expand it into
 several keys.  It seems like a fairly simple operation
 of hashing the concatenation of the password
 with each key name in turn to get each key.

 Are there any 'gotchas' with that?

 iang

 PS: some pseudocode if the above is not clear.

 for k in {set of keys needed}
 do
 key[k] = sha1( pass | k );
 done

Some terminology first. Let's assume that we have a password P and
that we want to generate a series of n keys K_1, K_2, ... K_n, each of
which has a label L_1, L_2, ... L_n. What we want is a function
F(P,L_i) that produces K_i values. There are a number of desirable
elements that one would like to incorporate in such a scheme.

The most basic one is that the best attack should be to brute-force
the password space.

So, this means that:

1. You shouldn't be able to compute P from K_i in any less 
   time than exhaustive (or at least dictionary) search of P.
2. You shouldn't be able to compute K_j from K_i (for i!=j)
   in less time than search of P.

Hash-based constructions are the standard here, but I'm generally
leery of using a pure hash. Probably the best basic function is to use
HMAC(P,L_i) or perhaps HMAC(H(P),L_i), since HMAC wasn't designed to
be used with non-random key values.  You'd need someone with a better
understanding of hash functions than I have to tell you which one of
these is better.

But this only gets you part of the way there. We'd really like
to make it harder to dictionary search the password. We can
do this by making F slower. The standard way to do this is simply
to iterate the underlying function. This is what PKCS #5 does.
This of course slows down the user, but that's barely noticeable
in ordinary operation and it of course slows down the attacker
by a comparable margin.

An additional trick, used by Halderman, Waters, and Felten [1]
(which pretty much embodies the state of the art here)
is to have a two-level system where you substitute K in F with
G(K), where G(K) is computed by a similar, very expensive 
iterative procedure. The idea is that the first time you
use the password generator on a given computer, you compute
G(K) and then cache it. This takes maybe a minute or so,
but in the future all of your authentications are fast and this
obviously really slows down the attacker.
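The pieces above - HMAC over the password, labels for key separation, and PKCS #5-style iteration - can be sketched like this (using the label as the PBKDF2 salt and a round 100,000 iteration count are illustrative choices for this example, not a reviewed design):

```python
import hashlib

def derive_key(password: bytes, label: bytes,
               iterations: int = 100_000) -> bytes:
    # PBKDF2 (PKCS #5) is exactly the "iterate an HMAC-based F" idea;
    # here the key label L_i stands in for the salt.
    return hashlib.pbkdf2_hmac("sha256", password, label, iterations)

k_mail = derive_key(b"correct horse", b"mail-signing")
k_disk = derive_key(b"correct horse", b"disk-encryption")
# Property 2 above: K_i values for different labels are independent.
assert k_mail != k_disk and len(k_mail) == 32
```

A real deployment would also mix in a per-user random salt so that two users sharing a password don't share keys; the label-only version shown here is just the F(P,L_i) shape from the discussion.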

-Ekr

[1] Halderman, Waters, and Felten, A Convenient Method for Securely Managing
Passwords, WWW 2005.
http://www.cs.princeton.edu/~jhalderm/papers/www2005.pdf



Re: Collisions for hash functions: how to explain them to your boss

2005-06-13 Thread Eric Rescorla
Stefan Lucks [EMAIL PROTECTED] writes:
 Magnus Daum and myself have generated MD5-collisions for PostScript files:

   http://th.informatik.uni-mannheim.de/people/lucks/HashCollisions/

 This work is somewhat similar to the work from Mikle and Kaminsky, except 
 that our colliding files are not executables, but real documents. 

 We hope to demonstrate how serious hash function collisions should be 
 taken -- even for people without much technical background. And to help 
 you to explain these issues 

   - to your boss or your management,
   - to your customers,
   - to your children ...

While this is a clever idea, I'm not sure that it means what you imply
it means. The primary thing that makes your attack work is that the
victim is signing a program which he is only able to observe mediated
through his viewer. But once you're willing to do that, you've got a
problem even in the absence of collisions, because it's easy to write
a program which shows different users different content even
without hash collisions. You just need to be able to write
conditionals.

For more, including an example, see:
http://www.educatedguesswork.org/movabletype/archives/2005/06/md5_collisions.html

-Ekr






Re: Collisions for hash functions: how to explain them to your boss

2005-06-13 Thread Eric Rescorla
Weger, B.M.M. de [EMAIL PROTECTED] writes:

 Technically speaking you're correct, they're signing a program.
 But most people, certainly non-techies like Alice's boss,
 view postscript (or MS Word, or name your favourite document 
 format that allows macros) files not as programs but as static 
 data. In being targeted at non-techies I find this attack more 
 convincing than those of Mikle and Kaminsky, though essentially
 it's a very similar idea.

 Note that opening the postscript files in an ASCII-editor
 (or HEX-editor) immediately reveals the attack. Stefan Lucks
 told me they might be able to obfuscate the postscript code, 
 but again this will only fool the superficial auditor.

Yes, this is all true, but it's kind of orthogonal to my point,
which is that if you're willing to execute a program, this 
attack can be mounted *without* the ability to produce hash
collisions. The fact that so few people regard PS, HTML, Word,
etc. as software just makes this point that much sharper.
As far as I can tell, the ability to produce hash collisions just
makes the attack marginally worse.

-Ekr





ANNOUNCE: PureTLS 0.9b5

2005-06-02 Thread Eric Rescorla
ANNOUNCE: PureTLS version 0.9b5
Copyright (C) 1999-2005 Claymore Systems, Inc.

http://www.rtfm.com/puretls

DESCRIPTION
PureTLS is a free Java-only implementation of the SSLv3 and TLSv1
(RFC2246) protocols. PureTLS was developed by Eric Rescorla for
Claymore Systems, Inc, but is being distributed for free because we
believe that basic network security is a public good and should be a
commodity. PureTLS is licensed under a Berkeley-style license, which
basically means that you can do anything you want with it, provided
that you give us credit.

This is a beta release of PureTLS. Although it has undergone a fair
amount of testing and is believed to operate correctly, it no doubt contains 
significant bugs, which this release is intended to shake out. Please
send any bug reports to the author at [EMAIL PROTECTED].

CHANGES FROM B4
* SECURITY: Zero OPTIONAL values before parsing. This prevents 
  bleedthrough of those values from previously parsed certificates
  into certificates where they are missing. This is a workaround for a 
  bug in the Cryptix ASN.1 kit.

  The only relevant values are Extensions and Algorithm.Parameters.
  In practice this should not be a problem with Algorithm.Parameters
  Since they're NULL in RSA certificates and always present in real 
  DSA certificates. If you rely on Extensions you should upgrade
  as soon as possible.

  Note: extensions processing is still only partially tested (see
  below). 

* Trim all leading zeros from DH shared keys. This fixes a rare
  compatibility problem.

* Fix handling of pathLen constraints. We were off by one, causing
  some valid certificates to be rejected.


We believe that this is the best version of PureTLS available.  Users
are advised to upgrade as soon as possible. In particular, if you rely
on X.509 extension processing you should upgrade as soon as possible.

This will most likely be the last release of PureTLS distributed 
as a standalone package by Claymore Systems. We have given
the BouncyCastle project (http://www.bouncycastle.org) permission to
integrate the PureTLS source code with their library and
we expect them to deliver an integrated system in the future.



Re: MD5 To Be Considered Harmful Someday

2004-12-08 Thread Eric Rescorla
James A. Donald [EMAIL PROTECTED] writes:

 --
 On 6 Dec 2004 at 16:14, Dan Kaminsky wrote:
 * Many popular P2P networks (and innumerable distributed 
 content databases) use MD5 hashes as both a reliable search 
 handle and a mechanism to ensure file integrity.  This makes 
 them blind to any signature embedded within MD5 collisions. 
 We can use this blindness to track MP3 audio data as it 
 propagates from a custom P2P node.

 This seems pretty harmful right now, no need to wait for 
 someday.

 But even back when I implemented Crypto Kong, the orthodoxy was 
 that one should use SHA1, even though it is slower than MD5, so 
 it seems to me that MD5 was considered harmful back in 1997, 
 though I did not know why at the time, and perhaps no one knew 
 why.
Dobbertin's collision in the MD5 compression function was published
in May of 1996.

-Ekr



Re: SSL/TLS passive sniffing

2004-12-01 Thread Eric Rescorla
[EMAIL PROTECTED] writes:

 -Original Message-
 From: Eric Rescorla [mailto:[EMAIL PROTECTED] 
 Sent: Wednesday, December 01, 2004 7:01 AM
 To: [EMAIL PROTECTED]
 Cc: Ben Nagy; [EMAIL PROTECTED]
 Subject: Re: SSL/TLS passive sniffing
 
 Ian Grigg [EMAIL PROTECTED] writes:
 [...]
  However could one do a Diffie Hellman key exchange and do this
  under the protection of the public key? [...]
 
 Uh, you've just described the ephemeral DH mode that IPsec
 always uses and SSL provides.
 
 Try googling for station to station protocol
 
 -Ekr

 Right. And my original question was, why can't we do that one-sided with
 SSL, even without a certificate at the client end? In what ways would that
 be inferior to the current RSA suites where the client encrypts the PMS
 under the server's public key.

Just to be completely clear, this is exactly what the
TLS_DHE_RSA_* ciphersuites currently do, so it's purely a matter
of configuration and deployment.

-Ekr



Re: IPsec +- Perfect Forward Secrecy

2004-12-01 Thread Eric Rescorla
John Denker [EMAIL PROTECTED] writes:
 Eric Rescorla wrote:

 Uh, you've just described the ephemeral DH mode that IPsec
 always uses and SSL provides.

 I'm mystified by the word always there, and/or perhaps by
 the definition of Perfect Forward Secrecy.  Here's the dilemma:

 On the one hand, it would seem to the extent that you use
 ephemeral DH exponents, the very ephemerality should do most
 (all?) of what PFS is supposed to do.  If not, why not?

 And yes, IPsec always has ephemeral DH exponents lying around.

 On the other hand, there are IPsec modes that are deemed to
 not provide PFS.  See e.g. section 5.5 of
http://www.faqs.org/rfcs/rfc2409.html

Sorry, when I said IPsec I mean IKE. I keep trying to forget
about the manual keying modes. AFAICT IKE always uses the
DH exchange as part of establishment.

-Ekr



Certificate serial number generation algorithms

2004-10-11 Thread Eric Rescorla
Does anyone know the details of the certificate serial number
generation algorithms used by various CAs? 

In particular, Verisign's is very long and I seem to remember someone telling
me it was a hash but I don't recall the details...

Thanks,
-Ekr



SHA-1 rumors

2004-08-16 Thread Eric Rescorla
Ed Felten's blog is carrying the rumor that a break in SHA-1
is going to be announced soon:

http://www.freedom-to-tinker.com/archives/000661.html

I've also done some off-the-cuff analysis of how bad this
would be in practice, which you can find here:

http://www.rtfm.com/movabletype/archives/2004_08.html#001051

The key question is whether it's just collisions, which would
be embarrassing, but which don't affect most applications, or
whether there is forward progress in finding preimages.

Anyone know anything about this rumor?

-Ekr

P.S. AFAIK, although Dobbertin was able to find preimages for
reduced MD4, there still isn't a complete break in MD4. Correct?



A collision in MD5'

2004-08-16 Thread Eric Rescorla
I've now successfully reproduced the MD5 collision result. Basically
there are some endianness problems.

The first problem is the input vectors. They're given as hex words, but
MD5 is defined in terms of bitstrings. Because MD5 is little-endian, you
need to reverse the written byte order to generate the input data. A
related problem is that some of the words are given as only 7 hex
digits. Assuming that they have a leading zero fixes that
problem. Unfortunately, this still doesn't give you the right hash
value.

The second problem, which was found by Steve Burnett from Voltage
Security, is that the authors aren't really computing MD5. The
algorithm is initialized with a certain internal state, called an
Initialization Vector (IV). This vector is given in the MD5 RFC as:

word A: 01 23 45 67
word B: 89 ab cd ef
word C: fe dc ba 98
word D: 76 54 32 10

but this is little-endian format. So, the actual initialization values
should be 0x67452301, etc...

The authors use the values directly, so they use: 0x01234567,
etc... Obviously, this gives you the wrong hash value. If you use these
wrong IVs, you get a collision... though strangely with a different hash
value than the authors provide. Steve and I have independently gotten
the same result, though of course we could have made mistakes...

So, this looks like it isn't actually a collision in MD5, but rather in
some other algorithm, MD5'. However, there's nothing special about the
MD5 IV, so I'd be surprised if the result couldn't be extended to real
MD5.
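The byte-order confusion is easy to demonstrate (a small sketch; the hex strings are the IV bytes exactly as printed in the RFC excerpt above):

```python
import struct

iv_bytes = [bytes.fromhex(h) for h in
            ("01234567", "89abcdef", "fedcba98", "76543210")]

# Real MD5: the RFC's byte listing is little-endian, so word A is
# the 32-bit value 0x67452301, not 0x01234567.
real_md5_iv = [struct.unpack("<I", b)[0] for b in iv_bytes]
assert real_md5_iv == [0x67452301, 0xEFCDAB89, 0x98BADCFE, 0x10325476]

# MD5': using the printed digits directly as word values, i.e. a
# big-endian read of the same bytes.
md5prime_iv = [struct.unpack(">I", b)[0] for b in iv_bytes]
assert md5prime_iv == [0x01234567, 0x89ABCDEF, 0xFEDCBA98, 0x76543210]
```

Initializing an otherwise-correct MD5 implementation with the second set of words is what produces the MD5' behavior described above.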

-Ekr



Source code for MD5' collisions

2004-08-16 Thread Eric Rescorla
I've posted source code that demonstrates the MD5 collisions
on my web site at:

http://www.rtfm.com/md5coll.tar.gz

It's just a modified version of the RFC1321 MD5 source code
with the byte-flipping in the state initialization. It also
includes machine readable test vectors and a makefile. Just
run 'make' and you get the following output, at least on
FreeBSD:

gcc -o md5prime -DINVERT_STATE -DMD=5 md5.c mddriver.c
# X1 and X1' with ordinary MD5--no collision
./md5 X1.bin
MD5 (X1.bin) = e115410841d7a06f2913be15e1760fd1
./md5 X1prime.bin
MD5 (X1prime.bin) = 7005ea821bcc0e64d0eb9852f2bec2bd
# X1 and X1' with md5prime--collision
./md5prime X1.bin
MD5 (X1.bin) = 8ada1581c24565adac73a2d27160ca90
./md5prime X1prime.bin
MD5 (X1prime.bin) = 8ada1581c24565adac73a2d27160ca90
echo

# X2 and X2' with ordinary MD5
./md5 X2.bin
MD5 (X2.bin) = 55f94e8f79e8a9795fad79f4c6ab5f11
./md5 X2prime.bin
MD5 (X2prime.bin) = 47aaf6e98d0799f9a85db9fd86cb392a
# X2 and X2' with md5prime
./md5prime X2.bin
MD5 (X2.bin) = 1a2a1d55c87318422367ae3462143fb6
./md5prime X2prime.bin
MD5 (X2prime.bin) = 1a2a1d55c87318422367ae3462143fb6

-Ekr



Re: Using crypto against Phishing, Spoofing and Spamming...

2004-07-18 Thread Eric Rescorla
Ian Grigg [EMAIL PROTECTED] writes:
 Notwithstanding that, I would suggest that the money
 already lost is in excess of the amount paid out to
 Certificate Authorities for secure ecommerce certificates
 (somewhere around $100 million I guess) to date.  As
 predicted, the CA-signed certificate missed the mark,
 secure browsing is not secure, and the continued
 resistance against revision of the browser's useless
 padlock display is the barrier to addressing phishing.

I don't accept this argument at all.

There are at least three potential kinds of attack here:

(1) Completely passive capture attacks.
(2) Semi-active attacks that don't involve screwing with
the network infrastructure (standard phishing attacks)
(3) Active attacks on the network infrastructure.

SSL does a fine job of protecting against (1) and a fairly adequate
job of protecting against (3). Certainly you could do a better job
against (3) if either:

(a) You could directly connect to sites with SSL a la
https://www.expedia.com/
(b) The identities were more user-friendly as we anticipated back in
the days of S-HTTP rather than being domain names, as required by
SSL. 

It does a lousy job of protecting against (2).

Now, my threat model mostly includes (1), does not really include
(3), and I'm careful not to do things that leave me susceptible
to (2), so SSL does in fact protect against the attacks in my
threat model. I know a number of other people with similar threat
models. Accordingly, I think the claim that secure browsing
is not secure rather overstates the case.

-Ekr








Re: Verifying Anonymity

2004-07-16 Thread Eric Rescorla
Ben Laurie [EMAIL PROTECTED] writes:
 The recent conversation on SSL where Eric Rescorla was lampooned for
 saying (in effect) I've tried it on several occasions and it seemed
 to work, therefore it must be trustworthy to which he responded
 actually, that's a pretty reasonable way of assessing safety in
 systems where there's no attacker specifically targeting you prompted
 me to ask this ... if a system claims to give you anonymity, how do
 you (as a user) assess that claim? I find it hard to imagine how you
 can even know whether it seems to work, let alone has some subtle
 problem.

That's clearly a much harder problem--and indeed I suspect it's behind
the general lack of interest that the public has shown in anonymous
systems.

-Ekr



Re: Humorous anti-SSL PR

2004-07-15 Thread Eric Rescorla
J Harper [EMAIL PROTECTED] writes:

 This barely deserves mention, but is worth it for the humor:
 Information Security Expert says SSL (Secure Socket Layer) is Nothing More
 Than a Condom that Just Protects the Pipe
 http://www.prweb.com/releases/2004/7/prweb141248.htm

What's wrong with a condom that protects the pipe? I've used
condoms many times and they seemed to do quite a good job
of protecting my pipe.

-Ekr



Koblitz and Menezes on Provable Security

2004-07-14 Thread Eric Rescorla
If you haven't already, you should check out the Koblitz and Menezes
paper about Provable Security on eprint:

http://eprint.iacr.org/2004/152.pdf

Here's the abstract:
We give an informal analysis and critique of several typical provable
security results. In some cases there are intuitive but convincing
arguments for rejecting the conclusions suggested by the formal
terminology and proofs, whereas in other cases the formalism seems to
be consistent with common sense. We discuss the reasons why the search
for mathematically convincing theoretical evidence to support the
security of public-key systems has been an important theme of
researchers. But we argue that the theorem-proof paradigm of
theoretical mathematics is of limited relevance here and often leads
to papers that are confusing and misleading. Because our paper is
aimed at the general mathematical public, it is self-contained and as
jargon-free as possible.

You can also find my amateur's writeup at:
http://www.rtfm.com/movabletype/archives/2004_07.html#000995

-Ekr



Re: EZ Pass and the fast lane ....

2004-07-10 Thread Eric Rescorla
Perry E. Metzger [EMAIL PROTECTED] writes:

 John Gilmore [EMAIL PROTECTED] writes:
 It would be relatively easy to catch someone
 doing this - just cross-correlate with other
 information (address of home and work) and
 then photograph the car at the on-ramp.

 Am I missing something?

 It seems to me that EZ Pass spoofing should become as popular as
 cellphone cloning, until they change the protocol.

 I doubt it.

 All the toll lanes that accept EZ Pass that I've seen are equipped
 with cameras. These cameras are used to identify toll evaders
 already. You point out that doing this would require manual work, but
 in fact several systems (including the one used for handling traffic
 fees in central London) have already demonstrated that automated
 license plate reading systems are feasible. Even without automated
 plate reading, storing photographs is also now astoundingly cheap
 given how cheap storage has gotten, so if anyone ever complained about
 incorrect charges on their bill, finding the plates of the cars that
 went through during the disputed toll collections would be trivial.

Precisely. Moreover, you can presumably use fairly unsophisticated
data mining/fraud detection techniques to detect when a unit has
been cloned and then go back to the photographs to find and punish
the offenders.

-Ekr



Re: Is finding security holes a good idea?

2004-06-17 Thread Eric Rescorla
Birger Toedtmann [EMAIL PROTECTED] writes:

 Am Do, den 10.06.2004 schrieb Eric Rescorla um 20:37:
 Cryptography readers who are also interested in systems security may be
 interested in reading my paper from the Workshop on Economics
 and Information Security '04:
 
 Is finding security holes a good idea?
 [...]

 The economic reasoning within the paper misses casualties that arise
 from automated, large scale attacks.

 In figure 2, the graph indicating the Black Hat Discovery Process
 suggests we should expect a minor impact of Private Exploitation only,
 because the offending Black Hat group is small and exploits manually. 
 However, one could also imagine Code Red, Slammer and the like.  Apart
 from having a fix ready or not, when vulnerabilities of this kind are
 not known *at all* to the public (no problem description, no workaround
 like remove file XYZ for a while known), worms can hit the network far
 more severely than they already do with knowledge of the vulnerability and
 even fixes available.  I would expect the Intrusion Rate curve to be
 formed radically different at this point.  This also affects the
 discussion about social welfare lost / gained through disclosure quite a
 lot.

 I don't see how applying Browne's vulnerability cycle concept to the
 Black Hat Discovery case as it has been done in the paper can reflect
 these threat scenarios correctly.  

It's true that the Browne paper doesn't apply directly, but I don't
actually agree that rapid spreading malware alters the reasoning in
the paper much. None of the analysis on the paper depends on any
particular C_BHD/C_WHD ratio. Rather, the intent is to provide
boundaries for what one must believe about that ratio in order to
think that finding bugs is a good idea.

That said, I don't think that the argument you present above is that
convincing. it's true that a zero-day worm would be bad, but given the
shape of the patching curve [0], a day-5 worm would be very nearly as
bad (and remember that it's the C_BHD/C_WHD ratio we care about).
Indeed, note that all of the major worms so far have been based on
known vulnerabilities. 

-Ekr

[0] E. Rescorla, Security Holes... Who Cares?, Proc. 12th USENIX
Security, 2003.



Re: Is finding security holes a good idea?

2004-06-16 Thread Eric Rescorla
Jerrold Leichter [EMAIL PROTECTED] writes:

 | Thor Lancelot Simon [EMAIL PROTECTED] writes:
 |
 |  On Mon, Jun 14, 2004 at 08:07:11AM -0700, Eric Rescorla wrote:
 |  Roughly speaking:
 |  If I as a White Hat find a bug and then don't tell anyone, there's no
 |  reason to believe it will result in any intrusions.  The bug has to
 | 
 |  I don't believe that the premise above is valid.  To believe it, I think
 |  I'd have to hold that there were no correlation between bugs I found and
 |  bugs that others were likely to find; and a lot of experience tells me
 |  very much the opposite.
 |
 | The extent to which bugs are independently rediscovered is certainly
 | an open question which hasn't received enough study. However, the
 | fact that relatively obvious and serious bugs seem to persist for
 | long periods of time (years) in code bases without being found
 | in the open literature, suggests that there's a fair amount of
 | independence.
 I don't find that argument at all convincing.  After all, these bugs *are*
 being found!

Well, SOME bugs are being found. I don't know what you mean by
these bugs. We don't have any real good information about
the bugs that haven't been found. What makes you think that
there aren't 5x as many bugs still in the code that are basically
like the ones you've found?


 It's clear that having access to the sources is not, in and of itself,
 sufficient to make these bugs visible (else the developers of close-source
 software would find them long before independent white- or black-hats).

I don't think that's clear at all. It could be purely stochastic.
I.e. you look at a section of code, you find the bug with some
probability. However, there's a lot of code and the auditing
coverage isn't very deep so bugs persist for a long time. 
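A back-of-the-envelope simulation of that stochastic model (all parameters are invented for illustration):

```python
import random

# If each audit inspects only a small random fraction of the code,
# the chance that a second, independent audit re-finds a specific
# known bug is roughly the coverage fraction -- so even "obvious"
# bugs can sit unfound for a long time.
random.seed(0)
SECTIONS = 1000    # code sections, assume one latent bug per section
COVERAGE = 50      # sections one audit can inspect (5% coverage)

def audit() -> set:
    return set(random.sample(range(SECTIONS), COVERAGE))

first = audit()                       # bugs found by the first auditor
refound = len(first & audit()) / len(first)
print(refound)                        # around 0.05 on average
```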

-Ekr




Re: Is finding security holes a good idea?

2004-06-16 Thread Eric Rescorla
Damien Miller [EMAIL PROTECTED] writes:

 Eric Rescorla wrote:
 I don't think that's clear at all. It could be purely stochastic.
 I.e. you look at a section of code, you find the bug with some
 probability. However, there's a lot of code and the auditing
 coverage isn't very deep so bugs persist for a long time. 

 I suspect that auditing coverage is usually going to be very similar to
 the search patterns used by blackhats - we are all human and are likely
 to be drawn to similar bugs. Auditing may therefore yield a superlinear
 return on effort. Is that enough to make it a good idea?

I agree that this is a possibility. We'd need further research
to know if it's in fact correct.

-Ekr



Re: Is finding security holes a good idea?

2004-06-16 Thread Eric Rescorla
Thor Lancelot Simon [EMAIL PROTECTED] writes:
 On Tue, Jun 15, 2004 at 09:37:42PM -0700, Eric Rescorla wrote:
 If you won't grant that humans experienced in a given field tend to think
 in similar ways, fine.  We'll just have to agree to disagree; but I think
 you'll have a hard time making your case to anyone who _does_ believe that,
 which I think is most people.  If you do grant it, I think it behooves you
 to explain why you don't believe that's the case as regards finding bugs;
 or to withdraw your original claim, which is contingent upon it.

I'm sorry, but I don't think this follows at all.

Let's assume for the sake of argument that two people auditing
the same code section will find the same set of bugs. So, how
to account for the fact that obvious errors persist for long
periods of time in popular code bases? It must be that those
sections were never properly audited, since by hypothesis
the bugs are obvious and yet were not found. However, this
happens fairly often, which suggests that coverage must
be pretty bad. Accordingly, it's easy to see how you could
get low re-finding rates even if people roughly think alike.

Now, you could argue that because people think alike, everyone
looks at the exact same sections of the code, but I think
that this is belied by the fact that many of these self-same
obvious bugs are found in obvious places, such as protocol
parsers. 

So, while I think it's almost certainly not true that bug finding
order is completely random, I think it's quite plausible that it's
mostly random. Ultimately, however, it's an empirical question and I'd
be quite interested in seeing some studies on it.

I think I've said enough on this general topic. If you'd like to have
the last word, feel free.

-Ekr




Re: Is finding security holes a good idea?

2004-06-15 Thread Eric Rescorla
Thor Lancelot Simon [EMAIL PROTECTED] writes:

 On Mon, Jun 14, 2004 at 08:07:11AM -0700, Eric Rescorla wrote:
 in the paper. 
 
 Roughly speaking:
 If I as a White Hat find a bug and then don't tell anyone, there's no
 reason to believe it will result in any intrusions.  The bug has to

 I don't believe that the premise above is valid.  To believe it, I think
 I'd have to hold that there were no correlation between bugs I found and
 bugs that others were likely to find; and a lot of experience tells me
 very much the opposite.

The extent to which bugs are independently rediscovered is certainly
an open question which hasn't received enough study. However, the
fact that relatively obvious and serious bugs seem to persist for
long periods of time (years) in code bases without being found
in the open literature, suggests that there's a fair amount of
independence. 

-Ekr




Re: Is finding security holes a good idea?

2004-06-14 Thread Eric Rescorla
Ben Laurie [EMAIL PROTECTED] writes:

 Eric Rescorla wrote:

 Cryptography readers who are also interested in systems security may be
 interested in reading my paper from the Workshop on Economics
 and Information Security '04:
 Is finding security holes a good idea?
 Eric Rescorla
 RTFM, Inc.
 A large amount of effort is expended every year on finding and
 patching security holes. The underlying rationale for this activity
 is that it increases welfare by decreasing the number of bugs
 available for discovery and exploitation by bad guys, thus reducing
 the total cost of intrusions. Given the amount of effort expended,
 we would expect to see noticeable results in terms of improved
 software quality. However, our investigation does not support a
 substantial quality improvement--the data does not allow us to
 exclude the possibility that the rate of bug finding in any given
 piece of software is constant over long periods of time. If there is
 little or no quality improvement, then we have no reason to believe
 that the disclosure of bugs reduces the overall cost of
 intrusions.

 I don't see how that follows. If a bug is found but not disclosed,
 then it can be used for intrusion. If it is disclosed, then it cannot
 (assuming it gets fixed, of course). The fact that there are more bugs
 to be found which can _also_ be used for intrusions doesn't mean
 there's no point in fixing the hole, surely - at least the next bug
 has to be found before intrusions can occur again.

Well, this is just the abstract... The full argument is laid out
in the paper. 

Roughly speaking:
If I as a White Hat find a bug and then don't tell anyone, there's no
reason to believe it will result in any intrusions.  The bug has to
become known to Black Hats before it can be used to mount
intrusions. This can either happen by Black Hats re-finding it or some
White Hat disclosing it.  So, the question is, at least in part, what
the likelihood of these happening is...

-Ekr








Re: Is finding security holes a good idea?

2004-06-14 Thread Eric Rescorla
Ariel Waissbein [EMAIL PROTECTED] writes:

  
   Roughly speaking:
   If I as a White Hat find a bug and then don't tell anyone, there's no
   reason to believe it will result in any intrusions.  The bug has to
   become known to Black Hats before it can be used to mount
   intrusions. This can either happen by Black Hats re-finding it or some
   White Hat disclosing it.  So, the question is, at least in part, what
   the likelihood of these happening is...
  
   -Ekr
  

 Eric,
 I'd say that the good part comes when the security community learns
 from its mistakes, builds a theory around it, and finds conclusive
 solutions to well defined and isolated problems. So that examples (bug
 reports) give the necessary intuition, they are valuable, and in fact,
 necessary. 

I think it's important to distinguish between new classes of
bugs and new instances of old bugs. I agree that new classes
of bugs are potentially interesting; however, I don't think
that this argument applies to the 513th buffer overflow.
See S 8.4 of the paper.


 My point is that, though your argument may be correct, you
 arrive at the conclusion that bug reporting has no effects
 arbitrarily.

I never claimed that. What I said was that the evidence does not
show that the positive effects of bug reporting, in terms of reduced
intrusions, clearly offset the negative effects of said reporting.


 I do not mean to act like the old greeks, interested only in
 theoretical problems, and despising the empirical. I'd like to
 maintain InfoSec infrastructures safe as of ten years ago. But I will
 not get into a discussion on the process of bug reporting, since the
 extensive threads all over cannot settle it. I am confident that bugs
 need to be reported, eventually -the sooner the better. And that it is
 the software-development community's job to learn from this continuous
 reporting. Doing otherwise is neglecting reality.

I'm not sure how to answer this. In my view it's a bad idea to
be confident of propositions when one doesn't have empirical data
to support them.

-Ekr



Re: Is finding security holes a good idea?

2004-06-13 Thread Eric Rescorla
[EMAIL PROTECTED] writes:

 From: Eric Rescorla [EMAIL PROTECTED]

Is finding security holes a good idea?
  
Paper:http://www.dtc.umn.edu/weis2004/rescorla.pdf
Slides:   http://www.dtc.umn.edu/weis2004/weis-rescorla.pdf

 In section 1 there's a crucial phrase not properly followed up:
 significant opportunity cost since these researchers
  could be doing other security work instead of finding
  vulnerabilities.
 What other security work is being used for comparison ?
 - finding and fixing non-program flaws (such as in configuration)
   I do a lot of this - and I'm not about to run out of it.
   I know that _finding_ the flaws is easy.  Even finding
   many of them systematically is easy.  Fixing them
   often gets stuck on the problem of that's Fred's piece
   of work and he doesn't feel like doing it.
 - fixing long-known and neglected bugs (are there many ?)
 - accelerating patch uptake
 - technical work on tolerant architectures/languages etc
 - advocacy work on tolerant architectures/languages etc
   (Where's Howard Aiken when you need him ?)
 - forensics
 - other ?
All of the above? Probably my favorite would be finding mechanical
ways to make programs more secure--e.g. stuff like Stackguard,
etc. As you say, moving to non-C languages would be a really
good start!

 Footnote 1 mentions an indirect effect of vulnerability research.
 Another one would be programmer education - but reporting yet
 another bug of a common type seems to have low value.  People
 do need to be aware that (their!) software can be faulty and in
 roughly what ways.
Good point.

 In 3.4 if proactive WHD is not worth the effort because the bugs
 get discovered anyway when they are widely exploited what does
 this say about finding vulnerabilities through their use in the
 wild ? Is this more costly but better aimed at the bugs that matter ?
 Are there cost-effective ways to do this reactive discovery ?  What
 tools would simplify it ?
Excellent point. There's no real data on this topic but my 
intuition would be that better IDS/anomaly detection would be
a useful tool here. Also, some kind of automated
forensic network recording so that when intrusions are
detected it's easy to backfigure what happened.

-Ekr



Is finding security holes a good idea?

2004-06-10 Thread Eric Rescorla
Cryptography readers who are also interested in systems security may be
interested in reading my paper from the Workshop on Economics
and Information Security '04:

Is finding security holes a good idea?

Eric Rescorla
RTFM, Inc.

A large amount of effort is expended every year on finding and
patching security holes. The underlying rationale for this activity
is that it increases welfare by decreasing the number of bugs
available for discovery and exploitation by bad guys, thus reducing
the total cost of intrusions. Given the amount of effort expended,
we would expect to see noticeable results in terms of improved
software quality. However, our investigation does not support a
substantial quality improvement--the data does not allow us to
exclude the possibility that the rate of bug finding in any given
piece of software is constant over long periods of time. If there is
little or no quality improvement, then we have no reason to believe
that the disclosure of bugs reduces the overall cost of
intrusions.

Paper:http://www.dtc.umn.edu/weis2004/rescorla.pdf
Slides:   http://www.dtc.umn.edu/weis2004/weis-rescorla.pdf

-Ekr



Re: Chalabi Reportedly Told Iran That U.S. Had Code

2004-06-04 Thread Eric Rescorla
Perry E. Metzger [EMAIL PROTECTED] writes:

 The New York Times reports:

 Chalabi Reportedly Told Iran That U.S. Had Code

 June 2, 2004
  By JAMES RISEN and DAVID JOHNSTON 


 Ahmad Chalabi told an Iranian official that the U.S. had
 broken the communications code of Iran's intelligence
 service.

What I think is interesting is to ask how this happened at all.  After
all, we usually think of modern algorithms as essentially
unbreakable. It would certainly be really big news if the NSA knew how
to break AES. Some of my speculation about what broken the
communications code means can be found at:

http://www.rtfm.com/movabletype/archives/2004_06.html#000934

-Ekr



Blind signatures with DSA/ECDSA?

2004-04-07 Thread Eric Rescorla
Folks,

Does anyone know if there is a blind signature scheme that works with
DSA or ECDSA? I know about Camenisch, Pivetau and Stadler's Blind
Signatures Based on the Discrete Logarithm Problem (1994), but as far
as I can tell that doesn't produce straight DSA-verifiable signatures
and so is a lot less desirable than it might otherwise be.

Has there been any better work on this?

Thanks,
-Ekr



Re: I don't know PAIN...

2003-12-29 Thread Eric Rescorla
Jerrold Leichter [EMAIL PROTECTED] writes:

 |  Note that there is no theoretical reason that it should be
 |  possible to figure out the public key given the private key,
 |  either, but it so happens that it is generally possible to
 |  do so
 | 
 |  So what's this generally possible business about?
 |
 | Well, AFAIK its always possible, but I was hedging my bets :-) I can
 | imagine a system where both public and private keys are generated from
 | some other stuff which is then discarded.
 That's true of RSA!  The public and private keys are indistinguishable - you
 have a key *pair*, and designate one of the keys as public.  Computing either
 key from the other is as hard as factoring the modulus.  (Proof:  Given both
 keys in the pair, it's easy to factor.)

It's worth pointing out that this isn't how RSA is used in practice,
for two reasons:

(1) Almost everyone uses one of three popular RSA public exponents
(3, 17, 65537) and then computes the private key from p and q.
(2) PKCS-1 RSAPrivateKey structures contain the public key.

-Ekr

-- 
[Eric Rescorla   [EMAIL PROTECTED]
http://www.rtfm.com/



Re: WYTM?

2003-10-13 Thread Eric Rescorla
Ian Grigg [EMAIL PROTECTED] writes:
  It's really a mistake to think of SSL as being designed
  with an explicit threat model. That just wasn't how the
  designers at Netscape thought, as far as I can tell.
 
 
 Well, that's the sort of confirmation I'm looking
 for.  From the documents and everything, it seems
 as though the threat model wasn't analysed, it was
 just picked out of a book somewhere.  Or, as you
 say, even that is too kind, they simply didn't
 think that way.

 But, this is a very important point.  It means that
 when we talk about secure browsing, it is wrong to
 defend it on the basis of the threat model.  There
 was no threat model.  What we have is an accident
 of the past.

Maybe so, but it coincides relatively well with the
common Internet threat model, so I think you can't
just dismiss that out of hand as if it were pulled
out of the air.


  Incidentally, Ian, I'd like to propose a counterargument
  to your argument. It's true that most web traffic
  could be encrypted if we had a more opportunistic key
  exchange system. But if there isn't any substantial
  sniffing (i.e. the wire is secure) then who cares?
 
 
 Exactly.  Why do I care?  Why do you care?
 
 It is mantra in the SSL community and in the
 browsing world that we do care.  That's why
 the software is arranged in a a double lock-
 in, between the server and the browser, to
 force use of a CA cert.

You keep talking about the server locking you in, but it doesn't.
The world is full of people who run SSL servers with self-signed
certs.

And on the client side the user can, of course, click ok to the do
you want to accept this cert dialog. Really, Ian, I don't understand
what it is you want to do. Is all you're asking for to have that
dialog worded differently? It's not THAT different from what
SSH pops up.

-Ekr




-- 
[Eric Rescorla   [EMAIL PROTECTED]
http://www.rtfm.com/



Re: Simple SSL/TLS - Some Questions

2003-10-06 Thread Eric Rescorla
Jill Ramonsky [EMAIL PROTECTED] writes:

 Eric raised some points which I should address. First, he asked me
 You have read the RFC, right?. Well I guess I should be honest here
 and say no, I hadn't done that yet. Maybe that's where I went wrong,
 and would have asked fewer dumb questions if I had. But rest assured
 everyone, I will digest it thoroughly before trying to implement it!

You'll definitely have to. I think that SSL and TLS is pretty
thorough as protocol books go, but it's not designed to
let you implement the protocol without reading the RFC.


 He also asked I'm trying to figure out why you care about this. The
 defined algorithms are good enough for almost all purposes., and
 Don't you want to be able to communicate with standard TLS
 implementations? If so, the kind of stuff you seem to want to do will
 in often break that.. To answer the first question, I have to state
 my absolute and sincere belief that if Alice, Bob, Carol, Dave, etc..,
 wish to communicate with each other privately, then it is their business
 AND NO-ONE ELSE'S what choice of algorithm(s) they use, etc.. This
 leads inevitably to the conclusion that if a standards body forbids
 this, then the standards body will have to be circumvented. This of
 course leads to the second question, (Don't you want to be able to
 communicate with standard TLS implementations?). The answer is
 obvious. /Of course/ one should be able to communicate with standard
 TLS implementations, otherwise the toolkit would be worthless. And of
 course, communicating with other implementations /does /mean strictly
 obeying all the standards. These two positions are not, however,
 mutually exclusive, because what I am putting together is a /toolkit/,
 not an /application/. Application programmers will be able to use the
 toolkit to build standards-compliant applications if they want that,
 or anarchistic applications if they want that. (Of course, anarchistic
 applications will not interoperate with the rest of the world, but
 that's the price you pay for choosing that option, and it's what I
 mean by the phrase private use by mutually consenting parties).

Uh, this is all sounding very non-simple. Since algorithm negotiation
is one of the things that people generally cite as making TLS too
complicated (not that I agree) I can't see why you'd want to make
it more complicated in this way. Moreover, I would think that
part of making something for the masses would be making appropriate
design decisions so that people can't shoot themselves in the
foot. If people are competent to make those decisions then they
should have no trouble figuring out how to use OpenSSL.


 GnuTLS (obviously only suitable if it ends up with a Gnu license)

GnuTLS already exists.


 Pretty Good TLS (I stole the idea from PGP obviously, but if this is
 to be SSL for the masses then it's not entirely inappropriate)
 
 ...
 Anyway, all suggestions welcome.
 
 (3) MULTIPLY SIGNED CERTIFICATES
 
 A technical question now. (I did look at RFC2246 before asking this,
 but didn't find the answer there). In GPG / PGP, one can have multiply
 signed certificates. It's not called a certificate in GPG, it's
 called a signed key, but the principle is the same. Alice can get her
 key signed by both Carol and Dave, which has the intended meaning that
 both Carol and Dave vouch for the authenticity of Alice's key. Thus,
 if Bob wishes to send to Alice, he can do so provided he trusts
 /either/ Carol /or/ Dave.
 
 
 Can you do this with X.509 certificates? I know it would be hideously
 unusual (not to mention expensive) to get a single certificate signed
 by both Verisign and Thawte, but can it be done? Is it possible? Is
 it allowed?
This is a PKIX issue. Check out RFC 3280. Anyway, the answer
is no. A certificate has one signature. The X.509 way to have this
is to have multiple certificates issued for a given DN/key pair.

-Ekr

-- 
[Eric Rescorla   [EMAIL PROTECTED]
http://www.rtfm.com/



Re: Simple SSL/TLS - Some Questions

2003-10-06 Thread Eric Rescorla
Florian Weimer [EMAIL PROTECTED] writes:
 Jill Ramonsky wrote:
  My question is, how much of a problem is this for the embedded market?
 
 Have you looked at GNU Pth?  It's a non-preemptive threading package
 which should be reasonably portable.
 
 I don't know the TLS/ASN.1 formats by heart, but maybe it's possible to
 receive the complete blob (possibly involving I/O multiplexing) without
 parsing it?  IOW, the parser starts only after the communication layer
 has finished transmitting the message.

The way that TLS works is that you can identify record size
by the record header (first 5 octets). Only when you have
a complete record in hand can you start to parse.
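A minimal sketch of that framing rule (field layout as just described; the example record bytes are made up):

```python
import struct

# A TLS record header is 5 octets: content type (1), protocol
# version (2), and payload length (2, big-endian).  Reading those
# 5 octets tells you how many more bytes to buffer before parsing
# of the record body can start.
def record_body_length(header: bytes) -> int:
    content_type, version, length = struct.unpack("!BHH", header[:5])
    return length

# Example: a handshake record (type 22), version 3.1, 70-byte body.
hdr = bytes([22, 0x03, 0x01, 0x00, 0x46])
print(record_body_length(hdr))   # 70
```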

-Ekr


-- 
[Eric Rescorla   [EMAIL PROTECTED]
http://www.rtfm.com/



Re: Simple SSL/TLS - Some Questions

2003-10-03 Thread Eric Rescorla
 answer. Currently there is a draft on TLS compression.

 THE CERTIFICATE
 
 Can Alice and Bob each create their own certificates? With (for
 example) Alice's key signed by Bob, and Bob's key signed by Alice, as
 is often done in GPG? Who counts as the Issuer in this case? How can
 Alice (or a piece of software working on Alice's behalf) construct an
 X.500 Distinguished Name to describe herself /and be absolutely sure
 that it is globally unique/? On page 12 of Eric's book it explains
 that a DN is a sequence of RDNs, each of which only needs to be
 locally unique, so the whole sequence becomes globally unique. That's
 all very well, but it's still a global namespace overall, so who
 controls it? Let me be clear that Alice and Bob have no intention to
 give even a single penny to Verisign or any other entity, just so that
 they can talk to each other in private.
TLS is basically agnostic on certificate validation and construction.
It references PKIX but in practice you can do whatever you want.

I'm a little puzzled by some of these questions:
(1) Don't you want to be able to communicate with standard TLS
implementations? If so, the kind of stuff you seem to want
to do will in often break that.
(2) I thought your goal was simplicity. All these options for exotic
mechanisms will make things less simple. 

-Ekr

-- 
[Eric Rescorla   [EMAIL PROTECTED]
http://www.rtfm.com/



Re: DH with shared secret

2003-10-03 Thread Eric Rescorla
Jack Lloyd [EMAIL PROTECTED] writes:

 This was just something that popped into my head a while back, and I was
 wondering if this works like I think it does. And who came up with it
 before me, because it's was too obvious. It's just that I've never heard of
 something alone these lines before.
 
 Basically, you share some secret with someone else (call it S).  Then you
 do a standard issue DH exchange, but instead of the shared key being
 g^(xy), it's g^(xyS)
 
 My impression is that, unless you know S, you can't do a successful MITM
 attack on the exchange. Additionally, AFAICT, it provides PFS, since if
 someone later recovers S, there's still that nasty DH exchange to deal 
 with. Of course after S is known MITM becomes possible.
The problem with this protocol is that a single MITM allows 
a dictionary attack. There are better ways to do this.

Keywords: EKE, SRP, SPEKE
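For concreteness, a toy sketch of the exchange Jack describes (made-up small parameters; a real deployment would need a large group and, per the keywords above, a proper PAKE):

```python
# Toy parameters -- illustration only.  With a low-entropy S, a MITM
# can run the exchange once and then test candidate values of S
# offline, which is the dictionary attack noted above.
p, g = 2087, 5        # tiny group, demonstration only
S = 1234              # the pre-shared secret
x, y = 77, 91         # Alice's and Bob's ephemeral exponents

A = pow(g, x, p)      # Alice -> Bob
B = pow(g, y, p)      # Bob -> Alice

k_alice = pow(B, x * S, p)    # (g^y)^(xS) = g^(xyS)
k_bob   = pow(A, y * S, p)    # (g^x)^(yS) = g^(xyS)
assert k_alice == k_bob       # both sides derive g^(xyS)
```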

-Ekr

-- 
[Eric Rescorla   [EMAIL PROTECTED]
http://www.rtfm.com/



Re: Monoculture

2003-10-01 Thread Eric Rescorla
Don Davis [EMAIL PROTECTED] writes:

 EKR writes:
  I'm trying to figure out why you want to invent a new authentication
  protocol rather than just going back to the literature ...
 
 there's another rationale my clients often give for
 wanting a new security system, instead of the off-
 the-shelf standbys:  IPSec, SSL, Kerberos, and the
 XML security specs are seen as too heavyweight for
 some applications.  the developer doesn't want to
 shoehorn these systems' bulk and extra flexibility
 into their applications, because most applications
 don't need most of the flexibility offered by these
 systems.

I hear this a lot, but I think that Perry nailed it earlier. SSL, for
instance, is about as simple as we know how to make a protocol that
does what it does. The two things that are generally cited as being
sources of complexity are:

(1) Negotiation.
(2) Certificates.

Negotiation doesn't really add that much protocol complexity,
and certificates are kind of the price of admission if you want
third party authentication.


 some shops experiment with the idea of using only
 part of OpenSSL, but stripping unused stuff out of
 each new release of OpenSSL is a maintenance hassle.
But here you're talking about something different, which is
OpenSSL. Most of the OpenSSL complexity isn't actually in 
SSL.

The way I see it, there are basically four options:
(1) Use OpenSSL (or whatever) as-is.
(2) Strip down your toolkit but keep using SSL.
(3) Write your own toolkit that implements a stripped down subset
of SSL (e.g. self-signed certs or anonymous DH).
(4) Design your own protocol and then implement it.

Since SSL without certificates is about as simple as a stream
security protocol can be, I don't see that (4) holds much of
an advantage over (3)

-Ekr

-- 
[Eric Rescorla   [EMAIL PROTECTED]
http://www.rtfm.com/



Re: how simple is SSL? (Re: Monoculture)

2003-10-01 Thread Eric Rescorla
Adam Back [EMAIL PROTECTED] writes:

 On Wed, Oct 01, 2003 at 08:53:39AM -0700, Eric Rescorla wrote:
   there's another rationale my clients often give for
   wanting a new security system [existing protocols] too heavyweight for
   some applications.
  
  I hear this a lot, but I think that Perry nailed it earlier. SSL, for
  instance, is about as simple as we know how to make a protocol that
  does what it does. The two things that are generally cited as being
  sources of complexity are:
  
  (1) Negotiation.
 
  Negotiation doesn't really add that much protocol complexity,
 
 eh well _now_ we can say that negotiation isn't a problem, but I don't
 think we can say it doesn't add complexity: but in the process of
 getting to SSLv3 we had un-MACed and hence MITM tamperable
 ciphersuites preferences (v1), and then version roll-back attack (v2).
Right, but that's a DESIGN cost that we've already paid. 
It doesn't add significant implementation cost. As in check
out any SSL implementation.


  (2) Certificates.
 
  and certificates are kind of the price of admission if you want
  third party authentication.
 
 Maybe but X.509 certificates, ASN.1 and X.500 naming, ASN.1 string
 types ambiguities inherited from PKIX specs are hardly what one could
 reasonably call simple.  There was no reason SSL couldn't have used
 for example SSH key formats or something that is simple.  If one reads
 the SSL rfcs it's relatively clear what the formats are; the state
 stuff is a little funky, but ok, and then there's a big call out to a
 for-pay ITU standard which references half a dozen other for-pay ITU
 standards.  Hardly compatible with IETF doctrines on open standards
 you would think (though this is a side-track).
 
  Since SSL without certificates is about as simple as a stream
  security protocol can be
 
 I don't think I agree with this assertion.  It may be relatively
 simple if you want X.509 compatibility, and if you want ability to
 negotiate ciphers.

I said WITHOUT certificates.

Take your SSL implementation and code it up to use anonymous
DH only. There's not a lot of complexity to remove at that point.

-Ekr


-- 
[Eric Rescorla   [EMAIL PROTECTED]
http://www.rtfm.com/



Re: Monoculture

2003-10-01 Thread Eric Rescorla
Don Davis [EMAIL PROTECTED] writes:

 eric wrote:
  The way I see it, there are basically four options:
  (1) Use OpenSSL (or whatever) as-is.
  (2) Strip down your toolkit but keep using SSL.
  (3) Write your own toolkit that implements a
  stripped down subset of SSL (e.g. self-signed
  certs or anonymous DH).
  (4) Design your own protocol and then implement it.
 
  Since SSL without certificates is about as simple
  as a stream security protocol can be, I don't see
  that (4) holds much of an advantage over (3)
 
 i agree, except that simplifying the SSL protocol
 will be a daunting task for a non-specialist.  when
 a developer is faced with reading & understanding
 the intricacy of the SSL spec, he'll naturally be
 tempted to start over.  this doesn't exculpate the
 developer for biting off more than he could chew,
 but it's unfair to claim that his only motivation
 was NIH or some other sheer stupidity.
I disagree. If someone doesn't understand enough about SSL
to understand where to simplify, they shouldn't even consider
designing a new protocol.

 btw, i also agree that when a developer decides to
 design a new protocol, he should study the literature
 about the design & analysis of such protocols.  but
 at the same time, we should recognize that there's a
 wake-up call for us in these recurrent requests for
 our review of seemingly-superfluous, obviously-broken
 new protocols.  such developers evidently want and
 need a fifth option, something like:
 
(5) use SSSL: a truly lightweight variant of
SSL, well-analyzed and fully standardized,
which trades away flexibility in favor of
small code size  ease of configuration.
 
 arguably, this is as much an opportunity as a wake-up
 call.

I'm not buying this, especially in the dimension of code
size. I don't see any evidence that the people complaining
about how big SSL is are basing their opinion on anything
more than the size of OpenSSL. I've seen SSL implementations
in well under 100k.

-Ekr




Re: anonymous DH MITM

2003-10-01 Thread Eric Rescorla
M Taylor [EMAIL PROTECTED] writes:

 Stupid question I'm sure, but does TLS's anonymous DH protect against
 man-in-the-middle attacks? If so, how? I cannot figure out how it would,
 and it would seem TLS would be wide open to abuse without MITM protection so
 I cannot imagine it would be acceptable practice without some form of
 security.

It doesn't protect against MITM. 

You could, however, use a static DH key and then client could
cache it as with SSH.
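The SSH-style caching idea amounts to trust-on-first-use. A minimal sketch (`check_host_key` and `known_hosts` are hypothetical helpers, not a TLS API):

```python
# Trust-on-first-use cache for a server's static DH public key.
known_hosts: dict[str, bytes] = {}   # host -> key fingerprint seen first

def check_host_key(host: str, fingerprint: bytes) -> bool:
    """Accept and remember a key on first contact; afterwards,
    reject any key that differs from the cached one (possible MITM)."""
    cached = known_hosts.setdefault(host, fingerprint)
    return cached == fingerprint

print(check_host_key("example.net", b"aa"))  # True: first use, cached
print(check_host_key("example.net", b"aa"))  # True: matches cache
print(check_host_key("example.net", b"bb"))  # False: key changed
```

This gives SSH's security model: no protection on the very first connection, but any later substitution of the key is detected.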

-Ekr


-- 
[Eric Rescorla   [EMAIL PROTECTED]
http://www.rtfm.com/



Re: New authentication protocol, was Re: Tinc's response to Linux's answer to MS-PPTP

2003-09-30 Thread Eric Rescorla
Guus Sliepen [EMAIL PROTECTED] writes:

 On Mon, Sep 29, 2003 at 09:35:56AM -0700, Eric Rescorla wrote:
 
  Was there any technical reason why the existing cryptographic
  skeletons wouldn't have been just as good?
 
 Well, all existing authentication schemes do what they are supposed to do,
 that's not the problem. We just want one that is as simple as possible
 (so we can understand it better and implement it more easily), and which
 is efficient (both speed and bandwidth).

In what way is your protocol either simpler or more efficient
than, say, JFK or the TLS skeleton?


   And I just ripped TLS from the list.
  
  Define ripped. This certainly is not the same as TLS.
 
 Used as a skeleton. Don't ask me to define that as well.

It doesn't appear to me that you've used the TLS skeleton.
The protocol you described really isn't much more like 
TLS than it is like STS or JFK. On the other hand,
all these back and forth DH-based protocols look more
or less the same, except for some important details.


   That's not the same as doing a thorough analysis, which can take
  years, as Steve Bellovin has pointed out about Needham-Schroeder.
 
 True, but we can learn even from the bullet holes.

Again, it's important to distinguish between learning experiences
and deployed protocols. I agree that it's worthwhile to try
to do new protocols and let other people analyze them as
a learning experience. But that's different from putting
a not fully analyzed protocol into a deployed system.


  Look, there's nothing wrong with trying to invent new protocols,
  especially as a learning experience. What I'm trying to figure
  out is why you would put them in a piece of software rather 
  than using one that has undergone substantial analysis unless
  your new protocol has some actual advantages. Does it?
 
 We're trying to find that out. If we figure out it doesn't, we'll use
 one of the standard protocols.

Well, I'd start by doing a back of the envelope performance
analysis. If that doesn't show that your approach is better,
then I'm not sure why you would wish to pursue it as a
deployed solution.

-Ekr

-- 
[Eric Rescorla   [EMAIL PROTECTED]
http://www.rtfm.com/



Re: New authentication protocol, was Re: Tinc's response to 'Linux's answer to MS-PPTP'

2003-09-30 Thread Eric Rescorla
Bill Stewart [EMAIL PROTECTED] writes:

  If we use RSA encryption, then both sides know their message can only
  be received by the intended recipient. If we use RSA signing, then
  both sides know the message they receive can only come from the assumed
  sender. For the purpose of tinc's authentication protocol, I don't see
  the difference, but...
 
   Now, the attacker chooses 0 as his DH public. This makes ZZ always
   equal to zero, no matter what the peer's DH key is.
 
 You need to validate the DH keyparts even if you're
 corresponding with the person you thought you were.
 This is true whether you're using signatures, encryption, or neither.

Not necessarily.

If you're using fully ephemeral DH keys and a properly designed
group, then you shouldn't need to validate the other public share.
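For cases where validation is wanted (e.g. static or reused keys, as Bill suggests), a typical check over a safe-prime group looks roughly like this. This is a sketch: the function name, parameters, and subgroup test are assumptions in the spirit of standard DH public-key validation (cf. RFC 2631), not anything from this thread.

```python
def valid_dh_public(y: int, p: int, q: int) -> bool:
    """Reject degenerate DH public values (0, 1, p-1 force a
    predictable shared secret); for a safe prime p = 2q+1 also
    confirm y lies in the order-q subgroup."""
    if not (1 < y < p - 1):
        return False
    return pow(y, q, p) == 1

# Toy safe prime: p = 23 = 2*11 + 1, so q = 11.
print(valid_dh_public(2, 23, 11))   # True: 2 is in the order-11 subgroup
print(valid_dh_public(0, 23, 11))   # False: degenerate value
```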

-Ekr


-- 
[Eric Rescorla   [EMAIL PROTECTED]
http://www.rtfm.com/



Re: New authentication protocol, was Re: Tinc's response to Linux's answer to MS-PPTP

2003-09-30 Thread Eric Rescorla
Guus Sliepen [EMAIL PROTECTED] writes:

 On Mon, Sep 29, 2003 at 02:07:04PM +0200, Guus Sliepen wrote:
 
  Step 2:
  Exchange METAKEY messages. The METAKEY message contains the public part
  of a key used in a Diffie-Hellman key exchange.  This message is
  encrypted using RSA with OAEP padding, using the public key of the
  intended recipient.
 
 After comments and reading up on suggested key exchange schemes, I think
 this step should be changed to send the Diffie-Hellman public key in
 plaintext, along with a nonce (large random number) to prevent replays
 and the effects of bad DH public keys. Instead of encrypting both with
 RSA, they should instead be signed using the private key of the sender
 (the DH public key and nonce wouldn't fit in a single RSA message
 anyway). 
 
 IKEv2 (as described in draft-ietf-ipsec-ikev2-10.txt) does almost the
 same. However, IKEv2 does not send the signature directly, but first
 computes the shared key, and uses that to encrypt (using a symmetric
 cipher) the signature. I do not see why they do it that way; the
 signature has to be checked anyway, if it can be done before computing
 the shared key it saves CPU time. Encrypting it does not prevent a man
 in the middle from reading or altering it, since a MITM can first
 exchange his own DH public key with both sides (and hence he can know
 the shared keys). So actually, I don't see the point in encrypting
 message 3 and 4 as described at page 8 of that draft at all.
In order to hide the identities of the communicating peers.

Personally, I don't have much use for identity protection,
but this is the reason as I understand it.

-Ekr

-- 
[Eric Rescorla   [EMAIL PROTECTED]
http://www.rtfm.com/



Re: New authentication protocol, was Re: Tinc's response to Linux's answer to MS-PPTP

2003-09-29 Thread Eric Rescorla
Guus Sliepen [EMAIL PROTECTED] writes:
 On Sat, Sep 27, 2003 at 07:58:14PM +0100, M Taylor wrote:
 TLS makes a distinction between a client and a server. If possible I
 wish to avoid making that distinction. If possible, I would also like to
 continue to be able to use an RSA public/private keypair. This made me
 *sketch* the following _authentication_ protocol:
I'm trying to figure out why you want to invent a new authentication
protocol rather than just going back to the literature and ripping
off one of the many skeletons that already exist (STS, JFK, IKE,
SKEME, SIGMA, etc.). That would save people from the trouble
of having to analyze the details of your new protocol.


 ==
 Step 1:
 Exchange ID messages. An ID message contains the name of the tinc daemon
 which sends it, the protocol version it uses, and various options (like
 which cipher and digest algorithm it wants to use).
 
 Step 2:
 Exchange METAKEY messages. The METAKEY message contains the public part
 of a key used in a Diffie-Hellman key exchange.  This message is
 encrypted using RSA with OAEP padding, using the public key of the
 intended recipient.

 After this step, both sides use Diffie-Hellman to compute the shared
 secret key. From this master key, keys and IVs for symmetric ciphers and
 digest algorithms will be derived, as well as verification data. From
 this point on all messages will be encrypted.
Why are you using RSA encryption to authenticate your DH rather
than using RSA signature?

Depending on *exactly* how you do things, there are MITM attacks:

Consider the following protocol:

M1={DHx}RSAy        ->
                    <-  M2={DHy}RSAx

                    ZZ = DH shared key

HMAC(ZZ,M1,M2)      ->
                    <-  HMAC(ZZ,M2,M1) [Reverse order to prevent replay]


Now, the attacker chooses 0 as his DH public. This makes ZZ always
equal to zero, no matter what the peer's DH key is. He can now forge
the rest of the exchange and intercept the connection.
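A tiny-number demonstration of why the forged zero public value works (illustrative parameters only; a forged value of 1 is degenerate in the same way):

```python
# If the attacker's "public value" Y is 0, the peer's computed
# secret ZZ = Y^x mod p is 0 no matter what exponent x the peer chose.
p = 23                              # toy prime, not a real group
for x in range(1, p - 1):           # whatever secret the peer picks...
    assert pow(0, x, p) == 0        # ...a forged Y = 0 forces ZZ = 0
    assert pow(1, x, p) == 1        # ...and a forged Y = 1 forces ZZ = 1
print("ZZ is forced for every possible exponent")
```

Since ZZ is now known to the attacker, he can compute the HMACs himself and complete both halves of the exchange.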

-Ekr

-- 
[Eric Rescorla   [EMAIL PROTECTED]
http://www.rtfm.com/



Re: New authentication protocol, was Re: Tinc's response to Linux's answer to MS-PPTP

2003-09-29 Thread Eric Rescorla
Guus Sliepen [EMAIL PROTECTED] writes:

 On Mon, Sep 29, 2003 at 07:53:29AM -0700, Eric Rescorla wrote:
 
  I'm trying to figure out why you want to invent a new authentication
  protocol rather than just going back to the literature and ripping
  off one of the many skeletons that already exist (
 
 Several reasons. Because it's fun, because we learn more from doing it
 ourselves (we learn from our mistakes too), because we want something
 that fits our needs. We could've just grabbed one from the shelf, but
 then we could also have grabbed IPsec or PPP-over-SSH from the shelf,
 instead of writing our own VPN daemon. However, we wanted something
 different.

And I'm trying to understand why. This answer sounds a lot
like NIH.

Was there any technical reason why the existing cryptographic
skeletons wouldn't have been just as good?


  STS,
 
 If you mean station-to-station protocol, then actually that is pretty
 much what we are doing now, except for encrypting instead of signing
 using RSA.

But that's not a harmless change, which is the point of the potential
attack I just described.


  JFK, IKE, SKEME, SIGMA, etc.).
 
 And I just ripped TLS from the list.

Define ripped. This certainly is not the same as TLS.


  That would save people from the trouble of having to analyze the
  details of your new protocol.
 
 Several people on this list have already demonstrated that they are very
 willing to analyse new protocols.

Actually, no. People are willing to take a quick look and
then shoot bullets at your protocol. That's not the same as
doing a thorough analysis, which can take years, as Steve
Bellovin has pointed out about Needham-Schroeder.


  Why are you using RSA encryption to authenticate your DH rather
  than using RSA signature?
 
 If we use RSA encryption, then both sides know their message can only be
  received by the intended recipient. If we use RSA signing, then both
 sides know the message they receive can only come from the assumed
 sender. For the purpose of tinc's authentication protocol, I don't see
 the difference, but...

There's no difference if it's done correctly. If it's not done
correctly...


  Now, the attacker chooses 0 as his DH public. This makes ZZ always
  equal to zero, no matter what the peer's DH key is.
 
 I think you mean it is equal to 1 (X^0 is always 1). This is the first
 time I've heard of this, I've never thought of this myself. In that case
 I see the point of signing instead of encrypting.

Except that the way you compute DH is to do Y^X rather than 
X^Y. 


Look, there's nothing wrong with trying to invent new protocols,
especially as a learning experience. What I'm trying to figure
out is why you would put them in a piece of software rather 
than using one that has undergone substantial analysis unless
your new protocol has some actual advantages. Does it?

-Ekr


-- 
[Eric Rescorla   [EMAIL PROTECTED]
http://www.rtfm.com/

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Tinc's response to Linux's answer to MS-PPTP

2003-09-28 Thread Eric Rescorla
M Taylor [EMAIL PROTECTED] writes:
 On Fri, Sep 26, 2003 at 06:26:16PM -0700, Joseph Ashwood wrote:
   Both SSL and SSH have had their security
   problems . . , as perfect as Peter Gutmann would let us believe.
  They may not be perfect but in neither case can Mallet do as much damage as
  easily, even the recent break in OpenSSH did not allow a compromise as big
  as even the smallest of the problems briefly explored in tinc.
 
 Oh, and they fixed their flaws. SSHv1 is not recommended for use at all,
 and most systems use SSHv2 now which is based upon a draft IETF standard. 
 SSL went through SSLv1, SSLv2, SSLv3, TLSv1.0, and TLSv1.1 is a draft IETF
 standard.

Nitpicking alert:
Draft Standard is the technical term for the second tier of
IETF standardization. (Proposed, Draft, Full). The term for
something that has not yet been approved and given an RFC #
is Internet Draft. SSHv2 and TLSv1.1 are Internet Drafts.

-Ekr
 
-- 
[Eric Rescorla   [EMAIL PROTECTED]
http://www.rtfm.com/



Re: Is cryptography where security took the wrong branch?

2003-09-07 Thread Eric Rescorla
Ian Grigg [EMAIL PROTECTED] writes:

 Eric Rescorla wrote:
  
  Ian Grigg [EMAIL PROTECTED] writes:
  
   Eric Rescorla wrote:
   ...
 The other thing to be aware of is that ecommerce itself
 is being stinted badly by the server and browser limits.
 There's little doubt that because servers and browsers
 made poorly contrived decisions on certificates, they
 increased the overall risks to the net by reducing the
 deployment, and probably reduced the revenue flow for
 certificate providers by a factor of 2-5.
I doubt that. Do you have any data to support this claim?
  
   Sure.  SSH.
  That's not data, it's an anecdote--and not a very supportive one
  at that. As far as I know, there isn't actually more total
  SSH deployment than SSL, so you've got to do some kind of
  adjustment for the total potential size of the market, which
  is a notoriously tricky calculation.
 
 It's more than an anecdote.  If I quote from your
 slides, SSH has achieved an almost total domination
 of where it can be deployed.

No. There are lots of other things you CAN do with SSH
that people don't do that often. 


  Do you have any actual
  data or did you just pull 2-5 out of the air?
 
 
 There is a middle ground between data and the air,
 which is analysis. 

Data precedes analysis.

 It's nothing to do with whether the ivory tower
 brigade does some econowhatsists on their models
 and then speculates as to what this all means.
 
 Have a look at the data that is available [2].  You
 will see elasticity.  Have a look at the history
 of a little company called Thawte.  There, you will
 see how elasticity contributed to several hundred
 millions of buyout money.

Nope.

Elasticity is about how much consumption changes when price
changes, not about what people who were already going to buy
choose to buy.

Look at it this way:
If Pepsi cut their price by 50%, it might affect their
market share but would have only a very small amount of
effect on how much fluid people consume overall. The 
market for beverages is competitive but not particularly
elastic. That could easily be happening here.

Ian, it's a major econometrics project to determine how 
elastic a given good is. To imagine that you can do
it with a little handwaving is simply naive.

-Ekr

-- 
[Eric Rescorla   [EMAIL PROTECTED]
http://www.rtfm.com/



Re: Is cryptography where security took the wrong branch?

2003-09-07 Thread Eric Rescorla
James A. Donald [EMAIL PROTECTED] writes:

 --
 On 7 Sep 2003 at 9:48, Eric Rescorla wrote:
  It seems to me that your issue is with the authentication 
  model enforced by browsers in the HTTPS context, not with SSL 
  proper.
 
 To the extent that trust information is centrally handled, as 
 it is handled by browsers, it will tend to be applied in ways 
 that benefit the state and the central authority.
Yeah, I'd noticed that being able to buy stuff at Amazon
really didn't benefit me at all.

-Ekr





Re: SSL's threat model

2003-09-06 Thread Eric Rescorla
Ian Grigg [EMAIL PROTECTED] writes:
 Does anyone have any pointers to the SSL threat model?
 
 I have Eric Rescorla's book and slides talking about the
 Internet threat model.
 
 The TLS RFC (http://www.faqs.org/rfcs/rfc2246.html) says
 nothing about threat models that I found.
Yeah.  You can kind of infer it from the security analysis at
the end, but I agree it's not optimal. It's important to
remember that the guy who originally designed SSL (Kipp Hickman)
wasn't a security guy and doesn't seem to really have had
a threat model in mind.
 
When I write about it, I generally try to summarize what I think
the implicit threat model is based on my memory of the zeitgeist
at the time and the characteristics of SSL.

-Ekr

-- 
[Eric Rescorla   [EMAIL PROTECTED]
http://www.rtfm.com/



Re: Is cryptography where security took the wrong branch?

2003-09-03 Thread Eric Rescorla
Ian Grigg [EMAIL PROTECTED] writes:
 Eric Rescorla wrote:
  Ian Grigg [EMAIL PROTECTED] writes:
  I think it's pretty
  inarguable that SSL is a big success.
 
 One thing that has been on my mind lately is how
 to define success of a crypto protocol.  I.e.,
 how to take your thoughts, and my thoughts, which
 differ, and bring the two together.
 
 There appear to be a number of metrics that have
 been suggested:
 
a.  nunber of design wins
b.  penetration into equivalent unprotected
market
c.  number of actual attacks defeated
d.  subjective good at the application level
e.  worthless measures such as deployed copies,
amount of traffic protected
 
 All of these have their weaknesses, of course.
 It may be that a composite measure is required
 to define success.  I'm sure there are more
 measures.
 
 a. The only thing that seems to be clearly a win
 for SSL is the number of design wins - quite
 high.  That is, it would appear that when someone
 is doing a new channel security application, the
 starting point is to consider SSL.
 
 b. we seem to be agreeing on 1% penetration of
 the market, at least by server measurement (see
 my other post where I upped that to 1.24% in the
 most recent figures).
This really depends on your definition of market.
SSL was designed to protect credit card transactions, period.
For that, the market penetration is near 100%.

 d.  subjective good.  For HTTPS, again, it's a
 decidedly mixed score card.  When I go shopping
 at Amazon, it makes little difference to me, because
 the loss of info doesn't effect me as much as it
 might - $50 limit on liability.
That $50 limit is a funny thing.

I look at it this way:
You don't PERSONALLY eat the cost of fraud on your own
card but you eat the cost of fraud on other people's cards.
Thus, as in many situations, it's in your interest for
everyone else to practice good hygiene.

In this particular case, the issuers were *very* wary
of providing credit card transactions over the Internet
without some sort of encryption. So, SSL is what enables
you to do e-commerce on the net. That seems like a large
subjective good.

  Actually, I think that SSL has the right model for the application
  it's intended for. SSH has the right model for the application it
  was intended for. Horses for courses.
 
 Plenty of room for future discussion then :-)
 
 (I sense your pain though - I see from the SHTTP
 experiences, you've been through the mill. 
Vis a vis SHTTP, I'm not sure if that was the right design
or SSL was. However, they had relatively similar threat models.

 I'm almost convinced that WEP is a failure, but
 I think it retains some residual value.
I agree. After all, I occasionally come upon a network I'd
like to use and WEP stops me cause I'm too lazy. On the other
hand, MAC restrictions would have done just as well for that.

-Ekr

-- 
[Eric Rescorla   [EMAIL PROTECTED]
http://www.rtfm.com/



Re: SSL

2003-07-10 Thread Eric Rescorla
Ian Grigg [EMAIL PROTECTED] writes:

 [EMAIL PROTECTED] wrote:
 
  Instead, I have a
  different question: Where can I learn about SSL?
 
 Most people seem to think the RFC is unreadable,
 so ...
 
  As in, could someone reccommend a good book, or online tutorial, or
  something, somewhere, that explains it all from pretty much first
  principles, and leaves you knowing enough at the end to be able to make
  sensible use of OpenSSL and similar? I don't want a For Dummies type book
  - as I said, I'm reasonably competent - but I would really like access to a
  helpful tutorial. I want to learn. So what's the best thing to go for?
 
 I am reading Eric Rescorla's book at the moment,
 and if you are serious about SSL, it is worth the
 price to get the coverage.  It's well written,
 and relatively easy to read for a technical book.

 It costs a steep $50.  It's not a For Dummies.
 You have to be comfortable with all sorts of things
 already.
Thanks for the kind words.

Actually, the price should be $40 US. That's the price at Amazon.

 It's giving me the intellectual capital to attack
 the engineering failures therein and surrounding
 the deployment of same.  Maybe Eric will offer me
 $100 for my annotated copy just to shut me the
 f**k up ;-)   I've so far discovered 
No payoffs, but I'd love to know what you've discovered :)

-Ekr

-- 
[Eric Rescorla   [EMAIL PROTECTED]
http://www.rtfm.com/



Re: replay integrity

2003-07-09 Thread Eric Rescorla
tom st denis [EMAIL PROTECTED] writes:
 --- Eric Rescorla [EMAIL PROTECTED] wrote:
  This is all fine, but irrelevant to my point, which is that
  if you're designing a channel security protocol it should
  provide channel level integrity and anti-replay unless there's
  some really good reason not to.
 
 For the love of god the horse is dead.  Let it be!
 
 I've pulled the code [and the rest of the site].  I admitted you were
 right, I admitted it had unintentional flaws.  

 What more do you want?  

Tom, 

I'm sorry you're taking this personally, since it's not really
about you. I take Ian to be making a generic argument
that there's not a need for these features in a channel
security protocol. I've certainly hear this argument
before and I think it's worth discussing--even though
I think he's wrong.

-Ekr



Re: LibTomNet [v0.01]

2003-07-08 Thread Eric Rescorla
tom st denis [EMAIL PROTECTED] writes:
 The lib uses RSA for key exchange [and the client may scrutinize the
 key before making the connection via a callback], AES-128-CTR [two
 different keys for each direction] and SHA1-HMAC.  The niche of the lib
 is that my library compiles to a mere 10KB.  Add SHA1, AES, HMAC, RSA
 and LTM and you get 60KB demo apps.  Ideally you should build LTC
 without mpi.o and link against both LTC and LTM.
 
 The lib does not implement any other protocol like SSH/SSL/TLS [etc].

 I have to mention this in good conscience.  I ==STRONGLY== DISCOURAGE
 people from using this library in fielded systems.  I've only been
 working on it for a day and I wouldn't be surprised if there were
 numerous bugs or points of attack [I've fixed a dozen since last
 night].
[Standard rant follows... :)]
I'm trying to figure out why this is a good idea even in principle.

I've seen 100k SSL implementations and that included the ASN.1
processing for certs. I would imagine that one could do a compliant
SSL implementation that used fixed RSA keys in roughly the same
code size as your stuff.


-Ekr

-- 
[Eric Rescorla   [EMAIL PROTECTED]
http://www.rtfm.com/



Re: LibTomNet [v0.01]

2003-07-08 Thread Eric Rescorla
tom st denis [EMAIL PROTECTED] writes:

 --- Eric Rescorla [EMAIL PROTECTED] wrote:
  tom st denis [EMAIL PROTECTED] writes:
   Two weeks ago I sat down to learn how to code my own SSL lib [key on
   being small].  Suffice it to say after reading the 67 page RFC for
   SSL 3.0 I have no clue whatsoever how to implement SSL.  
  Funny, none of the 30 or so other people who have done SSL
  implementations had any problem.
 
 Arrg whatever. I really don't give a hoot what you think.

 What I don't get is you guys who are presumably a smart bunch can't
 figure out that 
 
 I 
 
 AM
 
 NOT
 
 TRYING
 
 TO
 
 REPLACE
 
 SSL.
 
 I'm just writing a simple library to provide secure sockets.  That's
 it, that's all.
In other words, this is just an exercise in Not Invented Here. Wonderful.

 Believe it or not, this may come as a surprise to you, but not everyone
 requires standsrd compliant protocols.
If the past 20 years of security work have taught us anything, it's
the value of standardized tools that get a lot of review so that
we can be confident that they're not totally hosed. When people go
off and invent their own stuff without good reason, that's not
good security practice. That's fine if they're just screwing around,
but when they come up with all sorts of bogus reasons why people
might want to use their homegrown stuff instead of the standard
stuff, that's not so fine.

Moreover, your original message said that you intended to use
SSL, but as you yourself admit, you don't understand it and yet
you feel comfortable holding forth about its merits compared
to your brand new protocol. Huh?

-Ekr

P.S. You claimed earlier that you didn't think RFC 2246 was clear
enough to write a compliant implementation. I was sincere in asking
what you find underspecified. It's my job to make it as complete
as possible.


-- 
[Eric Rescorla   [EMAIL PROTECTED]
http://www.rtfm.com/



Re: LibTomNet [v0.01]

2003-07-08 Thread Eric Rescorla
tom st denis [EMAIL PROTECTED] writes:

 --- Eric Rescorla [EMAIL PROTECTED] wrote:
  In other words, this is just an exercise in Not Invented Here.
  Wonderful.
 
 Oh, ok so I need your permission? 
No, you don't need my permission. You can do any fool thing you
want. It would just be nice if you were spending effort filling some
actual need, rather than reinventing the wheel.

 Who gave Netscape permission to
 write SSL?  [or whoever invented it]
Netscape. However, the situation was different then. There
was actually a market niche that SSL didn't fill. It has yet
to be established that LibTomNet is in that position.

 Generally I agree that homebrew crypto is a bad idea but I think you
 are undervaluing my knowledge in the field.  I'm not some two-bit IT
 specialist trying to make a quick buck.
You don't seem to understand the issue. It has nothing to do with
how competent you are and everything to do with the fact that
people make mistakes and so homebrew stuff is bad when you can
avoid it. Everyone I know who has worked in this field has made
a bunch of mistakes and depends on others to catch them. 

 My library *really* only has eight functions, it *really* is only 13KB
 [excluding the crypto], it *really* provides secure sockets.
And I claim that SSL implementations can be gotten down to very nearly
that size, especially if you're willing to compromise a lot of the
features, so what virtue is your library providing?

 Just because it isn't SSL doesn't mean its incapable of being secure.
No, it just means that it's never going to get the kind of security
analysis that SSL has, which means that there are probably a bunch
of undiscovered holes.

  Moreover, your original message said that you intended to use
  SSL, but as you yourself admit, you don't understand it and yet
  you feel comfortable holding forth about it's merits compared
  to your brand new protocol. Huh?
 
 Yeah, because I'm not going to sit and study 67 pages for more than a
 day to figure out how to send a key or perform key exchange.
It turns out that doing a principled job is a lot more complicated
than doing key exchange. That's one of the things that one discovers
when actually writing a full protocol rather than just whipping something
together.

 To sum up, I do agree that homebrew stuff is generally of lower quality
 than peer-reviewed standards but I think you're too easily dismissing
 all other works because they're not your own.  To that end I call you
 an elitist pig.  
Seeing as I didn't write SSL, I'm just the document editor, that
just makes you look silly.

 Heck, if you could find a security flaw in LibTomNet [v0.03] I'll buy
 you a beer.
Your protocol does not appear to have any protection against
active attacks on message sequence, including message deletion,
replay, etc.  True, the attacker can't inject *predictable* plaintext,
but he can inject garbage plaintext and have it accepted as real.

Your protocol is susceptible to truncation attack via TCP FIN forging.

Your server doesn't generate any random values as part of the handshake,
thus, leaving you open to full-session replay attack.
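The anti-replay measure Ekr alludes to can be sketched in a few lines. This is a minimal illustration, not LibTomNet's or SSL's actual record format: mixing an implicit per-record sequence number into the HMAC means a deleted, reordered, or replayed record fails verification at the receiver.

```python
import hashlib
import hmac
import struct

def mac_record(key: bytes, seq: int, payload: bytes) -> bytes:
    # MAC the payload together with a monotonically increasing sequence
    # number, as SSL/TLS does, so replayed or reordered records fail.
    return hmac.new(key, struct.pack(">Q", seq) + payload, hashlib.sha256).digest()

key = b"k" * 32
rec1_tag = mac_record(key, 0, b"hello")

# A replay of record 0 arrives when the receiver expects sequence 1:
replayed_ok = hmac.compare_digest(rec1_tag, mac_record(key, 1, b"hello"))
print(replayed_ok)  # False: the replayed record is rejected
```

The sequence number never travels on the wire; both sides count records, which is exactly why deletion is also detected: every subsequent record's MAC fails.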

-Ekr

-- 
[Eric Rescorla   [EMAIL PROTECTED]
http://www.rtfm.com/



Re: LibTomNet [v0.01]

2003-07-08 Thread Eric Rescorla
tom st denis [EMAIL PROTECTED] writes:

 --- Eric Rescorla [EMAIL PROTECTED] wrote:
 
   Heck, if you could find a security flaw in LibTomNet [v0.03] I'll
  buy
   you a beer.
  Your protocol does not use appear to have any protection against
  active attacks on message sequence, including message deletion,
  replay, etc.  True, the attacker can't inject *predictable*
  plaintext,
  but he can inject garbage plaintext and have it accepted as real.
 
 No he can't.  You need a correct HMAC for the data to be accepted. 
 This allows a replay attack which I should fix.  One beer.
 
 Ultimately, though, the plaintext won't match if you replay, so it's
 only half a bug [though a bug that must be fixed].
Uh, this is exactly what I said. If you delete messages or replay
them, they will pass through the HMAC and be decrypted 
(thus giving you unpredictable garbage) and passed to the
application layer.

  Your protocol is susceptible to truncation attack via TCP FIN
  forging.
 
 I don't even know what that is but my protocol must read an entire block
 before parsing it.
Yes, but if I forge a TCP FIN in between blocks, you can
generate a fake connection close. This is a problem if the
protocol layered over top uses connection close to indicate
end of data as (say) HTTP does. That's why SSLv3 and above
include a close_notify message in the alerts.
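The truncation problem can be made concrete with a toy receiver. This is a hypothetical sketch, not SSL's actual alert encoding: without an authenticated end-of-data marker, a forged TCP FIN that cuts the stream short is indistinguishable from a legitimate close.

```python
# Stand-in for SSL's close_notify alert (0x15 is the TLS alert record type).
CLOSE_NOTIFY = b"\x15close"

def read_stream(records):
    # Returns (data, cleanly_closed). Hitting EOF without seeing the
    # close_notify marker signals possible truncation by a forged FIN.
    data = b""
    for rec in records:
        if rec == CLOSE_NOTIFY:
            return data, True
        data += rec
    return data, False

print(read_stream([b"GET", b" /", CLOSE_NOTIFY]))  # (b'GET /', True)
print(read_stream([b"GET", b" /"]))                # (b'GET /', False): truncated
```

A protocol that uses connection close to delimit data, as HTTP/1.0 does, must treat the second case as an error rather than a short-but-valid response.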

  Your server doesn't generate any random values as part of the
  handshake,
  thus, leaving you open to full-session replay attack.
 
 Which is why people should use some authentication scheme ontop of
 this.  Note that the server has no clue who you are after making the
 connection.  This is intentional.
This doesn't always help, unfortunately.

Consider the case where you're using a replayable authentication
scheme such as passwords over your encrypted session. This is
perfectly natural and people do it with SSL all the time.  So, the
attacker captures you doing some transaction and replays it to the
server. Congratulations, you've now done it twice.

The standard procedure to prevent this (used in SSL, IKE, etc.) is for
the server to send the client a nonce in his hello message, thus
preventing client-side replay.
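The effect of the server nonce on key derivation can be sketched as follows. This is a simplified illustration, not SSL's actual PRF: because the server contributes fresh randomness to every handshake, a captured handshake replayed later derives a different session key, so the recorded traffic won't authenticate or decrypt.

```python
import hashlib
import os

def session_key(shared_secret: bytes, client_nonce: bytes, server_nonce: bytes) -> bytes:
    # Mix a fresh server nonce into key derivation; a replayed handshake
    # meets a new server nonce and therefore yields a different key.
    return hashlib.sha256(shared_secret + client_nonce + server_nonce).digest()

secret = b"s" * 32
client_nonce = os.urandom(16)

k_original = session_key(secret, client_nonce, os.urandom(16))
k_replayed = session_key(secret, client_nonce, os.urandom(16))  # same capture, new server nonce
print(k_original == k_replayed)  # False: the replayed session is useless
```

This is why the password-over-encrypted-channel pattern is safe in SSL but not in a handshake where only the client contributes randomness.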

-Ekr

-- 
[Eric Rescorla   [EMAIL PROTECTED]
http://www.rtfm.com/



Re: LibTomNet [v0.01]

2003-07-08 Thread Eric Rescorla
tom st denis [EMAIL PROTECTED] writes:

 --- Eric Rescorla [EMAIL PROTECTED] wrote:
  tom st denis [EMAIL PROTECTED] writes:
   The point I'm trying to make is that just because a fairly standard
   product exists doesn't mean diversity is a bad thing.  Yes, people
  may
   fail to create divergent products but that isn't a bad thing.  You
   learn from the faults and create a better product.  I mean by your
   logic we would all drive Model T cars since well... diversity is
  bad. 
   The model T exists!
  My logic is that if you're going to create something new, it should
  be better than what already exists. There is precious little
  evidence that libtomnet fills that bill.
 
 To you.  You know SSL inside and out.

 LibTomNet has eight functions and one data type in the API.  To a
 complete stranger that is a nice welcome change than say all the
 constants, functions, structures in SSL.
As I said before, the problem here isn't SSL. Rather, it's the way
that OpenSSL does things.  Now, it would be a real contribution for
you to write a simple wrapper for OpenSSL. I've seen people do stuff
like that, but it's generally too custom for general use.

-Ekr

-- 
[Eric Rescorla   [EMAIL PROTECTED]
http://www.rtfm.com/



Re: LibTomNet [v0.01]

2003-07-08 Thread Eric Rescorla
Ian Grigg [EMAIL PROTECTED] writes:

 Eric Rescorla wrote:
 
  My logic is that if you're going to create something new, it should
  be better than what already exists.
 
 Right.  But better is not a binary choice in real
 life.  SSL is only better if it exceeds all
 requirements when compared against a product
 that has only those requirements.
 
 One needs to look at the requirements.  Tom's
 requirements didn't include message integrity,
 if I saw correctly, because he had something
 in there at a higher layer that covered him
 there.  That's good.
That's certainly not true. He had a message integrity
construct. It just didn't include anti-replay measures.

 Does he require replay protection?  Is he worried
 about MITM?  What about authenticity?  These all
 need to be established before you can compare any
 protocol.

 The whole world doesn't want or need perfect
 channel security.  That's because some parts of
 the world have different needs.
Actually, I think this attitude is generally unproductive.

All else being equal, a protocol which provides more security
is better than a protocol which provides less. Now, all things
aren't equal, but if you can offer substantially more security
with only a modest increase in code complexity, that's generally
a good thing. Where hard tradeoffs have to be made is when
the users are inconvenienced. A little additional programming
doesn't seem like a high cost at all.

I don't find this sort of "sure, it's nowhere near as secure as
SSL, but it takes up a little less space" argument very compelling.

-Ekr

-- 
[Eric Rescorla   [EMAIL PROTECTED]
http://www.rtfm.com/



Re: An attack on paypal

2003-06-11 Thread Eric Rescorla
Sunder [EMAIL PROTECTED] writes:

 The worst trouble I've had with https is that you have no way to use host
 header names to differentiate between sites that require different SSL
 certificates.

 i.e. www.foo.com www.bar.com www.baz.com can't all live on the same IP and
 have individual ssl certs for https. :(  This is because the cert is
 exchanged before the http 1.1 layer can say I want www.bar.com 
 
 So you need to waste IP's for this.  Since the browser standards are
 already in place, it's unlikely to be possible to find a workaround, i.e. be able
 to switch to a different virtual host after you've established the ssl
 session.  :(
This is being fixed. See draft-ietf-tls-extensions-06.txt
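That draft's server_name (SNI) extension has the client name the desired virtual host during the handshake, so the server can pick the matching certificate before the TLS layer completes. A modern sketch of the server-side dispatch using Python's ssl module (the contexts here are hypothetical; a real deployment would call load_cert_chain with each host's certificate):

```python
import ssl

# One context per virtual host; each would normally load its own cert.
contexts = {
    "www.foo.com": ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER),
    "www.bar.com": ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER),
}
default_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)

def pick_context(ssl_socket, server_name, initial_context):
    # Called mid-handshake with the SNI hostname the client sent;
    # swap in the per-host context (and thus its certificate).
    ssl_socket.context = contexts.get(server_name, initial_context)

default_ctx.sni_callback = pick_context
print(sorted(contexts))  # ['www.bar.com', 'www.foo.com']
```

This is what eventually let many HTTPS sites share a single IP address, removing the wasted-IP problem Sunder describes.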

-Ekr

-- 
[Eric Rescorla   [EMAIL PROTECTED]
http://www.rtfm.com/


