Re: Against Rekeying

2010-03-26 Thread Perry E. Metzger

Also manually forwarded on behalf of Peter Gutmann. As before, if you
reply, don't credit me with the text; it is his.

From pgut001 Fri Mar 26 14:44:54 2010
To: b...@links.org, nicolas.willi...@sun.com
Subject: Re: Against Rekeying
Cc: cryptography@metzdowd.com, pe...@piermont.com, si...@josefsson.org
In-Reply-To: 20100325160755.gf21...@sun.com

Nicolas Williams nicolas.willi...@sun.com writes:

I suspect that what happened, ultimately, is that TLS re-negotiation was an
afterthought, barely mentioned in the TLS 1.2 RFC and barely used, therefore
many experts were simply not conscious enough of its existence to care.

I think that was a significant part of the problem in noticing this: many
implementors may have looked at it, decided that it was a nightmare to
implement, that it served no really obvious purpose once 40-bit keys had gone
the way of the dodo, and that it was a significant source of future problems
(see my previous message), and so never bothered with it.  As a result it
never got much attention, as is the case with significant chunks of other
security protocols.  I think the real skill in security protocol
implementation isn't knowing what to implement, but knowing what not to
implement (I've had an attack-surface-reduced SSH draft in preparation for a
while now; I really must get back to it some time).

One nice thing about being the author of a crypto toolkit is that you can
experiment with this, either skipping features or turning existing features
off in new releases, to see if anyone notices.  If no-one does, you leave them
turned off.  You can turn off an awful lot of security-protocol features
before people start to notice, leading me to believe that a scary portion of
many protocols actually consists of attack surface and not features.

Peter.

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: Against Rekeying

2010-03-26 Thread Nicolas Williams
On Fri, Mar 26, 2010 at 10:22:06AM -0400, Peter Gutmann wrote:
 I missed that in his blog post as well.  An equally big one is the SSHv2
 rekeying fiasco, where for a long time an attempt to rekey across two
 different implementations typically meant "drop the connection", and it still
 does for the dozens(?) of SSH implementations outside the mainstream of
 OpenSSH, Putty, ssh.com and a few others, because the procedure is so complex
 and ambiguous that only a few implementations get it right (at one point the
 ssh.com and OpenSSH implementations would detect each other and turn off
 rekeying because of this, for example).  Unfortunately in SSH you're not even
 allowed to ignore rekey requests like you can in TLS, so you're damned if you
 do and damned if you don't [0].

I made much the same point, but just so we're clear, SSHv2 re-keying has
been interoperating widely since 2005.  (I was at Connectathon, and
while the details of Cthon testing are proprietary, I can generalize and
tell you that interop in this area was very good.)

Nico
-- 

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: Against Rekeying

2010-03-26 Thread Peter Gutmann (alt)
Nicolas Williams nicolas.willi...@sun.com writes:

I made much the same point, but just so we're clear, SSHv2 re-keying has been
interoperating widely since 2005.  (I was at Connectathon, and while the
details of Cthon testing are proprietary, I can generalize and tell you that
interop in this area was very good.)

Whose SSH rekeying though?  I follow the support forums for a range of non-
mainstream (i.e. not the usual suspects of OpenSSH, ssh.com, or Putty) SSH
implementations, and "why does my connection die after an hour with [decryption
error/invalid packet/unrecognised message type/whatever]?" (all signs of
rekeying issues) is still pretty much an FAQ across them at the current time.

(There's also the mass of ancient copies of the usual suspects, principally
the ssh.com implementation dating back up to ten years, baked into networking
devices and whatnot that will never be updated, or at least if significant
security holes present in the older versions haven't convinced the vendors
using them to update them then I don't think the fact that they drop the
connection after an hour will).

Peter.

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: Against Rekeying

2010-03-26 Thread Nicolas Williams
On Sat, Mar 27, 2010 at 12:31:45PM +1300, Peter Gutmann (alt) wrote:
 Nicolas Williams nicolas.willi...@sun.com writes:
 
 I made much the same point, but just so we're clear, SSHv2 re-keying has been
 interoperating widely since 2005.  (I was at Connectathon, and while the
 details of Cthon testing are proprietary, I can generalize and tell you that
 interop in this area was very good.)
 
 Whose SSH rekeying though?  I follow the support forums for a range of non-
 mainstream (i.e. not the usual suspects of OpenSSH, ssh.com, or Putty) SSH
 implementations, and "why does my connection die after an hour with [decryption
 error/invalid packet/unrecognised message type/whatever]?" (all signs of
 rekeying issues) is still pretty much an FAQ across them at the current time.

Several key ones, including SunSSH.  I'd have to go ask permission in
order to disclose, since Connectathon results are private, IIRC.  Also,
it's been five years, so some of the information has fallen off my
cache.

Nico
-- 

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: Against Rekeying

2010-03-25 Thread Steven Bellovin

On Mar 23, 2010, at 11:21 AM, Perry E. Metzger wrote:

 
 Ekr has an interesting blog post up on the question of whether protocol
 support for periodic rekeying is a good or a bad thing:
 
 http://www.educatedguesswork.org/2010/03/against_rekeying.html
 
 I'd be interested in hearing what people think on the topic. I'm a bit
 skeptical of his position, partially because I think we have too little
 experience with real world attacks on cryptographic protocols, but I'm
 fairly open-minded at this point.

I'm a bit skeptical -- I think that ekr is throwing the baby out with the bath 
water.  Nobody expects the Spanish Inquisition, and nobody expects linear 
cryptanalysis, differential cryptanalysis, hypertesseract cryptanalysis, etc.  
A certain degree of skepticism about the strength of our ciphers is always a 
good thing -- no one has ever deployed a cipher they think their adversaries 
can read, but we know that lots of adversaries have read lots of "unbreakable" 
ciphers.

Now -- it is certainly possible to go overboard on this, and I think the IETF 
often has.  (Some of the advice given during the design of IPsec was quite 
preposterous; I even thought so then...)  But one can calculate rekeying 
intervals based on some fairly simple assumptions about the amount of 
{chosen,known,unknown} plaintext/ciphertext pairs needed and the work factor for 
the attack, multiplied by the probability of someone developing an attack of 
that complexity, and everything multiplied by Finagle's Constant.  The trick, 
of course, is to make the right assumptions.  But as Bruce Schneier is fond of 
quoting, attacks never get worse; they only get better.  Given recent research 
results, does anyone want to bet on the lifetime of AES?  Sure, the NSA has 
rated it for Top Secret traffic, but I know a fair number of people who no 
longer agree with that judgment.  It's safe today -- but will it be safe in 20 
years?  Will my plaintext still be sensitive then?
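
As a toy illustration of that kind of calculation, a sketch in Python; every
input here is an invented placeholder, not a recommendation:

    # Rough rekey-interval estimate in the spirit of the above: rekey well
    # before an eavesdropper could plausibly collect the material a
    # hypothetical attack needs.
    pairs_needed = 2 ** 50        # plaintext/ciphertext pairs assumed needed
    blocks_per_s = 10e9 / 16      # a 10 GB/s link moving 16-byte AES blocks
    finagle      = 10             # Finagle's Constant: generous safety margin
    rekey_after  = pairs_needed / blocks_per_s / finagle
    print(rekey_after / 3600, "hours")   # ~50 hours with these numbers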

All of that is beside the point.  The real challenge is often to design a 
system -- note, a *system*, not just a protocol -- that can be rekeyed *if* the 
long-term keys are compromised.  Once you have that, setting the time interval 
is a much simpler question, and a question that can be revisited over time as 
attacks improve.


--Steve Bellovin, http://www.cs.columbia.edu/~smb





-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: Against Rekeying

2010-03-25 Thread Joseph Ashwood

--
From: Perry E. Metzger pe...@piermont.com
Subject: Against Rekeying


I'd be interested in hearing what people think on the topic. I'm a bit
skeptical of his position, partially because I think we have too little
experience with real world attacks on cryptographic protocols, but I'm
fairly open-minded at this point.


Typically, rekeying is unnecessary, but sooner or later you'll find a 
situation where it is critical. The claim on which everything hinges is 
that 2^68 bytes is not achievable in a useful period of time; this is not 
always correct.


Cisco recently announced the CRS-3 router, a single router that does 322 
Tbits/sec, that's 40.25 TBytes/sec. Only 7 million seconds to exhaust the 
entire 2^68. This is still fairly large, but if you're around the industry 
long enough you'll see a big cluster where the nodes communicate as fast as 
possible, all with the same key. I've seen clusters of up to 100 servers at a 
time, so in theory it could be just 70,000 seconds, not even a full day. It's 
also worth keeping in mind that the bandwidth drain of an organization as 
compute-intensive as Google or Facebook will easily exceed even these limits.
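
As a back-of-the-envelope check of those numbers (a sketch; the throughput 
figures are the ones quoted above):

    # Time to push 2^68 bytes at a given aggregate throughput.
    LIMIT = 2 ** 68                 # ~2.95e20 bytes
    crs3  = 322e12 / 8              # 322 Tbit/s = 40.25 TB/s
    print(LIMIT / crs3)             # ~7.3e6 s: the 7 million seconds above
    print(LIMIT / (100 * crs3))     # ~73,000 s for a 100-node cluster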


Certainly an argument can be made that the protocol used is wrong, but this 
kind of protocol gets used all too frequently, and since it is usually used 
for high availability (one of the primary reasons for clustering) the need to 
rekey becomes all too real.


So there are times when rekeying is a very necessary requirement. I prefer 
a protocol reboot process instead of an in-protocol rekey, but sometimes you 
have to do what you have to do. Rekeying probably should never have been 
implemented in something like SSL/TLS or SSH, and even in IPsec it is 
arguable, but extreme environments require extreme solutions.
   Joe 


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: Against Rekeying

2010-03-25 Thread Simon Josefsson
Perry E. Metzger pe...@piermont.com writes:

 Ekr has an interesting blog post up on the question of whether protocol
 support for periodic rekeying is a good or a bad thing:

 http://www.educatedguesswork.org/2010/03/against_rekeying.html

 I'd be interested in hearing what people think on the topic. I'm a bit
 skeptical of his position, partially because I think we have too little
 experience with real world attacks on cryptographic protocols, but I'm
 fairly open-minded at this point.

One situation where rekeying appears to me not only useful but actually
essential is when you re-authenticate in the secure channel.

TLS renegotiation is used for re-authentication, for example, when you
go from no user authentication to user authenticated, or go from user X
authenticated to user Y authenticated.  This is easy to do with TLS
renegotiation: just renegotiate with a different client certificate.

I would feel uncomfortable using the same encryption keys that were
negotiated by an anonymous user (or another user X) before me when I'm
authenticating as user Y, and user Y is planning to send a considerable
amount of traffic that user Y wants to be protected.  Trusting the
encryption keys negotiated by user X doesn't seem prudent to me.
Essentially, I want encryption keys to always be bound to
authentication.

Yes, the re-authentication use-case could be implemented by tearing down
the secure channel and opening a new one, and that may be overall
simpler to implement and support.

However, IF we want to provide a secure channel for application
protocols that re-authenticate, I have a feeling that the secure channel
must support re-keying to yield good security properties.

/Simon

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: Against Rekeying

2010-03-25 Thread Adam Back
Seems people like to bottom-post around here.

On Tue, Mar 23, 2010 at 8:51 PM, Nicolas Williams
nicolas.willi...@sun.com wrote:
 On Tue, Mar 23, 2010 at 10:42:38AM -0500, Nicolas Williams wrote:
 On Tue, Mar 23, 2010 at 11:21:01AM -0400, Perry E. Metzger wrote:
  Ekr has an interesting blog post up on the question of whether protocol
  support for periodic rekeying is a good or a bad thing:
 
  http://www.educatedguesswork.org/2010/03/against_rekeying.html
 
  I'd be interested in hearing what people think on the topic. I'm a bit
  skeptical of his position, partially because I think we have too little
  experience with real world attacks on cryptographic protocols, but I'm
  fairly open-minded at this point.

 I fully agree with EKR on this: if you're using block ciphers with
 128-bit block sizes in suitable modes and with suitably strong key
 exchange, then there's really no need to ever (for a definition of
 "ever" relative to common connection lifetimes for whatever protocols
 you have in mind, such as months) re-key for cryptographic reasons.

 I forgot to mention that I was referring to session keys for on-the-wire
 protocols.  For data storage I think re-keying is easier to justify.

 Also, there is a strong argument for changing ephemeral session keys for
 long sessions, made by Charlie Kaufman on EKR's blog post: to limit
 disclosure of earlier ciphertexts resulting from future compromises.

 However, I think that argument can be answered by changing session keys
 without re-keying in the SSHv2 and TLS re-negotiation senses.  (Changing
 session keys in such a way would not be trivial, but it may well be
 simpler than the alternative.  I've only got, in my mind, a sketch of
 how it'd work.)

 Nico

In anon-ip (a zero-knowledge systems internal project) and cebolla [1]
we provided forward-secrecy (aka backward security) using symmetric
re-keying (key replaced by hash of previous key).  (Backward and
forward security as defined by Ross Anderson in [2]).
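
A minimal sketch of that symmetric ratchet (assuming SHA-256 as the hash; 
the names are illustrative, not the anon-ip/cebolla code):

    import hashlib

    def ratchet(key: bytes) -> bytes:
        # Backward security: the next key is a one-way hash of the current
        # one, so capturing key_i does not reveal key_{i-1} or old traffic.
        return hashlib.sha256(key).digest()

    key = hashlib.sha256(b"initial session secret").digest()  # placeholder
    for epoch in range(3):
        key = ratchet(key)   # overwrite/discard the old key after each epoch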

But we did not try to do forward security in the sense of trying to
recover security in the event someone temporarily gained keys.  If
someone has compromised your system badly enough that they can read
keys, they can install a backdoor.

Another angle on this is timing attacks or iterative adaptive attacks
like Bleichenbacher's attack on SSL encryption padding.  If re-keying
happens before the attack can complete, perhaps the risk of a
successful, so-far-unnoticed adaptive or side-channel attack can be
reduced.  So maybe there is some use.

Simplicity of design can be good too.

Also patching SSL now that fixes are available might be an idea.  (In
my survey of bank sites most of them still have not patched and are
quite possibly practically vulnerable).

Adam

[1] http://www.cypherspace.org/cebolla/
[2] http://www.cypherspace.org/adam/nifs/refs/forwardsecure.pdf

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: Against Rekeying

2010-03-25 Thread Stephan Neuhaus

On Mar 23, 2010, at 22:42, Jon Callas wrote:

 If you need to rekey, tear down the SSL connection and make a new one. There 
 should be a higher level construct in the application that abstracts the two 
 connections into one session.

... which will have its own subtleties and hence probability of failure.

Stephan
-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: Against Rekeying

2010-03-25 Thread Jon Callas

On Mar 24, 2010, at 2:07 AM, Stephan Neuhaus wrote:

 
 On Mar 23, 2010, at 22:42, Jon Callas wrote:
 
 If you need to rekey, tear down the SSL connection and make a new one. There 
 should be a higher level construct in the application that abstracts the two 
 connections into one session.
 
 ... which will have its own subtleties and hence probability of failure.

Exactly, but they're at the proper place in the system. That's what layering is 
all about.

I'm not suggesting that there's a perfect solution, or even a good one. There 
are times when a designer has a responsibility to make a decision and times 
when a designer has a responsibility *not* to make a decision.

In this particular case, rekeying introduced the most serious problem we've 
ever seen in a protocol like that. Rekeying itself has always been a bit dodgy. 
If you're rekeying because you are worried about the strength of the key (e.g. 
you're using DES), picking a better key is a better answer (use AES instead). 
The most compelling reason to rekey is not because of the key, but because of 
the data size. For ciphers that have a 64-bit block size, rekeying because 
you've sent 2^32 blocks is a much better reason to rekey. But -- an even better 
solution is to use a cipher with a bigger block size. Like AES. Or Camellia. Or 
Twofish. Or Threefish (which has a 512-bit block size in its main version). 
It's far more reasonable to rekey because you encrypted 32G of data than 
because you are worried about the key.
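
Those data limits fall straight out of the birthday bound; a quick sketch of 
the arithmetic:

    # Expect block collisions (which leak information in CBC-like modes)
    # after about 2^(blocksize/2) blocks.
    for name, block_bits in [("64-bit blocks (DES et al.)", 64),
                             ("128-bit blocks (AES)", 128)]:
        blocks = 2 ** (block_bits // 2)
        data   = blocks * block_bits // 8
        print(name, data)   # 2^32 * 8 B = 32G; 2^64 * 16 B = 2^68 bytes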

However, once you've graduated up to ciphers that have at least 128 bits of key 
and at least 128 bits of block size, the security considerations shift 
dramatically. I will ask explicitly the question I handwaved before: What makes 
you think that the chance there is a bug in your protocol is less than 2^-128? 
Or if you don't like that question -- I am the one who brought up birthday 
attacks -- What makes you think the chance of a bug is less than 2^-64? I 
believe that it's best to stop worrying about the core cryptographic components 
and worry about the protocol and its use within a stack of related things.

I've done encrypted file managers like what I alluded to, and it's so easy to 
get rekeying of active files right that you don't have to worry. Just pull a 
new bulk key from the PRNG every time you write a file. Poof, you're done. For 
inactive files, rekeying them is isomorphic to writing a garbage collector. 
Garbage collectors are hard to get right. We designed, but never built, an 
automatic rekeying system. The added security wasn't worth the trouble.

Getting back to your point, yes, you're right, but if rekeying is just opening 
a new network connection, or rewriting a file, it's easy to understand and get 
right. Rekeying makes sense when you (1) don't want to create a new context 
(because that automatically rekeys) and (2) don't like your crypto parameters 
(key, data length, etc). I hesitate to say that it never happens, but I think 
that coming up with a compelling use case where rekeying makes more sense than 
tearing down and recreating the context is a great exercise. Inconvenient use 
cases, sure. Compelling, that's hard.

Jon
-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: Against Rekeying

2010-03-25 Thread John Ioannidis
I think the problem is more marketing and less technology. Some 
marketoid somewhere decided to say that their product supports rekeying 
(they usually call it "key agility"). Probably because they read 
somewhere that you should change your password frequently (another 
misconception, but that's for another show).


Also, there's a big difference between rekeying communications protocols 
and rekeying for stored data. Again, the marketoids don't understand 
this. When I was working for a startup that was making a system which 
included an encrypted file system, people kept asking us about rekeying, 
because "everybody has it."


/ji

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: Against Rekeying

2010-03-25 Thread Ben Laurie
On 24/03/2010 08:28, Simon Josefsson wrote:
 Perry E. Metzger pe...@piermont.com writes:
 
 Ekr has an interesting blog post up on the question of whether protocol
 support for periodic rekeying is a good or a bad thing:

 http://www.educatedguesswork.org/2010/03/against_rekeying.html

 I'd be interested in hearing what people think on the topic. I'm a bit
 skeptical of his position, partially because I think we have too little
 experience with real world attacks on cryptographic protocols, but I'm
 fairly open-minded at this point.
 
 One situation where rekeying appears to me not only useful but actually
 essential is when you re-authenticate in the secure channel.
 
 TLS renegotiation is used for re-authentication, for example, when you
 go from no user authentication to user authenticated, or go from user X
 authenticated to user Y authenticated.  This is easy to do with TLS
 renegotiation: just renegotiate with a different client certificate.
 
 I would feel uncomfortable using the same encryption keys that were
 negotiated by an anonymous user (or another user X) before me when I'm
 authenticating as user Y, and user Y is planning to send a considerable
 amount of traffic that user Y wants to be protected.  Trusting the
 encryption keys negotiated by user X doesn't seem prudent to me.
 Essentially, I want encryption keys to always be bound to
 authentication.

Note, however, that one of the reasons the TLS renegotiation attack was
so bad in combination with HTTP was that reauthentication did not result
in use of the new channel to re-send the command that had resulted in a
need for reauthentication. This command could have come from the
attacker, but the reauthentication would still be used to authenticate it.

In other words, designing composable secure protocols is hard. And TLS
isn't one. Or maybe it is, now that the channels before and after
rekeying are bound together (which would seem to invalidate your
argument above).

Cheers,

Ben.

-- 
http://www.apache-ssl.org/ben.html   http://www.links.org/

There is no limit to what a man can do or how far he can go if he
doesn't mind who gets the credit. - Robert Woodruff

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: Against Rekeying

2010-03-25 Thread Nicolas Williams
On Thu, Mar 25, 2010 at 01:24:16PM +, Ben Laurie wrote:
 Note, however, that one of the reasons the TLS renegotiation attack was
 so bad in combination with HTTP was that reauthentication did not result
 in use of the new channel to re-send the command that had resulted in a
 need for reauthentication. This command could have come from the
 attacker, but the reauthentication would still be used to authenticate it.

It would have sufficed to bind the new and old channels.  In fact, that
is pretty much the actual solution.

 In other words, designing composable secure protocols is hard. And TLS
 isn't one. Or maybe it is, now that the channels before and after
 rekeying are bound together (which would seem to invalidate your
 argument above).

Channel binding is one tool that simplifies the design and analysis of
composable secure protocols.  Had channel binding been used to analyze
TLS re-negotiation earlier the bug would have been obvious earlier as
well.  Proof of that last statement is in the pudding: Martin Rex
independently found the bug when reasoning about channel binding to TLS
channels in the face of re-negotiation; once he started down that path
he found the vulnerability promptly.

(There are several champions of the channel binding technique who could
and should have noticed the TLS bug earlier.  I myself simply took the
security of TLS for granted; I should have been more skeptical.  I
suspect that what happened, ultimately, is that TLS re-negotiation was
an afterthought, barely mentioned in the TLS 1.2 RFC and barely used,
therefore many experts were simply not conscious enough of its existence
to care.  Martin was quite conscious of it while also analyzing a
tangential channel binding proposal.)
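
For readers unfamiliar with the technique, a minimal sketch of the client 
side of channel binding; Python's ssl module exposes the tls-unique binding 
(the host and authentication key here are placeholders, and tls-unique 
assumes TLS 1.2 or earlier):

    import hashlib, hmac, socket, ssl

    host = "example.com"   # placeholder
    ctx = ssl.create_default_context()
    with socket.create_connection((host, 443)) as raw:
        with ctx.wrap_socket(raw, server_hostname=host) as tls:
            cb = tls.get_channel_binding("tls-unique")
            assert cb is not None   # defined for TLS <= 1.2
            # MAC the binding with the shared authentication key: the proof
            # of identity is then useless on any other (e.g. MITM'd) channel.
            proof = hmac.new(b"shared-auth-key", cb, hashlib.sha256).digest()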

Nico
-- 

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: Against Rekeying

2010-03-23 Thread Nicolas Williams
On Tue, Mar 23, 2010 at 11:21:01AM -0400, Perry E. Metzger wrote:
 Ekr has an interesting blog post up on the question of whether protocol
 support for periodic rekeying is a good or a bad thing:
 
 http://www.educatedguesswork.org/2010/03/against_rekeying.html
 
 I'd be interested in hearing what people think on the topic. I'm a bit
 skeptical of his position, partially because I think we have too little
 experience with real world attacks on cryptographic protocols, but I'm
 fairly open-minded at this point.

I fully agree with EKR on this: if you're using block ciphers with
128-bit block sizes in suitable modes and with suitably strong key
exchange, then there's really no need to ever (for a definition of
"ever" relative to common connection lifetimes for whatever protocols
you have in mind, such as months) re-key for cryptographic reasons.

There may be reasons for re-keying, but the commonly given one that a
given key gets weak over time from use (meaning the attacker can gather
ciphertexts) and just the passage of time (during which an attacker
might brute force it) does not apply to modern crypto.

Ensuring that a protocol that uses modern crypto also supports re-keying
only complicates the protocol, which adds to the potential for bugs.

Consider SSHv2: popular implementations of the server do privilege
separation, but after successful login there's the potential for having
to do re-keys that require privilege (e.g., if you're using SSHv2 w/
GSS-API key exchange), which complicates privilege separation.  But for
that wrinkle the only post-login privsep complications are: logout
processing (auditing, ...), and utmpx processing (if you want tty
channels to appear in w(1) output; this could always be handled in ways
that are not specific to sshd).  What a pain!  (OTOH, the ability to
delegate fresh GSS credentials via re-keying is useful.)

Nico
-- 

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: Against Rekeying

2010-03-23 Thread Bill Frantz
On 3/23/10 at 8:21 AM, pe...@piermont.com (Perry E. Metzger) wrote:

 Ekr has an interesting blog post up on the question of whether protocol
 support for periodic rekeying is a good or a bad thing:
 
 http://www.educatedguesswork.org/2010/03/against_rekeying.html
 
 I'd be interested in hearing what people think on the topic. I'm a bit
 skeptical of his position, partially because I think we have too little
 experience with real world attacks on cryptographic protocols, but I'm
 fairly open-minded at this point.

Eric didn't mention it in his blog post, but he has been deeply involved
in cleaning up the mess left by a protocol error in SSLv3 and
subsequent TLS versions. This error was in the portion of the protocols
which supported rekeying and created a vulnerability that affected all
users of those protocols, whether they used the rekeying part or not.

The risks from additional protocol complexity must be balanced against the
benefits of including the additional facility. My own opinion is that in
this case, the benefits didn't justify the risk. The few applications
which desired rekeying could have been designed to build a completely
new TLS connection, avoiding the risk for everyone.

Cheers - Bill

---
Bill Frantz| I like the farmers' market   | Periwinkle
(408)356-8506  | because I can get fruits and | 16345 Englewood Ave
www.pwpconsult.com | vegetables without stickers. | Los Gatos, CA 95032

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: Against Rekeying

2010-03-23 Thread Nicolas Williams
On Tue, Mar 23, 2010 at 10:42:38AM -0500, Nicolas Williams wrote:
 On Tue, Mar 23, 2010 at 11:21:01AM -0400, Perry E. Metzger wrote:
  Ekr has an interesting blog post up on the question of whether protocol
  support for periodic rekeying is a good or a bad thing:
  
  http://www.educatedguesswork.org/2010/03/against_rekeying.html
  
  I'd be interested in hearing what people think on the topic. I'm a bit
  skeptical of his position, partially because I think we have too little
  experience with real world attacks on cryptographic protocols, but I'm
  fairly open-minded at this point.
 
 I fully agree with EKR on this: if you're using block ciphers with
 128-bit block sizes in suitable modes and with suitably strong key
 exchange, then there's really no need to ever (for a definition of
 "ever" relative to common connection lifetimes for whatever protocols
 you have in mind, such as months) re-key for cryptographic reasons.

I forgot to mention that I was referring to session keys for on-the-wire
protocols.  For data storage I think re-keying is easier to justify.

Also, there is a strong argument for changing ephemeral session keys for
long sessions, made by Charlie Kaufman on EKR's blog post: to limit
disclosure of earlier ciphertexts resulting from future compromises.

However, I think that argument can be answered by changing session keys
without re-keying in the SSHv2 and TLS re-negotiation senses.  (Changing
session keys in such a way would not be trivial, but it may well be
simpler than the alternative.  I've only got, in my mind, a sketch of
how it'd work.)

Nico
-- 

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: Against Rekeying

2010-03-23 Thread Jon Callas
 I'd be interested in hearing what people think on the topic. I'm a bit
 skeptical of his position, partially because I think we have too little
 experience with real world attacks on cryptographic protocols, but I'm
 fairly open-minded at this point.

I think that if anything, he doesn't go far enough.

Rekeying only makes sense when you aren't using the right crypto, and even then 
it might make the situation worse. Rekeying opens up a line of attack. From a 
purely mathematical point of view, here's a way to look at it:

The chance of beating your cipher is P1 (ideally, it's the strength of the 
cipher, let's just say 2^-128). The chance of beating the rekey protocol is P2. 
Rekeying makes sense when P2 is smaller than P1. When P2 is larger than P1, 
you've reduced the security of your system to the chance of a flaw in the 
rekeying, not the cipher.

As others have pointed out, it's at the front of Ekr's mind that there is (was) a 
major flaw in the SSL/TLS protocol set that came out because of bugs in 
rekeying. Worse, it affected people who wanted high security in more evil ways 
than people who just wanted casual security. Many people (including me) think 
that the best way to fix this is to remove the rekeying. If you need to rekey, 
tear down the SSL connection and make a new one. There should be a higher level 
construct in the application that abstracts the two connections into one 
session.

In most cases where you might want to rekey, the underlying system makes it 
either so trivial you don't need to think about it, or so hard that you can 
ignore it because you just won't.

Let me give a couple examples. First the trivial one. Consider a directory of 
files where each file is encrypted separately with a bulk key per file. The 
natural way to do this is that every time someone rewrites a file, you make a 
new bulk key and rewrite the file. You don't have to worry about rekeying 
because it just falls out.
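
A minimal sketch of that pattern (assuming the pyca/cryptography package for 
AES-GCM; key wrapping and file-manager details elided):

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    def write_encrypted(path: str, plaintext: bytes) -> bytes:
        key = AESGCM.generate_key(bit_length=256)   # fresh bulk key per write,
        nonce = os.urandom(12)                      # so rewriting a file rekeys it
        with open(path, "wb") as f:
            f.write(nonce + AESGCM(key).encrypt(nonce, plaintext, None))
        return key   # to be wrapped under a long-term key (not shown)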

Now the hard one. Consider a disk that is encrypted with some full disk 
encryption system. If you want to rekey that disk, you have to read and write 
every block. For a large disk, that is seriously annoying. If your disk does 
100MB/s (which is very fast for a spindle and still pretty fast for SSDs), then 
you can do at most 180G per hour (that's 6G per minute, or 360G per hour, 
halved because you have to both read and write). That's about six hours for a 
terabyte.

If your disk only does 10MB/s, which many spindles do, then it's 60 hours to 
rekey that terabyte. You can do the math for other sizes and speeds as well as 
I can. In any event, you're not going to rekey the disk very often. In fact 
most of the people who really care about rekeying storage are changing their 
requirements so that you have to do a rekey on the same schedule as retiring 
media -- which effectively means no rekey.
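
The math for other sizes and speeds, as a short sketch:

    def rekey_hours(disk_bytes: float, mb_per_s: float) -> float:
        # Every block is read once and written once, halving the effective rate.
        return disk_bytes / (mb_per_s * 1e6 / 2) / 3600

    print(rekey_hours(1e12, 100))   # ~5.6 hours for a terabyte at 100MB/s
    print(rekey_hours(1e12, 10))    # ~56 hours at 10MB/s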

A long-time rant of mine is that security people don't do layering. I think 
this falls into a layering aspect. If you design your system so that your 
connection has a single key and you transparently reconnect, then rekeying is 
just forcing a reconnect. If you make your storage have one key per file, then 
rekeying the files is just rewriting them. It can easily vanish.

And yes, obviously, there are exception cases. Exceptions are always 
exceptional.

Jon

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: Against Rekeying

2010-03-23 Thread Adam Back
In anon-ip (a zero-knowledge systems internal project) and cebolla [1]
we provided forward-secrecy (aka backward security) using symmetric
re-keying (key replaced by hash of previous key).  (Backward and
forward security as defined by Ross Anderson in [2]).

But we did not try to do forward security in the sense of trying to
recover security in the event someone temporarily gained keys.  If
someone has compromised your system badly enough that they can read
keys, they can install a backdoor.

Another angle on this is timing attacks or iterative adaptive attacks
like Bleichenbacher's attack on SSL encryption padding.  If re-keying
happens before the attack can complete, perhaps the risk of a
successful, so-far-unnoticed adaptive or side-channel attack can be
reduced.  So maybe there is some use.

Simplicity of design can be good too.

Also patching SSL now that fixes are available might be an idea.  (In
my survey of bank sites most of them still have not patched and are
quite possibly practically vulnerable).

Adam

[1] http://www.cypherspace.org/cebolla/
[2] http://www.cypherspace.org/adam/nifs/refs/forwardsecure.pdf

On Tue, Mar 23, 2010 at 8:51 PM, Nicolas Williams
nicolas.willi...@sun.com wrote:
 On Tue, Mar 23, 2010 at 10:42:38AM -0500, Nicolas Williams wrote:
 On Tue, Mar 23, 2010 at 11:21:01AM -0400, Perry E. Metzger wrote:
  Ekr has an interesting blog post up on the question of whether protocol
  support for periodic rekeying is a good or a bad thing:
 
  http://www.educatedguesswork.org/2010/03/against_rekeying.html
 
  I'd be interested in hearing what people think on the topic. I'm a bit
  skeptical of his position, partially because I think we have too little
  experience with real world attacks on cryptographic protocols, but I'm
  fairly open-minded at this point.

 I fully agree with EKR on this: if you're using block ciphers with
 128-bit block sizes in suitable modes and with suitably strong key
 exchange, then there's really no need to ever (for a definition of
 "ever" relative to common connection lifetimes for whatever protocols
 you have in mind, such as months) re-key for cryptographic reasons.

 I forgot to mention that I was referring to session keys for on-the-wire
 protocols.  For data storage I think re-keying is easier to justify.

 Also, there is a strong argument for changing ephemeral session keys for
 long sessions, made by Charlie Kaufman on EKR's blog post: to limit
 disclosure of earlier ciphertexts resulting from future compromises.

 However, I think that argument can be answered by changing session keys
 without re-keying in the SSHv2 and TLS re-negotiation senses.  (Changing
 session keys in such a way would not be trivial, but it may well be
 simpler than the alternative.  I've only got, in my mind, a sketch of
 how it'd work.)

 Nico
 --

 -
 The Cryptography Mailing List
 Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com