Re: Against Rekeying

2010-03-25 Thread Steven Bellovin

On Mar 23, 2010, at 11:21 AM, Perry E. Metzger wrote:

 
 Ekr has an interesting blog post up on the question of whether protocol
 support for periodic rekeying is a good or a bad thing:
 
 http://www.educatedguesswork.org/2010/03/against_rekeying.html
 
 I'd be interested in hearing what people think on the topic. I'm a bit
 skeptical of his position, partially because I think we have too little
 experience with real world attacks on cryptographic protocols, but I'm
 fairly open-minded at this point.

I'm a bit skeptical -- I think that ekr is throwing the baby out with the bath 
water.  Nobody expects the Spanish Inquisition, and nobody expects linear 
cryptanalysis, differential cryptanalysis, hypertesseract cryptanalysis, etc.  
A certain degree of skepticism about the strength of our ciphers is always a 
good thing -- no one has ever deployed a cipher they think their adversaries 
can read, but we know that lots of adversaries have read lots of unbreakable 
ciphers.

Now -- it is certainly possible to go overboard on this, and I think the IETF 
often has.  (Some of the advice given during the design of IPsec was quite 
preposterous; I even thought so then...)  But one can calculate rekeying 
intervals based on some fairly simple assumptions about the amount of 
{chosen,known,unknown} plaintext/ciphertext pairs needed and the work factor for 
the attack, multiplied by the probability of someone developing an attack of 
that complexity, and everything multiplied by Finagle's Constant.  The trick, 
of course, is to make the right assumptions.  But as Bruce Schneier is fond of 
quoting, attacks never get worse; they only get better.  Given recent research 
results, does anyone want to bet on the lifetime of AES?  Sure, the NSA has 
rated it for Top Secret traffic, but I know a fair number of people who no 
longer agree with that judgment.  It's safe today -- but will it be safe in 20 
years?  Will my plaintext still be sensitive then?
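
A back-of-the-envelope sketch of the sort of calculation meant above, with
purely illustrative numbers (every constant here is an assumption, not
anyone's published parameter):

    # Sketch only: rekey well before an attacker could plausibly collect
    # enough plaintext/ciphertext pairs, discounted by the odds that such an
    # attack exists and padded by a safety margin.
    pairs_needed       = 2**50      # assumed pairs a future attack might need
    block_bytes        = 16         # 128-bit blocks
    p_attack           = 0.01      # assumed chance such an attack appears
    finagle            = 4         # Finagle's Constant, as a safety margin
    link_bytes_per_sec = 1e9       # assumed traffic rate on the link

    usable_bytes  = pairs_needed * block_bytes * p_attack / finagle
    rekey_seconds = usable_bytes / link_bytes_per_sec
    print("rekey roughly every %.1f hours" % (rekey_seconds / 3600))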

All of that is beside the point.  The real challenge is often to design a 
system -- note, a *system*, not just a protocol -- that can be rekeyed *if* the 
long-term keys are compromised.  Once you have that, setting the time interval 
is a much simpler question, and a question that can be revisited over time as 
attacks improve.


--Steve Bellovin, http://www.cs.columbia.edu/~smb







Re: Against Rekeying

2010-03-25 Thread Joseph Ashwood

--
From: Perry E. Metzger pe...@piermont.com
Subject: Against Rekeying


I'd be interested in hearing what people think on the topic. I'm a bit
skeptical of his position, partially because I think we have too little
experience with real world attacks on cryptographic protocols, but I'm
fairly open-minded at this point.


Typically, rekeying is unnecessary, but sooner or later you'll find a situation 
where it is critical. The claim on which everything hinges is that 2^68 bytes is 
not achievable in a useful period of time; that is not always correct.


Cisco recently announced the CRS-3 router, a single router that does 322 
Tbits/sec, which is 40.25 TBytes/sec. That is only 7 million seconds to exhaust 
the entire 2^68. This is still fairly large, but stay around the industry long 
enough and you'll see a big cluster in which the nodes communicate as fast as 
possible, all with the same key. I've seen clusters of up to 100 servers at a 
time, so in theory it could be just 70,000 seconds, not even a full day. It's 
also worth keeping in mind that the bandwidth of an organization as 
compute-intensive as Google or Facebook will easily exceed even these limits.
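
For the record, the arithmetic behind those figures (Python, rounded):

    bits_per_sec  = 322e12             # CRS-3: 322 Tbit/s
    bytes_per_sec = bits_per_sec / 8   # ~40.25 TBytes/s
    limit         = 2**68              # bytes under the limit being discussed

    single  = limit / bytes_per_sec    # one router: ~7.3 million seconds (~85 days)
    cluster = single / 100             # 100 nodes on one key: ~73,000 s (under a day)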


Certainly an argument can be made that the protocol used is wrong, but this 
kind of protocol gets used all too frequently, and since it is usually deployed 
for high availability (one of the primary reasons for clustering), the need to 
rekey becomes all too real.


So there are times when rekeying is a necessary requirement. I prefer 
a protocol reboot process instead of an in-protocol rekey, but sometimes you 
have to do what you have to do. Rekeying probably should never have been 
implemented in something like SSL/TLS or SSH, and even in IPsec it is arguable, 
but extreme environments require extreme solutions.
   Joe 




Re: Against Rekeying

2010-03-25 Thread Simon Josefsson
Perry E. Metzger pe...@piermont.com writes:

 Ekr has an interesting blog post up on the question of whether protocol
 support for periodic rekeying is a good or a bad thing:

 http://www.educatedguesswork.org/2010/03/against_rekeying.html

 I'd be interested in hearing what people think on the topic. I'm a bit
 skeptical of his position, partially because I think we have too little
 experience with real world attacks on cryptographic protocols, but I'm
 fairly open-minded at this point.

One situation where rekeying appears to me not only useful but actually
essential is when you re-authenticate in the secure channel.

TLS renegotiation is used for re-authentication, for example, when you
go from no user authentication to user authenticated, or go from user X
authenticated to user Y authenticated.  This is easy to do with TLS
renegotiation: just renegotiate with a different client certificate.

I would feel uncomfortable using the same encryption keys that were
negotiated by an anonymous user (or another user X) before me when I'm
authenticating as user Y, and user Y is planning to send a considerable
amount of traffic that user Y wants to be protected.  Trusting the
encryption keys negotiated by user X doesn't seem prudent to me.
Essentially, I want encryption keys to always be bound to
authentication.
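
A minimal sketch of what "bound to authentication" could look like at the
key-schedule level (this is not TLS's actual key derivation, just an
illustration): derive fresh traffic keys from the existing secret, the newly
authenticated identity, and a fresh nonce, rather than continuing with keys
negotiated before authentication.

    import hashlib, hmac, os

    def rekey_on_reauth(master_secret: bytes, identity: bytes) -> bytes:
        # Illustrative only -- not TLS's real key schedule.  Both peers would
        # need to agree on the nonce; it is generated locally here for brevity.
        nonce = os.urandom(32)
        info = b"rekey-on-reauth|" + identity + b"|" + nonce
        return hmac.new(master_secret, info, hashlib.sha256).digest()

    new_traffic_key = rekey_on_reauth(b"existing session secret", b"user Y")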

Yes, the re-authentication use-case could be implemented by tearing down
the secure channel and opening a new one, and that may be overall
simpler to implement and support.

However, IF we want to provide a secure channel for application
protocols that re-authenticate, I have a feeling that the secure channel
must support re-keying to yield good security properties.

/Simon



Re: Against Rekeying

2010-03-25 Thread Adam Back
Seems people like to bottom-post around here.

On Tue, Mar 23, 2010 at 8:51 PM, Nicolas Williams
nicolas.willi...@sun.com wrote:
 On Tue, Mar 23, 2010 at 10:42:38AM -0500, Nicolas Williams wrote:
 On Tue, Mar 23, 2010 at 11:21:01AM -0400, Perry E. Metzger wrote:
  Ekr has an interesting blog post up on the question of whether protocol
  support for periodic rekeying is a good or a bad thing:
 
  http://www.educatedguesswork.org/2010/03/against_rekeying.html
 
  I'd be interested in hearing what people think on the topic. I'm a bit
  skeptical of his position, partially because I think we have too little
  experience with real world attacks on cryptographic protocols, but I'm
  fairly open-minded at this point.

 I fully agree with EKR on this: if you're using block ciphers with
 128-bit block sizes in suitable modes and with suitably strong key
 exchange, then there's really no need to ever (for a definition of
 ever relative to common connection lifetimes for whatever protocols
 you have in mind, such as months) re-key for cryptographic reasons.

 I forgot to mention that I was referring to session keys for on-the-wire
 protocols.  For data storage I think re-keying is easier to justify.

 Also, there is a strong argument for changing ephemeral session keys for
 long sessions, made by Charlie Kaufman on EKRs blog post: to limit
 disclosure of earlier ciphertexts resulting from future compromises.

 However, I think that argument can be answered by changing session keys
 without re-keying in the SSHv2 and TLS re-negotiation senses.  (Changing
 session keys in such a way would not be trivial, but it may well be
 simpler than the alternative.  I've only got, in my mind, a sketch of
 how it'd work.)

 Nico

In anon-ip (a zero-knowledge systems internal project) and cebolla [1]
we provided forward-secrecy (aka backward security) using symmetric
re-keying (key replaced by hash of previous key).  (Backward and
forward security as defined by Ross Anderson in [2]).
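
For the archives, the re-keying step itself is just a hash ratchet; a minimal
sketch (not Cebolla's actual code):

    import hashlib

    def next_key(current_key: bytes) -> bytes:
        # The new key is a one-way function of the old one; once the old key
        # is erased, a later compromise cannot be wound back to read earlier
        # traffic (backward security in Ross Anderson's terminology).
        return hashlib.sha256(current_key).digest()

    k = b"initial session key material"
    for _ in range(3):
        k = next_key(k)   # securely erase the previous key after each step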

But we did not try to do forward security in the sense of trying to
recover security in the event someone temporarily gained keys.  If
someone has compromised your system badly enough that they can read
keys, they can install a backdoor.

Another angle on this is timing attacks or iterative adaptive attacks
like Bleichenbacher's attack on SSL encryption padding.  If re-keying
happens before the attack can complete, perhaps the risk of a
successful so far unnoticed adaptive or side-channel attack can be
reduced.  So maybe there is some use.

Simplicity of design can be good too.

Also patching SSL now that fixes are available might be an idea.  (In
my survey of bank sites most of them still have not patched and are
quite possibly practically vulnerable).

Adam

[1] http://www.cypherspace.org/cebolla/
[2] http://www.cypherspace.org/adam/nifs/refs/forwardsecure.pdf



Re: Against Rekeying

2010-03-25 Thread Stephan Neuhaus

On Mar 23, 2010, at 22:42, Jon Callas wrote:

 If you need to rekey, tear down the SSL connection and make a new one. There 
 should be a higher level construct in the application that abstracts the two 
 connections into one session.

... which will have its own subtleties and hence probability of failure.

Stephan


Re: Question regarding common modulus on elliptic curve cryptosystems

2010-03-25 Thread Matt Crawford

On Mar 21, 2010, at 4:13 PM, Sergio Lerner wrote:

 I am looking for a public-key cryptosystem that allows commutation of the 
 operations of encryption/decryption for different users' keys
 ( Ek(Es(m)) =  Es(Ek(m)) ).
 I haven't found a simple cryptosystem in Zp or Z/nZ.
 
 I think the solution may be something like the RSA analogs in elliptic 
 curves. Maybe a scheme that allows the use of a common modulus for all users 
 (RSA does not).

If your application can work with a trusted authority generating all the 
keypairs, and you sacrifice the use of short public exponents *and* sacrifice 
the possession of the factors of the modulus by the key owners, making them do 
more work on decryption, I think you can have what you asked for. But that's a 
lot of ifs.
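
As an aside, the commutativity itself is easy to demonstrate with plain
exponentiation modulo a shared prime (the SRA / Pohlig-Hellman construction).
A toy sketch follows; note this is a symmetric scheme, not the public-key
analogue being asked for, and the parameters are illustrative only.

    # Commutative encryption by exponentiation mod a shared prime p:
    # Ek(Es(m)) == Es(Ek(m)) because (m^s)^k == (m^k)^s (mod p).
    p = 2**127 - 1                      # toy prime; real use needs a proper large prime
    def enc(m, e): return pow(m, e, p)

    e_k, e_s = 65537, 257               # each exponent must be coprime to p-1
    m = 123456789
    assert enc(enc(m, e_k), e_s) == enc(enc(m, e_s), e_k)

    d_k = pow(e_k, -1, p - 1)           # decryption exponent (Python 3.8+)
    assert enc(enc(m, e_k), d_k) == m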



copy of On the generation of DSS one-time keys?

2010-03-25 Thread James Muir
Daniel Bleichenbacher presented an implementation attack against DSA in
2001 titled "On the generation of DSS one-time keys".  I think it made
the rounds as a preprint, but I don't know if it was ever officially
published.  It's cited frequently (e.g. in the SEC1 doc
http://www.secg.org/download/aid-780/sec1-v2.pdf), but I cannot seem to
locate a copy.

Can anyone point me to a copy of this preprint?

-James





Re: Question regarding common modulus on elliptic curve cryptosystems

2010-03-25 Thread James A. Donald

On 2010-03-22 11:22 PM, Sergio Lerner wrote:
Commutativity is a beautiful and powerful property. See "On the Power of 
Commutativity in Cryptography" by Adi Shamir.
Semantic security is great and has given a new provable sense of 
security, but commutative building blocks can be combined to build the 
strangest protocols without going into deep mathematics, and they are better 
suited for teaching crypto and for high-level protocol design. They 
are like the Lego blocks of cryptography!


Now I'm working on a new untraceable e-cash protocol which has some 
additional properties. And I'm searching for a secure commutative 
signing primitive.


The most powerful primitive, from which all manner of weird and 
wonderful protocols can be concocted, is Gap Diffie-Hellman groups.  
Read Alexandra Boldyreva's "Threshold Signatures, Multisignatures, and 
Blind Signatures Based on the Gap-Diffie-Hellman Group Signature Scheme".


I am not sure what you want to do with commutativity, but suppose that 
you want a coin that needs to be signed by two parties in either order 
to be valid.


Suppose we call the operation that combines two points on an elliptic 
curve to produce a third point multiplication, and its inverse division, 
so that we can use the familiar notation of exponentiation, thereby 
describing elliptic-curve cryptosystems in the same notation as prime 
number cryptosystems (a notation I think confusing, but everyone else 
uses it).


Suppose everyone uses the same Gap Diffie-Hellman group, and the same 
generator g.


A valid unblinded coin is the pair {u, u^(b*c)}, yielding a valid DDH 
tuple {g, g^(b*c), u, u^(b*c)}, where u has some special format (not a 
random number).


Repeating in slightly different words.  A valid unblinded coin is a coin 
that with the joint public key of Bob and Carol yields a valid DDH 
tuple, in which the third element of the tuple has some special form.


Edward wants Bob and Carol to give him a blinded coin.  He already knows 
some other valid coin, {w, w^(b*c)}.  He generates a point u that 
satisfies the special properties for a valid coin, and a random number 
x.  He asks Bob and Carol to sign u*(w^(-x)), giving him a blinded coin, 
which he unblinds.
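
The blinding algebra can be checked mechanically. Below is a toy walk-through
using exponentiation modulo a prime as a stand-in for the GDH group (so there
is no pairing with which to verify the DDH tuple; it only illustrates the
blind/unblind arithmetic, with made-up values):

    import secrets

    p = 2**127 - 1                        # toy modulus standing in for the group
    b = secrets.randbelow(p - 2) + 1      # Bob's key share
    c = secrets.randbelow(p - 2) + 1      # Carol's key share
    def sign(v): return pow(v, b * c, p)  # joint signature: raise to b*c

    w = 1234567                           # a previously issued valid coin value
    w_signed = sign(w)                    # ... and its signature w^(b*c)

    u = 7654321                           # new coin value (assumed special form)
    x = secrets.randbelow(p - 2) + 1      # Edward's blinding factor
    blinded = (u * pow(w, -x, p)) % p     # u * w^(-x) looks random to the signers
    blinded_signed = sign(blinded)        # Bob and Carol sign without seeing u
    u_signed = (blinded_signed * pow(w_signed, x, p)) % p   # unblind
    assert u_signed == sign(u)            # Edward has recovered u^(b*c)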




Re: Against Rekeying

2010-03-25 Thread Jon Callas

On Mar 24, 2010, at 2:07 AM, Stephan Neuhaus wrote:

 
 On Mar 23, 2010, at 22:42, Jon Callas wrote:
 
 If you need to rekey, tear down the SSL connection and make a new one. There 
 should be a higher level construct in the application that abstracts the two 
 connections into one session.
 
 ... which will have its own subtleties and hence probability of failure.

Exactly, but they're at the proper place in the system. That's what layering is 
all about.

I'm not suggesting that there's a perfect solution, or even a good one. There 
are times when a designer has a responsibility to make a decision and times 
when a designer has a responsibility *not* to make a decision.

In this particular case, rekeying introduced the most serious problem we've 
ever seen in a protocol like that. Rekeying itself has always been a bit dodgy. 
If you're rekeying because you are worried about the strength of the key (e.g. 
you're using DES), picking a better key is a better answer (use AES instead). 
The most compelling reason to rekey is not because of the key, but because of 
the data size. For ciphers that have a 64-bit block size, rekeying because 
you've sent 2^32 blocks is a much better reason to rekey. But -- an even better 
solution is to use a cipher with a bigger block size. Like AES. Or Camellia. Or 
Twofish. Or Threefish (which has a 512-bit block size in its main version). 
It's far more reasonable to rekey because you encrypted 32G of data than 
because you are worried about the key.

However, once you've graduated up to ciphers that have at least 128-bits of key 
and at least 128-bits of block size, the security considerations shift 
dramatically. I will ask explicitly the question I handwaved before: What makes 
you think that the chance there is a bug in your protocol is less than 2^-128? 
Or if you don't like that question -- I am the one who brought up birthday 
attacks -- What makes you think the chance of a bug is less than 2^-64? I 
believe that it's best to stop worrying about the core cryptographic components 
and worry about the protocol and its use within a stack of related things.

I've done encrypted file managers like what I alluded to, and it's so easy to 
get rekeying active files right, you don't have to worry. Just pull a new bulk 
key from the PRNG every time you write a file. Poof, you're done. For inactive 
files, rekeying them is isomorphic to writing a garbage collector. Garbage 
collectors are hard to get right. We designed, but never built, an automatic 
rekeying system. The added security wasn't worth the trouble.
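
A minimal sketch of the new-bulk-key-per-write idea, assuming the
pyca/cryptography package (the file layout here is made up):

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    def write_encrypted(path, plaintext):
        key = AESGCM.generate_key(bit_length=256)   # fresh bulk key on every write
        nonce = os.urandom(12)
        with open(path, "wb") as f:
            f.write(nonce + AESGCM(key).encrypt(nonce, plaintext, None))
        return key   # a real file manager would wrap this under a master key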

Getting back to your point, yes, you're right, but if rekeying is just opening 
a new network connection, or rewriting a file, it's easy to understand and get 
right. Rekeying makes sense when you (1) don't want to create a new context 
(because that automatically rekeys) and (2) don't like your crypto parameters 
(key, data length, etc). I hesitate to say that it never happens, but I think 
that coming up with a compelling use case where rekeying makes more sense than 
tearing down and recreating the context is a great exercise. Inconvenient use 
cases, sure. Compelling, that's hard.

Jon


Re: Question regarding common modulus on elliptic curve cryptosystems AND E-CASH

2010-03-25 Thread James A. Donald

On 2010-03-23 1:09 AM, Sergio Lerner wrote:
I've read some papers, not that many. But I don't mind reinventing the 
wheel, as long as the new protocol is simpler to explain.

Reading the literature, I couldn't find an e-cash protocol which:

- Hides the destination / source of payments.
- Hides the amount of money transferred.
- Hides the account balance of each person from the bank.
- Allows off-line payments.
- Avoids giving the same bill to two different people by design. 
This means that the protocol does not need to detect the use of cloned 
bills.
- Gives each person a cryptographic proof of owning the money they 
have in case of dispute.


If someone points me to a protocol that manages to fulfill these 
requirements, I'd be delighted.
I think I can do it with a commutative signing primitive, and a 
special zero-knowledge proof.


Gap Diffie-Hellman gives you a commutative signing primitive, and a 
zero-knowledge proof.






Re: Against Rekeying

2010-03-25 Thread John Ioannidis
I think the problem is more marketing and less technology. Some 
marketoid somewhere decided to say that their product supports rekeying 
(they usually call it key agility). Probably because they read 
somewhere that you should change your password frequently (another 
misconception, but that's for another show).


Also, there's a big difference between rekeying communications protocols 
and rekeying for stored data. Again, the marketoids don't understand 
this. When I was working for a startup that was making a system which 
included an encrypted file system, people kept asking us about rekeying, 
because everybody has it.


/ji



Re: [vserver] Bought an entropykey - very happy

2010-03-25 Thread Eugen Leitl

From: coderman coder...@gmail.com
Date: Wed, 24 Mar 2010 10:50:33 -0700
To: Morlock Elloi morlockel...@yahoo.com
Cc: cypherpu...@al-qaeda.net
Subject: Re: [vserver] Bought an entropykey - very happy

On Wed, Mar 24, 2010 at 8:43 AM, Morlock Elloi morlockel...@yahoo.com
wrote:
 While avalanche noise (hoping it doesn't start to tunnel - that current must
be actively controlled as each junction is different) is a good source of
randomness (up to megabits / sec / junction), encrypting it just means
masking possible low entropy. I'd prefer to see a raw conditioned stream
rather than an encrypted one (even web content looks high-entropy to Diehard
when encrypted).
...

i have loved the padlock engines on via cores since they hit the
market in C5XL form with a single hw generator available via XSTORE.
unlike many designs this free wheeling resource can provide a torrent
of entropy sufficient to sate even the most gregarious consumption.

as mentioned above, you need a fast user space entropy daemon sanity
checking the raw, (probably) biased stream coming from hardware but it
is still good practice to digest this entropy to obscure any potential
generator state/bias heading into the host entropy pool.

that is to say, of the two common modes for utilizing hw entropy:
a. conservatively sample from a whitened, string filtered entropy
source for a low rate of high quality output (see xstore config words)
b. ramp un-whitened, un-filtered source(s) to maximum rate and AES/SHA
mix for high throughput, high quality output while irreversibly
masking generator bias/state present in the raw source stream.

the latter is more effective in practice and capable of generation
rates > 20Mbps with full FIPS sanity checks. the former tops out
around 1Mbps or less with more transient latency spikes on read (when
successive attempts to read fail to pass whiten+strfilter). note that
padlock engine supports SHA and AES on die as well making these easy
and fast to apply to generator output.

if you are still concerned a more conservative configuration would
estimate entropy density while feeding from raw input stream and add
encrypted/digested product to the host entropy pool with the specified
entropy density estimate adjusted downward to your requirements. (most
OS'es support this)
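
A rough sketch of mode (b); the device path is an assumption, and a real
deployment would add the FIPS-style sanity checks mentioned above:

    import hashlib

    def mixed_entropy(blocks=16, raw_device="/dev/hwrng"):
        # Digest raw, possibly biased generator output before use, so any
        # bias or internal generator state is masked irreversibly.
        out = b""
        with open(raw_device, "rb") as raw:
            for _ in range(blocks):
                out += hashlib.sha256(raw.read(64)).digest()
        return out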

--

-- 
Eugen* Leitl <a href="http://leitl.org">leitl</a> http://leitl.org
__
ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE



New Research Suggests That Governments May Fake SSL Certificates

2010-03-25 Thread Dave Kleiman

March 24th, 2010 New Research Suggests That Governments May Fake SSL 
Certificates
Technical Analysis by Seth Schoen 
http://www.eff.org/deeplinks/2010/03/researchers-reveal-likelihood-governments-fake-ssl

Today two computer security researchers, Christopher Soghoian and Sid Stamm, 
released a draft of a forthcoming research paper in which they present evidence 
that certificate authorities (CAs) may be cooperating with government agencies 
to help them spy undetected on "secure" encrypted communications. (EFF 
sometimes advises Soghoian on responsible disclosure issues, including for this 
paper.) More details and reporting are available at Wired today. The draft 
paper includes marketing materials from Packet Forensics, an Arizona company, 
which suggest that government users have the ability to import a copy of any 
legitimate keys they obtain (potentially by court order) into Packet Forensics 
products in order to impersonate sites and trick users into a false sense of 
security afforded by web, e-mail, or VoIP encryption. This would allow those 
governments to routinely bypass encryption without breaking it.

Soghoian and Stamm also observe that browsers trust huge numbers of CAs — and 
all of those organizations are trusted completely, so that the validity of any 
entity they approve is accepted without question.  Every organization on a 
browser's trusted list has the power to certify sites all around the world. 
Existing browsers do not consider whether a certificate was signed by a 
different CA than before; a laptop that has seen Gmail's site certified by a 
subsidiary of U.S.-based VeriSign thousands of times would raise no alarm if 
Gmail suddenly appeared to present a different key apparently certified by an 
authority in Poland, the United Arab Emirates, Turkey, or Brazil. Yet such a 
change would be an indication that the user's encrypted HTTP traffic was being 
intercepted.

Paper: http://files.cloudprivacy.net/ssl-mitm.pdf


Respectfully,

Dave Kleiman - http://www.ComputerForensicExaminer.com - 
http://www.DigitalForensicExpert.com 

4371 Northlake Blvd #314
Palm Beach Gardens, FL 33410
561.310.8801 





Re: Law Enforcement Appliance Subverts SSL

2010-03-25 Thread dan

Rui Paulo writes:
-+---
 | http://www.wired.com/threatlevel/2010/03/packet-forensics/
 | 
 | At a recent wiretapping convention however, security researcher Chris
 | Soghoian discovered that a small company was marketing internet spying
 | boxes to the feds designed to intercept those communications, without
 | breaking the encryption, by using forged security certificates, instead
 | of the real ones that websites use to verify secure connections. To use
 | the appliance, the government would need to acquire a forged certificate
 | from any one of more than 100 trusted Certificate Authorities.
 | 


I rather like Cormac Herley's paper:

  http://preview.tinyurl.com/yko7lhg
  So Long, And No Thanks for the Externalities:
  The Rational Rejection of Security Advice by Users

which I cite here for this line:

  It is hard to blame users for not being interested in SSL
  and certificates when (as far as we can determine) 100% of
  all certificate errors seen by users are false positives.



--dan



Blog post from Matt Blaze about Soghoian Stamm paper

2010-03-25 Thread Perry E. Metzger

Matt has an interesting blog post up about the Soghoian & Stamm SSL
interception paper:

http://www.crypto.com/blog/spycerts

-- 
Perry E. Metzger  pmetz...@cis.upenn.edu
Department of Computer and Information Science, University of Pennsylvania



Re: Against Rekeying

2010-03-25 Thread Ben Laurie
On 24/03/2010 08:28, Simon Josefsson wrote:
 Perry E. Metzger pe...@piermont.com writes:
 
 Ekr has an interesting blog post up on the question of whether protocol
 support for periodic rekeying is a good or a bad thing:

 http://www.educatedguesswork.org/2010/03/against_rekeying.html

 I'd be interested in hearing what people think on the topic. I'm a bit
 skeptical of his position, partially because I think we have too little
 experience with real world attacks on cryptographic protocols, but I'm
 fairly open-minded at this point.
 
 One situation where rekeying appears to me not only useful but actually
 essential is when you re-authenticate in the secure channel.
 
 TLS renegotiation is used for re-authentication, for example, when you
 go from no user authentication to user authenticated, or go from user X
 authenticated to user Y authenticated.  This is easy to do with TLS
 renegotiation: just renegotiate with a different client certificate.
 
 I would feel uncomfortable using the same encryption keys that were
 negotiated by an anonymous user (or another user X) before me when I'm
 authenticating as user Y, and user Y is planning to send a considerable
 amount of traffic that user Y wants to be protected.  Trusting the
 encryption keys negotiated by user X doesn't seem prudent to me.
 Essentially, I want encryption keys to always be bound to
 authentication.

Note, however, that one of the reasons the TLS renegotiation attack was
so bad in combination with HTTP was that reauthentication did not result
in use of the new channel to re-send the command that had resulted in a
need for reauthentication. This command could have come from the
attacker, but the reauthentication would still be used to authenticate it.

In other words, designing composable secure protocols is hard. And TLS
isn't one. Or maybe it is, now that the channels before and after
rekeying are bound together (which would seem to invalidate your
argument above).

Cheers,

Ben.

-- 
http://www.apache-ssl.org/ben.html   http://www.links.org/

There is no limit to what a man can do or how far he can go if he
doesn't mind who gets the credit. - Robert Woodruff



Re: Against Rekeying

2010-03-25 Thread Nicolas Williams
On Thu, Mar 25, 2010 at 01:24:16PM +, Ben Laurie wrote:
 Note, however, that one of the reasons the TLS renegotiation attack was
 so bad in combination with HTTP was that reauthentication did not result
 in use of the new channel to re-send the command that had resulted in a
 need for reauthentication. This command could have come from the
 attacker, but the reauthentication would still be used to authenticate it.

It would have sufficed to bind the new and old channels.  In fact, that
is pretty much the actual solution.

 In other words, designing composable secure protocols is hard. And TLS
 isn't one. Or maybe it is, now that the channels before and after
 rekeying are bound together (which would seem to invalidate your
 argument above).

Channel binding is one tool that simplifies the design and analysis of
composable secure protocols.  Had channel binding been used to analyze
TLS re-negotiation earlier the bug would have been obvious earlier as
well.  Proof of that last statement is in the pudding: Martin Rex
independently found the bug when reasoning about channel binding to TLS
channels in the face of re-negotiation; once he started down that path
he found the vulnerability promptly.
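
For readers unfamiliar with the technique: channel binding means the
application-layer authentication covers a value derived from the specific
lower-layer channel. A minimal sketch using Python's ssl module and the
tls-unique binding (RFC 5929); the host and shared secret are placeholders:

    import hashlib, hmac, socket, ssl

    HOST, SECRET = "example.org", b"application-layer secret"

    ctx = ssl.create_default_context()
    with socket.create_connection((HOST, 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
            cb = tls.get_channel_binding("tls-unique")
            assert cb is not None   # tls-unique is undefined for TLS 1.3
            # An authenticator computed over cb cannot be replayed or spliced
            # onto a different TLS channel (e.g. by a renegotiating MITM).
            authenticator = hmac.new(SECRET, cb, hashlib.sha256).hexdigest()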

(There are several champions of the channel binding technique who could
and should have noticed the TLS bug earlier.  I myself simply took the
security of TLS for granted; I should have been more skeptical.  I
suspect that what happened, ultimately, is that TLS re-negotiation was
an afterthought, barely mentioned in the TLS 1.2 RFC and barely used,
therefore many experts were simply not conscious enough of its existence
to care.  Martin was quite conscious of it while also analyzing a
tangential channel binding proposal.)

Nico
-- 



Re: [Not] Against Rekeying

2010-03-25 Thread james hughes
On Tue, Mar 23, 2010 at 11:21:01AM -0400, Perry E. Metzger wrote:
 Ekr has an interesting blog post up on the question of whether protocol
 support for periodic rekeying is a good or a bad thing:
 
 http://www.educatedguesswork.org/2010/03/against_rekeying.html

On Mar 23, 2010, at 4:23 PM, Adam Back wrote:

 In anon-ip (a zero-knowledge systems internal project) and cebolla [1]
 we provided forward-secrecy (aka backward security) using symmetric
 re-keying (key replaced by hash of previous key).  (Backward and
 forward security as defined by Ross Anderson in [2]).

The paper on Cebolla [4] states that "Trust in symmetric keys diminishes the 
longer they are used in the wild. Key rotation, or re-keying, must be done at 
regular intervals to lessen the success attackers can have at cryptanalyzing 
the keys." This is exactly the kind of justification that the Ekr post and most 
of the comments agree is flawed.

It goes on to state what was said about new keys being derived from old keys.

[4] http://www.cypherspace.org/cebolla/cebolla.pdf


Hmm. Interesting. Learn one key, have them all for the future. Wow. Yes, that 
is Ross' definition of backward security, and clearly does not meet Ross' 
definition of forward security. In reading the paper, it seems like this system 
is: Crack one key, you're in forever. A government's dream for an anonymity 
service. Ross' definitions for backwards and forwards make sense from a 
terminology point of view, but IMHO without both, it is not secure.

Sure one can talk about attack scenarios, and that just proves the tautology 
that we don't know what we don't know (or don't know what has not been invented 
yet). There is no excuse for bad crypto hygiene. I don't know why someone would 
build a system with K_{i+1} = h(K_i) when there are so many good algorithms out 
there.


 But we did not try to do forward security in the sense of trying to
 recover security in the event someone temporarily gained keys.  If
 someone has compromised your system badly enough that they can read
 keys, they can install a backdoor.

I agree with the Ekr posting, but not the characterization above. The Ekr 
posting says [rekey for] "Damage limitation ... If a key is disclosed" and 
"This isn't totally crazy." The statement of fact is that if a key is 
compromised, a rekey limits the scope of the compromise.

The Ekr posting said nothing about how the key was disclosed. Yes, if you have 
root on the machine an have mounted an active attack, all bets are off, but 
there are other ways for key disclosure to happen (as was discussed in the Ekr 
posting).

For example, a cold boot attack [3] can be used to recover a communications 
session key (instead of a disk key). If that key has been used for a 
particularly long time, and if one assumes that the attacker had the 
opportunity to record all the ciphertext, then one must expect that all of 
that information can now be read.

[3] http://en.wikipedia.org/wiki/Cold_boot_attack

[The comments about breaking keys are deleted. I agree with the original posting 
and everyone else that changing a key OF A MODERN CIPHER to eliminate algorithm 
weaknesses is not a valuable reason to rekey.]

On Mar 23, 2010, at 2:42 PM, Jon Callas wrote:

 If you need to rekey, tear down the SSL connection and make a new one. There 
 should be a higher level construct in the application that abstracts the two 
 connections into one session.

I agree (but as a nit, we can reverse the order. Create a completely new 
session and just move the traffic to the new connection.)

Limiting the scope of a key compromise is the only justification I can see for 
rekey. That said, limiting the scope of the information available because of a 
key compromise is still a very important consideration. 

Jim
