Re: MD5 considered harmful today, SHA-1 considered harmful tomorrow

2009-01-20 Thread Bodo Moeller
On Sat, Jan 17, 2009 at 5:24 PM, Steven M. Bellovin s...@cs.columbia.edu 
wrote:

 I've mentioned it before, but I'll point to the paper Eric Rescorla
 wrote a few years ago:
 http://www.cs.columbia.edu/~smb/papers/new-hash.ps or
 http://www.cs.columbia.edu/~smb/papers/new-hash.pdf .  The bottom line:
 if you're running a public-facing web server, you *can't* offer a SHA-2
 certificate because you have no way of knowing if the client supports
 SHA-2. Fixing that requires a TLS fix; see the above timeline for that.

The RFC does exist (TLS 1.2 in RFC 5246 from August 2008 makes SHA-256
mandatory), so you can send a SHA-256 certificate to clients that
indicate they support TLS 1.2 or later.  You'd still need some other
certificate for interoperability with clients that don't support
SHA-256, of course, and you'd be sending that one to clients that do
support SHA-256 but not TLS 1.2.  (So you'd fall back to SHA-1, which
is not really a problem when CAs make sure to use the hash algorithm
in a way that doesn't rely on hash collisions being hard to find,
which probably is a good idea for *any* hash algorithm.)
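The fallback Bodo describes can be sketched as a small selection routine; the version constant and certificate file names below are hypothetical illustrations, not any real TLS library's API.

```python
# Hypothetical sketch of the server-side fallback Bodo describes: present the
# SHA-256 certificate only to clients advertising TLS 1.2 or later, and a
# SHA-1 certificate otherwise.  Constants and file names are illustrative.

TLS_1_2 = (3, 3)  # TLS 1.2 is protocol version 3.3 on the wire

def select_certificate(client_version,
                       sha256_cert="server-sha256.pem",
                       sha1_cert="server-sha1.pem"):
    """Pick the certificate to present based on the client's TLS version."""
    if client_version >= TLS_1_2:
        return sha256_cert  # RFC 5246 makes SHA-256 support mandatory here
    return sha1_cert        # interoperability fallback for older clients
```

Clients that support SHA-256 but only speak TLS 1.1 or earlier would still receive the SHA-1 fallback, exactly the limitation noted above.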

Bodo

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


[heise online UK] Secure deletion: a single overwrite will do it

2009-01-20 Thread Stefan Kelm
The myth that, to delete data really securely from a hard disk, you have
to overwrite it many times using different patterns has persisted for
decades, despite the fact that even firms specialising in data recovery
openly admit that if a hard disk is overwritten with zeros just once,
all of its data is irretrievably lost.

Craig Wright, a forensics expert, claims to have put this legend finally
to rest. He and his colleagues ran a scientific study to take a close
look at hard disks of various makes and different ages, overwriting
their data under controlled conditions and then examining the magnetic
surfaces with a magnetic-force microscope. They presented their paper at
ICISS 2008 and it has been published by Springer AG in its Lecture Notes
in Computer Science series (Craig Wright, Dave Kleiman, Shyaam Sundhar
R. S.: Overwriting Hard Drive Data: The Great Wiping Controversy).

They concluded that, after a single overwrite of the data on a drive,
whether it be an old 1-gigabyte disk or a current model (at the time of
the study), the likelihood of still being able to reconstruct anything
is practically zero. Well, OK, not quite: a single bit whose precise
location is known can in fact be correctly reconstructed with 56 per
cent probability (in one of the quoted examples). To recover a byte,
however, correct head positioning would have to be precisely repeated
eight times, and the probability of that is only 0.97 per cent.
Recovering anything beyond a single byte is even less likely.
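The article's figures are easy to check: with a 56 per cent chance per known-location bit, eight independent correct recoveries for a full byte come out just under 1 per cent.

```python
# Probability of reconstructing one bit at a known location (quoted example).
p_bit = 0.56

# A byte needs eight correct recoveries in a row; assuming independence,
# the probabilities multiply.
p_byte = p_bit ** 8
print(round(p_byte * 100, 2))  # prints 0.97 (per cent), matching the article
```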

Nevertheless, that doesn't stop the vendors of data-wiping programs
offering software that overwrites data up to 35 times, based on
decades-old security standards that were developed for diskettes.
Although this may give a data wiper the psychological satisfaction of
having done a thorough job, it's a pure waste of time.

Something much more important, from a security point of view, is
actually to overwrite all copies of the data that are to be deleted. If
a sensitive document has been edited on a PC, overwriting the file is
far from sufficient because, during editing, the data have been saved
countless times to temporary files, back-ups, shadow copies, swap files
... and who knows where else? Really, to ensure that nothing more can be
recovered from a hard disk, it has to be overwritten completely, sector
by sector. Although this takes time, it costs nothing: the dd command in
any Linux distribution will do the job perfectly.
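For illustration, a single-pass zero overwrite can be sketched in a few lines of Python; it has the same effect over a byte range as the dd invocation mentioned above. It is shown here against an ordinary file (pointing it at a block device node would destroy everything on that device), and `zero_overwrite` is our own illustrative helper, not a standard tool.

```python
# A sketch of a single-pass zero overwrite, the same effect as
# `dd if=/dev/zero of=TARGET` over the target's existing bytes.

import os

def zero_overwrite(path, block_size=1024 * 1024):
    """Overwrite every existing byte of `path` with zeros."""
    size = os.path.getsize(path)
    zeros = b"\x00" * block_size
    with open(path, "r+b") as f:
        remaining = size
        while remaining > 0:
            n = min(block_size, remaining)
            f.write(zeros[:n])
            remaining -= n
        f.flush()
        os.fsync(f.fileno())  # force the zeros out to the device
```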

(djwm)

http://www.heise-online.co.uk/news/Secure-deletion-a-single-overwrite-will-do-it--/112432


T.I.S.P.  -  Have your qualification certified
9-13 March 2009 - http://www.secorvo.de/college/tisp/
-
Stefan Kelm
Security Consulting

Secorvo Security Consulting GmbH
Ettlinger Strasse 12-14, D-76137 Karlsruhe
Tel. +49 721 255171-304, Fax +49 721 255171-100
stefan.k...@secorvo.de, http://www.secorvo.de/
PGP: 87AE E858 CCBC C3A2 E633 D139 B0D9 212B

Mannheim HRB 108319, Geschaeftsfuehrer: Dirk Fox



Re: MD5 considered harmful today, SHA-1 considered harmful tomorrow

2009-01-20 Thread Darren J Moffat

Paul Hoffman wrote:

 At 12:24 PM +0100 1/12/09, Weger, B.M.M. de wrote:

  When in 2012 the winner of the
  NIST SHA-3 competition will be known, and everybody will start
  using it (so that, according to Peter's estimates, by 2018 half
  of the implementations actually use it), do we then have enough
  redundancy?

 No offense, Benne, but are you serious? Why would everybody even consider it?
 Given what we know about the design of SHA-2 (too little), how would we know whether SHA-3
 is any better than SHA-2 for applications such as digital certificates?

 Specifically, if most systems have implemented the whole SHA-2 family by the
 time SHA-3 is settled, and a problem is then found in SHA-2/256, I would
 argue that it is probably much more prudent to change to SHA-2/384 than to
 SHA-3/256. SHA-2/384 will most likely be much slower than SHA-3/256, but it
 will have had significantly more study.


Can you state the assumptions for why you think that moving to SHA-384
would be safe if SHA-256 was considered vulnerable in some way, please?


SHA-256, -384 and -512 are a suite, all built on the same basic algorithmic
construction.  Depending on how SHA-256 fell, the whole suite could be
vulnerable irrespective of the digest length, or maybe it wouldn't be.


Until we know how the SHA-3 digest is actually constructed, the same could
even be true of that.


I don't think it depends at all on who you trust but on what algorithms
are available in the protocols you need to use to run your business or
use the apps important to you for some other reason.  It also very much
depends on why the app uses the crypto algorithm in question, and in the
case of digest/hash algorithms, whether they are keyed (HMAC) or not.


--
Darren J Moffat



Re: MD5 considered harmful today, SHA-1 considered harmful tomorrow

2009-01-20 Thread Paul Hoffman
At 1:38 PM +0000 1/19/09, Darren J Moffat wrote:
Can you state the assumptions for why you think that moving to SHA-384 would be
safe if SHA-256 was considered vulnerable in some way, please?

Sure. I need 128 bits of pre-image protection for, say, a digital signature. 
SHA2/256 is giving me that. Then, due to some weakness, it is only giving me 
112 bits of protection. The weakness is understood in the crypto community, and 
it's a straight-line loss of bits of protection.

SHA2/384 would then give me 168 bits of protection, which is more than the
128 bits I need.

Even if you don't trust that there is a straight-line loss of bits, you would
have to believe that the attack is much worse for SHA2/384 than it was for
SHA2/256 in order to bring the output down to the level that I need.
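Paul's arithmetic, under his stated straight-line assumption (read here as a proportional loss of strength), can be checked directly:

```python
# Nominal strength taken as half the digest length, as in Paul's figures.
nominal_256, weakened_256 = 128, 112   # SHA2/256 before and after the weakness
nominal_384 = 192                      # SHA2/384 nominal strength

# Apply the same proportional loss to SHA2/384.
weakened_384 = nominal_384 * weakened_256 // nominal_256
print(weakened_384)  # 168 bits, still above the 128 required
```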

--Paul Hoffman, Director
--VPN Consortium



Re: MD5 considered harmful today, SHA-1 considered harmful tomorrow

2009-01-20 Thread Victor Duchovni
On Mon, Jan 19, 2009 at 10:45:55AM +0100, Bodo Moeller wrote:

 The RFC does exist (TLS 1.2 in RFC 5246 from August 2008 makes SHA-256
 mandatory), so you can send a SHA-256 certificate to clients that
 indicate they support TLS 1.2 or later.  You'd still need some other
 certificate for interoperability with clients that don't support
 SHA-256, of course, and you'd be sending that one to clients that do
 support SHA-256 but not TLS 1.2.  (So you'd fall back to SHA-1, which
 is not really a problem when CAs make sure to use the hash algorithm
 in a way that doesn't rely on hash collisions being hard to find,
 which probably is a good idea for *any* hash algorithm.)

It would be helpful if, as a first step, SSL_library_init() (a.k.a.
OpenSSL_add_ssl_algorithms()) enabled the SHA-2 family of digests.
I would make this change in the 0.9.9 development snapshots.

[ Off topic: I find OpenSSL release-engineering a rather puzzling
process. The patch releases are in fact feature releases, and there
are no real patch releases even for critical security issues.  I chose
to backport the 0.9.8j security fixes to 0.9.8i and sit out all the
new FIPS code, ... This should not be necessary. I really hope to see
real OpenSSL patch releases some day with development of new features
*strictly* in the development snapshots. Ideally this will start with
0.9.9a, with no new features, just bugfixes, in [b-z]. ]

-- 
Viktor.



Re: MD5 considered harmful today, SHA-1 considered harmful tomorrow

2009-01-20 Thread Steven M. Bellovin
On Mon, 19 Jan 2009 10:45:55 +0100
Bodo Moeller bmoel...@acm.org wrote:

 On Sat, Jan 17, 2009 at 5:24 PM, Steven M. Bellovin
 s...@cs.columbia.edu wrote:
 
  I've mentioned it before, but I'll point to the paper Eric Rescorla
  wrote a few years ago:
  http://www.cs.columbia.edu/~smb/papers/new-hash.ps or
  http://www.cs.columbia.edu/~smb/papers/new-hash.pdf .  The bottom
  line: if you're running a public-facing web server, you *can't*
  offer a SHA-2 certificate because you have no way of knowing if the
  client supports SHA-2. Fixing that requires a TLS fix; see the
  above timeline for that.
 
 The RFC does exist (TLS 1.2 in RFC 5246 from August 2008 makes SHA-256
 mandatory), so you can send a SHA-256 certificate to clients that
 indicate they support TLS 1.2 or later.  You'd still need some other
 certificate for interoperability with clients that don't support
 SHA-256, of course, and you'd be sending that one to clients that do
 support SHA-256 but not TLS 1.2.  (So you'd fall back to SHA-1, which
 is not really a problem when CAs make sure to use the hash algorithm
 in a way that doesn't rely on hash collisions being hard to find,
 which probably is a good idea for *any* hash algorithm.)
 
So -- who supports TLS 1.2?  (Btw -- note the date of that RFC: August
2008.  That's almost exactly 3 years after ekr and I published our
paper.  Since ekr is co-chair of the TLS working group, we can assume
that that group was aware of the problem.  See what Peter and I said
about how long it takes to get any changes deployed.)



Re: MD5 considered harmful today, SHA-1 considered harmful tomorrow

2009-01-20 Thread Peter Gutmann
Steven M. Bellovin s...@cs.columbia.edu writes:

So -- who supports TLS 1.2?

Not a lot, I think.  The problem with 1.2 is that it introduces a pile of
totally gratuitous incompatible changes to the protocol that require quite a
bit of effort to implement (TLS 1.1 -> 1.2 is at least as big a step, if not a
bigger step, than the change from SSL to TLS), complicate an implementation,
are difficult to test because of the general lack of implementations
supporting it, and provide no visible benefit.  Why would anyone rush to
implement this when what we've got now works[0] just fine?

Peter.

[0] For whatever level of works applies to SSL/TLS, in the sense that 1.2
won't work any better than 1.1 does.



Re: MD5 considered harmful today, SHA-1 considered harmful tomorrow

2009-01-20 Thread Jon Callas
 I have a general outline of a timeline for adoption of new crypto
 mechanisms (e.g. OAEP, PSS, that sort of thing, and not specifically
 algorithms) in my Crypto Gardening Guide and Planting Tips,
 http://www.cs.auckland.ac.nz/~pgut001/pubs/crypto_guide.txt
 (see Question J, about 2/3 of the way down).  It's not meant to be
 definitively accurate for all cases but was created as a rough
 guideline for people proposing to introduce new crypto mechanisms, to
 give an idea of how long they should expect to wait to see them
 adopted.


I've always been pleased with your answer to Question J, so I'll say  
what we're doing at PGP.


We deprecated MD5 in '97. That was one of the main points of the new
formats that became OpenPGP: agility has its own challenges, but it's
worth it.


We had a meeting recently to look at what we're going to do. Our first  
thoughts were that we would scrub MD5 from the UI and be done with it.  
Then we realized that we need to leave enough of the old UI so that  
people can *remove* MD5 from their use.


We decided that we'll issue warnings in the annotations when we verify  
MD5 signatures. We can't stop verifying them, but we'll do an  
equivalent to what we do with 40-bit crypto in S/MIME. (40-bit still  
harries S/MIME; it's really a pity that we have to deal with it. Our  
solution is that 40-bit crypto is just a fancy form of plaintext. We  
decode it the way we decode quoted-printable, base64, and other fancy  
forms of plaintext.) We debated removing it from the APIs, and  
concluded that that is asking for trouble, because someone will need  
to do that for diagnostic and testing purposes.


We've started deprecating the 160-bit hashes. There will be comments  
in the UI for both SHA-1 and RIPE-MD/160. We think NIST's advice for  
phasing them out next year is just fine, and so we'll start really  
phasing them out next year.


Lastly, we considered other options for hash algorithms. Presently,  
it's too early to do anything, but we'll look at it again when we do  
more work on the 160-bit hashes.


Jon



Re: MD5 considered harmful today, SHA-1 considered harmful tomorrow

2009-01-20 Thread Nicolas Williams
On Mon, Jan 19, 2009 at 01:38:02PM +0000, Darren J Moffat wrote:
 I don't think it depends at all on who you trust but on what algorithms 
 are available in the protocols you need to use to run your business or 
 use the apps important to you for some other reason.   It also very much 
 depends on why the app uses the crypto algorithm in question, and in the 
 case of digest/hash algorithms, whether they are keyed (HMAC) or not.

As Jeff Hutzelman suggested recently, inspired by the SSHv2 CBC mode
vulnerability, hash algorithm agility for PKI really means having more
than one signature, each using a different hash, in each certificate;
this enlarges certificates.  Alternatively, it needs to be possible to
select what certificate to present to a peer based on an algorithm
negotiation; this tends to mean adding round-trips to our protocols.

Nico
-- 



Re: [heise online UK] Secure deletion: a single overwrite will do it

2009-01-20 Thread Jason


On Mon, 19 Jan 2009, Stefan Kelm wrote:

... and who knows where else? Really, to ensure that nothing more can be
recovered from a hard disk, it has to be overwritten completely, sector
by sector. Although this takes time, it costs nothing: the dd command in
any Linux distribution will do the job perfectly.


I agree in general, although you still have to watch out for reserve tracks 
(search on this page):


http://forum.hddguru.com/seagate-terminal-commands-t6411.html

All hard disks have reserved sectors, which are used automatically by the
drive logic if there is a defect in the media:


http://cisn.metu.edu.tr/97-2/hardware.html

Those could perhaps be used to smuggle data out of a wiped disk.  Or, if your 
disk firmware is (or someday becomes) clever enough to transparently swap out 
dying sectors with those from its reserved store, you could accidentally end 
up with data on the disk that dd would miss.




Re: [heise online UK] Secure deletion: a single overwrite will do it

2009-01-20 Thread dan

Peter Gutmann has responded

http://www.cs.auckland.ac.nz/~pgut001/pubs/secure_del.html

(see the Further Epilogue section well down the page)

--dan

