Re: [Cryptography] NIST about to weaken SHA3?
On Mon, Sep 30, 2013 at 05:45:52PM +1000, James A. Donald wrote:

> On 2013-09-30 14:34, Viktor Dukhovni wrote:
> > On Mon, Sep 30, 2013 at 05:12:06AM +0200, Christoph Anton Mitterer wrote:
> > > Not sure whether this has been pointed out / discussed here already
> > > (but I guess Perry will reject my mail in case it has):
> > > https://www.cdt.org/blogs/joseph-lorenzo-hall/2409-nist-sha-3
> >
> > I call FUD. If progress is to be made, fight the right fights. The
> > SHA-3 specification was not weakened; the blog confuses the effective
> > security of the algorithm with the *capacity* of the sponge
> > construction.
>
> SHA3 has been drastically weakened from the proposal that was submitted
> and cryptanalyzed: see for example slides 43 and 44 of
> https://docs.google.com/file/d/0BzRYQSHuuMYOQXdHWkRiZXlURVE/edit

Have you read the SAKURA paper? http://eprint.iacr.org/2013/231.pdf

In section 6.1 it describes 4 capacities for the SHA-2 drop-in
replacements, and in 6.2 these are simplified to two (and strengthened
for the truncated digests), i.e. the proposal chosen by NIST.

Should one also accuse ESTREAM of maliciously weakening SALSA? Or might
one admit the possibility that winning designs in contests are at times
quite conservative, and that one can reasonably standardize less
conservative parameters that are more competitive in software?

If SHA-3 is going to be used, it needs to offer some advantages over
SHA-2. Good performance and built-in support for tree hashing (ZFS, ...)
are acceptable reasons to make the trade-off explained on slides 34, 35
and 36 of:

https://ae.rsaconference.com/US13/connect/fileDownload/session/397EA47B1FB103F0B3E87D6163C7129E/CRYP-W23.pdf

-- 
	Viktor.

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography
Re: [Cryptography] NIST about to weaken SHA3?
On Tue, Oct 01, 2013 at 07:21:03AM +1000, James A. Donald wrote:

> On 2013-10-01 00:44, Viktor Dukhovni wrote:
> > Should one also accuse ESTREAM of maliciously weakening SALSA? Or
> > might one admit the possibility that winning designs in contests are
> > at times quite conservative, and that one can reasonably standardize
> > less conservative parameters that are more competitive in software?
>
> less conservative means weaker.

Weakening SHA3 to gain cryptanalytic advantage does not make much
sense. SHA3 collisions or preimages even at 80-bit cost don't provide
anything interesting to a cryptanalyst, and MITM attackers will attack
much softer targets.

> We know exactly why it was weakened.

The proposed SHA3-256 digest gives 128 bits of security for both
collisions and preimages. Likewise, the proposed SHA3-512 digest gives
256 bits of security for both collisions and preimages.

> Weaker in ways that the NSA has examined, and the people that chose
> the winning design have not.

The lower capacity is not weaker in obscure ways. If Keccak delivers
substantially less than c/2 security, then it should not have been
chosen at all. If you believe that 128-bit preimage and collision
resistance is inadequate in combination with AES128, or 256-bit preimage
and collision resistance is inadequate in combination with AES256,
please explain.

> Why then hold a contest and invite outside scrutiny in the first
> place?

The contest led to an excellent new hash function design.

> This is simply a brand new unexplained secret design emerging from the
> bowels of the NSA, which already gave us a variety of backdoored
> crypto.

Just because they're after you, doesn't mean they're controlling your
brain with radio waves. Don't let FUD cloud your judgement.

-- 
	Viktor.
Re: [Cryptography] RSA equivalent key length/strength
On Mon, Sep 30, 2013 at 10:07:14AM +1000, James A. Donald wrote:

> Therefore, everyone should use Curve25519, which we have every reason
> to believe is unbreakable.

Superseded by the improved Curve1174.

http://cr.yp.to/elligator/elligator-20130527.pdf

-- 
	Viktor.
Re: [Cryptography] NIST about to weaken SHA3?
On Mon, Sep 30, 2013 at 05:12:06AM +0200, Christoph Anton Mitterer wrote:

> Not sure whether this has been pointed out / discussed here already
> (but I guess Perry will reject my mail in case it has):
> https://www.cdt.org/blogs/joseph-lorenzo-hall/2409-nist-sha-3

I call FUD. If progress is to be made, fight the right fights. The
SHA-3 specification was not weakened; the blog confuses the effective
security of the algorithm with the *capacity* of the sponge
construction.

The actual NIST proposal strengthens SHA-3 relative to the authors'
most performant proposal (http://eprint.iacr.org/2013/231.pdf section
6.1) by rounding up the capacity of the sponge construction to 256 bits
for both SHA3-224 and SHA3-256, and up to 512 bits for both SHA3-384
and SHA3-512 (matching the proposal in section 6.2).

The result is that the 256-bit capacity variant gives 128-bit security
against both collision and first-preimage attacks, while the 512-bit
capacity variant gives 256-bit security. This removes the asymmetry in
the security properties of the hash.

Yes, this is a performance trade-off, but it seems entirely reasonable.
Do you really need 256 bits of preimage resistance with 128-bit
ciphersuites, or 512 bits of preimage resistance with 256-bit
ciphersuites? SHA2-256's 256 bits of preimage resistance was not a
design requirement; rather, it needed 128 bits of collision resistance,
and the stronger preimage resistance is an artifact of the construction.

For a similar sentiment see:

http://crypto.stackexchange.com/questions/10008/why-restricting-sha3-to-have-only-two-possible-capacities

-- 
	Viktor.
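The capacity-vs-security arithmetic above can be tabulated directly. A
minimal sketch, using the standard generic ("flat sponge") bounds for a
sponge with capacity c and an n-bit digest; the function name is mine,
not from any standard:

```python
def sponge_security(capacity, digest_bits):
    """Generic security levels (in bits) of a sponge-based hash:
    collision resistance is min(n/2, c/2), preimage resistance is
    min(n, c/2).  Illustrates why c = 256 yields 128-bit security
    for both attacks, and c = 512 yields 256-bit security."""
    collision = min(digest_bits // 2, capacity // 2)
    preimage = min(digest_bits, capacity // 2)
    return collision, preimage

for name, c, n in [("SHA3-256", 256, 256), ("SHA3-512", 512, 512)]:
    col, pre = sponge_security(c, n)
    print(f"{name}: collision {col} bits, preimage {pre} bits")
```

Note how the symmetric bounds fall out: with c equal to the digest
length, both attack costs meet at c/2, which is exactly the "removed
asymmetry" described above.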
Re: [Cryptography] RSA equivalent key length/strength
On Fri, Sep 27, 2013 at 11:23:27AM -0400, Phillip Hallam-Baker wrote:

> Actually, it turns out that the problem is that the client croaks if
> the server tries to use a key size that is bigger than it can handle.
> Which means that there is no practical way to address it server side
> within the current specs.

Or smaller (e.g. GnuTLS minimum client-side EDH strength). And given
that with EDH there is as yet no TLS extension that allows the client
to advertise the range of supported EDH key lengths (with EECDH the
client can communicate supported curves), there is no timely
incremental path to stronger EDH parameters.

In addition to the protocol obstacles we also have API obstacles, since
the protocol values need to be communicated to applications that
provide appropriate parameters for the selected strength (EDH or
EECDH). In OpenSSL 1.0.2 there is apparently a new interface for
server-side EECDH curve selection that takes client capabilities into
account. For EDH there is need for an appropriate new extension, and
new interfaces to pass the parameters to the server application.
Deploying more capable software will take O(10 years).

We could perhaps get there a bit faster if the toolkits selected from a
fixed set of suitable parameters and did not require application
changes, but this has the drawback of juicier targets for
cryptanalysis.

So multiple things need to be done:

- For now, enable 1024-bit EDH with different parameters at each
  server, changed from time to time. Avoid non-interoperable parameter
  choices; that is counter-productive.

- Publish a new TLS extension that allows clients to advertise
  supported EDH parameter sizes. Extend TLS toolkit APIs to expose this
  range to the server application. Upgrade toolkit client software to
  advertise the supported EDH parameter range.

- Enable EECDH with secp256r1 (and friends) unless it is reasonably
  believed to have been cooked for efficient DLP by its creators.

- Standardize new EECDH curves (e.g. DJB's Curve1174).

-- 
	Viktor.
P.S. For SMTP transport security deploy DNSSEC and DANE TLSA. I'm
hoping at least one of the larger service providers will do this in the
not too distant future. Postfix (official release 2.11) will support
this in early 2014. Exim will take a bit longer, as they're cutting a
release now, and the DANE support is not yet there. The other MTAs
will, I hope, follow along in due course. The SMTP backbone
(inter-domain SMTP via MX records, ...) can be upgraded to use
downgrade-resistant authenticated TLS.
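For the curious, a "3 1 1" TLSA record of the kind mentioned above
binds the SHA2-256 digest of the server certificate's DER-encoded
SubjectPublicKeyInfo to the service. A minimal sketch of computing the
record data; the key bytes and the hostname below are placeholders, and
a real deployment would extract the SPKI from the actual certificate:

```python
import hashlib

def tlsa_3_1_1(spki_der: bytes) -> str:
    """RRDATA for a DANE-EE(3) SPKI(1) SHA2-256(1) TLSA record:
    certificate usage 3, selector 1 (SubjectPublicKeyInfo),
    matching type 1 (SHA2-256)."""
    return "3 1 1 " + hashlib.sha256(spki_der).hexdigest()

# Placeholder bytes standing in for a real DER-encoded public key:
fake_spki = bytes.fromhex("30820122300d06092a864886f70d01010105000382010f00")
print("_25._tcp.mail.example.com. IN TLSA", tlsa_3_1_1(fake_spki))
```

The record name encodes the port and transport ("_25._tcp") followed by
the MX hostname, so the association is specific to the SMTP service.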
Re: [Cryptography] RSA equivalent key length/strength
On Sat, Sep 21, 2013 at 05:07:02PM -0700, Patrick Pelletier wrote:

> and there was a similar discussion on the OpenSSL list recently, with
> GnuTLS getting blamed for using the ECRYPT recommendations rather than
> 1024:
> http://www.mail-archive.com/openssl-users@openssl.org/msg71899.html

GnuTLS shows reasonably sound engineering in electing 2048-bit groups
by default on the TLS server. This inter-operates with the majority of
clients; all the client has to do is NOT artificially limit its
implementation to 1024-bit EDH.

GnuTLS fails basic engineering principles when it sets a lower bound of
2048-bit EDH in its TLS client code. TLS clients do not negotiate the
DH parameters, only the use of EDH, and most server implementations
deployed today will offer 1024-bit EDH groups even when the symmetric
cipher key length is substantially stronger. Having GnuTLS clients fail
to connect to most servers (with, e.g., opportunistic TLS SMTP failing
over to plain-text as a result) is not helping anyone!

To migrate the world to stronger EDH, the GnuTLS authors should work
with the other toolkit implementors, in parallel with and through the
IETF, to get all servers to move to stronger groups. Once that's done,
and the updated implementations are widely deployed, raise the client
minimum EDH group sizes. Unilaterally raising the client lower bound is
just, to put it bluntly, pissing into the wind.

-- 
	Viktor.
Re: [Cryptography] PRISM-Proofing and PRISM-Hardening
On Tue, Sep 17, 2013 at 11:48:40PM -0700, Christian Huitema wrote:

> > Given that many real organizations have hundreds of front end
> > machines sharing RSA private keys, theft of RSA keys may very well
> > be much easier in many cases than broader forms of sabotage.
>
> Or we could make it easy to have one separate RSA key per front end,
> signed using the main RSA key of the organization.

This is only realistic with DANE TLSA (certificate usage 2 or 3), and
thus will start to be realistic for SMTP next year (provided DNSSEC
gets off the ground) with the release of Postfix 2.11, and with luck
also a DANE-capable Exim release.

For HTTPS, there is little indication yet that any of the major
browsers are likely to implement DANE support in the near future.

-- 
	Viktor.
Re: [Cryptography] PRISM-Proofing and PRISM-Hardening
On Wed, Sep 18, 2013 at 08:04:04PM +0100, Ben Laurie wrote:

> > This is only realistic with DANE TLSA (certificate usage 2 or 3),
> > and thus will start to be realistic for SMTP next year (provided
> > DNSSEC gets off the ground) with the release of Postfix 2.11, and
> > with luck also a DANE-capable Exim release.
>
> What's wrong with name-constrained intermediates?

X.509 name constraints (critical extensions in general) typically don't
work.

-- 
	Viktor.
Re: [Cryptography] PRISM-Proofing and PRISM-Hardening
On Wed, Sep 18, 2013 at 08:47:17PM +0000, Viktor Dukhovni wrote:

> On Wed, Sep 18, 2013 at 08:04:04PM +0100, Ben Laurie wrote:
>
> > > This is only realistic with DANE TLSA (certificate usage 2 or 3),
> > > and thus will start to be realistic for SMTP next year (provided
> > > DNSSEC gets off the ground) with the release of Postfix 2.11, and
> > > with luck also a DANE-capable Exim release.
> >
> > What's wrong with name-constrained intermediates?
>
> X.509 name constraints (critical extensions in general) typically
> don't work.

And public CAs don't generally sell intermediate CAs with name
constraints. Rather undercuts their business model.

-- 
	Viktor.
Re: [Cryptography] PRISM-Proofing and PRISM-Hardening
On Tue, Sep 17, 2013 at 05:01:12PM -0400, Perry E. Metzger wrote:

> (Note that this assumes no cryptographic breakthroughs like doing
> discrete logs over prime fields easily or (completely theoretical
> since we don't really know how to do it) sabotage of the elliptic
> curve system in use.)

Given that many real organizations have hundreds of front end machines
sharing RSA private keys, theft of RSA keys may very well be much
easier in many cases than broader forms of sabotage.

There is also, I suspect, a lot of software with compiled-in EDH primes
(RFC 5114 or other). Without breaking EDH generally, perhaps they have
better precomputation attacks that were effective against the more
popular groups. I would certainly recommend that each server generate
its own EDH parameters, and change them from time to time. Sadly, when
choosing between a 1024-bit and a 2048-bit EDH prime, you get one of
interoperability or best-practice security, but not both.

And indeed the FUD around the NIST EC curves is rather unfortunate. Is
secp256r1 better or worse than 1024-bit EDH?

-- 
	Viktor.
Re: [Cryptography] People should turn on PFS in TLS (was Re: Fwd: NYTimes.com: N.S.A. Foils Much Internet Encryption)
On Tue, Sep 10, 2013 at 12:56:16PM -0700, Bill Stewart wrote:

> I thought the normal operating mode for PFS is that there's an initial
> session key exchange (typically RSA) and authentication, which is used
> to set up an encrypted session, and within that session there's a DH
> or ECDH key exchange to set up an ephemeral session key, and then that
> session key is used for the rest of the session.

This is not the case in TLS. The EDH or EECDH key exchange is performed
in the clear. The server's EDH parameters are signed with the server's
private key.

https://tools.ietf.org/html/rfc2246#section-7.4.3

In TLS with EDH (aka PFS), breaking the public-key algorithm of the
server certificate enables active attackers to impersonate the server
(including MITM attacks). Breaking the Diffie-Hellman or EC
Diffie-Hellman algorithm used allows a passive attacker to recover the
session keys (the break must be repeated for each target session); this
holds even if the certificate public-key algorithm remains secure.

-- 
	Viktor.
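The per-session property described above can be sketched with a toy
finite-field DH exchange. The group parameters below are deliberately
textbook-tiny and insecure, for illustration only: each side's secret
exponent is discarded after the handshake, which is why compromising
the server's long-term signing key later does not reveal a recorded
session's key, while each session's DH exchange must be broken
separately:

```python
import secrets

# Toy DH group -- illustration only, NOT a secure parameter set.
# Real EDH uses a prime of 1024+ bits; in TLS the server's ephemeral
# public value B is signed with its long-term certificate key.
P, G = 23, 5

def dh_session():
    """One ephemeral exchange; returns both sides' derived keys."""
    a = secrets.randbelow(P - 2) + 1   # client's ephemeral secret
    b = secrets.randbelow(P - 2) + 1   # server's ephemeral secret
    A = pow(G, a, P)                   # sent in the clear
    B = pow(G, b, P)                   # sent in the clear (signed)
    return pow(B, a, P), pow(A, b, P)  # same shared secret

client_key, server_key = dh_session()
assert client_key == server_key
```

Running `dh_session()` twice yields independent keys, which is the
"repeated for each target session" cost borne by a passive attacker.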
[Cryptography] Time for djb's Edwards curves in TLS?
Is there a TLS WG draft adding djb's Curve1174 to the list of named
curves supported by TLS? If there's credible doubt about the safety of
the NIST curves, it seems that Curve1174 (in Edwards form) would make a
good choice for EECDH, perhaps coupled with a similar curve with ~512
bits.

Slides with rationale:

http://cr.yp.to/talks/2013.05.31/slides-dan+tanja-20130531-4x3.pdf

Detailed paper motivating Curve1174:

http://cr.yp.to/elligator/elligator-20130527.pdf

The current situation, with EECDH over the NIST prime curves (not shown
compromised, but no longer trusted), is rather sub-optimal.

-- 
	Viktor.
[Cryptography] Speaking of EDH (GnuTLS interoperability)
Some of you may have seen my posts to postfix-users and openssl-users;
if so, apologies for the duplication.

http://archives.neohapsis.com/archives/postfix/2013-09/thread.html#80
http://www.mail-archive.com/openssl-users@openssl.org/index.html#71903

The short version is that while everyone is busily implementing EDH,
they may run into some interoperability issues. GnuTLS clients by
default insist on a minimum EDH prime size that is not generally
interoperable (2432 bits). Since the TLS protocol only negotiates the
use of EDH, but not the prime size (the EDH parameters are unilaterally
announced by the server), this setting, while cryptographically sound,
is rather poor engineering.

The context in which this was discovered is also amusing. Exim uses
GnuTLS and has a work-around to drop the DH prime floor to 1024 bits,
which is interoperable in practice. Debian, however, wanted to improve
Exim to make it more secure, so the floor was raised to 2048 bits in a
Debian patch. As a result, STARTTLS from Debian's Exim (before sanity
was restored in Exim 4.80-3 in Debian wheezy; AFAIK it is still broken
in Debian squeeze) fails with Postfix, Sendmail, and other SMTP
servers. In all probability this stronger version of Exim then
needlessly sends mail without TLS, since with SMTP, TLS is typically
opportunistic, and likely after TLS fails delivery is retried in the
clear!

-- 
	Viktor.

P.S. Shameless off-topic plug: if you want better than opportunistic
TLS for email, consider adopting DNSSEC for your domains and publishing
TLSA RRs for your SMTP servers. Postfix supports DANE as of
2.11-20130825. See

https://tools.ietf.org/html/draft-dukhovni-smtp-opportunistic-tls-01
http://www.postfix.org/TLS_README.html#client_tls_dane

Make sure to publish either "IN TLSA 3 1 1" or "IN TLSA 2 1 1"
certificate associations.
Re: [Cryptography] Techniques for malevolent crypto hardware
On Sun, Sep 08, 2013 at 06:16:45PM -0400, John Kelsey wrote:

> I don't think you can do anything useful in crypto without some good
> source of random bits. If there is a private key somewhere (say, used
> for signing, or the public DH key used alongside the ephemeral one),
> you can combine the hash of that private key into your PRNG state.
> The result is that if your entropy source is bad, you get security to
> someone who doesn't compromise your private key in the future, and if
> your entropy source is good, you get security even against someone who
> compromises your private key in the future (that is, you get perfect
> forward secrecy).

Nice in theory, of course, but in practice applications don't write
their own PRNGs. They use whatever the SSL library provides: OpenSSL,
GnuTLS, ... If we assume weak PRNGs in the toolkit (or crypto chip,
...), then EDH could be weaker than RSA key exchange (provided the
server's key is strong enough). The other concern is that in practice
many EDH servers offer 1024-bit primes, even after upgrading the
certificate strength to 2048 bits.

Knee-jerk reactions to very murky information may be
counter-productive. Until there are more specific details, it is far
from clear which is better:

- RSA key exchange with a 2048-bit modulus.
- EDH with a (typically) 1024-bit per-site strong-prime modulus.
- EDH with the RFC 5114 2048-bit modulus and 256-bit q subgroup.
- EECDH using secp256r1.

Until there is credible information one way or the other, it may be
best to focus on things we already know make sense:

- Keep up with end-point software security patches.
- Avoid already-known weak crypto (RC4?).
- Make sure VM provisioning includes initial PRNG seeding.
- Save entropy across reboots.
- ...

Yes, PFS addresses after-the-fact server private key compromise, but
there is some risk that we don't know which, if any, of the PFS
mechanisms to trust, and implementations are not always well engineered
(see my post about GnuTLS and interoperability).

-- 
	Viktor.
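Kelsey's suggestion above can be sketched as hashing the long-term
private key into the seed material. A minimal illustration, with
placeholder key and entropy bytes; a real implementation would feed
this into the toolkit's DRBG rather than use raw SHA-256 output
directly, and the domain-separation label is my own invention:

```python
import hashlib

def seed_prng(entropy: bytes, private_key: bytes) -> bytes:
    """Derive a 32-byte PRNG seed from gathered entropy mixed with a
    hash of a long-term private key.  If the entropy is weak, the seed
    is still unpredictable to anyone who never learns the private key;
    if the entropy is strong, a later key compromise does not reveal
    past seeds (forward secrecy of the seed material)."""
    return hashlib.sha256(
        b"prng-seed-v1"                          # domain separation
        + hashlib.sha256(private_key).digest()   # never expose raw key
        + entropy
    ).digest()

seed = seed_prng(b"possibly-weak-entropy", b"placeholder RSA key bytes")
```

Hashing the key before mixing keeps raw key bytes out of the PRNG
state, so a state compromise does not leak the key itself.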