Re: [Cryptography] RSA recommends against use of its own products.
On 2013-09-27 09:54, Phillip Hallam-Baker wrote:

> Quite, who on earth thought DER encoding was necessary or anything other
> than incredible stupidity? I have yet to see an example of code in the
> wild that takes a binary data structure, strips it apart and then attempts
> to reassemble it to pass to another program to perform a signature check.
> Yet every time we go through a signature format development exercise the
> folk who demand canonicalization always seem to win.
>
> DER is particularly evil as it requires either the data structures to be
> assembled in the reverse order, or a very complex tracking of the sizes of
> the data objects, or horribly inefficient code. But XML signature just
> ended up broken.

We have a compiler that generates C code from ASN.1 code. Does it not
generate code behind the scenes that does all this ugly stuff for us,
without us having to look at the code?

I have not actually used the compiler, and I have discovered that
hand-generating code to handle ASN.1 data structures is a very bad idea,
but I am told that if I use the compiler, all will be rainbows and
unicorns.

You go first.

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography
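The reverse-order/size-tracking complaint about DER above comes from its
definite-length TLV format: the byte length of every value must be written
before the value itself, so nested structures have to be built inside-out.
A minimal sketch (toy encoder, not a full ASN.1 implementation):

```python
def der_length(n: int) -> bytes:
    """DER definite-length encoding: short form for lengths < 128,
    else long form (0x80 | count of length bytes, then big-endian length)."""
    if n < 0x80:
        return bytes([n])
    body = n.to_bytes((n.bit_length() + 7) // 8, "big")
    return bytes([0x80 | len(body)]) + body

def der_tlv(tag: int, content: bytes) -> bytes:
    """Tag-Length-Value. The length must be known before the value is
    emitted, which is why DER encoders either build inner structures
    first (back to front) or pre-compute every nested size."""
    return bytes([tag]) + der_length(len(content)) + content

# A SEQUENCE of two INTEGERs: the inner TLVs must exist in full before
# the outer SEQUENCE length can even be written.
inner = der_tlv(0x02, b"\x01") + der_tlv(0x02, b"\x7f")
seq = der_tlv(0x30, inner)
```

Encoding in the natural forward order would require either buffering every
nested value (as above) or walking the whole structure twice to pre-compute
sizes, which is the "horribly inefficient code" option.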
Re: [Cryptography] RSA equivalent key length/strength
On 2013-09-28 01:23, Phillip Hallam-Baker wrote:

> Most cryptolibraries have a hard-coded limit at 4096 bits and there are
> diminishing returns to going above 2048. Going from 4096 to 8192 bits only
> increases the work factor by a very small amount, and they are really
> slow, which means we end up with DoS considerations.
>
> We really need to move to EC above RSA. Only it is going to be a little
> while before we work out which parts have been contaminated by NSA
> interference and which parts are safe from patent litigation.
>
> RIM looks set to collapse with or without the private equity move. The
> company will be bought with borrowed money and the buyers will use the
> remaining cash to pay themselves a dividend. Mitt Romney showed us how
> that works.
>
> We might possibly get lucky and the patents get bought out by a white
> knight. But all the mobile platform providers are in patent disputes right
> now and I can't see it likely someone will plonk down $200 million for a
> bunch of patents and then make the crown jewels open.
>
> Problem with the NSA is that it's Jekyll and Hyde. There is the good side
> trying to improve security and the dark side trying to break it. Which
> side did the push for EC come from?

In fact we do know this. NSA/NIST claimed that their EC curves are provably
random (therefore not backdoored). In fact, they are provably non-random,
selected on an unrevealed basis, which contradiction is, under the
circumstances, compelling evidence that the NIST curves are in fact
backdoored.
Re: [Cryptography] RSA equivalent key length/strength
On 27/09/13 18:23, Phillip Hallam-Baker wrote:

> Problem with the NSA is that it's Jekyll and Hyde. There is the good side
> trying to improve security and the dark side trying to break it. Which
> side did the push for EC come from?

What's in Suite A? That will probably illuminate the question...

iang
Re: [Cryptography] Gilmore response to NSA mathematician's "make rules for NSA" appeal
On 09/27/2013 05:30 AM, james hughes wrote:

> The thing that this list can effect is the creation of standards with a
> valuable respect for Moore's law and increases of mathematical
> understanding. Stated differently, "just enough security" is the problem.
> This past attitude did not respect the very probable future that became a
> reality.

I think there probably are some fair criticisms that we were a bit
complacent after the clipper and export stuff seemed to be sorted out and
the whole NIST/NSA thing with the AES and SHA-3 competitions seemed to be
ticking over nicely.

> Are we going to continue this behavior? IMHO, based on what I have been
> seeing on the TLS list, probably.

That's more than a bit silly though, IMO. The sensible approach here is to:

a) see what's the best we can do now with deployed code, given that we know
it takes years to get anything near everything updated, but also

b) figure out what we want to do, knowing that it'll take years for
deployment to happen no matter how small a change we make.

a) is Yaron's BCP draft; b) is TLS 1.3 (hopefully), and maybe some
extensions for earlier versions of TLS as well.

Arguing for (b) only, and that we ignore (a), would be dumb. For (a), we
are entirely constrained in what we can do: basically, the only thing we
can do is say how to better configure already deployed code.

S.
Re: [Cryptography] RSA equivalent key length/strength
On Fri, Sep 27, 2013 at 11:23:27AM -0400, Phillip Hallam-Baker wrote:

> Actually, it turns out that the problem is that the client croaks if the
> server tries to use a key size that is bigger than it can handle. Which
> means that there is no practical way to address it server side within the
> current specs.

Or smaller (e.g. GnuTLS's minimum client-side EDH strength). And given that
with EDH there is as yet no TLS extension that allows the client to
advertise the range of supported EDH key lengths (with EECDH the client can
communicate supported curves), there is no timely incremental path to
stronger EDH parameters.

In addition to the protocol obstacles we also have API obstacles, since the
protocol values need to be communicated to applications that provide
appropriate parameters for the selected strength (EDH or EECDH). In OpenSSL
1.0.2 there is apparently a new interface for server-side EECDH curve
selection that takes client capabilities into account. For EDH there is a
need for an appropriate new extension, and new interfaces to pass the
parameters to the server application.

Deploying more capable software will take O(10 years). We could perhaps get
there a bit faster if the toolkits selected from a fixed set of suitable
parameters and did not require application changes, but this has the
drawback of creating juicier targets for cryptanalysis.

So multiple things need to be done:

- For now, enable 1024-bit EDH with different parameters at each server,
  changed from time to time. Avoid non-interoperable parameter choices;
  that is counter-productive.

- Publish a new TLS extension that allows clients to advertise supported
  EDH parameter sizes. Extend TLS toolkit APIs to expose this range to the
  server application. Upgrade toolkit client software to advertise the
  supported EDH parameter range.

- Enable EECDH with secp256r1 (and friends) unless it is reasonably
  believed to be cooked for efficient DLP by its creators.

- Standardize new EECDH curves (e.g. DJB's Curve1174).
-- Viktor.

P.S. For SMTP transport security, deploy DNSSEC and DANE TLSA. I'm hoping
at least one of the larger service providers will do this in the not too
distant future. Postfix (official release 2.11) will support this in early
2014. Exim will take a bit longer, as they're cutting a release now, and
the DANE support is not yet there. The other MTAs will, I hope, follow
along in due course. The SMTP backbone (inter-domain SMTP via MX records,
...) can be upgraded to use downgrade-resistant authenticated TLS.
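The EDH-size extension proposed above does not exist; as a sketch of the
server-side logic it would enable, here is a toy selection function,
assuming a hypothetical client-advertised [min, max] range of acceptable
EDH modulus sizes:

```python
def select_edh_bits(server_supported, client_min, client_max):
    """Pick the strongest EDH modulus size both sides accept.

    server_supported: modulus sizes (bits) the server has parameters for.
    client_min/client_max: the client's (hypothetical) advertised range.
    Returns None when there is no overlap, in which case the server would
    fall back to EECDH or a non-PFS suite rather than croak the handshake.
    """
    candidates = [b for b in server_supported if client_min <= b <= client_max]
    if not candidates:
        return None
    return max(candidates)
```

With such an extension the server never sends parameters the client cannot
handle, which is exactly the failure mode that currently blocks moving past
1024-bit EDH.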
[Cryptography] heterotic authority + web-of-trust + pinning
On 09/25/2013 04:59 AM, Peter Gutmann wrote:

> Something that can "sign a new RSA-2048 sub-certificate" is called a CA.
> For a browser, it'll have to be a trusted CA. What I was asking you to
> explain is how the browsers are going to deal with over half a billion
> (source: Netcraft web server survey) new CAs in the ecosystem when
> "websites sign a new RSA-2048 sub-certificate".

There are other ways of thinking about it that make it seem not quite so
bad. There are many approaches to establishing trust. Familiar examples
include:

 *) The top-down authoritarian X.509 approach, such as we see in SSL.
 *) The pinning-only approach, such as we see in SSH.
 *) The web-of-trust approach, such as we see in PGP.

Each of these has some security advantages and disadvantages. Each has some
convenience advantages and disadvantages. My point for today is that one
can combine these in ways that are heterotic, i.e. that show hybrid vigor.

-- The example of combining the CA approach with pinning has already been
mentioned.

-- Let's now discuss how one might combine the CA approach with the
web-of-trust approach. Here's one possible use-case: Suppose you have an
HTTPS web site using a certificate that you bought from some godlike CA.
When it expires, you buy another to replace it. So far so good. However, it
would be even better if you could use the old certificate to sign the new
one. This certifies that you /intend/ for there to be a continuation of the
same security relationship.

[As a detail, in this approach you want a certificate to have three stages
of life: (1) active, in normal use; (2) retired, not in active use, but
still valid for signing its successor; and (3) expired, not used at all.]

This is like PGP in the sense that the new certificate has multiple
signatures, one from the top-down CA and one from the predecessor.
The idea of having multiple signatures is foreign to the hierarchical
authoritarian X.509 way of thinking, but I don't see any reason why this
would be hard to do.

-- Similar heterotic thinking applies to SSH. Suppose I want to replace my
old host key with another. It would be nice to use the old one to /sign/
the new one, so that a legitimate replacement doesn't look like a MITM
attack. (In theory, you could validate a new SSH keypair by distributing
the fingerprint via SSL or PGP, which reduces it to a problem previously
"solved" ... but that's labor-intensive, and AFAICT hardly anybody but me
ever bothers to do it.)

> Something that can "sign a new RSA-2048 sub-certificate" is called a CA.

You could call it that, but you could just call it a /signer/. PGP has
already demonstrated that you can have millions upon millions of signers.
In the use-case sketched above, we don't even need a keyserver. The web
site just offers its public key, plus a certificate signed by the CA, plus
another certificate signed by the predecessor key.

For end-to-end security of email, where it may be that neither end is a
server, some sort of keyserver is probably necessary. This seems like a
manageable problem.

We agree that half a billion CAs would be too many, if they all had the
power to sign anything and everything. Forsooth, my system already has 321
certificates in /etc/ssl/certs, and that seems like waaay too many, IMHO.
That's because the adversary needs to subvert only one of them, and the
adversary gets to pick and choose.

On the other hand, if we think in terms of a /signer/ with much more
limited power, perhaps only the power to countersign a successor cert that
has already been signed by a CA, that sounds to me like a good thing, not a
bad thing.
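The countersigning scheme sketched above is easy to model. The following is
a toy illustration only: HMAC stands in for real public-key signatures, and
all names and key values are made up for the example.

```python
import hashlib
import hmac

def sign(key: bytes, data: bytes) -> bytes:
    # HMAC is a stand-in for a real public-key signature in this toy model.
    return hmac.new(key, data, hashlib.sha256).digest()

def make_successor(old_key: bytes, new_cert_body: bytes, ca_key: bytes) -> dict:
    """A successor cert carries two signatures: one from the CA (authority)
    and one from the predecessor key (continuity of the relationship)."""
    return {
        "body": new_cert_body,
        "ca_sig": sign(ca_key, new_cert_body),
        "predecessor_sig": sign(old_key, new_cert_body),
    }

def verify_continuity(cert: dict, old_key: bytes, ca_key: bytes) -> bool:
    """Accept only if BOTH the CA and the retired predecessor vouch for it."""
    ok_ca = hmac.compare_digest(cert["ca_sig"], sign(ca_key, cert["body"]))
    ok_pred = hmac.compare_digest(cert["predecessor_sig"],
                                  sign(old_key, cert["body"]))
    return ok_ca and ok_pred
```

The point of the model: the predecessor key acts as a limited signer, able
only to countersign a body the CA has also signed, so its compromise alone
does not let an attacker mint a trusted certificate.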
Re: [Cryptography] Gilmore response to NSA mathematician's "make rules for NSA" appeal
http://www.nytimes.com/2013/09/27/opinion/have-a-nice-day-nsa.html

On Sep 25, 2013, at 3:14 PM, John Kelsey wrote:

> Right now, there is a lot of interest in finding ways to avoid NSA
> surveillance. In particular, Germans and Brazilians and Koreans would
> presumably rather not have their data made freely available to the US
> government under what appear to be no restrictions at all. If US companies
> would like to keep the business of Germans and Brazilians and Koreans,
> they probably need to work out a way to convincingly show that they will
> safeguard that data even from the US government.

I think we are in agreement, but I am focused on what this list -can- do
and -can-not- do.

All the large banks have huge systems and processes that protect the
privacy of their customers. It works most of the time, but no large bank
can say they will never have an employee go bad.

My point is that this thread was moving towards the statement that citizens
of country X should use service providers that "eliminate the need for
trust". Because of subpoenas and collaboration, this statement holds
whatever country the service provider is in and whoever the third parties
are. In essence, this is a tautology that has nothing to do with
cryptography. Even if a service provider could "convince you that they
_can't_ betray you", it would either be naiveté or simply be marketing. The
only real way to "eliminate the need for trust" from any service provider
of any kind, in any country (yours or some other country), is to not use
them.

The one problem that this list (cryptography@metzdowd.com) -can- focus on
is that the bar has been set so low that governments are able to break a
few keys and gain access to a lot of information. This is the violation of
trust in the internet that, in part, has been enabled by weak cryptographic
standards (short keys, non-ephemeral keys, subverted algorithms, etc.). I
am not certain that Google could have done anything differently.
Stated differently, Google (and all the world's internet service providers)
are collateral damage.

The thing that this list can effect is the creation of standards with a
valuable respect for Moore's law and increases of mathematical
understanding. Stated differently, "just enough security" is the problem.
This past attitude did not respect the very probable future that became a
reality.

Are we going to continue this behavior? IMHO, based on what I have been
seeing on the TLS list, probably.

Jim
Re: [Cryptography] RSA equivalent key length/strength
On Fri, Sep 27, 2013 at 3:59 AM, John Gilmore wrote:

>> And the problem appears to be compounded by doofus legacy
>> implementations that don't support PFS greater than 1024 bits. This
>> comes from a misunderstanding that DH key sizes only need to be half
>> the RSA length.
>>
>> So to go above 1024 bits PFS we have to either
>>
>> 1) Wait for all the servers to upgrade (i.e. never do it, because they
>> won't upgrade)
>>
>> 2) Introduce a new cipher suite ID for 'yes we really do PFS at 2048
>> bits or above'.
>
> Can the client recover and do something useful when the server has a
> buggy (key length limited) implementation? If so, a new cipher suite ID
> is not needed, and both clients and servers can upgrade asynchronously,
> getting better protection when both sides of a given connection are
> running the new code.

Actually, it turns out that the problem is that the client croaks if the
server tries to use a key size that is bigger than it can handle. Which
means that there is no practical way to address it server side within the
current specs.

> In the case of (2) I hope you mean "yes we really do PFS with an
> unlimited number of bits". 1025, 2048, as well as 16000 bits should work.

There is no reason to use DH longer than the key size in the certificate,
and no reason to use a shorter DH size either.

Most cryptolibraries have a hard-coded limit at 4096 bits and there are
diminishing returns to going above 2048. Going from 4096 to 8192 bits only
increases the work factor by a very small amount, and they are really slow,
which means we end up with DoS considerations.

We really need to move to EC above RSA. Only it is going to be a little
while before we work out which parts have been contaminated by NSA
interference and which parts are safe from patent litigation.

RIM looks set to collapse with or without the private equity move. The
company will be bought with borrowed money and the buyers will use the
remaining cash to pay themselves a dividend.
Mitt Romney showed us how that works.

We might possibly get lucky and the patents get bought out by a white
knight. But all the mobile platform providers are in patent disputes right
now and I can't see it likely that someone will plonk down $200 million for
a bunch of patents and then make the crown jewels open.

Problem with the NSA is that it's Jekyll and Hyde. There is the good side
trying to improve security and the dark side trying to break it. Which side
did the push for EC come from?

-- Website: http://hallambaker.com/
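The "no reason to use DH longer than the certificate key" point rests on
RSA and classic (finite-field) DH having essentially the same estimated
strength for a given modulus size. A minimal sketch of that equivalence,
using the approximate comparable-strength figures from NIST SP 800-57
(intermediate sizes rounded down to the nearest tabulated entry):

```python
# Approximate symmetric-equivalent strengths for finite-field moduli.
# RSA and classic DH share these estimates, which is why a DH group
# larger than the RSA certificate key adds no effective security.
FF_STRENGTH = {1024: 80, 2048: 112, 3072: 128, 7680: 192, 15360: 256}

def equivalent_strength(modulus_bits: int) -> int:
    """Largest tabulated strength whose modulus size does not exceed
    the given size (0 if below every tabulated entry)."""
    usable = [s for b, s in FF_STRENGTH.items() if b <= modulus_bits]
    return max(usable, default=0)
```

Note the flattening: 2048 bits already buys ~112-bit strength, and the
table jumps all the way to 7680 bits before reaching 192, which is the
"diminishing returns" tradeoff discussed above, paid for with much slower
modular exponentiation.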
Re: [Cryptography] RSA recommends against use of its own products.
On Wed, Sep 25, 2013 at 7:18 PM, Peter Gutmann wrote:

> Kristian Gjøsteen writes:
>
>> (For what it's worth, I discounted the press reports about a trapdoor in
>> Dual-EC-DRBG because I didn't think anyone would be daft enough to use
>> it. I was wrong.)
>
> +1. It's the Vinny Gambini effect (from the film My Cousin Vinny):
>
>   Judge Haller: Mr. Gambini, didn't I tell you that the next time you
>   appear in my court that you dress appropriately?
>   Vinny: You were serious about dat?
>
> And it's not just Dual-EC-DRBG that triggers the "You were serious about
> dat?" response, there are a number of bits of security protocols where
> I've been... distinctly surprised that anyone would actually do what the
> spec said.

Quite, who on earth thought DER encoding was necessary or anything other
than incredible stupidity?

I have yet to see an example of code in the wild that takes a binary data
structure, strips it apart and then attempts to reassemble it to pass to
another program to perform a signature check. Yet every time we go through
a signature format development exercise the folk who demand
canonicalization always seem to win.

DER is particularly evil as it requires either the data structures to be
assembled in the reverse order, or a very complex tracking of the sizes of
the data objects, or horribly inefficient code. But XML signature just
ended up broken.

[Just found your ASN.1 dump tool and am using it to debug my C# ASN.1
encoder. OK, so maybe ASN.1 is not terrible if I can put together a
compiler in four days, but I am not using the Assanine 1 schema syntax and
I am using my personal toolchain.]

> (Having said that, I've also occasionally been pleasantly surprised when,
> by unanimous unspoken consensus among implementers, everyone ignored the
> spec and did the right thing.)

I have a theory that the NSA stooges are not the technical folk.
Why on earth would a world-class expert want to spend their time playing
silly games sabotaging specs when they could have much more fun working
inside the NSA at Fort Meade, or building stuff?

What I would do is take a person who is a technical wannabe, provide him
with technical support, and tell him to try to wheedle positions as a
document editor. Extra points if they manage to discourage participation by
folk with solid technical chops.

We saw something of the sort during the anti-spam efforts. I was sure at
the time that the spammers had folk paid to make the discussions as
acrimonious as possible.

-- Website: http://hallambaker.com/
Re: [Cryptography] [cryptography] Asynchronous forward secrecy encryption
- Forwarded message from zooko -

Date: Fri, 27 Sep 2013 00:08:32 +0400
From: zooko
To: Michael Rogers
Cc: Randombit List
Subject: Re: [cryptography] Asynchronous forward secrecy encryption
User-Agent: Mutt/1.5.21 (2010-09-15)

Let me just mention that this conversation is AWESOME. I only wish the
folks over at Perry's Crypto List
(http://www.metzdowd.com/pipermail/cryptography/) knew that we were having
such a great conversation over here.

On Thu, Sep 19, 2013 at 09:20:04PM +0100, Michael Rogers wrote:

> The key reuse issue isn't related to the choice between time-based and
> message-based updates. It's caused by keys and IVs in the current design
> being derived deterministically from the shared secret and the sequence
> number. If an endpoint crashes and restarts, it may reuse a key and IV
> with new plaintext. Not good.

Another defense against this is to generate the IV from the plaintext,
possibly from the plaintext in addition to other stuff. There are three
things that you might want to throw into your IV generator:

1. the plaintext,
2. a persistent secret key used only for this purpose and known only to
   this client,
3. a random nonce read from the operating system.

I would suggest including 1 and 2 but not 3.

This *could* be seen as an alternative to the defense you described:

> In the new design, the temporary keys are still derived deterministically
> from the shared secret, but the IVs and ephemeral keys are random.

Or it could be used as an added, redundant defense. I guess if it is an
added, redundant defense then this is the same as including the random
nonce -- number 3 from the list above.
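The suggestion of deriving the IV from items 1 and 2 can be sketched in a
few lines (a minimal illustration in the spirit of SIV-style deterministic
schemes; the key and IV length here are arbitrary for the example):

```python
import hashlib
import hmac

def derive_iv(iv_key: bytes, plaintext: bytes, iv_len: int = 16) -> bytes:
    """Derive the IV from the plaintext under a separate, client-only
    secret key (items 1 and 2 from the list above). A crash-and-restart
    that re-encrypts the same plaintext reproduces the same IV, instead
    of pairing one IV with two different plaintexts; distinct plaintexts
    yield unrelated IVs."""
    return hmac.new(iv_key, plaintext, hashlib.sha256).digest()[:iv_len]
```

The keyed PRF is what keeps the IV from leaking anything about the
plaintext to observers; a bare hash of the plaintext would confirm guesses
of low-entropy messages.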
Regards,

Zooko

___
cryptography mailing list
cryptogra...@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography

- End forwarded message -

-- Eugen* Leitl http://leitl.org
ICBM: 48.07100, 11.36820 http://ativel.com http://postbiota.org
AC894EC5: 38A5 5F46 A4FF 59B8 336B 47EE F46E 3489 AC89 4EC5
Re: [Cryptography] RSA equivalent key length/strength
> And the problem appears to be compounded by doofus legacy implementations
> that don't support PFS greater than 1024 bits. This comes from a
> misunderstanding that DH key sizes only need to be half the RSA length.
>
> So to go above 1024 bits PFS we have to either
>
> 1) Wait for all the servers to upgrade (i.e. never do it, because they
> won't upgrade)
>
> 2) Introduce a new cipher suite ID for 'yes we really do PFS at 2048 bits
> or above'.

Can the client recover and do something useful when the server has a buggy
(key length limited) implementation? If so, a new cipher suite ID is not
needed, and both clients and servers can upgrade asynchronously, getting
better protection when both sides of a given connection are running the new
code.

In the case of (2) I hope you mean "yes we really do PFS with an unlimited
number of bits". 1025, 2048, as well as 16000 bits should work.

John
Re: [Cryptography] RSA recommends against use of its own products.
On Thu, 26 Sep 2013, ianG wrote:

> Right, scratch the Brits and the French. Maybe AU, NZ? I don't know.
> Maybe the Germans / Dutch / Austrians.

At the risk of getting political, I'd recommend against AU (I live there).
Our new gummint has already shown that it will put its own interests ahead
of those of the people (cancelling the proposed National Broadband Network
springs to mind).

Switzerland, perhaps? They have a history of secrecy...

-- Dave