Re: [Cryptography] Opening Discussion: Speculation on BULLRUN
> 3) Shortly after the token indictment of Zimmermann (thus prompting widespread use and promotion of the RSA public key encryption algorithm), the Clinton administration's FBI then advocated a relaxation of encryption export regulations in addition to dropping all plans for the Clipper chip

I need to correct some facts, especially since I'm seeing this continue to get repeated.

Phil was never charged, indicted, sued, or anything else. He was *investigated*. He was investigated for export violations, not for anything else. Being investigated is bad enough, but that's what happened. The government dropped the investigation in early 1996.

The government started the investigation because it was responding to a complaint from RSADSI that Phil and team had violated export control. As Phill noted, there was the secondary issue of the dispute over the RSA patent license, but that was a separate issue. RSADSI filed the complaint with the government that started the investigation.

Jon

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography
Re: [Cryptography] [cryptography] Random number generation influenced, HW RNG
On Sep 7, 2013, at 8:06 PM, John Kelsey crypto@gmail.com wrote:

> There are basically two ways your RNG can be cooked:
>
> a. It generates predictable values. Any good cryptographic PRNG will do this if seeded by an attacker. Any crypto PRNG seeded with too little entropy can also do this.
>
> b. It leaks its internal state in its output in some encrypted way. Basically any cryptographic processing of the PRNG output is likely to clobber this.

There's also another way -- that it's a constant PRNG. For example, take a good crypto PRNG, seed it in manufacturing, and then for the rest of its life it just outputs from that fixed state. That fixed state might be secret or known to outsiders, but either way, it's a cooked PRNG. Sadly, there were (are?) some hardware PRNGs on TPMs that were precisely this.

Jon
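The "constant PRNG" failure mode is easy to illustrate. This is a hypothetical sketch (a hash-based output function, not any real TPM's design): the construction itself is sound, but because the state is fixed at manufacture and never reseeded, every device sharing the factory seed emits the identical "random" stream.

```python
import hashlib

class ConstantPRNG:
    """A PRNG seeded once at 'manufacturing' and never reseeded --
    the cooked-PRNG failure mode described above (a sketch, not any
    real TPM's design)."""
    def __init__(self, factory_seed: bytes):
        self.state = hashlib.sha256(factory_seed).digest()
        self.counter = 0

    def read(self, n: int) -> bytes:
        out = b""
        while len(out) < n:
            out += hashlib.sha256(
                self.state + self.counter.to_bytes(8, "big")).digest()
            self.counter += 1
        return out[:n]

# Two "devices" sharing the same factory seed emit identical bytes:
a = ConstantPRNG(b"factory image v1.0")
b = ConstantPRNG(b"factory image v1.0")
assert a.read(16) == b.read(16)  # fully predictable to anyone with the seed
```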
Re: [Cryptography] Why prefer symmetric crypto over public key crypto?
On Sep 6, 2013, at 11:05 PM, Jaap-Henk Hoepman j...@cs.ru.nl wrote:

>> Public-key cryptography is less well-understood than symmetric-key cryptography. It is also tetchier than symmetric-key crypto, and if you pay attention to us talking about issues with nonces, counters, IVs, chaining modes, and all that, you see that saying that it's tetchier than that is a warning indeed.
>
> You have the same issues with nonces, counters, etc. with symmetric crypto, so I don't see how that makes it preferable over public key crypto.

Point taken. Bruce made a quip, and I offered an explanation of why that quip might make sense. I have also, in debate with Jerry, opined that public-key cryptography is a powerful thing that can't be replaced with symmetric-key cryptography. That's something I firmly believe.

At its most fundamental, public-key crypto allows one to encrypt something to someone with whom one has no prior security relationship. That is powerful beyond words. If you are an investigative reporter and want to say, "If you need to talk to me privately, use K" -- you can't do it with symmetric crypto; you have to use public key. If you are a software developer and want to say, "If you find a bug in my system and want to tell me, use K" -- you can't do it with symmetric crypto. Heck, if you want to leave a voicemail securely for someone you've never talked to, you need public-key crypto.

That doesn't make Bruce's quip wrong, it just makes it part of the whole story.

Jon
Re: [Cryptography] XORing plaintext with ciphertext
On Sep 7, 2013, at 12:14 AM, Dave Horsfall d...@horsfall.org wrote:

> Got a question that's been bothering me for a while, but it's likely purely academic. Take the plaintext and the ciphertext, and XOR them together. Does the result reveal anything about the key or the plaintext?

It better not. That would be a break of amazing simplicity that transcends broken.

Jon
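One caveat worth sketching: for any XOR-based (stream) cipher, plaintext XOR ciphertext is exactly the keystream for that message. That still reveals nothing about the key for a good cipher, but it is precisely why reusing a keystream (or a nonce in CTR mode) is catastrophic. A toy illustration:

```python
# Toy stream cipher: P XOR C recovers the keystream -- not the key,
# but enough to break any second message under the same keystream.
def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

keystream = b"\x13\x37\xc0\xff\xee\x42\x99\x01\xab\xcd\xef\x10\x20\x30"
p1 = b"ATTACK AT DAWN"
p2 = b"RETREAT AT SIX"          # same length, for the toy
c1 = xor(p1, keystream)
c2 = xor(p2, keystream)         # keystream reuse: the fatal mistake

recovered = xor(p1, c1)          # plaintext XOR ciphertext...
assert recovered == keystream    # ...is the keystream, exactly
assert xor(c2, recovered) == p2  # and it decrypts the reused stream
```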
Re: [Cryptography] ElGamal, DSA randomness (was Re: Why prefer symmetric crypto over public key crypto?)
On Sep 7, 2013, at 5:09 PM, Perry E. Metzger pe...@piermont.com wrote:

> Note that such systems should at this point be using deterministic methods (hashes of text + other data) to create the needed nonces. I believe several such methods have been published and are considered good, but are not well standardized. Certainly this eliminates a *very* important source of fragility in such systems and should be universally implemented. References to such methods are solicited -- I'm operating without my usual machine at the moment while its hard drive restores from backup.

For as long as PGP has done DSA, it has protected the signature nonce by hashing it with the DSA private key. These days, we'd do an HMAC, most likely.

There's now RFC 6979 on deterministic DSA, as well. Phil Z, David Kravitz, and I started on something equivalent and then stopped when we saw what Thomas Pornin was doing. It's good stuff.

https://datatracker.ietf.org/doc/rfc6979/

Jon
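The idea behind those deterministic methods can be sketched in a few lines. This is only the *idea* of RFC 6979, much simplified and not interoperable with it (the real construction is an HMAC-DRBG loop with careful bits-to-integer conversion): derive the nonce k as an HMAC over the private key and the message hash, so the same inputs always give the same k and no RNG is ever consulted.

```python
import hashlib, hmac

def deterministic_nonce(priv_key: int, msg: bytes, q: int) -> int:
    """Derive a DSA-style nonce k from the private key and message,
    so a bad RNG can never leak the key.  A sketch of the RFC 6979
    idea, NOT the interoperable RFC 6979 construction."""
    h = hashlib.sha256(msg).digest()
    key = priv_key.to_bytes((priv_key.bit_length() + 7) // 8 or 1, "big")
    k = int.from_bytes(hmac.new(key, h, hashlib.sha256).digest(), "big")
    return (k % (q - 1)) + 1     # force k into [1, q-1]

q = 2**127 - 1   # a prime, standing in for the subgroup order
k1 = deterministic_nonce(12345, b"message", q)
k2 = deterministic_nonce(12345, b"message", q)
assert k1 == k2                                    # same inputs, same nonce
assert k1 != deterministic_nonce(12345, b"other", q)
assert 1 <= k1 < q
```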
Re: [Cryptography] Suite B after today's news
On Sep 6, 2013, at 11:41 AM, Jack Lloyd ll...@randombit.net wrote:

>> I think that any of OCB, CCM, or EAX is preferable from a security standpoint, but none of them parallelizes as well. If you want to do a lot of encrypted and authenticated high-speed link encryption, well, there is likely no other answer. It's GCM or nothing.
>
> OCB parallelizes very well in software and I see no reason it would not also do so in hardware; each block of both the plaintext and associated data can be processed independently of the others, and all of OCB's operations (xor, GF(2^128) doubling, Gray codes) seem like they would be well suited to a fast hardware implementation. And actually McGrew and Viega's original 2003 paper on GCM specifically mentions that OCB scales to high speeds in hardware, though they do not provide references to specific results.

I confess that I might not explain very well a controversy where I lie on a different side -- I'm using CCM, myself. My above explanation is what GCM proponents have told me -- that if you are doing multiple high-speed streams and have hardware you can throw at it, then it's what you want.

There is/was the additional OCB issue that there is/was IP around it. Univ. of California has recently relaxed the terms, but it's still needlessly complex. I confess I tend to think of OCB as a footnote -- the cool thing we can't use.

My decision tree is that in a perfect world, one would use OCB, but the IP nixes it. CCM was created specifically because it's not OCB, and EAX as an alternative to the alternative CCM. GCM is too easy to screw up and is slow in software (yes, there's galois multiply on Intel processors, but most of what I do is ARM). There's nothing wrong with EAX, but CCM is there and standardized in a number of places. Other people might end up in a different place for their own reasons.

I don't think any of them are bad, including the decision to use GCM and just make sure you do it right.

Jon
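The GF(2^128) doubling Lloyd mentions -- used by OCB to update its per-block offsets -- is just a one-bit left shift plus a conditional XOR with the field's reduction constant, which is why it maps so cheaply to hardware. A minimal sketch, using the conventional constant 0x87 for the standard polynomial x^128 + x^7 + x^2 + x + 1:

```python
def double_gf128(block: bytes) -> bytes:
    """Multiply a 16-byte block by x in GF(2^128): shift left one bit;
    if a bit fell off the top, XOR in the reduction constant 0x87."""
    n = int.from_bytes(block, "big")
    carry = n >> 127
    n = (n << 1) & ((1 << 128) - 1)
    if carry:
        n ^= 0x87
    return n.to_bytes(16, "big")

# Doubling is a shift and a conditional XOR -- one gate delay in
# hardware -- and repeated doubling walks through distinct offsets.
assert double_gf128((1).to_bytes(16, "big")) == (2).to_bytes(16, "big")
assert double_gf128((1 << 127).to_bytes(16, "big")) == (0x87).to_bytes(16, "big")
```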
Re: [Cryptography] Opening Discussion: Speculation on BULLRUN
On Sep 6, 2013, at 4:42 AM, Jerry Leichter leich...@lrw.com wrote:

> Argh! And this is why I dislike using symmetric and asymmetric to describe cryptosystems: In English, the distinction is way too brittle. Just a one-letter difference - including or not the letter physically right next to the s.

This is why I try to say "public key" and "symmetric key" whenever possible.

Jon
Re: [Cryptography] Opening Discussion: Speculation on BULLRUN
On Sep 6, 2013, at 6:23 AM, Jerry Leichter leich...@lrw.com wrote:

> Is such an attack against AES *plausible*? I'd have to say no. But if you were on the stand as an expert witness and were asked under cross-examination "Is this *possible*?", I contend the only answer you could give is "I suppose so" (with tone and body language trying to signal to the jury that you're being forced to give an answer that's true but you don't in your gut believe it).

I'd be happy to give a different answer, like -- almost certainly not.

> Could an encryption algorithm be explicitly designed to have properties like this? I don't know of any, but it seems possible. I've long suspected that NSA might want this kind of property for some of its own systems: In some cases, it completely controls key generation and distribution, so can make sure the system as fielded only uses good keys. If the algorithm leaks without the key generation tricks leaking, it's not just useless to whoever grabs onto it - it's positively hazardous. The gun that always blows up when the bad guy tries to shoot it

We know as a mathematical theorem that a block cipher with a back door *is* a public-key system. It is a very, very, very valuable thing, and suggests other mathematical secrets about hitherto unknown ways to make fast, secure public-key systems. To me, it's like getting a cheap supply of gold and then deciding you'll make bullets out of it instead of lead. To riff on that analogy, it feels like you're suggesting that they would shoot themselves in the foot because they know that the bullet fragments will hurt their opponent.

That's why I say almost certainly not. It suggests irrationality beyond my personal ken. It's something I classify colloquially as too stupid to live. My assumptions about the NSA are that they're smart, clever, and practical. Conjectures about their behavior that deviate from any of those axes ring false to the degree that they deviate.

My conjectures start with assuming they're at least as smart as I am, and I start with "what would I do if I were them?" I think they're smart enough not to attack the strong points of the system, but the weak points. I think they're smart enough to prefer operating in stealth. Yeah, yeah, sure, if with those resources I stumbled into a fundamental mathematical advantage, I'd use it. But I would use it to maximize my gain, not to be gratuitously sneaky.

The math we know about block ciphers suggests (not proves, suggests) that a back door in a cipher is impractical, because it would imply the holy grail of public-key systems -- fast, secure public-key crypto. It suggests secure trapdoor functions that can be made out of very simple components. If I found one, it would be great, but I'd devote my resources to places where technology is on my side. Those include network security and software security, along with traffic analysis. If I wanted to devote research resources, I'd be looking closely at language-theoretic security. I'd be paying close attention to the fantastic things that have come out of there. The stuff that Bangert, Bratus, Shapiro, and Smith did on turning an MMU into a Turing machine is where I'd devote research, as well as their related work on weird machines.

I apologize for repeating myself, but I'd fight the next war, not the last one.

Jon
Re: [Cryptography] Why prefer symmetric crypto over public key crypto?
On Sep 6, 2013, at 6:13 AM, Jaap-Henk Hoepman j...@cs.ru.nl wrote:

> In this oped in the Guardian http://www.theguardian.com/world/2013/sep/05/nsa-how-to-remain-secure-surveillance Bruce Schneier writes: "Prefer symmetric cryptography over public-key cryptography." The only reason I can think of is that for public key crypto you typically use an American (and thus subverted) CA to get the recipient's public key. What other reasons could there be for this advice?

Public-key cryptography is less well-understood than symmetric-key cryptography. It is also tetchier than symmetric-key crypto, and if you pay attention to us talking about issues with nonces, counters, IVs, chaining modes, and all that, you see that saying it's tetchier than that is a warning indeed.

The magic of public-key crypto is that it gets rid of the key management problem -- if I'm going to communicate with you with symmetric crypto, how do I get the keys to you? The pain is that it replaces that with a new set of problems. Those problems include that the amazing power of public-key crypto tempts one to do things that may not be wise.

Jon
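The key management problem has simple arithmetic behind it: with only symmetric crypto, every pair of communicating parties needs its own shared secret, so the number of keys grows quadratically, while public key needs just one key pair per party. A quick illustration:

```python
def symmetric_keys_needed(parties: int) -> int:
    """Pairwise shared secrets: one per unordered pair, n*(n-1)/2."""
    return parties * (parties - 1) // 2

def public_key_pairs_needed(parties: int) -> int:
    """One (public, private) key pair per party."""
    return parties

assert symmetric_keys_needed(10) == 45
assert symmetric_keys_needed(1000) == 499500   # half a million secrets...
assert public_key_pairs_needed(1000) == 1000   # ...versus one pair each
```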
Re: [Cryptography] Is ECC suspicious?
On Sep 5, 2013, at 4:09 PM, Perry E. Metzger pe...@piermont.com wrote:

> Now, this certainly was a problem for the random number generator standard, but is it an actual worry in other contexts? I tend not to believe that, but I'm curious about opinions.

If there is a place to worry, it would be about the specific curves. I had a lively dinner-table conversation with Dan Bernstein and Tanja Lange at CRYPTO this year, and Dan pointed out that there's been a lot of work on cryptanalysis of specific curves and curve families. We know, for example, that anything over GF(p^n) seems dodgy, but GF(p) seems okay. There are recent Eurocrypt papers on this.

The Suite B curves were picked some time ago. Maybe they have problems. I have a small amount of raised eyebrow because the greatest bulwark we have against the SIGINT capabilities of any intelligence agency is that agency's IA cousins. I don't think that the Suite B curves would have been intentionally weak. That would be a shock. However, if the SIGINT guys (e.g.) discovered a weakness that gave P-256 something less than 128 bits of security, they might just sit on it. Certainly, even if they wanted to release that, there would be politics compounded by security compartments. Learning that they sat on a weakness might be a shock, but it wouldn't be a surprise.

If there is an issue, that's the place it would be. Not ECC as a technology, but specific curves.

Jon
Re: [Cryptography] Suite B after today's news
On Sep 5, 2013, at 6:16 PM, Dan McDonald dan...@kebe.com wrote:

> Consider the Suite B set of algorithms:
>
>   AES-GCM
>   AES-GMAC
>   IEEE Elliptic Curves (256, 384, and 521-bit)
>
> Traditionally, people were pretty confident in these. How is people's confidence in them now?

My opinion about GCM and GMAC has not changed. I've never been a fan. My objection to them is that they are tetchy to use -- hard to get right, easy to get wrong. It's pretty much what is in Niels's paper:

http://csrc.nist.gov/groups/ST/toolkit/BCM/documents/comments/CWC-GCM/Ferguson2.pdf

I don't think they're actively bad, though. For the purpose they were created for -- parallelizable authenticated encryption -- they serve their purpose. You can have a decent implementor implement them right in hardware and walk away.

I think that any of OCB, CCM, or EAX is preferable from a security standpoint, but none of them parallelizes as well. If you want to do a lot of encrypted and authenticated high-speed link encryption, well, there is likely no other answer. It's GCM or nothing.

Remember that every intelligence agency has a SIGINT branch and an IA (Information Assurance) branch. Sometimes they are different agencies (at least titularly), like GCHQ/CESG, BND/BSI, etc. The NSA does not separate its SIGINT directorate and IA directorate into different agencies. I think the IA people have shown they do a good job, but they are human too and make mistakes. Heck, there are things that various IA people do and recommend that I disagree with, from weakly to strongly. I weakly disagree with GCM -- I think it's spinach and I say to hell with it, as opposed to thinking it's crap.

Would a signals intelligence organization that finds a flaw in what the IA people did tell the IA branch so people can fix it? That's the *real* question.

Jon
Re: [Cryptography] Suite B after today's news
On Sep 5, 2013, at 7:15 PM, Peter Gutmann pgut...@cs.auckland.ac.nz wrote:

> Jon Callas j...@callas.org writes:
>> My opinion about GCM and GMAC has not changed. I've never been a fan.
>
> Same here. AES is, as far as we know, pretty secure, so any problems are going to arise in how AES is used. AES-CBC wrapped in HMAC is about as solid as you can get. AES-GCM is a design or coding accident waiting to happen. This isn't the 1990s; we don't need to worry about whether DES or FEAL or IDEA or Blowfish really are secure or not, we can just take a known-good system off the shelf and use it. What we need to worry about now is deployability. AES-CTR and AES-GCM are RC4 all over again; it's as if we've learned nothing from the last time round.

How do you feel (heh, I typoed that as "feal") about the other AEAD modes?

Jon
Re: [Cryptography] Opening Discussion: Speculation on BULLRUN
On Sep 5, 2013, at 7:01 PM, Peter Gutmann pgut...@cs.auckland.ac.nz wrote:

> Perry E. Metzger pe...@piermont.com writes:
>> I'm aware of the randomness issues for ECDSA, but what's the issue with ECDH that you're thinking of?
>
> It's not just randomness, it's problems with DLP-based crypto in general. For example there's the scary tendency of DLP-based ops to leak the private key (or at least key bits) if you get even the tiniest thing wrong. For example if you follow DSA's:
>
>   k = G(t,KKEY) mod q
>
> then you've leaked your x after a series of signatures, so you need to know to generate a larger-than-required value before reducing mod q. The whole DLP family is just incredibly brittle.

I don't disagree by any means, but I've been through brittleness with both discrete log and RSA, and it seems like only a month ago that people were screeching to get off RSA and over to ECC to avert the cryptocalypse. And the ostensible reason was that there are new discrete log attacks -- which was just from Mars, and I thought it proved that those people didn't know what they were talking about. Oh, wait, it *was* only a month ago! Silly me.

"Crypto experts issue a call to arms to avert the cryptopocalypse"
http://arstechnica.com/security/2013/08/crytpo-experts-issue-a-call-to-arms-to-avert-the-cryptopocalypse/

Discrete log has brittleness. RSA has brittleness. ECC is discrete log over a finite field that's hard to understand. It all sucks. RSA certainly appears to require vastly longer keys for the same level of assurance as ECC. That's assuming the threat is cryptanalysis rather than bypass. Why bother breaking even 1024-bit RSA when you can bypass? And now we're back to the hymnal you and I have been singing from. It ain't the crypto, it's the software.
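The mod-q bias Gutmann describes can be seen with toy numbers standing in for DSA's 160-bit values: reducing a uniform value of the same bit-length mod q makes the low residues twice as likely, and that kind of nonce bias is what statistical and lattice attacks on DSA exploit. A sketch:

```python
from collections import Counter

# Toy stand-in for "k = G(t, KKEY) mod q" where G's output is the same
# size as q: enumerate every 4-bit value and reduce mod q = 11.
q = 11
counts = Counter(v % q for v in range(16))

# Residues 0..4 arise two ways (v and v + 11); residues 5..10 only one.
assert all(counts[r] == 2 for r in range(5))
assert all(counts[r] == 1 for r in range(5, 11))
# The fix Gutmann alludes to: generate a larger-than-required value
# before reducing (or derive k deterministically), so the bias is
# negligible.
```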
Jon
Re: [Cryptography] Opening Discussion: Speculation on BULLRUN
On Sep 5, 2013, at 7:31 PM, Jerry Leichter leich...@lrw.com wrote:

> Another interesting goal: "Shape worldwide commercial cryptography marketplace to make it more tractable to advanced cryptanalytic capabilities being developed by NSA/CSS." Elsewhere, enabling access and exploiting systems of interest and inserting vulnerabilities. These are all side-channel attacks. I see no other reference to cryptanalysis, so I would take this statement at face value: NSA has techniques for doing cryptanalysis on certain algorithms/protocols out there, but not all, and they would like to steer public cryptography into whatever areas they have attacks against. This makes any NSA recommendation *extremely* suspect. As far as I can see, the big push NSA is making these days is toward ECC with some particular curves. Makes you wonder.

Yes, but. The reason we are using those curves is because they want them for products they buy.

> (I know for a fact that NSA has been interested in this area of mathematics for a *very* long time: A mathematician I knew working in the area of algebraic curves (of which elliptic curves are an example) was recruited by - and went to - NSA in about 1975. I heard indirectly from him after he was at NSA, where he apparently joined an active community of people with related interests. This is a decade before the first public suggestion that elliptic curves might be useful in cryptography. But maybe NSA was just doing a public service, advancing the mathematics of algebraic curves.)

I think it might even go deeper than that. ECC was invented in the civilian world by Victor Miller and Neal Koblitz (independently) in 1985, so they'd been planning for breaking it even a decade before its invention.

> NSA has two separate roles: Protect American communications, and break into the communications of adversaries. Just this one example shows that either (a) the latter part of the mission has come to dominate the former; or (b) the current definition of an adversary has become so broad as to include pretty much everyone.

I definitely believe (b). However, I also think that they aren't a monolith, and we know that each part of the mission is the adversary of the other. I don't believe that the IA people would do a bad job to support SIGINT. Once you start down that path, it's easy to get to madness, or perhaps merely evidence that they have time travel.

I'll add that they have a third mission -- running the government's classified computer network -- and that *that* mission is the one Snowden worked for.

Jon
Re: [Cryptography] Opening Discussion: Speculation on BULLRUN
On Sep 5, 2013, at 8:02 PM, Jerry Leichter leich...@lrw.com wrote:

> Perhaps it's time to move away from public-key entirely! We have a classic paper - Needham and Schroeder, maybe? - showing that private key can do anything public key can; it's just more complicated and less efficient.

Not really. The Needham-Schroeder you're thinking of is the essence of Kerberos, and while Kerberos is a very nice thing, it's hardly a replacement for public key. If you use a Needham-Schroeder/Kerberos-style system with symmetric-key systems, you end up with all of the trust problems, but on steroids. (And by the way, please say "symmetric key" as opposed to "private key" -- if you say "private key" then someone will inevitably get confused and think you mean the private half of a public key pair, and there will be tears.)

> Not only are the techniques brittle and increasingly under suspicion, but in practice almost all of our public key crypto inherently relies on CAs - a structure that's just *full* of well-known problems and vulnerabilities. Public key *seems to* distribute the risk - you just get the other guy's public key and you can then communicate with him safely. But in practice it *centralizes* risks: In CAs, in single magic numbers that if revealed allow complete compromise for all connections to a host (and we now suspect they *are* being revealed.)

I have to disagree. You don't need a CA. There's a very long rant I could make here, and I'll try to keep it to a summary. Much of the system we have is built needing CAs, but it was only built that way. A long time ago, the certificate structure we're still vestigially using had as one of its goals a way to keep the riff-raff from using crypto. I remember when I got my first PEM certificate, I had to send my blinking passport off to MITRE for two weeks so they could let me encrypt the crapola that was sitting on my disk unencrypted. It was harder to get a cert than it was to get a visa to Saudi Arabia! So much of what we would have encrypted we just printed on paper and put in a file cabinet. Excuse me, I'm starting on that rant I said I wouldn't do.

The major problem one has with public key is knowing that the public key of the endpoint you want to talk to is actually the right public key. Trusted Introducers of any sort are one way to solve the problem. CAs are merely an industrialized form of Trusted Introducer and not ipso facto bad. The way that Web PKI (as it's now being called) is using Trusted Introducers is suboptimal, but ironically, we are at the inflection point of a real honest-to-whomever fix to them in the form of Certificate Transparency. That suggests yet another discussion, one that I promised Ben I'd get to eventually.

The major problem with the certificate system is actually the browsers, in my opinion, because they actively discourage using certificates in any other way. If browsers, for example, allowed you to use a private cert with a user experience that was ultimately SSH-like (also called TOFU, for Trust On First Use) as opposed to putting big blood-red danger warnings up, it would work out better for everyone, including the CAs.

But anyway, there are other solutions. They range from some variant of Direct Trust (be it TOFU or even using a Kerberos-like system to hand you a key), or what we do in ZRTP, or lots of other things. The bottom line is that if you want to send someone a message securely and you have never talked to them before, you have no other way to deal with it than public-key systems.

(Or you can re-define the problem. Suppose I want to send Glenn Greenwald a message and his Kerberos controller gives me an AES key; I merely have to trust the controller. If we say that trusting him is the same as trusting the controller, then yeah, sure, it works. That's a suitable redefinition in which the KDC is isomorphic to a CA. But if we allow public key, then I could get Mr. Greenwald's public key from an intermediary who is not necessarily an authority, or even self-publish keys. It's done with PGP all the time.)

> We need to re-think everything about how we do cryptography. Many decisions were made based on hardware limitations of 20 and more years ago. "More efficient" claims from the 1980s often mean nothing today. Many decisions assumed trust models (like CAs) that we know are completely unrealistic. Mobile is very different from the server-to-server and dumb-client-to-server models that were all anyone thought about at the time. (Just look at SSL: It has the inherent assumption that the server *must* be authenticated, but the client ... well, that's optional and rarely done.) None of the work then anticipated the kinds of attacks that are practical today.

I concur that the way that browsers and web servers use SSL is suboptimal. This doesn't mean that a solution is impossible, it only means we have
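The SSH-style TOFU ("Trust On First Use") behavior described above is simple to sketch: remember a key fingerprint the first time you see a host, and scream only if it later changes. A minimal illustration (hypothetical names; not any real browser's or SSH implementation's API):

```python
import hashlib

class TOFUStore:
    """Trust On First Use: pin the first public key seen per host and
    flag any later change -- the SSH model, sketched."""
    def __init__(self):
        self.pins = {}  # host -> key fingerprint

    def check(self, host: str, public_key: bytes) -> str:
        fp = hashlib.sha256(public_key).hexdigest()
        if host not in self.pins:
            self.pins[host] = fp       # first contact: trust and remember
            return "trusted-first-use"
        if self.pins[host] == fp:
            return "ok"                # same key as before
        return "KEY-CHANGED"           # the moment for blood-red warnings

store = TOFUStore()
assert store.check("example.org", b"key-A") == "trusted-first-use"
assert store.check("example.org", b"key-A") == "ok"
assert store.check("example.org", b"key-B") == "KEY-CHANGED"
```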
Re: [Cryptography] Opening Discussion: Speculation on BULLRUN
-BEGIN PGP SIGNED MESSAGE- Hash: SHA1 On Sep 5, 2013, at 8:24 PM, Jerry Leichter leich...@lrw.com wrote: Another interesting goal: Shape worldwide commercial cryptography marketplace to make it more tractable to advanced cryptanalytic capabilities being developed by NSA/CSS. ... This makes any NSA recommendation *extremely* suspect. As far as I can see, the bit push NSA is making these days is toward ECC with some particular curves. Makes you wonder. Yes, but. The reason we are using those curves is because they want them for products they buy. They want to buy COTS because it's much cheap, and COTS is based on standards. So they have two contradictory constraints: They want the stuff they buy secure, but they want to be able to break in to exactly the same stuff when anyone else buys it. The time-honored way to do that is to embed some secret in the design of the system. NSA, knowing the secret, can break in; no one else can. There have been claims in this direction since NSA changed the S-boxes in DES. For DES, we now know that was to protect against differential cryptanalysis. No one's ever shown a really convincing case of such an embedded secret hack being done ... but now if you claim it can't happen, you have to explain how the goal in NSA's budget could be carried out in a way consistent with the two constraints. Damned if I know (I know for a fact that NSA has been interested in this area of mathematics for a *very* long time: A mathematician I knew working in the area of algebraic curves (of which elliptic curves are an example) was recruited by - and went to - NSA in about 1975 I think it might even go deeper than that. ECC was invented in the civilian world by Victor Miller and Neal Koblitz (independently) in 1985, so they've been planning for breaking it even a decade before its invention. I'm not sure exactly what you're trying to say. 
Yes, Miller and Koblitz are the inventors of publicly known ECC, and a number of people (Diffie, Hellman, Merkle, Rivest, Shamir, Adleman) are the inventors of publicly known public-key cryptography. But in fact we now know that Ellis, Cocks, and Williamson at GCHQ anticipated their public key cryptography work by several years - but in secret. I think the odds are extremely high that NSA was looking at cryptography based on algebraic curves well before Miller and Koblitz. Exactly what they had developed, there's no way to know. But of course if you want to do good cryptography, you also have to do cryptanalysis. So, yes, it's quite possible that NSA was breaking ECC a decade before its (public) invention. :-) What am I trying to say? I'm being a bit of a smartass. I'm sorry, it's a character flaw, but it's one that amuses me. I'll be blunt, instead. There is a lot of discussion here -- not really so much from you but in general -- that in my opinion is fighting the last war. Sometimes that last war is the crypto wars of the 1990s, but sometimes it's WWII. Yeah, yeah, if you don't remember history you'll repeat it, but we need to look through the windshield, not the rear view mirror. My smartassedness was saying that by looking at the past, gawrsh, maybe we're seeing a time machine! The present war is not the previous one. This one is not about crypto. It involves crypto, but it's not *about* it. The bright young things of 1975 who went to work for the NSA wrote theorems and got lifetime employment. The bright young things of 2010 write shellcode and are BAH contractors. There are two major trends that are happening. One is that they're hitting the network, not the crypto. Look at Dave Aitel's career, not your mathematician friend. Aitel is one of the ones that got away, and what he talks about is what we're seeing that they are doing. If you have to listen to one of the old school mathematicians, listen to Shamir -- they go around crypto. 
(And actually, we need to look not at Aitel as he left in 2002, but the bright young thing who left last year, but I think I'm making my point.) The other major trend is that outsourcing, contracting and other things ruined the social contract between them and the people who work there. (This reflects the other other problem which is that the social contract between them and us seems to be void.) Nonetheless, Aitel and others left and are leaving because no longer do they tap you on the shoulder in college and then there's the mutual backscratching of a lifelong career. Now a contractor knows that when the contract is over, they're out of a job. And when the contractor sees malfeasance that goes all the way up to the Commander-in-Chief, they look at what their employment agreement said, as well as the laws that apply to them. If you're in that environment and you see malfeasance, you go to your superior and it's a felony not to. If your superior is part of the malfeasance, you go
Re: [Cryptography] Can you backdoor a symmetric cipher (was Re: Opening Discussion: Speculation on BULLRUN)
-BEGIN PGP SIGNED MESSAGE- Hash: SHA1 On Sep 5, 2013, at 9:33 PM, Perry E. Metzger pe...@piermont.com wrote: It is probably very difficult, possibly impossible in practice, to backdoor a symmetric cipher. For evidence, I direct you to this old paper by Blaze, Feigenbaum and Leighton: http://www.crypto.com/papers/mkcs.pdf There is also a theorem somewhere (I am forgetting where) that says that if you have a block cipher with a back door, then it is also a public key cipher. The proof is easy to imagine -- whatever trap door lets you unravel the cipher is the secret key, and the block cipher proper is a PRF that covers the secret key. I remember the light bulb going on over my head when I saw it presented. So if you have a backdoored symmetric cipher, you also have a public key algorithm that runs five orders of magnitude faster than any existing public key algorithm. This suggests that such a thing does not exist. We have a devil of a time making public key systems that actually work. Look at all we've talked about with brittleness of the existing ones, and how none of the alternatives (lattice-based, McEliece, etc.) are really any better and most of those are really only useful in a post-quantum world. It doesn't prove it, but it suggests it. The real question there is whether someone who had such a thing would want to be remembered by history as the inventor of the most significant PK system the world has ever seen, or a backdoored cipher. Jon -BEGIN PGP SIGNATURE- Version: PGP Universal 3.2.0 (Build 1672) Charset: us-ascii wj8DBQFSKV02sTedWZOD3gYRAnK5AJ9aB8I0csP1ryW6aaXEqMPOyL31PwCfZuUs swH73+Zqwqy4ZFeD7QjWoyM= =BnW3 -END PGP SIGNATURE- ___ The cryptography mailing list cryptography@metzdowd.com http://www.metzdowd.com/mailman/listinfo/cryptography
Re: [Cryptography] NSA and cryptanalysis
-BEGIN PGP SIGNED MESSAGE- Hash: SHA1 What is the state of prior art for the P-384? When was it first published? Given that RIM is trying to sell itself right now and the patents are the only asset worth having, I don't have good feelings on this. Well apart from the business opportunities for expert witnesses specializing in crypto. The problem is that to make the market move we need everyone to decide to go in the same direction. So even though my employer can afford a license, there is no commercial value to that license unless everyone else has access. Do we have an ECC curve that is (1) secure and (2) has a written description prior to 1 Sept 1993? Due to submarine patent potential, even that is not necessarily enough but it would be a start. My understanding is that of the NIST curves, P-256 and P-384 are unencumbered and that P-521 was dropped from Suite B because of IP concerns along with MQV. I don't pretend to speak with authority on any of it. The niggling things often don't make sense. I'm just saying what my understanding is. Jon -BEGIN PGP SIGNATURE- Version: PGP Universal 3.2.0 (Build 1672) Charset: iso-8859-1 wj8DBQFSJg4vsTedWZOD3gYRAka/AKChFoqbDL35bwkrSkeUWdLckNnh5QCfU2mh 7fBzDMh5JKvCI8Hu/AuIuk8= =dv6q -END PGP SIGNATURE- ___ The cryptography mailing list cryptography@metzdowd.com http://www.metzdowd.com/mailman/listinfo/cryptography
Re: [Cryptography] FIPS, NIST and ITAR questions
-BEGIN PGP SIGNED MESSAGE- Hash: SHA1 2) Is anyone aware of ITAR changes for SHA hashes in recent years that require more than the requisite notification email to NSA for download URL and authorship information? Figuring this one out last time around took ltttss of reading. ITAR? Cryptography hasn't been under ITAR since way back in the 1900s. Jon -BEGIN PGP SIGNATURE- Version: PGP Universal 3.2.0 (Build 1672) Charset: us-ascii wj8DBQFSJi1GsTedWZOD3gYRApH4AKDBAgddU4Cdi7T+kzDVrJ7JXgmQXgCg4I4p /iPW/GvNa2SOfCzXbl8kpME= =1+0b -END PGP SIGNATURE- ___ The cryptography mailing list cryptography@metzdowd.com http://www.metzdowd.com/mailman/listinfo/cryptography
Re: [Cryptography] NSA and cryptanalysis
-BEGIN PGP SIGNED MESSAGE- Hash: SHA1 On Sep 2, 2013, at 3:06 PM, Jack Lloyd ll...@randombit.net wrote: On Mon, Sep 02, 2013 at 03:09:31PM -0400, Jerry Leichter wrote: a) The very reference you give says that to be equivalent to 128 bits symmetric, you'd need a 3072 bit RSA key - but they require a 2048 bit key. And the same reference says that to be equivalent to 256 bits symmetric, you need a 521 bit ECC key - and yet they recommend 384 bits. So, no, even by that page, they are not recommending equivalent key sizes - and in fact the page says just that. Suite B is specified for 128 and 192 bit security levels, with the 192 bit level using ECC-384, SHA-384, and AES-256. So it seems like if there is a hint to be drawn from the Suite B params, it's about AES-192. The real issue is that the P-521 curve has IP against it, so if you want to use freely usable curves, you're stuck with P-256 and P-384 until some more patents expire. That accounts for more of it than any deliberate choice of 192-bit security does. We can hold our noses and use P-384 and AES-256 for a while. Jon -BEGIN PGP SIGNATURE- Version: PGP Universal 3.2.0 (Build 1672) Charset: us-ascii wj8DBQFSJWpasTedWZOD3gYRAjMtAKD/W9IPWtI8qwpP7w0v1aX9BgrwHACeMsRl 594r4LFPCTsIA9+xBUk4/5Q= =RGYR -END PGP SIGNATURE- ___ The cryptography mailing list cryptography@metzdowd.com http://www.metzdowd.com/mailman/listinfo/cryptography
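[For reference, the comparable-strength figures being debated above are the ones NIST publishes in SP 800-57 Part 1. A quick lookup table, as a sketch rather than anything normative:]

```python
# Comparable-strength estimates per NIST SP 800-57 Part 1
# (security level in bits -> roughly matching key sizes).
EQUIVALENT_STRENGTH = {
    128: {"symmetric": "AES-128", "rsa_modulus_bits": 3072,  "ecc_field_bits": 256},
    192: {"symmetric": "AES-192", "rsa_modulus_bits": 7680,  "ecc_field_bits": 384},
    256: {"symmetric": "AES-256", "rsa_modulus_bits": 15360, "ecc_field_bits": 521},
}

# The mismatch under discussion: Suite B paired AES-256 with P-384,
# but SP 800-57 rates P-384 at the 192-bit level; 256-bit security
# would call for P-521.
assert EQUIVALENT_STRENGTH[256]["ecc_field_bits"] == 521
assert EQUIVALENT_STRENGTH[192]["ecc_field_bits"] == 384
```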
Re: [Cryptography] Email and IM are ideal candidates for mix networks
-BEGIN PGP SIGNED MESSAGE- Hash: SHA1 On Aug 29, 2013, at 3:43 AM, Jerry Leichter leich...@lrw.com wrote: - If I need to change because the private key was compromised, there's nothing I can do about past messages; the question is what I do to minimize the number of new messages that will arrive with a now-known-insecure key. This was the case I assumed the previous poster was concerned with. Personally, I think you shouldn't worry about this. The real sin is getting an attachment to a key. You are much better off developing a philosophy of key management in which you use it and then get rid of it regularly. If you do this reasonably well, it reduces the chance that a key will get compromised because its aegis, footprint, shadow, etc. is small. It also reduces the effect because most likely it takes more time to break the key than its lifetime; I consider hacking the key, stealing it, etc. to be a form of breaking. Stealing a key through a 'sploit is also cryptanalysis. Be Buddhist about your keys and have no attachments. (This is also a good philosophy about mail, but that's a different discussion.) - As I outlined things, there was never a reason you couldn't have multiple public keys, and in fact it would be a good idea to make traffic analysis harder. Adding a new key for a new facet of your electronic life is trivial. That's a fine step to a good attitude, but the effect on traffic analysis will be small or close to nil. Traffic analysis includes social graph analysis and any good social graph analysis will include probabilities that an entity will have different personae. Keys are just masks, too, just like a persona. Jon -BEGIN PGP SIGNATURE- Version: PGP Universal 3.2.0 (Build 1672) Charset: us-ascii wj8DBQFSIC5MsTedWZOD3gYRAmpmAJ0UJ7K9GWo9FLSa8HR1CmSbWRZcgQCgkuif rbTWOi5eHdxNpRzQ9VkqDBY= =PpOZ -END PGP SIGNATURE- ___ The cryptography mailing list cryptography@metzdowd.com http://www.metzdowd.com/mailman/listinfo/cryptography
Re: Has there been a change in US banking regulations recently?
What on earth happened? Was there a change in banking regulations in the last few months? Possibly it's related to PCI DSS and other work that BITS has been doing. Also, if one major player cleans up their act and sings about how cool they are, then that can cause the ice to break. Another possibility is that a number of people in financials have been able to get security funding despite the banking disasters because the risk managers know that the last thing they need is a security brouhaha while they are partially owned by government and thus voters. I bet on synergies between both. If I were a CSO at a bank, I might encourage a colleague to make a presentation about how their security cleanups position them to get an advantage at getting out from under the thumb of the feds over their competitors. Then I would make sure the finance guys got a leaked copy. Jon - The Cryptography Mailing List Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com
Re: NY Times article on Blackberry
On Aug 9, 2010, at 4:47 PM, Perry E. Metzger wrote: Really quite mediocre coverage of Blackberry's security issues https://www.nytimes.com/2010/08/09/technology/09rim.html I especially fault them for having virtually no coverage of the position that would oppose removing security features for the benefit of law enforcement -- the fact that such alterations can seriously harm legitimate users is not mentioned at all. Indeed, but there are also other things not being mentioned. One is that there is an OpenPGP package available on all RIM devices, and if you are using that, you get true end-to-end crypto. Another is that one of the things that the Saudis definitely want is control over whether young men and young women are talking to each other, which is a threat to society far more pernicious than terrorism. Jon - The Cryptography Mailing List Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com
Re: A mighty fortress is our PKI, Part II
On Jul 30, 2010, at 4:58 AM, Peter Gutmann wrote: [0] I've never understood why this is a comedy of errors, it seems more like a tragedy of errors to me. That is because a tragedy involves someone dying. Strictly speaking, a tragedy involves a Great Person who is brought to their undoing and death because of some small fatal flaw in their otherwise sterling character. In contrast, comedies involve no one dying, but the entertaining exploits of flawed people in flawed circumstances. PKI is not a tragedy, it's comedy. No one dies in PKI. They may get embarrassed or lose money, but that happens in comedy. It's the basis of many timeless comedies. Specifically, PKI is a farce. In the same strict definition of dramatic types, a farce is a comedy in which small silly things are compounded on top of each other, over and over. The term farce itself comes from the French to stuff and is comedically like stuffing more and more feathers into a pillow until the thing explodes. So farces involve ludicrous situations, buffoonery, wildly improbable / implausible situations, and crude characterizations of well-known comedic types. Farces typically also involve mistaken identity, disguises, verbal humor including sexual innuendo all in a fast-paced plot that doesn't let up piling things on top of each other until the whole thing bursts at the seams. PKI has figured in tragedy, most notably when Polonius asked Hamlet, What are you signing, milord? and he answered, OIDs, OIDs, OIDs, but that was considered comic relief. Farcical use of PKI is far more common. We all know the words to Gilbert's patter-song, I Am the Very Model of a Certificate Authority, and Wilde's genius shows throughout The Importance of Being Trusted. Lady Bracknell's snarky comment, To lose one HSM, Mr. Worthing, may be regarded as a misfortune, but lose your backup smacks of carelessness, is pretty much the basis of the WebTrust audit practice even to this day. 
More to the point, not only did Cyrano issue bogus short-lived certificates to help woo Roxane, but Mozart and Da Ponte wrote an entire farcical opera on the subject of abuse of issuance, EV Fan Tutti. There are some who assert that he did this under the control of the Freemasons, who were then trying to gain control of the Austro-Hungarian authentication systems. These were each farcical social commentary on the identity trust policies of the day. Mozart touched upon this again (libretto by Bretzner this time) in The Revocation of the Seraglio, but this was comic veneer over the discontent that the so-called Aluminum Bavariati had with the trade certifications in siding sales throughout the German states, as well as export control policies since Aluminum was an expensive strategic metal of the time. People suspected the Freemasons were behind it all yet again. Nonetheless, it was all farce. Most of us would like to forget some of the more grotesque twentieth-century farces, like the thirties short where Moe, Larry, and Shemp start the Daddy-O DNS registration company and CA or the 23 Skidoo DNA-sequencing firm as a way out of the Great Depression. But S.J. Perelman's Three Shares in a Boat shows a real-world use of a threshold scheme. I don't think anyone said it better than W.C. Fields did in Never Give a Sucker an Even Break and You Can't Cheat an Honest Man. I think you'll have to agree that unlike history, which starts out as tragedy and replays itself as farce, PKI has always been farce over the centuries. It might actually end up as tragedy, but so far so good. I'm sure that if we look further, the Athenians had the same issues with it that we do today, and that Sophocles had his own farcical commentary. Jon - The Cryptography Mailing List Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com
Re: A mighty fortress is our PKI, Part II
On Aug 4, 2010, at 11:29 PM, Peter Gutmann wrote: Jon Callas j...@callas.org writes: But S.J. Perelman's Three Shares in a Boat Uhh, minor nitpick: it was Jerome K. Jerome who wrote Three Shares in a Boat. He followed it up with Three Certificates on the Bummel, a reference to the sharing of commercial vendors' code-signing keys with malware authors. Oh, well. You are, of course, correct. Jon - The Cryptography Mailing List Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com
Re: Against Rekeying
On Mar 24, 2010, at 2:07 AM, Stephan Neuhaus wrote: On Mar 23, 2010, at 22:42, Jon Callas wrote: If you need to rekey, tear down the SSL connection and make a new one. There should be a higher level construct in the application that abstracts the two connections into one session. ... which will have its own subtleties and hence probability of failure. Exactly, but they're at the proper place in the system. That's what layering is all about. I'm not suggesting that there's a perfect solution, or even a good one. There are times when a designer has a responsibility to make a decision and times when a designer has a responsibility *not* to make a decision. In this particular case, rekeying introduced the most serious problem we've ever seen in a protocol like that. Rekeying itself has always been a bit dodgy. If you're rekeying because you are worried about the strength of the key (e.g. you're using DES), picking a better key is a better answer (use AES instead). The most compelling reason to rekey is not because of the key, but because of the data size. For ciphers that have a 64-bit block size, rekeying because you've sent 2^32 blocks is a much better reason to rekey. But -- an even better solution is to use a cipher with a bigger block size. Like AES. Or Camellia. Or Twofish. Or Threefish (which has a 512-bit block size in its main version). It's far more reasonable to rekey because you encrypted 32G of data than because you are worried about the key. However, once you've graduated up to ciphers that have at least 128-bits of key and at least 128-bits of block size, the security considerations shift dramatically. I will ask explicitly the question I handwaved before: What makes you think that the chance there is a bug in your protocol is less than 2^-128? Or if you don't like that question -- I am the one who brought up birthday attacks -- What makes you think the chance of a bug is less than 2^-64? 
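[The 32G figure above falls straight out of the birthday bound. A sketch, assuming the usual rule of thumb that trouble starts around 2^(n/2) blocks for an n-bit block cipher:]

```python
def birthday_limit_bytes(block_bits: int) -> int:
    # Rule of thumb: birthday collisions in many cipher modes become
    # a concern after about 2^(n/2) blocks of n/8 bytes each.
    return (2 ** (block_bits // 2)) * (block_bits // 8)

# 64-bit blocks (DES, 3DES, Blowfish): 2^32 blocks * 8 bytes = 32 GiB.
assert birthday_limit_bytes(64) == 32 * 2**30
# 128-bit blocks (AES, Camellia, Twofish): 2^64 blocks * 16 bytes = 2^68 bytes,
# far beyond anything a single connection will ever carry.
assert birthday_limit_bytes(128) == 2**68
```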
I believe that it's best to stop worrying about the core cryptographic components and worry about the protocol and its use within a stack of related things. I've done encrypted file managers like what I alluded to, and it's so easy to get rekeying active files right, you don't have to worry. Just pull a new bulk key from the PRNG every time you write a file. Poof, you're done. For inactive files, rekeying them is isomorphic to writing a garbage collector. Garbage collectors are hard to get right. We designed, but never built an automatic rekeying system. The added security wasn't worth the trouble. Getting back to your point, yes, you're right, but if rekeying is just opening a new network connection, or rewriting a file, it's easy to understand and get right. Rekeying makes sense when you (1) don't want to create a new context (because that automatically rekeys) and (2) don't like your crypto parameters (key, data length, etc). I hesitate to say that it never happens, but I think that coming up with a compelling use case where rekeying makes more sense than tearing down and recreating the context is a great exercise. Inconvenient use cases, sure. Compelling, that's hard. Jon - The Cryptography Mailing List Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com
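[The "rekeying active files falls out for free" pattern can be sketched like this. A toy only: the SHA-256 XOR keystream stands in for a real AEAD cipher, the in-memory dict stands in for a filesystem, and a real design would wrap each bulk key to a public key rather than store it beside the ciphertext.]

```python
import hashlib
import os

def _keystream(key: bytes, n: int) -> bytes:
    # Toy keystream: SHA-256 in counter mode. Illustration only;
    # a real file manager would use a vetted cipher.
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]

def write_encrypted(store: dict, name: str, plaintext: bytes) -> None:
    # Every write pulls a fresh bulk key from the system PRNG, so
    # "rekeying" an active file is nothing more than rewriting it.
    key = os.urandom(32)
    ct = bytes(a ^ b for a, b in zip(plaintext, _keystream(key, len(plaintext))))
    store[name] = (key, ct)

def read_encrypted(store: dict, name: str) -> bytes:
    key, ct = store[name]
    return bytes(a ^ b for a, b in zip(ct, _keystream(key, len(ct))))
```

Rewriting a file twice leaves it under a different key each time, which is exactly the point.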
Re: Against Rekeying
I'd be interested in hearing what people think on the topic. I'm a bit skeptical of his position, partially because I think we have too little experience with real world attacks on cryptographic protocols, but I'm fairly open-minded at this point. I think that if anything, he doesn't go far enough. Rekeying only makes sense when you aren't using the right crypto, and even then might make the situation worse. Rekeying opens up a line of attack. From a purely mathematical point of view, here's a way to look at it: The chance of beating your cipher is P1 (ideally, it's the strength of the cipher, let's just say 2^-128). The chance of beating the rekey protocol is P2. Rekeying makes sense when P2 is smaller than P1. When P2 is larger than P1, you've reduced the security of your system to the chance of a flaw in the rekeying, not the cipher. As others have pointed out, it's front of Ekr's mind that there is (was) a major flaw in the SSL/TLS protocol set that came out because of bugs in rekeying. Worse, it affected people who wanted high security in more evil ways than people who just wanted casual security. Many people (including me) think that the best way to fix this is to remove the rekeying. If you need to rekey, tear down the SSL connection and make a new one. There should be a higher level construct in the application that abstracts the two connections into one session. In most cases where you might want to rekey, the underlying system makes it either so trivial you don't need to think about it, or so hard that you can ignore it because you just won't. Let me give a couple examples. First the trivial one. Consider a directory of files where each file is encrypted separately with a bulk key per-file. The natural way to do this is that every time someone rewrites a file, you make a new bulk key and rewrite the file. You don't have to worry about rekeying because it just falls out. Now the hard one. 
Consider a disk that is encrypted with some full disk encryption system. If you want to rekey that disk, you have to read and write every block. For a large disk, that is seriously annoying. If your disk does 100MB/s (which is very fast for a spindle and still pretty fast for SSDs), then you can do 180G per hour (that's 6G per minute, 360G per hour, and halve it because you have to read and write) max. That's about six hours for a terabyte. If your disk only does 10MB/s, which many spindles do, then it's 60 hours to rekey that terabyte. You can do the math for other sizes and speeds as well as I can. In any event, you're not going to rekey the disk very often. In fact most of the people who really care about rekeying storage are changing their requirements so that you have to do a rekey on the same schedule as retiring media -- which effectively means no rekey. A long-time rant of mine is that security people don't do layering. I think this falls into a layering aspect. If you design your system so that your connection has a single key and you transparently reconnect, then rekeying is just forcing a reconnect. If you make your storage have one key per file, then rekeying the files is just rewriting them. It can easily vanish. And yes, obviously, there are exception cases. Exceptions are always exceptional. Jon - The Cryptography Mailing List Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com
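[The disk arithmetic above, as a small calculator you can redo for your own sizes and speeds:]

```python
def rekey_hours(disk_gb: float, mb_per_s: float) -> float:
    # Rekeying reads and rewrites every block, so effective
    # throughput is half the raw transfer rate.
    seconds = (disk_gb * 1000.0) / (mb_per_s / 2.0)
    return seconds / 3600.0

# 1 TB at 100 MB/s: about six hours, as above.
assert 5 < rekey_hours(1000, 100) < 6
# 1 TB at 10 MB/s: roughly 60 hours.
assert 50 < rekey_hours(1000, 10) < 60
```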
Re: Biotech Based Cryptogram Challenge
On Sep 17, 2009, at 6:31 AM, Jim Windle wrote: http://www.genengnews.com/cryptogramchallenge/ This is contest to decode the message encrypted in the colors of a 96 well microtiter plate used for an enzyme-linked immunosorbent assay test in which the color indicate the amount of antigen present. The first to decode it gets a $1500 prize. Yes, but it has nothing to do with biotech at all, except in the presentation. The instructions say that the plaintext is represented in the RGB values of each cell, along with the transparency (alpha) of each color blob. So each character is represented (wolog, since we don't know how deep the alpha channel is) by an integer value of ABGR of each color blob. Cute, but not biotech. Jon - The Cryptography Mailing List Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com
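[The ABGR packing described above might look like the following. This is an assumption about the layout: the contest's actual channel ordering, alpha depth, and character mapping are not specified here, so treat both functions as a guess at the encoding rather than a decoder for the real puzzle.]

```python
def pack_abgr(r: int, g: int, b: int, a: int) -> int:
    # Pack four 8-bit channel values into one integer,
    # alpha in the high byte (A-B-G-R ordering, as described above).
    return (a << 24) | (b << 16) | (g << 8) | r

def unpack_abgr(value: int) -> tuple:
    # Recover (r, g, b, a) from the packed integer.
    return (value & 0xFF, (value >> 8) & 0xFF,
            (value >> 16) & 0xFF, (value >> 24) & 0xFF)

assert pack_abgr(1, 2, 3, 4) == 0x04030201
```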
Re: XML signature HMAC truncation authentication bypass
On Jul 26, 2009, at 10:31 PM, Peter Gutmann wrote: Jon Callas j...@callas.org writes: You are of course correct, Peter, but are you saying that we shouldn't do anything? Well, I think it's necessary to consider the tradeoffs, if you don't know the other side's capabilities then it's a bit risky to assume that they're the same as yours. Let's look at it the other way, and suppose that I said that despite increases in processor power and distributed password crackers, we would leave the iteration count where it was in 1997, because if you increase it, it might go to small machines where that would be an issue. Consider the tradeoffs. You'd call me daft for refusing to protect the majority of people. You are wrong with this. *Messages* don't have this property, so long as they were encrypted to a public key. It is unlocking the *key* that has this problem. The data was encrypted using pre-shared secrets (i.e. packet type 3) which does have this property. (Don't ask me, I didn't create the requirements, I just got called in to help diagnose the problem, which was that at some point S2K's coming from PGP Desktop were killing their embedded units. Maybe they were even using externally-generated private keys or who knows what rather than pre- shared secrets for messages, but whatever it was it was the S2K step that was causing it). Okay, password-protected files would get it, too. I won't ask why you're sending password protected files to an agent. I know you didn't design this. That problem *only* exists when you import a key from a fast client into a slow client. That problem can be fixed either through some smart software (look at the iteration count and if it's higher than you like, change it the next time you use the key), or the user can do it manually. This doesn't work in a heterogeneous environment where the requirements will be something like messages having to comply with certain parts of the OpenPGP spec, and no more. 
Adding riders telling users how to manually configure individual applications doesn't work because end-users will never read the technical spec, or even know that it exists. I guess we could argue this point endlessly, but I really just brought it up to mention the unintended consequences of a particular design decision, and more generally the dangers of allowing unbounded integer and general data ranges in specs. Some implementations will enforce sensible limits, many won't (and will fail against fairly trivial attacks because of this), and without any guidance in the spec the ones that do take care to bound values are deemed non-compliant while the vulnerable ones that don't do any checking are deemed compliant. This is completely backwards for a security spec. Sure, but. I think that unintended consequences is not quite the right way to put it. We don't intend to cause slow computers problems, but it was an intentional change with well-known upsides and downsides. Despite that, the upside seems to outweigh the downside. This change shipped in September 2006. It's nearly three years old, and this would make only the second issue we had with it. When it shipped, BlackBerries used signed math in computing the iteration count, and got it wrong. We made a BlackBerry export tool that reset the iteration count. That got fixed in 4.1 of the BB software, as I remember it. So with millions helped and one field problem, it's not bad. By the way, do you think it's safe to phase out MD5? That will break all the PGP 2 users. Jon - The Cryptography Mailing List Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com
Re: XML signature HMAC truncation authentication bypass
Where this falls apart completely is when there are asymmetric capabilities across sender and receiver. You are of course correct, Peter, but are you saying that we shouldn't do anything? I don't believe that we should roll over and die. We should fight back, even if the advantage is to the attacker. Having an embedded device suspend (near) real- time processing while it iterates away at something generated on a multicore 3GHz desktop PC isn't really an option in a production environment (the actual diagnosis was messages generated by PGP Desktop cause our devices to crash because they were triggering a deadman timer that soft-restarted them, it wasn't until they used an implementation that sanity-checked input values that they realised what the problem was). You are wrong with this. *Messages* don't have this property, so long as they were encrypted to a public key. It is unlocking the *key* that has this problem. That problem *only* exists when you import a key from a fast client into a slow client. That problem can be fixed either through some smart software (look at the iteration count and if it's higher than you like, change it the next time you use the key), or the user can do it manually. Set your passphrase once to the same thing it used to be. Jon - The Cryptography Mailing List Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com
Re: XML signature HMAC truncation authentication bypass
On Jul 17, 2009, at 8:39 PM, Peter Gutmann wrote: PGP Desktop 9 uses as its default an iteration count of four million (!!) for its password hashing, which looks like a DoS to anything that does sanity-checking of input. That's precisely what it is -- a denial of service to password crackers. There are a couple of things I'll add, one in the OpenPGP standard, and one in that implementation. In the standard, the iteration count is not a count of hash iterations as in (e.g.) PKCS#5, but a count of bytes to be hashed. So four million means four million bytes run through the hash. Measured against SHA-1's 20-byte output, that's the equivalent of about 200,000 iterations, and against SHA-256, about 125,000. While this is a bit eccentric, it allows you to use any size hash and any block size cipher. Even more eccentric is the way it's encoded, as an 8-bit floating point value. In the implementation, we upped the default because of more password cracking, but also added a twist in it. We time how many iterations take 1/10 of a second on the computer you're using, and use that value. The goal is to have the iteration count scale as computers get faster without having to make software changes. The downsides of this are left as an exercise for the reader (as are the obvious workarounds). Jon - The Cryptography Mailing List Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com
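[That eccentric 8-bit floating point encoding is the one in RFC 4880, section 3.7.1.3: a 4-bit mantissa and 4-bit exponent with a bias of 6. Decoding it is a one-liner. Whether PGP Desktop's default octet was exactly 0xBF is an assumption here; it happens to be the encodable value closest to "four million".]

```python
def s2k_count(octet: int) -> int:
    # RFC 4880, section 3.7.1.3: the count is a count of octets
    # to be hashed, not of hash iterations.
    return (16 + (octet & 15)) << ((octet >> 4) + 6)

# Octet 0xBF decodes to 4,063,232 bytes -- the "four million" above.
assert s2k_count(0xBF) == 4_063_232
# By the output-length accounting in the message: ~200,000 for SHA-1
# (20-byte output) and ~125,000 for SHA-256 (32-byte output).
assert s2k_count(0xBF) // 20 == 203_161
assert s2k_count(0xBF) // 32 == 126_976
```

The largest encodable count, `s2k_count(0xFF)`, is 65,011,712 bytes.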
Re: What will happen to your crypto keys when you die?
On Jul 1, 2009, at 4:29 PM, silky wrote: On Wed, Jul 1, 2009 at 6:48 PM, Udhay Shankar Nud...@pobox.com wrote: Udhay Shankar N wrote, [on 5/29/2009 9:02 AM]: Fascinating discussion at boing boing that will probably be of interest to this list. http://www.boingboing.net/2009/05/27/what-will-happen-to.html Followup article by Cory Doctorow: http://www.guardian.co.uk/technology/2009/jun/30/data-protection-internet A potentially amusing/silly solution would be to have one strong key that you change monthly, and then, encrypt *that* key, with a method that will be brute-forceable in 2 months and make it public. As long as you are constantly changing your key, no-one will decrypt it in time, but assuming you do die, they can potentially decrypt it while arranging your funeral :) I'll point out that PGP has had key splitting for ages now. You can today make a strong public key and split it into N shares, of which two or three shares are needed to reconstitute the key, and hand those out to trusted loved ones. You can then use that public key for files, virtual disks, whole disk volumes -- anywhere you could use an RSA or Elgamal key -- and be assured that your data is safe in the absence of a conspiracy of those loved ones. It's there now, and has been there for a decade. Jon - The Cryptography Mailing List Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com
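[The key splitting described above is a threshold scheme. PGP's actual share format and field arithmetic differ in detail, so this is a textbook Shamir 2-of-3 sketch over a prime field, not PGP's implementation: any two shares recover the secret, one alone reveals nothing.]

```python
import random

P = 2**127 - 1  # a Mersenne prime, large enough for a demo secret

def split(secret: int, n: int = 3, k: int = 2):
    # Random degree-(k-1) polynomial with constant term = secret;
    # each share is a point (x, f(x)) on it.
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def combine(shares):
    # Lagrange interpolation at x = 0 recovers the constant term.
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret
```

In the estate scenario, each loved one holds one share; any two of them together (but no one alone) can reconstitute the key after you're gone.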
Re: password safes for mac
On Jun 27, 2009, at 6:57 PM, Perry E. Metzger wrote: Does anyone have a recommended encrypted password storage program for the mac? I would recommend the built-in keychain for anything that it works with. Jon - The Cryptography Mailing List Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com
Re: Warning! New cryptographic modes!
I'd use a tweakable mode like EME-star (also EME*) that is designed for something like this. It would also work with 512-byte blocks. Jon - The Cryptography Mailing List Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com
Re: [tahoe-dev] SHA-1 broken!
It also is not going to be trivial to do this -- but it is now in the realm of possibility. I'm not being entirely a smartass when I say that it's always in the realm of possibility. The nominal probability for SHA-1 -- either 1 in 2^80 or 1 in 2^160 depending on context -- is a positive number. It's small, but it's always possible. The recent case of cert collisions happened because of two errors, hash problems and sequential serial numbers. If either had been corrected, the problem wouldn't have happened. I liken it, by analogy, to a fender-bender that happened because the person responsible had both worn-out brakes (an easily-fixable technological problem) and was tailgating (an easily-fixable suboptimal operational policy). It's a mistake to blame the wreck on either alone. It's enlightening to point out that either a good policy or a more timely upgrade schedule would have made the problem not occur. The problem right now is not that MD5, SHA1, etc. are broken. It is that they are broken in ways that you have to be an expert to understand, and even the experts get into entertaining debates about. Any operational expert worth their salt should run screaming from a technology whose flaws the boffins debate over dinner. Jon - The Cryptography Mailing List Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com
Re: SHA-1 collisions now at 2^{52}?
On Apr 30, 2009, at 4:31 PM, Perry E. Metzger wrote: Eric Rescorla e...@networkresonance.com writes: McDonald, Hawkes and Pieprzyk claim that they have reduced the collision strength of SHA-1 to 2^{52}. Slides here: http://eurocrypt2009rump.cr.yp.to/ 837a0a8086fa6ca714249409ddfae43d.pdf Thanks to Paul Hoffman for pointing me to this. This is a very important result. The need to transition from SHA-1 is no longer theoretical. Let me make a couple of comments, one from each side of my mouth. * I would like to see an implementation of this result, producing a collision. 2^52 is a nice number, but it needs a scale. I'm not worried about 2^52 years. Or even seconds. I say this solely because I expected a practical 2^63 collision by now, and have been wondering what the scale of that 2^63 would be. I would like to see an implementation. * What do you mean by no longer theoretical? The accepted wisdom on 80-bit security (which includes SHA-1, 1024-bit RSA and DSA keys, and other things) is that it is to be retired by the end of 2010. The end of 2010 fast approacheth. If you include into development time some reasonable level of market adoption, one might convincingly argue that the end of SHA-1 ought to be shipping this summer, or certainly in the fall, and no later than the *start* of 2010. The need to transition from SHA-1 is apparent and manifest. New results merely confirm conventional wisdom. Jon - The Cryptography Mailing List Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com
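Putting a scale on those work factors is a one-liner. Assuming, purely for illustration, an attacker who manages 2^40 hash computations per second (a figure I am supplying, not one from the slides):

```python
def years_to_compute(work_log2, hashes_per_second):
    """Wall-clock years to perform 2**work_log2 hash computations
    at a given (assumed) rate."""
    seconds = 2.0 ** work_log2 / hashes_per_second
    return seconds / (365.25 * 24 * 3600)
```

At that assumed rate, 2^52 work is about 4096 seconds (an afternoon), while 2^63 is about 97 days -- which is why the scale attached to the exponent matters so much.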
Re: Property RIghts in Keys
On Feb 12, 2009, at 11:24 AM, Donald Eastlake wrote: On Thu, Feb 12, 2009 at 12:58 PM, Perry E. Metzger pe...@piermont.com wrote: s...@acw.com writes: ... There are four kinds of intellectual property. Is it a trade secret? No. Is it a trademark or something allied like trade dress? No. Is it patentable? No. Is it copyrightable? No. So, depending on how creative the extension fields are :-), or may not dependent on that, why isn't it copyrightable? For the same reason that phone books are not copyrightable. A certificate is nothing more than a directory entry with frosting and sprinkles. Jon - The Cryptography Mailing List Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com
Re: MD5 considered harmful today, SHA-1 considered harmful tomorrow
I have a general outline of a timeline for adoption of new crypto mechanisms (e.g. OAEP, PSS, that sort of thing, and not specifically algorithms) in my Crypto Gardening Guide and Planting Tips, http://www.cs.auckland.ac.nz/~pgut001/pubs/crypto_guide.txt , see Question J about 2/3 of the way down. It's not meant to be definitively accurate for all cases but was created as a rough guideline for people proposing to introduce new crypto mechanisms to give an idea of how long they should expect to wait to see them adopted. I've always been pleased with your answer to Question J, so I'll say what we're doing at PGP. We deprecated MD5 in '97. That was one of the main points of the new formats that became OpenPGP: agility has its own challenges, but it's worth it. We had a meeting recently to look at what we're going to do. Our first thoughts were that we would scrub MD5 from the UI and be done with it. Then we realized that we need to leave enough of the old UI so that people can *remove* MD5 from their use. We decided that we'll issue warnings in the annotations when we verify MD5 signatures. We can't stop verifying them, but we'll do an equivalent to what we do with 40-bit crypto in S/MIME. (40-bit still harries S/MIME; it's really a pity that we have to deal with it. Our solution is that 40-bit crypto is just a fancy form of plaintext. We decode it the way we decode quoted-printable, base64, and other fancy forms of plaintext.) We debated removing it from the APIs, and concluded that that is asking for trouble, because someone will need to do that for diagnostic and testing purposes. We've started deprecating the 160-bit hashes. There will be comments in the UI for both SHA-1 and RIPE-MD/160. We think NIST's advice for phasing them out next year is just fine, and so we'll start really phasing them out next year. Lastly, we considered other options for hash algorithms. 
Presently, it's too early to do anything, but we'll look at it again when we do more work on the 160-bit hashes. Jon - The Cryptography Mailing List Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com
Re: AES HDD encryption was XOR
In the NBC TV episode of /Chuck/ a couple of weeks ago, the NSA cracked a 512-bit AES cipher on a flash drive trying every possible key. Could be hours, could be days. (Only minutes in TV land.) http://www.nbc.com/Chuck/video/episodes/#vid=838461 (Chuck Versus The Fat Lady, 4th segment, at 26:19) It's no wonder that folks are deluded, pop culture reinforces this. No, this is simple to do. What you do is start with a basic cracking engine. And then you add another one an hour later, and then an hour later add two, then add four the next hour and so on. If you assume that the first cracker can do 2^40 keys per second, then you're guaranteed to complete in 472 hours, which is only 20 days. And of course there's always the chance you'd do it in the first hour. For those who doubt being able to double the cracking power, Moore's law proves this is possible. QED. Jon - The Cryptography Mailing List Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]
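The tongue-in-cheek arithmetic above seems to run: treat the first engine's hour of work as 2^40 keys, and note that with capacity doubling every hour the cumulative count after h further hours is roughly 2^(40+h), so a 2^512 keyspace falls at h = 512 - 40 = 472. A sketch of that back-of-the-envelope (the joke being, of course, that neither the doubling schedule nor 512-bit AES exists):

```python
def hours_to_exhaust(key_bits, first_hour_log2_keys):
    # With cracking capacity doubling every hour, cumulative keys tried
    # after h hours is about 2**(first_hour_log2_keys + h), so solve
    # first_hour_log2_keys + h = key_bits for h.
    return key_bits - first_hour_log2_keys
```

This reproduces the 472-hour ("only 20 days") figure from the message.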
Re: Certificates turn 30, X.509 turns 20, no-one notices
On Nov 24, 2008, at 8:54 PM, Peter Gutmann wrote: This doesn't seem to have garnered much attention, but this year marks two milestones in PKI: Loren Kohnfelder's thesis was published 30 years ago, and X.509v1 was published 20 years ago. As a sign of PKI's successful penetration of the marketplace, the premier get- together for PKI folks, the IDtrust Symposium (formerly the PKI Workshop and now in its eighth year) authenticates participants with... username and password, for lack of a working PKI. (OK, it's a bit of a cheap shot and it's been done before, but I thought it was especially significant this year :-). Yeah, they should be using OpenID. :-) Jon - The Cryptography Mailing List Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]
Re: combining entropy
On Sep 29, 2008, at 5:13 AM, IanG wrote: If I have N pools of entropy (all same size X) and I pool them together with XOR, is that as good as it gets? My assumptions are: * I trust no single source of Random Numbers. * I trust at least one source of all the sources. * no particular difficulty with lossy combination. It's within epsilon for a good many epsilon. I'm presuming you want the resultant size to be X, as well. Otherwise, Ben's suggestion of concatenation is obviously better, and solves the obvious problems. Another solution is to hash the N pools together with a suitably secure function. (Most of the available algorithms are suitably secure for this purpose.) The downside of this is that you are capping your entropy at the size of the hash function. It's better than XOR because it's not linear, blah, blah, blah. However, if you had three pools, each relatively large, it doesn't hurt anything to XOR them together. It's pretty easy to prove that the result does not decrease entropy, but I think it's impossible to prove that it increases it. XORing really amounts to taking the max of the N pools. You do have to realize that if there's a chance of leaking an entropy pool, XOR is a bad function. If whoever produced pool X sees X^Y, then they know Y. But you know that, too. Jon - The Cryptography Mailing List Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]
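The two combiners under discussion are small enough to sketch. This is an illustration of the trade-off, not a production RNG: XOR keeps the output size X but is linear (and cancels if an attacker can choose a pool after seeing the others); hashing is robust against that but caps the output entropy at the digest size.

```python
import hashlib

def xor_combine(pools):
    """Output is the same size as each pool; fine if at least one pool
    is good and no pool is chosen as a function of the others."""
    out = bytearray(len(pools[0]))
    for pool in pools:
        for i, b in enumerate(pool):
            out[i] ^= b
    return bytes(out)

def hash_combine(pools):
    """Caps entropy at the digest size, but non-linear: robust even if
    one pool was crafted with knowledge of the others."""
    h = hashlib.sha256()
    for pool in pools:
        h.update(len(pool).to_bytes(8, "big"))  # length-prefix each pool
        h.update(pool)
    return h.digest()
```

The cancellation hazard is visible directly: XOR of a pool with a copy of itself is all zeros, which is exactly the "whoever produced pool X sees X^Y" failure above.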
Re: Fake popup study
At one time, we believed that with enough crypto, we would be safe, but we were disabused of that notion -- crypto is a great tool but not a panacea. Now the notion seems to be that with enough human factors, we will be safe. It appears this, too, is not a panacea. What you mean, We? I said ages ago that you cannot produce trust with cryptography, no matter how much cryptography you use. That's a bow towards Lao Tzu's original, you cannot produce kindness with cruelty, no matter how much cruelty you use. To quote Crispin Cowan on phishing, it (and other con jobs) are a security failure on the device that sits between the keyboard and chair. Until we can issue patches on that device, we're getting nowhere. Even after, it's a long road ahead. I think you can prove that it's impossible to stop cons. What we *can* do is lower the number of them. But we're not going to get anywhere when we blame the victims. I'm with Jim Youll on this, the people who think the users are idiots just don't get it. Jon - The Cryptography Mailing List Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]
Re: Lava lamp random number generator made useful?
A cheap USB camera would make a good source. The cheaper the better, too. Pull a frame off, hash it, and it's got entropy, even against a white background. No lava lamp needed. I sort of agree, but I feel cautious about recommending that people use their holiday snaps. And then post them on line... if you see where I am going :) But it is a good suggestion. That's not at all what I suggested. There are so many ways that one can creatively screw up reasonable cryptographic advice that I don't think it's worth bothering with. The point is that if you take a cheap 640x480 (or 320x240) webcam and point it against a photographic grey card, there's going to be a lot of noise in it, and this noise is, at bottom, quantum in nature. Thus, there's a lot of entropy in that noise. Photographic engineers work *hard* to remove that noise, and you pay for a lack of noise. I'm willing to bet that if I give you hashes of frames, knowing this process, you can't get pre-images. I'll bet that you can't get pre-images even if I let you put a similar camera next to the one I'm using. In short, I'm willing to bet that a cheap camera is a decent random number source, even if you try to control the image source, to the tune of 128-256 bits of entropy per frame. No lava lamps are needed, no weird hardware. Just use the noise in a CCD. Jon - The Cryptography Mailing List Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]
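The pull-a-frame-and-hash-it step is tiny once you have a raw frame buffer. The capture API is platform-specific and omitted here (`frame_bytes` is assumed to be an uncompressed frame, e.g. 640x480 YUV); the point is to whiten the sensor noise with a hash and to credit only a conservative amount of entropy per frame, in line with the 128-256-bit estimate above:

```python
import hashlib

def entropy_from_frame(frame_bytes, credited_bits=128):
    """Distill a raw camera frame down to a short seed. We credit far
    less entropy than the frame size: even a frame of a grey card has
    sensor noise, but only on the order of 128-256 bits of it."""
    digest = hashlib.sha256(frame_bytes).digest()
    return digest[: credited_bits // 8]
```

Successive frames differ in their noise, so successive seeds differ, even when the scene (the grey card) does not.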
Re: Lava lamp random number generator made useful?
Does anyone know of a cheap USB random number source? As a meandering comment, it would be extremely good for us if we had cheap pocket random number sources of arguable quality [1]. I've often thought that if we had an open source hardware design of a USB random number generator ... that cost a few pennies to add onto any other USB toy ... then we could ask the manufacturers to throw it in for laughs. Something like a small mountable disk that returns randoms on every block read, so the interface is trivial. Then, when it comes time to generate those special keys, we could simply plug it in, run it, clean up the output in software and use it. Hey presto, all those nasty software and theoretical difficulties evaporate. A TPM has random numbers of arguable quality. I'm happy to argue either side of it, but that's not what you asked. A cheap USB camera would make a good source. The cheaper the better, too. Pull a frame off, hash it, and it's got entropy, even against a white background. No lava lamp needed. Jon - The Cryptography Mailing List Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]
Re: Generating AES key by hashing login password?
We were wondering if it was possible to use a hash function instead. Using the password he provided at the login screen and hash it n times. Master Password: hash(hash(login_password)) Would this be a good idea if we've used this generated hash as a key for AES? Would the hashing be secure enough against different kinds of attacks? The short answer is yes. A better answer is that you want to salt the password before you hash it many times, to keep from having rainbow tables created. Another better answer is that you want to hash many times to slow down password crackers. As others have mentioned, there are standards that can show you the way. PKCS#5 has a mechanism for this. OpenPGP does, too. They're subtly different, and understanding the differences can help you roll your own. Jon - The Cryptography Mailing List Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]
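The salted, iterated construction described above is what PKCS#5's PBKDF2 standardizes, and Python's standard library exposes it directly. A minimal sketch (parameter choices are illustrative, not a recommendation from the message):

```python
import hashlib
import os

def derive_aes_key(password, salt=None, iterations=200_000):
    """Derive a 256-bit AES key from a login password.
    The random salt defeats rainbow tables; the iteration count
    slows down offline password crackers."""
    if salt is None:
        salt = os.urandom(16)
    key = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"),
                              salt, iterations, dklen=32)
    return key, salt
```

Store the salt (and iteration count) alongside the ciphertext; the same password and salt always reproduce the same key.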
Re: OpenSparc -- the open source chip (except for the crypto parts)
On May 6, 2008, at 1:14 AM, James A. Donald wrote: Perry E. Metzger wrote: What you can't do, full stop, is know that there are no unexpected security related behaviors in the hardware or software. That's just not possible. Ben Laurie wrote: Rice's theorem says you can't _always_ solve this problem. It says nothing about figuring out special cases. True, but the propensity of large teams of experts to issue horribly flawed protocols, and for the flaws in those protocols to go undiscovered for many years, despite the fact that once discovered they look glaringly obvious in retrospect, indicates that this problem, though not provably always hard, is in practice quite hard. Yes, but. I tend to agree with Marcos, Ben, and others. It is certainly true that detecting an evil actor is ultimately impossible because it's equivalent to a non-computable function. It doesn't matter whether that actor is a virus, an evil vm, evil hardware, or whatever. That doesn't mean that you can't be successful at virus scanning or other forms of evil detection. People do that all the time. Ben perhaps over-simplified by noting that a single gate isn't applicable to Rice's Theorem, but he pointed the way out. The way out is that you simply declare that if a problem doesn't halt before time T, or can't find a decision before T, you make an arbitrary decision. If you're optimistic, you just decide it's good. If you're pessimistic, you decide it's bad. You can even flip a coin. These correspond to the adage I last heard from Dan Geer that you can make a secure system either by making it so simple you know it's secure, or so complex that no one can find an exploit. So it is perfectly reasonable to turn a smart analyzer like Marcos on a system, and check in with him a week later. If he says, Man, this thing is so hairy that I can't figure out which end is up, then perhaps it is a reasonable decision to just assume it's flawed. 
Perhaps you give him more time, but by observing the lack of a halt or the lack of a decision, you know something, and that feeds into your pessimism or optimism. Those are policies driven by the data. You just have to decide that no data is data. The history of secure systems has plenty of examples of things that were so secure they were not useful, or so useful they were not secure. You can, for example, create a policy system that is not Turing-complete, and thus decidably secure. The problem is that people will want to do more cool things with your system than it supports, so they will extend it. It's possible they'll extend it so it is more-or-less secure, but usable. It's likely they'll make it insecure, and decidably so. Jon - The Cryptography Mailing List Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]
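The "no data is data" policy above is simple to express in code. A sketch of the time-bounded decision procedure (the analyzer itself is whatever undecidable-in-general check you run; here it is modeled as an iterator of intermediate verdicts):

```python
import time

def decide(analyzer_steps, budget_seconds, pessimistic=True):
    """Run an analysis that may never decide, under a time budget.
    analyzer_steps yields 'accept', 'reject', or None (still thinking).
    If the budget expires, or the analyzer gives up, fall back to a
    policy default: pessimists reject, optimists accept."""
    deadline = time.monotonic() + budget_seconds
    for verdict in analyzer_steps:
        if verdict in ("accept", "reject"):
            return verdict
        if time.monotonic() > deadline:
            break
    return "reject" if pessimistic else "accept"
```

The arbitrary-decision-at-time-T step is exactly what turns a non-computable question into an operational policy.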
Re: Protection for quasi-offline memory nabbing
On Mar 19, 2008, at 6:56 PM, Steven M. Bellovin wrote: I've been thinking about similar issues. It seems to me that just destroying the key schedule is a big help -- enough bits will change in the key that data recovery using just the damaged key is hard, per comments in the paper itself. It is. That's something everyone should consider doing. However, I was struck by the decay curves shown in the Cold Boot paper. The memory decays in an S-curve. Interestingly, both the smoothest S-curve and the sharpest were in the most recent equipment. However, this suggests that a relatively small object (like a 256-bit key) is apt to see little damage. If you followed the strategy of checking for single-bit errors, then double-bit, then triple-bit, I hypothesize that this simple strategy would be productive, because of that curve. (I also have a few hypotheses on which bits will go first. I hypothesize that a high-power bit surrounded by low-power ones will go first, and a low-power bit amongst high-power ones will go last. I also hypothesize that a large random area is reasonably likely to get an early single-bit error. My rationale is that the area as a whole is going to have relatively high power 'consumption' because it is random, but the random area is going to have local artifacts that will hasten a local failure. Assuming that 1 is high-power and 0 is low-power, you expect to see a bitstring of 00100 or 0001000 relatively often in a blob of 32kbits (4KB) or 64kbits (8KB), and those lonely ones will have a lot of stress on them.) Despite that my hypotheses are only that, and I have no experimental data, I think that using a large block cipher mode like EME to induce a pseudo-random, maximally-fragile bit region is an excellent mitigation strategy. Now all we need is someone to do the work and write the paper. Jon - The Cryptography Mailing List Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]
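The single-bit-then-double-bit-then-triple-bit search is cheap for small objects. A sketch, where the "is this the right key?" check is modeled as comparing against a known hash of the correct key (in a real attack it would be a trial decryption); `max_flips=2` on a 256-bit key is only about 33,000 candidates:

```python
import hashlib
from itertools import combinations

def recover_key(decayed, true_key_hash, max_flips=2):
    """Search candidates at increasing Hamming distance from the decayed
    memory image, checking each against a hash of the correct key."""
    nbits = len(decayed) * 8

    def with_flips(flips):
        cand = bytearray(decayed)
        for bit in flips:
            cand[bit // 8] ^= 1 << (bit % 8)
        return bytes(cand)

    for r in range(max_flips + 1):  # distance 0, then 1, then 2, ...
        for flips in combinations(range(nbits), r):
            cand = with_flips(flips)
            if hashlib.sha256(cand).digest() == true_key_hash:
                return cand
    return None  # more decay than the search budget covers
```

Because of the S-curve, early in the decay most captures fall within this tiny search radius, which is precisely why destroying the key schedule (and fragilizing the key, as below) matters.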
Re: delegating SSL certificates
On Mar 16, 2008, at 8:50 AM, John Levine wrote: So at the company I work for, most of the internal systems have expired SSL certs, or self-signed certs. Obviously this is bad. You only think this is bad because you believe CAs add some value. Presumably the value they add is that they keep browsers from popping up scary warning messages. There are all sorts of reasonable arguments to be made that the browsers are doing the wrong thing (and the way that Microsoft prevents you from ever deleting any of their preinstalled CA certs is among the wrongest.) Yes, but. If a browser handled unknown certificates similarly to the way SSH does -- alerting the user when it sees an unknown, unrooted certificate, and then again only when there is a mismatch -- you would have an incentive to get a CA certificate (because businesses don't want their customers to see that scary message even once), while supporting ad-hoc infrastructures. This would require only software changes, not changes in the trust models, CAs, procedures, etc. A wicked person would suggest that this is because the present system was designed to support the business model, not the security model. I'm not a wicked person and would never suggest that. Jon - The Cryptography Mailing List Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]
Protection for quasi-offline memory nabbing
Such as Cold Boot, etc. There have been a number of conversations among my colleagues on how to ameliorate this, particularly with an eye to making suspend mode safer. In the Cold Boot paper, the authors suggested XORing a piece of random memory onto the dangerous bits, so as to fuzz them. This is a clever idea, but we didn't like it, particularly because XOR doesn't have the best diffusion in the world. The solution we came up with is to use EME mode (or equivalent) with a fixed key. The outline is that you encrypt all dangerous data, like the volume key, key expansion, etc., with a fixed key into a chunk that you keep to the side. This relies upon the property of EME (and other large-block, tweakable modes) that a single-bit error in the ciphertext propagates to an error in the entire plaintext. Consequently, a very low rate of memory decay turns into complete protection of that sensitive data. Upon suspend, you erase and deallocate the active store, and on wake you decrypt the fuzzed copy to get your keys and state variables back. If you want to one-plus this, you could have a timeout on the drive so that if it's idle for N seconds, you do the same. When we came up with this, we wondered if it was patentable. We've decided that it isn't, that this is something that is obvious to someone skilled in the art. Our reasoning is something like: Cold Boot paper suggests XORing random memory but -- XOR has cruddy diffusion What has better diffusion? (discard suggestions like lead, churches, and very small rocks) Block ciphers have great diffusion but -- block ciphers operate on only a small chunk What operates like a block cipher on a large chunk? Tweakable modes like EME. QED The rest is just software engineering. The cool thing about using EME (or equivalent) is that the larger the chunk you create, the better you survive a Cold Boot attack. Note, however, that an attacker who can grab memory with no errors in it, such as someone who is playing DMA games, still gets the keys. 
To protect against that, you have to have an authentication mechanism, which is outside the scope of this -- we want something that is transparent, but can make people worry less about suspending their laptop. Also note that you don't really need a full cipher. All you need is reversible diffusion that maximizes damage on a single-bit error. However, the danger in coming up with another function is that you're effectively designing special-purpose crypto. Yes, it's really special-purpose coding, not crypto, but it's a lot safer to use crypto. We understand it better. A number of people participated in our discussions and at least two people independently thought of the core idea. The people include but are not limited to (which means I apologize to everyone I forgot): Colin Plumb, Phil Zimmermann, Hal Finney, Andrey Jivsov, Will Price, David Finkelstein, and Bill Zhao. Jon - The Cryptography Mailing List Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]
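EME itself is not in any standard library, but the property being relied on -- any single-bit change to the stored blob garbles the entire recovered plaintext -- can be illustrated with a hash-based variant of Rivest's package (all-or-nothing) transform. This is a stand-in to show the error-amplification idea, not the EME construction the message describes, and not production code:

```python
import hashlib
import secrets

BS = 32  # block size = SHA-256 digest size

def _keystream_block(key, i):
    return hashlib.sha256(key + i.to_bytes(8, "big")).digest()

def package(data):
    """All-or-nothing transform: XOR blocks with a stream derived from a
    random key k, then append k masked by hashes of every output block."""
    data = data + b"\x00" * (-len(data) % BS)  # naive padding, for the demo
    k = secrets.token_bytes(BS)
    blocks = [data[i:i + BS] for i in range(0, len(data), BS)]
    out = [bytes(a ^ b for a, b in zip(m, _keystream_block(k, i)))
           for i, m in enumerate(blocks)]
    # Any single-bit change anywhere in `out` changes the recovered k
    # completely, which in turn garbles every recovered block.
    mask = bytearray(k)
    for i, c in enumerate(out):
        h = hashlib.sha256(c + i.to_bytes(8, "big")).digest()
        mask = bytearray(a ^ b for a, b in zip(mask, h))
    out.append(bytes(mask))
    return b"".join(out)

def unpackage(blob):
    blocks = [blob[i:i + BS] for i in range(0, len(blob), BS)]
    cts, last = blocks[:-1], blocks[-1]
    k = bytearray(last)
    for i, c in enumerate(cts):
        h = hashlib.sha256(c + i.to_bytes(8, "big")).digest()
        k = bytearray(a ^ b for a, b in zip(k, h))
    return b"".join(bytes(a ^ b for a, b in zip(c, _keystream_block(bytes(k), i)))
                    for i, c in enumerate(cts))
```

With a wide-block mode (or this transform) around the key material, a cold-boot capture with even one decayed bit recovers only garbage, which is the whole point of the scheme above.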
Re: cold boot attacks on disk encryption
So, is anyone else as amused as I am that Apple can release an EFI firmware update to zeroize MacBook Air memory at boot-time, turning the heretofore widely-decried inability to upgrade that laptop's RAM -- due to the chips being soldered to the motherboard -- into an advantage, and making the Air the laptop of choice for discriminating, fashion-aware, security-conscious professionals the world over? No. Apple (or anyone doing EFI boot, for example, someone doing WDE for OS X) can easily modify the EFI boot to zero memory. It isn't just the Air, it's any Intel Mac, but remember those are just Intel EFI systems. Note, however, that this does not completely solve the attack. If someone hits the reset button or yanks power, then you don't get to erase. Jon - The Cryptography Mailing List Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]
Interesting New Developments in SocGen
http://news.bbc.co.uk/2/hi/business/7255685.stm Excerpt: An internal investigation into billions of euros of losses at Societe Generale has found that controls at the French bank lacked depth. The results of the investigation also show that rogue trades were first made back in 2005. http://news.bbc.co.uk/2/hi/business/7256102.stm Excerpt: Societe Generale made a profit in 2007 despite a trading scandal that cost the bank 4.9bn euros ($7bn; £3.7bn). The French bank said it made a net profit of 947m euros for the year, although this was down 82% from 2006. I think these two things are very interesting from a viewpoint of security and economics. This fellow had been making unauthorized trades for two to three years, and when it all came tumbling down, it knocked off at most 82% of one year's profits. (I say at most because it's reasonable to think that in a year of subprime issues, they'd have been down 30-50%.) Compare and contrast with Nick Leeson's sinking of Barings, which was a mere $1.4bn. We can even double-ish that to say $3bn to account for the intervening time. Both cases were unauthorized trades spinning out of control in attempts to cover small losses, but Barings was sunk for the (adjusted) $3bn and SocGen merely loses 80% of one year's profits at $5bn. Does this suggest that what is really needed is a way to detect losses that could spin out of control before they do, as opposed to direct security mechanisms? Jon - The Cryptography Mailing List Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]
Re: cold boot attacks on disk encryption
On Feb 21, 2008, at 12:14 PM, Ali, Saqib wrote: However, the hardware based encryption solutions like (Seagate FDE) would easily deter this type of attacks, because in a Seagate FDE drive the decryption key never gets to the DRAM. The keys always remain in the Trusted ASIC on the drive. Umm, pardon my bluntness, but what do you think the FDE stores the key in, if not DRAM? The encrypting device controller is a computer system with a CPU and memory. I can easily imagine what you'd need to build to do this to a disk drive. This attack works on anything that has RAM. Jon - The Cryptography Mailing List Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]
Re: Poor password management may have led to bank meltdown
On Feb 4, 2008, at 1:55 PM, Arshad Noor wrote: Do business people get it? Do security professionals get it? Apparently not. Arshad Noor StrongAuth, Inc. Huge losses reported by Société Générale were apparently enabled by forgotten low-level IT chores such as password management. http://www.infoworld.com/article/08/02/04/Poor-password-management-may-have-led-to-bank-meltdown_1.html Yes, but get what? "It" is a vague word. The reporter showed some wit by using the word may. This was an attack by an evil (or crazy) insider. Evil insider attacks are the hardest to protect against. If the insider decided that he was going to start making trades for whatever reason, then he'd find a weak point that would allow him to make trades, and use it, no matter what it is. (My personal hypothesis is a variant of a mad-scientist attacker -- They laughed at me when I told them my trading theories! Laughed! But I'll show them! I'll show them ALL!!!) If this person had needed to work 1000 hours to get around a hardware token, he would simply have done the work. The resulting loss might have been an order of magnitude greater. High-security procedures tend to be more brittle for psychological reasons. If you have the magic dingus, then you are authorized, and no one ever questions the dingus. Also, one must look at the economics and psychology of the situation. Traders are prima-donna adrenaline junkies who trade vast sums of money all the time and are not shy about expressing their frustrations. Looking at the sheer economics first: * A trader trades C units of currency every hour, with an average profit of P (for example 5% profit is P=1.05). * There are T traders in the organization. * The extra authentication produces a productivity drop of D. For example, let us suppose a trader has to authenticate once per hour, and it takes 10 seconds to authenticate. This gives us a D of .9972 or 3590/3600. So the operational cost of your authentication is (1-D)*T*C*P per hour. 
Divide €4.9G by that, and you get the number of hours for the raw break-even time on this. Add to this the probability that the hassle will convince a trader to jump ship to another firm (J), times the number of hours of trading lost until you find a replacement (H). We'll assume the replacement needs no spinup time to become as productive as the previous trader. That's an additional cost of J*H*T*C*P. This is the psychological factor. As I said, traders are prima donnas who are used to getting their own way. People have criticized post-9/11 airline security on similar grounds. They observe that some number of people drive rather than fly, and calculate out the difference in deaths-per-passenger-mile. I've seen numbers that work out to a handful of 9/11s per year caused by traffic displacement. They also observe that large numbers of people spend extra time in lines, which works out to a lost life number. For example, if you assume that passengers spend 10 extra minutes clearing security and a life is 70 years, then roughly 6 million passengers represents one lost life. There's always much to criticize in these models. I could write a reply to this message with criticisms, and so can you. Nonetheless, the models show that there's more than just the raw security to think about. Jon - The Cryptography Mailing List Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]
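The raw break-even model above is a one-liner. Plugging in the message's example D = 3590/3600, with T = 100 traders and C = €1M/hour as purely illustrative assumptions (the message gives no values for T or C):

```python
def breakeven_hours(loss, traders, currency_per_hour, profit_factor,
                    productivity_after_drop):
    """Hours of operation before the authentication overhead costs as
    much as the loss it is meant to prevent: loss / ((1-D)*T*C*P)."""
    hourly_cost = ((1 - productivity_after_drop)
                   * traders * currency_per_hour * profit_factor)
    return loss / hourly_cost
```

Under those assumed numbers, €4.9G breaks even after 16,800 trading hours -- the kind of figure one would then weigh against the J*H*T*C*P attrition term.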
Re: Hushmail in U.S. v. Tyler Stumbo
I don't know anything about this case, so everything I say is pure supposition. Let's suppose you have Alice and Bob who are working together on some sort of business, and they are using some OpenPGP [1] software to encrypt their emails that pertain to that business. Let's suppose that the authorities then decide to raid Bob. Let us then suppose that they go to Alice's ISP and get a lot of encrypted email, by warrant, subpoena, etc. It doesn't matter for our purposes what ISPs Alice and Bob are using, nor what OpenPGP software they are using. * Let us consider the case where Bob turns state's evidence. If those emails were encrypted to both Alice's key and Bob's key, after Bob turns state's evidence, the authorities can decrypt all the messages they seized from Alice's ISP. It doesn't matter what Alice did with her key or what Alice's ISP did with it. They can be decrypted because Bob's key has been compromised. * Let us consider the same basic scenario where all the messages are encrypted to the recipient's, but not the sender's, key. In this case, the authorities can decrypt all of Alice's messages to Bob, but not Bob's messages to Alice. After they have compromised Bob, all of Alice's messages to Bob can be decrypted. The fact that Alice's security is untouched is mostly irrelevant. Alice is likely toast, not because of the cryptography, but because Bob has been compromised, and Bob's key decrypts mail Alice has sent. * Let us consider a slightly different scenario in which neither Alice nor Bob are compromised, but Bob is detained. If the authorities raid Alice's ISP, despite the fact that they cannot decrypt the messages, they may be able to show a connection between Alice and Bob. If they have been CCing themselves, then you'll find the same undecryptable message in each mailbox. If they have been using reply, there's probably metadata in the plaintext headers that shows that M_n is a reply to M_{n-1} ... M_1, and thus you have a chain of messages. 
If there is other evidence, such as Bob sending checks to Alice every so often, the cryptography may be moot or worse than moot. (If those messages are harmless, why don't you decrypt them? Yes, this can get into many interesting discussions like the applicability of Amendments 4 and 5, but these are also not cryptographic. I really don't want to discuss them because I'll bet we agree.) Cryptography is not magic pixie dust that you can sprinkle on a security problem and make it go away. If your adversary is a major national government, you have operational security issues, as well. If your adversary is a major national government that has direct authority over where you live, then you have a much larger problem. The adversary is going to use forensic analysis, traffic analysis, and anything else they can think of. They are also not dumb. You also have to expect that third parties, including ISPs, are unlikely to see why they should fail to comply with legal documents like subpoenas and warrants because of what you did. Smart cryptographers make sure there are no backdoors in the crypto, because if there were, then every beat cop and two-bit mafioso would want you to break just that one message -- or else. If the system is strong, it all comes down to your operational security. Jon [1] I have to give a now-usual rant. PGP is a trademark of PGP Corporation and refers to software it makes. OpenPGP is an IETF standard that covers encryption, certificates, and digital signatures. There are many products that implement the OpenPGP standard. PGP software is one of those. But other products, such as GnuPG, Hushmail, Bouncy Castle, and so on also implement the OpenPGP standard. Furthermore, PGP software implements other standards than OpenPGP. For example, PGP software implements the S/MIME and X.509 standards as well as the OpenPGP standard. - The Cryptography Mailing List Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]
Re: Hushmail in U.S. v. Tyler Stumbo
On Nov 1, 2007, at 10:49 AM, John Levine wrote: Since email between hushmail accounts is generally PGPed. (That is the point, right?) Hushmail is actually kind of a scam. In its normal configuration, it's in effect just webmail with an HTTPS connection and a long password. It will generate and verify PGP signatures and encryption for mail it sends and receives, but they generate and maintain their users' PGP keys. There's a Java applet that's supposed to do end to end encryption, but since it's with the same key that Hushmail knows, what's the point? I'm sorry, but that's a slur. Hushmail is not a scam. They do a very good job of explaining what they do, what they cannot do, and against which threats they protect. You may quibble all you want with its *effectiveness* but they are not a scam. A scam is being dishonest. You also mischaracterize the Hushmail system. The classic Hushmail does not generate the keys, and while it holds them, they're encrypted. The secrets Hushmail holds are as secure as the end user's operational security. I know what you're going to say next. People pick bad passphrases, etc. Yes, you're right. That is not being a scam. They have another system that is more web-service oriented, and they explain it on their web site far better than I could. It has further limitations in security but with increased usability. It is also not a scam. Jon - The Cryptography Mailing List Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]
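The passphrase-protected-key idea can be sketched minimally (this is hypothetical, not Hushmail's actual storage format): the server stores only an encrypted private key, and the wrapping key is derived from the user's passphrase, so the stored secret is exactly as strong as that passphrase. XOR with a PBKDF2-derived stream stands in for real symmetric encryption:

```python
import hashlib
import secrets

def wrap_private_key(private_key: bytes, passphrase: str, salt: bytes) -> bytes:
    # Derive a wrapping key from the passphrase. A weak passphrase makes
    # this derivation the whole of the user's security, which is exactly
    # the operational-security point made above.
    kek = hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt,
                              200_000, dklen=len(private_key))
    # XOR is a stand-in for a real symmetric cipher; it also makes
    # unwrapping the same operation as wrapping.
    return bytes(a ^ b for a, b in zip(private_key, kek))

salt = secrets.token_bytes(16)
priv = secrets.token_bytes(32)
stored = wrap_private_key(priv, "correct horse battery staple", salt)
# The server holds only `stored` and the salt; knowing the passphrase
# (or guessing it) recovers the key.
assert wrap_private_key(stored, "correct horse battery staple", salt) == priv
```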
Re: Elcomsoft trying to patent faster GPU-based password cracker
On Oct 24, 2007, at 1:21 PM, Steven M. Bellovin wrote: I hope they don't get the patent. The idea of using a GPU for cryptographic calculations isn't new; see, for example, Remotely Keyed Cryptographics: Secure Remote Display Access Using (Mostly) Untrusted Hardware (http://www1.cs.columbia.edu/~angelos/Papers/2005/ rkey_icics.pdf) Debra L. Cook, Ricardo Baratto, and Angelos D. Keromytis. In Proceedings of the 7th International Conference on Information and Communications Security (ICICS), pp. 363 - 375. December 2005, Beijing, China. An older version is available as Columbia University Computer Science Department Technical Report CUCS-050-04 (http://mice.cs.columbia.edu/getTechreport.php? techreportID=110format=pdf), December 2004. I agree completely. If the PTO does their job, they won't get it. This is like claiming that once we know that making daiquiris in a blender is possible, it's patentable to improve that by making pina coladas. If you're skilled in the art, you know that this is pretty obvious. Crypto extended to cryptanalysis is less of a stretch than strawberries extended to coconut and pineapple. Unfortunately, the PTO hands out patents for things like using a laser pointer as a cat toy. Jon - The Cryptography Mailing List Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]
Re: Quantum Cryptography to be used for Swiss elections
On Oct 22, 2007, at 12:07 PM, Steven M. Bellovin wrote: On Thu, 18 Oct 2007 12:49:40 -0700 Jon Callas [EMAIL PROTECTED] wrote: Ah, there are some trustworthy photons. Oops, we can trust them, but we don't know if they are relevant. Ah, there's a relevant photon And we know they are trustworthy photons because they have certificates signed by an accredited third-party boson. Boson or bogon? Boson. Bosons are force-carrier particles, as opposed to fermions. Photons are themselves bosons, but there are other bosons that carry other forces. There's the Higgs boson, W and Z bosons, and so on. Gluons, the particles that hold atomic nuclei together, are also bosons. Bogons are, technically, bosons as they are the particle that carries a quantum unit of bogosity. However, you yourself have criticized people who discuss the role of bogosity in quantum cryptography [sic] (I prefer the term quantum secrecy), and therefore I will say no more about bogons and QC. Jon - The Cryptography Mailing List Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]
Re: Another Snake Oil Candidate
I'm a beta-tester for it, and while I can understand a small twitch when they talk about military and beyond military levels of security, it is very cool. It has hardware encryption and will erase itself if there are too many password failures. I consider that an issue, personally, but it appeals to people. The reason I consider it an issue is that I have had to use a brain-dead-simple password I'm not going to forget because if I get cute and need to try a number of things, poof, I'm dead. Yeah, it's using AES CBC mode, but that's a good deal better than a lot of encrypted drives that are using ECB. It also has their own little suite of Mozilla plus Tor and Privoxy for browsing and they've set it up so that you can run that on another computer from the drive. It's not bad at all. My only real complaint is that it requires Windows. Jon - The Cryptography Mailing List Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]
Re: a new way to build quantum computers?
Via Farber's list: From: Rod Van Meter [EMAIL PROTECTED] Date: August 18, 2007 11:39:47 AM EDT To: [EMAIL PROTECTED] Subject: Re: [IP] Light pulses crack security codes within seconds http://www.tgdaily.com/content/view/33425/118/ Wow, that's one of the most egregious quantum computing-related articles I've ever seen. I'm not even sure where to start. First off, let's point at the real research paper: http://www.sciencemag.org/cgi/content/abstract/317/5840/929 Coherent Optical Spectroscopy of a Strongly Driven Quantum Dot Xiaodong Xu, Bo Sun, Paul R. Berman, Duncan G. Steel, Allan S. Bracker, Dan Gammon, L. J. Sham I read it. It's an advance, but does not yet mean anything at all is practical. Their work is on the optical properties of self-assembled quantum dots. There are two major categories of quantum dots in semiconductors, self-assembled and lithographically created (and within each of those, many types). The self-assembled dots are a compound grown on top of a substrate of a different kind. Differences in the crystalline structure mean that the deposited material beads up, like water on a freshly-waxed car. The quantum dot itself then is a place where the motion of electrons can be confined to a small two-dimensional area at the interface between the materials, creating a place where quantum wave functions can behave like an artificial atom. The work presented in the paper is some of the first solid experimental work on the optical properties of self-assembled dots that I have seen, though I'm not an expert. Various groups, including that of my adviser, Kohei M. Itoh ( http://www.appi.keio.ac.jp/Itoh_group/ ), have been working for years on the growth and mechanical characteristics (stress/strain, size and shape, etc.) of self-assembled dots. All of that has been very hard work, and as far as I know no one has a reliable way to grow the dots in a given place. I wish they had a micrograph of the device, I'd like to see it. 
But the TG article talks only a little about the research itself; it's mostly breathless pie-in-the-sky reporting on the possibilities of quantum computers. Light pulses crack security codes within seconds, the title reads. Wow. Well, first off, it can't be done yet, and won't be done for years, despite the present tense. Second, saying it's done with light pulses is like saying we compute today with electrons. It's true, but tells you nothing about transistors or computer architecture. Third, crack security codes is as vague and non-technical as it gets, not to mention outright wrong (we'll come back to that). Fourth, within seconds presumes many things about a quantum computer that are not yet defined to any level of precision. This topic is the focus of my research: how do you build a large-scale quantum computer out of a given technology? No one really knows yet. Which security codes does a paper on the spectroscopy of a quantum dot break? Well, none, really. But where they're headed with that is obviously Shor's algorithm for factoring large numbers on a quantum computer. If the algorithm can be efficiently implemented, it is theoretically capable of breaking RSA public-key cryptography and elliptic curve crypto. HOWEVER, the advantage may well be with the defenders on this one. Shor turns a super-polynomial problem (factoring) into a polynomial one. Not coincidentally, the complexity of running Shor is similar to the complexity of doing the encryption in the first place. And running an algorithm of the same computational class on a quantum machine will probably always be harder than running an algorithm on a classical computer. So, raise your key length and you might be okay. Shor does nothing to affect symmetric key cryptography, or any system not dependent on the factoring problem. 
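For the curious, the classical half of Shor's reduction can be sketched in a few lines: given the multiplicative order r of a random a mod N, the factors of N fall out of a gcd. The order-finding step below is brute force -- that is precisely the part a quantum computer would do with the QFT:

```python
from math import gcd

def order(a: int, n: int) -> int:
    # Multiplicative order of a mod n (a must be coprime to n).
    # This brute-force loop is the step Shor replaces with the QFT.
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def factor_via_order(n: int, a: int):
    # Classical post-processing in Shor's algorithm: if r is even and
    # a^(r/2) != -1 mod n, then gcd(a^(r/2) +/- 1, n) splits n.
    r = order(a, n)
    if r % 2:
        return None  # unlucky choice of a; pick another
    y = pow(a, r // 2, n)
    for candidate in (gcd(y - 1, n), gcd(y + 1, n)):
        if 1 < candidate < n:
            return candidate, n // candidate
    return None

# The textbook example: 15 = 3 * 5, using a = 7 (which has order 4 mod 15).
print(factor_via_order(15, 7))  # → (3, 5)
```

The classical loop here takes time exponential in the bit length of N; the whole point of the quantum machine is to make only that one step cheap.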
I hesitate to mention this, for fear it will be misinterpreted, but in my opinion there is still some small doubt about whether Shor can in practice be scaled to large sizes, on theoretical grounds, let alone the practical difficulties of building one using any given technology. The problem is that the quantum Fourier transform (QFT) that is the key to Shor requires, in the abstract, exponentially precise gates as the problem size grows. Most researchers believe that the QFT can be truncated at some reasonable level and will still have a high probability of success. However, the several papers on the topic (including one by a collaborator of mine) in the last decade have taken different approaches to the calculation, and come up with substantially different answers, making different assumptions about the problem. The theorists seem confident, but I will give only provisional assent until I see it implemented. Perhaps I'm just not smart enough to fully grasp the arguments in the papers. Breaking a code in seconds really depends on both the problem and the machine. A major factor is how many levels of quantum error correction (QEC) are necessary, which is directly
Re: Quantum Cryptography
On Jun 26, 2007, at 10:10 AM, Nicolas Williams wrote: This too is a *fundamental* difference between QKD and classical cryptography. What does this classical word mean? Is it the Quantum way to say real? I know we're in violent agreement, but why are we letting them play language games? IMO, QKD's ability to discover passive eavesdroppers is not even interesting (except from an intellectual p.o.v.) given: its inability to detect MITMs, its inability to operate end-to-end across middle boxes, while classical crypto provides protection against eavesdroppers *and* MITMs both *and* supports end-to-end operation across middle boxes. Moreover, the quantum way of discovering passive eavesdroppers is really just a really delicious sugar coating on the classical term denial of service. I'm not being DoSed, I'm detecting a passive eavesdropper! Jon - The Cryptography Mailing List Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]
Re: Free Rootkit with Every New Intel Machine
On Jun 25, 2007, at 7:23 PM, Matt Johnston wrote: On Mon, Jun 25, 2007 at 04:42:56PM +1200, David G. Koontz wrote: Apple (mis)uses TPM to unsuccessfully prevent OS X from running on non-Apple Hardware. All Apple on Intel machines have TPM, that's what 6 percent of new PCs? To nit pick, the TPM is only present in some Apple Intel machines and isn't used in any of them. See http://osxbook.com/book/bonus/chapter10/tpm/ Their OS decryption key is just stored in normal firmware, unprotected AIUI. They've apparently stopped shipping TPMs. There isn't one on my MacBook Pro from last November, and it is missing on my wife's new Santa Rosa machine. If you want to see if a machine has one, then the command: sudo ioreg -w 0 | grep -i tpm should give something meaningful. Mine reports the existence of ApplePCISlotPM, but that's not the same thing. Jon - The Cryptography Mailing List Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]
Re: Quantum Cryptography
On Jun 22, 2007, at 10:44 AM, Ali, Saqib wrote: ...whereas the key distribution systems we have aren't affected by eavesdropping unless the attacker has the ability to perform 2^128 or more operations, which he doesn't. Paul: Here you are assuming that key exchange has already taken place. But key exchange is the toughest part. That is where Quantum Key Distribution QKD comes in the picture. Once the keys are exchanged using QKD, you have to rely on conventional cryptography to do bulk encryption using symmetric crypto. Using Quantum Crypto to do bulk encryption doesn't make any sense. It is only useful in key distribution. Let me create an aphorism to sum up what Paul, Perry, and others have said in detail before I address your comment: If Quantum Cryptography does what it claims, then it is strengthening the strongest link in the chain of security. Now to your comment. If you do a 3000 bit Diffie-Hellman exchange, you have a key exchange with 2^128 security, to the best of our knowledge, assuming this and that, blah, blah, blah. If you don't like 3000 bit integers, go to elliptic curve. I have, in some of my talks, renamed Quantum Cryptography to Quantum Secrecy. If the QC people would stop calling it cryptography, a good deal of the hostility you find among us crypto people would evaporate. Let me give an analogy. I will posit Quantum Message Teleportation. Using QMT, Alice can write her message on a piece of paper, close her eyes, and it will disappear from her hand and appear in Bob's hand. This is cool. This is useful. It is amazing. It is also not cryptography. It also has all the problems that Perry points out in QC, like a lack of authentication and so on. Like QC, adding cryptography to it makes it even more useful. The QC people should change their song to QS, and stop bashing the mathematicians with arguments we can show are somewhere between incomplete and fallacious. 
Then they might find us drift over to supporting them because while Quantum Secrecy is not practical, it is very cool. Jon - The Cryptography Mailing List Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]
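The Diffie-Hellman exchange mentioned above takes only a few lines. This sketch uses a deliberately small prime (2^127 - 1, a Mersenne prime) so it runs instantly; a real exchange at the ~2^128 security level would use a ~3000-bit group or an elliptic curve, as the message says:

```python
import secrets

# Toy parameters -- far too small for real use, chosen only for brevity.
p = 2**127 - 1
g = 5

a = secrets.randbelow(p - 2) + 1   # Alice's secret exponent
b = secrets.randbelow(p - 2) + 1   # Bob's secret exponent

A = pow(g, a, p)                   # Alice transmits A
B = pow(g, b, p)                   # Bob transmits B

# Both sides derive the same shared secret; an eavesdropper who sees
# only A and B must solve the discrete log problem to recover it.
shared_alice = pow(B, a, p)
shared_bob = pow(A, b, p)
assert shared_alice == shared_bob
```

Note that, just as with QKD, this exchange alone does not authenticate the parties -- a man in the middle can run two separate exchanges -- which is why DH is normally combined with signatures or certificates.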
Re: Blackberries insecure?
On Jun 20, 2007, at 8:41 PM, Steven M. Bellovin wrote: According to the AP (which is quoting Le Monde), French government defense experts have advised officials in France's corridors of power to stop using BlackBerry, reportedly to avoid snooping by U.S. intelligence agencies. That's a bit puzzling. My understanding is that email is encrypted from the organization's (Exchange?) server to the receiving Blackberry, and that it's not in the clear while in transit or on RIM's servers. In fact, I found this text on Blackberry's site: There have been rumors for years that the BlackBerry protocol is compromised by some government or other. I've heard them for years. Ultimately, no one knows, and there's no way to know. It boils down to whether you trust RIM or not. There is a PGP software package for the BlackBerry that will further encrypt the content before it's sent out. I use it, and it's quite nice. It cooperates really nicely with one of my PGP Universal servers, as well. It's one of the best integrations of crypto into a mail package I've ever seen. However, you still have to trust RIM. I've never seen any of the code myself, and to my knowledge no one outside RIM has. There are any number of ways that the implementation could be compromised, with or without RIM's knowledge. Paranoia is the *unwarranted* belief that people are out to get you. The warranted belief that people are out to get you is caution. Personally, I think that this is pure paranoid rumor and innuendo. That doesn't mean it's wrong, it just means it's unwarranted. Last week, I got sent a posting on a web site that someone made that said that he had secret knowledge that the USG could break RSA for all key sizes that anyone uses, so you should just stop using any cryptosystem that uses it. Of course, he couldn't tell us anything more to protect the position of the person who told him that. 
I said that if someone told you that an unidentified friend had secret knowledge that banks were unsafe and so you shouldn't keep your money there, your I'm-being-scammed hairs on the back of your neck would stand up. But if some unidentified someone tells you that the crypto's bad, it's met with complete credulity. I have no doubt that people in various governments want to spy on high-ranking French. Duh. But what's more likely, that there are secret government compromises of security, or that there's a secret disinformation campaign with the goal of convincing these people that the crypto is compromised? Of course, the really delicious theory is that they've compromised the crypto and then started the disinformation campaign in order to get people like me to discredit the disinformation campaign and thus reassure people that the crypto isn't broken, when in fact it is. Is this paranoid, or merely cautious? Jon - The Cryptography Mailing List Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]
Re: question re practical use of secret sharing
On Jun 13, 2007, at 4:47 AM, Charles Jackson wrote: A quick question. Is anyone aware of a commercial product that implements secret sharing? If so, can I get a pointer to some product literature? PGP. http://www.pgp.com/ I can tell you more gory details than you're probably interested in. But you can go get a free trial and play with it. Jon - The Cryptography Mailing List Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]
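For readers unfamiliar with the primitive, here is a minimal sketch of Shamir's threshold secret sharing, one common construction (no claim that this is the exact scheme PGP's product uses): a secret becomes the constant term of a random polynomial over a prime field, shares are points on that polynomial, and any k of them reconstruct the secret by Lagrange interpolation at x = 0.

```python
import secrets

P = 2**127 - 1  # prime field modulus; large enough for a 16-byte secret

def split(secret: int, k: int, n: int):
    # Random polynomial of degree k-1 with the secret as constant term.
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def combine(shares):
    # Lagrange interpolation evaluated at x = 0 recovers the constant term.
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        # pow(den, P-2, P) is the modular inverse, since P is prime.
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

secret = 1234567890
shares = split(secret, k=3, n=5)
assert combine(shares[:3]) == secret   # any 3 of the 5 shares suffice
assert combine(shares[2:5]) == secret
```

Fewer than k shares reveal nothing about the secret, which is what makes the scheme useful for protecting high-value keys among several officers.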
PRZ status
He's out of surgery, doing well, and the doctors say he'll be better than he's been for ten years. Jon - The Cryptography Mailing List Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]
Re: Enterprise Right Management vs. Traditional Encryption Tools
On May 9, 2007, at 5:01 PM, Ali, Saqib wrote: Hi Jon, Rights management systems work against polite attackers. They are useless against impolite attackers. Look at the way that entertainment rights management systems have been attacked. The rights management system will be secure so long as no one wants to break them. There is tension between the desire to break it and the degree to which its users rely on it. At some point, this tension will snap and it's going to hurt the people who rely on it. A metaphor involving a rubber band and that smarting is likely apt. What about DRM/ERM that uses TPM? With TPM the content is pretty much tied to a machine (barring screen captures etc) Will ERM/DRM be ineffective even with the use of TPM? Thanks Saqib Ali Your comment of barring screen captures etc. is a bit like saying that won't a bank be safe from robberies barring someone waving a gun in a teller's face, etc. Yeah, sure, but doesn't that kinda miss the point? DRM works if the attackers are polite. The less polite they are, the less well it works. DRM systems for media are probably more immune to analog hole attacks than ERM systems. Imagine that someone ERM protected an email showing things that Gonzales couldn't remember when he was testifying to Congress, or in some stock scandal, etc. A photo of a screen with a cell phone camera would be sufficient. We have not (yet) seen an attack where someone got a pre-release of a movie and then pointed a camera at a laptop screen, but we will. If you add in a TPM, it depends entirely on how impolite the attackers are, as well as the construction of the TPM. One of the recent attacks against AACS involved the attackers unsoldering the chip and attacking it directly. That's pretty rude, but it worked. If someone is so impolite that they'll put the TPM chip under a scanning electron microscope, they can probably just read the bits off. Very few smart cards can survive that. 
Remember, this is all a trade-off between the cost of the device and the devotion of the attacker. TPM chips have to be very cheap, because the customer is ultimately paying for them. That means their defenses can't be very thorough. Furthermore, when the owner of the device is the attacker, you can't afford very many defenses. If a music player, for example, went DOA because it was dropped, went over/under temperature, and so on, it would be a financial nightmare, as you probably have to replace them under warranty. People who hate DRM would buy devices, monkeywrench them, and then demand a refund. ERM systems have the advantage that in general the attackers are more polite. More people want to break AACS than rights-controlled analyst reports. However, once something really juicy happens, like just needing the content registration key for a document that will get a politician in jail -- well, plenty of people can hack that. Now, all of a sudden, the attackers won't be polite, and that metaphor I made about a rubber band snapping will seem modest. Really, you're much better off with real crypto and personnel policies. Jon - The Cryptography Mailing List Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]
PRZ going in for heart surgery
Phil Zimmermann is going in tonight (7 May) for heart bypass surgery. He's not in immediate danger -- he's not having a heart attack -- but they're pushing him into the hospital quicker than any reasonable person would like. Obviously, that makes for worries. He meets with his surgeon tomorrow morning, and likely will have surgery tomorrow (8 May). Jon - The Cryptography Mailing List Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]
Re: Enterprise Right Management vs. Traditional Encryption Tools
On May 8, 2007, at 10:16 AM, Ali, Saqib wrote: I was recently asked why not just deploy a Enterprise Right Management solution instead of using various encryption tools to prevent data leaks. Any thoughts? What problem are you trying to solve? If you're dealing with a rights-management problem, such as how do you give someone a document that they can read on the screen but not print, you aren't going to solve that with a cryptosystem. However, rights management systems have characteristics that are different. Rights management systems work against polite attackers. They are useless against impolite attackers. Look at the way that entertainment rights management systems have been attacked. The rights management system will be secure so long as no one wants to break them. There is tension between the desire to break it and the degree to which its users rely on it. At some point, this tension will snap and it's going to hurt the people who rely on it. A metaphor involving a rubber band and that smarting is likely apt. One way this fails is the good old analog hole. People can still take pictures of their screens. Another way this fails is for people to rely upon rights management as a cover for sloppiness, anger, or mendacity. If you think you can revoke a message or send Mission Impossible documents, you will. Someday, someone on the receiving end will use the analog hole. Oops. Imagine the case where a tech support person tells off an obnoxious customer, who takes a picture of the screen. Furthermore, there are subtle problems with rights-management and policy. Let's suppose that I run an organization that needs to archive documents. I therefore *must* reject documents that I cannot archive. 
I have personally stuck more to having crypto be a form of access control (once you get to a document, you have it) than as use control because: * The former problem is hard enough * We know that DRM of any sort will ultimately fail * Human nature will lead people to get into trouble *because* of rights management. I think that the operational issue -- that rights management *cannot* work -- trumps everything else, and turns the social issues (if you can tell someone off and deny it, will you?) into nothing other than an information bomb. You're going to end up looking like Wile E. Coyote, with a blackened face and stunned, blinking eyes. Jon - The Cryptography Mailing List Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]
Re: can a random number be subject to a takedown?
On May 1, 2007, at 12:53 PM, Perry E. Metzger wrote: A lot of sites have been getting DMCA takedowns for the HD-DVD processing key that got leaked recently. My question to the assembled: are cryptographic keys really subject to DMCA takedown requests? I suspect they are not copyrightable under the criterion from the phone directory precedent. My tongue is slightly in my cheek as I say this: once a random number is known, it's not random any more. An idealized property of random numbers like keys is that there be no algorithm for producing it that is better than guessing. I can presently guess this key with probability greater than 2^-128 using this algorithm in a C-like pseudocode:

unsigned char* guess_key(void)
{
    /* static, so the pointer we return remains valid after the function exits */
    static unsigned char key[] = {0x0a, 0xFa, 0x12, 0x03, 0xD9, 0x42, 0x57, 0xC6,
                                  0x9E, 0x75, 0xE4, 0x5C, 0x64, 0x57, 0x89, 0xC1};
    return key;
}

(Or it would if I'd put the actual AACS key in there.) The question is whether a *specific* key can be taken down. This is open to argument, because the DMCA only applies to things that are copyrightable, and one can argue convincingly that keys are not copyrightable. (Sketch of argument: if keys were copyrightable then I could copyright a list of all keys. I can't copyright a database, or even a phone book, so the notion that I could copyright a list of all numbers in the set [0..N] is absurd.) As far as anti-circumvention goes, keys themselves can't be used for circumvention. Assuming that the above were the AACS key, I couldn't use it to circumvent because I don't know the right protocol to use. Consider another scenario: one can use a brick to smash a window, but possessing a brick does not mean you've broken windows. If I have a proper key, but no software, I am not capable of circumventing. Likewise, if I had software that could do the crypto, but no key, I'm not capable. It is only if I have both the software and the key that I have something that *might* be a circumvention device. 
Even things that might be circumvention devices are not always. The test in the DMCA is if its primary purpose is for circumvention. This is why debuggers are not circumvention devices. It is only when you use the potential circumvention device to circumvent that you've done the equivalent of throwing the brick through the window. Jon - The Cryptography Mailing List Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]
Re: quantum crypto rears its head again.
On 13 Dec 2006, at 11:57 AM, Perry E. Metzger wrote: I saw this link on Slashdot (and it was also on Ekr's blog): http://hackreport.net/2006/12/13/quantum-cryptography-its-some-kind- of-magiq/ It appears that the quantum crypto meme just won't go away. Bob Gelfond of MagiQ promises us that for only $100,000, plus monthly leasing of a dry fiber optic home run between your end systems, you can have security that isn't even as good as what nearly free software will give commodity computers over the unsecured public internet. I wonder if this idea is ever going to die. My guess is it will, but not until the people who have thrown away their money investing in this technology go bankrupt. Thanks for writing your note at the bottom. Quantum cryptography is a fascinating thing, but first of all, it's not cryptography. It should be called quantum secrecy, or something akin to that. Next, its proponents have a tendency to effectively say, Oh, math, that's something that could go bad. But physics, *that* will always be good! Jon - The Cryptography Mailing List Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]
Re: [-SPAM-] Re: Can you keep a secret? This encrypted drive can...
On 5 Dec 2006, at 3:22 PM, Brian Gladman wrote: For AES the round function and key scheduling cost per round are basically the same for both AES-128 and AES-256. In consequence I would expect the speed ratio to be close to the ratio of the number of rounds, which is 14 / 10 or 40%. My own figures on AMD64 are 1.35 for encryption and 1.39 for decryption. And on a P4 they are 1.36 and 1.38 respectively. These are hence close to the expected 40% figure. This suggests to me that a figure around 20% would apply in applications in which about half the time is spent in encryption and half in other higher level activities. Can I hence assume that your benchmark is being run at application level rather than algorithm level? If not why is the ratio only 22% on the PPC-32? That was using pgp --speed-test. It's an algorithm-level test, but it's calling the SDK so there's some API-level overhead involved. I got the number from a 3.0GHz x86, and it was 1.36 for encryption and 1.37 for decryption. But I also got the numbers from a 2GHz Core Duo laptop and it was 1.12 for encryption and decryption. On the other hand, the fast machine was encrypting AES-128 at 66389.45 KB/s and the slow one at 22217.39 KB/s, which means that the 3GHz machine is running at just shy of 3x the speed of the 2GHz machine! Obviously, there are other factors, such as cache, memory, and so on that are huge differences. I'd take a slowdown of 12% to 40% if I was getting a 300% base speedup. Jon - The Cryptography Mailing List Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]
Re: Can you keep a secret? This encrypted drive can...
I just ran a speed test on my laptop. Here are some relevant excerpts:

Cipher     Key Size   Block Size   Enc KB/sec   Dec KB/sec
--------   --------   ----------   ----------   ----------
IDEA       128 bits    8 bytes      24032.09     24030.66
3DES       192 bits    8 bytes      10387.67     10399.30
CAST5      128 bits    8 bytes      29331.17     29459.49
Twofish    256 bits   16 bytes      20233.63     19185.82
AES-128    128 bits   16 bytes      44100.23     46266.98
AES-192    192 bits   16 bytes      39731.33     41228.87
AES-256    256 bits   16 bytes      36017.95     37302.43
Blowfish   128 bits    8 bytes      35347.34     38311.22

Comparing AES-128 and AES-256, encrypt speed is 1.2243959x and decrypt is 1.2403208x. So that makes my lick-your-finger-and-stick-it-in-the-wind rule of thumb of 20% slower okay. I'll try to say 20-25% in the future. Of course, though, implementation matters a lot. I'm running a PPC-32 machine. You'll get different answers on an ia32, and different ones on an AMD64. Jon - The Cryptography Mailing List Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]
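The quoted ratios follow directly from the AES rows of the speed test; a quick arithmetic check (throughput numbers copied from the message, in KB/sec):

```python
# Enc and dec throughput for AES-128 and AES-256 from the benchmark above.
aes128 = (44100.23, 46266.98)
aes256 = (36017.95, 37302.43)

enc_ratio = aes128[0] / aes256[0]
dec_ratio = aes128[1] / aes256[1]

# Roughly 1.224 and 1.240: AES-256 is 22-24% slower here, consistent
# with the 14/10 round-count ratio minus fixed overheads.
print(enc_ratio, dec_ratio)
```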
Re: RFID passport article in the UK's Guardian newspaper...
On 20 Nov 2006, at 9:46 PM, Steve Schear wrote: Assume that smartcard-based passports will be used in the same way the current variety are, that is, swiped in or placed near a contact or contactless reader by the immigration officer within a meter or so of the passport presenter. Why not create a relay chip that provides all of the expected interfaces to the reader but also uses a wireless link to a pocket ASIC carried by the passport presenter or someone else nearby with the necessary computational power? I suspect such a relay chip would be much cheaper to design and manufacture than the real smartcard chip. Could be really useful for other apps as well, I suspect. I think that all this is just more reason that they should do what I suggested ages ago -- 2D barcodes. Two facing pages in the passport could be easily scanned, and would have plenty of information. It's not sexy, but it would work. It also wouldn't have the inherent misfeature of being able to be read from a distance through someone's pocket. That would mean it could even be plaintext. Jon
Re: Can you keep a secret? This encrypted drive can...
Just wondering about this little piece. How did we get to 256-bit AES as a requirement? Just what threat out there justifies it? There's no conceivable brute-force attack against 128-bit AES as far out as we can see, so we're presumably being paranoid about an analytic attack. But is there even the hint of an analytic attack against AES that would (a) provide a practical way into AES-128, and (b) not provide a practical way into AES-256? What little I've seen in the way of proposed attacks on AES all go after the algebraic structure (with no real success), and that structure is the same in both AES-128 and AES-256. There is no requirement for it. However, as others have noticed, to the casual observer, 256 is twice as good as 128. You don't want to end up with a product review saying, Product X is solid with 128-bit encryption, but for the ultra-paranoid, product Y is using 256! Moreover, AES-256 is 20-ish percent slower than AES-128, and that difference can be completely irrelevant in the context of the entire system. That means that there is coolness pressure pushing toward 256, and relatively little performance backpressure. The result is that you use AES-256 except where the performance is so tetchy that you really need to back off to 128. I've been spouting off about how 128 is enough, but not fighting the trend even an iota. It's not worth the bother. Besides, I find it amusingly ironic that AES has pushed us, in less than a decade, from debates about how 56 oughta be good enough to why 256 is just inevitable. Jon
Re: A note on vendor reaction speed to the e=3 problem
This amounts to *not* using ASN.1 - treating the ASN.1 data as mere arbitrary padding bits, devoid of information content. That is correct; it has the advantage of being merely a byte string that denotes a given hash. Jon
Re: signing all outbound email
On 5 Sep 2006, at 2:40 AM, Massimiliano Pala wrote: This approach is MTA-to-MTA... if you want something more MTA-to-MUA Not precisely. It is *primarily* MTA-to-MTA, for a number of very good reasons, like privacy. However, a number of people will be implementing DKIM verification in the MUA, including Yahoo!. (I've seen UI mockups, but they may have it shipping for all I know.) The protocol itself is completely agnostic on that. The signature travels with the message and the signing key is in the network. As long as you have both, you can verify the signatures. Jon
Re: signing all outbound email
On 4 Sep 2006, at 4:13 AM, Travis H. wrote: Has anyone created hooks in MTAs so that they automagically sign outbound email, so that you can stop forgery spam via a SRV DNS record? Take a look at DKIM (Domain Keys Identified Mail) which does precisely that. There is an IETF working group for it, and it is presently being deployed by people like Yahoo, Google, and others. There's support for it in SpamAssassin as well as a Sendmail milter. Go look at http://www.dkim.org/ for many more details. Jon
Re: A security bug in PGP products?
On 21 Aug 2006, at 3:36 PM, Max A. wrote: Hello! Could anybody familiar with PGP products look at the following page and explain in brief what it is about and what the consequences of the described bug are? http://www.safehack.com/Advisory/pgp/PGPcrack.html The text there looks to me rather obscure with a lot of unrelated stuff. The guy's basically confused. I wrote a long thing at the time to bugtraq with lots of detail. He's got two basic claims. The first is that if he makes a copy of a disk file, changes the passphrase on the copy, and then uses a hex editor to paste the old passphrase reduction back onto the copy, then poof, the old passphrase works again. This is like saying that you can use emacs to edit a file and change 123 to ABC, and then use a hex editor to change 0x41 0x42 0x43 to 0x31 0x32 0x33, and ZOMG! The change magically vanishes! As Ondrej Mikle points out, the disk hasn't been re-encrypted. If you want the disk to be re-encrypted, you press the big Re-encrypt button in the panel. The other thing he did was that he found some code that basically does: if (user-types-right-passphrase) then mount-the-disk else display-error endif And then he patches out the if statement and notices that the disk will mount, but curiously it is lots of random garbage. He leaves as an open problem how to make the disk readable after patching out the if statement. Jon
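To see why patching out the passphrase check mounts only garbage, here's a toy model (hypothetical code, nothing like PGP's actual implementation; an iterated-hash keystream stands in for the disk cipher):

```python
import hashlib

def keystream(passphrase: bytes, n: int) -> bytes:
    """Toy keystream: iterated SHA-256 of the passphrase (NOT a real cipher)."""
    out, block = b"", hashlib.sha256(passphrase).digest()
    while len(out) < n:
        out += block
        block = hashlib.sha256(block).digest()
    return out[:n]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

plaintext = b"secret disk contents"
ct = xor(plaintext, keystream(b"right horse", len(plaintext)))

def mount(passphrase: bytes) -> bytes:
    # Normal mount: check the passphrase, then decrypt.
    if passphrase != b"right horse":
        raise ValueError("bad passphrase")
    return xor(ct, keystream(passphrase, len(ct)))

def patched_mount(passphrase: bytes) -> bytes:
    # "Patched" mount: the if statement is gone, but the data is still
    # decrypted under whatever key the wrong passphrase derives.
    return xor(ct, keystream(passphrase, len(ct)))

print(mount(b"right horse"))          # the real contents
print(patched_mount(b"wrong guess"))  # mounts, but random-looking bytes
```

The branch only gates the error message; the actual secrecy lives in the key derivation, which is exactly the point of the post.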
Re: Secure phones from VectroTel?
On 23 May 2006, at 8:19 AM, Perry E. Metzger wrote: Following the links from a /. story about a secure(?) mobile phone VectroTel in Switzerland is selling, I came across the fact that this firm sells a full line of encrypted phones. http://www.vectrotel.ch/ The devices apparently use D-H key exchange to produce a 128 bit AES key which is then used as a stream cipher (presumably in OFB or a similar mode). Authentication appears to be via a 4 digit pin, certainly not the best of mechanisms. Does anyone out there know much about these products and their security properties (or lack thereof)? My guess from looking at the web site is that it's AES-128 counter mode (but it could be OFB or something like it) derived directly from a 1K ephemeral DH. My reading from some of the pages is that the four-digit thing is not a PIN but a Short Authentication String, a la ATT3600, Blossom COMSEC phone, PGPfone, and Zfone. Interestingly, they are doing the encrypted voice over the data channel. The FAQ notes that they have perfect forward secrecy and no stored keys. Sadly, they don't release source code and say there will be no updates. Nonetheless, it passes the sniff test. The limitations on its use give some further clues about implementation: half-second delay, slightly metallic voice, setup time of 10-30s. I have my guesses from that on what codec, CPU, and other things they're using. Jon
Re: NPR : E-Mail Encryption Rare in Everyday Use
I have to chime in on a number of points. I'll try to keep commercial plugs to a minimum. * An awful lot of this discussion is some combination of outdated and true-but-irrelevant. For example, it is true that usability of all computers is not what it could be. But a lot of what has cruised by here is similar to someone saying, Yes, usability is atrocious -- here, look at this screenshot of Windows 3.1. Someone else pipes up, You think that's bad, let me show you this example from the Xerox Alto. What*ever* were they thinking? And then someone else says, Yeah, and if you think that's bad, look at what 'ls' did in Unix V6! Then when someone else says, Y'know, I'm using the latest version of Firefox, and it's actually pretty good, the next message says, But what about the Y2K issues, and what happens in 2038? I swear, guys, this thread is the crypto version of the Monty Python Luxury sketch. * Whitten and Tygar is a great paper, but it was written ages ago, on software that was released in 1997. Things aren't perfect now, but let's talk about what's out there now. Even at the time, one of Whitten's main points was how hard it is to apply usability to security, because of how odd security is. As a very quick example, in most forms of user design, you let exploration take a prominent place. But that doesn't work in security, because you can't click undo when you do something you didn't intend. * There are new generations of crypto software out there. I produce the PGP products, and PGP Desktop and PGP Universal are automatic systems that look up certs, use them, automatically encrypt, and even do both OpenPGP and S/MIME. They're not perfect, and they lead to other amusing issues. For example, an hour ago, I was coordinating with someone that I'm meeting at a conference. I got a reply saying, I'm at the airport and can't decrypt your message from my phone. I hadn't realized that I *had* encrypted my message, because my system and my colleague's system had been doing things for us. 
I habitually send most of my email securely, but I don't think about it. My robots take care of it for me. I tune policies; I don't encrypt messages. If you don't want to use my products, as Ben Laurie pointed out, there's a very nice plugin for Thunderbird called Enigmail that makes doing crypto painless. * There are also new generations of keyservers out there that address the problems of the old servers by trimming defunct keys and managing other issues. I have the PGP Global Directory out there. Think of it as a mash-up of a keyserver with Robot CA concepts and user-management goodness adapted from modern mailing list servers like Mailman. * A number of us are also re-thinking other concepts, such as using short-lived certificates based on the freshness model to constrain lifecycle management issues. * There are many challenges remaining. Heck, the fact that people here apparently have not updated their knowledge any time this century is part of the problem. But let me tell you that email encryption is growing, and growing strongly. However, most of the successes are not happening where you see them. They're happening in business, where communities of partners decide they need to do secure email, and then they do. This is another place where things have changed radically. A decade ago, we thought that security would be a grass-roots phenomenon where end-users and consumers would push security into those stodgy businesses. What's happening now is the exact opposite -- savvy businesses are putting together sophisticated security systems, and that's slowly starting to get end-users to wake up. I'd be happy to discuss at length where things are getting better, where they aren't, and where some issues have been shuffled around. But we do need to talk about what's going on now, not ten years ago. Jon
Re: gonzo cryptography; how would you improve existing cryptosystems?
On 4 Nov 2005, at 5:23 PM, Travis H. wrote: For example, pgp doesn't hide the key IDs of the addressees. But OpenPGP does. Here's an extract from RFC 2440: 5.1. Public-Key Encrypted Session Key Packets (Tag 1) [...] An implementation MAY accept or use a Key ID of zero as a wild card or speculative Key ID. In this case, the receiving implementation would try all available private keys, checking for a valid decrypted session key. This format helps reduce traffic analysis of messages. Now, there has been much discussion about how useful this is, and there are other related issues, like how you do the UI for such a thing. But the *protocol* handles it. You might also want to look at the PFS extensions for OpenPGP: http://www.apache-ssl.org/openpgp-pfs.txt and even OTR, which is very cool in its own right (and is designed to take care of the sort of edge conditions all of these other things have): http://www.cypherpunks.ca/otr/ Jon
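The wildcard trial-decryption idea from that RFC passage can be sketched like so (toy code of mine, not real OpenPGP: a hash-derived pad stands in for the public-key encryption, and a two-byte marker stands in for the session-key checksum that makes "a valid decrypted session key" checkable):

```python
import hashlib

WILDCARD = b"\x00" * 8  # the "Key ID of zero" from the RFC text above

def wrap(session_key: bytes, private_key: bytes) -> dict:
    # Toy PKESK packet: XOR under a pad derived from the matching private
    # key, with a 2-byte marker standing in for OpenPGP's checksum.
    pad = hashlib.sha256(private_key).digest()
    body = bytes(a ^ b for a, b in zip(session_key + b"\xc9\x42", pad))
    return {"key_id": WILDCARD, "body": body}

def unwrap(packet: dict, private_key: bytes):
    # "Decrypt" with one of our keys; only the right key yields a
    # candidate whose checksum marker survives. Returns None on failure.
    pad = hashlib.sha256(private_key).digest()
    candidate = bytes(a ^ b for a, b in zip(packet["body"], pad))
    return candidate[:-2] if candidate.endswith(b"\xc9\x42") else None

my_private_keys = [b"key-one", b"key-two", b"key-three"]
packet = wrap(b"0123456789abcdef", private_key=b"key-two")

# Speculative key ID: the packet names no recipient, so try all our keys.
session_key = None
if packet["key_id"] == WILDCARD:
    for k in my_private_keys:
        session_key = unwrap(packet, k)
        if session_key is not None:
            break
print(session_key)
```

An eavesdropper sees only the all-zero key ID, which is the traffic-analysis point; the cost is the trial loop over every key the recipient holds.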
Re: Is 3DES Broken?
On 4 Feb 2005, at 10:51 AM, Greg Rose wrote: I'm surprised that no-one has said that ECB mode is unsafe at any speed. Because if they did, some smartass would chime in and say that ECB mode is perfectly fine at some speeds. For example, you could safely encrypt one bit in ECB mode, particularly if you permitted, nay encouraged, the other 63 or 127 bits to be arbitrary, nigh unto random. Surely you don't need to have an IV and padding and all if the small block were random-padded. We'd then get into a long debate over how many bits can be handled in such a system. 32? 127? 128? Then some other smartass would suggest that it's more efficient in such a case to just XOR the key onto the data and effectively use a one-time pad. And then we'd digress into a rambling discussion on one-time pads and how practical they are in real applications. Finally, some uber-smartass would point out that you can even get rid of the OTP by taking those small bits of data, padding appropriately, and using a public-key op. By then, we'd all have lost sight of the fact that the main topic here is whether 3DES is broken, and that the answer is a simple no. (And it's a good thing that this is Cryptography, not Cypherpunks, as then there'd be another digression about Nader and how good the Corvair was or wasn't, along with URLs of nicely restored examples on eBay.) This is why no one has had the temerity to suggest that ECB mode is unsafe at any speed. Helpfully, Jon
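For anyone new to the joke, the reason full-block ECB really is "unsafe at any speed": equal plaintext blocks produce equal ciphertext blocks, so patterns survive encryption. A toy demonstration (mine; a keyed hash stands in for a real block cipher, since the standard library has none, and being one-way it illustrates only the determinism, not a usable cipher):

```python
import hashlib

KEY = b"sixteen byte key"

def ecb_encrypt_block(block: bytes) -> bytes:
    # Toy stand-in for a block cipher in ECB mode: each block is
    # transformed independently, with no IV or chaining.
    return hashlib.sha256(KEY + block).digest()[:16]

plaintext = b"ATTACK AT DAWN!!" * 4   # four identical 16-byte blocks
blocks = [plaintext[i:i + 16] for i in range(0, len(plaintext), 16)]
ciphertext = [ecb_encrypt_block(b) for b in blocks]

# All four ciphertext blocks come out identical: ECB leaks the repetition.
print(len(set(ciphertext)))  # 1
```

The one-bit-plus-random-padding dodge in the post works precisely because it destroys these repeats, which is also what an IV or chaining mode does properly.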
Re: Crypto blogs?
On 18 Oct 2004, at 12:49 PM, Hal Finney wrote: Does anyone have pointers to crypto related weblogs? Bruce Schneier recently announced that Crypto-Gram would be coming out incrementally in blog form at http://www.schneier.com/blog/. I follow Ian Grigg's Financial Cryptography blog, http://www.financialcryptography.com/. Recently I learned about Adam Shostack's http://www.emergentchaos.com/, although it seems to be more security than crypto. Any other good ones? Matt Hamrick's Cryptonomicon.net is good. There are also my PGP CTO corner articles at http://www.pgp.com/resources/ctocorner/. Jon
Re: New Attack on Secure Browsing
On 15 Jul 2004, at 9:36 PM, Aram Perez wrote: I'm not sure if PGP deliberately set out to confuse naïve users, since their logo has been the padlock for a while. Many web sites have their logo displayed on the address bar (and tab) when you go to their site; see http://www.yahoo.com or http://www.google.com. Maybe Jon can answer the question. (Sent from this account, since I am subscribed from here.) This is a favicon -- a logo icon for the site. Lots of sites use them. PGP has had this on ours for a couple of years now. I vaguely remember there being one in The Dark Days, but I could be misremembering. This is the first bit of confusion I've heard about it. PGP's logo icon has been a padlock at least since the O'Reilly book used it in January of '95. This is before there even was an SSL. That particular icon is the very same one that was used as the tray icon in some version of PGP or other (we think PGP 7). We're giving this all due consideration. Would it help if we changed the metal, perhaps from the current four-plane brass to eight-plane steel, or even to alpha-channel Jolly Rancher iridescent translucent anodized titanium? Jon
Re: Gresham's Law?
On Saturday, October 25, 2003, at 08:29 AM, Russell Nelson wrote: I wonder if the DMCA (why do those initials bring to mind a song by The Village People?) isn't invoking Gresham's Law? Gresham's Law says bad money drives out good, but it only applies when there is a legal tender law. Such a law requires that all money be treated equally -- as legal tender for all debts. Gresham's Law predicts that people will hoard good money and spend bad money, since it's all the same to them. This is exactly what I said in my talks and testimony about the DMCA. I referred to Gresham's Law as it applies to security. I also have called the DMCA The Snake-Oil Protection Act. This is indeed the only case I know of where government has given protection and preference to inferior systems over superior ones. Jon