example: secure computing kernel needed
Previous discussions of secure computing technology have been in some cases sidetracked and obscured by extraneous notions such as:
-- Microsoft is involved, therefore it must be evil.
-- The purpose of secure computing is DRM, which is intrinsically evil ... computers must be able to copy anything anytime.

Now, in contrast, here is an application that begs for a secure computing kernel, but has nothing to do with Microsoft and nothing to do with copyrights.

Scenario: You are teaching chemistry in a non-anglophone country. You are giving an exam to see how well the students know the periodic table.
-- You want to allow students to use their TI-83 calculators for *calculating* things.
-- You want to allow the language-localization package.
-- You want to disallow the app that stores the entire periodic table, and all other apps not explicitly approved.

The hardware manufacturer (TI) offers a little program that purports to address this problem
  http://education.ti.com/us/product/apps/83p/testguard.html
but it appears to be entirely non-cryptologic and therefore easily spoofed.

I leave it as an exercise for the reader to design a calculator with a secure kernel that is capable of certifying something to the effect that "no apps and no data tables (except for ones with the following hashes) have been accessible during the last N hours."

Note that I am *not* proposing reducing the functionality of the calculator in any way. Rather I am proposing a purely additional capability, namely the just-mentioned certification capability.

I hope this example will advance the discussion of secure computing. Like almost any powerful technology, we need to discuss
-- the technology *and*
-- the uses to which it will be put
... but we should not confuse the two.

- The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
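The certification the exercise asks for could be built around a hash allowlist. Here is a minimal sketch; the app names, hashes, and `certify` function are all hypothetical, and a real secure kernel would compute the hashes inside tamper-resistant hardware and sign the resulting attestation:

```python
import hashlib

# Hypothetical allowlist: SHA-256 hashes of the approved app images.
# In a real design these would be published by the exam authority.
APPROVED_HASHES = {
    hashlib.sha256(b"ti83-calculator-core").hexdigest(),
    hashlib.sha256(b"language-localization-pack").hexdigest(),
}

def certify(accessible_images):
    """Certify that nothing outside the allowlist has been accessible."""
    return all(
        hashlib.sha256(image).hexdigest() in APPROVED_HASHES
        for image in accessible_images
    )

# The exam case: calculator plus language pack passes ...
print(certify([b"ti83-calculator-core", b"language-localization-pack"]))
# ... but a hidden periodic-table app fails the certification.
print(certify([b"ti83-calculator-core", b"periodic-table-app"]))
```

The point of the sketch is only that the check is cryptologic (hash comparison) rather than a spoofable flag, which is exactly what the TestGuard program appears to lack.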
Re: Gresham's Law?
On 11/19/2003 07:51 PM, Jon Callas wrote:
> This is indeed the only case I know of where government has given
> protection and preference to inferior systems over superior ones.

It's not hard to discover other cases. At the philosophical level, one could argue that protecting the weak is one of the most fundamental raisons d'etre for a government. If you don't like the effects of Gresham's law, replacing it with the law of the jungle isn't really an improvement. Practicality lies in the vast gray area in the middle.

As a specific example, consider the legal status of the lock on my door. Any burglar with even rudimentary skills could pick the lock. One with even less skill could break the fancy glass beside the door. More-secure locks and more-secure doors are readily available. Yet the law takes notice of the lock. If I don't have a door and you waltz in, it might be trespass or it might be no offence at all. But if you pick or smash your way past a locked door, without permission or some very special reason, it's likely to be felony breaking and entering.

Maybe you think that the B&E laws should only apply to state-of-the-art high-security vaults. I don't. I think the existing lock performs a useful symbolic role: it puts you on notice that you don't belong there. You can't get past it by accident. The law takes over from there.

> ... in my talks and testimony about the DMCA.
> I referred to Gresham's Law as it applies to security. I also have
> called the DMCA "The Snake-Oil Protection Act."

A friend of mine once told me: Never support a strong argument with a weak one. There exist strong arguments why the DMCA is a bad law. Boldly asserting that the government has never heretofore built laws around imperfect technology is not going to impress any lawmakers.

Also, if you're going to argue against something, it pays to know where the other side is coming from.
In the areas where crypto works well, it works so extraordinarily well that bad systems can, over time, be drowned in their own snake-oil and forgotten. If protecting substandard crypto were the only issue, I doubt anybody would have gone to the trouble of passing a law. The point of the law is elsewhere: the proponents are worried about what happens in the thousand and one cases where strong crypto doesn't solve the problem.

To repeat: If you want to make an argument against the other side, it's a bad strategy to start by misjudging what they're arguing for.
Re: SSL, client certs, and MITM (was WYTM?)
On 10/22/2003 04:33 PM, Ian Grigg wrote:
> The frequency of MITM attacks is very low, in the sense that there
> are few or no reported occurrences.

We have a disagreement about the facts on this point. See below for details.

> This makes it a challenge to
> respond to in any measured way.

We have a disagreement about the philosophy of how to "measure" things. One should not design a bridge according to a simple "measurement" of the amount of cross-river traffic in the absence of a bridge. One should not approve a launch based on the "observed fact" that previous instances of O-ring failures were non-fatal. Designers in general, and cryptographers in particular, ought to be proactive.

But this philosophy discussion is a digression, because we have immediate practical issues to deal with.

> Nobody doubts that it can occur, and that it *can* occur in practice.
> It is whether it *does* occur that is where the problem lies.

According to the definitions I find useful, MITM is basically a double impersonation. For example, Mallory impersonates PayPal so as to get me to divulge my credit-card details, and then impersonates me so as to induce my bank to give him my money.

This threat is entirely within my threat model. There is nothing hypothetical about this threat. I get 211,000 hits from
  http://www.google.com/search?q=ID-theft

SSL is distinctly less than 100% effective at defending against this threat. It is one finger in a dike with multiple leaks. Client certs arguably provide one additional finger ... but still multiple leaks remain.

==

The expert reader may have noticed that there are other elements to the threat scenario I outlined. For instance, I interact with Mallory for one seemingly trivial transaction, and then he turns around and engages in numerous and/or large-scale transactions. But this just means we have more than one problem.
A good system would be robust against all forms of impersonation (including MITM) *and* would be robust against replays *and* would ensure that trivial things and large-scale things could not easily be confused. Et cetera.
Re: WYTM?
On 10/16/2003 07:19 PM, David Honig wrote:
> it would make sense for the original vendor website (eg Palm)
> to have signed the "MITM" site's cert (palmorder.modusmedia.com),
> not for Verisign to do so. Even better, for Mastercard to have signed
> both Palm and palmorder.modusmedia.com as well. And Mastercard to
> have printed its key's signature in my monthly paper bill.

Bravo. Those are golden words. Let me add my few coppers:

1) This makes contact with a previous thread wherein the point was made that people often unwisely talk about identities when they should be talking about credentials aka capabilities. I really don't care about the identity of the order-taking agent (e.g. palmorder.modusmedia.com). What I want to do is establish the *credentials* of this *session*. I want a session with the certified capability to bind palm.com to a contract, and the certified capability to handle my credit-card details properly.

2) We see that threat models (as mentioned in the Subject: line of this thread), while an absolutely vital part of the story, are not the whole story. One always needs a push-pull approach, documenting the good things that are supposed to happen *and* the bad things that are supposed to not happen (i.e. threats).

3) To the extent that SSL focuses on IDs rather than capabilities, IMHO the underlying model has room for improvement.

4a) This raises some user-interface issues. The typical user is not a world-class cryptographer and may not have a clear idea just what ensemble of credentials a given session ought to have. This is not a criticism of credentials; the user doesn't know what ID the session ought to have under the current system, as illustrated by the Palm example. The point is that if we want something better than what we have now, we have a lot of work to do.

4b) As a half-baked thought: One informal intuitive notion that users have is that if a session displays the MasterCard *logo* it must be authorized by MasterCard.
This notion is enforceable by law in the long run. Can we make it enforceable cryptographically in real time? Perhaps the CAs should pay attention not so much to signing domain names (with some supposed responsibility to refrain from signing abusively misspelled names, e.g. pa1m.com) but rather more to signing logos (with some responsibility to not sign bogus ones). Then the browser (or other user interface) should verify -- automatically -- that a session that wishes to display certain logos can prove that it is authorized to do so. If the logos check out, they should be displayed in some distinctive way so that a cheap facsimile of a logo won't be mistaken for a cryptologically verified logo.

Even if you don't like my half-baked proposal (4b), I hope we can all agree that the current ID-based system has room for improvement.

=

Tangentially-related point about credentials: In a previous thread the point was made that anonymous or pseudonymous credentials can only say positive things. That is, I cannot discredit you by giving you a discredential. You'll just throw it away. If I somehow discredit your pseudonym, you'll just choose another and start over.

This problem can be alleviated to some extent if you can post a fiduciary bond. Then if you do something bad, I can demand compensation from the agency that issued your bond. If this happens a lot, they may revoke your bond. That is, you can be discredited by losing a credential. This means I can do business with you without knowing your name or how to find you. I just need to trust the agency that issued your bond. The agency presumably needs to know a lot about you, but I don't.
Re: cryptographic ergodic sequence generators?
Perry E. Metzger wrote:
>> I've noted to others on this before that for an application like
>> the IP fragmentation id, it might be even better if no repeats
>> occurred in any block of 2^31 (n being 32) but the sequence did not
>> repeat itself (or at least could be harmlessly reseeded at very very
>> long intervals).

I assume the point of the reseeding is to make the ID-values more unpredictable.

On 09/07/2003 11:18 AM, David Wagner wrote:
> Let E_k(.) be a secure block cipher on 31 bits with key k.
> Pick an unending sequence of keys k0, k1, k2, ... for E.
>
> Then your desired sequence can be constructed by
>   E_k0(0), E_k0(1), E_k0(2), ..., E_k0(2^31 - 1),
>   2^31 + E_k1(0), 2^31 + E_k1(1), ..., 2^31 + E_k1(2^31 - 1),
>   E_k2(0), E_k2(1), E_k2(2), ..., E_k2(2^31 - 1),
>   2^31 + E_k3(0), 2^31 + E_k3(1), ..., 2^31 + E_k3(2^31 - 1),

Again, if we assume the point is to make the values unpredictable (not just ergodic), then there is room for improvement. To see what I mean, divide the values into generations G=0,1,2,3..., where each row in the tableau above is one generation. The problem is that at the end of each generation, the values become highly predictable, à la Blackjack.

David's proposal can be improved by the method used by Blackjack dealers: shuffle early. In each generation, let the argument of E_kG(.) max out at some fraction (f) of 2^(n-1). A limit of f=1/2 is the obvious choice, although other f values, e.g. f=2/3, work nicely too. The domain and range of E_kG(.) are still unrestricted (n-1)-bit numbers. This gives us the following properties:
-- Guaranteed no repeats within the last f*2^(n-1) IDs.
-- Probably no repeats in an even longer time.
-- Even if the opponent is a hard-working Blackjack player, he has only one chance in (1-f)*2^(n-1) of guessing the next value. To put this number in context, note that the opposition has one chance in 2^(n-1) of guessing the next value without any work at all, just by random guessing.
Setting f too near zero degrades the no-repeat guarantee. Setting f too near unity leads to the Blackjack problem. Setting f somewhere in the middle should be just fine.

=

Discussion of conceivable refinements: A proposal that keeps coming up is to take the values generated above and run them through an additional encryption stage, with a key that is randomly chosen at start-up time (then held fixed for all generations). The domain and range of this post-processing stage are n-bit numbers. This makes the output seem more elegant, in that we have unpredictability spread over the whole n-bit word, rather than having n-1 hard-to-predict bits plus one almost-constant bit.

Define the phase to be P := (G mod 2). The opponent will have to collect roughly 2^n data points before being able to figure out which values belong to which phase, so initially his guess rate will be closer to one in 2^n, which is a twofold improvement ... temporarily. The improvement is not permanent if we allow the opponent to have on the order of 2^n memory. He will in the long run learn which values belong to which phase. I see no way to prevent this. So as far as I can tell, the proposed post-processing is more in the nature of a temporary annoyance to the opposition, and should not be considered industrial-strength cryptography.

Perhaps more to the point, if we are going to allow the opposition to have 2^n memory, it would be only fair to allow the good guys to have 2^n memory. In that case, all the schemes discussed above pale in comparison to something I suggested previously, namely generating an ID absolutely randomly, but using a look-up table to check whether it has been used recently, in which case we veto it and generate another. If you can afford the look-up table, these randomly generated IDs have the maximum possible unpredictability.
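To make the shuffle-early idea concrete, here is a toy sketch. The 31-bit cipher is replaced by a small Feistel permutation on 8-bit words (so n = 9 and n-1 = 8 in the notation above); `feistel` and `id_stream` are illustrative names and the cipher is not a vetted design:

```python
import hashlib
import itertools
import os

BITS = 8                  # toy stand-in for the 31-bit block cipher E_k
HALF = BITS // 2
MASK = (1 << HALF) - 1

def feistel(x, key, rounds=4):
    """A small Feistel permutation on BITS-bit values (illustrative only)."""
    left, right = x >> HALF, x & MASK
    for r in range(rounds):
        f = hashlib.sha256(key + bytes([r, right])).digest()[0] & MASK
        left, right = right, left ^ f
    return (left << HALF) | right

def id_stream(f=0.5):
    """Shuffle early: start a new generation after f * 2**BITS outputs."""
    limit = int(f * (1 << BITS))
    for gen in itertools.count():
        key = os.urandom(16)              # fresh key k_G per generation
        offset = (gen % 2) << BITS        # alternate the top bit, as in the tableau
        for i in range(limit):
            yield offset | feistel(i, key)

stream = id_stream()
ids = [next(stream) for _ in range(256)]  # two full generations at f = 1/2
# Feistel networks are permutations, so no repeats within a generation:
assert len(set(ids[:128])) == 128
```

Because each generation is cut off halfway and re-keyed, an observer who has seen every output so far still faces at least (1-f)*2^(n-1) equally likely candidates for the next value.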
anonymity +- credentials
On 10/03/2003 01:26 PM, R. A. Hettinga wrote:
> It seems to me that perfect pseudonymity *is* anonymity.

They're not quite the same thing; see below.

> Frankly, without the ability to monitor reputation, you don't have
> ways of controlling things like transactions, for instance. It's just
> that people are still mystified by the concept of biometric
> is-a-person identity, which strong cryptography can completely
> divorce from reputation.

We agree that identification is *not* the issue, and that lots of people are confused about this. I'm not sure "reputation" is exactly the right concept either; the notion of "credentials" is sometimes better, and the operating-systems folks speak of "capabilities".

There are three main possibilities:
-- named (unique static handle)
-- pseudonymous (dynamic handles)
-- anonymous (no handle at all)

Sometimes pseudonyms are more convenient than having no handle at all. It saves you the trouble of having to re-validate your credentials at every micro-step of the process (whatever the process may be). Oftentimes pseudonyms are vastly preferable to a static name, because you can cobble up a new one whenever you like, subject to the cost of (re)establishing your credentials from scratch.

The idea of linking (bidirectionally) all credentials with the static is-a-person identity is a truly terrible idea. It dramatically *reduces* security. Suppose Jane Doe happens to have the following credentials:
-- Old enough to buy cigarettes.
-- Has credit-card limit > $300.00
-- Has credit-card limit > $3000.00
-- Has car-driving privileges.
-- Has commercial pilot privileges.
-- Holds US citizenship.
-- Holds 'secret' clearance.

When Jane walks into a seedy bar, someone can reasonably ask to verify her "old-enough" credential.
She might not want this query to reveal her exact age, and she might *really* not want it to reveal her home address (as many forms of "ID" do), and she might *really* *really* not want it to reveal all her other credentials and capabilities.

*) There is an exploding epidemic of "ID" theft. That is a sure sign that people keep confusing capability --> identity and identity --> capabilities.

*) There are those who want us to have a national ID-checking infrastructure as soon as possible. They think this will increase security. I think it is a giant step in the wrong direction.

*) Reputation (based on a string of past interactions) is one way, but not the only way, to create a credential that has some level of trust.

=

We need a practical system for anonymous/pseudonymous credentials. Can somebody tell us: what's the state of the art? What's currently deployed? What's on the drawing boards?
Re: Monoculture
On 10/01/2003 11:22 AM, Don Davis wrote:
> there's another rationale my clients often give for
> wanting a new security system, instead of the off-
> the-shelf standbys: IPSec, SSL, Kerberos, and the
> XML security specs are seen as too heavyweight for
> some applications. the developer doesn't want to
> shoehorn these systems' bulk and extra flexibility
> into their applications, because most applications
> don't need most of the flexibility offered by these
> systems.

Is that a rationale, or an irrationale? According to 'ps', an all-up ssh system is less than 3 megabytes (sshd, ssh-agent, and the ssh client). At current memory prices, your clients would save less than $1.50 per system even if their custom software could reduce this "bulk" to zero. With the cost of writing custom software being what it is, they would need to sell quite a large number of systems before de-bulking began to pay off. And that's before accounting for the cost of security risks.

> some shops experiment with the idea of using only
> part of OpenSSL, but stripping unused stuff out of
> each new release of OpenSSL is a maintenance hassle.

1) Well, they could just ignore the new release and stick with the old version. Or, if they think the new features are desirable, then they ought to compare the cost of "re-stripping" against the cost of implementing the new desirable features in the custom code. I'm just trying to inject some balance into the balance sheet.

2) If you do a good job "stripping" the code, you could ask the maintainers to put your #ifdefs into the mainline version. Then you have no maintenance hassle at all.

> they want their crypto clothing
> to fit well, but what's available off-the-rack is
> a choice between frumpy

Aha. They want to make a fashion statement. That at least is semi-understandable. People do expensive and risky things all the time in the name of fashion.
Re: quantum hype
On 09/19/2003 12:07 PM, Matt Crawford wrote:
> I'm always stuck on that little step where Alice tells Bob what
> basis she used for each photon sent. Tells him how?

That's a fair question. Here's an outline of the answer.

We choose an eps << 1. We ask how many people accurately received a fraction (1-eps) of the bits.
-- Perhaps nobody received that many. This will be detected. No key exchange will take place. Start over. Do not pass Go, do not collect $200.00.
-- Perhaps one person did. In this case, without loss of generality, we call this person Bob.
-- The laws of quantum mechanics assure us that not more than one person will receive that many bits. Quanta cannot be copied.

Alice can then publish in the clear (e.g. on netnews) what basis she used for transmitting. This information is of little use to anyone except Bob (exponentially little, as a function of eps and other parameters). Anyone who tampers with this message can cause a DoS but not a compromise of the data.

Alice and Bob proceed with the integrity checks leading to the key exchange as previously described. After the key exchange has taken place, Alice and Bob can use the key to set up a tunnel to keep their discussions private. Probably one of the first things they will do is exchange authentication messages through the newly created tunnel. Thereby Alice can decide whether this Bob is the Bob she wanted to talk to, as opposed to an impersonator. Similarly Bob ought to check Alice's creds.

> They need integrity protection and endpoint authentication for N bits of basis.

No, the authentication etc. can quite nicely come after the quantum key exchange, as I previously mentioned.

> Is the quantum trick ... really as exciting as it's made out to be?

We need a more specific question. Does quantum key exchange solve all of the world's problems? Surely not. Does quantum key exchange solve *any* of the world's problems?
More specifically, is there any plausible scenario where QKE is more cost-effective than conventional modern crypto, within (say) the next ten years? I tend to doubt it, but it's hard to be sure. What is the chance of a tremendous cryptanalytic breakthrough that will defeat all or most of the currently-used ciphers? I'd say the chance is less than 1%. But is it less than one in a million?

Or perhaps more relevantly, what is the chance that an enemy black-bag artist or a traitor or a bungler will compromise all my keys and/or all my plaintext? The latter is not to be sneezed at, and puts an upper bound on what I'm willing to pay for fancy crypto.

To calibrate the sincerity of my estimate: I walked away from a potential job managing some major programs in this area.
Re: quantum hype
I wrote:
>> *) In each block, Mallory has a 50/50 chance of being able to
>> copy a bit without being detected.

On 09/18/2003 12:02 PM, martin f krafft wrote:
> This is what I don't buy. If Mallory sees the data, it must be
> detected, because otherwise the approach is flawed. But in any case
> does Mallory have the means to completely DoS any attempt of
> communication between the parties, simply by reading along, unless
> there is a dedicated channel between Alice and Bob. In which case,
> why is there a need for quantum cryptography in the first place?

Yes, Mallory can DoS the setup by reading (and thereby trashing) every bit. But Mallory can DoS the setup by chopping out a piece of the cable. The two are equally effective and equally detectable. Chopping is cheaper and easier. Other key-exchange methods such as DH are comparably incapable of solving the DoS problem. So why bring up the issue?

>> There is only one chance in 2^-C that Mallory knows this bit.

> One chance in 2^C, otherwise it would be deadly, no? But in any
> case, Reasonable keysized DH exchanges give me the same security
> with a lot more flexibility, and a lot less chance for DoS. I still
> don't buy it.

The claim that DH is "secure" rests on certain assumptions about which computational operations are easy and which are not. These assumptions are open to question to some degree. Numbers that some people considered hopelessly difficult to factor a few years ago have been factored. One can imagine a world where factoring is computationally easy; it wouldn't be the end of the world. If you can _prove_ DH is secure, please let us know immediately.

The security of the quantum algorithms rests on entirely different foundations. Nobody has been able to even imagine a world where quanta are copyable, without contradicting well-observed physical facts. People have tried. Seriously. If you have a consistent theory of physics that repeals the uncertainty principle, please let us know immediately.
> How can you check for tampering without reading the data off the
> channel? Checksums?

I spelled this out in my previous email. It's a standard quality-assurance check using sampling.

> why do I need QC then if I have
> a dedicated channel anyhow?

Suppose I *wish* to set up a dedicated channel. Dedicated means nobody but me is using it. Wishing doesn't suffice. I went through the motions of setting it up, and maybe I was the only person hooked onto it yesterday, but how do I know it hasn't been tapped sometime since then? Quantum key-exchange provides powerful assurance that the wished-for property is actually achieved.
Re: quantum hype
On 09/13/2003 05:43 PM, David Wagner wrote:
> I believe the following is an accurate characterization:
>   Quantum provides confidentiality (protection against eavesdropping),
>   but only if you've already established authenticity (protection
>   against man-in-the-middle attacks) some other way.

I wouldn't have put it quite that way. Authenticity doesn't need to come before confidentiality.

Let's consider various threats:
1) passive eavesdropping.
2) active eavesdropping, including tampering.
3) simple impersonation at the far end.
4) MITM, which can be considered a form of active eavesdropping by means of a double impersonation.

Quantum key exchange provides end-to-end protection against passive eavesdropping. It plugs into the block diagram in the same place as Diffie-Hellman key exchange would plug in. It's the same, only a little stronger (no assumptions about algorithmic intractability). That means you can establish a confidential but anonymous tunnel, and then send authentication messages through the tunnel.

As far as I know, there are no quantum algorithms that prevent impersonation. Perhaps I'll learn of some tomorrow, but I would be truly surprised. Quantum mechanics isn't going to tell you that John Doe #137 is a good guy while John Doe #138 is a bad guy.

This is quite significant, because key exchange is only one part of any practical system. Quantum mountebanks claim to have solved "the" key distribution problem, but this is untrue. They have dealt with the _exchange_ of session keys, but they have not dealt with the _distribution_ of authentication keys. Distributing and securing any kind of keys under (say) battlefield conditions is a nightmare. Reducing the amount of keying material helps only slightly, unless you can reduce it to zero, which has not been achieved AFAIK.

Then you have to consider the cost of very special endpoint equipment, the cost of a very special communication channel, and the cost of using that channel inefficiently.
Re: quantum hype
On 09/13/2003 03:52 PM, martin f krafft wrote:
> ... any observation of the quantum stream is immediately
> detectable -- but at the recipient's side, and only if checksums are
> being employed, which are not disturbed by continual or sporadic
> photon flips.
>
> someone will have
> access to the 20 bytes before the recipient can look at the 20
> bytes, decide they have been "tampered" with, and alert the sender.
> So I use symmetric encryption and quantum cryptography for the key
> exchange... the same situation here. Maybe the recipient will be
> able to tell the sender about the junk it receives, but Mallory
> already has read some of the text being ciphered.

1) As the Subject: line suggests, there is indeed a lot of hype in the quantum crypto business. But there is also a kernel of reality behind it.

2) Typically people use a combination of quantum and non-quantum techniques.

3) Typically there is a multi-stage process:
-- Exchange several blocks of keying material.
-- Check for tampering; reject blocks that show tampering.
-- Do some post-processing to reduce vulnerability to undetected tampering.
-- Use the result to encrypt your actual data. This is the first stage at which valuable data is exposed in any way.

Consider the possibilities:
*) In each block, Mallory has a 50/50 chance of being able to copy a bit without being detected.
*) More generally, Mallory has a 2^-C chance of being able to copy C bits without being detected.

As an easy-to-understand example: You (Alice and Bob, the good guys) choose a C big enough that 2^-C looks negligible to you. Alice sends Bob a bunch of bits (N>>2C). Bob tells Alice (in the clear) what receiver settings he used. Alice then knows which bits Bob should have been able to receive correctly. Alice tells Bob (in the clear) to check a randomly-chosen set of C bits, checking that they have the values Alice thinks they should have. If this test is passed, it puts an upper bound on how greedy Mallory has been.
Then Alice tells Bob (in the clear) to use another (disjoint) set of C bits. Bob XORs these bits together and calls it one bit of key. There is only one chance in 2^-C that Mallory knows this bit.

The efficiency of the key-exchange is roughly one part in 2C. So there is an exponential security/efficiency tradeoff. Not too shabby.

The foregoing assumed an error-free channel. Things get much worse if the good guys need to do error correction. There are snake-oily products out there that throw in some "mild" cryptographic assumptions in order to increase the efficiency. So beware.

On 09/13/2003 05:06 PM, David Wagner wrote:
> Quantum cryptography *assumes* that you
> have an authentic, untamperable channel between sender and receiver.

Not true. The signal is continually checked for tampering; no assumption need be made. Not all the world's oil comes from snakes. Some does, some doesn't.

> if we want end-to-end security, one can't
> stick classical routers or other such equipment in the middle of the
> connection between you and I.

That's true. A classical router is indistinguishable from a tap.
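For concreteness, here is a toy simulation of the check-then-XOR procedure described above. All names and parameters are illustrative; a wrong-basis measurement by Mallory is modeled crudely as a 50/50 bit flip at Bob's end:

```python
import random

random.seed(1)

def exchange(n=4096, c=32, mallory_taps=0):
    """Toy model of the sampling check plus XOR extraction."""
    alice = [random.randrange(2) for _ in range(n)]
    bob = list(alice)
    # Each bit Mallory reads is disturbed with probability 1/2.
    for i in random.sample(range(n), mallory_taps):
        if random.randrange(2):
            bob[i] ^= 1
    positions = random.sample(range(n), 2 * c)
    check, keep = positions[:c], positions[c:]
    if any(alice[i] != bob[i] for i in check):
        return None                  # tampering detected: reject the block
    key_bit = 0
    for i in keep:                   # XOR a disjoint set of C bits
        key_bit ^= alice[i]
    return key_bit

print(exchange(mallory_taps=0))      # clean channel: yields a key bit, 0 or 1
print(exchange(mallory_taps=4096))   # greedy Mallory: detected (None),
                                     # except with probability 2**-32
```

The XOR step is what makes Mallory's position exponentially bad: he knows the key bit only if he copied every one of the C kept bits undetected.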
Re: cryptographic ergodic sequence generators?
On 09/06/2003 02:09 PM, Perry E. Metzger wrote:
> For making things like IP fragmentation ids and other similar
> protocol elements unpredictable,

OK, that more-or-less defines an objective.

> it would be useful to have what I'll call a cryptographic ergodic
> sequence generator

I'm not at all sure that is the best way to approach the stated objective. As usual, we need a clear idea of what threats we are facing; we need more detail than is provided by the objective mentioned above. Also, as always, we want to impose large costs on the bad guys and small costs on the good guys, so we need to have some sort of cost model.

In this case, the main issues appear to be:
a) some cost if we try to re-use an ID prematurely.
b) some cost if the bad guy guesses what ID we are going to use in the near future. (Presumably he can replay IDs from the recent past as much as he wants; we can't stop that.)
c) constraints on n, the number of bits in the ID.
d) computation costs and communication costs.
e) possibly it is important for our peer at the receiving end to be able to treat our IDs as being _sequential_ and well-ordered, as the Subject: of this thread suggests, as opposed to being merely distinct.

There's no point in designing something that works well only when it is not under attack... so let's assume it is under heavy attack. The bad guy is trying like crazy to guess IDs. This means that cost (b) must be taken very seriously. In particular, suppose we have an ergodic design with n=24. Then the bad guy can just sit and watch the first 16 million packets, and can then eat us for lunch, predicting future IDs with 100% success. In the long run (large numbers of packets), this outcome is as bad as anything could possibly be. We can do better.

I recommend we consider schemes that are not strictly ergodic, but instead incorporate some degree of randomness. To understand the effectiveness of my schemes, we must understand item (c), the constraints on n.
Let us suppose that there can be at most 2^k IDs "alive" in the system at any one time.

-- Obviously we must have n>=k; otherwise it would be impossible to use the IDs reliably, even in the absence of an attack.

-- More interestingly, we need to assume n is appreciably bigger than k, or the whole game is not worth playing. That's because the bad guy has one chance in 2^(n-k) of guessing an ID that corresponds to an "alive" ID, no matter how cleverly we choose the IDs.

Therefore, without further ado:

Scheme #1: Construct an n-bit-long plain-ID using a k-bit counter concatenated with (n-k) bits of randomness. Then form the final crypto-ID by encrypting the plain-ID. (The encryption key is randomly chosen at system boot time, and remains secret.) This guarantees that no ID will collide with any of the 2^k most-recently-used IDs. It also guarantees that the bad guy's chance of guessing the next ID is only one in 2^(n-k). [After all, *we* don't have any better chance of guessing our next ID, because of the randomness.] In the long run, this is better than the strictly ergodic scheme by a factor of 2^(n-k). This is about as well as we can do, in a minimax sense, based on the maximum success the bad guy can have.

NOTE: If the IDs need to be sequential, as mentioned in desideratum (e) above, then we need to give the decryption key to our peer at the receiving end. The peer can recover the plain-ID, discard the randomness, and use the counter for sequencing.

Scheme #2: We could go to the extreme of having no counter at all, just n bits of randomness. To prevent collisions, we use a content-addressable memory sufficient to remember the last 2^k IDs we sent. We choose candidate IDs at random; if a candidate would cause a collision, we discard the candidate and try again. This anti-collision step increases computational costs by roughly one part in 2^(n-k) and increases communications costs not at all.
About the only advantage this has over scheme #1 has to do with whether you think k is an upper bound or a typical value. If the typical number of "alive" IDs in the system is less than the maximum number, scheme #2 introduces more randomness and therefore makes life harder for the bad guy. This of course totally sacrifices any notion of sequencing.

Scheme #3: This is a hybrid of scheme #1 and scheme #2. Keep a c-bit counter, where we allow c to be less than k. This absolutely guarantees no collisions with the last 2^c IDs, and makes it highly likely that there will be no collisions stretching back much farther than that. We take our chances, without bothering to have a content-addressable memory. This scheme exploits the tradeoff between cost (a) and cost (b). In the likely case that cost (b) is much larger, it might pay to reduce the chance of incurring cost (b) even if this means a slight risk of incurring cost (a).

- The Cryptography Mailing List Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
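Scheme #1 is easy to prototype. Here is a minimal sketch in Python with n=32 and k=24 (arbitrary illustrative choices); the n-bit keyed permutation is a toy 4-round Feistel built from SHA-256, standing in for a real cipher:

```python
import hashlib
import secrets

N, K = 32, 24                      # n-bit IDs; at most 2^k alive at once
HALF = N // 2
MASK = (1 << HALF) - 1
KEY = secrets.token_bytes(16)      # chosen at boot time, kept secret

def _f(rnd, r):
    """Round function for the toy Feistel: keyed hash truncated to HALF bits."""
    h = hashlib.sha256(KEY + bytes([rnd]) + r.to_bytes(4, "big")).digest()
    return int.from_bytes(h[:4], "big") & MASK

def permute(x):
    """Toy keyed permutation on N-bit values (4-round balanced Feistel)."""
    L, R = x >> HALF, x & MASK
    for i in range(4):
        L, R = R, L ^ _f(i, R)
    return (L << HALF) | R

def unpermute(y):
    """Inverse permutation: same rounds, replayed in reverse."""
    L, R = y >> HALF, y & MASK
    for i in reversed(range(4)):
        L, R = R ^ _f(i, L), L
    return (L << HALF) | R

class Scheme1:
    """Crypto-ID generator: encrypt(counter || randomness)."""
    def __init__(self):
        self.counter = 0

    def next_id(self):
        rand = secrets.randbits(N - K)                        # (n-k) random bits
        plain = ((self.counter & ((1 << K) - 1)) << (N - K)) | rand
        self.counter += 1
        return permute(plain)

# A peer holding the key can recover the counter for sequencing:
gen = Scheme1()
ids = [gen.next_id() for _ in range(1000)]
counters = [unpermute(i) >> (N - K) for i in ids]
```

Because the counter occupies the high bits of the plain-ID and the permutation is a bijection, no two of the last 2^k crypto-IDs can collide, and the peer recovers the sequence exactly.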
lopsided Feistel (was: cryptographic ergodic sequence generators)
On 09/06/2003 02:33 PM, Tim Dierks wrote:

> I'm sure that it would be possible to design a Feistel-based block
> cipher with variable block size, supporting some range of even values
> of n.

There's no need to exclude odd n. I know the typical superficial textbook describes the Feistel trick in terms of splitting each block exactly in half, but if you understand the trick you see that it works just fine for other splits. It doesn't need to be anywhere near half. It doesn't even need to be a two-way split. You could process a 21-bit word as:
-- three groups of seven, or
-- seven groups of three, or
-- one group of twelve and one group of nine, or
-- whatever.
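To make the 12+9 split concrete, here is a sketch of a lopsided Feistel permutation on a 21-bit word (the key, round function, and round count are illustrative stand-ins, not a vetted design). The half-widths alternate from round to round, and the network inverts cleanly as long as the rounds are replayed in reverse:

```python
import hashlib

KEY = b"demo key"                  # illustrative only

def _f(rnd, value, width):
    """Round function: keyed hash of `value`, truncated to `width` bits."""
    h = hashlib.sha256(KEY + bytes([rnd]) + value.to_bytes(4, "big")).digest()
    return int.from_bytes(h[:4], "big") & ((1 << width) - 1)

def lopsided_feistel(x, a=12, b=9, rounds=4, decrypt=False):
    """Permutation on (a+b)-bit words via an unbalanced Feistel network.
    The split need not be even -- 12+9 works as well as any."""
    assert rounds % 2 == 0                # even rounds restore the a/b split
    L, R = x >> b, x & ((1 << b) - 1)     # L: a bits, R: b bits
    wL, wR = a, b
    if not decrypt:
        for i in range(rounds):
            L, R = R, L ^ _f(i, R, wL)    # new R has the old L's width
            wL, wR = wR, wL
    else:
        for i in reversed(range(rounds)):
            L, R = R ^ _f(i, L, wR), L    # undo one round
            wL, wR = wR, wL
    return (L << wR) | R
```

The same idea extends to three-way or seven-way splits; the only bookkeeping is tracking which group gets hashed into which.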
Re: cryptographic ergodic sequence generators?
On 09/06/2003 02:09 PM, Perry E. Metzger wrote:

> For making things like IP fragmentation ids and other similar
> protocol elements unpredictable, it would be useful to have what I'll
> call a cryptographic ergodic sequence generator -- that is, a
> generator that will produce a sequence of n bit numbers such that
> there are no repeats until you pass the 2^nth number in the sequence
> (that is, the sequence is a permutation of all 2^n bit numbers) and
> such that it is very difficult to predict what the next number in the
> sequence might be beyond the fact that it will not be one of the
> numbers seen earlier in the sequence. It is also rather important
> that the generator be computationally inexpensive.
>
> Anyone know how to produce such a thing?

Encrypted counter. The counter provably has a cycle of 2^n. The encryption is provably 1-to-1. Choose the encryption key randomly and keep it secret.
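As a sanity check on the encrypted-counter claim, here is a toy demonstration with n=12, small enough to enumerate the entire cycle; the keyed permutation is a 4-round SHA-256 Feistel standing in for a real cipher:

```python
import hashlib
import secrets

N = 12                       # toy size so we can walk the full 2^N cycle
HALF = N // 2
MASK = (1 << HALF) - 1
KEY = secrets.token_bytes(16)

def encrypt(x):
    """Keyed permutation on N-bit values (4-round Feistel, SHA-256 round fn)."""
    L, R = x >> HALF, x & MASK
    for i in range(4):
        f = int.from_bytes(
            hashlib.sha256(KEY + bytes([i]) + R.to_bytes(2, "big")).digest()[:2],
            "big") & MASK
        L, R = R, L ^ f
    return (L << HALF) | R

# Encrypted counter: the counter cycles through all 2^N values, and the
# encryption is 1-to-1, so the output visits every N-bit value exactly once.
sequence = [encrypt(ctr) for ctr in range(2 ** N)]
```

The no-repeat property follows purely from the bijectivity of the Feistel construction; the unpredictability depends on the quality of the cipher and the secrecy of the key.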
Re: PRNG design document?
On 08/29/2003 03:43 PM, Tim Dierks wrote:

> Allow me to clarify my problem a little. I'm commonly engaged to
> review source code for a security audit; some such programs include a
> random number generator, many of which are of ad-hoc design. The
> nature of such audits is that it's much more appealing to be able to
> say "here are three accepted guidelines that your generator violates"
> rather than "I haven't seen that before and I don't like it, you
> should replace it with something else".

That's a very helpful clarification.

> So I'm interested in such design guidelines, if they're available,
> which such a generator could be tested against. While the resources
> provided have been useful, it's only led me to where I was: that the
> only way to do so is to attempt to analyze the system for
> vulnerability to a collection of known flaws.

That dissatisfaction is wise. Checking for known flaws is far from sufficient. It is possible to do much better.

> I know a bunch of basic, obvious things that I can state (have a
> large enough internal state, generate output with a secure hash,
> etc.) and a bunch of other fuzzier notions that are harder to
> concretize (output should be dependent on a sufficient quantity of
> the internal pool, reseeding should affect a sufficient quantity of
> the internal pool, etc.). But I don't have a resource which attempts
> to canonically define minimal requirements for all these elements.
> (If I have missed such a list in skimming the broad resources
> available, I'd appreciate a note.)

First, as is so common in this business, a lot depends on the threat model.

-- If some scientist is doing Monte Carlo integrations, almost anything will be "random enough". I recommend using a long-period counter feeding MD5; that is sufficiently random and sufficiently efficient that it's probably not worth considering alternatives.
-- At the other extreme, for some high-stakes adversarial applications, such as a nationwide lottery or military crypto, *none* of the fuzzy concepts mentioned above come anywhere near being sufficient.

Also keep in mind that many of the biggest threats against a PRNG do not attack the algorithm itself, but rather attack the *seed*. The state must be initialized (which is hard) and protected forever after (which is hard).

I know the Subject: of this thread mentions PRNG, but I would not use a pseudo-random number generator for any high-stakes adversarial application. In particular, if anybody cares enough to hire you to ask whether the PRNG is good enough, the answer is almost certainly no.

Also note that I have made it very easy to use a hardware random number generator. For any high-stakes adversarial situation, I strongly recommend using that. For details, see
  http://www.av8n.com/turbid/

If you want some "principles" that are "violated" by typical RNGs, start with this: the RNG needs to come with a proof of correctness. In particular, this should include some sort of guaranteed lower bound on the entropy density of the generated symbols. In the case of any RNG with persistent state, there should be a proof that the seed is random and the state remains secure for all time. That's not a complete list of principles, but it should suffice to disqualify most of the junk that is out there.
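For the low-stakes Monte Carlo case mentioned earlier, the long-period-counter-feeding-MD5 recipe can be sketched in a few lines. To be clear, this is a convenience generator for simulations, emphatically not for anything adversarial:

```python
import hashlib

def md5_counter_stream(seed=b""):
    """Yield floats in [0, 1): a long-period counter hashed through MD5.
    Adequate for Monte Carlo work; NOT suitable for adversarial use."""
    ctr = 0
    while True:
        digest = hashlib.md5(seed + ctr.to_bytes(16, "big")).digest()
        yield int.from_bytes(digest, "big") / 2.0 ** 128
        ctr += 1

# Example: crude Monte Carlo estimate of pi from uniform points in the unit square
gen = md5_counter_stream()
trials = 20000
inside = 0
for _ in range(trials):
    x, y = next(gen), next(gen)
    inside += (x * x + y * y) < 1.0
pi_estimate = 4.0 * inside / trials
```

The counter has period 2^128, and the hash scrambles it well enough for statistical purposes; but the whole stream is a deterministic function of the seed, which is exactly why this design has no business near a lottery.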
Re: traffic analysis
On 08/28/2003 04:26 PM, David Wagner wrote:

> Are you sure you understood the attack?

Are you sure you read my original note?

> The attack assumes that communications links are insecure.

I explicitly hypothesized that the links were encrypted. The cryptotext may be observed and its timing may be tampered with, but I assumed the attackers could not cut through the encryption to get at the plaintext.

> The *transmission* from Alice may adhere to a fixed schedule, but
> that doesn't prevent the attacker from introducing delays into the
> packets after transmission.

Fine. So far the timing doesn't tell us anything about the behavior of Alice, just the behavior of the attacker.

> For instance, suppose I want to find out who is viewing my web site.
> I have a hunch that Alice is visiting my web site right this instant,
> and I want to test that hunch. I delay Alice's outgoing packets,
> and I check whether the incoming traffic to my web site contains
> matching delays.

I explicitly said that if some endpoints are not secure, Alice suffers some loss of privacy when communicating with such an endpoint. Here DAW is playing the role of attacker, and is mounting an attack that combines traffic analysis with much more powerful techniques; he is assuming he "owns" the endpoint or otherwise can see through the crypto into the plaintext.

Let us not confuse "traffic analysis" issues with "anonymity" issues. I explicitly said that traffic analysis was not the only threat to be considered. To say it another way: The US ambassador in Moscow is not trying to remain anonymous from the US ambassador in Riyadh; they just don't want the opposition to know if/when/how-often they talk.

=

I described a certain model based on certain hypotheses. Many people have responded with attacks on different models, based on different hypotheses. Some have frankly admitted contradicting me without having bothered to read what I wrote. I'm not going to respond to any more of these ... except to say that they do not, as far as I can see, detract in any way from the points I was making.
Re: traffic analysis
A couple of people wrote in to say that my remarks about defending against traffic analysis are "not true". As 'proof' they cite
  http://www.cypherspace.org/adam/pubs/traffic.pdf
which proves nothing of the sort. The conclusion of that paper correctly summarizes the body of the paper; it says they "examined" and "compared" a few designs, and that they "pose the question as to whether other interesting protocols exist, with better trade-offs, that would be practical to implement and deploy." Posing the question is not the same as proving that the answer is negative.

I am also reminded of the proverb: Persons saying it cannot be done should not interfere with persons doing it.

The solution I outlined is modelled after procedures that governments have used for decades to defend against traffic analysis threats to their embassies and overseas military bases.

More specifically, anybody who thinks the scheme I described is vulnerable to a timing attack isn't paying attention. I addressed this point several times in my original note. All transmissions adhere to a schedule -- independent of the amount, timing, meaning, and other characteristics of the payload. And this does not require wide-area synchronization. If incoming packets are delayed or lost, outgoing packets may have to include nulls (i.e. cover traffic).

This needn't make inefficient use of communication resources. The case of point-to-point links to a single hub is particularly easy to analyze: cover traffic is sent when and only when the link would otherwise be idle.

Similarly it needn't make inefficient use of encryption/decryption resources. This list is devoted to cryptography, so I assume people can afford 1 E and 1 D per message; the scheme I outlined requires 2 E and 2 D per message, which seems like a cheap price to pay if you need protection against traffic analysis. On top of that, the processor doing the crypto will run hotter because typical traffic will be identical to peak traffic, but this also seems pretty cheap.
traffic analysis (was: blackmail / stego)
I changed the Subject: line because most of the blackmail / stego thread was about traffic analysis. It's amusing that traffic analysis could be used to defeat a steganographic blackmail attempt, but there are larger issues involved.

It is not true, despite what some people recently suggested, that traffic analysis destroys any hope of real-time anonymous communication. It is true that if you design an "anonymity" system under the assumption that the opposition doesn't have enough resources to perform traffic analysis, you'll be taken to the cleaners if the opposition does have such resources. There exist well-known techniques for greatly reducing the effectiveness of traffic analysis.

A scenario of relevance to the present discussion goes like this:
-- There exists a data haven. (Reiter and Rubin called this a "crowd".)
-- Many subscribers have connections to the haven.
-- Each subscriber maintains a strictly scheduled flow of traffic to and from the haven, padding the channel with nulls if necessary.
-- All the traffic is encrypted, obviously.

Then the opponent can put unlimited effort into traffic analysis but won't get anything in return, beyond the _a priori_ obvious fact that some pair of subscribers *may* have communicated.

As an extension:
-- The haven may fetch a lot of web pages, some of them in response to requests from subscribers, and some not. Then the opponent can conclude that some subscriber(s) *may* have looked at some of the fetched pages.

Remark: I said that each channel must carry (and only carry) strictly scheduled traffic. It is sufficient but not necessary to send at a constant rate. More complicated schedules, possibly incorporating a degree of randomness, are allowed. The point is that the cryptotext traffic must be independent of the amount (and other characteristics) of the plaintext traffic.

Additional remarks, having little to do with traffic analysis, except as a reminder that traffic analysis isn't the only threat to be considered:

*) Anonymity means They can't prove you're guilty. But it also means you can't prove you're innocent. A sufficiently totalitarian regime will require everyone to be able to prove their innocence at all times. Subscribing to an anonymity service would therefore be automatically illegal.

*) Obviously the haven itself must be resistant to penetration by the opposition.

*) Obviously if you use this service (or any other) to communicate with somebody at an endpoint that is already under surveillance, you have no privacy. So you must to some extent trust the endpoints, no matter how good the channel is.
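The strictly scheduled flow described above can be sketched as follows. This is a toy model (the cell size is arbitrary and the "cipher" is a SHA-256 keystream stand-in), but it shows the key property: every tick emits one fixed-size encrypted cell, whether or not the application had anything to say.

```python
import hashlib
import queue
import secrets

CELL = 64          # fixed cell size in bytes (an arbitrary illustrative choice)

class ScheduledLink:
    """Emits exactly one encrypted cell per schedule tick, padding with
    null (cover) cells whenever there is no real payload queued."""

    def __init__(self):
        self.key = secrets.token_bytes(16)
        self.nonce = 0
        self.outbox = queue.Queue()

    def send(self, payload: bytes):
        assert len(payload) <= CELL - 2       # room for marker + length byte
        self.outbox.put(payload)

    def _encrypt(self, cell: bytes) -> bytes:
        # Stand-in stream cipher: SHA-256(key || nonce) stretched to CELL bytes.
        stream = hashlib.sha256(self.key + self.nonce.to_bytes(8, "big")).digest()
        stream = (stream * (CELL // 32 + 1))[:CELL]
        self.nonce += 1
        return bytes(a ^ b for a, b in zip(cell, stream))

    def tick(self) -> bytes:
        """Called once per schedule slot, independent of traffic demand."""
        try:
            payload = self.outbox.get_nowait()
            cell = b"\x01" + len(payload).to_bytes(1, "big") + payload
        except queue.Empty:
            cell = b"\x00"                    # null marker: cover traffic
        return self._encrypt(cell.ljust(CELL, b"\x00"))
```

Since real cells and cover cells have identical size and (after encryption) identical appearance, an observer of the link learns nothing about the plaintext traffic pattern.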
Re: PRNG design document?
On 08/19/2003 11:57 AM, Tim Dierks wrote:

> I'm assuming a cryptographic PRNG of the type in OpenSSL, PGP, etc.,
> where entropic seeding data is accumulated into a pool and output is
> produced by operating on the pool with a secure hash or similar
> cryptographic algorithm.

The statement contains two inequivalent ideas:
-- some applications (OpenSSL, PGP, etc.) which imply certain requirements, and
-- some technology for generating numbers which may or may not meet those requirements.

The mentioned technology is what I classify as a _stretched_ random symbol generator, because it outputs an entropy density greater than zero but less than 100%. For most of the things that OpenSSL and PGP do, certainly certificate generation and almost certainly session-key generation, I would *not* recommend using a stretched random symbol generator, but rather a full-blown True Random Symbol Generator, i.e. 100% entropy density. There are other situations (e.g. expunging a multi-gigabyte disk) where you might really need to do some stretching.

BTW I prefer to reserve the term PRNG to apply to the extreme case of zero entropy density, but there's not much to be gained by quibbling about terminology.

> Is there a definitive or highly recommended paper or book on the
> design of PRNGs?

How about this:
  http://www.av8n.com/turbid/

> I'm interested in whether there's a strong source on what the design
> considerations for how to process the input into the pool, mix &
> remix the pool, and generate output are.

The idea of a pool that needs mixing and remixing is not the optimal design IMHO.
Re: authentication and ESP
On 06/19/2003 01:49 PM, martin f krafft wrote:

> As far as I can tell, IPsec's ESP has the functionality of
> authentication and integrity built in:

It depends on what you mean by "built in".

1) The RFC provides for ESP+authentication but does not require ESP to use authentication.

2) Although the RFC allows ESP without authentication, typical implementations are less flexible. In FreeS/WAN for instance, if you ask for ESP you will get ESP+AH.

ESP without authentication may be vulnerable to replay attacks and/or active attacks that tamper with the bits in transit. The degree of vulnerability depends on details (type of chaining, higher-level properties of the payload, ...).

Remember that encryption and authentication perform complementary roles. Suppose Alice is sending to Bob. They are being attacked by Eve.
-- Encryption limits the amount of information _Eve_ receives.
-- Authentication prevents tampering, so _Bob_ can trust what he receives.

It is possible to construct situations where you could omit the AH from ESP+AH without losing anything, but you would need to analyze the situation pretty carefully. If you have a good reason for using something other than ESP+AH, please clarify what you want to do and why. Otherwise just go with the normal ESP+AH.
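The complementary roles can be illustrated with a generic encrypt-then-MAC sketch. This shows the principle only, not the actual ESP/AH wire formats; the stream cipher here is a SHA-256 stand-in:

```python
import hashlib
import hmac
import secrets

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Stand-in stream cipher: SHA-256 in counter mode."""
    out = b""
    ctr = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return out[:length]

def protect(enc_key, mac_key, plaintext):
    """Encrypt-then-MAC: encryption limits what Eve learns; the MAC lets
    Bob detect any tampering with the bits in transit."""
    nonce = secrets.token_bytes(8)
    ct = bytes(a ^ b for a, b in
               zip(plaintext, _keystream(enc_key, nonce, len(plaintext))))
    tag = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    return nonce + ct + tag

def open_packet(enc_key, mac_key, packet):
    """Returns the plaintext, or None if authentication fails."""
    nonce, ct, tag = packet[:8], packet[8:-32], packet[-32:]
    expect = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expect):
        return None                           # tampering detected
    return bytes(a ^ b for a, b in zip(ct, _keystream(enc_key, nonce, len(ct))))
```

Note what happens if you drop the MAC: Eve still learns nothing from the ciphertext, but she can flip bits in transit and Bob has no way to notice -- which is exactly the gap that AH (or ESP's own authentication option) fills.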
Re: https for virtual hosts (was: attack on paypal)
On 06/11/2003 10:56 AM, Sunder wrote:

> www.foo.com www.bar.com www.baz.com can't all live on the same IP and
> have individual ssl certs for https. :( This is because the cert is
> exchanged before the http 1.1 layer can say "I want www.bar.com" So
> you need to waste IP's for this. Since the browser standards are
> already in place, it's unlikely to be easy to find a workaround.

A reasonable workaround might be something like
  http://www.ietf.org/rfc/rfc3056.txt
whose stated goal is:

> ... to allow isolated IPv6 domains or hosts, attached to an IPv4
> network which has no native IPv6 support, to communicate with other
> such IPv6 domains or hosts with minimal manual configuration, before
> they can obtain native IPv6 connectivity. It incidentally provides an
> interim globally unique IPv6 address prefix to any site with at least
> one globally unique IPv4 address, even if combined with an IPv4
> Network Address Translator (NAT).
Re: An attack on paypal
"James A. Donald" <[EMAIL PROTECTED]> wrote: > >>Attached is a spam mail that constitutes an attack on paypal similar >>in effect and method to man in the middle. Yeah, I've been seeing that one for a month or two now. I've seen several versions. Some of them are quite well done. I imagine they get more than a few victims. I would have thought that the perpetrators would have been too afraid of stings to try something so bold. The existence of such schemes is a sad commentary on the state of law enforcement. >>The bottom line is that https just is not working. Its broken. On 06/08/2003 05:47 PM, tom st denis wrote: I disagree. That attack is more akin to a "Hi, I'm calling from {insert bank here} and we need your CC info to update your file." ... So your "conclusions" are a bit off. You guys are talking past each other. All statements of the form. -- foo is working (or not) -- foo solves the problem (or not) are so imprecise as to be useless. It is better to talk about a definite specification. Then we can ask whether foo meets the spec or not. If you ask whether a given https implementation meets the https specifications, then quite possibly it does. So in this sense the technology is not "broken". But if you ask whether https makes the world safe for naifs to conduct e-commerce, by protecting them from all possible spoofs and MITM attacks, then no, it certainly does not do that. There are some who rashly claimed it was supposed to do that, so in this sense it is quite broken. It fails to meet the broader spec. - The Cryptography Mailing List Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
Re: Quantum crypto, from BBC
On 06/07/2003 08:04 AM, Udhay Shankar N wrote:

> I haven't seen this discussed here yet.

It's been discussed here some, and discussed elsewhere plenty. I get 19,000 hits from
  http://www.google.com/search?q=quantum+cryptography+product+OR+products

> Is there something to this?

It depends on your definition of "something". Quantum cryptography is perfectly real and is fascinating in an academic sort of way. The available products are somewhere between "not very practical" and "ridiculous" if you ask me. Most companies can't be bothered to do classical crypto properly. The idea that they would pay the incremental cost to step up to quantum crypto seems far-fetched to me.

On the scale of physics hype, quantum crypto in particular and quantum computation in general are nowhere near as bad as cold fusion, but perhaps comparable to high-Tc superconductors, which had a definite basis in fact but whose practicality was wildly overclaimed.

> Dr Shields' team have demonstrated quantum cryptography working over
> distances of 100 km, which should be enough to cover large
> metropolitan areas such as London and Tokyo.

This is not new news.

> The Department of Trade and Industry has pledged cash to help the
> researchers refine their work and bring commercial quantum
> cryptography products to market.

Tee hee. Very funny. I don't think "trade and industry" considerations are the driving force here. I think the military and the cryptologic agencies have rather larger budgets than the Department of Trade and Industry, and they are really who's paying for the flurry of R&D.

=

If you want to improve the fact-to-hype ratio, go to
  http://xxx.lanl.gov/find/quant-ph
and type "cryptography" in the 'abstract' box. I get 82 hits in the range 2001-to-date. And those lead to yet other references.
baseline privacy ... not
Hi --

1) In a cable-modem system, the layer-1 signal to/from your cable is physically present in your neighbors' homes.

2) To defend against the obvious privacy problems this implies, the standards provide for Baseline Privacy (BPI), which encrypts the signals. So you're safe, right?

3) Evidence suggests that most cable-modem customers in the US are not protected. Many service providers have Baseline Privacy turned off. Defeated. Disabled. Skipped. No privacy.

The evidence for this comes from
-- directly examining the configuration of a few modems
-- talking to The Cable Guy
-- noting that when certain small providers do implement BPI, they brag about it and claim this gives them an advantage over the "established" providers:
  http://gemnets.com/c5_technical.html#question5

4) From this it appears that in most cases, all that protects your privacy is security-by-obscurity. And if you want an upper bound on how much obscurity there is, note that there is a vibrant community of cable-modem firmware hackers:
  http://www.cablemodemhack.com/

5) It's interesting to think about what customers ought to do about this, short-term and/or long-term.

-- Obviously end-to-end security is needed. But it is not always feasible at present. I would connect to google via SSL if I could, but google doesn't implement https. And that would still leave me open to traffic analysis.

-- Link-by-link security is never a substitute for overall security, but you need some link-by-link security just to cut down on traffic analysis and DoS attacks, including ARP poisoning and the like. One idea that comes to mind is to use IPsec to secure the connections to an onion routing system. Or mist / crowd / whatever.

Comments? Suggestions?