Re: Monoculture
Thor Lancelot Simon wrote:
> On Sun, Oct 05, 2003 at 03:04:00PM +0100, Ben Laurie wrote:
>> Thor Lancelot Simon wrote:
>>> On Sat, Oct 04, 2003 at 02:09:10PM +0100, Ben Laurie wrote:
>>>> Thor Lancelot Simon wrote:
>>>>> these operations. For example, there is no simple way to do the most common certificate validation operation: take a certificate and an optional chain, and check that the certificate is signed by an accepted root CA, or that each certificate in the chain has the signing property and that the chain reaches that CA -- which would be okay if OpenSSL did the validation for you automatically, but it doesn't, really.
>>>> Err, yes it does, but it's not very well documented.
>>> No. You can't do it in one step, and you have to use functions that are marked in OpenSSL's header files as not being part of the official API. mod_ssl has a convenience function, confusingly named just like the OpenSSL library functions, that deals with this -- of course, it should be part of OpenSSL itself, but at least as of 0.9.6 it was not.
>> Would you care to be more explicit?
> I have to apologize -- I was not entirely correct in my initial statement, but without access to the source tree I did most of my OpenSSL work in (it belongs to a former employer) it took me a while to retrace my steps and realize I was not quite right. On the client side, though the documentation's poor, you're correct: there _is_ a way to validate a certificate and chain you've received from the peer in one step. (I note that there is now reference in the header files to some AUTOCHAIN stuff that I don't recall from earlier versions of OpenSSL, but that ssl_verify_cert_chain is *still* not part of the public API; it's in ssl_locl.h.)

But that's because it's used internally (by both clients and servers).
> On the server side (or, indeed, on the client side, if the client side needs to follow a chain to reach a trusted CA, and thus needs to load chain certificates) there's no API for loading a cert and its entire chain in one shot, and indeed to do so AFAICT you must use functions that are not part of the public API.

Hmmm. You can put multiple certs in a single file, IIRC, and you can certainly have multiple certs in a directory, with hashed links pointing to them. I know for sure chained certs work without screwing around, because I just tested them whilst adding CRLs to Apache-SSL.

> See SSL_CTX_use_certificate_chain() in the mod_ssl sources (which appears much simpler in mod_ssl 2.8 than what I remember working with -- perhaps the OpenSSL API *has* improved!) and SSL_use_certificate_file, SSL_CTX_use_certificate_file, and SSL_CTX_use_certificate_chain_file in the OpenSSL sources. And then note that *all* of the example code gets this stuff wrong -- if it even bothers to do server certificate validation at all.

Regrettably, mod_ssl is not the best guide to the use of OpenSSL - it often goes about things in a long-winded and inappropriate way (compare its CRL handling with mine, for example).

> I can't lose my impression that some of the chain-handling functions moved from ssl_locl.h to ssl.h between 0.9.6 and 0.9.7 but I don't have a 0.9.6 tree handy nor the time to sift through it. Sigh. I wish I had some of my code from the last time I tackled this issue with OpenSSL at hand, but unfortunately I don't own it, so I do not.

This could well be true.
> The complexity and instability of the API for this stuff, and the fact that we're both rooting around *in the OpenSSL source code* to figure out which bits of it are public and which are internal, and in which version of OpenSSL, when the operation at hand (loading and validating chains of certificates, from the cert for the peer's identity up to the cert from which trust derives) is so common, are a pretty good example, in themselves, of why I don't care for OpenSSL.

Oh, I totally agree!

> I spent a long time working on the X.509 support in Pluto, too, and though I don't really care for it either it does have the decided advantage that it appears to be designed in the right direction: from "what are the end-user's needs?" instead of "what is the structure of the underlying protocol or software abstraction?"

I'm not familiar with Pluto.

Cheers,

Ben.

--
http://www.apache-ssl.org/ben.html       http://www.thebunker.net/

There is no limit to what a man can do or how far he can go if he doesn't mind who gets the credit. - Robert Woodruff

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]
Re: Monoculture
On Sun, Oct 05, 2003 at 03:04:00PM +0100, Ben Laurie wrote:
> Thor Lancelot Simon wrote:
>> On Sat, Oct 04, 2003 at 02:09:10PM +0100, Ben Laurie wrote:
>>> Thor Lancelot Simon wrote:
>>>> these operations. For example, there is no simple way to do the most common certificate validation operation: take a certificate and an optional chain, and check that the certificate is signed by an accepted root CA, or that each certificate in the chain has the signing property and that the chain reaches that CA -- which would be okay if OpenSSL did the validation for you automatically, but it doesn't, really.
>>> Err, yes it does, but it's not very well documented.
>> No. You can't do it in one step, and you have to use functions that are marked in OpenSSL's header files as not being part of the official API. mod_ssl has a convenience function, confusingly named just like the OpenSSL library functions, that deals with this -- of course, it should be part of OpenSSL itself, but at least as of 0.9.6 it was not.
> Would you care to be more explicit?

I have to apologize -- I was not entirely correct in my initial statement, but without access to the source tree I did most of my OpenSSL work in (it belongs to a former employer) it took me a while to retrace my steps and realize I was not quite right. On the client side, though the documentation's poor, you're correct: there _is_ a way to validate a certificate and chain you've received from the peer in one step. (I note that there is now reference in the header files to some AUTOCHAIN stuff that I don't recall from earlier versions of OpenSSL, but that ssl_verify_cert_chain is *still* not part of the public API; it's in ssl_locl.h.)

On the server side (or, indeed, on the client side, if the client side needs to follow a chain to reach a trusted CA, and thus needs to load chain certificates) there's no API for loading a cert and its entire chain in one shot, and indeed to do so AFAICT you must use functions that are not part of the public API.
See SSL_CTX_use_certificate_chain() in the mod_ssl sources (which appears much simpler in mod_ssl 2.8 than what I remember working with -- perhaps the OpenSSL API *has* improved!) and SSL_use_certificate_file, SSL_CTX_use_certificate_file, and SSL_CTX_use_certificate_chain_file in the OpenSSL sources. And then note that *all* of the example code gets this stuff wrong -- if it even bothers to do server certificate validation at all.

I can't lose my impression that some of the chain-handling functions moved from ssl_locl.h to ssl.h between 0.9.6 and 0.9.7 but I don't have a 0.9.6 tree handy nor the time to sift through it. Sigh. I wish I had some of my code from the last time I tackled this issue with OpenSSL at hand, but unfortunately I don't own it, so I do not.

The complexity and instability of the API for this stuff, and the fact that we're both rooting around *in the OpenSSL source code* to figure out which bits of it are public and which are internal, and in which version of OpenSSL, when the operation at hand (loading and validating chains of certificates, from the cert for the peer's identity up to the cert from which trust derives) is so common, are a pretty good example, in themselves, of why I don't care for OpenSSL.

I spent a long time working on the X.509 support in Pluto, too, and though I don't really care for it either it does have the decided advantage that it appears to be designed in the right direction: from "what are the end-user's needs?" instead of "what is the structure of the underlying protocol or software abstraction?"

Thor
Re: Monoculture
Thor Lancelot Simon wrote:
> As far as what OpenSSL does, if you simply abandon outright any hope of acting as a certificate authority, etc. you can punt a huge amount of complexity; if you punt SSL, you'll lose quite a bit more. As far as the programming interface goes, I'd read Eric's book and then think hard about what people actually use SSL/TLS for in the real world. It's horrifying to note that OpenSSL doesn't even have a published interface for some of these operations. For example, there is no simple way to do the most common certificate validation operation: take a certificate and an optional chain, and check that the certificate is signed by an accepted root CA, or that each certificate in the chain has the signing property and that the chain reaches that CA -- which would be okay if OpenSSL did the validation for you automatically, but it doesn't, really.

Err, yes it does, but it's not very well documented. In fact, it constantly amazes me what OpenSSL does do for you automatically. For example, I recently added CRL checking to Apache-SSL. It took a while to figure it out, but in the end it came down to doing this:

    static void InitCRL(SSLConfigRec *pConfig)
    {
        X509_STORE *pStore = SSL_CTX_get_cert_store(pConfig->pSSLCtx);
        int vflags = 0;

        if (pConfig->bUseCRL)
            vflags |= X509_V_FLAG_CRL_CHECK;
        if (pConfig->bCRLCheckAll)
            vflags |= X509_V_FLAG_CRL_CHECK_ALL;

        X509_STORE_set_flags(pStore, vflags);
    }

(note, before people start nagging me for it, this is a WIP, but will be released soon).

Cheers,

Ben.

--
http://www.apache-ssl.org/ben.html       http://www.thebunker.net/

There is no limit to what a man can do or how far he can go if he doesn't mind who gets the credit. - Robert Woodruff
Re: Monoculture
[EMAIL PROTECTED] wrote:
> On Thu, 2 Oct 2003, Thor Lancelot Simon wrote:
>> 1) Creates a socket-like connection object
>> 2) Allows configuration of the expected identity of the party at the other end, and, optionally, parameters like acceptable cipher suite
>> 3) Connects, returning error if the identity doesn't match. It's probably a good idea to require the application to explicitly do another function call validating the connection if it decides to continue despite an identity mismatch; this will avoid a common, and dangerous, programmer error.
>> 4) Provides select/read operations thereafter.
> Speaking as a Postfix developer, it would be very useful to have a non-blocking interface that maintained an event bitmask and readable/writable callbacks for the communications channel, allowing a single-threaded application to get other work done while a TLS negotiation is in progress, or to gracefully time out the TLS negotiation if progress is too slow. This means that the caller should be able to tear down the state of a partially completed connection at any time without memory leaks or other problems.

Again, you can do this with OpenSSL.

Cheers,

Ben.

--
http://www.apache-ssl.org/ben.html       http://www.thebunker.net/

There is no limit to what a man can do or how far he can go if he doesn't mind who gets the credit. - Robert Woodruff
Re: Monoculture
On Sat, Oct 04, 2003 at 02:09:10PM +0100, Ben Laurie wrote:
> Thor Lancelot Simon wrote:
>> As far as what OpenSSL does, if you simply abandon outright any hope of acting as a certificate authority, etc. you can punt a huge amount of complexity; if you punt SSL, you'll lose quite a bit more. As far as the programming interface goes, I'd read Eric's book and then think hard about what people actually use SSL/TLS for in the real world. It's horrifying to note that OpenSSL doesn't even have a published interface for some of these operations. For example, there is no simple way to do the most common certificate validation operation: take a certificate and an optional chain, and check that the certificate is signed by an accepted root CA, or that each certificate in the chain has the signing property and that the chain reaches that CA -- which would be okay if OpenSSL did the validation for you automatically, but it doesn't, really.
> Err, yes it does, but it's not very well documented.

No. You can't do it in one step, and you have to use functions that are marked in OpenSSL's header files as not being part of the official API. mod_ssl has a convenience function, confusingly named just like the OpenSSL library functions, that deals with this -- of course, it should be part of OpenSSL itself, but at least as of 0.9.6 it was not.

Thor
Re: Monoculture
On Thu, 2 Oct 2003, Thor Lancelot Simon wrote:
> 1) Creates a socket-like connection object
> 2) Allows configuration of the expected identity of the party at the other end, and, optionally, parameters like acceptable cipher suite
> 3) Connects, returning error if the identity doesn't match. It's probably a good idea to require the application to explicitly do another function call validating the connection if it decides to continue despite an identity mismatch; this will avoid a common, and dangerous, programmer error.
> 4) Provides select/read operations thereafter.

Speaking as a Postfix developer, it would be very useful to have a non-blocking interface that maintained an event bitmask and readable/writable callbacks for the communications channel, allowing a single-threaded application to get other work done while a TLS negotiation is in progress, or to gracefully time out the TLS negotiation if progress is too slow. This means that the caller should be able to tear down the state of a partially completed connection at any time without memory leaks or other problems.

--
Victor Duchovni
IT Security, Morgan Stanley
Re: Monoculture / Guild
> ... it does look very much from the outside that there is an informal Cryptographers Guild in place...

The Guild, such as it is, is a meritocracy; many previously unknown people have joined it since I started watching it in about 1990. The way to tell who's in the Guild is that they can break your protocols or algorithms, but you can't break theirs.

While there are only hundreds of serious members of the Guild -- a comfortable number for holding conferences on college campuses -- I think just about everyone in it would be happier if ten times as many people were as involved as they are in cryptography and security. Then ten times as many security systems that everybody (including the Guild members) depends on would be designed properly. They certainly welcomed the Cypherpunks to learn (and to join if they were serious enough).

I consider myself a Guild Groupie; I don't qualify but I think they're great. I follow in their footsteps and stand on their shoulders. Clearly there are much larger numbers of Guild Groupies than Guild members, or Bruce Schneier and Neal Stephenson wouldn't be able to make a living selling books to 'em. :-)

John

PS: Of course there's a whole set of Mystic Secret Guilds of Cryptography. We think our openness will defeat their closedness, like the free world eventually beat the Soviet Union. There are some good examples of that, such as our Guild's realization of the usefulness of public-key crypto (we reinvented it independently, but they hadn't realized what a revolutionary concept they already had). Then again, they are better funded than we are, and have more exemptions from legal constraints (e.g. it's hard for us to do production cryptanalysis, which is really useful when learning to design good cryptosystems).
Re: Monoculture / Guild
On Thu, Oct 02, 2003 at 03:34:35PM -0700, John Gilmore wrote:
>> ... it does look very much from the outside that there is an informal Cryptographers Guild in place...
> The Guild, such as it is, is a meritocracy; many previously unknown people have joined it since I started watching it in about 1990. The way to tell who's in the Guild is that they can break your protocols or algorithms, but you can't break theirs.

The problem with guilds is that they become set in their ways. Ask here how the fact that not all large numbers are hard to factor affects RSA and you will be ignored or dismissed. Ask whether cubic meters of special hardware could brute-force keys better than the same cubic meters of supercomputers and you get the same.

As a perennial outsider, I notice this in several fields. I'm not in the guild for measuring the Specific Gravity of Gases. Which is precisely why my name is on the patent for the smallest machine (4,677,841).

--
| Lyn Kennedy | E-mail | [EMAIL PROTECTED]    |
| K5QWB       | ICBM   | 32.5 North 96.9 West |
---Livin' on an information dirt road a few miles off the superhighway---
Re: Monoculture
slightly ranting, you might want to hit del now :)

Ian Grigg wrote:
> What is written in these posts (not just the present one) does derive from that viewpoint and although one can quibble about the details, it does look very much from the outside that there is an informal Cryptographers Guild in place [1]. I don't think the jury has reached an opinion on why the cryptography group looks like a guild as yet, and it may never do so. A guild, of course, is either a group of well-meaning skilled people serving the community, or a cartel for raising prices, depending on who is doing the answering.

To me it seems more like an academic community - particularly the way many can't handle the concept of "good enough" but look for theoretically perfect solutions that may be unworkable in the Real World. And yes, I *am* an outsider - I dabble a little, and I am a programmer, but I am the first to admit my math skills are nowhere near adequate to make any meaningful contribution to the field.

It seems to me there is no more a cryptography guild than a linux guild - yes, you get advocates who foam at the mouth if you say the wrong thing, but the majority seem more interested in getting it to work. From my POV as a programmer, learning the field consists of identifying the available building blocks (hash, symmetric, asymmetric), standards (openpgp, x509, ssl, ssh, ipsec) and prior implementations (paying particular attention to what had to be patched due to discovered vulnerabilities, so as to avoid the same errors in my own code).

It also seems the crypto community is very open to questions, very hostile to statements - so often knowing how to phrase something to them is as important as the content of the question. Stating "I am doing $FOO" will not be as productive as "If I were to do $FOO, what vulnerabilities would that introduce?" - remembering that any good advice you get back for free would have probably cost you weeks of study or possibly thousands of dollars trying to obtain a security certification for your solution later on. Just ignore any posts of "because it isn't done that way" unless they give a good reason why your way isn't better (note "as good" isn't good enough - you always need a good reason to stray from a tested and known path, and it is often worth putting up with a few minor inconveniences to stay on it). Oh - and make sure you can recognise a good reason when you see it :)

> The guild would like the application builder to learn the field. They would like him to read up on all the literature, the analyses. To emulate the successes and avoid the pitfalls of those protocols that went before them. The guild would like the builder to present his protocol and hope it be taken seriously. The guild would like the builder of applications to reach acceptable standards.

I would certainly expect a house builder to know how to lay bricks - but if he insisted on designing the house too, I would expect him to know how to do that (and not just start putting up walls and hoping it will all work out later). Design requires a fair understanding of what you are designing and what the capabilities and limitations of the materials are - this is why SAs get paid more than their programming teams (not that I like that, given I am a programmer not a SA). If you aren't willing to learn how to do that, you can still follow someone else's design - or take a modular approach and just drop pre-built units (normally libraries) into those parts of the code that need them. Libraries can be surprisingly good - if the designer put in enough effort, they can have sufficient inline M/C for the timing-critical parts that they are noticeably more efficient than implementing your own code in a medium or high level language.

> And, the guild would like the builder to take the guild seriously, in recognition of the large amounts of time guild members invest in their knowledge.

That does tend to happen - in any community, you get those who get used to being authorities, and react badly to being challenged. At least in this community most of them have the sense to back down when proved wrong :)

> None of that is likely to happen. The barrier to entry into serious cryptographic protocol design is too high for the average builder of new applications [2]. He has, after all, an application to build.

Indeed so - that is why using a prebuilt standard (or better yet, a library) as your base is such a good idea. However, a lot of programmers don't like doing that because they feel it is either cheating or means all their hard work is going to be dismissed as just an implementation of someone else's idea rather than something original and novel. However, the odds of someone rolling their own protocol getting something more efficient or effective than work that has already been done are low - and if the package you put together is sufficiently good, no users will care if it uses SSH (protocol) for comms or someone else's AES library for
RE: Monoculture
perry wrote:
> We could use more implementations of ssl and of ssh, no question. ... more cleanly implemented and simpler to use versions of existing algorithms and protocols ... would be of tremendous utility.

jill ramonsky replied:
> I am very much hoping that you can answer both (a) and (b) with a yes, in which case I will /definitely/ get on with recoding SSL: Is it possible for Bob to instruct his browser to (a) refuse to trust anything signed by Eve, and (b) to trust Alice's certificate (which she handed to him personally)? (And if so, how?)

how it's done depends on the browser:

in Moz 1.0:  Edit -> Preferences... -> Privacy & Security -> Certificates -> Manage Certificates -> {Authorities, Web Sites}

in MSIE 5:  Edit -> Preferences... -> Web Browser -> Security -> Certificate Authorities
(there seems to be no way to tell MSIE 5 to trust Alice's server cert for SSL connections, except to tell MSIE 5 to trust Alice's CA.)

in NS 4.75:  Communicator -> Tools -> Security Info -> Certificates -> {Signers, Web Sites}

- don davis, boston
RE: Monoculture
Thanks everyone for the SSL encouragement. I'm going to have a quick re-read of Eric's book over the weekend and then start thinking about what sort of easy to use implementation I could do. I was thinking of doing a C++ implementation with classes and templates and stuff. (By contrast OpenSSL is a C implementation). Anyone got any thoughts on that?

Also - anyone thinking of using something like this - could you post (in another thread maybe) suggestions as to what kind of simple interface you actually want? As in, what you want it to do? All suggestions gratefully considered, but in the light of comments in this list, I will /not/ turn it into bloatware just to satisfy all demands. (OpenSSL can do that).

Finally - I'll need some help setting up a sourceforge thing as I've never set up an open source project before and don't really know how to go about that. Some advice on licensing wouldn't go amiss either. (GPL? ... LGPL? ... something else?)

Re Don's comments below: This seems to me to be a /serious/ flaw in the design of MSIE. What if Alice doesn't /have/ a CA because she can't afford their fees? (or she doesn't trust them, or for any other reason you might care to think of). In fact, if I've understood this correctly, if Alice uses MSIE, she can't even tell her browser to trust her own website, despite being in possession of not only her own public key, but her own secret key as well! What is it with MSIE that it would prefer to trust someone other than Alice about the authenticity of Alice's site !!!???

Okay guys - _this is a serious question_. Alice has a web site. Alice has a web browser which unfortunately happens to be MSIE. Alice wishes to view Alice's web site using Alice's browser (which is not on the same machine as the server). Alice does not wish to trust ANYONE else, but she does trust herself absolutely. How does she get the browser to display the padlock? I wouldn't be at all surprised if the answer turns out to be "It can't be done."
(That may not be a problem if other browsers don't have this design flaw, of course, since Alice can tell all of her friends "don't use Microsoft").

Jill

-----Original Message-----
From: Don Davis [mailto:[EMAIL PROTECTED]]
Sent: Thursday, October 02, 2003 1:26 PM
To: Jill Ramonsky
Cc: [EMAIL PROTECTED]
Subject: RE: Monoculture

> Is it possible for Bob to instruct his browser to (b) to trust Alice's certificate (which she handed to him personally)? (And if so, how?)

how it's done depends on the browser:

in MSIE 5:  Edit -> Preferences... -> Web Browser -> Security -> Certificate Authorities
(there seems to be no way to tell MSIE 5 to trust Alice's server cert for SSL connections, except to tell MSIE 5 to trust Alice's CA.)
Re: Monoculture
On Thu, Oct 02, 2003 at 02:21:29PM +0100, Jill Ramonsky wrote:
> Thanks everyone for the SSL encouragement. I'm going to have a quick re-read of Eric's book over the weekend and then start thinking about what sort of easy to use implementation I could do. I was thinking of doing a C++ implementation with classes and templates and stuff. (By contrast OpenSSL is a C implementation). Anyone got any thoughts on

A C++ implementation will be much less useful to many potential users; perhaps the most underserved set of potential SSL/TLS users is in the embedded space, and they often can't afford to, or won't, carry a C++ runtime around with them. We learned this lesson with FreSSH and threads. I would strongly recommend a C implementation with an optional C++ interface, if C++ is the way you want to go.

Also, I'd consider, for simplicity's sake, at least at first, implementing *only* TLS, and *only* the required ciphers/MACs (actually, using others' implementations of the ciphers/MACs, even the OpenSSL or cryptlib ones, is probably not just acceptable but actually a _really good idea_.)

The major problems with OpenSSL are, from my point of view, caused by severe overengineering in the areas of:

1) Configuration
2) ASN.1/X.509 handling
3) Tightly-coupled support for the many diverse variants of SSL/TLS.

As far as what OpenSSL does, if you simply abandon outright any hope of acting as a certificate authority, etc. you can punt a huge amount of complexity; if you punt SSL, you'll lose quite a bit more. As far as the programming interface goes, I'd read Eric's book and then think hard about what people actually use SSL/TLS for in the real world. It's horrifying to note that OpenSSL doesn't even have a published interface for some of these operations.
For example, there is no simple way to do the most common certificate validation operation: take a certificate and an optional chain, and check that the certificate is signed by an accepted root CA, or that each certificate in the chain has the signing property and that the chain reaches that CA -- which would be okay if OpenSSL did the validation for you automatically, but it doesn't, really.

From my point of view, a _very_ simple interface that:

1) Creates a socket-like connection object

2) Allows configuration of the expected identity of the party at the other end, and, optionally, parameters like acceptable cipher suite

3) Connects, returning error if the identity doesn't match. It's probably a good idea to require the application to explicitly do another function call validating the connection if it decides to continue despite an identity mismatch; this will avoid a common, and dangerous, programmer error.

4) Provides select/read operations thereafter.

would serve the purposes of 90+% of client applications. On the server side, you want a bit more, and you may want a slightly finer-grained extended interface for the client, but still, you can catch a _huge_ fraction of what people do now with only the interface listed above.

Thor
Re: Monoculture
Perry E. Metzger [EMAIL PROTECTED] writes:
> Guus Sliepen [EMAIL PROTECTED] writes:
>>> In that case, I don't see why you don't bend your efforts towards producing an open-source implementation of TLS that doesn't suck.
>> We don't want to program another TLS library, we want to create a VPN daemon.
> Well, then you might consider using an existing TLS library. It is rather hard to make a protocol that does TLS things that is both safe and in any significant way simpler than TLS.

Several people have now suggested using TLS, but nobody seems to also refute the arguments made earlier against building VPNs over TCP, in http://sites.inka.de/~bigred/devel/tcp-tcp.html. I have to agree with many things in the paper; using TCP (as TLS does) to tunnel TCP/UDP is a bad idea. Off-the-shelf TLS may be a good security protocol, but it is not a good VPN protocol. Recommending TLS without understanding, or caring about, the application domain seems almost arrogant to me.

Admittedly, you could invent a datagram-based TLS, but this is not widely implemented nor specified (although I vaguely recall WTLS), so then you are back at square one as far as security analysis goes.

Thanks,
Simon
Re: Monoculture
Simon Josefsson [EMAIL PROTECTED] writes:
> Several people have now suggested using TLS, but nobody seems to also refute the arguments made earlier against building VPNs over TCP, in http://sites.inka.de/~bigred/devel/tcp-tcp.html.

Well, I agree, the most reasonable thing to do is to use ipsec, but if people aren't going to use ipsec they should at least use a protocol that isn't insecure.

Perry
Re: Monoculture
At 8:32 PM -0700 10/1/03, Matt Blaze wrote:
> It might be debatable whether only licensed electricians should design and install electrical systems. But hardly anyone would argue that electrical system designers and installers needn't be competent at what they do. (Perhaps most of those who would advance such arguments were electrocuted or killed in fires before they had a chance to make their case).

In most of the US, a homeowner can install electrical systems in their house. However, their installation must be up to code, and inspected by a government inspector. The analog for crypto protocols seems to be obvious, although the inspector part seems to be more ad hoc and community based. (But there's no building permit either.)

Cheers - Bill

-------------------------------------------------------------------------
Bill Frantz          | There's nothing so clear as a  | Periwinkle
(408) 356-8506       | vague idea you haven't written | 16345 Englewood Ave
www.pwpconsult.com   | down yet. -- Dean Tribble      | Los Gatos, CA 95032
Re: Monoculture
EKR writes:

> I'm trying to figure out why you want to invent a new authentication protocol rather than just going back to the literature ...

there's another rationale my clients often give for wanting a new security system, instead of the off-the-shelf standbys: IPSec, SSL, Kerberos, and the XML security specs are seen as too heavyweight for some applications. the developer doesn't want to shoehorn these systems' bulk and extra flexibility into their applications, because most applications don't need most of the flexibility offered by these systems. some shops experiment with the idea of using only part of OpenSSL, but stripping unused stuff out of each new release of OpenSSL is a maintenance hassle.

note that customers aren't usually dissatisfied with the crypto protocols per se; they just want the protocol's implementation to meet their needs exactly, without extra baggage of flexibility, configuration complexity, and bulk. they want their crypto clothing to fit well, but what's available off-the-rack is a choice between frumpy one-size-fits-all, and a difficult sew-your-own kit, complete with pattern, fabric, and sewing machine. so, they often opt for tailor-made crypto clothing.

my clients' concern (to keep their crypto code as small and as simple as possible) doesn't justify their inventing and deploying broken protocols, but their concern does point out that neither the crypto industry nor the crypto literature has fully met these customers' crypto needs.

- don davis, boston
Re: Monoculture
Don Davis [EMAIL PROTECTED] writes:

> EKR writes:
>> I'm trying to figure out why you want to invent a new authentication protocol rather than just going back to the literature ...
> there's another rationale my clients often give for wanting a new security system, instead of the off-the-shelf standbys: IPSec, SSL, Kerberos, and the XML security specs are seen as too heavyweight for some applications. the developer doesn't want to shoehorn these systems' bulk and extra flexibility into their applications, because most applications don't need most of the flexibility offered by these systems.

I hear this a lot, but I think that Perry nailed it earlier. SSL, for instance, is about as simple as we know how to make a protocol that does what it does. The two things that are generally cited as sources of complexity are: (1) negotiation, and (2) certificates. Negotiation doesn't really add that much protocol complexity, and certificates are kind of the price of admission if you want third-party authentication.

> some shops experiment with the idea of using only part of OpenSSL, but stripping unused stuff out of each new release of OpenSSL is a maintenance hassle.

But here you're talking about something different, which is OpenSSL. Most of the OpenSSL complexity isn't actually in SSL. The way I see it, there are basically four options:

(1) Use OpenSSL (or whatever) as-is.
(2) Strip down your toolkit but keep using SSL.
(3) Write your own toolkit that implements a stripped-down subset of SSL (e.g. self-signed certs or anonymous DH).
(4) Design your own protocol and then implement it.

Since SSL without certificates is about as simple as a stream security protocol can be, I don't see that (4) holds much of an advantage over (3).

-Ekr

Eric Rescorla [EMAIL PROTECTED] http://www.rtfm.com/
RE: Monoculture
I could do an implementation of SSL. Speaking as a programmer with an interest in crypto, I'm fairly sure I could produce a cleanly implemented and simple-to-use version. I confess I didn't realise there was a need. You see, it's not that it doesn't seem to excite [me] - it's just that, well, OpenSSL already exists, and creating another tool (or library or whatever) to do exactly the same thing seems a bit of a waste of time, like re-inventing the wheel. If you can provide some reasonable reassurance that it's not a waste of time, I'll make a start.

But I would like to ask you to clarify something about SSL which has been bugging me. Allow me to present a scenario. Suppose:

(1) Alice runs a web server.
(2) Bob has a web client.
(3) Alice and Bob know each other personally, and see each other every day.
(4) Eve is the bad guy. She runs a Certificate Authority, which is trusted by Bob's browser, but not by Bob.

Is it possible for Bob to instruct his browser to (a) refuse to trust anything signed by Eve, and (b) trust Alice's certificate (which she handed to him personally)? (And if so, how?) I am very much hoping that you can answer both (a) and (b) with a yes, in which case I will /definitely/ get on with recoding SSL.

Jill

-----Original Message-----
From: Perry E. Metzger [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, October 01, 2003 3:36 PM
To: [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Subject: Re: Monoculture

We could use more implementations of SSL and of SSH, no question. However, suggesting to people that they produce more cleanly implemented and simpler-to-use versions of existing algorithms and protocols doesn't seem to excite people, although it would be of tremendous utility.

Perry
Re: Monoculture
Who on this list just wrote a report on the dangers of monoculture? An implementation monoculture is more dangerous than a protocol monoculture. Most exploitable security problems arise from implementation errors, rather than from inherent flaws in the protocol being implemented. And broad diversity in protocols has a downside from another general systems-security principle: minimization. The more protocols you need to implement to talk to other systems, the less time you have to make sure the ones you implement are implemented well, and the more likely you are to pick up one which has a latent implementation flaw.

- Bill
Re: Monoculture
On 10/01/2003 11:22 AM, Don Davis wrote:

> there's another rationale my clients often give for wanting a new security system, instead of the off-the-shelf standbys: IPSec, SSL, Kerberos, and the XML security specs are seen as too heavyweight for some applications. the developer doesn't want to shoehorn these systems' bulk and extra flexibility into their applications, because most applications don't need most of the flexibility offered by these systems.

Is that a rationale, or an irrationale? According to 'ps', an all-up ssh system is less than 3 megabytes (sshd, ssh-agent, and the ssh client). At current memory prices, your clients would save less than $1.50 per system even if their custom software could reduce this bulk to zero. With the cost of writing custom software being what it is, they would need to sell quite a large number of systems before de-bulking began to pay off. And that's before accounting for the cost of security risks.

> some shops experiment with the idea of using only part of OpenSSL, but stripping unused stuff out of each new release of OpenSSL is a maintenance hassle.

1) Well, they could just ignore the new release and stick with the old version. Or, if they think the new features are desirable, then they ought to compare the cost of re-stripping against the cost of implementing the new desirable features in the custom code. I'm just trying to inject some balance into the balance sheet.

2) If you do a good job stripping the code, you could ask the maintainers to put your #ifdefs into the mainline version. Then you have no maintenance hassle at all.

> they want their crypto clothing to fit well, but what's available off-the-rack is a choice between frumpy ...

Aha. They want to make a fashion statement. That at least is semi-understandable. People do expensive and risky things all the time in the name of fashion.
Re: Monoculture
Matt Blaze wrote:

>> I imagine the Plumbers & Electricians Union must have used similar arguments to enclose the business to themselves, and keep out unlicensed newcomers. No longer acceptable indeed. Too much competition boys?
> Rich, Oh come on. Are you willfully misinterpreting what I wrote, or did you honestly believe that that was my intent?

Sadly, there is a shared culture amongst cryptography professionals that presses a certain logical, scientific viewpoint. What is written in these posts (not just the present one) does derive from that viewpoint, and although one can quibble about the details, it does look very much from the outside as if there is an informal Cryptographers Guild in place [1].

I don't think the jury has reached an opinion on why the cryptography group looks like a guild as yet, and it may never do so. A guild, of course, is either a group of well-meaning skilled people serving the community, or a cartel for raising prices, depending on who is doing the answering. But, even if a surprise to some, I think it is a fact that the crypto community looks like and acts as if it were a guild.

> I'd encourage the designer of the protocol who asked the original question to learn the field. Unfortunately, he's going about it sub-optimally. Instead of hoping to just design a protocol and get others to throw darts at it (or bless it), he might have better luck (and learn far more) by looking at the recent literature of protocol design and analysis and trying to emulate the analysis and design process of other protocols when designing his own. Then when he throws it over the wall to the rest of the world, the question would be not "is my protocol any good" but rather "are my arguments convincing and sufficient?"

This is where maybe the guild and the outside world part ways. The guild would like the application builder to learn the field. They would like him to read up on all the literature, the analyses.
To emulate the successes and avoid the pitfalls of those protocols that went before them. The guild would like the builder to present his protocol and hope it be taken seriously. The guild would like the builder of applications to reach acceptable standards. And, the guild would like the builder to take the guild seriously, in recognition of the large amounts of time guild members invest in their knowledge.

None of that is likely to happen. The barrier to entry into serious cryptographic protocol design is too high for the average builder of new applications [2]. He has, after all, an application to build.

What *is* going to happen is this: builders will continue to ignore the guild. They will build their application, and throw any old shonk crypto in there. Then, they will deploy their application, in the marketplace, and they will prove it, in the marketplace. The builder will find users, again, in the marketplace.

At some point along this evolution, certain truths will become evident: the app is successful (or not). The code is good enough (or not). People get benefit (or not). Companies with value start depending on the app (or not). Security is adequate (or is not). Someone comes along and finds some easy breaches (or not). That embarrasses (or not). And, maybe someone nasty comes along and starts doing damage (or not).

What may not be clear is that the investment in the security protocol does not earn back its effort until well down the track. And, as an unfortunate but inescapable corollary, if the app never gets to travel the full distance of its evolutionary path, then any effort spent up front on high-end security is wasted. Crypto is a high up-front cost with a long-term payoff. In such a scenario, standard finance theory would say that if the project is risky, do not add expensive, heavy-duty crypto up front.
This tradeoff is so strong that when we look around the security field, we find very few applications that succeeded when also built with security in mind from the initial stages. And almost all successful apps had little or bad security in them up front. If they needed it later, they required expensive add-ons, later on.

There are no successful systems that started with perfect crypto, to my knowledge. There are only perfect protocols and successful systems. A successful system can evolve to enjoy a great crypto protocol, but it would seem that a great protocol can only spoil the success of a system in the first instance.

The best we can hope for, therefore, in the initial phase, is a compromise: maybe the builder can be encouraged to think about security as an add-on in the future? Maybe some cheap and nasty crypto can be stuck in there as a placemarker? The equivalent of TEA or 40-bit RC4, but in a protocol sense. Or, maybe he can encourage a journeyman of the guild to add the stuff in, on the side, as a fun project. Maybe, just maybe, someone can create Bob's Simple Crypto Library. As a stopgap.
Re: Monoculture
Jill Ramonsky wrote:

> Is it possible for Bob to instruct his browser to (a) refuse to trust anything signed by Eve, and (b) to trust Alice's certificate (which she handed to him personally)? (And if so, how?) I am very much hoping that you can answer both (a) and (b) with a yes,

ok then, yes :)

What it comes down to is that a browser will trust any certificate either a) explicitly marked as trusted, or b) signed by a root CA in its root certificate store. So the correct procedure for (a) is for Bob to delete Eve's root certificate from his root store. For (b) he can either explicitly mark Alice's cert as accepted, or (technically more interesting) if he trusts her as an introducer, add her root cert - which is the same thing, if she self-signed her cert - to his root store, so that *any* cert she signs is accepted.
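[Editor's note: the "delete Eve, trust only Alice" configuration described above is directly expressible with a programmable TLS stack, even where a browser UI makes it awkward. A minimal sketch using Python's standard ssl module; the filename "alice.pem" is a hypothetical path to the certificate Alice handed over in person.]

```python
import ssl

# A fresh client context starts with an EMPTY trust store: Eve's CA is
# excluded simply by never loading the system/default root certificates.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)

# PROTOCOL_TLS_CLIENT already implies strict checking; stated for clarity.
ctx.verify_mode = ssl.CERT_REQUIRED
ctx.check_hostname = True

# Nothing is trusted yet -- not Eve, not anyone:
assert ctx.get_ca_certs() == []

# Trust exactly the certificate Alice handed over ("alice.pem" is a
# hypothetical path; uncomment once the file exists):
# ctx.load_verify_locations(cafile="alice.pem")
```

With this context, `ctx.wrap_socket(...)` would fail the handshake against any server whose chain does not end at Alice's cert, Eve's signatures included.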
Re: Monoculture
On Wed, Oct 01, 2003 at 04:48:33PM +0100, Jill Ramonsky wrote:

> But I would like to ask you to clarify something about SSL which has been bugging me. Allow me to present a scenario. Suppose: (1) Alice runs a web server. (2) Bob has a web client. (3) Alice and Bob know each other personally, and see each other every day. (4) Eve is the bad guy. She runs a Certificate Authority, which is trusted by Bob's browser, but not by Bob. Is it possible for Bob to instruct his browser to (a) refuse to trust anything signed by Eve, and (b) to trust Alice's certificate (which she handed to him personally)? (And if so, how?)

The list of trusted certs is part of the browser config, and can be altered. It would be hard to imagine a browser so badly written as to hard-code that list. Certainly Mozilla makes it easy (Manage Certs, under Privacy & Security in Edit/Preferences), and I've even added a self-signed server cert under IE with no trouble or inconvenience. (Yes, it did ask whether to accept the site's cert.)

-- Barney Wolff http://www.databus.com/bwresume.pdf
I'm available by contract or FT, in the NYC metro area or via the 'Net.
Re: Monoculture
Don Davis wrote:

>> EKR writes: I'm trying to figure out why you want to invent a new authentication protocol rather than just going back to the literature ...
> note that customers aren't usually dissatisfied with the crypto protocols per se; they just want the protocol's implementation to meet their needs exactly, without extra baggage of flexibility, configuration complexity, and bulk. they want their crypto clothing to fit well, but what's available off-the-rack is a choice between frumpy one-size-fits-all, and a difficult sew-your-own kit, complete with pattern, fabric, and sewing machine. so, they often opt for tailor-made crypto clothing.

This is also security-minded thinking on the part of the customer. Including extra functionality means that they have to understand it, they have to agree with its choices, they have to follow the rules in using it, and they have to pay the costs. If they can ditch the stuff they don't want, they are generally much safer in making simple statements about the security model that they have left.

So, coming up with a tailor-made solution has the security advantage of reducing complexity. If one is striving to develop the whole security model on one's own, without the benefit of formal methods, that approach is a big advantage. (None of which goes to say that they won't ditch a critical component, of course. I'm just trying to get into their heads here when they act like this.)

iang
Re: Monoculture
eric wrote:

> The way I see it, there are basically four options: (1) Use OpenSSL (or whatever) as-is. (2) Strip down your toolkit but keep using SSL. (3) Write your own toolkit that implements a stripped down subset of SSL (e.g. self-signed certs or anonymous DH). (4) Design your own protocol and then implement it. Since SSL without certificates is about as simple as a stream security protocol can be, I don't see that (4) holds much of an advantage over (3)

i agree, except that simplifying the SSL protocol will be a daunting task for a non-specialist. when a developer is faced with reading & understanding the intricacy of the SSL spec, he'll naturally be tempted to start over. this doesn't exculpate the developer for biting off more than he could chew, but it's unfair to claim that his only motivation was NIH or some other sheer stupidity.

btw, i also agree that when a developer decides to design a new protocol, he should study the literature about the design & analysis of such protocols. but at the same time, we should recognize that there's a wake-up call for us in these recurrent requests for our review of seemingly-superfluous, obviously-broken new protocols. such developers evidently want and need a fifth option, something like:

(5) use SSSL: a truly lightweight variant of SSL, well-analyzed and fully standardized, which trades away flexibility in favor of small code size & ease of configuration.

arguably, this is as much an opportunity as a wake-up call.

- don davis
Re: Monoculture
On Wed, Oct 01, 2003 at 04:48:33PM +0100, Jill Ramonsky wrote:

> I could do an implementation of SSL. Speaking as a programmer with an interest in crypto, I'm fairly sure I could produce a cleanly implemented and simple-to-use version.

Yep. It's a bit of work, and more work to ensure that there are no programming-bug-type security holes, such as those recently announced, but it's not rocket science.

> But I would like to ask you to clarify something about SSL which has been bugging me. Allow me to present a scenario. Suppose: (1) Alice runs a web server. (2) Bob has a web client. (3) Alice and Bob know each other personally, and see each other every day. (4) Eve is the bad guy. She runs a Certificate Authority, which is trusted by Bob's browser, but not by Bob. Is it possible for Bob to instruct his browser to (a) refuse to trust anything signed by Eve, and (b) to trust Alice's certificate (which she handed to him personally)? (And if so, how?)

Yes and yes. Most SSL/TLS implementations let the application designate a set of certs as trusted CA certs for purposes of authenticating SSL peers. If his client is programmed to let him, Bob can delete Eve's cert from the trusted CA list. Many browsers let you do this, although it's often hard to find in the config menus.

For (b), Bob's client would need to be able to mark Bob's copy of Alice's cert as trusted even though it's not a self-signed CA cert. This is also just a matter of programming, but most browsers don't let you do this -- their programmers decided that in order to simplify operation, they would not allow browsers to mark non-self-signed certs as trusted.

The SSL/TLS spec is pretty quiet about what peers use to authenticate the certs that they receive. You'd be free to implement a PGP-style web of trust in your TLS implementation, as long as the certs themselves are X.509 format.

Eric
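[Editor's note: Eric's last point -- that TLS is quiet about how you decide to trust a cert -- can be sketched concretely. One approach in that PGP-ish spirit is to pin the SHA-256 fingerprint of the exact certificate Bob got from Alice and compare it after the handshake, bypassing CA validation entirely. This is a sketch, not part of any TLS API; the names ALICE_FP and alice.der are hypothetical.]

```python
import hashlib

def cert_fingerprint(der_bytes: bytes) -> str:
    """SHA-256 fingerprint of a DER-encoded certificate."""
    return hashlib.sha256(der_bytes).hexdigest()

# Bob computes this once, from the cert Alice handed him in person, e.g.:
# ALICE_FP = cert_fingerprint(open("alice.der", "rb").read())
#
# At connect time, after the TLS handshake, he checks the peer's cert
# against that recorded value rather than asking any CA:
# der = tls_sock.getpeercert(binary_form=True)
# if cert_fingerprint(der) != ALICE_FP:
#     raise ValueError("peer certificate is not Alice's")
```

The pin survives Eve issuing herself a certificate for Alice's hostname, since only the byte-for-byte certificate Bob recorded will match.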
Re: Monoculture
Ian Grigg [EMAIL PROTECTED] writes:

> This is where maybe the guild and the outside world part ways. The guild would like the application builder to learn the field. They would like him to read up on all the literature, the analyses. To emulate the successes and avoid the pitfalls of those protocols that went before them. The guild would like the builder to present his protocol and hope it be taken seriously. The guild would like the builder of applications to reach acceptable standards. And, the guild would like the builder to take the guild seriously, in recognition of the large amounts of time guild members invest in their knowledge.

Actually, I could care less if they take the guild seriously, because there isn't any guild. What I care about is that people take the risks seriously. This is all very much like the reaction back when lots of people were saying "please don't operate on people when you haven't washed your hands" and lots of other folks said "nuts to that sort of thing -- I've been a surgeon for 30 years and almost 20% of my patients survive!"

When I read The Codebreakers in the late 1970s, one thing got drummed into my head in chapter after chapter after chapter. It is a simple lesson, but one that I will repeat here: dumb cryptography kills people. It has a simple corollary: dumb cryptography is built by people who don't understand that the problem is hard and that doing a bad job kills people. In chapter after chapter, you read about people making the same mistakes, over and over, and never learning, and then other people dying because they were too egotistical to believe that they could have made a mistake in the design of their security systems.

We do not ask that anyone join a mythical guild. We ask that people not go off and build suspension bridges out of rotting twine.
The problem, of course, is that although it is obvious why you don't want your suspension bridge hung from rotting twine instead of steel, it is far less obvious to the naked eye that using the C library random() call doesn't provide enough security to keep your nuclear power plant controls safe.

> Well, the opposition to the guild is one of pro-market people who get out there and build applications.

I don't see any truth to that. You can build applications just as easily using things like TLS -- and perhaps even more easily. The alternatives aren't any simpler or easier, and are almost always dangerous. There isn't a guild. People just finally realize what is needed in order to make critical -- and I do mean critical -- pieces of infrastructure safe enough for use.

-- Perry E. Metzger [EMAIL PROTECTED]
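[Editor's note: the random() point deserves a concrete illustration. In Python terms (the principle is the same as for the C library call): the default random module is a Mersenne Twister, whose internal state can be reconstructed from a few hundred consecutive outputs, while secrets draws from the operating system's CSPRNG. A minimal sketch:]

```python
import random
import secrets

# Looks random, but is PREDICTABLE: the Mersenne Twister's state can be
# recovered from roughly 624 consecutive 32-bit outputs, making this the
# moral equivalent of keying your plant controls with C's random().
weak_key = random.getrandbits(128)

# Drawn from the OS CSPRNG (e.g. getrandom() / /dev/urandom) -- suitable
# for keys, tokens, and nonces.
strong_key = secrets.token_bytes(16)

assert len(strong_key) == 16
```

The two calls look interchangeable to the naked eye, which is exactly the rotting-twine problem.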
Re: Monoculture
Perry E. Metzger wrote:
> ... Dumb cryptography kills people.

What's your threat model? Or rather, is that your threat model? Applying the above threat model, as written up in The Codebreakers, to, for example, SSL and its original credit card needs would seem to be a mismatch. On the face of it, that is. Correct me if I'm wrong, but I don't recall anyone ever mentioning that anyone was ever killed over a sniffed credit card. And I'm not sure it is wise to draw threat models from military and national-security history and apply them to commercial and individual life.

There are scenarios where people may get killed and there was crypto in the story. But they are few and far between [1]. And in general, those parties gradually find themselves taking the crypto seriously enough to match their own threat model to an appropriate security model. But, for the rest of us, that's not a good threat model, IMHO.

>> Well, the opposition to the guild is one of pro-market people who get out there and build applications.
> I don't see any truth to that. You can build applications just as easily using things like TLS -- and perhaps even more easily. The alternatives aren't any simpler or easier, and are almost always dangerous.

OK, that's a statement. What is clear is that, regardless of the truth of that statement, developers time and time again look at the crypto that is there and conclude that it is too much. The issue is that the gulf is there, not whether it is a fair gulf.

> There isn't a guild.

BTW, just to clarify: the intent of my post was not to claim that there is a guild, just to claim that there is an environment that is guild-like.

> People just finally realize what is needed in order to make critical -- and I do mean critical -- pieces of infrastructure safe enough for use.

I find this mysterious. When I send encrypted email to my girlfriend with saucy chat in there, is that what you mean by "critical"?
Or perhaps, when I send a credit card number that is limited to $50 losses, is verified directly by the merchant, and has a home delivery address, do you mean that's "critical"? Or, if I implement a VPN between my customers and suppliers, do you mean that this is "critical"? I think not.

For most purposes, I'm looking to reduce the statistical occurrence of breaches. I'll take elimination of breaches if it is free, but in the absence of a perfect world, for most comms needs, near enough is fine by me, and anyone who tells me that the crypto is 100% secure is more than likely selling snake oil.

For those applications that *are* critical, surely the people best placed to understand and deal with that criticality are the people who run the application themselves? Surely it's their call as to whether they take their responsibilities fully, or not?

iang

[1] the human rights activities of http://www.cryptorights.org/ do in fact present a case where people can get killed, and their safety may depend to a lesser or greater extent on crypto.
Re: Monoculture
On Wed, Oct 01, 2003 at 02:34:23PM -0400, Ian Grigg wrote:

>> Don Davis wrote: note that customers aren't usually dissatisfied with the crypto protocols per se; they just want the protocol's implementation to meet their needs exactly, without extra baggage of flexibility, configuration complexity, and bulk. [...]
> Including extra functionality means that they have to understand it, they have to agree with its choices, they have to follow the rules in using it, and have to pay the costs. If they can ditch the stuff they don't want, that means they are generally much safer in making simple statements about the security model that they have left.

You clearly formulated what we are doing! We want to keep our crypto as simple and to the point as necessary for tinc. We also want to understand it ourselves. Implementing our own authentication protocol helps us do all that.

Uhm, before getting flamed again: by "our own", I don't mean we think we necessarily have to implement something different from all the existing protocols. We just want to understand it so well, and want to be so comfortable with it, that we can implement it ourselves.

-- Met vriendelijke groet / with kind regards, Guus Sliepen [EMAIL PROTECTED]
Re: Monoculture
Ian Grigg [EMAIL PROTECTED] writes:

>> Perry E. Metzger wrote: ... Dumb cryptography kills people.
> What's your threat model? Or rather, is that your threat model? Applying the above threat model as written up in The Codebreakers to, for example, SSL and its original credit card needs would seem to be a mismatch.

People's software is rarely used in just one place. These days, one might very well wake up to discover that one's operating system or cryptographic utility is being used to protect ATMs or power generation equipment or worse. People die when power systems fail. Furthermore, the little open source utility that you think is never going to be used for something life-critical may (with or without your knowledge) end up being used by someone at an NGO who'll be killed when the local government thugs break something.

> On the face of it, that is. Correct me if I'm wrong, but I don't recall anyone ever mentioning that anyone was ever killed over a sniffed credit card.

SSL is not only used to protect people's credit cards. It is one thing if, as a customer, with eyes wide open, you make a decision to use something iffy. However, as a producer, it is a bad idea to assume you know what people will do with your tools, because you don't. People end up using tools in surprising ways. You can't control them. Furthermore, it is utterly senseless to build something to use bad cryptography when good cryptography is free and easy to use.

You claim there is some Cryptography Guild out there, but unlike every other guild in history, all our work is available for the taking by anyone who wants it, without the slightest remuneration to said fictitious guild.

> Well, the opposition to the guild is one of pro-market people who get out there and build applications.

I don't see any truth to that. You can build applications just as easily using things like TLS -- and perhaps even more easily. The alternatives aren't any simpler or easier, and are almost always dangerous.
> OK, that's a statement. What is clear is that, regardless of the truth of that statement, developers time and time again look at the crypto that is there and conclude that it is too much.

For decades, I've seen programmers claim they didn't have time to test their code or document it, either. Should I believe them, or should I keep kicking?

>> People just finally realize what is needed in order to make critical -- and I do mean critical -- pieces of infrastructure safe enough for use.
> I find this mysterious. When I send encrypted email to my girlfriend with saucy chat in there, is that what you mean by "critical"?

Someone else who is not skilled in the art will then use that same piece of software to send information to someone at Amnesty International, and might very well end up dead if the software doesn't work right. Just because YOU do not use a piece of software in a life-critical way does not mean someone else out there will not.

> Or, if I implement a VPN between my customers and suppliers, do you mean that this is "critical"?

And someone else will use that VPN software to connect in to the management interface for sections of the electrical grid, or a commuter train system, or other things that can easily cause people to die. You do not know who will use your software.

> For those applications that *are* critical, surely the people best placed to understand and deal with that criticality are the people who run the application themselves?

I've been a security consultant for years. There are very few organizations -- even ones with critical security needs -- that actually understand security well.

-- Perry E. Metzger [EMAIL PROTECTED]
Re: Monoculture
On Wed, Oct 01, 2003 at 02:24:00PM -0400, Ian Grigg wrote:

>> Matt Blaze wrote:
>>>> I imagine the Plumbers & Electricians Union must have used similar arguments to enclose the business to themselves, and keep out unlicensed newcomers. No longer acceptable indeed. Too much competition boys?
>>> Rich, Oh come on. Are you willfully misinterpreting what I wrote, or did you honestly believe that that was my intent?
> Sadly, there is a shared culture amongst cryptography professionals that presses a certain logical, scientific viewpoint.

So is being logical and scientific a bad way to do cryptography? Maybe you would rather some sort of more 'post-modern', 'liberal' or 'free market' cryptography?

> What is written in these posts (not just the present one) does derive from that viewpoint and although one can quibble about the details, it does look very much from the outside that there is an informal Cryptographers Guild in place [1].

Bollocks. Anyone is free to learn and practice cryptography (in the 'western' world, and many other countries). Some people are just better at it, and many of those people are recognized for being better or more experienced. By your argument, any group that has education and/or training is a guild. Heaven forbid CS and IT types look at the history of their own field.

> The guild would like the application builder to learn the field. They would like him to read up on all the literature, the analyses. To emulate the successes and avoid the pitfalls of those protocols that went before them.

That sounds like a progressive, enlightened way of doing business: at least trying to avoid known mistakes, and trying to discover new ones.

> None of that is likely to happen. The barrier to entry into serious cryptographic protocol design is too high for the average builder of new applications [2]. He has, after all, an application to build.

Which is why implementation is different from protocol design -- except for the insecure application developer, to boot.
What is not nice is that there is no easy way to work out which code to use, and the protocols are not so easy to understand. It's nice that we have an open... Cryptography is hard; suck it up. That is not a reason to act irrationally and encourage using known weak or flawed methods, when we do have better known methods. - The Cryptography Mailing List Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]
Re: Monoculture
Guus Sliepen [EMAIL PROTECTED] writes: You clearly formulated what we are doing! We want to keep our crypto as simple and to the point as necessary for tinc. We also want to understand it ourselves. There is nothing wrong with either goal. Implementing our own authentication protocol helps us do all that. Implementing is fine. Designing, however, may have a world of problems. Uhm, before getting flamed again: by our own, I don't mean we think we necessarily have to implement something different from all the existing protocols. We just want to understand it so well and want to be so comfortable with it that we can implement it ourselves. That's fine. There is nothing wrong with new implementations. My biggest concern is with people rolling their own crypto algorithms and protocols, not with people re-implementing them. If you are going to implement something on your own, though, may I strongly encourage you to write your code in a way that is inherently secure? Security is not only a question of correct protocols, but of good implementation. Avoiding buffer overflows, using principles like aperture minimization and least privilege, and a dozen other techniques will help you make your system far more secure than it would otherwise be. -- Perry E. Metzger [EMAIL PROTECTED]
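Perry's least-privilege advice can be made concrete. Below is a minimal Python sketch of the classic drop-privileges pattern (the function name and the injectable `_setuid`/`_setgid` hooks are mine, added purely so the sketch can be exercised without root): give up the group ID while still privileged, then the user ID, then verify the drop actually took effect.

```python
import os

def drop_privileges(uid, gid, *, _setuid=os.setuid, _setgid=os.setgid):
    """Irreversibly drop root privileges (a sketch, not a library).

    Order matters: the group must go first, because once setuid()
    succeeds the process no longer has the right to call setgid().
    The keyword-only _setuid/_setgid parameters are hypothetical
    injection points for testing; real callers use the defaults.
    """
    _setgid(gid)   # drop the group ID while we still have privilege
    _setuid(uid)   # drop the user ID; this cannot be undone
    # Fail closed: when using the real syscalls, verify the drop stuck.
    if _setuid is os.setuid and (os.getuid() == 0 or os.geteuid() == 0):
        raise RuntimeError("privilege drop failed")
```

A program following Perry's advice would call this immediately after acquiring whatever root-only resources (low ports, key files) it needs, so the window in which a compromise yields root is as narrow as possible.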
Re: Monoculture
On Wed, 1 Oct 2003, John S. Denker wrote: According to 'ps', an all-up ssh system is less than 3 megabytes (sshd, ssh-agent, and the ssh client). At current memory prices, your clients would save less than $1.50 per system even if their custom software could reduce this bulk to zero. That's not the money they're trying to save. The money they're trying to save is spent on the salaries of the guys who have to understand it. Depending on what needs you have, that's anything from familiarity with setting up the certs and authorizations and servers and configuring the clients, to the ability to sit down and verify the source line by line and routine by routine. The price of computer memory is a non sequitur here; people want something dead-simple so that there won't be so much overhead in _human_ knowledge and understanding required to operate it. Crypto is not some game whose inner workings nobody really has to understand; key management and cert management are complex issues, and people have to be hired to handle them. Code that has so much riding on it has to be audited in lots of places, and people have to be hired to do that. Every line of code costs money in an audit, even if somebody else wrote it. So, yeah, they'd rather see a lot of stuff hard-coded instead of configurable; hard-coded is easier to verify, hard-coded has less configuration to do, and hard-coded is cheaper to own. We get so busy trying to be all things to all people in computer science that we often forget that what a lot of our clients really want is simplicity. 1) Well, they could just ignore the new release and stick with the old version. Or, if they think the new features are desirable, then they ought to compare the cost of re-stripping against the cost of implementing the new desirable features in the custom code. And in a lot of places that's exactly what they do.
If the shop requires a full code audit before taking any new software, going to the new version can cost tens of millions of dollars over and above the price. And the bigger the new version's source code is, the more the audit is going to cost. 2) If you do a good job stripping the code, you could ask the maintainers to put your #ifdefs into the mainline version. Then you have no maintenance hassle at all. You wouldn't. But the people who have to slog through that tarball of code for an audit get the jibblies when they see #ifdefs all over the place, because it means they have to go through line by line and routine by routine again and again and again with different assumptions about what symbols are defined during compilation, before they can certify it. Bear
Re: Monoculture
On Wed, Oct 01, 2003 at 10:20:53PM +0200, Guus Sliepen wrote: You clearly formulated what we are doing! We want to keep our crypto as simple and to the point as necessary for tinc. We also want to understand it ourselves. Implementing our own authentication protocol helps us do all that. Uhm, before getting flamed again: by our own, I don't mean we think we necessarily have to implement something different from all the existing protocols. We just want to understand it so well and want to be so comfortable with it that we can implement it ourselves. In that case, I don't see why you don't bend your efforts towards producing an open-source implementation of TLS that doesn't suck. If you insist on not using ESP to encapsulate the packets -- which in my opinion is a silly restriction to put on yourself; the ESP encapsulation is extremely simple, to the point that one of my former employers has a fully functional implementation that works well at moderate data rates on an 8088 running MS-DOS! -- TLS is probably exactly what you're looking for. Note that it's *entirely* possible to use ESP without using IKE for the user/host authentication and key exchange. Nothing is preventing you from using TLS or its moral equivalent to exchange keys -- and looking at some of the open-source IKE implementations, it's easy to see how this would be a tempting choice. Indeed, there's no reason your ESP implementation would need to live in the kernel; I already know of more than one that simply grabs packets using the kernel's tunnel driver, for portability reasons. However, if for what seem to me to be very arbitrary reasons you insist on using an encapsulation that's not ESP, I urge you to use TLS for the whole thing. As I and others have pointed out here, if you're willing to *pay* for it, you can have your choice of TLS implementations that are simple, secure, and well under 100K.
Compare and contrast with the behemoth that is OpenSSL and it's easy to see why you wouldn't want to use the open-source implementation that is available to you now, but there is no reason you could not produce one yourself that was much less awful. You say that you object to existing protocols because you want simplicity and performance. I say that it's not reasonable of you to blame the failures of the existing *open-source implementations* of those protocols on the protocols themselves. I think that both the multiple good, small, simple commercial SSL/TLS implementations and the two MS-DOS IPsec implementations are good examples that demonstrate that what you should object to, more properly, is lousy software design and implementation on the part of many open-source protocol implementors, not lousy protocol design in cases where the protocol design is actually quite good. So if you're going to set out to fix something, I think if you're trying to fix the protocols, you're wasting your effort -- there are existing, widely peer-reviewed and accepted protocols that are *already* about as simple as they can get and still be secure the way users actually use them in the real world. I think that it would make a lot more sense to fix the lousy implementation quality instead; that way you seem much more likely to achieve your security, performance, and simplicity goals.
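Thor's claim that "the ESP encapsulation is extremely simple" is easy to check against RFC 2406. The following Python sketch (the function name and parameters are mine, for illustration only) builds just the framing -- SPI, sequence number, payload, padding, pad length, next header -- with no encryption or integrity protection, both of which a real implementation would of course add over the payload and trailer.

```python
import struct

def esp_encapsulate(spi, seq, payload, next_header=4, block=8):
    """Sketch of RFC 2406 ESP framing (no crypto applied here):

        SPI (4 bytes) || sequence number (4 bytes) ||
        payload || padding || pad length (1) || next header (1)

    Padding aligns things so that (payload + padding + 2 trailer
    bytes) is a multiple of the cipher block size.  next_header=4
    would mean IP-in-IP, i.e. a tunneled IP packet.
    """
    pad_len = (-(len(payload) + 2)) % block
    padding = bytes(range(1, pad_len + 1))   # RFC 2406 default pad bytes 1,2,3,...
    return (struct.pack("!II", spi, seq)
            + payload + padding
            + bytes([pad_len, next_header]))
```

That the whole wire format fits in a dozen lines is Thor's point: the hard parts of an ESP stack are the crypto and key management, not the encapsulation, which is why it ran acceptably even on an 8088.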
Re: Monoculture
Ronald L. Rivest [EMAIL PROTECTED] writes: What is aperture minimization? That's a new term for me... Never heard of it before. Google has never seen it either... (Perhaps others on the list would be curious as well...) I'm sure you have heard of it, just under other names. The term aperture minimization means just that: keeping the potential opening that can be attacked minimized. If you have only a tiny piece of trusted code, it is easier to fully audit than if you have a large piece of trusted code. If you have only a brief period when you have privileges asserted, there is less scope for hijacking a program than if it asserts privileges at all times. If your system can send general SQL queries to the database server, someone hijacking it can do the same, but if you can only send very limited canned queries by an ad hoc protocol the hijacker has less scope for mischief. Thus, aperture minimization: narrow the window (aperture) and less stuff can get through it. Perry
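Perry's canned-queries example can be sketched in a few lines. Here is a hypothetical Python/SQLite version (the table, query names, and helper are all mine): callers may only name one of a fixed set of queries and supply bound parameters, so even a hijacked caller cannot push arbitrary SQL through the aperture.

```python
import sqlite3

# The ONLY SQL the rest of the program is able to issue.
CANNED = {
    "user_by_id":  "SELECT name FROM users WHERE id = ?",
    "count_users": "SELECT COUNT(*) FROM users",
}

def run_canned(conn, name, *args):
    """Narrow aperture: pick a query by name, supply bound parameters.
    Unknown names raise KeyError; parameters are bound by the driver,
    never interpolated into the SQL text."""
    return conn.execute(CANNED[name], args).fetchall()
```

The same idea scales up to exposing only a handful of stored procedures, or only a fixed RPC vocabulary, instead of a general query interface.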
Re: how simple is SSL? (Re: Monoculture)
Adam Back [EMAIL PROTECTED] writes: On Wed, Oct 01, 2003 at 08:53:39AM -0700, Eric Rescorla wrote: there's another rationale my clients often give for wanting a new security system: [existing protocols] too heavyweight for some applications. I hear this a lot, but I think that Perry nailed it earlier. SSL, for instance, is about as simple as we know how to make a protocol that does what it does. The two things that are generally cited as being sources of complexity are: (1) Negotiation. Negotiation doesn't really add that much protocol complexity, eh well _now_ we can say that negotiation isn't a problem, but I don't think we can say it doesn't add complexity: but in the process of getting to SSLv3 we had un-MACed and hence MITM-tamperable ciphersuite preferences (v1), and then the version rollback attack (v2). Right, but that's a DESIGN cost that we've already paid. It doesn't add significant implementation cost. As in check out any SSL implementation. (2) Certificates. and certificates are kind of the price of admission if you want third party authentication. Maybe, but X.509 certificates, ASN.1 and X.500 naming, and the ASN.1 string type ambiguities inherited from the PKIX specs are hardly what one could reasonably call simple. There was no reason SSL couldn't have used, for example, SSH key formats or something comparably simple. If one reads the SSL RFCs it's relatively clear what the formats are; the state stuff is a little funky, but OK, and then there's a big call out to a for-pay ITU standard which references half a dozen other for-pay ITU standards. Hardly compatible with IETF doctrines on open standards, you would think (though this is a side-track). Since SSL without certificates is about as simple as a stream security protocol can be I don't think I agree with this assertion. It may be relatively simple if you want X.509 compatibility, and if you want the ability to negotiate ciphers. I said WITHOUT certificates.
Take your SSL implementation and code it up to use anonymous DH only. There's not a lot of complexity to remove at that point. -Ekr -- Eric Rescorla [EMAIL PROTECTED] http://www.rtfm.com/
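The anonymous-DH core Ekr is pointing at really is tiny. A toy Python sketch of it (entirely my own; it uses 2^255 - 19 purely as a convenient known prime, not a vetted DH group, and -- being unauthenticated -- is trivially MITM-able, which is exactly the certificate trade-off under discussion):

```python
import hashlib
import secrets

# Toy group for illustration only.  2**255 - 19 is a known prime
# (the Curve25519 field prime); real anonymous DH would use a
# vetted MODP group from RFC 3526.
P = 2**255 - 19
G = 2

def dh_keypair():
    """Pick a random private exponent and compute the public value."""
    priv = secrets.randbelow(P - 2) + 1
    return priv, pow(G, priv, P)

def dh_shared_key(priv, peer_pub):
    """Both sides compute g^(ab) mod P and hash it into a session key."""
    secret = pow(peer_pub, priv, P)
    return hashlib.sha256(secret.to_bytes(32, "big")).digest()
```

Everything else in a certificate-less stream protocol -- record framing, a MAC, rekeying -- wraps around those two functions, which is why Ekr says there is "not a lot of complexity to remove at that point."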
Re: Monoculture
Don Davis [EMAIL PROTECTED] writes: eric wrote: The way I see it, there are basically four options: (1) Use OpenSSL (or whatever) as-is. (2) Strip down your toolkit but keep using SSL. (3) Write your own toolkit that implements a stripped down subset of SSL (e.g. self-signed certs or anonymous DH). (4) Design your own protocol and then implement it. Since SSL without certificates is about as simple as a stream security protocol can be, I don't see that (4) holds much of an advantage over (3) i agree, except that simplifying the SSL protocol will be a daunting task for a non-specialist. when a developer is faced with reading and understanding the intricacy of the SSL spec, he'll naturally be tempted to start over. this doesn't exculpate the developer for biting off more than he could chew, but it's unfair to claim that his only motivation was NIH or some other sheer stupidity. I disagree. If someone doesn't understand enough about SSL to understand where to simplify, they shouldn't even consider designing a new protocol. btw, i also agree that when a developer decides to design a new protocol, he should study the literature about the design and analysis of such protocols. but at the same time, we should recognize that there's a wake-up call for us in these recurrent requests for our review of seemingly-superfluous, obviously-broken new protocols. such developers evidently want and need a fifth option, something like: (5) use SSSL: a truly lightweight variant of SSL, well-analyzed and fully standardized, which trades away flexibility in favor of small code size and ease of configuration. arguably, this is as much an opportunity as a wake-up call. I'm not buying this, especially in the dimension of code size. I don't see any evidence that the people complaining about how big SSL is are basing their opinion on anything more than the size of OpenSSL. I've seen SSL implementations in well under 100K.
-Ekr
Re: Monoculture
John S. Denker [EMAIL PROTECTED] writes: According to 'ps', an all-up ssh system is less than 3 megabytes (sshd, ssh-agent, and the ssh client). At current memory prices, your clients would save less than $1.50 per system even if their custom software could reduce this bulk to zero. Let me guess, your background is in software rather than hardware? :-). Not all computers are PCs, where you can just drop in another SIMM and the problem is fixed. Depending on how you measure it, there are at least as many (if not many more) embedded systems out there as PCs, where you have X system resources and can't add any more even if you wanted to because (a) the system is already deployed and can't be altered, (b) it's cheaper to rewrite the crypto from scratch than spend even 5 cents (not $1.50) on more memory, or (c) the hardware can't address any more than the 128K or 512K (64K and 256K 8-bit SRAMs x 2, the bread and butter of many embedded systems) that it already has. With the cost of writing custom software being what it is, they would need to sell quite a large number of systems before de-bulking began to pay off. And that's before accounting for the cost of security risks. See above. This is exactly the situation that embedded-systems vendors find themselves in (insert tales of phone exchanges built from clustered Z80s because it's easier to keep adding more of those than to move the existing firmware to new hardware without the Z80's restrictions, or people being paid outrageous amounts of money to hand-code firmware for 4-bit CPUs because it's cheaper than moving everything to 8-bit ones, or ...). Perry E. Metzger [EMAIL PROTECTED] writes: SSL is not only used to protect people's credit cards. It is one thing if, as a customer, with eyes wide open, you make a decision to use something iffy. However, as a producer, it is a bad idea to make assumptions you know what people will do with your tools, because you don't. People end up using tools in surprising ways.
You can't control them. Yup. I once had a user discuss with me the use of my SSL code in an embedded application that controlled X. I was a bit curious as to why they'd bother, until they explained the scale of the X they were controlling. If anything were to go wrong there, it'd be a lot more serious than a few stolen credit cards. Once you have a general-purpose security tool available, it's going to be used in ways that the original designers and implementors never dreamed of. That's why you need to build it as securely as you possibly can, and once it's done go back over it half a dozen times and see if you can build it even more securely than that. Peter.
Re: Monoculture
In message [EMAIL PROTECTED], Perry E. Metzger writes: Unfortunately, those parts are rather dangerous to omit. 0) If you omit the message authenticator, you will now be subject to a range of fine and well documented cut and paste attacks. With some ciphers, especially stream ciphers, you'll be subject to far worse attacks still. 1) If you omit the IV, suddenly you're going to be subject to a second new range of attacks based on the fact that fixed blocks will always encrypt the exact same way. We went through all that, by the way, when designing IPSec. At first, we didn't put in mandatory authenticators, because we didn't understand that they were security critical. Then, of course, we discovered that they were damn critical, and that most of the text books on this had been wrong. We didn't understand lots of subtleties about our IVs, either. One big hint: do NOT use IVs on sequential packets with close hamming distance! Better yet, don't use predictable IVs; the threat is much clearer. Perry is right -- a number of us learned the hard way about cryptographic protocol complexity. I led the fight to remove sequence numbers from the early version of ESP, since no one could elucidate a threat model beyond the enemy could duplicate packets. My response was so what -- packet duplication is always possible per the IP datagram model. (A while back, my ISP fulfilled that part of the model; I was seeing up to 90% duplicate packets. But I digress.) But then I wrote a paper where I showed lots of ways to attack IPsec if you didn't have both sequence numbers and integrity protection, so I led the fight to reintroduce sequence numbers, and to make integrity protection part of ESP rather than leaving it to AH. We all learn, even in embarrassing ways. My first published cryptographic protocol, EKE, has had an interesting history. One version of it is still believed secure: encrypt both halves of a DH exchange with a shared secret. 
(Ironically enough, that was the very first variant we came up with -- I still have the notebook where I recorded it.) We came up with lots of variations and optimizations that all looked just fine. We were wrong... Someone has already alluded to the Needham-Schroeder protocol. It's instructive to review the history of it. The original protocol was published in 1978; it was the first cryptographic protocol in the open literature. Presciently enough, it warned that cryptographic protocol design seemed to be a very subtle art. Three years later, Denning and Sacco showed an attack on the protocol under certain assumptions; they suggested changes. In 1994, Abadi and Needham published a paper showing a flaw in the Denning-Sacco variant. In 1996, Lowe published a new attack on the *original* Needham-Schroeder protocol. Translated into modern terms -- the first paper was published before certificates were invented -- the faulty protocol was only three lines long! Three lines of protocol, in the oldest paper in the literature, and it took 18 years to find the flaw... No, we're not a guild. To me, guild has connotations of exclusivity and closed membership. Anyone can develop their own protocols, and we're quite happy -- *if* they understand what they're doing. That means reading the literature, understanding the threats, and deciding which you need to counter and which you can ignore. In IPsec, Steve Kent -- who has far more experience with cryptographic protocols than most of us, since he has access to, shall we say, more than just the open literature -- was a strong proponent of making integrity checks optional in ESP. Why, when I just finished saying that they're important? Integrity checks can be expensive, and in some situations the attacks just don't apply. The trick is to understand the tradeoffs, and *to document them*. Leave out what you want, but tell people what you've left out, why you've left it out, and under what circumstances that change will get them into trouble.
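Lowe's attack on that three-line protocol is short enough to simulate symbolically. In this hypothetical Python sketch (all names mine), enc(pk, m) is a sealed box that only the key's owner can open; the intruder I never breaks any cryptography, yet finishes a run with honest B while posing as honest A and learns B's nonce:

```python
# Symbolic public-key encryption: a sealed box tagged with the key owner.
def enc(pk, msg):
    return ("enc", pk, msg)

def dec(owner, box):
    tag, pk, msg = box
    assert tag == "enc" and pk == owner, "only the key owner can decrypt"
    return msg

def lowe_attack():
    """Lowe's 1996 man-in-the-middle against 1978 Needham-Schroeder.
    A willingly talks to I; I abuses that run to impersonate A to B."""
    m1 = enc("I", ("Na", "A"))        # A -> I    : {Na, A}_pkI
    m1b = enc("B", dec("I", m1))      # I(A) -> B : {Na, A}_pkB (replayed!)
    na = dec("B", m1b)[0]             # B believes it is talking to A
    m2 = enc("A", (na, "Nb"))         # B -> "A"  : {Na, Nb}_pkA, via I
    _, nb_echo = dec("A", m2)         # A sees its own Na, so it trusts m2
    m3 = enc("I", nb_echo)            # A -> I    : {Nb}_pkI  -- oops
    stolen = dec("I", m3)             # I now holds B's nonce...
    enc("B", stolen)                  # ...and can complete B's run as "A"
    return stolen
```

Lowe's fix was equally small: have B include its own identity in message 2, so A notices the reply comes from the wrong party. Eighteen years for a one-field fix is a fair summary of why the literature matters.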
--Steve Bellovin, http://www.research.att.com/~smb
Re: Monoculture
I imagine the Plumbers Electricians Union must have used similar arguments to enclose the business to themselves, and keep out unlicensed newcomers. No longer acceptable indeed. Too much competition boys? Rich, Oh come on. Are you willfully misinterpreting what I wrote, or did you honestly believe that that was my intent? No one - at least certainly not I - suggests that people shouldn't be allowed to invent whatever new protocols they want or that some union card be required in order to do so. However, we've learned a lot in recent years about how to design such protocols, and we've seen intuitively obviously secure protocols turn out to be badly flawed when more advanced analysis techniques and security models are applied against them. Yes, the standards against which newly proposed protocols are measured have increased in recent years: we've reached a point where it is practical for the potential users of many types of security protocols to demand solid analysis of their properties against rather stringent security models. It is no longer sufficient, if one hopes to have a new protocol taken seriously, for designers to simply throw a proposal over the wall to users and analysts and hope that if the analysts don't find something wrong with it the users will adopt it. Now it is possible - and necessary - to be both a protocol designer and analyst at the same time. This is a good thing - it means we've made progress. Finally we can now look at practical protocols more systematically and mathematically instead of just hoping that we didn't miss certain big classes of attack. (We're not done, of course, and we're a long way from discovering a generally useful way to look at an arbitrary protocol and tell if it's secure). Fortunately, there's no dark art being protected here. The literature is open and freely available, and it's taught in schools. And unlike the guilds you allude to, anyone is free to participate. 
But if they expect to be taken seriously, they should learn the field first. I'd encourage the designer of the protocol who asked the original question to learn the field. Unfortunately, he's going about it sub-optimally. Instead of hoping to design a protocol and getting others to throw darts at it (or bless it), he might have better luck (and learn far more) by looking at the recent literature of protocol design and analysis and trying to emulate the analysis and design process of other protocols when designing his own. Then when he throws it over the wall to the rest of the world, the question would be not is my protocol any good but rather are my arguments convincing and sufficient? I suppose some people will always take an anti-intellectual attitude toward this and congratulate themselves about how those eggheads who write those papers with the funny math in them don't know everything to excuse their own ignorance of the subject. People like that with an interest in physics and engineering tend to invent a lot of perpetual motion machines, and spend a lot of effort fending off the vast establishment conspiracy that seeks to suppress their brilliant work. (We've long seen such people in cipher design, but they seem to have ignored protocols for the most part, I guess because protocols are less visible and sexy). Rich, I know you're a smart guy with great familiarity with (and contributions to) the field, and I know you're not a kook, but your comment sure would have set off my kook alarm if I didn't know you personally. Who on this list just wrote a report on the dangers of Monoculture? Rich Schroeppel [EMAIL PROTECTED] (Who still likes new things.) Me too. -matt
Re: Monoculture
Richard Schroeppel [EMAIL PROTECTED] writes: (Responding to the chorus of protocol professionals saying please do not roll your own) I imagine the Plumbers Electricians Union must have used similar arguments to enclose the business to themselves, and keep out unlicensed newcomers. No longer acceptable indeed. Too much competition boys? TLS, IPSec, JFK, etc. are all intellectual property free. No one gets money if people use them. There is no union here with an incentive to eliminate competition. No one's pay changes if someone uses TLS instead of a roll-your-own-protocol. Who on this list just wrote a report on the dangers of Monoculture? I did. Dependence on a single system is indeed a problem. However, one must understand the nature of the problem, not diversify blindly. Some companies are said to require that multiple high level executives cannot ride on the same plane flight, for fear of losing too many of them simultaneously. That is a way of avoiding certain kinds of risk. However, I know of no company that suggests that some of those executives fly in rickety planes that have never been safety tested and were built by squirrels using only pine cones. That does not reduce risk. I have to agree with Matt Blaze, Eric Rescorla, and numerous others who have said this before. Cryptographic algorithms and protocols are exceptionally difficult to design properly, and you should not go around designing something on a whim and throwing it into your software, any more than you would invent a new drug one morning and inject it into patients that afternoon. There is nothing whatsoever wrong with people proposing a new protocol or algorithm, publishing it, discussing it, etc. Indeed, TLS, AES and all the rest started as published documents that were then subjected to prolonged attempts to break them. 
If, after something has been reviewed for some years, it then appears to have unique advantages and no one has succeeded in attacking the protocol, it might even be fit for use in products. This is very very different, however, from subjecting your users to seat-of-the-pants designed protocols and algorithms that have had no review whatsoever. Given that even the professionals generally screw it up the first few times around, it is hardly surprising that the roll your own attempts are almost always stunningly bad. This is doubly so given that the protocols and algorithms used in many of these systems don't even have a pretense of superiority over the existing ones. The protocols Peter Gutmann was complaining about in the message that started this thread are, for the most part, childishly bad in spite of the protestations of their creators. Are you arguing that it is in the interest of most people to be using such incompetently designed security software? By the way, none of this contradicts what a number of us said in our monoculture paper. Perry
Re: Monoculture
Perry writes: Richard Schroeppel [EMAIL PROTECTED] writes: (Responding to the chorus of protocol professionals saying please do not roll your own) I imagine the Plumbers Electricians Union must have used similar arguments to enclose the business to themselves, and keep out unlicensed newcomers. No longer acceptable indeed. Too much competition boys? ... Who on this list just wrote a report on the dangers of Monoculture? I did. Dependence on a single system is indeed a problem. However, one must understand the nature of the problem, not diversify blindly. Some companies are said to require that multiple high level executives cannot ride on the same plane flight, for fear of losing too many of them simultaneously. That is a way of avoiding certain kinds of risk. However, I know of no company that suggests that some of those executives fly in rickety planes that have never been safety tested and were built by squirrels using only pine cones. That does not reduce risk. Speaking of plumbers and electricians, it occurs to me that while it would be very difficult to find pipe fittings designed without taking into account static and dynamic analysis or electric wiring designed without benefit of resistance or insulation breakdown tests (basic requirements for pipes and wires that nonetheless require fairly advanced knowledge to understand properly), equipping a house with such materials might actually end up being safe. The inevitable fire might be extinguished by the equally inevitable flood. -matt
Re: Monoculture
I imagine the Plumbers Electricians Union must have used similar arguments to enclose the business to themselves, and keep out unlicensed newcomers. No longer acceptable indeed. Too much competition boys? The world might be better off if you couldn't call something secure unless it came from a certificated security programmer. Just as you want your house wired by a Master Electrician, who has been proven to have experience and knowledge of the wiring code -- i.e., both theory and practice. Yes, it sometimes sucks to be a newcomer and treated with derision unless you can prove that you understand the current body of knowledge. We should all try to be nicer. But surely you can understand a cryptographer's frustration when a VPN -- what does that P stand for? -- shows flaws that are equivalent to a syntax error in a Java class. Perhaps it would help to think of it as defending the field. When crap and snake-oil get out, even well-meaning crap and snake-oil, the whole profession ends up stinking. /r$ PS: As for wanting to avoid the client-server distinction in SSL/TLS, just require certs on both sides and do mutual authentication. The bytestream above is already bidirectional. -- Rich Salz Chief Security Architect DataPower Technology http://www.datapower.com XS40 XML Security Gateway http://www.datapower.com/products/xs40.html XML Security Overview http://www.datapower.com/xmldev/xmlsecurity.html
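Rich's postscript -- just require certs on both sides -- corresponds to a few lines of configuration in a typical TLS API. A hedged sketch using Python's ssl module (the helper name and the placeholder file arguments are mine; real use needs your own CA bundle, certificate, and key):

```python
import ssl

def mutual_tls_context(server_side, ca_file=None, cert_file=None, key_file=None):
    """Configure TLS so that BOTH peers must present a certificate.

    The file arguments are placeholders for your own PKI material;
    with none given, this only demonstrates the verification policy.
    """
    purpose = ssl.Purpose.CLIENT_AUTH if server_side else ssl.Purpose.SERVER_AUTH
    ctx = ssl.create_default_context(purpose, cafile=ca_file)
    if cert_file:
        ctx.load_cert_chain(cert_file, key_file)   # our own identity
    ctx.verify_mode = ssl.CERT_REQUIRED            # demand the peer's cert too
    return ctx
```

With both contexts built this way, the client/server distinction that remains is purely about who initiates the handshake; authentication itself is symmetric, which is Rich's point about the bidirectional bytestream.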