Re: Face-Recognition Technology Improves
On Sat, 15 Mar 2003, Bill Stewart wrote: They're probably not independent, but they'll be influenced by lighting, precise viewing angles, etc., so they're probably nowhere near 100% correlated either. I notice the systems mentioned in the study rely on biometrics extracted from flat images. The recent crop of systems actually scans face geometry using patterned light (apparently cheaper than using a laser scanner), resulting in a much richer and standardized (lighting and facial orientation are irrelevant) biometric fingerprint. There's a world of difference between a line of people each slowly stepping through a gate past a sensor in roughly aligned orientation, and a fixed-orientation no-zoom low-resolution camera looking at a group of freely behaving subjects under varying illumination. Even with basically single-source nonintegrative biometrics one could do a lot with a hi-res camera with zoom actively tracking a single person at a time, using NIR for illumination (skin is far more transparent to IR, so a far richer pigmentation-pattern fingerprint can be acquired). Then there's gait, a physical body model, etc. Shortwave SAR (SAR at THz wavelengths seems to be doable according to recent publications) would appear to make reading body geometry possible. Volatile MHC fragment chemosensors are being developed, a hi-tech variant of the Stasi's approach with odor samples and canines. (Calibrated sensors, no need for the sensor to be exposed to the scent beforehand, and bit vectors never grow stale.) By using multichannel, integrative approaches and more sophisticated DSP, the error rate can eventually be brought arbitrarily low while simultaneously becoming increasingly hard to falsify. The costs will eventually come down enough for such integrative telebiometric systems, connected in realtime via wireless, to be blanket-deployable. Unlike a mobile telephone, you can't switch your body off, or leave it at home.
It will be interesting to see what happens politically once the majority of voters realize they're living in a strictly unilateral version of Brinworld. - The Cryptography Mailing List Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]
Re: How effective is open source crypto?
Anne Lynn Wheeler [EMAIL PROTECTED] writes: There is a description of doing an SSL transaction in a single round trip. The browser contacts the domain name system and gets back in a single transmission the 1) public key, 2) preferred server SSL parameters, 3) ip-address. The browser selects the SSL parameters, generates a random secret key, encrypts the HTTP request with the random secret key, encrypts the random secret key with the public key ... and sends off the whole thing in a single transmission, eliminating all of the SSL protocol back-and-forth setup chatter. You still need a round trip in order to prevent replay attacks. The fastest that things can be while still preserving the security properties of TLS is: ClientHello - ClientKeyExchange - Finished - - ServerHello - Finished Data - See Boneh and Shacham's Fast-Track SSL paper in Proc. ISOC NDSS 2002 for a description of a scheme where the client caches the server's parameters for future use, which is essentially isomorphic to having the keys in the DNS as far as the SSL portion goes. In any case, the optimization you describe provides almost no performance improvement for the server, because the load on the server derives almost entirely from the cryptography, not from transmitting the ServerHello [0]. What it does provide is reduced latency, but this is only of interest to the client, not the server, and really only matters on very constrained links. -Ekr [0] With the exception of the ephemeral modes, but they're simply impossible in the scheme you describe. -- Eric Rescorla [EMAIL PROTECTED] http://www.rtfm.com/
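The single-transmission scheme described above can be sketched in a few lines. This is a toy illustration only: the RSA modulus is tiny, the byte-at-a-time wrapping is not real RSA key transport, and the hash-counter "stream cipher" is a stand-in for a real symmetric cipher; only the shape of the exchange (wrap a random key with the server's public key, encrypt the request under it, send both at once) is the point.

```python
import hashlib
import secrets

def keystream_xor(key: bytes, data: bytes) -> bytes:
    # Toy stream cipher: SHA-256 in counter mode (illustration, not AES).
    out = bytearray()
    ctr = 0
    while len(out) < len(data):
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return bytes(x ^ y for x, y in zip(data, out))

# Toy RSA key pair (real systems use moduli of 2048 bits or more).
p, q, e = 61, 53, 17
n = p * q
d = pow(e, -1, (p - 1) * (q - 1))

# Client: one transmission = RSA-wrapped random key + encrypted request.
request = b"GET /"
k = secrets.token_bytes(16)
wrapped = [pow(b, e, n) for b in k]      # "encrypts the random secret key"
ciphertext = keystream_xor(k, request)   # "encrypts the HTTP request"

# Server: unwrap the random key, then decrypt the request.
k2 = bytes(pow(c, d, n) for c in wrapped)
assert keystream_xor(k2, ciphertext) == request
```

Note that, exactly as Rescorla points out in the reply, nothing in this one-shot exchange stops an attacker from resending the same (wrapped key, ciphertext) pair later; replay protection needs a server-chosen value, i.e. a round trip.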
Re: How effective is open source crypto?
At 08:40 AM 3/16/2003 -0800, Eric Rescorla wrote: You still need a round trip in order to prevent replay attacks. The fastest that things can be while still preserving the security properties of TLS is: ClientHello - ClientKeyExchange - Finished - - ServerHello - Finished Data - See Boneh and Shacham's Fast-Track SSL paper in Proc. ISOC NDSS 2002 for a description of a scheme where the client caches the server's parameters for future use, which is essentially isomorphic to having the keys in the DNS as far as the SSL portion goes. In any case, the optimization you describe provides almost no performance improvement for the server, because the load on the server derives almost entirely from the cryptography, not from transmitting the ServerHello [0]. What it does provide is reduced latency, but this is only of interest to the client, not the server, and really only matters on very constrained links. -Ekr [0] With the exception of the ephemeral modes, but they're simply impossible in the scheme you describe. Sorry, there were two pieces being discussed: the part about SSL being a burden/load on servers, and the shortened SSL description taken from another discussion. The shortened SSL description was (in fact) from a discussion of round-trips and latency ... not particularly burden on the server. In the original discussion there was mention that HTTP requires TCP setup/teardown, which is a minimum seven-packet exchange, and any HTTPS chatter is in addition to that. VMTP, from RFC 1045, is a minimum five-packet exchange, and XTP is a minimum three-packet exchange. A cached/DNS SSL is still a minimum seven-packet exchange done over TCP (although XTP would reduce that to a three-packet exchange). So what kind of replay attack is there? Looking at purely e-commerce ... there is no client authentication. Also, since the client always chooses a new, random key, there is no replay attack on the client ... since the client always sends something new (random key) every time.
That just leaves replay attacks on the server (repeatedly sending the same encrypted data). As follow-up to doing the original e-commerce stuff ... we then went on to look at existing vulnerabilities and solutions, and (at least) the payment system already has other methods in place with regard to catching duplicate transactions; see the standards body work for all payments (credit, debit, stored-value, etc.) in all (electronic) environments (internet, point-of-sale, self-serve, face-to-face, etc.), X9.59 http://www.garlic.com/~lynn/index.html#x959 (standard) http://www.garlic.com/~lynn/index.html#aadsnacha (debit/atm network pilot). Replay for simple information retrieval isn't particularly serious except as DOS, but serious DOS can be done whether the flooding uses encrypted packets or non-encrypted packets. Another replay attack is transaction-based ... where each transaction represents something like performing a real-world transaction (send a shirt and debit an account). If it actually involves payment ... the payment infrastructure has provisions in place to handle repeats/replays and will reject them. So primarily what is left are simple transaction-oriented infrastructures that don't have their own mechanism for detecting replays/repeats and are relying on SSL. I would also contend that this is a significantly smaller exposure than self-signed certificates. -- Anne Lynn Wheeler http://www.garlic.com/~lynn/ Internet trivia 20th anv http://www.garlic.com/~lynn/rfcietff.htm
Re: How effective is open source crypto?
Anne Lynn Wheeler [EMAIL PROTECTED] writes: At 08:40 AM 3/16/2003 -0800, Eric Rescorla wrote: Sorry, there were two pieces being discussed: the part about SSL being a burden/load on servers, and the shortened SSL description taken from another discussion. This wasn't clear from your message. The shortened SSL description was (in fact) from a discussion of round-trips and latency ... not particularly burden on the server. In the original discussion there was mention that HTTP requires TCP setup/teardown, which is a minimum seven-packet exchange. TCP setup is 3 packets. The teardown doesn't have any effect whatsoever on the performance of the system (and often isn't done anyway). It's a very modest load on the network and one which is far outstripped by the traffic sent by SSL and HTTP. So what kind of replay attack is there? Looking at purely e-commerce ... there is no client authentication. Also, since the client always chooses a new, random key, there is no replay attack on the client ... since the client always sends something new (random key) every time. That just leaves replay attacks on the server (repeatedly sending the same encrypted data). Correct. It's considered bad form to design systems which have known replay attacks when it's just as easy to design systems which don't. If there were some overriding reason why it was impractical to mount a defense, then it might be worth living with a replay attack. However, since the defense has only a very minimal effect on offered load to the network and--in most cases--only a marginal effect on latency, omitting it isn't worth it. -Ekr -- Eric Rescorla [EMAIL PROTECTED] http://www.rtfm.com/
Re: How effective is open source crypto? (addenda)
... small side-note: part of the x9.59 work for all payments in all environments was that the transaction system needed to be resilient to repeats and be done in a single round trip (as opposed to the transport). There needed to be transaction resiliency with respect to a single round trip over something like email, which might not happen in strictly real time (extremely long round-trip delays). Real-world systems have been known to have glitches ... order/transaction generation that accidentally repeats (regardless of whether or not the transport is catching replay attacks). -- Anne Lynn Wheeler http://www.garlic.com/~lynn/ Internet trivia 20th anv http://www.garlic.com/~lynn/rfcietff.htm
Re: How effective is open source crypto? (bad form)
At 09:30 AM 3/16/2003 -0800, Eric Rescorla wrote: Correct. It's considered bad form to design systems which have known replay attacks when it's just as easy to design systems which don't. If there were some overriding reason why it was impractical to mount a defense, then it might be worth living with a replay attack. However, since the defense has only a very minimal effect on offered load to the network and--in most cases--only a marginal effect on latency, omitting it isn't worth it. -Ekr -- Eric Rescorla [EMAIL PROTECTED] http://www.rtfm.com/ So, let's look at the alternatives for servers that are worried about server replay attacks: the client has public-key crypto-preference info (DNS or cached), generates a random secret key, encrypts the request, encrypts the random secret key, single transmission. The server gets the request ... the application has opened the connection with or w/o server replay-attack protection. If the application (higher-level protocol) has its own repeat checking, it has opened the connection w/o server replay-attack protection, and the server sends the request up the stack to the application. If the application has opened the connection with server replay-attack protection, the protocol sends back some random data (aka its own secret) ... which happens to be encrypted with the random key. The client is expecting either the actual response or the replay-attack check. If the client gets the actual response, everything is done. If the client gets back the replay-attack check, it combines it with something and returns it to the server. The difference is a basic two-packet exchange (within the setup/teardown packet-exchange overhead) plus an additional replay-prevention two-packet exchange (if the higher-level protocol doesn't have its own repeat-handling protocol). The decision as to whether it is a two-packet exchange or a four-packet exchange is made not by the client ... nor the server ... but by the server application. A simple example for e-commerce is sending a P.O. along with payment authorization ... the transmitted P.O.
form is guaranteed to have a unique identifier. The P.O. processing system has logic for handling repeat P.O.s ... for numerous reasons (not limited to replay attacks). Single round-trip transaction: ClientHello/Trans -> ... <- ServerResponse/Finish. Transaction w/replay challenge: ClientHello/Trans -> ... <- Server replay challenge ... ClientResp -> ... <- ServerResponse/Finish. Now, ClientHello/Trans can indicate whether the client is expecting a single round trip or additional data. Also, the ServerResponse can indicate whether it is a piggy-backed finish or not. So the vulnerability analysis is: what is the object of the replay attack, and what needs to be protected? I would contend that the object of the replay attack isn't directly the protocol, server, or system but the specific server application. The problem, of course, is that with a generic webserver (making the connection) there might be a couple levels of indirection between the webserver specifying the connection parameters and the actual server application (leading to webservers always specifying the replay-challenge option). -- Anne Lynn Wheeler http://www.garlic.com/~lynn/ Internet trivia 20th anv http://www.garlic.com/~lynn/rfcietff.htm
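The optional replay-challenge leg described above can be sketched as follows. This is a hedged illustration, not any real protocol: the shared session key stands in for the random secret key from the single-transmission setup, and the choice of HMAC-SHA256 as the "combine it with something" step is an assumption of this sketch.

```python
import hashlib
import hmac
import secrets

# Shared random key, as established by the single-transmission setup.
session_key = secrets.token_bytes(32)

def server_challenge() -> bytes:
    # Server's "own secret": fresh random data per connection.
    return secrets.token_bytes(16)

def client_response(key: bytes, challenge: bytes):
    # Client combines the challenge with its own nonce and keys the result;
    # HMAC is this sketch's stand-in for the unspecified combiner.
    client_nonce = secrets.token_bytes(16)
    tag = hmac.new(key, challenge + client_nonce, hashlib.sha256).digest()
    return client_nonce, tag

# Fresh connection: the response verifies against the issued challenge.
ch = server_challenge()
nonce, tag = client_response(session_key, ch)
assert hmac.compare_digest(
    tag, hmac.new(session_key, ch + nonce, hashlib.sha256).digest())

# Replayed connection: the old tag fails against a new challenge.
stale_ch = server_challenge()
assert tag != hmac.new(session_key, stale_ch + nonce, hashlib.sha256).digest()
```

Because the server picks a fresh challenge per connection, a recorded response is useless later, which is exactly the property the two extra packets buy when the application has no repeat handling of its own.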
Re: How effective is open source crypto? (bad form)
Anne Lynn Wheeler [EMAIL PROTECTED] writes: The difference is a basic two-packet exchange (within the setup/teardown packet-exchange overhead) plus an additional replay-prevention two-packet exchange (if the higher-level protocol doesn't have its own repeat-handling protocol). The decision as to whether it is a two-packet exchange or a four-packet exchange is made not by the client ... nor the server ... but by the server application. You've already missed the point. SSL/TLS is a generic security protocol. As such, the idea is to push all the security into the protocol layer where possible. Since, as I noted, the performance improvement achieved by not doing so is minimal, it's better to just have replay protection there. -Ekr -- Eric Rescorla [EMAIL PROTECTED] http://www.rtfm.com/
Re: Face-Recognition Technology Improves
At 12:39 PM 03/16/2003 +0100, Eugen Leitl wrote: On Sat, 15 Mar 2003, Bill Stewart wrote: They're probably not independent, but they'll be influenced by lighting, precise viewing angles, etc., so they're probably nowhere near 100% correlated either. I notice the systems mentioned in the study rely on biometrics extracted from flat images. The recent crop of systems actually scans face geometry using patterned light (apparently cheaper than using a laser scanner), resulting in a much richer and standardized (lighting and facial orientation are irrelevant) biometric fingerprint. But there are two sides to the problem - recording the images of the people you're looking for, and viewing the crowd to try to find matches. You're right that airport security gates are probably a pretty good consistent place to view the crowd, but getting the target images is a different problem - some of the Usual Suspects may have police mugshots, but for most of them it's unlikely that you've gotten them to sit down while you take a whole-face geometry scan to get the fingerprint.
Re: Microsoft: Palladium will not limit what you can run
Bill Stewart writes: On Thursday, Mar 13, 2003, at 21:45 US/Eastern, Jay Sulzberger wrote: The Xbox will not boot any free kernel without hardware modification. The Xbox is an IBM-style peecee with some feeble hardware and software DRM. But is the Xbox running Nag-Scab or whatever Palladium was renamed? Or is it running something of its own, perhaps using some similar components? The Xbox is definitely not based on NGSCB; Microsoft told EFF very clearly last year that Palladium was still being designed and hadn't gone into manufacturing. The Xbox was certainly being sold then. The Xbox was analyzed by Andrew bunnie Huang, who found that it was using a sui generis security system. ftp://publications.ai.mit.edu/ai-publications/2002/AIM-2002-008.pdf -- Seth David Schoen [EMAIL PROTECTED] http://www.loyalty.org/~schoen/ http://vitanuova.loyalty.org/ | Very frankly, I am opposed to people being programmed by others. -- Fred Rogers (1928-2003), 464 U.S. 417, 445 (1984)
Re: Face-Recognition Technology Improves
On Sun, 16 Mar 2003, Bill Stewart wrote: You're right that airport security gates are probably a pretty good consistent place to view the crowd, but getting the target images is a different problem - some of the Usual Suspects may have police mugshots, but for most of them it's unlikely that you've gotten them to sit down while you take a whole-face geometry scan to get the fingerprint. I think the security-crazed data gatherers would just want to scan the biometrics of every single person passing through the metal-detector gates, check them against the list of usual suspects, and insert them in realtime into a central database. Where they will remain, for an indefinite time, free for any authorized party to do data mining on. Unless explicit laws have been passed preventing this very eventuality, and the systems are actually audited so that no data is retained beyond what is necessary for processing.
Re: How effective is open source crypto? (aads addenda)
we did something similar for AADS PPP Radius http://www.garlic.com/~lynn/index.html#aads AADS radius example http://www.asuretee.com/ ... with FIPS 186-2, x9.62, ECDSA digital-signature authentication on SourceForge http://ecdsainterface.sourceforge.net/ The radius digital-signature protocol has a replay challenge. So add a radius option to the webserver client-authentication stub (the infrastructure can share common administration of client authentication across all of its environments). Then the client clicks on https client authentication, generates a secret random key, encrypts the request for client authentication with the random key, encrypts the random key with the server public key, and sends it all off in a single transmission. The server responds with a radius connect request which includes a replay-challenge value as part of the message (encrypted with the random key). The client responds with a digital signature on the server radius message (and some of its own data, encrypted with the random key). Basically this uses the same packet sequence as a transaction w/o replay challenge ... since the higher-level protocol contains the replay challenge. Then the same packet sequence can be used for webserver TLS and encrypted PPP (and works as a VPN; possibly it can also be defined as encrypted TCP) along with the same client-authentication infrastructure. An infrastructure can use the same administration (RADIUS) infrastructure for all client authentication, say an enterprise with both extranet connections as well as a webserver, or an ISP that also supplies webhosting. The same administrative operation can be used to support client authentication at the PPP level as well as at the webserver level. The same packet-exchange sequence is used for both PPP-level encryption with client authentication and TLS for webserver-level encryption with client authentication. The higher-level application can decide whether it already has sufficient replay/repeat resistance or request replay/repeat resistance from the lower-level protocol.
So regardless of TLS, PPP, or TCP, client authentication (using the same packet sequence as a transaction, w/o lower-level replay challenge): 1) the client picks up the server public key and encryption options (from cache or DNS) 2) the client sends off radius client authentication, encrypted with a random secret key, encrypted with the server public key ... 3) the server lower-level protocol handles the decryption of the random secret key and the decryption of the client request (which happens to be radius client authentication but could be any other kind of transaction request) and passes up the decrypted client request 4) the server higher-level protocol (radius client authentication) responds with a radius replay challenge 5) the client gets the replay challenge, adds some stuff, digitally signs it, and responds 6) the server higher-level radius client-authentication protocol appropriately processes it. The same server-public-key initial-connect code works at the TLS, PPP, and possibly TCP protocol levels. The same server-public-key initial-connect code supports both lower-level replay challenge and no replay challenge. The same radius client authentication works at the TLS, PPP, and possibly TCP protocol levels. The same client administrative processes work across the whole environment. aka the radius client-authentication protocol is just another example (like the purchase-order example) of the higher-level protocol having its own replay/repeat-handling infrastructure (whether it is something like log checking or its own replay challenge). -- Anne Lynn Wheeler http://www.garlic.com/~lynn/ Internet trivia 20th anv http://www.garlic.com/~lynn/rfcietff.htm
Re: Diffie-Hellman 128 bit
Well, I'm attacking a protocol, I know the rules of DH parameters, and the issue here is I'm trying to solve for x; brute-forcing that in the 128-bit range can be difficult, and x doesn't have to be a prime. (a = g^x mod P). Their primes are 128-bit primes, as well as their pubkeys; I've done some tests on their prime, and all pass the check that (p-1)/2 is prime. This eliminates the Pohlig-Hellman discrete logarithm attack, but I'm trying to learn the Gaussian integer method. No, just use the Number Field Sieve algorithm (this is mentioned in section 3.5 of the manuscript I gave you the link to). You could read section 3.6 of the Handbook of Applied Cryptography for a basic introduction to the problem of discrete logarithms. --Anton
Re: Diffie-Hellman 128 bit
- Original Message - From: NOP [EMAIL PROTECTED] To: Derek Atkins [EMAIL PROTECTED] Cc: [EMAIL PROTECTED] Sent: Friday, March 14, 2003 9:32 PM Subject: Re: Diffie-Hellman 128 bit Well, I'm attacking a protocol, I know the rules of DH parameters, and the issue here is I'm trying to solve for x; brute-forcing that in the 128-bit range can be difficult, and x doesn't have to be a prime. (a = g^x mod P). Their primes are 128-bit primes, as well as their pubkeys; I've done some tests on their prime, and all pass the check that (p-1)/2 is prime. This eliminates the Pohlig-Hellman discrete logarithm attack, but I'm trying to learn the Gaussian integer method. Sorry, I mentioned using NFS in my previous reply, which is probably not the way you want to go about this (since it's not as efficient for small values and is more complicated to code). Index calculus with Gaussian integers is indeed a good way. You can look at the paper from LaMacchia and Odlyzko http://citeseer.nj.nec.com/lamacchia91computation.html which Derek and maybe someone else pointed out. They easily calculated discrete logs modulo a 192-bit integer. --Anton
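For a sense of why 128-bit discrete logs are considered within reach, the generic baby-step giant-step method below solves g^x = a (mod p) in roughly sqrt(p) operations; the subexponential index-calculus and NFS methods discussed in the thread are what make 128-bit moduli actually practical, but this sketch (toy-sized prime, my own code, not from the thread) shows the baseline that even a safe prime cannot beat:

```python
import math

def bsgs(g, a, p):
    # Baby-step giant-step: solve g^x = a (mod p) in O(sqrt(p)) time/space.
    m = math.isqrt(p - 1) + 1
    baby = {pow(g, j, p): j for j in range(m)}   # table of g^j, j in [0, m)
    factor = pow(g, -m, p)                       # g^(-m) mod p
    gamma = a
    for i in range(m):                           # giant steps: a * g^(-im)
        if gamma in baby:
            return i * m + baby[gamma]
        gamma = gamma * factor % p
    return None                                  # no solution (a not in <g>)

p = 101          # toy prime only; a real target would be ~128 bits
g, x = 2, 37     # 2 generates the full group mod 101
assert bsgs(g, pow(g, x, p), p) == x
```

For a 128-bit p, sqrt(p) is about 2^64 group operations, which is why the thread moves on to index calculus with Gaussian integers and NFS rather than generic methods.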
Another side channel weakness in the SSL/TLS
Dear colleagues, we would like to inform you about a new attack on SSL/TLS. For further details see the cryptologic report at http://eprint.iacr.org/2003/052/ or the press release at the ICZ web site at http://www.i.cz/en/onas/tisk7.html. Best regards, Vlastimil Klima and Tomas Rosa, {vlastimil.klima, [EMAIL PROTECTED]
Re: Microsoft: Palladium will not limit what you can run
On Sat, 2003-03-15 at 05:12, Eugen Leitl wrote: On Sat, 15 Mar 2003, Anonymous wrote: Microsoft's point with regard to DRM has always been that Palladium had other uses besides the one which everyone was focused on. Obviously Of course it's useful. Does the usefulness outweigh the support for special interests (DRM, governments, software monopolies)? There is no value for the end user which can't be achieved with smart cards, which have the additional potential of being removable and transportable. I have my own problems with Pd, but I'm not sure how remote attestation can be achieved without something like Pd or TCPA. And remote attestation is quite useful (although also dangerous) for online gaming and distributed computing. -- -Dave Turner Stalk Me: 617 441 0668 On matters of style, swim with the current; on matters of principle, stand like a rock. -Thomas Jefferson
Re: Diffie-Hellman 128 bit
At 13/03/03 23:48, you wrote: I am looking at attacks on Diffie-Hellman. The protocol implementation I'm looking at designed their Diffie-Hellman using 128-bit primes (generated each time, yet (p-1)/2 will be a prime, so no go on a Pohlig-Hellman attack), so what attacks are there that I can look at to come up with either the logarithm x from (a = g^x mod p) or the session key that is calculated? A brute force wouldn't work, unless I know the starting range. Are there any realistic attacks on DH parameters of this size, or are the attacks only theoretical, based on infeasible computation? You can find a good explanation of the rationale behind Diffie-Hellman parameters, as well as general precautions for implementation, in a good paper called Security Issues in the Diffie-Hellman Key Agreement Protocol. You can find it at: http://citeseer.nj.nec.com/483430.html Regards, Hagai. Hagai Bar-El - Information Security Analyst Tel.: 972-8-9354152 Fax.: 972-8-9354152 E-mail: [EMAIL PROTECTED] Web: www.hbarel.com
Interception of Telecommunications in Germany
The German 'Regulatory Authority for Telecommunications and Posts' has drafted a translation of a much-discussed document: Technical Directive setting forth Requirements relating to the implementation of Legal Measures for the Interception of Telecommunications (TR TKÜ) It is available at http://home.t-online.de/home/regtp.referat335/TRTKUE-40-draft18-03-03.zip ( http://home.t-online.de/home/regtp.referat335/ ) Cheers, Stefan. --- Dipl.-Inform. Stefan Kelm Security Consultant Secorvo Security Consulting GmbH Albert-Nestler-Strasse 9, D-76131 Karlsruhe Tel. +49 721 6105-461, Fax +49 721 6105-455 E-Mail [EMAIL PROTECTED], http://www.secorvo.de --- PGP Fingerprint 87AE E858 CCBC C3A2 E633 D139 B0D9 212B
Who's afraid of Mallory Wolf?
Who's afraid of Mallory Wolf? By common wisdom, SSL is designed to defeat the so-called Man in the Middle attack, or MITM for short. Also known as Mallory, in crypto circles. The question arises: why? For what reason is the MITM a core part of the SSL threat model? And why do all the implementations assume this? (It is, in fact, possible to use SSL, or TLS as it is now known, without regard to the MITM protection that is part of the model - certs - but I ignore that here, as do implementations!) One has to go back to the original invention of SSL, back in 1994 or so: the web was storming the barricades as the 2nd great killer application for the net (email was the 1st). Companies were dipping their toes into the endless possibilities of commerce. Netscape was evolving as the master of the new net, the challenger to Microsoft, the owner of all things it surveyed. And, as with all dot-com crazies to follow, it had nothing spectacular in the way of a business model. Selling a few secured servers, was all. This whole commerce thing was, at that time, a great wonder, because it involved earning money, and money that was honestly earned was a precious short commodity at Netscape in those days. To cut a long story off at the knees, Netscape put together a variant of the HTTP protocol layered over crypto. This was sold, in addition to its servers, as the way to secure credit card payments over the net. The analysis of the designers of SSL indicated that the threat model included the MITM. On what did they found this? It's hard to pin down, and it may very well be, being blessed with nearly a decade's more experience, that the inclusion of the MITM in the threat model is simply best viewed as a mistake. Consider this simple fact: There has been no MITM attack, in the lifetime of the Internet, that has recorded or documented the acquisition and fraudulent use of a credit card (CC). (Over any Internet medium.) Even worse, there's not been any known MITM of any aggressive form.
The only cases known are a bunch of demos, under laboratory conditions. They don't count, and MITM remains a theoretical attack, more the subject of learning and design exercises than the domain of business or crypto engineering. How hard is this fact? A bit softish, actually, but given the amount of traffic we have seen in the last decade, one would think that MITMs would have made their appearance in aggressive attacks by now, perhaps by scanning emails, perhaps by listening to unprotected HTTP. (In fact, there are now fertile grounds for the attack, with the advent of 802.11b. There are even kits available for it.) But so far, no cases have been found. (In fact, there isn't too much evidence, beyond the circumstantial bemoanings of those that can't, to indicate that aggressors are even passively listening, let alone trying more sophisticated MITM attacks.) Within the world of credit cards, the people who work directly within the ecommerce industry admit privately that this is true [1]. All lost-credit-card events are based on other attacks. Which leads one to wonder what the threat is. And if there is a threat. That is, should the MITM be in the threat model for SSL, or should it be excluded? Internet cryptography gives us one answer: if it can be protected against, it should be, as to do otherwise results in a false sense of security. This is what I call 100% cryptography, for want of a better term. It's a sort of journeyman phase of crypto-plumbing, at that time when, as beginners, we read from the big red book. We imagined how to deal with many dark and scary threats and we all agreed, no question, the goal was to cover more of them than the next guy. We would swap conspiracy theories well into the night, all the while bemoaning the lack of usage of real cryptography, the poverty of our opponent's wit, and the fruitiness of our cheap red wine. I miss those days, if not the product of those mad times.
It was also a time when we rarely saw the real-life implications of our code, deployed in a threatening environment. In short, we 100%-ers built systems based on expectations, but we did not close the feedback loop to push the real-life results back into the deployed systems. Economics gives us another answer: a standard approach to deciding how to spend money. 1. Estimate the average cost of each attack. 2. Estimate the number of attacks. 3. Multiply the above two to get a total cost. 4. Likewise, estimate the total cost of avoiding the attacks. 5a. If you can avoid these attacks by spending less money, you profit. 5b. If you spend more than you save, you lose. It's just economics, and statistics, and the validity here is simply that credit cards are nothing if they are not economically- and statistically-based models of commerce and fraud. So, let's guess the cost of each CC lost to our MITM as $1000. (Pick your own number if you don't
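The expected-cost comparison above reduces to a few lines of arithmetic. All figures below are placeholders (the $1000 is the essay's own guess; the attack count reflects its claim of zero observed MITM card thefts; the defense-cost number is invented purely for illustration):

```python
# Step 1-3: expected loss = (cost per attack) * (number of attacks).
cost_per_attack = 1000        # essay's guessed cost of one CC lost to MITM ($)
attacks_per_year = 0          # observed in-the-wild MITM card thefts
expected_loss = cost_per_attack * attacks_per_year

# Step 4-5: compare against the cost of mounting the defense.
defense_cost = 100_000_000    # placeholder: industry-wide cert-deployment cost ($)
decision = "defend" if defense_cost < expected_loss else "accept the risk"
```

With a zero observed attack rate the expected loss is zero, so under this framework any positive defense cost loses, which is precisely the essay's argument against putting MITM in the threat model.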
Re: Brumley Boneh timing attack on OpenSSL
Bill Stewart [EMAIL PROTECTED] writes: Schmoo Group response on cryptonomicon.net http://www.cryptonomicon.net/modules.php?name=Newsfile=articlesid=263mode=order=0thold=0 Apparently OpenSSL has code to prevent the timing attack, but it's often not compiled in (I'm not sure how much that's for performance reasons as opposed to general ignorance?) I had blinding code included in my crypto code for about 3 years, when not a single person used it in all that time I removed it again (actually I think it's probably still there, but disconnected). I'm leaning strongly towards general ignorance here... Peter. - The Cryptography Mailing List Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]
Re: Brumley Boneh timing attack on OpenSSL (fwd)
Some clarification by Peter Gutmann [EMAIL PROTECTED] on why cryptlib doesn't do timing attack resistance by default: Peter Gutmann [EMAIL PROTECTED]: cryptlib was never intended to be a high-performance SSL server (the docs are fairly clear on this), and I don't think anyone is using it to replace Apache or IIS. OTOH it is used in a number of specialised environments such as closed environments, embedded systems and mainframes. For example, two real-world uses of the cryptlib SSL server are in embedded environment A and mainframe environment B. In A, the processing is handled by a low-power embedded processor. It takes 10-15s to perform an SSL handshake, and that's after the code has been optimised to death to squeeze every possible bit of performance out of it. Performing the necessary 1.5M queries at 15s each would take approximately 8 1/2 months at 100% CPU load (meaning that the device is unable to perform any other operations in that entire time). This is unlikely to go unnoticed, given that it's polled from other devices for status updates. In B, CPU resources are so scarce that the implementation uses null cipher suites because it can't afford the overhead of even RC4 for encryption (admittedly this required a custom code hack; cryptlib doesn't normally support null encryption suites). After about 100 or so attempts at a full SSL handshake, klaxons would sound and blue-suited troops would deploy onto the raised flooring to determine where all the CPU time is going. In neither of these environments (and various similar ones) would a side-channel attack requiring 1M or so queries (e.g. this one, or the Bleichenbacher attack, or the Klima-Pokorny-Rosa attack, which cryptlib shouldn't be vulnerable to since I'm paranoid about error reporting) be terribly feasible.
OTOH blinding does produce a noticeable slowdown for a process that's already regarded by its users as unacceptably slow and/or CPU-intensive (I have some users who've hacked the key-exchange process to use fixed shared keys because they just can't spare the CPU time to do a real handshake, e.g. by injecting the shared key into the SSL session cache so you just do a pseudo-resume for each new connection). For this reason, cryptlib makes the use of side-channel-attack protection an optional item, which must be selected by the user (via use of the blinding code; admittedly I should probably make this a bit easier to do in future releases than having to hack the source :-). This is not to downplay the seriousness of the attack, merely to say that in some cases the slowdown/CPU consumption vs. attack risk doesn't make it worthwhile to defend against. - The Cryptography Mailing List Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]
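The "8 1/2 months" figure quoted above checks out as straightforward arithmetic: 1.5M queries at 15 seconds each, on a device that can do nothing else in the meantime:

```python
# Back-of-envelope check of the attack-duration estimate quoted above.
queries = 1_500_000          # queries needed for the timing attack
seconds_per_query = 15       # one full SSL handshake on the embedded CPU
total_seconds = queries * seconds_per_query
months = total_seconds / (30 * 24 * 3600)   # using ~30-day months
print(round(months, 1))      # ~8.7, matching the "approximately 8 1/2 months"
```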
Keysigning @ CFP2003
GPG/PGP Keysigning @ Computers, Freedom and Privacy 2003 April 2nd, 9:45pm (First BoF Session) I will be organizing a keysigning session for CFP2003. Please submit your keys to [EMAIL PROTECTED] and I will print out sheets with key information in order to speed up the process. Bring a photo ID and a copy of your key information so that you can verify what is on the printout. A list of submitted keys and a keyring will be available on: http://anize.org/cfp2003/ Thank you... -- + Douglas Calvert [EMAIL PROTECTED] http://anize.org/dfc/ + | Key Id 0xC9541FB2 http://anize.org/dfc-keys.asc | | [X] User wants to receive encrypted mail | +| 0817 30D4 82B6 BB8D 5E66 06F6 B796 073D C954 1FB2 |+
Re: Brumley Boneh timing attack on OpenSSL (fwd)
At 09:51 AM 03/22/2003 +0100, Eugen Leitl wrote: Some clarification by Peter Gutmann [EMAIL PROTECTED] on why cryptlib doesn't do timing attack resistance default: Peter Gutmann [EMAIL PROTECTED]: cryptlib was never intended to be a high-performance SSL server (the docs are fairly clear on this), and I don't think anyone is using it to replace Apache or IIS. OTOH it is used in a number of specialised environments such as closed ... For this reason, cryptlib makes the use of sidechannel- attack-protection an optional item, which must be selected by the user (via use of the blinding code, now admittedly I should probably make this a bit easier to do in future releases than having to hack the source :-). This is not to downplay the seriousness of the attack, merely to say that in some cases the slowdown/CPU consumption vs.attack risk doesn't make it worthwhile to defend against. If it's not meant to be a high-performance server, then slowing it down another 20% by doing RSA timing things is probably fine for most uses, and either using compiler flags or (better) friendlier options of some sort to turn off the timing resistance is probably the better choice. I'm not sure how flexible things need to be - real applications of the openssl code include non-server things like certificate generation, and probably some reasonable fraction of the RSA or DH calculations don't need to be timing-protected, but many of them are also things that aren't CPU-consumption-critical either. - The Cryptography Mailing List Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]
Re: Keysigning @ CFP2003
On Saturday 22 March 2003 17:12, Douglas F. Calvert wrote: I will be organizing a keysigning session for CFP2003. Please submit your keys to [EMAIL PROTECTED] and I will print out sheets with key information in order to speed up the process. Bring a photo ID and a copy of your key information so that you can verify what is on the printout. A list of submitted keys and a keyring will be available on: I must be out of touch - since when did PGP key signing require a photo id? -- iang - The Cryptography Mailing List Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]
Re: Cryptoprocessors compliant with FIPS 140-2
The list of all FIPS 140-1 and 140-2 validated modules can be found here http://csrc.nist.gov/cryptval/140-1/1401val.htm (this includes software and hardware modules). For Mitigation of Other Attacks, the FIPS 140 evaluation doesn't look at these. Some vendors might consider these attacks and implement some kind of protection, but these will not be evaluated. Documentation for a specific module might discuss countermeasures to these attacks if they have been implemented. --Anton - Original Message - From: Damien O'Rourke [EMAIL PROTECTED] To: [EMAIL PROTECTED] Sent: Friday, March 21, 2003 11:14 AM Subject: Cryptoprocessors compliant with FIPS 140-2 Hi, I was wondering if anyone could list a number of cryptographic processors that are compliant with the Federal information processing standard (FIPS) 140-2 Security Requirements for cryptographic modules. I know that the IBM-4758 was compliant with FIPS 140-1 up to level 4 but I don't think it has been tested under the newer version of the standard (correct me if I'm wrong). Specifically I am wondering about section 4.11 on page 39 entitled Mitigation of Other Attacks which discusses power analysis, timing attacks, TEMPEST and fault induction. If you could tell me what level they have been certified to and where I might find some more information on them that would be great. In fact, any relevant information would be greatly appreciated. Thanks for your time. Best Regards, Damien. - The Cryptography Mailing List Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]
Re: Cryptoprocessors compliant with FIPS 140-2
Damien O'Rourke wrote: I was wondering if anyone could list a number of cryptographic processors that are compliant with the Federal information processing standard (FIPS) 140-2 Security Requirements for cryptographic modules. NIST, the US Government Agency responsible for FIPS 140, maintains lists of certified products: http://csrc.nist.gov/cryptval/vallists.htm - The Cryptography Mailing List Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]
Re: Face-Recognition Technology Improves
On Sun, 16 Mar 2003, Eugen Leitl wrote: There's a world of difference between a line of people each slowly stepping through the gate past a sensor in roughly aligned orientation and a fixed-orientation no-zoom low-resolution camera looking at a group of freely behaving subjects at varying illumination. The problem is that's exactly the sort of barrier that goes away over time. We face the inevitable advance of Moore's Law. The prices on those cameras are coming down, and the prices of the media to store higher-res images (which plays a major part in how much camera people decide is worth the money) is coming down even more rapidly. Face recognition was something that was beyond our computing abilities for a long time, but the systems are here now and we have to decide how to deal with them - not on the basis of what they are capable of this month, but on the basis of what kind of society they enable in coming decades. Also, face recognition is not like cryptography; you can't make your face sixteen bits longer and stave off advances in computer hardware for another five years. These systems are here now, and they're getting better. Varied lighting, varied perspective, moving faces, pixel counts, etc -- these are all things that make the problem harder, but none of them is going to put it out of reach for more than six months or a year. Five years from now those will be no barrier at all, and the systems they have five years from now will be deployed according to the decisions we make about such systems now. Bear - The Cryptography Mailing List Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]
Re: Who's afraid of Mallory Wolf?
On Sun, 23 Mar 2003, Ian Grigg wrote: Consider this simple fact: There has been no MITM attack, in the lifetime of the Internet, that has recorded or documented the acquisition and fraudulent use of a credit card (CC). (Over any Internet medium.) How do you view attacks based on tricking people into going to a site which claims to be affiliated with e.g. Ebay or Paypal, getting them to enter their login information as usual, and using that to steal money? It's not a pure MITM attack, but the current system at least makes it possible for people to verify with the certificate whether or not the site is a spoof. So, let's guess the cost of each CC lost to our MITM as $1000. (Pick your own number if you don't like that one.) Then, how many attacks? None, from the above. Multiplied together, and you get ... nothing. So, you claim that a system designed to make MITM attacks impossible has not suffered a successful MITM attack. Sounds rather tautologous to me. The software mandates it: mostly the browsers, but also the servers, are configured to kick up a stink at the thought of talking to a site that has no certificate. As such, SSL, as implemented, shows itself to include a gross failure of engineering. The system was engineered very well to requirements with which you disagree. [2] AFAIR, Anonymous-Diffie-Hellman, or ADH, is inside the SSL/TLS protocol, and would represent a mighty fine encrypted browsing opportunity. Write to your browser coder today and suggest its immediate employment in the fight against the terrorists with the flappy ears. Just out of interest, do you have an economic cost/benefit analysis for the widespread deployment of gratuitous encryption? It's just not that important. If your browsing privacy is important, you're prepared to click through the alarming messages. 
If the value of privacy is less than the tiny cost of clicking accept this certificate forever for each site, then it's not a convincing argument for exposing people who don't understand crypto to the risk of MITM. Pete - The Cryptography Mailing List Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]
Re: Face-Recognition Technology Improves
On Sun, 16 Mar 2003, Bill Stewart wrote: But there are two sides to the problem - recording the images of the people you're looking for, and viewing the crowd to try to find matches. You're right that airport security gates are probably a pretty good consistent place to view the crowd, but getting the target images is a different problem - some of the Usual Suspects may have police mugshots, but for most of them it's unlikely that you've gotten them to sit down while you take a whole-face geometry scan to get the fingerprint. I'm reasonably certain that a 'whole-face geometry scan' is a reasonable thing to expect to be able to extract from six or eight security-gate images. If you've been through the airport four or five times in the last year, and they know whose boarding pass was associated with each image, then they've probably got enough images of your face to construct it without your cooperation. And if they don't do it today, there's no barrier in place preventing them from doing it tomorrow. Five years from now, I bet the cameras and systems will be good enough to make it a one-pass operation. I'd be surprised if they don't then scan routinely as people go through the security booths in airports, and if you've been scanned before they make sure it matches, and if you haven't you now have a scan on file so they can make sure it matches next time. Bear - The Cryptography Mailing List Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]
Re: Who's afraid of Mallory Wolf?
At 11:10 PM 3/23/2003 -0500, Ian Grigg wrote: Who's afraid of Mallory Wolf? slight observations ... i've heard of no cases of credit card number intercepted on the internet in flight (requiring crypto) ... and no known cases of MITM attack (requiring certificates) However there have been some cases of impersonation ... being directed to a counterfeit web-site. I know of no cases of that being done with DNS cache poisoning ... which is also what certificates are targeted at ... both MITM and other impersonations of various kind. the ones i'm aware of is that the person clicks on some URL and goes to that site which is a counterfeit website. This isn't caught by SSL ... since it just compares the domain name in the URL against the domain name in the certificate presented by the server. Since the subterfuge happens well before any DNS cache is involved ... the SSL check of matching domain names doesn't catch anything. There have also been various impersonation involving frames and other screen painting techniques. There have been cache poisonings (ip-address take over) ... there have been also incidents in the press of domain name hijacking ... sending updates to domain name infrastructure convincing them that somebody else is the new domain name owner. getting a new certificate as the new domain name owner is also a way of subverting the SSL check of matching domain names. http://www.garlic.com/~lynn/aepay4.htm#dnsinteg1 http://www.garlic.com/~lynn/aepay4.htm#dnsinteg2 people registering public keys at the same time they register domain names was one of the suggested countermeasures to domain name hijacking. There was another press thing last week regarding DNS attacks. The issue raised by the DNS attack last fall and the latest warning is that these have the potential to bring the internet to a halt. http://www.computerworld.com/securitytopics/security/story/0,10801,79576,00. 
html so there is some effort regarding dns integrity because of its critical importance for just having internet function at all. past dns attack refs: http://www.garlic.com/~lynn/2003.html#49 also http://www.computerworld.com/securitytopics/security/cybercrime/story/0,10801,75564,00.html http://www.zdnetindia.com/news/commentary/stories/73781.html http://www.zdnetindia.com/print.html?iElementId=73777 from a cost of business standpoint ... i've suggested why not use the existing DNS infrastructure to distribute server public keys in the same way they distribute ip-address (they are pieces of information bound to the domain name, a function of the domain name infrastructure) and are capable of distributing other things ... like administrative technical contacts although that is getting restricted ... some bleed over from pkix http://www.garlic.com/~lynn/aadsm13.htm#38 The case against directories http://www.garlic.com/~lynn/aadsm14.htm#0 The case against directories they could be naked public keys ... which would also be subject to DNS cache poisoning ... or they could be signed public keys doesn't need all the baggage of x509 certs ... can just be really simple signed public key. Slightly related to the above posting about long ago and far away when looking at allowing people (20 plus years ago) on business trips to use portable terminals/PCs to dial in and access the internal network/email a vulnerability assessment found that one of the highest problem areas was hotel PBXs. as a result a special 2400 baud encrypting modem was created. encrypting modem anecdote from the time: http://www.garlic.com/~lynn/2002d.html#11 Security Proportional to Risk (was: IBM Mainframe at home) ... these weren't in any way related to the link encrypters from the previous reference (aka supposedly over half of the link encrypters in the world were installed on the internal network). in any case, there was a big concern about numerous kinds of eavesdropping ...
requiring encryption for information hiding. however, the current internet credit card scenario seems to be that it is so much easier to harvest a whole merchant file with tens or hundreds of thousands of numbers ... than trying to get them one or two at a time off some internet connection. note that the x9.59 approach has always been to remove the credit card numbers as a point of attack (form of shared-secret) by requiring all transactions to be authenticated. as a result, just knowing the number isn't sufficient for fraud (countermeasure against all account number harvesting regardless of the technique and whether insider or outsider attack): http://www.garlic.com/~lynn/index.html#x959 the low-hanging fruit theory is that if merchant sites were armored then there could be more interest in eavesdropping-based harvesting ... (leading to more demand for internet encryption). however, armoring merchant sites is difficult since 1) there are potentially millions, and 2) human mistakes are frequent/common
Re: Who's afraid of Mallory Wolf?
In message [EMAIL PROTECTED], Ian Grigg writes: Who's afraid of Mallory Wolf? Even worse, there's not been any known MITM of any aggressive form. The only cases known are a bunch of demos, under laboratory conditions. They don't count, and MITM remains a theoretical attack, more the subject of learnings and design exercises than the domain of business or crypto engineering. Sorry, that's flat-out false. If nothing else, there was a large-scale MITM attack on the conference 802.11 net at the 2001 Usenix Security Symposium. Spammers are hijacking BGP prefixes; see http://www.merit.edu/mail.archives/nanog/2002-10/msg00068.html for one such incident. Eugene Kashpureff pleaded guilty to domain-name hijacking; used very slightly differently, that's a MITM attack. See http://www.usdoj.gov/criminal/cybercrime/kashpurepr.htm for details. I warned of the possibility of hijacking via routing attacks in 1989, and via DNS attacks in 1995. (See the 'papers' directory on my Web site.) Given that the attacks were demonstrably feasible, Netscape would have been negligent not to design for it. Given that such attacks or their near cousins have actually occurred, I'd say they were right. And yes, you're probably right that no one has stolen credit card numbers that way. Of course, since the defense was in place before people had an opportunity to try, one can quite plausibly argue that Netscape prevented the attack. - The Cryptography Mailing List Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]
Re: Brumley Boneh timing attack on OpenSSL
Regarding using blinding to defend against timing attacks, and supposing that a crypto library is going to have support for blinding: - Should it do blinding for RSA signatures as well as RSA decryption? - How about for ElGamal decryption? - Non-ephemeral (static) DH key exchange? - Ephemeral DH key exchange? - How about for DSS signatures? In other words, what do we need as far as blinding support either in developing a crypto library or in evaluating a crypto library for use? Suppose we are running a non-SSL protocol but it is across a real-time Internet or LAN connection where timing attacks are possible. And suppose our goal is not to see a paper and exploit published within the next three years telling how to break the protocol's security with a few hours of connect time. - The Cryptography Mailing List Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]
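The base-blinding countermeasure the questions above refer to can be sketched for the RSA decryption case. This is textbook RSA with a deliberately tiny toy keypair for illustration, assuming Python 3.8+ for modular inverse via `pow(r, -1, n)`; a real library would combine this with CRT arithmetic and proper key sizes:

```python
# Sketch of RSA base blinding: before the private-key operation,
# multiply the input by r^e mod n for a fresh random r, then divide
# r back out of the result. The secret exponentiation then operates
# on a value the attacker cannot predict, defeating timing analysis
# that correlates running time with the attacker-chosen input.
# Toy parameters only -- NOT secure key sizes.
import secrets
from math import gcd

def blinded_rsa_private(c, d, e, n):
    while True:                               # pick r invertible mod n
        r = secrets.randbelow(n - 2) + 2
        if gcd(r, n) == 1:
            break
    c_blind = (c * pow(r, e, n)) % n          # blind: c' = c * r^e
    m_blind = pow(c_blind, d, n)              # secret op sees random-looking input
    return (m_blind * pow(r, -1, n)) % n      # unblind: m = m' * r^-1

# Toy keypair: p=61, q=53 => n=3233, e=17, d=2753 (17*2753 = 1 mod 3120)
n, e, d = 3233, 17, 2753
m = 65
c = pow(m, e, n)
assert blinded_rsa_private(c, d, e, n) == m   # blinding preserves correctness
```

The same multiply-by-random, strip-it-afterwards structure adapts to the other cases the post asks about (e.g. ElGamal decryption), though which operations need it in practice depends on whether the attacker can make repeated timed queries against the same long-term secret.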
Re: Who's afraid of Mallory Wolf?
Grigg counts the benefits of living in a MITM-protected world (no MITM attacks recorded), as though they would happen with or without MITM protection. Is there any reason to believe that this is, in fact, true? That is, if zero dollars were spent on MITM protection, would there still be no recorded attacks? Until that's answered, Grigg's economic analysis is flawed. I used to get picked on, but since I bulked up and learned karate, nobody's picked on me. I guess it was pointless to do those things. On Sun, 2003-03-23 at 23:10, Ian Grigg wrote: The question arises, why? For what reason is the MITM a core part of the SSL threat model? And, why do all the implementations assume this? [...] The analysis of the designers of SSL indicated that the threat model included the MITM. [...] Consider this simple fact: There has been no MITM attack, in the lifetime of the Internet, that has recorded or documented the acquisition and fraudulent use of a credit card (CC). -- -Dave Turner Stalk Me: 617 441 0668 I believe there is no righteousness in the situation in which we find ourselves. -Real Live Preacher - The Cryptography Mailing List Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]
Armoring websites
ref: http://www.garlic.com/~lynn/aadsm14.htm#1 Who's afraid of Mallory Wolf? http://www.garlic.com/~lynn/aadsm14.htm#2 Who's afraid of Mallory Wolf? (addenda) here is discussion of armoring websites with respect to security proportional to what is at risk http://www.garlic.com/~lynn/2001h.html#61 net banking, is it safe??? http://www.garlic.com/~lynn/aepay7.htm#netbank2 net banking, is it safe?? ... security proportional to risk random refs to hardened sites: http://www.garlic.com/~lynn/aadsm2.htm#risk another characteristic of online validation. http://www.garlic.com/~lynn/2001.html#33 Where do the filesystem and RAID system belong? http://www.garlic.com/~lynn/2002.html#44 Calculating a Gigalapse http://www.garlic.com/~lynn/2002m.html#5 Dumb Question - Hardend Site ? http://www.garlic.com/~lynn/2002m.html#6 Dumb Question - Hardend Site ? http://www.garlic.com/~lynn/2002o.html#14 Home mainframes http://www.garlic.com/~lynn/2003c.html#52 diffence between itanium and alpha -- Anne Lynn Wheelerhttp://www.garlic.com/~lynn/ Internet trivia 20th anv http://www.garlic.com/~lynn/rfcietff.htm - The Cryptography Mailing List Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]
Re: Who's afraid of Mallory Wolf?
At 11:10 PM 3/23/2003 -0500, Ian Grigg wrote: Automatically generated self- signed FREEDOM CERTIFICATES, as a convenient temporary measure until widespread Anonymous- Diffie-Hellman is deployed in the field, would appear to strike the quickest and most cost- effective blow for Browsing Liberty [2]. Even if Anonymous DH was widely deployed, it might be better to use self-signed certs, or certs signed by an untrusted root - the browser could remember the cert, and warn the user this site has a different identity than last time. Or the browser could log the certs that are used for connections, and at some later date, if the user suspected MITM attacks, the user could review the logs for discrepancies - thus giving, if not tamper resistance against MITM attacks, at least the possibility for post-facto tamper detection. However, changing https to allow untrusted root certs without warnings might not be a good idea - users expect an https URL to be authenticated, so this changes the semantics. Maybe unauthenticated, ie opportunistic, encryption in HTTP with SSL/TLS should happen via something like the RFC 2817 upgrade mechanism? (I believe this particular mechanism has problems). The server could advertise that it supports opportunistic encryption, and a browser could choose it automatically, and the user wouldn't even be notified. Then https semantics could be left unchanged. Trevor - The Cryptography Mailing List Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]
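The "remember the cert, warn if it changes" scheme proposed above is what later came to be called trust-on-first-use pinning. A minimal sketch (the function and store names are hypothetical; a browser would persist the store and fingerprint the DER-encoded certificate):

```python
# Sketch of trust-on-first-use certificate memory: on first contact,
# record a fingerprint of the site's cert; on later visits, compare.
# A mismatch doesn't prove MITM (certs also rotate legitimately),
# but it gives the user a basis for post-facto tamper detection.
import hashlib

def check_cert(store, host, cert_der):
    """Return 'new', 'same', or 'CHANGED' for this host's certificate."""
    fp = hashlib.sha256(cert_der).hexdigest()
    if host not in store:
        store[host] = fp                      # first visit: remember it
        return "new"
    return "same" if store[host] == fp else "CHANGED"

store = {}
print(check_cert(store, "example.org", b"cert-bytes-1"))  # new
print(check_cert(store, "example.org", b"cert-bytes-1"))  # same
print(check_cert(store, "example.org", b"cert-bytes-2"))  # CHANGED
```

This is exactly the logging-plus-discrepancy-review idea in the post: no trusted third party, no change to https semantics, just continuity checking.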
RE: Cryptoprocessors compliant with FIPS 140-2
There are only about 310 FIPS 140-1/2 total validation certificates since 1995. http://csrc.nist.gov/cryptval/ Since FIPS 140-2 was not signed until mid-2001, there were very few in 2002 - see the 2 links below. http://csrc.nist.gov/cryptval/140-1/1401val2002.htm http://csrc.nist.gov/cryptval/140-1/1401val2003.htm _ Dave Kleiman [EMAIL PROTECTED] www.netmedic.net -Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Damien O'Rourke Sent: Friday, March 21, 2003 11:14 To: [EMAIL PROTECTED] Subject: Cryptoprocessors compliant with FIPS 140-2 Hi, I was wondering if anyone could list a number of cryptographic processors that are compliant with the Federal information processing standard (FIPS) 140-2 Security Requirements for cryptographic modules. I know that the IBM-4758 was compliant with FIPS 140-1 up to level 4 but I don't think it has been tested under the newer version of the standard (correct me if I'm wrong). Specifically I am wondering about section 4.11 on page 39 entitled Mitigation of Other Attacks which discusses power analysis, timing attacks, TEMPEST and fault induction. If you could tell me what level they have been certified to and where I might find some more information on them that would be great. In fact, any relevant information would be greatly appreciated. Thanks for your time. Best Regards, Damien. - The Cryptography Mailing List Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]
Supreme Court Refuses to Review Wiretaps Ruling
From the New York Times: Supreme Court Refuses to Review Wiretaps Ruling March 24, 2003 By DAVID STOUT WASHINGTON, March 24 - In a case balancing national security with civil liberties, the Supreme Court refused to interfere today with a lower court ruling giving the Justice Department broad new powers to use wiretaps to prosecute terrorists. The justices declined without comment to review a decision last Nov. 18 in which a special federal appeals court found that, under a law passed after the terror attacks of Sept. 11, 2001, the Justice Department can use wiretaps installed for intelligence operations to go after terrorists. That November decision was crucial, because for some two decades there was presumed to be a wall between wiretap operations for intelligence-gathering and wiretapping in the course of criminal investigations. Obtaining permission for a wiretap to gather intelligence has generally been easier than getting authorization for a wiretap in a straightforward criminal investigation. Thus, prosecutors were admonished not to try to skirt the tougher standards for a wiretap in a criminal investigation by claiming it was actually to gather intelligence. The landscape changed with the passage of legislation, shortly after the Sept. 11 attacks, broadening government surveillance powers. Justice Department investigators applied last May for permission to wiretap an individual who was identified in court papers only as a resident of the United States. The department met resistance from the three-member Foreign Intelligence Surveillance Act Court, which exists solely to administer a 1978 law allowing the government to conduct intelligence wiretaps inside the United States. That court ordered the Justice Department to show that its primary purpose in applying for the wiretap was intelligence gathering and not for a criminal case. 
Moreover, the three-member court decreed that prosecutors in the Justice Department's criminal division could not take an active role in directing activities of the department's intelligence division. Attorney General John Ashcroft appealed to the United States Foreign Intelligence Surveillance Court of Review, which had never met before and which exists, like the lower court, only to oversee the 1978 law. The court of review ruled in November that the lower court had erred when it tried to impose restrictions on the Justice Department. Furthermore, the court of review said, there never was supposed to be a wall between intelligence gathering and criminal investigations. Effective counterintelligence, as we have learned, requires the wholehearted cooperation of all the government's personnel who can be brought to the task, the review panel wrote. A standard which punishes such cooperation could well be thought dangerous to national security. The review panel criticized the lower court, declaring that it had improperly tried to tell the Justice Department how to do its business, in violation of the Constitution's separation of powers between equal branches of government. The Court of Review is made up of Judges Ralph B. Guy of the United States Court of Appeals for the Sixth Circuit; Edward Leavy of the Court of Appeals for the Ninth Circuit; and Laurence H. Silberman of the Court of Appeals for the District of Columbia Circuit. All were appointed to the panel by Chief Justice William H. Rehnquist of the Supreme Court. Mr. Ashcroft praised the November decision as one that revolutionizes our ability to investigate terrorists and prosecute terrorist acts. But the American Civil Liberties Union, the National Association of Criminal Defense Lawyers, the American-Arab Anti-Discrimination Committee and the Arab Community Center for Economic and Social Services, a Michigan-based organization, assailed the November decision. 
These fundamental issues should not be finally adjudicated by courts that sit in secret, do not ordinarily publish their decisions, and allow only the government to appear before them, the groups said in asking the Supreme Court to review it. The A.C.L.U. and its allies had only friend-of-the-court status in the case, since technically the Justice Department was the only party. Thus, it was not surprising that the Supreme Court declined today to review the lower courts' decision. http://www.nytimes.com/2003/03/24/politics/24CND-SCOT.html?ex=1049536949ei=1en=6cbee835b0f1acbe -- Perry E. Metzger[EMAIL PROTECTED] - The Cryptography Mailing List Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]
Re: Who's afraid of Mallory Wolf?
On Monday 24 March 2003 11:37, Peter Clay wrote: On Sun, 23 Mar 2003, Ian Grigg wrote: Consider this simple fact: There has been no MITM attack, in the lifetime of the Internet, that has recorded or documented the acquisition and fraudulent use of a credit card (CC). (Over any Internet medium.) How do you view attacks based on tricking people into going to a site which claims to be affiliated with e.g. Ebay or Paypal, getting them to enter their login information as usual, and using that to steal money? Yes, that's definitely an attack. As was pointed out, the use of the cert seems to do two things: stop the MITM (via a secured key exchange so the listener cannot see inside the packets) and confirm the site as per what is stated in the URL. My post of last night addressed the MITM only. I completely ignored the issue of spoofing, which would only be possible if there is no complex relationship between them - which is a debatable point. It's not a pure MITM attack, but the current system at least makes it possible for people to verify with the certificate whether or not the site is a spoof. Does the cert stop spoofing? That's the question! If it does, then there might be value there. In which case we can measure it and construct a cost-benefit analysis to decide whether to protect against it. So, let's guess the cost of each CC lost to our MITM as $1000. (Pick your own number if you don't like that one.) Then, how many attacks? None, from the above. Multiplied together, and you get ... nothing. So, you claim that a system designed to make MITM attacks impossible has not suffered a successful MITM attack. Sounds rather tautologous to me. No, there has been little evidence of MITMs *outside* the system. (I said none, Steve Bellovin said some...) The fact that there are none within the system, yes, that would only show either the attacks were defeated, or there weren't going to be any, or that there are better pickings elsewhere...
It doesn't allow you to conclude anything about the need for protection. Check Lynn Wheeler's new post (thanks Lynn!) which points to a lot of inside knowledge about the absence of any aggressive MITM activity inside the credit card world. And, see Steve Bellovin's post for some evidence of MITM outside the credit card world. The software mandates it: mostly the browsers, but also the servers, are configured to kick up a stink at the thought of talking to a site that has no certificate. As such, SSL, as implemented, shows itself to include a gross failure of engineering. The system was engineered very well to requirements with which you disagree. :-) Terms are always debatable! I'd say that engineering *includes* the appropriateness of the requirements. Science does not. Where I would agree: the _protocol_ was engineered very well to meet its requirements. It's not a bad protocol, by any logic. However, no protocol exists within a vacuum; this one exists within a _system_ that is commonly also known as SSL. (Therein lies a big problem here: I know of no separate term to distinguish SSL the protocol from SSL, the secure browsing system that you or I use to send our credit card numbers safely.) [2] AFAIR, Anonymous-Diffie-Hellman, or ADH, is inside the SSL/TLS protocol, and would represent a mighty fine encrypted browsing opportunity. Write to your browser coder today and suggest its immediate employment in the fight against the terrorists with the flappy ears. Just out of interest, do you have an economic cost/benefit analysis for the widespread deployment of gratuitous encryption? No, but it would be an interesting exercise! It's just not that important. It's interesting that you say that ... why is it then that people like Ben Laurie, Eric Young, Eric Rescorla and others spent years writing and deploying software for free? Why do the people at Safari and Mozilla and Konqueror also spend all that time getting SSL to work? I don't claim to know the answer.
But, if their answer is to protect credit card numbers well, actually, I don't think so! And that's the point of the rant: to identify some of these underlying assumptions like SSL protects your credit card numbers and reveal the truth or otherwise. Hopefully, if we can strip out the myths, we'll find the truth. If your browsing privacy is important, you're prepared to click through the alarming messages. If the value of privacy is less than the tiny cost of clicking accept this certificate forever for each site, then it's not a convincing argument for exposing people who don't understand crypto to the risk of MITM. People don't think like us techies do. They see the messages, and they ask for explanations from other people. Who may or may not know what it all means. The end result is the lowest common denominator - if there is a message, then something is wrong. And that's the point: if there is
Re: Who's afraid of Mallory Wolf?
On Monday 24 March 2003 13:02, Steven M. Bellovin wrote: In message [EMAIL PROTECTED], Ian Grigg writes: Who's afraid of Mallory Wolf? Even worse, there's not been any known MITM of any aggressive form. The only cases known are a bunch of demos, under laboratory conditions. They don't count, and MITM remains a theoretical attack, more the subject of learnings and design exercises than the domain of business or crypto engineering. Sorry, that's flat-out false. If nothing else, there was a large-scale MITM attack on the conference 802.11 net at the 2001 Usenix Security Symposium. Thanks Steve, now we are getting closer. 802.11b is where I'd been expecting it to happen, as the costs of the MITM come right down there. Would you characterise the attack as a bunch of techies mucking around, or would you characterise it as an aggressive attempt to gain a commercial advantage? I.e., did the attackers steal anything? Or did they just annoy people by showing how cool they were? I would surmise that's a techie conference, and is thus a demonstration, not a measurable risk. Spammers are hijacking BGP prefixes; see http://www.merit.edu/mail.archives/nanog/2002-10/msg00068.html for one such incident. I can't see clearly whether this is an MITM or a spoofing - did they stand in the middle and listen and divert? Or, did they just tell innocent servers to start re-routing traffic? It seems like an announcement of routes, and the listeners just believed... (But, it is an aggressive attack, someone tried to steal traffic for commercial gain.) I think you may be right in that my use of the term MITM is too broad. The cert in SSL protects against a cryptographic MITM in, for example, an ADH session. But, MITMs outside that are important measurable risks so we can create our threat model. The fact that this attack appears not to be analogous to the SSL-style MITM may or may not be relevant.
Eugene Kashpureff pleaded guilty to domain-name hijacking; used very slightly differently, that's a MITM attack. See http://www.usdoj.gov/criminal/cybercrime/kashpurepr.htm for details. From what I recall, this was a demo. He didn't do it to steal. He did it to highlight the business aspects. Sadly for him, he miscalculated (grossly, it seems). But, his case fits in the sense of not a criminal seeking to steal value, and therefore not a case of measurable risk. I warned of the possibility of hijacking via routing attacks in 1989, and via DNS attacks in 1995. (See the 'papers' directory on my Web site.) I certainly accept them as possible. That's not disputed, and never has been, as indeed, that was the whole thrust of the discussion: The SSL designers put the protection in because the threat was possible. They quite rightly offered the choice in the protocols. Where I am concerned is that they also wrongly forced the certificate path on browsers and servers. To our detriment, and to theirs. Given that the attacks were demonstrably feasible, Netscape would have been negligent not to design for it. Given that such attacks or their near cousins have actually occurred, I'd say they were right. No, I'm afraid that does not hold. The reason we protect against attacks is because when they happen, they incur costs. But, designing in protection also incurs costs. We must do a cost-benefit analysis to decide if it is appropriate to protect against it. To say that attacks are feasible and therefore must be defended against is not how we work. We can guarantee that you are immune to car accidents, simply by asking you to stay at home. You (probably) chose not to do so, because you chose to enjoy the higher benefit of travelling, as against the smaller expected cost of suffering an accident. And yes, you're probably right that no one has stolen credit card numbers that way.
Of course, since the defense was in place before people had an opportunity to try, one can quite plausibly argue that Netscape prevented the attack Right. But it's an empty argument if there is no need. We don't carry umbrellas when the sun is shining, only when the sky is grey. And, we don't build meteorite protection at all, even though we could, and they happen! We use information about real threats and how they hurt us to decide whether to worry about them. And that's why the question about MITMs is so key! The question is, is there a need? From several economic points of view, the need fails to show itself. And, the cost is quite high, both in cash, and lost security. Taking your links above at face value, I'll assume that the cost of stolen/hijacked IP number there was about $10,000 in lost business and customers being annoyed at unexpected porn. Say that happens once a metric month to some random victim ... or, $100,000 per year. That cost simply fails to justify any level of signed-certificate infrastructure, so, I'd conclude that the BGP protocol designers have done
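The back-of-the-envelope arithmetic above can be made explicit. A minimal sketch, using the hypothetical figures from the post (roughly $10,000 of lost business per hijack incident, on the order of ten incidents a year; both numbers are guesses from the discussion, not measured data):

```python
# Hypothetical figures from the discussion: ~$10,000 per hijack incident,
# roughly ten incidents a year, giving the ~$100,000/year cited above.
def annual_expected_loss(cost_per_incident: int, incidents_per_year: int) -> int:
    """Expected yearly loss from one class of attack."""
    return cost_per_incident * incidents_per_year

def defense_is_justified(expected_loss: int, annual_defense_cost: int) -> bool:
    """Crude cost-benefit test: defend only if expected loss exceeds the cost."""
    return expected_loss > annual_defense_cost

loss = annual_expected_loss(10_000, 10)
# If a signed-certificate infrastructure costs more per year than `loss`,
# this crude model says the threat alone does not justify it.
print(loss, defense_is_justified(loss, 500_000))
```

The model is deliberately simplistic (no discounting, no tail risk, no reputational cost), which is exactly the sort of refinement a real cost-benefit analysis would have to argue about.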
Re: Who's afraid of Mallory Wolf?
Ian Grigg wrote: By common wisdom, SSL is designed to defeat the so-called Man in the Middle attack, or MITM for short. The question arises, why? One possible reason: Because DNS is insecure. If you can spoof DNS, you can mount a MITM attack. A second possible reason: It's hard to predict what attacks will become automated. Internet attacks seem to have an all-or-nothing feel: either almost no one exploits them, or they get exploited en masse. The latter ones can be really painful, if you haven't built in protection in advance. You could take your argument even further and ask whether any crypto was needed at all. After all, most attacks have worked by compromising the endpoint, not by sniffing network traffic. I'll let you decide whether to count this as a success story for SSL, or as indication that the crypto wasn't needed in the first place. (I'm a little skeptical of this argument, by the way, but hey, if we're playing devil's advocate, why not aim high?) - The Cryptography Mailing List Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]
Re: Brumley Boneh timing attack on OpenSSL
Nomen Nescio wrote: Regarding using blinding to defend against timing attacks, and supposing that a crypto library is going to have support for blinding: - Should it do blinding for RSA signatures as well as RSA decryption? - How about for DSS signatures? My guess is that it's not necessary, as the attacker doesn't have as much control over the input to the modular exponentiation process in the case of RSA signatures. (For RSA decryption, the attacker can specify the ciphertext freely. However, for signatures, the input to the modular exponentiation is a hash of the attacker's chosen input, which gives the attacker a lot less freedom to play Bleichenbacher-like games.) But then, the recent Klima-Pokorny-Rosa paper shows how even just a tiny crack can lead to subtle, totally unexpected attacks. Who would have thought that SSL's version rollback check (two bytes in the input to the modular exponentiation) could enable such a devastating attack? Not me. The Boneh-Brumley and KPR papers have made me much more paranoid about side-channel attacks. As a result, I might turn blinding on even for signatures by default, out of caution, even though I can't see how such an attack could possibly work. - How about for ElGamal decryption? - Non-ephemeral (static) DH key exchange? Yes, I think I'd use side channel defenses (like blinding) here. I don't know of any attacks off the top of my head, but it sure seems plausible to me that there might be some. - Ephemeral DH key exchange? I wouldn't tend to be very worried about ephemeral exchanges, since all the attacks we've seen so far require many interactions with the server with the same key. I could be wrong, but this seems pretty safe to me. In other words, what do we need as far as blinding support either in developing a crypto library or in evaluating a crypto library for use? Suppose we are running a non-SSL protocol but it is across a real-time Internet or LAN connection where timing attacks are possible. 
And suppose our goal is not to see a paper and exploit published within the next three years telling how to break the protocol's security with a few hours of connect time. Good question. Personally, I'd enable side channel defenses (like blinding) by default in the crypto library in every place that the library does some lengthy computation with a long-lived secret. But I'll be interested to hear what others think. - The Cryptography Mailing List Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]
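For concreteness, the blinding countermeasure under discussion can be sketched with toy textbook-RSA parameters (far too small for any real use; the group sizes and the absence of padding are purely illustrative). The point is that decrypting r^e·c instead of c randomizes the value fed to the secret exponentiation, so the observed timing no longer correlates with the attacker-chosen ciphertext:

```python
import math
import secrets

# Toy textbook-RSA parameters -- illustration only, never use sizes like this.
p, q = 61, 53
n = p * q                    # 3233
e = 17
phi = (p - 1) * (q - 1)      # 3120
d = pow(e, -1, phi)          # 2753 (three-argument pow inverse, Python 3.8+)

def decrypt_plain(c: int) -> int:
    # The secret exponentiation a timing attacker probes directly.
    return pow(c, d, n)

def decrypt_blinded(c: int) -> int:
    # Blind: multiply the ciphertext by r^e for a random invertible r,
    # decrypt, then strip the blinding factor r from the result.
    while True:
        r = secrets.randbelow(n)
        if r > 1 and math.gcd(r, n) == 1:
            break
    blinded = (c * pow(r, e, n)) % n
    m_blinded = pow(blinded, d, n)    # timing now depends on r, not on c alone
    return (m_blinded * pow(r, -1, n)) % n

message = 65
ciphertext = pow(message, e, n)
assert decrypt_blinded(ciphertext) == decrypt_plain(ciphertext) == message
```

A fresh r per decryption costs one extra exponentiation and two multiplications, which is the price being weighed in the thread against the risk of the next Boneh-Brumley-style paper.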
Re: Who's afraid of Mallory Wolf?
Ian Grigg wrote: ... The analysis of the designers of SSL indicated that the threat model included the MITM. On what did they base this? It's hard to pin it down, and it may very well be, being blessed with nearly a decade's more experience, that the inclusion of the MITM in the threat model is simply best viewed as a mistake. I'm sorry to say it but MITM is neither a fable nor restricted to laboratory demos. It's an attack available today even to script kiddies. For example, there is a possibility that some evil attacker redirects the traffic from the user's computer to his own computer by ARP spoofing. With the programs arpspoof, dnsspoof and webmitm in the dsniff package it is possible for a script kiddie to read the SSL traffic in cleartext (list of commands available if there is list interest). For this attack to work the user and the attacker must be on the same LAN or ... the attacker could be somewhere else using a hacked computer on the LAN -- which is not so hard to do ;-) ... Clearly, the browsers should not discriminate against cert-less browsing opportunities The only sign of the spoofing attack is that the user gets a warning about the certificate that the attacker is presenting. It's vital that the user does not proceed if this happens -- contrary to what you propose. BTW, this is NOT the way to make paying for CA certs go away. A technically correct way to do away with CA certs and yet avoid MITM has been demonstrated to *exist* (not by construction) in 1997, in what was called intrinsic certification -- please see www.mcg.org.br/cie.htm Cheers, Ed Gerck - The Cryptography Mailing List Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]
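The certificate check that defeats the dsniff-style attack can be sketched as a fingerprint comparison. This is a hypothetical illustration: the byte strings below stand in for real DER-encoded certificates, and the colon-separated SHA-1 form mimics what browser certificate dialogs of the era displayed. The point is that webmitm must present its own certificate, whose fingerprint cannot match one the user pinned out of band:

```python
import hashlib

def fingerprint(cert_der: bytes) -> str:
    # Colon-separated SHA-1 digest, the format certificate dialogs showed.
    digest = hashlib.sha1(cert_der).hexdigest().upper()
    return ":".join(digest[i:i + 2] for i in range(0, len(digest), 2))

# Obtained out of band: a phone call, a printed invoice, a prior visit.
# (Placeholder bytes -- a real pin would hash the genuine DER certificate.)
PINNED = fingerprint(b"--- the genuine server certificate, DER bytes ---")

def looks_genuine(presented_der: bytes) -> bool:
    """True only if the presented certificate matches the pinned fingerprint."""
    return fingerprint(presented_der) == PINNED

assert looks_genuine(b"--- the genuine server certificate, DER bytes ---")
assert not looks_genuine(b"--- an attacker's substitute certificate ---")
```

This is also why the warning dialog matters: clicking through it is exactly the step that discards the fingerprint mismatch.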
Re: Who's afraid of Mallory Wolf?
On Monday 24 March 2003 14:11, David Turner wrote: Grigg counts the benefits of living in a MITM-protected world (no MITM attacks recorded), as though they would happen with or without MITM protection. Is there any reason to believe that this is, in fact, true? That is indeed the question, sans personal issues. That is, if zero dollars were spent on MITM protection, would there still be no recorded attacks? Actually, I think that if zero dollars had been spent on MITM protection for SSL, then there may well have been some MITM attacks. That then would be a good position to be in, because we could measure the costs of those attacks, and decide from a monetary perspective whether protection at the level of requiring signed certificates is a good thing or just a waste of money. My own guess is that MITM activity is so low across all domains of the net that we would not be able to reliably measure it, and if we could measure it, we'd find it not sufficient to mandate certificates as is currently done. Which - to repeat - is not to remove certs from the servers or browser, but to change the way in which we assume that only cert-protected browsing is good enough. The certs are really good for high end sites (because, economically, they return benefits even if there was no MITM threat). But why are they needed for smaller things? Why do I need a certificate to run an SSL server so that my family can share snapshots for instance? Just a hypothetical... Until that's answered, Grigg's economic analysis is flawed. I used to get picked on, but since I bulked up and learned karate, nobody's picked on me. I guess it was pointless to do those things. You provided your own answer :-) You used to get picked on, so you had a measure of its cost. You acted to defend against those costs. Did you ever get MITM'd? Anywhere? Any time? Anyone you know? -- iang - The Cryptography Mailing List Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]
Re: Keysigning @ CFP2003
On Monday, Mar 24, 2003, at 11:00 US/Eastern, Ian Grigg wrote: On Saturday 22 March 2003 17:12, Douglas F. Calvert wrote: I will be organizing a keysigning session for CFP2003. Please submit your keys to [EMAIL PROTECTED] and I will print out sheets with key information in order to speed up the process. Bring a photo ID and a copy of your key information so that you can verify what is on the printout. A list of submitted keys and a keyring will be available on: I must be out of touch - since when did PGP key signing require a photo id? It's rather efficient if you want to sign a large number of keys of people you mostly do not know personally. -J -- Jeroen C. van Gelderen - [EMAIL PROTECTED] War prosperity is like the prosperity that an earthquake or a plague brings. The earthquake means good business for construction workers, and cholera improves the business of physicians, pharmacists, and undertakers; but no one has for that reason yet sought to celebrate earthquakes and cholera as stimulators of the productive forces in the general interest. -- Ludwig von Mises - The Cryptography Mailing List Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]
Re: Who's afraid of Mallory Wolf?
On Mon, 24 Mar 2003, Peter Clay wrote: On Sun, 23 Mar 2003, Ian Grigg wrote: Consider this simple fact: There has been no MITM attack, in the lifetime of the Internet, that has recorded or documented the acquisition and fraudulent use of a credit card (CC). (Over any Internet medium.) There have, however, been numerous MITM attacks for stealing or eavesdropping on email. A semi-famous case I'm thinking of involves a rabid Baptist minister named Fred Phelps and a Topeka city councilwoman who had the audacity to vote against him running roughshod over the law. He set up routing tables to fool DNS into thinking his machine was the shortest distance from the courthouse where she worked to her home ISP and eavesdropped on her mail. Sent a message to every fax machine in town calling her a Jezebellian whore after getting the skinny on the aftermath of an affair that she was discussing with her husband. And as for theft of credit card numbers, the lack of MITM attacks directly on them is just a sign that other areas of security around them are so loose no crooks have yet had to go to that much trouble. Weakest link, remember? No need to mount a MITM attack if you're able to just bribe the data entry clerk. Just because most companies' security is so poor that it's not worth the crook's time and effort doesn't mean we should throw anyone who takes security seriously enough that a MITM vulnerability might be the weakest link to the wolves. How do you view attacks based on tricking people into going to a site which claims to be affiliated with e.g. Ebay or Paypal, getting them to enter their login information as usual, and using that to steal money? These, technically speaking, are impostures, not MITM attacks. The web makes it ridiculously easy. You can use any linktext or graphic to link to anywhere, and long cryptic URLs are sufficiently standard practice that people don't actually look at them any more to notice a few characters' difference.
On the occasions where people have actually spoofed DNS to route the correct URL to the wrong server in order to get info on people's accounts, that is a full-on MITM attack. And that definitely has happened. I'm surprised to hear someone claim that credit card numbers haven't been stolen that way. I've been more concerned about email than credit cards, so I don't know for sure, but if credit cards haven't been stolen this way then the guys who want them are way behind the guys who want to eavesdrop on email. [2] AFAIR, Anonymous-Diffie-Hellman, or ADH, is inside the SSL/TLS protocol, and would represent a mighty fine encrypted browsing opportunity. Write to your browser coder today and suggest its immediate employment in the fight against the terrorists with the flappy ears. Just out of interest, do you have an economic cost/benefit analysis for the widespread deployment of gratuitous encryption? This is a simple consequence of the fact that the main market for SSL encryption is financial transactions. And no credit card issuer wants fully anonymous transactions; it leaves them holding the bag if anything goes wrong. Anonymous transactions require a different market, which has barely begun to make itself felt in a meaningful way (read: by being willing to pay for it) to anyone who has pockets deep enough to do the development. Bear - The Cryptography Mailing List Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]
Re: Who's afraid of Mallory Wolf?
On Monday, Mar 24, 2003, at 11:37 US/Eastern, Peter Clay wrote: On Sun, 23 Mar 2003, Ian Grigg wrote: Consider this simple fact: There has been no MITM attack, in the lifetime of the Internet, that has recorded or documented the acquisition and fraudulent use of a credit card (CC). (Over any Internet medium.) How do you view attacks based on tricking people into going to a site which claims to be affiliated with e.g. Ebay or Paypal, getting them to enter their login information as usual, and using that to steal money? It's not a pure MITM attack, but the current system at least makes it possible for people to verify with the certificate whether or not the site is a spoof. Correct. On the other hand, in a lot of cases people cannot be expected to do the verification. This shows in the number of people that can be tricked into being spoofed out of their passwords, even when certificates are deployed. That is not an argument against certificates though, it is (partially) an argument against broken user interfaces. Just out of interest, do you have an economic cost/benefit analysis for the widespread deployment of gratuitous encryption? What makes you say it is gratuitous? Or: how can you state my privacy is gratuitous? It's just not that important. If your browsing privacy is important, you're prepared to click through the alarming messages. If the value of privacy is less than the tiny cost of clicking accept this certificate forever for each site, then it's not a convincing argument for exposing people who don't understand crypto to the risk of MITM. This is illogical. Even if a server operator would prefer to allow unauthenticated encryption, he cannot do so without annoying 90% of his customers because they too will be getting these alarming messages. In general, if my browsing privacy is important to me and the server operator is willing to accommodate me, he cannot do so. This however still does not constitute an argument against certificates.
It can be morphed as an argument against browsers not supporting Anonymous-DH. (Note that I'm favoring treating sites offering ADH the same as sites offering a certificate. Each offers different functionality which should be distinguishable in the GUI.) Cheers, -J -- Jeroen C. van Gelderen - [EMAIL PROTECTED] The python has, and I fib no fibs, 318 pairs of ribs. In stating this I place reliance On a séance with one who died for science. This figure is sworn to and attested; He counted them while being digested. -- Ogden Nash - The Cryptography Mailing List Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]
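Jeroen's distinction (ADH stops passive eavesdropping but not an active MITM) is easy to see in a toy sketch. The group parameters below are textbook-tiny and purely illustrative; real anonymous Diffie-Hellman uses large primes:

```python
import secrets

p, g = 23, 5  # toy group -- real deployments use large safe primes

def keypair():
    x = secrets.randbelow(p - 2) + 1   # private exponent in [1, p-2]
    return x, pow(g, x, p)             # (private, public)

# Honest anonymous DH: both ends derive the same key, so a passive
# eavesdropper who sees only the public values learns nothing useful.
a_priv, a_pub = keypair()
b_priv, b_pub = keypair()
assert pow(b_pub, a_priv, p) == pow(a_pub, b_priv, p)

# Active MITM: Mallory substitutes her own public value in each direction.
# With no certificate, neither side can detect the substitution.
m_priv, m_pub = keypair()
key_mallory_alice = pow(a_pub, m_priv, p)   # key Mallory shares with Alice
key_mallory_bob = pow(b_pub, m_priv, p)     # key Mallory shares with Bob
assert pow(m_pub, a_priv, p) == key_mallory_alice  # Alice sees a valid handshake
assert pow(m_pub, b_priv, p) == key_mallory_bob    # ...and so does Bob
```

Both handshakes complete cleanly, which is exactly why a GUI would need to present an ADH session as "encrypted but unauthenticated" rather than as equivalent to a certificate.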
Re: Who's afraid of Mallory Wolf?
So far, as I see it, this is not an issue of the specific SSL protocol, but of unrestrictive browser-to-user interfacing. The only MITM attacks that have been practical, valid attacks of late were specific to Microsoft browser issues when interfacing with SSL. On another note, MITM attacks on SSL are strictly a user education issue. How many users know what a fingerprint is, or what it is designed for? Unless we force the browser to be that strict and never interface with unseen or untrusted fingerprints (impractical), what can you do? - Original Message - From: Jeroen C. van Gelderen [EMAIL PROTECTED] To: Peter Clay [EMAIL PROTECTED] Cc: Ian Grigg [EMAIL PROTECTED]; [EMAIL PROTECTED] Sent: Monday, March 24, 2003 4:50 PM Subject: Re: Who's afraid of Mallory Wolf? On Monday, Mar 24, 2003, at 11:37 US/Eastern, Peter Clay wrote: On Sun, 23 Mar 2003, Ian Grigg wrote: Consider this simple fact: There has been no MITM attack, in the lifetime of the Internet, that has recorded or documented the acquisition and fraudulent use of a credit card (CC). (Over any Internet medium.) How do you view attacks based on tricking people into going to a site which claims to be affiliated with e.g. Ebay or Paypal, getting them to enter their login information as usual, and using that to steal money? It's not a pure MITM attack, but the current system at least makes it possible for people to verify with the certificate whether or not the site is a spoof. Correct. On the other hand, in a lot of cases people cannot be expected to do the verification. This shows in the number of people that can be tricked into being spoofed out of their passwords, even when certificates are deployed. That is not an argument against certificates though, it is (partially) an argument against broken user interfaces. Just out of interest, do you have an economic cost/benefit analysis for the widespread deployment of gratuitous encryption? What makes you say it is gratuitous?
Or: how can you state my privacy is gratuitous? It's just not that important. If your browsing privacy is important, you're prepared to click through the alarming messages. If the value of privacy is less than the tiny cost of clicking accept this certificate forever for each site, then it's not a convincing argument for exposing people who don't understand crypto to the risk of MITM. This is illogical. Even if a server operator would prefer to allow unauthenticated encryption, he cannot do so without annoying 90% of his customers because they too will be getting these alarming messages. In general, if my browsing privacy is important to me and the server operator is willing to accommodate me, he cannot do so. This however still does not constitute an argument against certificates. It can be morphed as an argument against browsers not supporting Anonymous-DH. (Note that I'm favoring treating sites offering ADH the same as sites offering a certificate. Each offers different functionality which should be distinguishable in the GUI.) Cheers, -J -- Jeroen C. van Gelderen - [EMAIL PROTECTED] The python has, and I fib no fibs, 318 pairs of ribs. In stating this I place reliance On a séance with one who died for science. This figure is sworn to and attested; He counted them while being digested. -- Ogden Nash - The Cryptography Mailing List Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED] - The Cryptography Mailing List Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]