RE: A mighty fortress is our PKI, Part III
I, too, would love to get the details, but Peter is right here. The flaw he reported was in the PKI itself, not in the UI. Even given a bulletproof OS with a perfect, non-confusing UI, once the malware has a valid signature that traces to a valid certificate, it is the PKI that has failed.

As for EV certificates being as meaningless as ordinary certificates, that is exactly the point Peter is making. Neither certifies the qualities of the publisher that the end user actually cares about. That would be too expensive and open to liability (and therefore more expensive still). But, in a verbal shell game, the CAs make it sound as if someone with an expensive certificate is trustworthy (in the end user's value system).

-----Original Message-----
From: owner-cryptogra...@metzdowd.com [mailto:owner-cryptogra...@metzdowd.com] On Behalf Of Andy Steingruebl
Sent: Wednesday, September 15, 2010 4:12 PM
To: Peter Gutmann
Cc: cryptography@metzdowd.com
Subject: Re: A mighty fortress is our PKI, Part III

On Wed, Sep 15, 2010 at 8:39 AM, Peter Gutmann pgut...@cs.auckland.ac.nz wrote:

> Some more amusing anecdotes from the world of PKI:

Peter,

Not to be too contrary (though at least a little) - not all of these are really PKI failures, are they?

> - There's malware out there that pokes fake Verisign certificates into the
>   Windows trusted cert store, allowing the malware authors to be their own
>   Verisign.

The malware could just as easily fake the whole UI. Is it really PKI's fault that it doesn't defend against malware? Did even its grandest supporters ever claim it could or did?

> - CAs have issued certs to cybercrime web sites like
>   https://www.pay-per-install.com (an affiliate program for malware
>   installers), because hey, the Russian mafia's money is as good as anyone
>   else's.

Similarly here - non-EV CAs bind DNS names to a field in a certificate. No more. They don't vouch for the business being run, and in any case any such audit would be point-in-time anyway.
I suppose way back when, people promised that certs would do this, but does anyone still believe that, or hold it as an expectation? Perhaps you're setting the bar a bit high?

BTW - do you have pointers to most of the things you've reported? I'd love to get the full sordid details :)

- Andy

---------------------------------------------------------------------
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majord...@metzdowd.com
RE: Walton's Mountain notaries (identity requirements)
-----Original Message-----
From: John Gilmore [mailto:[EMAIL PROTECTED]]
Sent: Monday, January 05, 2004 3:11 PM
To: Carl Ellison
Cc: 'Paul A.S. Ward'; [EMAIL PROTECTED]
Subject: Re: Walton's Mountain notaries (identity requirements)

> ... once again I heard the readings about the edict from Caesar that all
> people return to their home towns to be counted in a census. Maybe we can
> take a lesson from that - and have everyone return to people who have
> known the person, uninterrupted, from birth to the present in order to get
> anything notarized. Anyone who couldn't find such people just couldn't get
> anything notarized, I guess.

It's a lot more complicated than that, Carl. Society can't demand impossible conditions from its citizens as a precondition to existence. (This is true even if the condition is possible for 99% of the citizens; the other 1% have rights too.)

Hi John.

Of course it shouldn't. I was using that extreme example to drive home the point that the concept of identity (via notary, in this case) has been eroded out from under us, and we don't have anything to replace it with. My guess is that a good replacement will not provide traceability but will meet our needs (for reputation). However, it's up to people like us to design that replacement.

- Carl

Carl M. Ellison   [EMAIL PROTECTED]   http://theworld.com/~cme
PGP: 75C5 1814 C3E3 AAA7 3F31 47B9 73F1 7E3C 96E7 2B71
"Officer, arrest that man. He's whistling a copyrighted song."
Walton's Mountain notaries
-----Original Message-----
From: Paul A.S. Ward [mailto:[EMAIL PROTECTED]]
Sent: Monday, December 29, 2003 11:29 AM
Subject: RE: Repudiating non-repudiation

> I was recently the subject of identity theft. Specifically, the thieves
> had my SSN (SIN, actually, since it is in Canada) and my driver's licence
> number. They produced a fake driver's licence and used it to open bank
> accounts in my name. When this all came to light, the bank wanted a
> notarized document that said that I did not open these accounts or know
> anything about them. And what was required for notarization? I had to go
> to city hall and get someone who had never met me before to look at my
> photo ID (which was my driver's licence) and sign the form saying it was
> me! Great system!

I have to look at this as the result of evolution, starting with Walton's Mountain - the society humans used to have starting with cave men, where everyone stayed among the tribe where they were born and where everyone knew everyone else. A specially trusted person in that tribe/town might be made a Notary. A statement by such a notary would mean something.

So, now that the underlying premises that made it all work have evolved out of existence (with the industrial revolution), with what do we replace those mechanisms? If we don't replace them, they'll just keep grinding along as meaningless rituals. Since these are social mechanisms we're talking about, changing them will take a few centuries, most likely. What will be the right underlying axioms/assumptions a few centuries from now? What shall we do in the meantime?

Christmas season is ending - and once again I heard the readings about the edict from Caesar that all people return to their home towns to be counted in a census. Maybe we can take a lesson from that - and have everyone return to people who have known the person, uninterrupted, from birth to the present in order to get anything notarized. Anyone who couldn't find such people just couldn't get anything notarized, I guess.

My bet is that we'll evolve reputation credentials to replace notarization of identity. Identity doesn't have much definition any more, since we started moving from one big city to the next every 4 years. With luck, cryptography will offer a real solution - and I don't mean via some dumb attempt to resurrect identity notarization (a la X.509 CAs).

- Carl

Carl M. Ellison   [EMAIL PROTECTED]   http://theworld.com/~cme
PGP: 75C5 1814 C3E3 AAA7 3F31 47B9 73F1 7E3C 96E7 2B71
"Officer, arrest that man. He's whistling a copyrighted song."
RE: Non-repudiation (was RE: The PAIN mnemonic)
Amir,

My objection is to the word "sender" which, in the definitions I've read, refers to the human being associated with a particular key. As long as we refer to a private key with no implication that this in any way incurs liability for a human being, then I'm happy - but the e-commerce folks are not.

It is important to be able to authenticate a message's origin and verify its integrity - the things that a dsig or MAC give you. When you use a public-key dsig, you have the added security advantage that the key capable of forming that signature does not need to be used to verify it. This is the original technical meaning of the term we're struggling over. However, already in Diffie and Hellman's original paper (which referred to this as "undeniable," if I remember correctly), the confusion had set in. A key would never deny or repudiate anything. That is an action by a human being, and the use of public-key cryptography does not imply anything about the human being to whom that key pair was assigned.

So, I would use the terms "authentication" and "integrity verification" and avoid the term "non-repudiation," since that one refers to human behavior and invokes liability on human beings. Since we have no idea how to make computer systems that capture proof of a human being's behavior and intentions, we cannot claim to have any evidence that could be presented in court to show that a particular human being made a particular commitment, based solely on some digital signature. We can prove that a given private key (to wit, the one private key corresponding to a public key that is entered into evidence) formed a signature over some message or file. However, any attempt to infer more than that is fallacious.
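[A minimal, illustrative sketch of the distinction drawn above between a MAC and a public-key signature: a MAC is created and verified with the same shared key, so a valid tag cannot show a third party which of the two key-holders produced it. The key and messages below are made up for illustration.]

```python
import hashlib
import hmac

# A MAC uses the SAME key to create and to verify. Both sender and receiver
# hold it, so a valid tag proves only "someone with the key sent this" -
# it cannot attribute the message to one specific party for a judge.
shared_key = b"key shared by sender and receiver"   # illustrative key
message = b"pay Alice $100"

# Sender computes the tag...
tag = hmac.new(shared_key, message, hashlib.sha256).digest()

# ...and the receiver verifies with the very same key.
expected = hmac.new(shared_key, message, hashlib.sha256).digest()
assert hmac.compare_digest(tag, expected)

# Which means the receiver could equally well have FORGED a "signed" message:
forged_message = b"pay Alice $1000000"
forged_tag = hmac.new(shared_key, forged_message, hashlib.sha256).digest()
# forged_tag verifies just as well - origin cannot be proven to a third party.
assert hmac.compare_digest(
    forged_tag, hmac.new(shared_key, forged_message, hashlib.sha256).digest())
```

With a public-key signature, by contrast, the verification key cannot create signatures, which is the "original technical meaning" referred to above; the further leap to human liability is what the message argues against.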
If you want to use cryptography for e-commerce, then IMHO you need a contract signed on paper, enforced by normal contract law, in which one party lists the hash of his public key (or the whole public key) and says that s/he accepts liability for any digitally signed statement that can be verified with that public key. Any attempt simply to assume that someone's acceptance of a PK certificate amounts to such a contract is extremely dangerous, and might even be seen as an attempt to victimize a whole class of consumers.

- Carl

Carl M. Ellison   [EMAIL PROTECTED]   http://theworld.com/~cme
PGP: 75C5 1814 C3E3 AAA7 3F31 47B9 73F1 7E3C 96E7 2B71
"Officer, arrest that man. He's whistling a copyrighted song."

-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of Amir Herzberg
Sent: Tuesday, December 23, 2003 1:18 AM
To: [EMAIL PROTECTED]
Subject: Re: Non-repudiation (was RE: The PAIN mnemonic)

Ben, Carl and others,

At 18:23 21/12/2003, Carl Ellison wrote:

> and it included non-repudiation which is an unachievable, nonsense concept.

Any alternative definition or concept to cover what protocol designers usually refer to as non-repudiation specifications? For example, non-repudiation of origin, i.e. the ability of the recipient to convince a third party that a message was sent (to him) by a particular sender (at a certain time)? Or - do you think this is not an important requirement? Or what?

Best regards,

Amir Herzberg
Computer Science Department, Bar Ilan University
Lectures: http://www.cs.biu.ac.il/~herzbea/book.html
Homepage: http://amir.herzberg.name
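[The paper-contract arrangement described above can be sketched mechanically: the contract lists a hash of the liable party's public key, and software checks a presented key against that listed hash before attributing anything under the contract. Key bytes and the hash choice here are illustrative assumptions, not part of any standard.]

```python
import hashlib

def fingerprint(public_key_bytes: bytes) -> str:
    """SHA-256 fingerprint of a public key, as it might be listed in a contract."""
    return hashlib.sha256(public_key_bytes).hexdigest()

# The hash the paper contract lists (illustrative key material).
contract_listed_hash = fingerprint(b"-----BEGIN PUBLIC KEY----- ...example...")

def key_matches_contract(presented_key: bytes) -> bool:
    # Only a key whose hash matches the contract carries the accepted liability;
    # a certificate chain alone proves nothing about who accepted what.
    return fingerprint(presented_key) == contract_listed_hash

assert key_matches_contract(b"-----BEGIN PUBLIC KEY----- ...example...")
assert not key_matches_contract(b"some other key")
```

The point of hashing the key in the contract is only compactness; listing the whole key, as the text notes, works equally well.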
RE: Non-repudiation (was RE: The PAIN mnemonic)
-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of Stefan Kelm
Sent: Tuesday, December 23, 2003 1:44 AM
To: [EMAIL PROTECTED]
Subject: Re: Non-repudiation (was RE: The PAIN mnemonic)

> Ah. That's why they're trying to rename the corresponding keyUsage bit to
> contentCommitment then:
>
> http://www.pki-page.info/download/N12599.doc
>
> :-) Cheers, Stefan.

Maybe, but that page defines it as:

  contentCommitment: for verifying digital signatures which are intended to
  signal that the signer is committing to the content being signed. The
  precise level of commitment, e.g. "with the intent to be bound," may be
  signaled by additional methods, e.g. certificate policy. Since a content
  commitment signing is considered to be a digitally signed transaction, the
  digitalSignature bit need not be set in the certificate. If it is set, it
  does not affect the level of commitment the signer has endowed in the
  signed content. Note that it is not incorrect to refer to this keyUsage
  bit using the identifier nonRepudiation. However, the use of this
  identifier has been deprecated. Regardless of the identifier used, the
  semantics of this bit are as specified in this standard.

That still refers to the signer having an "intent to be bound." One cannot legally bind a key to anything, so the signer here must be a human or organization rather than a key. It is that unjustifiable linkage from the actions of a key to the actions of one or more humans that needs to be eradicated from the literature.

- Carl

Carl M. Ellison   [EMAIL PROTECTED]   http://theworld.com/~cme
PGP: 75C5 1814 C3E3 AAA7 3F31 47B9 73F1 7E3C 96E7 2B71
"Officer, arrest that man. He's whistling a copyrighted song."
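[For readers unfamiliar with the keyUsage bit being renamed: in the X.509 / RFC 5280 KeyUsage bit string, digitalSignature is bit 0 and nonRepudiation (the bit discussed above, renamed contentCommitment) is bit 1. A small sketch of decoding those flags; the integer encoding below is a simplification (real certificates carry a DER BIT STRING), so treat it as illustrative only.]

```python
# X.509 KeyUsage bit names in standard bit order (bit 0 first).
KEY_USAGE_BITS = [
    "digitalSignature",   # bit 0
    "contentCommitment",  # bit 1 (formerly nonRepudiation)
    "keyEncipherment",    # bit 2
    "dataEncipherment",   # bit 3
    "keyAgreement",       # bit 4
    "keyCertSign",        # bit 5
    "cRLSign",            # bit 6
    "encipherOnly",       # bit 7
    "decipherOnly",       # bit 8
]

def decode_key_usage(bits: int) -> list:
    """Decode flags from an int whose bit i corresponds to KeyUsage bit i."""
    return [name for i, name in enumerate(KEY_USAGE_BITS) if bits & (1 << i)]

# Per the quoted text: contentCommitment may be set with or without
# digitalSignature, and the combination does not change the commitment level.
assert decode_key_usage(0b10) == ["contentCommitment"]
assert decode_key_usage(0b11) == ["digitalSignature", "contentCommitment"]
```

Note that what the bit signals (a human's commitment) remains exactly the problem the message above objects to; the decoding merely shows what is on the wire.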
RE: Non-repudiation (was RE: The PAIN mnemonic)
Amir,

I am glad to see that you are treating this seriously. It is always possible to use the term "non-repudiation" for some legitimately defined thing - but I personally would prefer not to use the term, because it has been tarred by over a decade of misuse (implying some presumption of liability on the part of a human being as a result of the behavior of a cryptographic key). I wish you luck with your protocols and look forward to seeing them.

- Carl

Carl M. Ellison   [EMAIL PROTECTED]   http://theworld.com/~cme
PGP: 75C5 1814 C3E3 AAA7 3F31 47B9 73F1 7E3C 96E7 2B71
"Officer, arrest that man. He's whistling a copyrighted song."

-----Original Message-----
From: Amir Herzberg [mailto:[EMAIL PROTECTED]]
Sent: Thursday, December 25, 2003 2:47 AM
To: Carl Ellison; [EMAIL PROTECTED]
Subject: RE: Non-repudiation (was RE: The PAIN mnemonic)

At 04:20 25/12/2003, Carl Ellison wrote:

> ... If you want to use cryptography for e-commerce, then IMHO you need a
> contract signed on paper, enforced by normal contract law, in which one
> party lists the hash of his public key (or the whole public key) and says
> that s/he accepts liability for any digitally signed statement that can be
> verified with that public key.

Of course! I fully agree; in fact the first phase in the `trusted delivery layer` protocols I'm working on is exactly that - ensuring that the parties (using some external method) agreed on the keys and the resulting liability. But when I define the specifications, I use `non-repudiation` terms for some of the requirements. For example, the intuitive phrasing of the Non-Repudiation of Origin (NRO) requirement is: if any party outputs an evidence evid s.t. valid(agreement, evid, sender, dest, message, time-interval, NRO), then either the sender is corrupted or the sender originated message to the destination dest during the indicated time-interval. Notice of course that "sender" here is an entity in the protocol, not the human being `behind` it.
Also notice this is only an intuitive description, not the formal specification.

Best regards,

Amir Herzberg
Computer Science Department, Bar Ilan University
Lectures: http://www.cs.biu.ac.il/~herzbea/book.html
Homepage: http://amir.herzberg.name
The PAIN mnemonic
> A security taxonomy, PAIN:
> * privacy (aka things like encryption)
> * authentication (origin)
> * integrity (contents)
> * non-repudiation

Sorry, Lynn, but I don't buy this. It's missing replay prevention (freshness), and it included non-repudiation, which is an unachievable, nonsense concept. If you want to keep the mnemonic, you can change the 4th one to non-replay.

- Carl

Carl M. Ellison   [EMAIL PROTECTED]   http://theworld.com/~cme
PGP: 75C5 1814 C3E3 AAA7 3F31 47B9 73F1 7E3C 96E7 2B71
"Officer, arrest that man. He's whistling a copyrighted song."
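[A minimal sketch of the "non-replay" (freshness) property proposed above as the fourth letter: the verifier remembers nonces it has already accepted and rejects any message whose nonce repeats or whose timestamp is stale. The window size and nonce handling are illustrative choices, not a protocol specification.]

```python
import time

MAX_AGE_SECONDS = 30       # illustrative freshness window
seen_nonces = set()        # nonces already accepted by this verifier

def is_fresh(nonce: bytes, timestamp: float, now: float = None) -> bool:
    """Accept a (nonce, timestamp) pair at most once, and only while recent."""
    now = time.time() if now is None else now
    if now - timestamp > MAX_AGE_SECONDS:
        return False       # too old: could be a replay of a captured message
    if nonce in seen_nonces:
        return False       # seen before: definite replay
    seen_nonces.add(nonce)
    return True

assert is_fresh(b"n1", 100.0, now=110.0)      # first use, recent: accepted
assert not is_fresh(b"n1", 100.0, now=110.0)  # same nonce again: rejected
assert not is_fresh(b"n2", 100.0, now=200.0)  # stale timestamp: rejected
```

The timestamp bound keeps the seen-nonce set prunable: nonces older than the window can be forgotten, since their messages would be rejected as stale anyway.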
Non-repudiation (was RE: The PAIN mnemonic)
-----Original Message-----
From: Anne Lynn Wheeler [mailto:[EMAIL PROTECTED]]
Sent: Sunday, December 21, 2003 6:42 AM
To: Carl Ellison
Cc: 'Anne Lynn Wheeler'; [EMAIL PROTECTED]
Subject: Re: The PAIN mnemonic

> At 11:20 PM 12/20/2003 -0800, Carl Ellison wrote:
>> and it included non-repudiation which is an unachievable, nonsense concept.
>
> one could look at one aspect of non-repudiation as the requirement for
> everybody having a unique pin/password with guidelines never to share
> pin/passwords ... which could be considered across a broad range of
> security activities.

That's an interesting definition, but you're describing a constraint on the behavior of a human being. This has nothing to do with cryptosystem choice or network protocol design. What mechanisms do you suggest for enforcing even the constraint you cite?

Of course, that constraint isn't enough. In order to achieve non-repudiation, the way it is defined, you need to prove to a third party (the judge) that a particular human being knowingly caused a digital signature to be made. A signature can be made without the conscious action of the person to whom that key has been assigned in a number of ways, none of which involves negligence by that person.

Let's just leave the term non-repudiation to be used by people who don't understand security, but rather mouth things they've read in books that others claim are authoritative. There are lots of those books listing non-repudiation as a feature of public key cryptography, for example, and many listing it as an essential security characteristic. All of that is wrong, of course, but it's a test for the reader to see through it.

- Carl

Carl M. Ellison   [EMAIL PROTECTED]   http://theworld.com/~cme
PGP: 75C5 1814 C3E3 AAA7 3F31 47B9 73F1 7E3C 96E7 2B71
"Officer, arrest that man. He's whistling a copyrighted song."
RE: Difference between TCPA-Hardware and a smart card (was: example: secure computing kernel needed)
Seth, that was a very good and interesting reply. Thank you.

IBM has started rolling out machines that have a TPM installed. If other companies do that too (and there might be others that do already - I don't follow this closely), then gradually the installed base of TPM-equipped machines will grow. It might take 10 years - or even more - before every machine out there has a TPM. However, that day may well come. Then again, TPMs cost money, and I don't know any private individuals who are willing to pay extra for a machine with one. Given that, it is unlikely that TPMs will actually become a popular feature.

Some TPM machines will be owned by people who decide to do what I suggested: install a personal firewall that prevents remote attestation. With wider dissemination of your reasoning, that number might be higher than it would be otherwise.

Meanwhile, there will be hackers who accept the challenge of defeating the TPM. There will be TPM private keys loose in the world, operated by software that has no intention of telling the truth to remote challengers. There might even be one or more web services out there with a pool of such keys, offering to do an attestation for you, telling whatever lie you want to tell. With such a service in operation, it is doubtful that a service or content provider would put much faith in remote attestation - and that, too, might kill the effort.

At this point, a design decision by the TCPA (TCG) folks comes into play. There are ways to design remote attestation that preserve privacy, and there are ways that allow linkage of transactions by the same TPM. If the former is chosen, then the lying web service needs very few keys; if the privacy protection is perfect, it needs only one. If the latter is chosen, the privacy violation is so strong that the web service won't work, but the TCG folks will have set themselves up for a massive political campaign around their violation of user privacy. Either of these outcomes will kill the TCG, IMHO.
This is the reason that, when I worked for a hardware company active in the TCPA (TCG), I argued strongly against supporting remote attestation. I saw no way that it could succeed.

Meanwhile, I am no longer at that company. I have myself to look out for. If I get a machine with a TPM, I will make sure I have the firewall installed. I will use the TPM for my own purposes and let the rest of the world think that I have an old machine with no TPM.

You postulated that someday, when the TPM is ubiquitous, some content providers will demand remote attestation. I claim it will never become ubiquitous - because of people making my choice, because it takes a long time to replace the installed base, and because the economic model for TPM deployment is seriously flawed. If various service or content providers elect not to allow me service unless I do remote attestation, I then have two choices: use the friendly web service that will lie for me, or decline the content or service.

The scare scenario you paint is one in which I am the lone voice of concern floating in a sea of people who will happily give away their privacy and allow some service or content provider to demand this technology on my end. In such a society, I would stand out and be subject to discrimination. This is not a technical problem. This is a political problem. If that is a real danger, then we need to educate those people.

RIAA and MPAA have been hoping for some technological quick fix to let them avoid facing the hard problem of dealing with people who don't think the way they would like people to think. It seems to me that you and John Gilmore and others are doing exactly the same thing - hoping for technological censorship to succeed so that you can avoid facing the hard problem of dealing with people who don't think the way they "should" (in this case, the people who happily give away their privacy and accept remote attestation in return for dancing pigs).
I don't have the power to stop this technology if folks decide to field it. I have only my own reason and skills.

- Carl

Carl M. Ellison   [EMAIL PROTECTED]   http://theworld.com/~cme
PGP: 75C5 1814 C3E3 AAA7 3F31 47B9 73F1 7E3C 96E7 2B71
"Officer, arrest that man. He's whistling a copyrighted song."

-----Original Message-----
From: Seth David Schoen [mailto:[EMAIL PROTECTED]] On Behalf Of Seth David Schoen
Sent: Sunday, December 21, 2003 3:03 PM
To: Carl Ellison
Cc: 'Stefan Lucks'; [EMAIL PROTECTED]
Subject: Re: Difference between TCPA-Hardware and a smart card (was: example: secure computing kernel needed)
RE: Difference between TCPA-Hardware and a smart card (was: example: secure computing kernel needed)
We see here a difference between your side of the Atlantic and mine. Here in the US, almost no one has a smart card. Of those cards you carry, how many are capable of doing public-key operations? A simple memory smartcard doesn't count for what we were talking about.

There are other problems with doing TCPA-like operations with a smartcard, but I didn't go into those. The biggest one to chew on is that I, the computer owner, need verification that my software is in good shape. My agent in my computer (presumably the smartcard) needs a way to examine the software state of my computer without relying on any of the software in my computer (which might have been corrupted, if the computer's S/W has been corrupted). This implies to me that my agent chip needs a H/W path for examining all the S/W of my computer. That's something the TPM gives us that a smartcard doesn't (when that smartcard goes through a normal device driver to access its machine).

- Carl

Carl M. Ellison   [EMAIL PROTECTED]   http://theworld.com/~cme
PGP: 75C5 1814 C3E3 AAA7 3F31 47B9 73F1 7E3C 96E7 2B71
"Officer, arrest that man. He's whistling a copyrighted song."

-----Original Message-----
From: Ben Laurie [mailto:[EMAIL PROTECTED]]
Sent: Friday, December 19, 2003 2:42 AM
To: Carl Ellison
Cc: 'Stefan Lucks'; [EMAIL PROTECTED]
Subject: Re: Difference between TCPA-Hardware and a smart card (was: example: secure computing kernel needed)

Carl Ellison wrote:

> It is an advantage for a TCPA-equipped platform, IMHO. Smart cards cost
> money. Therefore, I am likely to have at most 1.

If I glance quickly through my wallet, I find 7 smartcards (all credit cards). Plus the one in my phone makes 8. So, run that "at most 1" argument past me again?

Cheers,

Ben.

--
http://www.apache-ssl.org/ben.html   http://www.thebunker.net/
"There is no limit to what a man can do or how far he can go if he doesn't mind who gets the credit." - Robert Woodruff
RE: Difference between TCPA-Hardware and a smart card (was: example: secure computing kernel needed)
Stefan, I replied to much of this earlier, so I'll skip those parts.

- Carl

Carl M. Ellison   [EMAIL PROTECTED]   http://theworld.com/~cme
PGP: 75C5 1814 C3E3 AAA7 3F31 47B9 73F1 7E3C 96E7 2B71
"Officer, arrest that man. He's whistling a copyrighted song."

-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of Stefan Lucks
Sent: Tuesday, December 16, 2003 1:02 AM
To: Carl Ellison
Cc: [EMAIL PROTECTED]
Subject: RE: Difference between TCPA-Hardware and a smart card (was: example: secure computing kernel needed)

> On Mon, 15 Dec 2003, Carl Ellison wrote:
>
> The point is that Your system is not supposed to prevent You from doing
> anything I want you not to do! TCPA is supposed to lock You out of some
> parts of Your system.

This has nothing to do with the TCPA / TPM hardware. This is a political argument about the unclean origins of TCPA (as an attempt to woo Hollywood). I, meanwhile, never did buy the remote attestation argument for high-priced content. It doesn't work.

So, I looked at this as an engineer. OK, I've got this hardware. If remote attestation is worthless, then I can and should block it (e.g., with a personal firewall). Now, if I do that, do I have anything of value left? My answer was that I did - as long as I could attest about the state of the software to myself, the machine owner. This required putting the origins of the project out of my head while I thought about the engineering. That took effort, but paid off (to me).

[...]

>> If it were my machine, I would never do remote attestation. With that one
>> choice, I get to reap the personal advantages of the TPM while disabling
>> its behaviors that you find objectionable (serving the outside master).
>
> I am not sure whether I fully understand you. If you mean that TCPA comes
> with the option to run a secure kernel where you (as the owner and
> physical holder of the machine running it) have full control over what the
> system is doing and isn't doing -- ok, that is a nice thing. On the other
> hand, we would not need a monster such as TCPA for this.

What we need is some agent of mine - a chip - that:

1) has access to the machine guts, so it can verify S/W state;
2) has a cryptographic channel to me, so it can report that result to me; and
3) has its own S/W in a place where no attacker could get to it, even if that attacker had complete control over the OS.

The TCPA/TPM can be used that way. Meanwhile, the TPM has no channel to the outside world, so it is not capable of doing remote attestation by itself. You need to volunteer to allow such communications to go through. If you don't like them, then block them. Problem solved.

This reminds me of the abortion-debate bumper sticker: if you're against abortion, don't have one.

- Carl
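[The "verify S/W state" task given to the agent chip above can be sketched with a TPM-style measurement chain: each component is folded into a running hash (the PCR "extend" operation, new = H(old || H(measurement))), so the final value changes if any measured component changes. Component names and the reset value here are illustrative, not taken from the TCPA spec.]

```python
import hashlib

def extend(pcr: bytes, measurement: bytes) -> bytes:
    """TPM-style PCR extend: fold a new measurement into the running value."""
    return hashlib.sha256(pcr + hashlib.sha256(measurement).digest()).digest()

def measure_stack(components) -> bytes:
    """Summarize a whole software stack as one value the agent can report."""
    pcr = b"\x00" * 32                  # illustrative power-on reset value
    for component in components:
        pcr = extend(pcr, component)
    return pcr

good = measure_stack([b"bootloader v1", b"kernel v1", b"init v1"])
bad = measure_stack([b"bootloader v1", b"kernel v1 (tampered)", b"init v1"])

assert good != bad                      # any tampering changes the summary
# The same stack always yields the same summary, so the owner can compare
# the reported value against a known-good reference.
assert good == measure_stack([b"bootloader v1", b"kernel v1", b"init v1"])
```

Reporting this value to the owner over requirement 2's cryptographic channel is attestation to oneself; nothing here requires exposing the value to a remote challenger.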
RE: yahoo to use public key technology for anti-spam
I'm not connecting to an open relay. When I pay for service at the local internet café, part of what I get for my money is time on their SMTP server - ditto when I pay for cable modem service, as I am doing right now. My cable modem provider is cablespeed.com, and its SMTP server is mail.cablespeed.com. As far as I know, it's available only to its legit subscribers.

However, at the end of the month, I'll be signed up with a different cable modem provider. That relationship will last a couple of months, and then I'll be with a different one. Each of these is legit. None is an open relay. But I don't want to send change-of-address notes out to all my friends every time I change - so I receive through (and identify myself via) a remailer at acm.org.

- Carl

Carl M. Ellison   [EMAIL PROTECTED]   http://theworld.com/~cme
PGP: 75C5 1814 C3E3 AAA7 3F31 47B9 73F1 7E3C 96E7 2B71
"Officer, arrest that man. He's whistling a copyrighted song."

-----Original Message-----
From: Anton Stiglic [mailto:[EMAIL PROTECTED]]
Sent: Sunday, December 07, 2003 2:11 PM
To: Carl Ellison; 'Will Rodger'; 'Steve Bellovin'; [EMAIL PROTECTED]
Subject: Re: yahoo to use public key technology for anti-spam

----- Original Message -----
From: "Carl Ellison" [EMAIL PROTECTED]
To: 'Will Rodger' [EMAIL PROTECTED]; 'Steve Bellovin' [EMAIL PROTECTED]; [EMAIL PROTECTED]
Sent: Sunday, December 07, 2003 8:44 AM
Subject: RE: yahoo to use public key technology for anti-spam

> I, for one, hate the idea. My From address should be [EMAIL PROTECTED]
> That's my remailer where I receive all my incoming e-mail. However, my
> outgoing SMTP server depends on which cable modem provider or hot spot I
> happen to be at at the moment. It would be that SMTP machine that signs my
> outgoing mail, not acm.org, which never sees my outgoing mail.

But you should be sending mails via *your* SMTP server, and should be connecting to that SMTP server using SSL and authentication. Open relays encourage spam. People shouldn't be relaying mail via just any SMTP server.

--Anton
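[The mismatch discussed in this thread - a stable From: address at a remailer versus whichever provider's SMTP server happens to submit the message - can be sketched with Python's standard email library. All addresses and hostnames below are illustrative placeholders.]

```python
from email.message import EmailMessage

# The From: header carries the stable remailer identity, regardless of
# which provider's SMTP server will actually relay the message.
msg = EmailMessage()
msg["From"] = "someone@acm.org"          # stable address at the remailer
msg["To"] = "friend@example.org"
msg["Subject"] = "hello"
msg.set_content("The From: address stays the same when I switch providers.")

# Submission would then go through the current provider's server, e.g.:
#
#   import smtplib
#   with smtplib.SMTP("mail.cablespeed.com", 587) as s:   # this month's ISP
#       s.starttls()
#       s.login(username, password)
#       s.send_message(msg)
#
# Any server that signs or authenticates that hop is the provider's machine,
# not acm.org - which is exactly the objection to server-side signing above.
assert msg["From"] == "someone@acm.org"
```

A sender-domain signing scheme would thus vouch for cablespeed.com having relayed the message, not for acm.org, even though acm.org is the identity the recipients see.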
PKI Research Workshop '04, CFP
The third annual PKI Research Workshop CFP has been posted:

http://middleware.internet2.edu/pki04/

This workshop considers the full range of public key technology used for security decisions. PKI supports a variety of functionalities including authentication, authorization, identity (syndication, federation and aggregation) and trust. We solicit papers, scenarios, war stories, panel proposals, and participation from researchers, systems architects, vendor engineers and above all users.

- Carl

Carl M. Ellison   [EMAIL PROTECTED]   http://theworld.com/~cme
PGP: 75C5 1814 C3E3 AAA7 3F31 47B9 73F1 7E3C 96E7 2B71
"Officer, arrest that man. He's whistling a copyrighted song."
Re: UPnP Security specs available for review
Hi John.

I'm sorry you were disappointed. I appreciate your comments on the overview and summary, though.

1024 bits is not an upper limit on key size, but a lower limit. I appreciate your suggestion of varying key lengths and am glad that you have put it in the open literature (this mailing list). We explicitly plan for letting people add algorithms or key lengths of their choice, should they find the defaults unacceptable. As with any other standard, this is not an individual effort but a group activity in a standards committee.

A number of our members have, or plan to have, small devices with limited processing power. There was significant resistance to the expense of public-key operations, which is why we restricted it to two actions - TakeOwnership and SetSessionKeys, both rare - and set the default key size at 1024 bits.

The reason we would ever send plaintext is that encryption is seen as expensive for these limited devices. It is additionally expensive in this design because we were limited, by the inability to violate the pre-existing device architecture, to using a tunneling method for encryption. That turns out to have been an advantage for a variety of reasons, but it also increased the expense. However, any manufacturer is free to encrypt all traffic, if it chooses. That's all up to the manufacturer.

On pp. 24-25, we give advice to manufacturers on how to choose a TakeOwnership password length. The actual choice is up to the manufacturer, of course.

I'm glad you enjoyed point #14. That was a design requirement from the beginning of the project. That is, in some devices (e.g., a remotely accessible power meter), it was envisioned that there will be functions that the manufacturer reserves to itself.
Rather than have the manufacturer muck with the normal device's access-control structure and make its behavior strange, we strongly suggest that the manufacturer create a sub-device to carry its own reserved functions and leave that sub-device owned by the manufacturer from the beginning.

This mail thread might serve as advice to manufacturers. The suggestions you have made do not contradict the spec, but should be a valuable addition to it.

Thanks again.

- Carl

At 12:38 AM 8/23/2003 -0700, John Gilmore wrote:

Carl,

What's the design lifetime of this security system? 1024-bit RSA is too short. If you're going to all the trouble to build a supposedly secure system, use a length that won't be broken. My suggestion these days is significantly north of 2048 bits. Don't use a power of two, and, ideally, use key lengths that vary among devices, so that there's no sweet spot for someone to build a key-cracking machine for. E.g., one device that implements your spec might have a key length of 2432 bits; the next one a length of 2200 bits; a third, 2648 bits.

It's clear that the crypto implemented in these devices in the near future is going to be in iterative software, rather than wide hardware, so there's no reason to limit the keys to 1024 bits except for performance and the tiny cost of memory space. And we all know that:

- the number of public-key operations required is low;
- the latency of public-key operations is usually negligible at system level;
- the performance of hardware always increases rapidly;
- available memory always increases rapidly.

So don't fall into the NSA trap that's plagued cellphones and every other consumer device. Don't build a security system for every household device that is secure ENOUGH but has a weak link designed in. And don't forget that Intel wants to sell that increased-performance hardware too. So the crypto system should be right at the sluggishly slow point when first released.
> Because two or three years out, new devices will be plenty fast, and then a
> few years later any crypto overhead will be invisible in new devices.
>
> Also, once you have established session keys between two devices, why would
> you EVER send plaintext between them (page 6, paragraph 9)? The spec should
> say that plaintext messages will not be accepted, and the implementations
> should definitely ignore any that arrive. This is the failure mode of US
> cellphones: the TDMA and CDMA standards define a (poor) encryption scheme,
> but even though it's in every phone, the cellphone service vendors have all
> been pressured to disable it on every call. (In fact it isn't even built
> into the base stations.) IF IT'S POSSIBLE TO DISABLE THE CRYPTO, THE
> GOVERNMENT WILL CAUSE IT TO HAPPEN IN PRACTICE. So spec it so each device
> *must* have the secure crypto, on every message, or it won't interoperate.
>
> Also, what's this business about manufacturers generating the long-term
> keys and putting them in the devices and not letting users change them
> (pg 6, first sentence)? Have you gone over to the Dark Side? How many
> seconds would it take for a rogue Security Console to try all possible
> 6-uppercase-letters passwords after you plug in a device (e.g. to charge)
> and before you try to control it from
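John's suggestion of per-device key lengths that avoid powers of two could be sketched as follows. The size band and the rejection rule are illustrative assumptions, not part of any spec:

```python
import secrets

def pick_key_length(low=2200, high=2700):
    """Pick a per-device RSA modulus size in [low, high] bits, rejecting
    exact powers of two, so no single size becomes a 'sweet spot' for a
    key-cracking machine across the installed base."""
    while True:
        bits = low + secrets.randbelow(high - low + 1)
        # bits & (bits - 1) == 0 exactly when bits is a power of two
        # (2048 would be rejected; 2432, 2200, 2648 all pass).
        if bits & (bits - 1) != 0:
            return bits

# Three hypothetical devices, each with its own modulus size.
sizes = [pick_key_length() for _ in range(3)]
```

The actual key generation would then feed `bits` into whatever RSA implementation the device uses; the point is only that the size itself varies per device.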
UPnP Security specs available for review
http://www.upnp.org/draftspecs/

Enjoy,

 - Carl

+------------------------------------------------------+
|Carl Ellison          Intel R&D   E: [EMAIL PROTECTED]|
|2111 NE 25th Ave                  T: +1-503-264-2900  |
|Hillsboro OR 97124                F: +1-503-264-3375  |
|PGP Key ID: 0xFE5AF240                                |
|  1FDB 2770 08D7 8540 E157 AAB4 CC6A 0466 FE5A F240   |
+------------------------------------------------------+

---------------------------------------------------------------------
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
Re: SDSI/SPKI background
At 12:00 PM 6/13/2003 +0200, Stefan Mink wrote:

> Hi Carl,
>
> On Wed, Jun 11, 2003 at 09:56:12PM -0700, Carl Ellison wrote:
>> There's one draft that should have gone on to RFC, but people were using
>> it from the draft instead. It's my fault that we left it at that stage
>> and didn't publish the RFC. That's still on my list of things to do :-)
>> It seems that other work kept getting in the way.
>
> I guess it's the draft about the certificate structure?

There were two: the certificate structure draft and the examples draft. But you're right, it's the certificate structure that we used from the draft without waiting for the RFC.

>> stand-alone product like PGP. It's a tool to be used within other
>> products. It's also almost exclusively for a closed authorization
>> infrastructure, rather than an open naming infrastructure. In fact,
>
> Is there a special reason why the authorisation system can't or shouldn't
> be open here? Most systems and services are distributed and are developed
> independently, so an open standard would be reasonable here too, wouldn't it?

Of course, you're correct. If we knew we were all using the same language rather than local dialects, we might have some common tools that people would be encouraged to write: e.g., a certificate viewer, a certificate path discovery service, ...

>> under SPKI/SDSI thinking, a global naming infrastructure is not a proper
>> use of one's time and energy. This is doubtless why the PKI vendors react
>> with hostility toward SPKI/SDSI.
>
> agreed :)
>
>> Yes. Check out KeyNote and PolicyMaker. There are links to those from my
>> web page.
>
> I couldn't access the latter one but found a copy on citeseer

You can't access my SPKI web page? http://theworld.com/~cme/html/spki.html It works for me.

>> Of course, you don't have to use certificates for authorization. You can
>> bind an authorization to a key in a protected database (a key-based ACL,
>> in SPKI/SDSI terminology). Samples of that are SSH and X9.59.
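The key-based-ACL alternative mentioned above can be sketched in a few lines: authorizations are bound directly to (hashes of) public keys in a protected database, so revocation is just an edit to that database, with no free-floating certificate copies to chase down. All names and permissions here are hypothetical illustrations, not SPKI/SDSI syntax:

```python
import hashlib

# The protected database: key hash -> set of granted permissions.
acl = {}

def key_id(public_key_bytes):
    """Identify a principal by the hash of its public key."""
    return hashlib.sha256(public_key_bytes).hexdigest()

def grant(public_key_bytes, permission):
    acl.setdefault(key_id(public_key_bytes), set()).add(permission)

def revoke(public_key_bytes, permission):
    # Revocation is immediate and local: edit the database in place.
    acl.get(key_id(public_key_bytes), set()).discard(permission)

def is_authorized(public_key_bytes, permission):
    return permission in acl.get(key_id(public_key_bytes), set())

# Hypothetical principal identified only by her key.
alice = b"alice-public-key-bytes"
grant(alice, "read")
assert is_authorized(alice, "read")
revoke(alice, "read")
assert not is_authorized(alice, "read")
```

The design-time trade-off Carl describes below is exactly this: the ACL lives in one protected place (good for revocation, bad under network partition), whereas certificates are freely copied (the reverse).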
> sure, but I like the idea of storing the privileges independent of the
> service instance; of course there are drawbacks (revocation)...

I owe a paper on this. I've been looking into this heavily for a couple of years. See my section on the CAP Theorem on the SPKI web page. Since everything we do with certificates can be done equally with local protected memory (ACLs, directories) or with services out on the net (holding their own protected memory), you have to have a reason to choose one over the other at design time. Network partition tolerance is one of those reasons, but you have to sacrifice either consistency or availability when you do that.

There is also an advantage in revocation with the two ACL models (local ACL; networked service), since there are no freely copied certificates in use. However, you're not home free. We're not designing systems that have only computers connected by communications channels. Our systems perforce include human beings (e.g., as policy administrators). It is a human being who decides to do a revocation. That human doesn't live in the local machine granting access. Even if she did live right next door, she would soon quit if every access request required her personal interaction. So, when you draw the network to include all those humans, you still have network partition problems and still have the possibility of a revocation problem.

We went on to use it in products and research. We were and are a group of developers and researchers, not standards writers. Standards writing is fundamentally boring. :)

> Thanks, tschüss ("bye")
> Stefan Mink
> --
> Stefan Mink, Schlund+Partner AG (AS 8560)
> Primary key fingerprint: 389E 5DC9 751F A6EB B974 DC3F 7A1B CF62 F0D4 D2BA

BTW, Stefan, my mailer throws up on Mutt messages. I need to get a new mailer for this machine, but can Mutt send signed messages in the old-fashioned in-line style?
 - Carl

+------------------------------------------------------+
|Carl Ellison          Intel R&D   E: [EMAIL PROTECTED]|
|2111 NE 25th Ave                  T: +1-503-264-2900  |
|Hillsboro OR 97124                F: +1-503-264-3375  |
|PGP Key ID: 0xFE5AF240                                |
|  1FDB 2770 08D7 8540 E157 AAB4 CC6A 0466 FE5A F240   |
+------------------------------------------------------+