William Arbaugh writes:

> If that is the case, then strong authentication provides the same
> degree of control over your computer. With remote attestation, the
> distant end determines if they wish to communicate with you based on
> the fingerprint of your configuration. With strong authentication, the
> distant end determines if they wish to communicate with you based on
> your identity.
I'm a little confused about why you consider these similar. They seem
very different to me, particularly in the context of mass-market
transactions, where a service provider is likely to want to deal with
"the general public". While it's true that service providers could try
to demand some sort of PKI credential as a way of getting the true name
of those they deal with, the particular things they can do with a true
name are much more limited than the things they could do with proof of
someone's software configuration. Also, in the future, the cost of
demanding a true name could be much higher than the cost of demanding a
proof of software identity.

To give a trivial example, I've signed this paragraph using a PGP clear
signature made by my key 0167ca38. You'll note that the Version header
claims to be "PGP 17.0", but in fact I don't have a copy of PGP 17.0; I
simply modified that header with my text editor. You can tell that this
paragraph was written by me, but not what software I used to write it.
As a result, you can't usefully expect to take any action based on my
choice of software -- but you can take some action based on whether you
trust me (or the key 0167ca38).

You can adopt a policy that you will only read signed mail -- or only
mail signed by a key that Phil Zimmermann has signed, or a key that
Bruce Lehman has signed -- but you can't adopt a policy that you will
only read mail written by mutt users. In the present environment, it's
somewhat difficult to use technical means to increase or diminish
others' incentive to use particular software (at least if there are
programmers actively working to preserve interoperability).

Sure, attestation for platform identity and integrity has some things
in common with authentication of human identity. (They both use
public-key cryptography, they can both use a PKI, and they both attempt
to prove things to a challenger based on establishing that some entity
has access to a relevant secret key.)
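The Version trick works because, in an OpenPGP cleartext signature, the
signature covers only the message text; the ASCII-armor headers inside
the signature block are unauthenticated metadata. Here is a minimal
Python sketch of that structural point (not a real verifier; the message
and signature data below are made-up placeholders, not my actual signed
paragraph):

```python
# Sketch: the "Version:" armor header of a PGP clear signature lies
# outside the signed text, so editing it cannot invalidate the signature.
# Placeholder message; the signature data is fake.

def split_clearsigned(msg: str):
    """Split a clearsigned message into (signed_text, armor_headers)."""
    lines = msg.splitlines()
    sig_start = lines.index("-----BEGIN PGP SIGNATURE-----")
    body_start = lines.index("") + 1      # blank line ends the hash headers
    signed_text = "\n".join(lines[body_start:sig_start])
    # Armor headers (e.g. "Version:") live inside the signature block.
    armor_headers = [l for l in lines[sig_start + 1:] if ":" in l]
    return signed_text, armor_headers

example = """-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

To give a trivial example, I've signed this paragraph...
-----BEGIN PGP SIGNATURE-----
Version: PGP 17.0

iQEVAwUBPlaceholderSignatureData=
-----END PGP SIGNATURE-----"""

text1, hdrs1 = split_clearsigned(example)
# Edit only the armor header, as one could with a text editor:
tampered = example.replace("Version: PGP 17.0", "Version: mutt 1.4")
text2, hdrs2 = split_clearsigned(tampered)

assert text1 == text2    # signed text untouched; signature would still verify
assert hdrs1 != hdrs2    # only the unauthenticated header changed
```

The signature attests to the text and the key, never to the software
that produced it -- which is exactly the property attestation removes.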
But it also has important differences. One of those differences has to
do with whether trust is reposed in people or in devices! I think your
suggestion is tantamount to saying that an electrocardiogram and a
seismograph have the same medical utility because they are both devices
for measuring and recording waveforms.

> I just don't see remote attestation as providing control over your
> computer provided the user/owner has control over when and if remote
> attestation is used. Further, I can think of several instances where
> remote attestation is a good thing. For example, a privacy P2P file
> sharing network. You wouldn't want to share your files with an RIAA
> modified version of the program that's designed to break the anonymity
> of the network.

This application is described in some detail at

http://www.eecs.harvard.edu/~stuart/papers/eis03.pdf

I haven't seen a more detailed analysis of how attestation would
benefit particular designs for anonymous communication networks against
particular attacks. But it's definitely true that there are some
applications of attestation to third parties that many computer owners
might want. (The two that first come to mind are distributed computing
projects like [EMAIL PROTECTED] and network games like Quake, although
I have a certain caution about the latter which I will describe when
the video game software interoperability litigation I'm working on is
over.)

It's interesting to note that in this case you benefit because you
received an attestation, not because you gave one (although the network
is so structured that giving an attestation is arranged to be the price
of receiving one: "Give me your name, horse-master, and I shall give
you mine!").

The other thing that end-users might like is if _non-peer-to-peer_
services they interacted with could prove properties about themselves --
that is, end-users might like to receive rather than to give
attestations.
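For concreteness, here is a toy sketch of the challenge-response shape
such attestation takes: the challenger sends a nonce, the platform
returns a signed quote over (configuration fingerprint, nonce), and the
challenger compares the fingerprint against the one configuration it
trusts. Everything here is hypothetical; a real platform would sign with
an asymmetric attestation key held in hardware, which I stand in for
with an HMAC shared secret so the sketch runs on the standard library
alone:

```python
import hashlib
import hmac
import os

# Hypothetical stand-in for a hardware attestation key. In a real design
# the private half never leaves the device; HMAC is used here only to
# keep the sketch self-contained.
AIK_SECRET = b"platform-attestation-key"

def measure(software):
    """Hash a software stack into one fingerprint, loosely analogous to
    extending platform configuration registers at boot."""
    digest = hashlib.sha256()
    for component in software:
        digest.update(hashlib.sha256(component).digest())
    return digest.digest()

def attest(software, nonce):
    """Platform side: produce a quote binding the fingerprint to the
    challenger's fresh nonce (preventing replay of old quotes)."""
    fingerprint = measure(software)
    quote = hmac.new(AIK_SECRET, fingerprint + nonce, hashlib.sha256).digest()
    return fingerprint, quote

def verify(fingerprint, quote, nonce, expected):
    """Challenger side: check the quote, then compare the fingerprint
    against the one known-good configuration it will talk to."""
    good_sig = hmac.compare_digest(
        hmac.new(AIK_SECRET, fingerprint + nonce, hashlib.sha256).digest(),
        quote)
    return good_sig and fingerprint == expected

official = [b"official-p2p-client-1.0", b"official-crypto-lib-2.3"]
modified = [b"riaa-modified-client", b"official-crypto-lib-2.3"]
expected = measure(official)

nonce = os.urandom(16)
fp, q = attest(official, nonce)
assert verify(fp, q, nonce, expected)        # official stack accepted

fp2, q2 = attest(modified, nonce)
assert not verify(fp2, q2, nonce, expected)  # modified stack rejected
```

Note how the policy decision lives entirely with whoever holds the
`expected` fingerprint -- which is exactly why it matters whether the
end-user is giving or receiving the attestation.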
An anonymous remailer could give an attestation to prove that it is
really running the official Mixmaster and the official Exim, and not a
modified Mixmaster or modified Exim that tries to break anonymity.
Apple could give an attestation proving that it didn't have the ability
to alter or to access the contents of your data while it was stored by
its "Internet hard drive" service.

One interesting question is how to characterize on-line services where
users would be asked for attestation (typically to their detriment, by
way of taking away their choice of software), as opposed to on-line
services where users would be able to ask for attestation (typically to
their benefit, by way of showing that the service had certain desirable
properties -- at least in some future TC technology where the cost to
the service provider of making a false attestation could be made
substantial, which it is not now). I'm not sure exactly what things
separate these.

-- 
Seth David Schoen <[EMAIL PROTECTED]> | Very frankly, I am opposed to people
http://www.loyalty.org/~schoen/       | being programmed by others.
http://vitanuova.loyalty.org/         | -- Fred Rogers (1928-2003),
                                      |    464 U.S. 417, 445 (1984)

---------------------------------------------------------------------
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]