| Rick Wash wrote:
| >There are many legitimate uses of remote attestation that I would like to
| >see. For example, as a sysadmin, I'd love to be able to verify that my
| >servers are running the appropriate software before I trust them to access
| >my files for me. Remote attestation is a good technical way of doing that.
|
| This is a good example, because it brings out that there are really
| two different variants of remote attestation. Up to now, I've been
| lumping them together, but I shouldn't have been. In particular, I'm
| thinking of owner-directed remote attestation vs. third-party-directed
| remote attestation. The difference is who wants to receive assurance of
| what software is running on a computer; the former mechanism allows one
| to convince the owner of that computer, while the latter allows one to
| convince third parties....
|
| Finally, I'll come back to the topic you raised by noting that your
| example application is one that could be supported with owner-directed
| remote attestation. You don't need third-party-directed remote
| attestation to support your desired use of remote attestation. So, TCPA
| or Palladium could easily fall back to only owner-directed attestation
| (not third-party attestation), and you'd still be able to verify the
| software running on your own servers without incurring new risks of DRM,
| software lock-in, or whatever....

All of this is fine as long as there is a one-to-one association between
machines and "owners" of those machines. Consider the example I gave
earlier: a shared machine containing the standard distribution of the
trusted computing software. All the members of the group that maintain
the software will want the machine to attest, to them, that it is
properly configured and operating as intended.

We could call the group the owner of the machine and create a single key
pair that all of them know. But this is brittle - shared secrets always
are. Any member of the group could modify the machine and, using his
access to the private key, fake the "all clear" indication. Each
participant should have his own key pair, since attestation using a
particular key pair only indicates security with respect to those who
don't know the private key of that pair - and a member of a development
team for the secure kernel *should* mistrust his fellow team members!
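To make the per-member key idea concrete, here's a toy sketch in Python
(using the third-party 'cryptography' package). Everything in it - the
SharedMachine class, the plain software-image hash standing in for real
PCR measurements, the absence of any challenge/response - is invented
for illustration, not a description of how TCPA or Palladium actually
works:

    import hashlib

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey,
    )

    class SharedMachine:
        """Generates one attestation key pair per team member, internally,
        so no member ever learns anyone's private key - not even his own."""

        def __init__(self, members, software_image):
            self._keys = {m: Ed25519PrivateKey.generate() for m in members}
            self.software_image = software_image

        def public_key_for(self, member):
            # Handed to each member once, at setup time.
            return self._keys[member].public_key()

        def attest_to(self, member):
            # Sign a digest of the currently running software with the
            # key pair assigned to this particular member.
            digest = hashlib.sha256(self.software_image).digest()
            return digest, self._keys[member].sign(digest)

    def quote_is_good(public_key, digest, signature, expected_image):
        # A quote convinces a member only if the digest matches the
        # software he expects AND the signature verifies under the
        # public key of *his* pair.
        if digest != hashlib.sha256(expected_image).digest():
            return False
        try:
            public_key.verify(signature, digest)
            return True
        except InvalidSignature:
            return False

    machine = SharedMachine(["alice", "bob"], b"trusted-kernel-v1.0")
    alice_pub = machine.public_key_for("alice")

    digest, sig = machine.attest_to("alice")
    assert quote_is_good(alice_pub, digest, sig, b"trusted-kernel-v1.0")

The point of the toy: Bob never holds the private half of Alice's pair,
so Bob cannot fake an "all clear" to Alice - which is exactly what a
single group-wide key pair would have let any member do.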
So, again, there are simple instances where it will prove useful to be
able to maintain multiple sets of independent key pairs.

Now, in the shared distribution machine case, on one level team members
should be mutually suspicious, but on another they *do* consider
themselves joint owners of the machine - so it doesn't bother them that
there are key pairs to which they don't have access. After all, those
key pairs are assigned to *other* owners of the machine! But exactly the
same mechanism could be used to assign a key pair to Virgin Records -
whom we *don't* want to consider an owner of the machine.

As long as, by owner, you mean a single person, or a group of people who
completely trust each other (with respect to the security problem we are
trying to solve); and as long as each machine has only one owner; then,
yes, one key pair will do. But as soon as "owner" can encompass mutually
suspicious parties, you need multiple independent key pairs - and then
how you use them, and to whom you grant them, becomes a matter of choice
and policy, not technical possibility.

BTW, even with a single owner, multiple independent key pairs may be
useful. Suppose I have reason to suspect that my private key has been
leaked. What can I do? If there is only one key pair around, I have to
rebuild my machine from scratch. But if I had the foresight to generate
*two* key pairs, one of which I use regularly - and the other of which I
sealed away in a safe - then I can go to the safe, get out my "backup"
key pair, and re-certify my machine. In fact, it would probably be
prudent for me to generate a whole bunch of such backup key pairs, just
in case. (There's a toy sketch of this in the P.S. below.)

You're trying to make the argument that feature X (here, remote
attestation for multiple mutually suspicious parties) has no significant
uses. Historically, arguments like this are losers. People come up with
uses for all kinds of surprising things. In this case, it's not even
very hard.

An argument that feature X has uses, but also imposes significant and
non-obvious costs, is another thing entirely. Elucidating the costs is
valuable. But ultimately individuals will make their own analysis of the
cost/benefit ratio, and their calculations will be different from yours.
Carl Ellison, I think, argued that TCPA will probably never have large
penetration because the dominant purchasing factor for consumers is
always initial cost, and the extra hardware will ensure that
TCPA-capable machines will always be more expensive. Maybe he's right.
Even if he isn't, as long as people believe that they have control over
the costs associated with feature X - in this case, as long as they
believe that they can choose *not* to allow remote attestation where it
isn't in their interest - they will assign low costs. The problem isn't
with the argument up to this point - it's in the potential future world
where the weight of consensus and standards and even law effectively
removes the theoretical choice to "opt out".

                                                        -- Jerry
(one of the 10% of TV owners who still receives his TV signals over the
air - and no, not from a satellite.)
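P.S. And a toy sketch of the backup-key-pair idea, under the same
assumptions as before (Python, the 'cryptography' package, invented
names). The passphrase-encrypted PEM blobs stand in for the safe; a
real scheme would also need some way for relying parties to accept the
promoted key, say by pre-publishing all the public halves:

    from cryptography.hazmat.primitives import serialization
    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey,
    )

    # Generate a whole bunch of key pairs up front, just in case.
    keys = [Ed25519PrivateKey.generate() for _ in range(5)]
    active, backups = keys[0], keys[1:]

    # Publish every public half now, so a later switch to a backup
    # pair doesn't require re-introducing yourself to anyone.
    published = [k.public_key() for k in keys]

    # Seal the backup private keys away (the "safe"): encrypted PEM
    # blobs, stored offline, never loaded on the running machine.
    sealed = [
        k.private_bytes(
            encoding=serialization.Encoding.PEM,
            format=serialization.PrivateFormat.PKCS8,
            encryption_algorithm=serialization.BestAvailableEncryption(
                b"safe-combination"
            ),
        )
        for k in backups
    ]

    # If the active private key leaks: fetch a blob from the safe,
    # decrypt it, and re-certify the machine under the backup pair -
    # no rebuilding from scratch required.
    active = serialization.load_pem_private_key(
        sealed[0], password=b"safe-combination"
    )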