| Aloha!
|
| Peter Gutmann wrote:
| > So RAM state is entropy chicken soup, you may as well use it because
| > it can't make things any worse, but I wouldn't trust it as the sole
| > source of entropy.
|
| Ok, apart from the problems with reliable entropy generation: am I
| right to get a bad feeling when I think about the implications of
| how the device ID is established?
|
| As I understand it, the device itself doesn't know its ID. Instead
| you repeatedly send over memory dumps to the reader, which then extracts
| what it (to some estimated degree) considers to be the correct ID.

I don't think that's what they are suggesting. My understanding is that
their experiments show that, for any *particular instance* of a chip, you
can divide memory locations into two classes:
1. Some memory locations come up with the same value every time you boot;
2. Some memory locations come up with an essentially random value each
   time you boot.

In reality, this is a spectrum - e.g., some locations may come up as 0
95% of the time and as 1 the other 5% of the time. You could make a
Class 3 for those; members of that class would likely be ignored in what
follows.

Which class a particular memory cell on a particular chip falls into
appears to be due to random process variations during manufacture, and is
alleged to be unpredictable and fixed for the life of the chip.

So presumably the model is: put each manufactured chip into a testing
device that repeatedly power-cycles it and reads all of memory. By simply
comparing values across multiple cycles, it assigns locations to Class 1
or Class 2 (or 3, if you like). Once you've done this enough times to have
reasonable confidence in your assignments, you pick a bunch of Class 1
locations and use them for the ID, and a bunch of Class 2 locations and
call them the entropy source. You burn the chosen locations into ROM on
the chip. At power-up, the chip checks the ROM and constructs an ID from
the list of Class 1 locations and a random value from the list of Class 2
locations.

(Obviously, you want to be a bit more clever - e.g., if all your Class 1
locations hold the same value on every power-up, something is wrong with
your assumptions, and you reject the chip rather than using an ID of all
0's or all 1's. The paper asserts that this won't happen often enough to
matter.)

| Wouldn't a "simple" thing like a challenge/response then become much
| more complicated - and insecure?
|
| Basically the device goes from saying "I'm ID XYZ, and I prove it by
| providing the following response to your challenge" to "I'm an amnesiac
| device and here is my memory dump; please calculate my ID (and remember
| to power-cycle me x times), and then I'll send a response."
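To make the model above concrete, here is a rough sketch of what the
manufacturing-time classifier might look like. The dump format, the
thresholds, and the function names are all my own invention, not anything
from the paper - it just shows the "compare values across power cycles,
then burn the chosen locations into ROM" idea:

```python
# Hypothetical sketch of the manufacturing-time classification step:
# given several power-up memory dumps from one chip, sort each bit
# position into Class 1 (stable), Class 2 (random), or Class 3 (biased).
# Thresholds and names are illustrative only.

def classify_bits(dumps, random_band=(0.4, 0.6)):
    """dumps: list of equal-length bit lists, one per power cycle."""
    n_cycles = len(dumps)
    classes = []
    for cell in zip(*dumps):            # all observations of one cell
        ones = sum(cell) / n_cycles     # fraction of cycles it read 1
        if ones in (0.0, 1.0):          # same value every cycle
            classes.append(1)
        elif random_band[0] <= ones <= random_band[1]:
            classes.append(2)           # essentially a coin flip
        else:
            classes.append(3)           # biased: ignore these cells
    return classes

def pick_locations(classes, id_bits, entropy_bits):
    """Choose which cells to record in ROM for the ID / entropy source.
    (A real tester would also reject a chip whose Class 1 cells all
    hold the same value, per the caveat above.)"""
    class1 = [i for i, c in enumerate(classes) if c == 1]
    class2 = [i for i, c in enumerate(classes) if c == 2]
    if len(class1) < id_bits or len(class2) < entropy_bits:
        raise ValueError("reject chip: not enough usable cells")
    return class1[:id_bits], class2[:entropy_bits]

# Four power cycles of a toy 8-bit "memory":
dumps = [
    [0, 1, 0, 1, 1, 0, 1, 1],
    [0, 1, 1, 0, 1, 0, 1, 1],
    [0, 1, 0, 1, 1, 0, 1, 0],
    [0, 1, 1, 0, 1, 0, 1, 1],
]
classes = classify_bits(dumps)          # [1, 1, 2, 2, 1, 1, 1, 3]
id_locs, entropy_locs = pick_locations(classes, id_bits=3, entropy_bits=2)
```

Note that the confidence in the assignments grows with the number of
power cycles; with only a handful of cycles a Class 3 cell (like the last
one above, which reads 1 three times out of four) can easily masquerade
as Class 1 or Class 2.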
| Also, wouldn't the ID scheme presented in the paper take a very long
| time? Transferring 256 bytes * x times + the Hamming calculation (by
| the host), vs. reading 64 bits (or a similar ID length)?

This is only done during manufacturing. Presumably it would be integrated
into the testing process, which you're doing anyway.

| I give the paper plus marks for novelty, but can't see how to use this
| in a secure, practical and cost-efficient way.

The unique-ID stuff is clever, but it's not clear how much it gains you:
since you need to do some final per-device programming anyway to identify
the locations to be used for the ID, why not just burn in a unique ID?

The random generator is clever, but the question is whether "produces an
unpredictable result" is really a stable characteristic of memory. For
example, it could be that those memory locations are initially quite
random, but if they are used to hold constant values for long periods of
time during operation, they may build up a remanence that destroys the
initial randomness.

Ultimately, the nice thing being relied on here - random process
variations - also makes the approach vulnerable to any change in the
process.

                                                        -- Jerry

---------------------------------------------------------------------
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]