Leichter, Jerry wrote:
> So presumably the model is:  Put each manufactured chip into a testing
> device that repeatedly power cycles it and reads all of memory.  By
> simply comparing values on multiple cycles, it assigns locations to
> Class 1 or 2 (or 3, if you like).  Once you've done this enough to have
> reasonable confidence in your assignments, you pick a bunch of Class 1
> locations and use them for the id; and a bunch of Class 2 locations and
> call them the entropy source.  You burn the chosen locations into ROM on
> the chip.  At power up, the chip checks the ROM, and constructs an ID
> from the list of Class 1 locations and a random value from the list of
> Class 2 locations.  (Obviously, you want to be a bit more clever - e.g.,
> if all your Class 1 locations hold the same value on every power up,
> something is wrong with your assumptions and you reject the chip rather
> than using an ID of all 0's or all 1's.  The paper is asserting that
> this won't happen often enough to matter.)
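The classification step sketched above is easy to make concrete. Here is a minimal, hypothetical sketch (all names and the sample dumps are illustrative, not from the paper): bits that hold the same value on every power-up go to Class 1, bits that flip go to Class 2.

```python
# Hypothetical sketch of the classification step: given several power-up
# dumps of the same SRAM, sort bit positions into Class 1 (stable across
# power-ups, usable for an ID) and Class 2 (unstable, usable as an
# entropy source). Dumps and names are illustrative.

def classify_bits(dumps):
    """dumps: list of power-up states, each a list of 0/1 per bit position."""
    class1, class2 = [], []
    for pos in range(len(dumps[0])):
        values = {dump[pos] for dump in dumps}
        if len(values) == 1:      # same value on every power-up
            class1.append(pos)
        else:                     # flipped at least once
            class2.append(pos)
    return class1, class2

# Three simulated power-up dumps of an 8-bit SRAM:
dumps = [
    [1, 0, 1, 1, 0, 0, 1, 0],
    [1, 0, 0, 1, 0, 1, 1, 0],
    [1, 0, 1, 1, 0, 1, 1, 0],
]
class1, class2 = classify_bits(dumps)
print(class1)  # stable positions  -> [0, 1, 3, 4, 6, 7]
print(class2)  # unstable positions -> [2, 5]
```

The sanity check Leichter mentions would then be a one-liner on top of this: reject the chip if all Class 1 positions hold the same value.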

Yes, that is one way to do it - but it's not the way they do it in the paper. Doing it your way, i.e. writing the bit locations for the ID and RNG sources into an index memory on the chip, goes against the purpose of the presented scheme. As they write in the paper:

The non-volatile approach involves programming an identity into a tag at the time of manufacture using EPROM, EEPROM, flash, fuse, or more exotic strategies. While non-volatile identities are static and fully reliable, they have drawbacks in terms of the process cost and the area cost of supporting circuitry. Even if only a small amount of non-volatile storage is used, the process cost must be paid across the entire chip area.

One could also argue that if you add the functionality (the non-volatile storage) and take the post-production time (and cost) to write down the locations, you could just as well write down a normal ID. (You reach the same conclusion further down in your mail.)

If you do it the way they do it in the paper - communicating the SRAM memory state to an external reader to have your ID extracted for you - doesn't that change the ID and authentication protocol?

> This is only done during manufacturing.  Presumably it would be
> integrated into the testing process, which you're doing anyway.

Nope, again the paper is (pretty) clear that the external reader receives the memory dump and then extracts the ID fingerprint by matching the dump against a DB of known IDs using Hamming distance. This also means that your Class 2 bits (the RNG sources) will be communicated, something that I see as a security problem.
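For concreteness, the matching step as I read it amounts to a nearest-neighbour search under Hamming distance. A minimal sketch (the database contents, threshold, and names are my own illustrative assumptions, not taken from the paper):

```python
# Sketch of the reader-side matching: the reader receives a raw memory
# dump and identifies the chip by finding the enrolled fingerprint with
# the smallest Hamming distance within some threshold. Illustrative only.

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def identify(dump, db, max_dist):
    """db: mapping of chip id -> enrolled fingerprint (bit list)."""
    best_id, best_d = None, max_dist + 1
    for chip_id, fp in db.items():
        d = hamming(dump, fp)
        if d < best_d:
            best_id, best_d = chip_id, d
    return best_id  # None if nothing matches within the threshold

db = {
    "tag-A": [1, 0, 1, 1, 0, 0, 1, 0],
    "tag-B": [0, 1, 1, 0, 1, 1, 0, 0],
}
dump = [1, 0, 0, 1, 0, 1, 1, 0]        # tag-A with two noisy bits
print(identify(dump, db, max_dist=3))  # -> tag-A
```

Note that the reader necessarily sees the whole dump, including the noisy (Class 2) bits - which is exactly the exposure I am objecting to.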

Finally: I have been in contact with the authors regarding their view on remanence problems, and they simply wait "long enough" to allow remanence effects to be nullified.

But as I see it the device has no way of knowing that "long enough" time has passed. And if it hasn't, communicating the SRAM state might lead to application information leakage. Correct?

> The unique ID stuff is clever, but it's not clear how much it gains
> you:  Since you need to do some final per-device programming anyway to
> identify the locations to be used for the ID, why not just burn in a
> unique ID?


> The random generator is clever, but the question is whether
> "produces an unpredictable result" is really a stable characteristic
> of memory.

As Peter Gutmann stated earlier in this thread: "RAM state is entropy chicken soup, you may as well use it because it can't make things any worse, but I wouldn't trust it as the sole source of entropy."

Device aging, changes in the manufacturing process, and electrical and environmental changes (accidental or deliberate) will all affect the RNG, and there is no easy way for the (low-cost) device to know how good or bad the quality of the RNG is.

With kind regards, Yours

Joachim Strömbergson - Always in harmonic oscillation.
Kryptoblog - IT security in Swedish

The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
