I don't see how you get ergodicity into the device without the possibility of observation. Without addressing that problem you are going to have a protocol that always tells the client 'yep you are hosed'.
I see the following possibilities for getting ergodicity into the device:

1) Random seed installed during manufacture
2) Random capture from an embedded RNG source / environmental capture
3) Random capture from the UI
4) Obtain random data from the network

For a web browser or server the usual approach is (3). It works fine because there is sufficient ergodicity in the UI interactions to provide 128 bits after waiting just a few seconds. But we can't do that on an embedded device.

1) is actually not as bad as might be imagined: you already trust the manufacturer not to backdoor your crypto, so having them give you a seed is not terrible.

2) is ideal but adds to hardware cost. Unless, that is, someone can work out a cheap way to get random data into an A/D port or other I/O pin.

4) is clearly suboptimal since the service knows the data it sent.

How about this for a better protocol approach?

Client generates random seeds S1, S2.
Client sends Trunc(H(S1)) to the service.
Service says 'hey, I have seen that before' or 'nope, it is new' and returns S3.
Client generates a new seed from H(S1 + S2 + S3 + c), where c is a counter that is incremented for each call for more random bits.

Advantages:
1) The checking protocol is tied into the PRNG algorithm, so it is really hard for the programmer to balls it up.
2) Only part of the ergodicity is checked: the protocol checks that there are at least 128 bits worth of random data in the output.

Cons:
1) Requires twice the amount of randomness to do the job right.

A scheme of this type might well be best implemented as a canary-type scheme, so that issues with the PRNG during configuration would be detected...

Also, this looks to me like the sort of thing that you would want to be a part of some form of device enrollment protocol.
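The protocol above could be sketched roughly as follows. This is a minimal illustration, not a specification: the class and method names are made up, SHA-256 as H, a 128-bit truncation, and os.urandom standing in for the device's real entropy sources are all my assumptions.

```python
import hashlib
import os

def H(data: bytes) -> bytes:
    # Hash choice (SHA-256) is an assumption, not part of the proposal.
    return hashlib.sha256(data).digest()

class Client:
    """Hypothetical client side of the seed-checking protocol."""

    def __init__(self):
        # S1, S2: locally generated seeds. os.urandom stands in for
        # whatever entropy the embedded device can actually gather.
        self.s1 = os.urandom(16)
        self.s2 = os.urandom(16)
        self.s3 = None
        self.counter = 0

    def check_value(self) -> bytes:
        # Trunc(H(S1)): only a truncated hash is sent, so the service
        # never learns S1 itself. 128-bit truncation is an assumption.
        return H(self.s1)[:16]

    def finish(self, s3: bytes) -> None:
        # S3: extra seed material returned by the service.
        self.s3 = s3

    def next_bits(self) -> bytes:
        # New seed material: H(S1 + S2 + S3 + c), with c incremented on
        # each call, so the check is tied directly into the PRNG.
        c = self.counter.to_bytes(4, "big")
        self.counter += 1
        return H(self.s1 + self.s2 + self.s3 + c)

class Service:
    """Hypothetical service: remembers every Trunc(H(S1)) submitted."""

    def __init__(self):
        self.seen = set()

    def check(self, trunc_h_s1: bytes):
        # A duplicate means the client's S1 collides with an earlier
        # submission in its first 128 hash bits -- 'yep, you are hosed'.
        duplicate = trunc_h_s1 in self.seen
        self.seen.add(trunc_h_s1)
        return duplicate, os.urandom(16)  # S3

# One round of the exchange:
client, service = Client(), Service()
duplicate, s3 = service.check(client.check_value())
client.finish(s3)
out1, out2 = client.next_bits(), client.next_bits()
```

Note how the "twice the randomness" cost shows up directly: S2 is generated but never revealed, so even a service that logs every query learns nothing about the client's final seed.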
On Thu, Feb 16, 2012 at 9:29 AM, Stephen Farrell <[email protected]> wrote:
>
> On 02/16/2012 02:20 PM, Phillip Hallam-Baker wrote:
>>
>> On Thu, Feb 16, 2012 at 9:13 AM, Stephen Farrell
>> <[email protected]> wrote:
>>>
>>> On 02/16/2012 01:51 PM, Phillip Hallam-Baker wrote:
>>>>
>>>> My first thought was that this should be done by the CA.
>>>
>>> In the cited material, they also cover PGP and SSH keys and
>>> not every CA will have a collection beyond its own end
>>> entities so I don't think this is a CA function since its
>>> not checking one public key, but one public key against
>>> a population of keys and is independent of X.509, PGP, SSH
>>> etc.
>>>
>>>> Then it turns
>>>> out that these are all (apparently) embedded systems generated keys
>>>> and only some of those are CA certified. So maybe there is a need for
>>>> this protocol.
>>>
>>> That too.
>>
>> Problem being that it is probably easier to implement a RNG right than
>> to implement any additional protocol for checking. Or at least the
>> only people who are likely to implement your protocol are people who
>> (1) would do the RNG right or (2) are required to for an audit
>> requirement.
>
> A fair point. But one we don't seem to be able to get right
> and there are to also be fair some subtleties.
>
> Apparently some of the problem is also not the PRNG algorithm,
> but rather when you call it.
>
> For example, the 1st prime generation might happen just after
> 1st boot when there're few sources of randomness on the device.
> So while the 2nd prime may be much more random the probability
> of the 1st one being the same as someone else's is high enough
> to be a problem. (Apparently. He repeated:-)
>
> So there might be reasons to call this even if you're
> confident of your key generation code, in case someone fed the
> PRNG a crap seed.
>
> S
>
>> It would be really nice if there was some way to audit RNGs
>> algorithmically...
>>
>>>> As I have mentioned before though, public key is problematic in
>>>> embedded systems. Most of the systems don't have the resources to do
>>>> the job right and this will only get worse as time goes on because as
>>>> a $1 processor gets more powerful a chip with a 6502 core gets cheaper
>>>> and more are made. More 6502 type chips were made last year than in
>>>> any previous year.
>>>>
>>>> So my view is that we have to get away from the idea that the endpoint
>>>> has to do public key crypto.
>>>
>>> Well, that's one position but not necessarily the only one
>>> with merit.
>>
>> Well certainly it is better that they do have a PKI stack but only if
>> they do it right and that puts us way above what a PIC controller
>> class chip with only a few Kb can be expected to do.
>>
>> Hence we need to have two tracks. Rather than telling people that they
>> must do PKI on 16 bit chips, maybe have a different approach there.
>>

--
Website: http://hallambaker.com/

_______________________________________________
therightkey mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/therightkey
