On Wednesday, December 13, 2017 at 5:52:16 PM UTC-6, Peter Gutmann wrote:

> >Sitting on my desk are not less than 3 reference designs.  At least two of
> >them have decent hardware RNG capabilities.  
> 
> My code runs on a lot (and I mean a *lot*) of embedded, virtually none of
> which has hardware RNGs.  Or an OS, for that matter, at least in the sense of
> something Unix-like.  However, in all cases the RNG system is pretty secure:
> you preload a fixed seed at manufacture and then get just enough changing data
> to ensure non-repeating values (almost every RTOS has this, e.g. VxWorks has
> the very useful taskRegsGet() for which the docs tell you "self-examination is
> not advisable as results are unpredictable", which is perfect).

I agree, and this same technique (a stateful deterministic pseudo-random 
number generator seeded with adequate entropy) is what I was proposing be used 
to generate the random data needed for EC signatures, ECDHE exchanges, etc.

This mechanism is only safe if that seeding process actually happens under 
secure circumstances, but for many devices and device manufacturers that can be 
assured.
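
To make that shape concrete, here's a minimal compilable sketch of the kind of 
stateful generator being described: a fixed seed preloaded at manufacture, 
stirred at each boot with whatever changing data the platform offers.  It's 
illustrative only -- mix() is a toy, non-cryptographic stand-in (a real device 
would use a vetted construction such as an SP 800-90A Hash_DRBG), and the seed 
and boot-noise values are placeholders for NVM contents and RTOS data like 
taskRegsGet() output.

    /* Sketch only: toy mixing, stand-in seed values.  A real device
     * would use a vetted DRBG and persist updated state to NVM. */
    #include <stdint.h>
    #include <stddef.h>
    #include <stdio.h>
    #include <string.h>

    #define STATE_WORDS 8

    typedef struct {
        uint64_t s[STATE_WORDS];   /* secret generator state */
        uint64_t counter;          /* guarantees non-repeating output */
    } drbg_t;

    /* Toy xor-rotate-multiply mixer -- NOT cryptographic. */
    static void mix(drbg_t *d, const void *data, size_t len)
    {
        const uint8_t *p = data;
        for (size_t i = 0; i < len; i++) {
            uint64_t *w = &d->s[i % STATE_WORDS];
            *w ^= p[i];
            *w = (*w << 13) | (*w >> 51);
            *w *= 0x9E3779B97F4A7C15ULL;  /* odd constant keeps state diffusing */
        }
    }

    /* At boot: load the factory seed, then stir in changing data so
     * outputs never repeat across boots even though the seed is fixed. */
    static void drbg_init(drbg_t *d, const uint8_t seed[64],
                          const void *changing, size_t changing_len)
    {
        memcpy(d->s, seed, sizeof d->s);
        d->counter = 0;
        mix(d, changing, changing_len);
    }

    /* Draw output, ratcheting the counter through the state so each
     * call yields fresh values. */
    static void drbg_generate(drbg_t *d, uint8_t *out, size_t len)
    {
        for (size_t i = 0; i < len; i++) {
            d->counter++;
            mix(d, &d->counter, sizeof d->counter);
            out[i] = (uint8_t)(d->s[i % STATE_WORDS] >> 32);
        }
    }

    int main(void)
    {
        uint8_t seed[64] = { 0x42 };                  /* stand-in: factory seed from NVM */
        uint32_t noise[4] = { 123, 456, 789, 1011 };  /* stand-in: per-boot changing data */
        uint8_t nonce[32];
        drbg_t d;

        drbg_init(&d, seed, noise, sizeof noise);
        drbg_generate(&d, nonce, sizeof nonce);
        for (size_t i = 0; i < sizeof nonce; i++)
            printf("%02x", nonce[i]);
        printf("\n");
        return 0;
    }

The counter ratchet is what provides the "just enough changing data to ensure 
non-repeating values" property Peter describes; the security rests entirely on 
the secrecy and quality of the preloaded seed.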

> 
> In all of these cases, the device is going to be a safer place to generate
> keys than the CA, in particular because (a) the CA is another embedded
> controller somewhere so probably no better than the target device and (b)
> there's no easy way to get the key securely from the CA to the device.

Agreed; as I mentioned, secure transport is essential for remote key generation 
to be a secure option at any level.

> 
> However, there's also an awful lot of IoS out there that uses shared private
> keys (thus the term "the lesser-known public key" that was used at one
> software house some years ago).  OTOH those devices are also going to be
> running decade-old unpatched kernels with every service turned on (also years-
> old binaries), XSS, hardcoded admin passwords, and all the other stuff that
> makes the IoS such a joy for attackers.  So in that case I think a less-than-
> good private key would be the least of your worries.

So, the platforms I'm talking about sit somewhere in the middle of this.  
They're intended for professional consumption in the device development cycle, 
meant to be tweaked to the specifics of the use case.  Often, the 
"manufacturer" makes very few changes to the hardware reference design, fewer 
still to the software reference design -- sometimes as shallow as branding -- 
and ships.

A lot of platforms with great potential at the hardware level but shockingly 
under-engineered, minimally designed software stacks are coming to prominence.  
They're cheap and, in the right hands, can be very effective.  Unfortunately, 
some of these reference software stacks encourage practice just good enough 
that it won't be quickly caught out: no pre-built single shared private key, 
but first-boot key material initialized by a script that seeds a PRNG with 
uptime microseconds, clock ticks since reset, or something like that.  Across a 
product line, that yields a very narrow band of seed values for a given first 
boot of a given reference design and set of boot scripts.
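
For concreteness, that anti-pattern looks something like the following 
(hypothetical sketch; clock() stands in for whatever tick counter the boot 
script actually reads).  Because identical hardware running identical boot 
scripts reaches this point after a nearly identical number of ticks, the 
effective seed space across the product line is tiny and can be enumerated 
offline against any harvested public key.

    /* First-boot key generation anti-pattern -- sketch, not real code. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    int main(void)
    {
        /* clock() stands in for "clock ticks since reset"; on real
         * hardware a boot script would read an RTOS tick counter. */
        unsigned seed = (unsigned)clock();

        srand(seed);   /* libc PRNG: not cryptographic, fully determined by seed */

        /* "Key material" drawn here inherits the tiny seed space: an
         * attacker need only enumerate the plausible tick values. */
        for (int i = 0; i < 16; i++)
            printf("%02x", rand() & 0xff);
        printf("\n");
        return 0;
    }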

Nevertheless, many of these stacks do at least minimize extraneous services, 
and the target customers (pseudo-manufacturers to manufacturers) have gotten 
savvy to ancient kernels and known major remotely exploitable holes.  We could 
call it the Internet of DeceptiveInThatImSomewhatShittyButHideItAtFirstGlance.

> 
> So the bits we need to worry about are what falls between "full of security
> holes anyway" and "things done right".  What is that, and does it matter if
> the private keys aren't perfect?

Agreed, and I attempted to address the first half of that just above -- my 
"Internet of ....." description.