Marsh Ray wrote:
> OK, but to what extent is this distinction between "true" and "pseudo" entropy equally theoretical when the system as a whole is considered?
True entropy is a statistical assessment over *repeated* experiments. Cryptographic-strength pseudo-randomness is a (mostly theoretical) test of indistinguishability between a pseudo-random output sequence and a truly random sequence.
The RSA modulus observation over a large sample set is indeed a consideration of repeated experiments (SSL key generations by many independent systems), and it seems to confirm a true-entropy flaw in /dev/urandom (or its equivalent on Windows?).
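To make the observation concrete: if two independently generated RSA moduli happen to share one prime factor (because both generators were seeded from too little entropy), a single gcd computation recovers that prime and breaks both keys. A minimal sketch, using small hypothetical stand-in factors rather than real key sizes:

```python
# Toy illustration of the shared-factor weakness behind the RSA
# modulus observation. The numbers are hypothetical stand-ins; real
# surveys run a batch-GCD over millions of 1024/2048-bit moduli.
from math import gcd

p_shared = 1000003          # factor reused by two low-entropy generators
n1 = p_shared * 1000033     # modulus from device A
n2 = p_shared * 1000037     # modulus from device B

g = gcd(n1, n2)
assert g == p_shared        # a nontrivial common divisor factors both moduli
print(g)
```

The point is that no single key looks weak in isolation; the flaw only shows up across the *population* of keys, which is exactly the "repeated experiments" view of entropy.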
So, if you define the "system" as the installed base of SSL engines, the distinction appears empirically significant. The theoretical assumptions behind pseudo-randomness testing are not matched by real-world deployments (e.g. the theory relies on a clean definition of a PRNG seed, which is nothing like the Linux kernel entropy pool). A complexity-theoretic proof of an algorithm is nice to have, but the proof collapses when its assumptions are not verified in actual usage.
The RSA modulus observation may be the first empirical result about actual true entropy in fielded security systems (repeated experiments have never been observed this broadly by other means of gaining assurance about true entropy sources). And the empirical result is "not enough entropy".
> Personally, I'd like to see it get sorted out well enough that kernels can save the tens of KiB of nonpageable RAM they use for their entropy pools.
Maybe you want to be cheap and secure at once. Good luck.

Regards,

--
- Thierry Moreau

CONNOTECH Experts-conseils inc.
9130 Place de Montgolfier
Montreal, QC, Canada H2M 2A1

Tel. +1-514-385-5691

_______________________________________________
cryptography mailing list
[email protected]
http://lists.randombit.net/mailman/listinfo/cryptography
