>On 13/05/2016 10:43, Krisztián Pintér wrote:
>
>> okay, let me rephrase, because i was not very clear. what i tried to
>> say is: if we assume a precision with which we can possibly measure
>> the initial conditions, then there is a time interval after which the
>> system is in total uncertainty. this time interval is dependent on the
>> initial precision, and the system itself.
>>
>> once i heard a claim, not confirmed, just a fun factoid, that weather
>> has 20 days of "memory". that is, if the initial conditions change on
>> molecular level, which is clearly unmeasurable, 20 days later the
>> weather is totally different. that is quite literally the butterfly
>> effect. if we were to use the weather as an entropy source, we should
>> sample it every 20 days. the data will be true random. except, of
>> course, the weather is public, but that aside.
>
>Exactly, that's the point, and i think that this feature is an approach
>to the statistical independence of the samples obtained by this method.

The difference is important. Weather is no more 'true random' than the
short-term variation of a ring oscillator is. In fact, the consequences for
sampling both are the same: the larger the undersampling ratio, the more
independent the samples appear.
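
To make that concrete, here's a toy Python sketch (all parameters
hypothetical): a Markov bit source with strong lag-1 correlation, thinned
by increasing undersampling ratios, with the serial correlation
coefficient (SCC) measured at each ratio — it shrinks toward the
measurement noise floor as the ratio grows.

```python
import random

def correlated_bits(n, p_flip=0.1, seed=1):
    """Markov bit source: each bit repeats the previous one with
    probability 1 - p_flip (so lag-1 correlation is about 0.8)."""
    rng = random.Random(seed)
    bits, b = [], 0
    for _ in range(n):
        if rng.random() < p_flip:
            b ^= 1
        bits.append(b)
    return bits

def scc(bits):
    """Lag-1 serial correlation coefficient."""
    mean = sum(bits) / len(bits)
    num = sum((bits[i] - mean) * (bits[i + 1] - mean)
              for i in range(len(bits) - 1))
    den = sum((b - mean) ** 2 for b in bits)
    return num / den

raw = correlated_bits(200_000)
for ratio in (1, 4, 16, 64):
    print(ratio, round(scc(raw[::ratio]), 4))
```

Thinning by a factor of k raises the lag-1 correlation to the k-th power,
so the measured SCC drops fast — but the underlying dependence structure
never actually reaches zero.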

However, the inability to measure dependence doesn't imply that the
dependence is absent, and when your extractor demands absolute independence
for its mathematical proof to hold, undersampling isn't necessarily enough.

An example, which I believe I've shared before, but which I consider
important enough to repeat.

The output from a certain real-world entropy source, based on sampling a
fast oscillator with the output of a VCO with a noisy control input, looks
very serially correlated. You can measure it. If you slow down the
sampling, the serial correlation goes down, and you eventually reach the
point where the measured signal is lost in the noise. This is then fed into
a Yuval Peres whitener (iterated von Neumann), which requires independent
samples on its input as a prerequisite of the proof.
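
For reference, the Peres construction is short enough to sketch. This is
my own illustrative Python, not the code from that device:

```python
import random

def peres(bits, depth=8):
    """Peres whitener (iterated von Neumann). The unbiasedness proof
    assumes the input bits are independent; on merely *uncorrelated*
    input that assumption silently fails."""
    if depth == 0 or len(bits) < 2:
        return []
    out, xors, same = [], [], []
    for a, b in zip(bits[0::2], bits[1::2]):
        xors.append(a ^ b)
        if a != b:
            out.append(a)      # classic von Neumann step
        else:
            same.append(a)     # bits VN would discard, reused recursively
    return out + peres(xors, depth - 1) + peres(same, depth - 1)

# Demo: heavily biased but genuinely independent flips come out unbiased.
rng = random.Random(1)
biased = [1 if rng.random() < 0.7 else 0 for _ in range(100_000)]
out = peres(biased)
print(len(out), sum(out) / len(out))
```

The recursion on the XOR and discarded streams is what lifts the yield
above plain von Neumann's 2pq bits per pair, approaching the Shannon
entropy of the source as the depth grows — but every level of the proof
leans on the same independence assumption.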

However, if you take the raw undersampled data and run it through a test
that tries to distinguish it from random in terms of the SCC, rather than
merely producing a metric of the SCC, the test can tell it apart every
time. It's not full entropy; it's partially entropic data.
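
In other words, the statistic becomes a hypothesis test rather than a
number. A hypothetical sketch: under the null of independent unbiased
bits, the lag-1 SCC of n samples is roughly Normal with standard deviation
1/sqrt(n), so a residual correlation far too small to 'see' in the metric
still gets flagged once n is large enough.

```python
import math
import random

def scc(bits):
    """Lag-1 serial correlation coefficient."""
    mean = sum(bits) / len(bits)
    num = sum((bits[i] - mean) * (bits[i + 1] - mean)
              for i in range(len(bits) - 1))
    den = sum((b - mean) ** 2 for b in bits)
    return num / den

def looks_independent(bits, z_max=4.0):
    """Under H0 (iid fair bits), scc * sqrt(n) ~ N(0, 1).
    Reject when the statistic is more than z_max sigmas from zero."""
    return abs(scc(bits) * math.sqrt(len(bits))) < z_max

rng = random.Random(7)
fair = [rng.getrandbits(1) for _ in range(500_000)]

weak, b = [], 0            # hypothetical source: ~2% residual correlation
for _ in range(500_000):
    if rng.random() >= 0.51:   # repeats the last bit with probability 0.51
        b ^= 1
    weak.append(b)

print(looks_independent(fair), looks_independent(weak))
```

The weak source's SCC reads as a tiny number you might wave off, yet the
test rejects it decisively — which is the difference between measuring the
SCC and testing against it.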

The same is true of the weather.

If you use it as a noise source, you must establish a conservative lower
bound on its min-entropy and then feed it through an extractor whose input
min-entropy requirement is met by that bound.
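
The bookkeeping for that looks roughly like this (illustrative numbers
only; a real assessment, e.g. along the lines of NIST SP 800-90B, is far
more involved):

```python
import math
from collections import Counter

def min_entropy_per_sample(samples):
    """Point estimate: -log2 of the most frequent symbol's observed
    probability. A real assessment adds confidence bounds and non-iid
    handling; this only shows the arithmetic."""
    p_max = max(Counter(samples).values()) / len(samples)
    return -math.log2(p_max)

def samples_needed(k_bits, h_min, safety=2.0):
    """Input samples required to accumulate k_bits of min-entropy,
    padded by an arbitrary (hypothetical) safety factor."""
    return math.ceil(safety * k_bits / h_min)

obs = [0] * 70 + [1] * 30          # a source seen emitting 0 with p ~ 0.7
h = min_entropy_per_sample(obs)    # -log2(0.7) ~ 0.51 bits per sample
print(h, samples_needed(256, h))
```

The point is that the extractor's input requirement is stated in
min-entropy, so the lower bound you establish for the source directly sets
how much raw data you must collect per output block.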

One sample every 20 days is really, really slow.




_______________________________________________
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography