Cryptography-Digest Digest #390, Volume #9       Wed, 14 Apr 99 15:13:03 EDT

Contents:
  Re: True Randomness & The Law Of Large Numbers (Mok-Kong Shen)
  Re: Are LFSR's codebook rngs? (Herman Rubin)
  Re: True Randomness & The Law Of Large Numbers (R. Knauer)
  Re: looking for specific technical report (Medical Electronics Lab)
  Re: PGP, The Big Lie (Theorem with Incomplete Proof) (Matthias Bruestle)
  Re: Not a PGP Expert (Geoff Thorpe)
  Re: True Randomness & The Law Of Large Numbers (R. Knauer)
  Re: Adequacy of FIPS-140 ("Trevor Jackson, III")

----------------------------------------------------------------------------

From: Mok-Kong Shen <[EMAIL PROTECTED]>
Subject: Re: True Randomness & The Law Of Large Numbers
Date: Wed, 14 Apr 1999 17:53:37 +0200

R. Knauer wrote:
> 
> On Tue, 13 Apr 1999 14:11:53 +0200, Mok-Kong Shen
> <[EMAIL PROTECTED]> wrote:
> 
> >How do you ever determine this 'vanishingly small' quantity? You
> >do some measurements. And here come statistical theories in! It
> >appears to me an irony to cite a book of mathematical statistics and
> >deny the application of its theory to practice at the same time.
> 
> I never said any such thing. You are trying to put words in my mouth.

You never said that?? Here is the paragraph quoted from a previous post 
of yours, containing the phrase 'vanishingly small'! Or have you
forgotten what you wrote?

   It is completely attainable with a quantum computer programmed 
   to calculate true random numbers. It is also attainable to a 
   very high degree of precision in a carefully designed radioactive 
   TRNG, one with vanishingly small detector deadtime. If you are 
   ever interested in the problem of deadtime, I can recommend the 
   sections in Feller's book that deal with "geiger counters".
> 
> Read my lips: I said that I challenge the use of simple, standard,
> small sample, statistical tests like the FIPS-140 Monobit Test for
> determining randomness or non-randomness. That does not mean that I
> challenge ALL applications of statistical tests.

Well, tell us which statistical tests are applicable to detect
non-randomness of arbitrarily given sequences in your esteemed opinion.
If none, why is it that you allow applications of statistical
tests in certain applications while denying their applicability
in others? Why are e.g. the FIPS tests bad? In which way do they need
improvements according to you?

> 
> >Ah, true randomness according to you is nominal and can't be measured.
> 
> I did not say it could not be measured. I said it could not be
> measured on an interval or ratio scale. Randomness is not like
> temperature or driving distance. It is like membership in a class.

But membership in a class presupposes the existence of an algorithm 
to determine whether a given object is a member or not. In the 
present context please name a concrete algorithm that you favour to 
test whether a given sequence belongs to the class of random sequences.

M. K. Shen

------------------------------

From: [EMAIL PROTECTED] (Herman Rubin)
Subject: Re: Are LFSR's codebook rngs?
Date: 14 Apr 1999 10:36:16 -0500

In article <[EMAIL PROTECTED]>,
Trevor Jackson, III <[EMAIL PROTECTED]> wrote:
>[EMAIL PROTECTED] wrote:

>> > All FSR generators have a PERIOD with which they repeat.  Thus there is
>> > a cycle of values that forms a loop in the state space.  No matter where
>> > you start on that loop you will end up where you started after PERIOD
>> > steps.

>> > Some FSR cycles have periods of one or two while others have periods of
>> > 2^WIDTH-1.  A long period is considered a Good Thing.

>> > For more information read "Shift Register Sequences" by Golomb revised
>> > in 1982.


>> Ok, so I am right?  It will produce the same sequence just shifted....

>> BTW, are there any good online sources for LFSR's?  I really don't want to
>> buy a book to learn about it...

>I have looked but not found any good online sources for FSR theory and
>operation.  There are several sources that have info on particular
>configurations that have been used, or that have maximum periods. 
>Search engines can turn up this anecdotal information.

There has been work done on these and related procedures for
generating pseudo-random numbers.  Much of it concerns word
generators of the form 

        X[n] = X[n-j] OP X[n-k],

where OP is one of a variety of operations such as XOR, addition
(with or without carry), subtraction, and some others.  To get 
long periods, typically the larger of j and k is a Mersenne
prime p congruent to 1 or 7 mod 8, and the other chosen from a
small list.  The period in these cases is at least 2^p - 1.
The LFSR corresponds to the X's being one bit.

These are now known to have weaknesses.  There have been 
attempts to improve the situation, but these are not LFSRs.
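
As a concrete illustration (my own sketch, not a recommended
generator), here is the form above with OP = XOR on single bits,
i.e. an LFSR-style generator, using the classic lag pair
(j, k) = (55, 24); the seeding via os.urandom is illustrative only:

    # Sketch of X[n] = X[n-j] OP X[n-k] with OP = XOR on single bits.
    # The lag pair (55, 24) is a classic choice from the literature;
    # this is a demonstration, not a vetted generator.
    import os

    def lagged_xor_bits(j=55, k=24, nbits=80):
        # Seed the first j bits from the OS; state must not be all zero.
        state = [b & 1 for b in os.urandom(j)]
        if not any(state):
            state[0] = 1
        out = []
        for _ in range(nbits):
            new = state[-j] ^ state[-k]   # X[n] = X[n-j] XOR X[n-k]
            out.append(new)
            state = (state + [new])[-j:]  # keep only the last j terms
        return out

    print(''.join(map(str, lagged_xor_bits())))
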
-- 
This address is for information only.  I do not claim that these views
are those of the Statistics Department or of Purdue University.
Herman Rubin, Dept. of Statistics, Purdue Univ., West Lafayette IN47907-1399
[EMAIL PROTECTED]         Phone: (765)494-6054   FAX: (765)494-0558

------------------------------

From: [EMAIL PROTECTED] (R. Knauer)
Subject: Re: True Randomness & The Law Of Large Numbers
Date: Wed, 14 Apr 1999 16:46:09 GMT
Reply-To: [EMAIL PROTECTED]

On 14 Apr 1999 09:11:32 -0500, [EMAIL PROTECTED] (Herman
Rubin) wrote:

>>I meant for finite sequences. An infinite sequence must be Borel
>>normal, which is a quantitative entity. (But see below for further
>>comments.)

>I suggest we ignore infinite sequences.  We will never observe one.

It is interesting to note that Billingsley begins his book on
probability and measure with Borel normality and infinite sequences.
He seems to be saying that only in such a manner can one determine the
precise meaning of probability.

>No, the statement is not at all harsh.  Like most people, physicists
>are comfortable only with the ideas coming from games of chance with
>equally likely alternatives, so they set up this "ensemble" from
>which one selects "at random".  It is necessary to get away from
>this restrictive idea.

Your statement is very interesting, because it constitutes a challenge
on two fronts:

1) That physicists are in error when they use their traditional
notions of probability, including ensembles;

2) That physicists must adopt new ways of thinking, about which you
can comment further.

I look forward to those further comments from you.

Are physicists who are exploring the "physics of information" at
places like the Santa Fe Institute and others going in the right
direction, according to your thinking?

>The concise definition is that any measurable event determined by
>the entire collection of observations has a probability.

That is true for orthodox QM. Those probabilities are determined from
"probability amplitudes", which are related to projections of the wave
vector in the Hilbert space representation for which the Hamiltonian
matrix is diagonal.

Apparently I am missing something from your original comment that
there are joint probabilities in QM.

>>>This model is huge.

>>How huge is it?

>If one has n bits, the probabilities lie in a space of 2^(2^n) - 1
>dimensions.  

I take that to be 2^{(2^n) - 1}. Why is that the case?

The number of sequences is 2^n but why are there that many
probabilities? I vaguely recall something like this coming up in both
Li & Vitanyi and Feller, but obviously I failed to grasp the
significance.
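
For what it's worth, here is a tiny enumeration (my own illustration)
of the counting involved: with n bits there are 2^n possible
sequences, but if, per the "concise definition" above, every
measurable event - every subset of those sequences - is assigned a
probability, the events number 2^(2^n).

    # Count atomic outcomes vs. measurable events (subsets) for n = 2.
    # Illustration only; it shows where a double exponential comes from.
    from itertools import product

    n = 2
    outcomes = list(product('01', repeat=n))   # 2^n = 4 sequences
    num_events = 2 ** len(outcomes)            # 2^(2^n) = 16 subsets
    print(len(outcomes), num_events)           # -> 4 16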

>I doubt that it would help.  A quantum computer is still a computer.

But it is not the same as a classical computer. For one it does not
rely solely on the recursive properties of a classical computer. There
is an intrinsically non-deterministic element in quantum computation.
Also there is entanglement where presumably the results of certain
states of the device at one instant couple in such a manner as to
affect all other states, much like the kind of thing that happens to
give the interference pattern for the double slit experiment. The fact
that all possible results consistent with the algorithm driving the
unitary evolution of the wave function occur simultaneously makes the
quantum computer fundamentally different from a classical recursive
computer.

>The scale for randomness I am using is similar to the expected number
>of bits which would have to be changed, knowing all the probabilities,
>and having a TRNG available, to get a TRNG.

That appears at first glance to be a kind of entropic measure. IOW, a
TRNG is 100% random if it has 100% entropy. Is that not just a statement
of the independence of each of the bits of the sequence, that they can
be selected equiprobably from the sample space {0,1}? After all, if
one of those bits is fixed (i.e., it is known), then the entropy drops
from its maximum by one bit, and the TRNG loses that amount of
randomness, say to a level of 99.999% or whatever.

If that is correct, how do you go about measuring this entropic-like
property quantitatively without having to sample all or a large
fraction of the possible sequences (which is overwhelmingly greater
than just measuring the properties of one relatively small sequence)?
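
To make that entropy bookkeeping concrete, here is a small sketch (my
own, assuming a product-of-independent-bits model): n fair independent
bits carry n bits of Shannon entropy, and fixing one bit removes
exactly one bit, giving the 99.999%-style figure.

    # Shannon entropy of independent bits, in bits.  Fixing one bit
    # (p = 1.0) removes exactly one bit of entropy.  Sketch only.
    from math import log2

    def bit_entropy(p):
        # Entropy of a single bit with P(1) = p.
        return 0.0 if p in (0.0, 1.0) else -(p*log2(p) + (1-p)*log2(1-p))

    n = 100000
    probs = [0.5] * n                          # all bits fair, independent
    print(sum(map(bit_entropy, probs)))        # 100000.0 bits
    probs[0] = 1.0                             # one bit is known (fixed)
    print(sum(map(bit_entropy, probs)) / n)    # 0.99999 -> "99.999% random"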

Bob Knauer

"I read a funny story about how the Republicans freed the slaves. The
Republicans are the ones who created slavery by law in the 1600's.
Abraham Lincoln freed the slaves and he was not a Republican."
- Marion Barry, Mayor of Washington DC


------------------------------

From: Medical Electronics Lab <[EMAIL PROTECTED]>
Subject: Re: looking for specific technical report
Date: Wed, 14 Apr 1999 12:36:46 -0500

Mark Hammell wrote:
> 
> I am looking for the following technical report:
> 
> B.S. Kaliski Jr. and Y.L.Yin. Data-dependent rotations help prevent
> differential cryptanalysis.  Technical note, RSA Laboratories, August, 1996.
> 
> If anyone would happen to know where I can get it, please email me.

Contact them directly, they may be able to send it to you.
From the IEEE P1363 web page:

The officers of IEEE P1363: 

     Chair: Burt Kaliski, [EMAIL PROTECTED] 
     Vice-chair: Terry S. Arnold, [EMAIL PROTECTED] 
     Secretary: Roger Schlafly, [EMAIL PROTECTED] 
     Treasurer: Michael Markowitz, [EMAIL PROTECTED] 
     Editor: Yiqun Lisa Yin, [EMAIL PROTECTED] 

Patience, persistence, truth,
Dr. mike

------------------------------

From: [EMAIL PROTECTED] (Matthias Bruestle)
Subject: Re: PGP, The Big Lie (Theorem with Incomplete Proof)
Date: Wed, 14 Apr 1999 14:50:28 GMT


Charles Booher ([EMAIL PROTECTED]) wrote:
> Poof:

Oh how I fear your big POOF!

--
PGP: SIG:C379A331 ENC:F47FA83D      I LOVE MY PDP-11/34A, M70 and MicroVAXII!
-- 
...the Mouse That Roars, the Crew of the Flying Saucer, the
Magnificent Ambersons, the House I Live In, the Sound of One Hand, the
Territorial Imperative, the Druids of Stonehenge, the Heads of Easter
Island, the Lost Continent of Mu, Bugs Bunny and His Fourteen Carrots...

------------------------------

From: Geoff Thorpe <[EMAIL PROTECTED]>
Subject: Re: Not a PGP Expert
Date: Wed, 14 Apr 1999 15:35:07 -0400

Hi there,

[EMAIL PROTECTED] wrote:

> In article <[EMAIL PROTECTED]>,

> //snip a good explanation of how PGP works
>
>
> Your explanation would help any normal person understand how PGP works and why
> the proposed "attack" does not in fact exist, if not for the fact that Anthony
> Stephen Szopa has a certain agenda of his own. Namely, he sells overpriced
> snake-oil software pretending to offer real cryptographic strength through his
> web site at www.ciphille.org. Explaining to him why PGP works or from where it
> derives its strength is like explaining why Windows is flawed to a Microsoft
> programmer; he or she knows that you are right, but merely wants to spread FUD
> (Fear, Uncertainty, and Doubt) in the minds of others.

If so I'll be a little pissed off - I doubt I will bother answering such posts in
future, even at the risk of not giving something useful to someone who genuinely
wants it.

The usenet group who cried wolf I guess <wry grin>

Anyway, I thought Darwinism was supposed to prune these people out - do snakeoil
salesmen exhibit some "winning trait" that allows them to procreate through the
ages? (apart from rapid breeding with their own cousins?).

Cheers,
ME

------------------------------

From: [EMAIL PROTECTED] (R. Knauer)
Subject: Re: True Randomness & The Law Of Large Numbers
Date: Wed, 14 Apr 1999 18:00:23 GMT
Reply-To: [EMAIL PROTECTED]

On Wed, 14 Apr 1999 17:53:37 +0200, Mok-Kong Shen
<[EMAIL PROTECTED]> wrote:

>Well, tell us which statistical tests are applicable to detect
>non-randomness of arbitrarily given sequences in your esteemed opinion.

In the first place I am not an "expert", therefore I do not have any
"esteemed opinon" to offer. The most I claim to have are some of my
own questions and the comments of others - and maybe an occasional
opinion of my own which people usually jump all over (it's a tough job
being Devil's Advocate, but someone has to do it :-).

I do not believe that one can use standard, small sample, statistical
tests to determine true randomness or non-randomness to any reasonably
certain degree. Therefore I cannot be called upon to offer any even if
I were an "expert". I think those tests are all just snake oil, for the
reasons I have given.

True randomness is dependent on the process which generates it, and
statistical tests require a model of that process in order to arrive
at meaningful statistical significance. But the standard models are
not adequate to represent true random processes, models such as the
uniform Bernoulli process. All that model gives you is the
*appearance* of randomness because it does not tell you how the actual
process is operating.

Such models always fall back on some physical process such as a fair
coin toss. But there is circular reasoning involved, because the fair
coin toss is modeled by the UBP and the UBP is modeled by the fair
coin toss. The UBP does not model the actual process of flipping a
fair coin - that would take an incredibly complex description.

Listen to what Herman Rubin has been saying all along, namely that you
cannot model these processes simply. You can model some of the
characteristics, but that is not the same as actually modelling the
process itself.

If you could model true randomness correctly at the level of the
actual process which generates it, you could then model quantum
mechanics, which means you have discovered a hidden variable theory. I
do not believe that can be done, and offer the abject failure of the
brightest minds of the century as prima facie evidence for the
impossibility of finding hidden variable theories.

I could easily be wrong, since I am not all knowing. But until such a
hidden variable theory does come along - one which is better than
orthodox QM - I will stick with the prevailing consensus. I have no
problem accepting that God plays dice with the Universe - in fact I
encourage him to continue doing so, because it is the best way to run
a Universe in terms of exposing the manifest order in that Universe. 

There is no ongoing exposition of truth and beauty in a clockwork.
Once it has been discovered how it works, it is static for all time,
which is a colossal bore. With an intrinsically random Universe at its
base of operations, it can never become a static Universe. It is
always dynamic, exposing new truths and new beauty all the time -
things we can never know in advance because they are inherently
unknowable.

>If none, why is it that you allow applications of statistical
>tests in certain applications while denying their applicability
>in others?

The answer is quite easy to understand: The true random process is not
the same as the usual kinds of classical processes for which we use
statistical tests. True randomness has its roots in QM, and
statistical tests are not relevant to QM.

>Why are e.g. the FIPS tests bad?

At this stage of the discussion we are focusing on the Monobit Test,
deferring discussion of the other tests for another time. The FIPS-140
Monobit Test is inadequate for the reasons given earlier, namely that
the model upon which it is based is inadequate and the sample size it
uses is way too small. I go one step further and claim that all
classical tests are inadequate for the determination of true
randomness or non-randomness because classical tests cannot address a
quantum mechanical process such as that which is responsible for true
randomness in the first place.

I have no proof for that. I personally consider it obvious based on
the fact that QM is inherently random at its most fundamental level.
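
For reference, the test itself is simple to state, which is part of my
point about its limits. A sketch of the FIPS 140-1 Monobit Test as
published (a single 20,000-bit sample; the count of ones must fall
strictly between 9,654 and 10,346):

    # FIPS 140-1 Monobit Test: a 20,000-bit sample passes iff the
    # number of ones X satisfies 9654 < X < 10346.  The dispute here
    # is over what passing proves, not over the implementation.
    import os

    def monobit(bits):
        assert len(bits) == 20000
        ones = sum(bits)
        return 9654 < ones < 10346

    data = os.urandom(2500)   # 2500 bytes = 20,000 bits (illustrative)
    bits = [(byte >> i) & 1 for byte in data for i in range(8)]
    print(monobit(bits))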

>In which way do they need
>improvements according to you?

There is no way to improve fatally flawed tests. The error is in the
misapplication of the concept of "statistical significance" to the
determination of randomness or non-randomness. It has meaning in other
determinations, but not that one. It is a classical concept that has
no validity with quantum processes.
 
>But memership in a class presupposes the existence of an algorithm 
>to determine whether a given object is a member or not.

That is not true. If I observe the decay of one radioactive nucleus on
a scintillation screen, there is no algorithm to tell when that event
will occur in time. The time it occurs is completely unknown - there
is no algorithm possible to predict when it will happen.

Such an algorithm, if it did exist, would have an internal state as
large as the entire Universe, and knowledge of that state would
preclude the accurate prediction of the event. You are caught up in a
contradiction - the more you want to know about the future, the more you
must know about the present, but the more you know about the present,
the less you know about the future.

That dilemma (which I do NOT find paradoxical at all) is a direct
consequence of the Uncertainty Principle in its most general terms.
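
As an aside, the orthodox description of those decay times (a
probability law, not a prediction algorithm - which is the point) says
the waiting times are exponentially distributed. A minimal simulation
sketch, with a purely hypothetical decay rate:

    # Waiting times between decay events follow an exponential law.
    # Sampling the law says nothing about when any individual event
    # occurs; only aggregate quantities like the mean are lawful.
    import random

    rate = 5.0                        # hypothetical decays per second
    waits = [random.expovariate(rate) for _ in range(100000)]
    print(waits[:5])                  # individually unpredictable
    print(sum(waits) / len(waits))    # mean ~ 1/rate = 0.2 seconds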

>In the 
>present context please name a concrete algorithm that you favour to 
>test whether a given sequence belongs to the class of random sequences.

There is no such thing as an algorithm to test whether a sequence
belongs to the class of random sequences. For one thing there is no
such classification algorithmically - true randomness is a property of
the generation process not the sequences it produces. (That is not the
case for Kolmogorov-Chaitin randomness. But then such is not true
randomness.)

The Universe is intrinsically random at the base of its operation.
Rather than despair over that fact of reality, realize that it is the
best of all possible Universes because now it can be whatever it wants
to be consistent with Reality - and is not restricted to some initial
condition like a clockwork.

The order which emerges from the randomness of the Universe is like
the occurrence of highly ordered sequences generated by a TRNG. Most
sequences have full Kolmogorov-Chaitin randomness, but some do not and
are algorithmically compressible. The program which reproduces them is
a statement about their high degree of order - it contains the
"reason" for their being highly ordered.

For example, Newton's Laws are the "reason" for the orderly motion of
the planets, because it algorithmically compresses the data of that
motion into a very concise form that is capable of reproducing the
data of the actual motion.
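
A crude way to see that compressibility, using an ordinary compressor
as a stand-in for the true (uncomputable) Kolmogorov-Chaitin measure;
this is my own illustration, and zlib only bounds the complexity from
above:

    # Highly ordered sequences compress well; typical TRNG output does
    # not.  zlib is a crude upper bound on Kolmogorov-Chaitin complexity.
    import os, zlib

    ordered = b'01' * 10000               # highly ordered 20,000 bytes
    typical = os.urandom(20000)           # typical output, incompressible
    print(len(zlib.compress(ordered)))    # small: the order is "explained"
    print(len(zlib.compress(typical)))    # ~20000: no short program exists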

Bob Knauer

"I read a funny story about how the Republicans freed the slaves. The
Republicans are the ones who created slavery by law in the 1600's.
Abraham Lincoln freed the slaves and he was not a Republican."
- Marion Barry, Mayor of Washington DC


------------------------------

Date: Wed, 14 Apr 1999 03:01:12 -0400
From: "Trevor Jackson, III" <[EMAIL PROTECTED]>
Subject: Re: Adequacy of FIPS-140

R. Knauer wrote:
> 
> On Wed, 14 Apr 1999 01:27:04 -0600, [EMAIL PROTECTED] (wtshaw) wrote:
> 
> >The key concept is not to pretend that you get more randomness from
> >distilling text than you reasonably can.  Clearly it is between one and
> >three bits per letter, given a meaningful passage.  Therefore, given a
> >certain sized key, and a desired level of randomness for that key, you
> >should be able to figure a window of how much text you should start with.
> 
> Are you saying that if I want 8 bits of entropy per character, I
> cannot simply hash approx. 3 times as much text as key size? How about
> hashing 6 times as much text as key size? 12 times, 24 times, 100
> times. Text is cheap and so is computer time.

No.  Only secret text is suitable.  Secret text is NOT cheap.
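
In outline, the mechanics being debated look like this (my sketch; the
hash and parameters are illustrative). The hash cannot create entropy:
the key's entropy is capped by the entropy of the input text, so at
the 1-3 bits per letter cited above the text must be long enough, and
it must actually be secret.

    # "Distilling" a key by hashing text.  Hashing concentrates but
    # cannot create entropy: a 128-bit key needs >= 128 bits of input
    # entropy, i.e. on the order of 85+ letters at ~1.5 bits/letter,
    # and the text must actually be secret.
    import hashlib

    def distill_key(secret_text, key_bytes=16):
        return hashlib.sha256(secret_text.encode()).digest()[:key_bytes]

    key = distill_key("a long passage of genuinely secret text goes here")
    print(key.hex())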

> 
> Bob Knauer
> 
> "I read a funny story about how the Republicans freed the slaves. The
> Republicans are the ones who created slavery by law in the 1600's.
> Abraham Lincoln freed the slaves and he was not a Republican."
> - Marion Barry, Mayor of Washington DC

------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list (and sci.crypt) via:

    Internet: [EMAIL PROTECTED]

End of Cryptography-Digest Digest
******************************
