Cryptography-Digest Digest #359, Volume #9        Thu, 8 Apr 99 06:13:03 EDT

Contents:
  Re: True Randomness & The Law Of Large Numbers ("Douglas A. Gwyn")
  Re: True Randomness & The Law Of Large Numbers ("Douglas A. Gwyn")
  Re: R. Knauer : True Jerk ("Douglas A. Gwyn")
  Re: R. Knauer : True Jerk ("Douglas A. Gwyn")
  Re: True Randomness & The Law Of Large Numbers ("Franzen")
  Re: True Randomness & The Law Of Large Numbers ("Douglas A. Gwyn")
  Re: True Randomness & The Law Of Large Numbers (R. Knauer)
  Re: True Randomness & The Law Of Large Numbers (Eric Smith)
  Re: True Randomness & The Law Of Large Numbers ("Douglas A. Gwyn")
  Re: True Randomness & The Law Of Large Numbers (R. Knauer)
  Douglas A. Gwyn : True Jerk (R. Knauer)
  Re: is it true that Irish teen found crypto alg faster that RSA (Daniel James)

----------------------------------------------------------------------------

From: "Douglas A. Gwyn" <[EMAIL PROTECTED]>
Subject: Re: True Randomness & The Law Of Large Numbers
Date: Thu, 08 Apr 1999 07:31:33 GMT

[EMAIL PROTECTED] wrote:
> So you are right that the test does not give us three categories,
> but the one to eliminate is "random"; the test yields either
> "not random" or "maybe random".

That's oversimplified to the point of being misleading.
A statistical test such as the ones we've been talking about
(applied to outputs from putative RNGs of various kinds)
doesn't make absolute in/out judgements; it provides
information with which we can assess the likelihood of the
hypothesis under test.  Simpler tests of this sort are
attempts to "disprove" a stated ("null") hypothesis,
where "disprove" is meant in a probabilistic sense: to
an agreed degree of certainty, less than full certainty.
If 200 different RNGs are tested and culled at a 95% level
of confidence, that means that we expect around 10 of them
to have been unfairly eliminated by the test.  (Use of
"confidence levels" is not essential to the argument, but
it is a familiar idea to most people, due to Fisher's
influence on statistics early this century, and it is
fairly intuitive.)  A good test (and acceptance criteria)
will eliminate most *actually* bad generators and retain
most *actually* good ones.  If you apply enough
independent filters of this sort, the *combination* of
tests will more "sharply" discriminate between actual
badness and goodness than any single test does.
(This is usually called the "power" of the test.)
None of that depends on Gaussianness, infinite limits,
or anything else that would make it inapplicable to
quantum-computer-based RNGs.
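
A quick simulation makes that arithmetic concrete.  The sketch below
(Python; the generators and the single monobit-style test are
illustrative assumptions, not anyone's actual equipment) culls 200
known-good generators at 95% confidence and counts the unfair
rejections, then shows how a battery of independently applied tests
discriminates a slightly biased generator far more sharply than any
single test does:

    import random

    def monobit_ok(bits, z_crit=1.96):
        # Two-sided monobit test at ~95% confidence:
        # normal approximation to the binomial ones-count.
        n = len(bits)
        z = (sum(bits) - n / 2) / (n / 4) ** 0.5
        return abs(z) <= z_crit

    def sequence(n, p_one=0.5):
        return [1 if random.random() < p_one else 0 for _ in range(n)]

    random.seed(1)
    # 200 *good* generators: expect ~5% (about 10) unfair rejections.
    rejected = sum(not monobit_ok(sequence(20000)) for _ in range(200))
    print("good generators rejected:", rejected)

    # A generator biased to p = 0.51 often survives one test, but
    # almost never survives five applied to fresh output.
    for k in (1, 5):
        ok = sum(all(monobit_ok(sequence(20000, 0.51)) for _ in range(k))
                 for _ in range(200))
        print("biased generators surviving", k, "test(s):", ok)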

------------------------------

From: "Douglas A. Gwyn" <[EMAIL PROTECTED]>
Subject: Re: True Randomness & The Law Of Large Numbers
Date: Thu, 08 Apr 1999 07:42:35 GMT

Herman Rubin wrote:
> This is the case if the random process has certain properties.  Any
> physical source generates a random process, but not necessarily with
> the desired properties.  "Random process", to someone who uses
> standard probability terminology, merely means that all the values
> have a joint distribution, not that this distribution is any
> particular one.

Yeah, but in the context of a OTP keystream generator,
"random" is being used informally to mean one that is
suitably random for secure use in that context.  Most
of us think that includes, but is not limited to,
having equiprobability over the key's "alphabet"
(which can be limited to 0 and 1 without changing the
issues).

------------------------------

From: "Douglas A. Gwyn" <[EMAIL PROTECTED]>
Subject: Re: R. Knauer : True Jerk
Date: Thu, 08 Apr 1999 08:01:53 GMT

"Douglas A. Gwyn" wrote:
> hapticz wrote:
> > truth is relative.
> And you're asserting that as an absolute?
> If not, it's self-defeating.

I guess I should add: If so, it's self-contradictory.

------------------------------

From: "Douglas A. Gwyn" <[EMAIL PROTECTED]>
Subject: Re: R. Knauer : True Jerk
Date: Thu, 08 Apr 1999 07:45:20 GMT

hapticz wrote:
> truth is relative.

And you're asserting that as an absolute?
If not, it's self-defeating.

------------------------------

From: "Franzen" <[EMAIL PROTECTED]>
Subject: Re: True Randomness & The Law Of Large Numbers
Date: Thu, 8 Apr 1999 03:34:31 -0500

Bob Knauer previously wrote:

>>>I would ask what are you using for the TRNG sequence in that test.
>>>IOW, you must be comparing a purported TRNG to a known TRNG. What is
>>>that known TRNG?

On Tue, 6 Apr 1999 00:51:02 -0500, "Franzen" <[EMAIL PROTECTED]> replied:

>>The fair flipping of a two-sided coin is one.

>Unfortunately it is a classical process, and therefore not TRULY
>random. For all practical purposes it is as nearly random as one can
>expect for a classical process, but it is still not a true random
>process.

>>What the devil is a "classical" TRNG?

>Any process that can be described by classical physics. A fair coin
>toss, for example. A quantum TRNG would be a process that depends on
>quantum physics.

>>100% random? That implies 99, 98, ... . Are you one of those who think
>>there exists some sort of uniform randomness acme?

>No, actually I think there are only two possibilities: certain
>true randomness, and all the rest.

>>So your hypothetical TRNG will "always" be flawed. I am not very
>>encouraged by this part of your vision.

>I was referring to existing TRNGs like a radioactive TRNG (Cf.
>"HotBits"). But a quantum computer will presumably not suffer from any
>such flaws.

Here is a hierarchy derived from what you say above, adding 4, 5 & 6
from my experience:

1 - Quantum random number generator (QRNG) = Unbiased perfection
2 - Radioactive random number generator (RRNG) = Flawed perfection
3 - Uniform random number generator (URNG) = Nearly classical random
4 - Full-cycle Pseudo random number generator (PRNG) = Disorder
5 - Part-cycle PRNG = Minimum disorder
6 - Ordered number generator (ONG) = Everything else

The "certain true randomness" process can only exist in #1 per these and
your previously posted arguments. 2 through 5 conform to the PRNG model per
your previous argument postings. Does something seem out of wack here? Or is
my hierarcy listing somehow unfairly stated?

#1 sure looks like the acme (epitome?) of the hierarchy to me.

Let me share with you how odd I feel when you write (over and over again)
nobody can distinguish (discriminate?) randomness OR non-randomness by use
of ANY statistical measure. I frequently distinguish 4, 5 & 6, and I am
working on #3 now.

I am certain you do not know all the possible statistical measures and their
possible applications. So, unless God has whispered in your ear, you cannot
know everything about statistical measuring. You and Berry seem to share an
enthusiasm for making absolute pronouncements about things you cannot know.
I think this enthusiasm is not part of the scientific method.

I freely admit I am coming at our subject matter in a very different
direction, and with a much simpler goal than yours. I merely wish to
indistinguishably emulate the fair coin toss on my computer. When/if I get
to that goal I will consider it a substantial gain over current practice.
Conventional thought currently holds this goal unachievable.

What is this "nearly" random stuff related to the fair coin toss? What
flaw(s) prevent this process from being completely classically random?

---
>The standard for true randomness is a quantum computer that is
>programmed to calculate true random numbers. The algorithm for that
>already exists but the quantum computer does not. But it will be
>built, so the standard will exist some day.

Boy oh boy! I wish I had the gall to make such an assertion with a straight
face. Truthfully, I would only be willing to make such an extraordinary
assertion if I had an overpowering urge to be seen by others as a potent
visionary. No doubt it is different for you.

Does this unicorn quantum computer you envision have a variable radix
ability? I ask because I have the current impression that the physical
processes you and many others seem to have unbounded faith in translate
into just two possible output states: we usually call them 0 and 1. If so,
how do you expect to unbiasedly translate these two states into radix 10
(decimal)?
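
(As far as I know, the textbook construction is rejection: group the
bits and discard out-of-range values. A minimal Python sketch, assuming
an unbiased bit source -- `next_bit` here is a hypothetical stand-in:)

    import random

    def next_bit():
        # stand-in for an unbiased physical bit source (assumption)
        return random.getrandbits(1)

    def decimal_digit():
        # Unbiased base-10 digit from unbiased bits, by rejection:
        # take 4 bits (0..15), keep 0..9, reject 10..15 and retry.
        while True:
            v = 0
            for _ in range(4):
                v = (v << 1) | next_bit()
            if v < 10:
                return v    # each digit occurs with probability 1/10

    print("".join(str(decimal_digit()) for _ in range(20)))

Each accepted group is uniform over 0..9 exactly when the bits are
unbiased and independent; the cost is about 6.4 bits per digit.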

---
>>I think of uniform randomness as concurrently "perfectly imperfect" and
>>"imperfectly perfect."

>Berry's Paradox in disguise.

My formulation is enabling for me; Berry's is disabling. Therefore, they are
not equivalent. I keep my mind open to the possibility you are referring to
a "quantum" disguise which no doubt differs from a "classical" disguise.

---
Douglas McLean




------------------------------

From: "Douglas A. Gwyn" <[EMAIL PROTECTED]>
Subject: Re: True Randomness & The Law Of Large Numbers
Date: Thu, 08 Apr 1999 06:47:48 GMT

"R. Knauer" wrote:
> FIPS-140 has a Long Run Test that uses 20,000 bits:
> 1.A long run is defined to be a run of length 34 or more (of either
> zeros or ones).
>  2.On the sample of 20,000 bits, the test is passed if there are NO
> long runs.
> Yet it could occur, in which case you would have thrown out a
> perfectly good TRNG. What are you going to do when a quantum computer
> calculates a true random number that has a long run length that is
> outside your limits?

If the (hypothetical) quantum-powered TRNG is working properly,
there is approximately 1 chance in 1,000,000 of it failing the
FIPS-140 test cited.  In practice one selects the RNG from vastly
fewer than a million candidates, so the
chance of a good one being unfairly rejected by the test is
exceedingly small.  Otherwise, the FIPS-140 criterion should be
challenged and the limits adjusted to improve its ability to
discriminate.  I suspect "one in a million" was the informal
statement of which that specification is a refinement.
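
That figure is easy to check with a back-of-the-envelope calculation.
A sketch in Python, using the standard run-counting approximation
rather than the exact recurrence:

    import random

    def longest_run(bits):
        # length of the longest run of identical bits
        best = cur = 1
        for a, b in zip(bits, bits[1:]):
            cur = cur + 1 if a == b else 1
            best = max(best, cur)
        return best

    # The Long Run Test itself is one line once longest_run exists:
    bits = [random.getrandbits(1) for _ in range(20000)]
    print("passes long run test:", longest_run(bits) < 34)

    n, k = 20000, 34
    # A maximal run of length >= k starts at a given position when the
    # preceding bit differs (prob 1/2) and the next k-1 bits all match
    # (prob 2**-(k-1)); summing over positions gives the expected
    # number of such runs, which for rare events is close to the
    # probability of seeing at least one.
    p = (n - k + 1) * 0.5 * 2.0 ** -(k - 1)
    print(p)    # ~1.16e-06: about one in a million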

> Of course, if the TRNG or quantum computer outputs many sequences like
> that, then you have serious cause for concern. But the very fact that
> such an occurrence is extremely rare tells us we are discussing a moot
> issue.

No, such an occurrence is rare *only* when the equipment or
system does work as advertised.  The tests are to detect and
eliminate *faulty* (malfunctioning) equipment or systems.

> FIPS-140 discusses:
> 1) The Monobit Test
> 2) The Poker Test
> 3) The Runs Test
> 4) The Long Run Test
> That last one is the one you discussed above. How about commenting on
> the other three.

How about reading section 3.3 of Knuth (The Art of Computer
Programming, Vol. 2 - Seminumerical Algorithms), which is
specifically about statistical tests for random number
generators.

As discussed previously, it is desirable to use a whole
battery of not-strongly-related tests.
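
For the record, the first two are a few lines each to implement.  A
sketch (acceptance intervals as given in FIPS 140-1 for 20,000-bit
samples, quoted from memory, so verify against the published standard):

    def monobit(bits):
        # pass if the ones-count falls inside the FIPS 140-1 interval
        ones = sum(bits)
        return 9654 < ones < 10346

    def poker(bits):
        # chi-square-style statistic over 5,000 non-overlapping
        # 4-bit values taken from the 20,000-bit sample
        counts = [0] * 16
        for i in range(0, 20000, 4):
            v = (bits[i] << 3) | (bits[i+1] << 2) | (bits[i+2] << 1) | bits[i+3]
            counts[v] += 1
        x = 16.0 / 5000 * sum(c * c for c in counts) - 5000
        return 1.03 < x < 57.4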

> >Assuming it passed my tests I'd use it.  The important thing
> >about the numbers is NOT where they came from it is the
> >properties they possess.
> I believe that is against prevailing consensus. We seemed to have
> decided 1 1/2 years ago that it is the process of random number
> generation that determines randomness of numbers, not any intrinsic
> properties of the numbers.

Testing the output of a process is one way to test the process.

No amount of statistical testing can ever make one 100% *sure*
that there is no hidden pattern in a number sequence.  But then,
neither can mere inspection of the process that produces them,
because in reality there is ample opportunity for something
to not work according to the theoretical model that the
system's security relies on.

This doesn't much matter for cryptology anyway, since the
key management problems of OTP systems make them undesirable
for most purposes.  The main use of RNGs in cryptology is to
generate "initialization vectors", and it is well understood
that successful cryptanalysis is not going to occur through
weaknesses in the randomly-generated IV (except perhaps for
catastrophic ones, such as the RNG breaking and emitting a
constant).

------------------------------

From: [EMAIL PROTECTED] (R. Knauer)
Subject: Re: True Randomness & The Law Of Large Numbers
Date: Thu, 08 Apr 1999 08:47:32 GMT
Reply-To: [EMAIL PROTECTED]

On Thu, 8 Apr 1999 03:34:31 -0500, "Franzen" <[EMAIL PROTECTED]> wrote:

>>The standard for true randomness is a quantum computer that is
>>programmed to calculate true random numbers. The algorithm for that
>>already exists but the quantum computer does not. But it will be
>>built, so the standard will exist some day.

>Boy oh boy! I wish I had the gall to make such an assertion with a straight
>face. Truthfully, I would only be willing to make such an extraordinary
>assertion if I had an overpowering urge to be seen by others as a potent
>visionary. No doubt it is different for you.

I got my "vision" from reading Williams & Clearwater's book on quantum
computing. It is their assertion, not mine. They devote an entire
chapter to the subject of true randomness, even naming the chapter
"True Randomness".

Bob Knauer

Signature files traditionally present comments from a population with an
IQ over 100. I thought it would be appropriate to present comments from the
other half of the population, namely the one with an IQ under 100. After
all, they do make up 50% of the human condition. Never mind that they also
tend to become politicians. Here is a representative example:

"People blame me because these water mains break, but I ask you, if the
water mains didn't break, would it be my responsibility to fix them then?
WOULD IT!?!"
- Marion Barry, Mayor of Washington DC


------------------------------

From: Eric Smith <[EMAIL PROTECTED]>
Subject: Re: True Randomness & The Law Of Large Numbers
Date: 08 Apr 1999 01:58:54 -0700

"Trevor Jackson, III" <[EMAIL PROTECTED]> writes:
> Penrose made much of this issue in his book on artificial intelligence. 
> In principle there is supposed to be something special about quantum
> events that makes them distinguishable from non-quantum events.  He
> argues that brains are based on, and subject to, this special quality
> that machines (computers) cannot simulate.

It really aggravated me that he presented this as a postulate with no
convincing evidence, and then proceeded to use this as a basis for a "proof"
that an intelligent computer can't be made.  Never mind that if we wanted to,
we could fairly easily make even an otherwise conventional computer (i.e., not
a quantum computer) that had a quantum randomness based random number
generator as a peripheral.

It seemed to me that Penrose was waving quantum randomness around like it was
a magic wand that could be used to prove or disprove anything.  I was recently
in an argument with someone who insisted on using Godel's Incompleteness
Theorem in much the same way.  He claimed that compilers will never be able
to produce object code as efficient as the best human hand-coded assembly,
because of GIT.  When I asked how GIT was even relevant, he told me to think
about it and I should be able to figure it out.  I guess that's what Penrose
intended us to do regarding the connection between quantum randomness and
intelligence.

------------------------------

From: "Douglas A. Gwyn" <[EMAIL PROTECTED]>
Subject: Re: True Randomness & The Law Of Large Numbers
Date: Thu, 08 Apr 1999 07:37:52 GMT

Herman Rubin wrote:
> It would not be too surprising to get this result in ONE trial; the
> probability of 8 or less is about 7%.

Here is another factor, quite common in claims of "paranormal"
abilities:  If somebody was searching for an example to illustrate
his point, would be necessarily try just once and give up if that
attempt didn't meet the criteria?  No, he'd probably try several
time, until he found a case that came out the way he wanted.
Then, he'd only present the one that "worked" and not mention all
the ones that didn't.
So, OF COURSE the example of unexpected behavior turns out to be
unusual behavior.  And OF COURSE it is not SO unusual as to have
taken forever to occur during the search for an example.
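
Rubin's figure is easy to reproduce, which also shows why this kind of
search succeeds so quickly.  A quick simulation (Python; the ~7% is an
empirical check, not a derivation):

    import random

    def origin_visits(steps=10000):
        # count returns to 0 of a fair +/-1 random walk
        pos, visits = 0, 0
        for _ in range(steps):
            pos += 1 if random.getrandbits(1) else -1
            if pos == 0:
                visits += 1
        return visits

    random.seed(2)
    trials = 2000
    low = sum(origin_visits() <= 8 for _ in range(trials))
    print(low / trials)    # roughly 0.07, matching Rubin's figure

At a 7% hit rate, someone hunting for an "unusual" walk needs only a
dozen or so tries on average to find one to present.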

------------------------------

From: [EMAIL PROTECTED] (R. Knauer)
Subject: Re: True Randomness & The Law Of Large Numbers
Date: Thu, 08 Apr 1999 09:21:38 GMT
Reply-To: [EMAIL PROTECTED]

On Thu, 08 Apr 1999 08:01:09 GMT, [EMAIL PROTECTED] wrote:

>In another thread I showed that a test based on the binomial
>distribution of bit counts could show that a generator fails
>your true randomness criteria.  This directly disproves claims
>you made about the inapplicability of the test.

You are missing the point. I agree that the binomial/normal
distribution correctly characterizes the 1-bit group distribution for
a flat distribution. After all, that is what the binomial distribution
is all about, namely 1-bit grouping.

What I am currently challenging is the test methodology to exploit
that fact. Specifically I am challenging it with regard to sample size
on a per test basis.

I think it is clear that if you could test the whole ensemble, you
would have certain knowledge. And it is clear that if the sample size
is too small, that your knowledge is very poor. I am looking for the
knowledge criterion I call "reasonable certainty". I am leaving that
open for you to specify. Once you specify it, then you should be able
to come up with a sample size for a given test. Just keep in mind that
the gaussian falls off very slowly, so each increase in significance
costs exponentially more in sample size.

I am also still concerned on the meta-mathematical level with the
(apparent) circular reasoning of assuming that a sampling procedure is
random in order for the tests to be valid and then concluding that the
sampling procedure is not random because the sample failed the tests.

That tells me that either the notion of sample is confused or the
notion of random is confused, as they apply to the notion of sampling
and generation. For example, in FIPS-140, the authors tell you to
gather a sample of 20,000 contiguous bits. Is 20,000 the sample size?
If so, then you are testing 1-bit properties since the sample space is
now {0,1}. But that is not what our distribution is about. It is about
sequences of much larger size than 1 bit. So is the 20,000 bit
sequence just one sample? If so, that is a very small sample size.

I am sure this point can be cleared up, but for me it remains a
question I must ask.

>I think I mentioned the Central Limit Theorem once to correct
>something you had said about it.

So? What is your point?

As Devil's Advocate I am going to make many mistakes, and most will be
unintentional. And since I am not an Expert (in either sense of the
word), I do not have a brittle ego to defend. I liken this procedure
of debate to a sport and imagine causing the "experts" to stumble now
and then. There are few things as funny as an "expert" with egg on his
face. But that is the extent of the sport to me. When I as Devil's
Advocate stumble, it is to be expected since I am not an expert.

My purpose is to generate a prevailing consensus so we can move on to
topics more closely related to crypto. It is difficult to make headway
when there are as many opinions about something as there are
participants in a discussion. That is the case with randomness and
statistical testing.

But I do admit I have a fascination with randomness because of its
importance to quantum physics. There is something very metaphysical
about randomness, where the term "metaphysical" means the science of
being.

>The number of bits _is_ large enough to do
>tests that may reject the randomness hypothesis with high
>reliability.  For example consider the case you yourself
>introduced in which n = 1,000,000 and we count cases in
>which the number of ones is more than 0.05*n from the mean.
>What's the probability of a truly random generator, again
>by your definitions, finding such cases in a few thousand
>tries?

Is that the actual test procedure - to run tests on 1 megabit
sequences a few thousand times? Is a few gigabits of sample sufficient
to give you reasonable certainty? It would seem so, but then it seems
that the random walk should visit the origin more than 8 times in
10,000 steps.
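
For what it's worth, the arithmetic on that 0.05*n case is stark.  A
quick computation (normal approximation; erfc underflows at this scale,
so the log of the tail probability is computed directly):

    import math

    n = 1000000
    dev = 0.05 * n                  # 50,000 ones away from the mean
    sigma = math.sqrt(n * 0.25)     # sd of the ones-count, fair bits
    z = dev / sigma                 # 100 standard deviations
    # two-sided Mills-ratio estimate: p ~ exp(-z*z/2)/(z*sqrt(pi/2))
    log10_p = (-z * z / (2 * math.log(10))
               - math.log10(z * math.sqrt(math.pi / 2)))
    print("z =", z, " log10(p) ~", round(log10_p))   # about -2174

So a fair source will never produce such a case in a few thousand
tries, or in a few billion; seeing even one is decisive.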

>Are you hoping if you just keep repeating that some
>justification will magically appear?

You still don't get it. I am doing no such thing. I am looking for an
explanation that is based on first principles.

As Herman Rubin commented several times, most books on statistics are
crummy, so the only way left for the Informed Layman (TM) to acquire
knowledge in a reasonable amount of time is to consult the experts - true
experts, not fake experts. 

True experts will take the time to explain things in clear terms. Fake
experts will hide behind a smoke screen of obfuscation, pedantry,
bluster and pontification.

>Oh, one more time,

Who cares what I said? I say a lot of things to spur discussion.
Sometimes I say things I know are wrong (but don't tell anyone I said
that).

You act like you are some kind of fundamentalist preacher who is
trying to tell the congregation that I am a sinner. Lighten up willya,
or you will get a coronary at an early age.

Bob Knauer



------------------------------

From: [EMAIL PROTECTED] (R. Knauer)
Subject: Douglas A. Gwyn : True Jerk
Date: Thu, 08 Apr 1999 09:28:33 GMT
Reply-To: [EMAIL PROTECTED]

On Thu, 08 Apr 1999 07:45:20 GMT, "Douglas A. Gwyn" <[EMAIL PROTECTED]>
wrote:

>hapticz wrote:
>> truth is relative.
>
>And you're asserting that as an absolute?
>If not, it's self-defeating.

It is also contradictory.

Bob Knauer



------------------------------

From: Daniel James <[EMAIL PROTECTED]>
Subject: Re: is it true that Irish teen found crypto alg faster that RSA
Date: Thu, 08 Apr 1999 08:09:14 +0100
Reply-To: [EMAIL PROTECTED]

In article <[EMAIL PROTECTED]>, John Savard wrote:
> Odd. That doesn't sound right, ...
>

Well, John, I don't pretend to have gone into the maths myself but I 
believe the work is genuine, and you can find a bit more from the URL I 
gave ....

Oh, and my "... PLI company Baltimore ... " should have been "... PKI 
company Baltimore ...", in case that wasn't obvious.

Cheers,
 Daniel.


------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list (and sci.crypt) via:

    Internet: [EMAIL PROTECTED]

End of Cryptography-Digest Digest
******************************
