Cryptography-Digest Digest #142, Volume #9       Thu, 25 Feb 99 20:13:03 EST

Contents:
  Why Physical Randomness? (John Savard)
  Re: Define Randomness (Patrick Juola)
  Re: Define Randomness (Anthony Stephen Szopa)
  Re: Define Randomness (Terry Ritter)
  Re: A New Public-Key Cryptosystem (Lowball61)
  Re: Define Randomness (R. Knauer)
  Re: Define Randomness (Terry Ritter)

----------------------------------------------------------------------------

From: [EMAIL PROTECTED] (John Savard)
Subject: Why Physical Randomness?
Date: Thu, 25 Feb 1999 22:38:48 GMT

[EMAIL PROTECTED] (Terry Ritter) wrote, in part:

>We cannot distinguish between PRNG and TRNG by general external tests.
>But the whole point of the device is to produce unknowable numbers.
>These are TRNG, and there is a difference.  That difference is the
>unknown algorithm which can recover the internal PRNG state, and then
>reproduce the sequence.  The reason for having a physically-random
>device is that we can assume that there can be no such algorithm.

This says - as well as anything that anyone has said here - why a
physical source of randomness is useful to have, and what the
advantage is that it has over an algorithmic method of generating
random numbers, however good.

John Savard (teenerf is spelled backwards)
http://members.xoom.com/quadibloc/index.html

------------------------------

From: [EMAIL PROTECTED] (Patrick Juola)
Subject: Re: Define Randomness
Date: 25 Feb 1999 16:28:03 -0500

In article <[EMAIL PROTECTED]>,
R. Knauer <[EMAIL PROTECTED]> wrote:
>On 25 Feb 1999 13:17:56 -0500, [EMAIL PROTECTED] (Patrick Juola)
>wrote:
>
>>No.  I'm saying that any deterministic procedure is algorithmic.
>
>If you see two cars heading for each other, do you consider that
>algorithmic? A car crash is deterministic, and there is a procedure
>involved.
>
>If I run a calculation on a nondeterministic computer, is it any less
>an algorithm because it is not a deterministic procedure?

Um.... "All housecats are mammals" does *not* imply that all mammals
are housecats.

And I don't regard two cars driving as a procedure.  If you do, then
yes, what you've just described is an *algorithm* for producing a
collision.

        -kitten

------------------------------

From: Anthony Stephen Szopa <[EMAIL PROTECTED]>
Subject: Re: Define Randomness
Date: Thu, 25 Feb 1999 15:40:26 -0800
Reply-To: [EMAIL PROTECTED]

John Savard wrote:

> Anthony Stephen Szopa <[EMAIL PROTECTED]> wrote, in part:
>
> >We would all probably agree that the Keno games in Reno or Lake Tahoe or
> >Las Vegas, Nevada, produce the twenty numbers used in the game at
> >random, by encasing eighty uniquely numbered ping pong balls in that
> >plastic sphere and driving them into a tumult with a continuous stream
> >of modestly compressed air.
>
> >We would agree to the randomness of the process in part because the
> >outcomes are essentially non-reproducible.  Furthermore, we would agree
> >to the randomness of the process because the outcome generally over long
> >but modest runs would certainly demonstrate that each numbered ball has
> >the same probability of being among the twenty chosen.
>
> >But we do not have a perfect Keno machine, or perfect information, or a
> >sufficiently powerful computer to process this data and give us our
> >predictable outcome.  But if we had, we could predict the outcome!
>
> >We now use Big Blue and make our calculations based on a given assigned
> >starting point or initial conditions with certain ping pong balls
> >located in specific places in space with certain velocities and spins,
> >etc.  After about five days and several games we determine a string of
> >100,000 numbers with each number being from 1 to 80.
>
> >So the argument is twofold:  first randomness is relative.  What is
> >random for some is predictable and non random for others.  And secondly,
> >computer programs can produce genuinely true random numbers.
>
> >I rest my case.
>
> >And may you rest in peace trying to refute what I have described and
> >concluded because you will surely die trying.
>
> Actually, you have put forth a good argument here.
>
> Rolling dice, or picking lottery balls - those are classical physical
> processes. If they're sufficient to produce what is accepted as "true
> randomness", then, why wouldn't an _equally detailed_ computer
> simulation be acceptable?
>
> One important point - a large amount of randomness goes into the
> starting conditions of the physical version. We don't measure the
> positions and velocities of all the air molecules in the room. We
> don't measure the size of the cage with the balls to twenty decimal
> points.
>
> But if we use a pure computer program, without a source of true
> randomness to scatter the initial conditions in a way unknown to us,
> then we don't have all the free initial entropy that a physical
> randomizer gives.
>
> Computer programs can produce cryptosecure pseudorandom sequences.
> Mimicking a physical system would be an extraordinarily inefficient way
> of doing so.
>
> You may well claim that distinguishing that sort of random number
> stream from those produced by dice by calling the one "pseudorandom"
> and the other "truly random" is only quibbling.
>
> However, look at it from the other point of view: while an algorithmic
> method of generating pseudorandom numbers can be very, very good, it
> doesn't have to be. So, if one person can call his algorithmically
> generated numbers "truly random", what about the next person, with a
> somewhat simpler generator, and the next person after him?
>
> Far better to keep claims straightforward. One fellow generates random
> numbers from noise in gas tubes or something - these are true random
> numbers, subject to certain cautions and limitations. Those who
> generate random numbers algorithmically - they explain what algorithm
> they're using, and make claims about how good their techniques are -
> how well they approach the ideal of true randomness (which physical
> systems fall short of, but in a different way).
>
> And people judge their offerings by the quality of the PRNG - its
> level of cryptosecurity. But no one is charged with the task of
> setting a boundary where more secure PRNGs are called "truly random"
> while less secure ones are only pseudorandom.
>
> Yes, one can design a cipher - and it doesn't have to be a stream
> cipher - that allows itself a larger key, and a longer running time,
> than necessary for 'just enough' security. But if I can already claim,
> with perfect justice, that my method offers more security than anyone
> could possibly need - why should I go further, and spoil everything,
> by claiming that my method is _perfect_? When that is not *strictly*
> true, even if the computer big enough to prove it in practice could
> never be built.
>
> John Savard (teenerf is spelled backwards)
> http://members.xoom.com/quadibloc/index.html

Thank you for the philosophy that kept its feet on the ground.  Some of this
stuff reaches escape velocity.  But I guess the posters have their reasons
for wanting to transcend reality.

Crypto is above all else:  practicable.  And any discussions should keep this
in mind.


------------------------------

From: [EMAIL PROTECTED] (Terry Ritter)
Subject: Re: Define Randomness
Date: Fri, 26 Feb 1999 00:46:47 GMT


On Thu, 25 Feb 1999 21:59:28 GMT, in
<[EMAIL PROTECTED]>, in sci.crypt
[EMAIL PROTECTED] (R. Knauer) wrote:

>On Thu, 25 Feb 1999 19:51:11 GMT, [EMAIL PROTECTED] (Terry Ritter) wrote:
>
>>>It is my understanding that Bayesian inference can be used to break
>>>stream ciphers if they leak enough information. And that is not the
>>>only kind of inferential procedure. There is also Solomonoff
>>>inference.
>
>>I would like to know more about this.  If you have references to
>>papers, articles on the web, etc., I would read them.  
>
>I can only refer you to that book I have mentioned at least 100 times
>here in the last month: Li & Vitanyi "An Introduction to Kolmogorov
>Complexity and Its Applications".

It may come as a shock to learn that there are many sources for
information on randomness.  Nor is it appropriate to assert that
because something was in a particular book, or written by a particular
author, it is necessarily true.  


>As to the applicability of Bayesian inference for breaking stream
>ciphers, Patrick Juola is the one to talk to. I also would like to
>find some references to that method.

The last I heard from him on this was that the process was
exponentially complex (which means we cannot test all possibilities),
and that it yields only a probabilistic result, which seems to me to
provide less than I associate with the term PROOF.  

I am confused as to how you could possibly interpret these
capabilities differently.  


>As I understand it, each step of the Bayesian method yields a
>quantitiative measure of the correctness of the hypothesis. It is
>called "probabily approximately correct". I would imagine that if you
>were using the Bayesian method to discover the message, you could use
>the level of calculated certainty to quantify the security of the
>cipher.
>
>But I am just as much in the dark as you - that's why I keep bringing
>this up.

For this to be useful, I think we would have to make the leap that we
have *identified* all possible hypotheses, which is impossible at
realistic length, and that we have *tested* all possible hypotheses,
which is even more impossible.  I don't see this helping.  Since you
do, perhaps you could explain how.

It is often said that if the confusion sequence has more "entropy"
than the data, the system must be secure.  So, it may be that if we do
things like reduce the entropy in our data to some small maximum
level, and then ASSUME that our TRNG always produces more than that,
we can claim to protect the data.  
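The measurement problem can be made concrete with a naive estimator (an
illustrative Python sketch, not part of the original discussion): the only
thing we can actually compute is the empirical entropy of one sample, which
is a statistic of that sample, not a property of the generator.

```python
# Illustrative sketch: a naive Shannon entropy estimate over byte
# frequencies.  This measures only the zeroth-order statistics of one
# sample -- it cannot certify the "fundamental" entropy of the
# generator that produced it, nor that future output will match.
from collections import Counter
from math import log2

def entropy_bits_per_byte(data: bytes) -> float:
    """Empirical Shannon entropy of a byte sample, in bits per byte."""
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * log2(c / n) for c in counts.values())

# A constant stream measures 0 bits/byte; a sample of a good generator
# should approach (but never provably equal) 8 bits/byte.
```

Note that a simple counter 0, 1, 2, ... 255 also measures a "perfect"
8 bits/byte here, which is exactly the sense in which such a computed
"entropy" falls short of the fundamental quantity.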

I think various practical problems here include measuring the
"fundamental" "entropy" of the confusion sequence, and assuming that
the computed "entropy" is constant and that it will apply to each
element of the sequence.  If we are willing to accept PROOF modulo
various unprovable assumptions, why not just call the system "secure"
and be done with it?  

I think "proof" means PROOF: a 100% iron-clad demonstration that no
other possibilities exist.  This is *not* the same as proof *if* only
something else is what we hope and wish it to be.  


>>I find this whole idea that cryptographers can somehow quantify the
>>strength of a cipher or TRNG as quite unlikely.
>
>Yet we can certify the OTP cryptosystem as proveably secure.

No.  OTP systems are provably secure only in concept. 

The *provably* secure OTP is a *theoretical* OTP -- and that is only
good for sending theoretical data.  

In practice, OTP's depend upon TRNG qualities which cannot be ASSUMED
and also cannot be PROVEN.  

This is not to say that an OTP cannot be secure in practice.  I am
sure it can.  But I am fairly sure that it cannot be PROVEN for a real
system.  


>>If we had this, we
>>could quantitatively select the best cipher, or the best TRNG, or the
>>best PRNG, or even reliably distinguish between TRNG and PRNG.  This
>>seems very much at odds with the cryptography I know.  
>
>We *can* select the best cipher - the OTP. 

It is false reasoning to concentrate on potential strength to the
exclusion of operational problems.  If the OTP really was "the best"
cipher, everyone would be using it.  They aren't.  This is not because
they have never heard of it.  This is because most people consider the
requirements for secure OTP use to be essentially impractical.  I
think they are right.  


>And we can select the best
>TRNG - a radioactive decay TRNG. 

I am aware of no PROOF that some other TRNG cannot be as good.  In
fact, if we *have* a TRNG, presumably it *must* be as good as any
other, unless we are to start measuring (and defining!) the quality of
each TRNG.  Without such quality specifications, your claim might
imply that no other source of randomness can possibly be called a
TRNG.

There are sources of quantum randomness other than radioactivity.  And
the use of radioactivity as the random source implies substantial
negative baggage:  This includes non-zero danger from the source,
cost, size, and detector qualities.  Other sources may not carry this
much negative baggage.  Note that this is the same sort of issue we
have with the OTP itself.

But if we are using radioactivity, and wish to show that we are
measuring that and not something else, we should have some way of
"turning it off," so we can see the result of the processing section
absent stimulation.  We would like to do this automatically, so we can
check frequently, and that probably means having a moving mechanical
assembly to block the radioactive source from the detector.  But if we
use diode noise, we can just turn it off, or turn it down, or turn it
up, and each time measure the results.  So in this case we can be
quite sure of the source of noise, and the capabilities of our
detector.  
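The "turn it off and measure" check above can be sketched in simulation
(hypothetical Python; `random.gauss` readings stand in for real detector
hardware, and the numbers are invented for illustration):

```python
# Hypothetical sketch of the source on/off check: with the physical
# source blocked, the processing section should show only baseline
# electronics noise.  Simulated readings stand in for hardware.
import random
random.seed(1)

def detector(source_on: bool, n: int = 10_000) -> list[float]:
    """Toy detector: small baseline noise, plus wideband noise only
    when the physical source is enabled."""
    base = [random.gauss(0.0, 0.01) for _ in range(n)]
    if not source_on:
        return base
    return [b + random.gauss(0.0, 1.0) for b in base]

def sample_variance(xs: list[float]) -> float:
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

on = sample_variance(detector(source_on=True))
off = sample_variance(detector(source_on=False))
# If "on" is not clearly larger than "off", we are measuring
# something other than the source.
```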


>Both are proveably secure when
>implemented properly. 

There is no such proof, either for the OTP, or a radioactive TRNG.  


>It is true that they cannot be implemented
>*perfectly* but they can be implemented properly to an arbitrarily
>small degree of imperfection.

This may or may not be true, but in any case, how would one PROVE that
we know about *all* of the imperfections, and then PROVE that none of
them could be exploited?  

It is easy to PROVE the theoretical construct.  But once we pass from
theoretical perfection into practical reality we gain overwhelming
requirements for test and measurement which all must be met before we
can claim PROOF.  

Certainly we can measure closeness with respect to any particular
test, but we cannot hope to run all possible tests.  So we cannot
prove the machine does what we would like to think it should.

Can we be 95% sure about any particular test?  Yes.  Can we be 95%
sure that we have run all possible tests?  Only at small lengths where
we can enumerate all possible combinations.  

Can we be 95% sure that a particular TRNG produces greater than, say,
50% entropy for every particular byte?  I don't know; maybe.  But I
sure don't know how we PROVE future operation based on past data.  

So if we are 95% sure that we have such a system, we could restrict
the plaintext to 4 bits per (mixed) byte, and have a 95% surely secure
OTP.  Which is not PROOF, of course.  
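The 4-bits-per-mixed-byte restriction might look like the following (an
illustrative sketch, not Ritter's actual construction; `os.urandom` stands
in for the TRNG output): each plaintext byte is spread across two keystream
bytes, so only 4 plaintext bits ride on each 8-bit mask.

```python
# Sketch of the rate-halving idea: split each plaintext byte into two
# nibbles, one nibble per keystream byte, before the OTP combining.
import os

def expand(plaintext: bytes) -> bytes:
    """One nibble of plaintext per output byte (high nibble first)."""
    out = bytearray()
    for b in plaintext:
        out.append(b >> 4)
        out.append(b & 0x0F)
    return bytes(out)

def compress(nibbles: bytes) -> bytes:
    """Inverse of expand: rejoin nibble pairs into bytes."""
    it = iter(nibbles)
    return bytes((hi << 4) | lo for hi, lo in zip(it, it))

def otp_xor(data: bytes, keystream: bytes) -> bytes:
    return bytes(d ^ k for d, k in zip(data, keystream))

msg = b"attack at dawn"
ks = os.urandom(2 * len(msg))   # stand-in for the TRNG output
ct = otp_xor(expand(msg), ks)   # ciphertext is twice the message length
```

The cost is a doubled keystream, which is the usual OTP trade: more key
material in exchange for a larger margin against imperfect entropy.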


>It would seem that in principle we could quantify the security of a
>stream cipher by subjecting it to systematic attempts at decryption
>with carefully selected test messages.

Well, it might *seem* that in principle someone else *must* have such
capabilities, if you aren't ready to do this yourself.

I think you ascribe capabilities to cryptanalysts which they do not
have.  I suggest that there is no magic here, and that if such
techniques exist, we should be able to learn them and use them.  We
don't need to depend on the imaginary experts to solve the problem and
thus provide our "almost proof" for us.  

Once we get down to actually trying to do this, I suspect the size of
the problem and our limitations will make themselves clear.  I suggest
that the goal is simply not do-able.  But you start, and tell me how
it goes.  


>Notice that such a characterization is not being performed on the
>random keystream - we know there is no way to characterize the
>security of a random keystream itself. Using the keystream to build
>test ciphers and then try to decipher them is a fundamentally
>different methodology.
>
>BTW, when one talks about "leaking information" there is the
>implication that a certain quantity of information has been leaked,
>and that implies that the quantity of leakage is a measure of the
>insecurity of the cipher. IOW, if half the information has leaked,
>then the cipher is only 50% secure, etc.

OK, you start measuring and calculating.  Start with experimental
PRNG's which can be controlled.  Let me know.  

---
Terry Ritter   [EMAIL PROTECTED]   http://www.io.com/~ritter/
Crypto Glossary   http://www.io.com/~ritter/GLOSSARY.HTM


------------------------------

From: [EMAIL PROTECTED] (Lowball61)
Subject: Re: A New Public-Key Cryptosystem
Date: 25 Feb 1999 23:20:30 GMT

>I remember trying to make a public-key cipher out of the Hill cipher,
>but my attempt didn't work, because matrices can be diagonalized.
>

Unlike the Hill cipher, the New Public-key Cryptosystem is based on a
rectangular matrix. One of the key questions about such a system is: can a
rectangular matrix (longer than it is wide) be diagonalized? And, if so, at what
work factor?

I tried a brute-force attempt at diagonalizing a 24x12 binary matrix. The
number of operations ran well into the trillions, and still did not put the
"diagonalized" matrix in a form that was useful.
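For comparison, ordinary Gaussian elimination over GF(2) puts a binary
matrix into (reduced) row-echelon form in on the order of rows x cols^2
bit operations, nowhere near trillions for 24x12, though, as noted, a
rectangular matrix still has no true inverse. A hedged sketch (illustrative
Python, not a statement about the cryptosystem under discussion):

```python
# Illustrative sketch: row-reduction of a binary matrix over GF(2).
# Each row is an int whose bits are the GF(2) entries (bit 0 = col 0).
def rref_gf2(rows: list[int], ncols: int) -> list[int]:
    """Reduced row-echelon form over GF(2), via Gaussian elimination."""
    rows = rows[:]
    r = 0
    for c in range(ncols):
        # find a pivot row with a 1 in column c
        pivot = next((i for i in range(r, len(rows)) if rows[i] >> c & 1), None)
        if pivot is None:
            continue
        rows[r], rows[pivot] = rows[pivot], rows[r]
        for i in range(len(rows)):
            if i != r and rows[i] >> c & 1:
                rows[i] ^= rows[r]      # clear column c in every other row
        r += 1
    return rows

# e.g. the 2x2 system [11, 01] reduces to the identity [01, 10]
```

What elimination cannot do is conjure the missing columns: for a tall
matrix the best one gets is an echelon form plus zero rows, which is why
only an inverse-like (pseudoinverse-style) operation is available.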

How can a rectangular matrix be put in a diagonalized form that is useful in the
sense that somehow an inverse-like operation can be performed?
Charles Larry Craig

------------------------------

From: [EMAIL PROTECTED] (R. Knauer)
Subject: Re: Define Randomness
Date: Thu, 25 Feb 1999 14:48:59 GMT
Reply-To: [EMAIL PROTECTED]

On 24 Feb 1999 14:43:53 -0500, [EMAIL PROTECTED] (Patrick Juola)
wrote:

>>Therefore in order for a TRNG to be *capable* of outputting a normal
>>number, it cannot have any bias in its implementation.

>This is simultaneously both true and false, depending on how you look
>at it.

Omigod! We're into Godel Territory now!

Any minute I expect the Centurion to start babbling about being a
liar.

>My finite K-sided golf ball is *capable* of outputting a normal number;
>it just does so, in the infinite case, with probability zero.

What does it mean for something to exist with probability zero?

>But yes, in general, a uniform generator and only a uniform generator
>will output a normal number.

I believe that the notions of "uniform" and "normal" are essentially
the same thing. I cannot imagine a number that possesses one of those
characteristics not possessing the other. For example, what is a
non-uniform normal number, or a uniform non-normal number? I suppose
one could fabricate some weasel-worded contortions to get around that,
but then they would not be using conventional meanings for those two
words.
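For the record, a uniform non-normal number can be exhibited without any
weasel words: the binary expansion 010101... has exactly uniform
single-digit frequencies, yet it is not normal, since normality requires
every k-bit block to appear with frequency 2^-k and the block "11" never
occurs. A quick sketch (illustrative Python over a finite prefix):

```python
# Sketch of the uniform-but-not-normal distinction, checked on a
# finite prefix of the binary expansion 0.010101...
from collections import Counter

def block_freqs(s: str, k: int) -> Counter:
    """Counts of every length-k block in s (overlapping windows)."""
    return Counter(s[i:i + k] for i in range(len(s) - k + 1))

s = "01" * 5000          # uniform at the single-digit level...
f2 = block_freqs(s, 2)   # ...but the blocks "11" and "00" never appear
```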

>Problem : Li & Vitanyi state (p. 158, p. 3) that "it is universally
>agreed that a random infinite sequence must be normal."  Well, bluntly,
>it ain't so, guys.  There's lots of random infinite sequences that
>aren't normal.  This paragraph counts as one that, most kindly, they
>oversimplified.

Hmm... I did not expect Li & Vitanyi to publish falsehoods. I am truly
crushed.

Maybe they have a different notion of randomness than you have. We all
know that the term "random" has been heavily abused in both math and
crypto.

Maybe their definition in terms of prefix complexity forces random
numbers to be normal. After all, they did say that the reason a random
number must be normal is that if it isn't then there is a regularity
about it that can be exploited to reproduce it with a program shorter
than itself - which means it is not Kolmogorov random.

But then we know that Kolmogorov randomness is not suitable for
crypto. There are numbers that fail Kolmogorov randomness which
otherwise are quite suitable for crypto-grade keystreams.

I have come to the conclusion that most of the debate over the concept
of randomness arises because people are discussing different kinds of
randomness which are fundamentally different from one another.

Bob Knauer

"Democracy is the theory that the common people know what they
want, and deserve to get it good and hard."
--H.L. Mencken


------------------------------

From: [EMAIL PROTECTED] (Terry Ritter)
Subject: Re: Define Randomness
Date: Fri, 26 Feb 1999 01:11:36 GMT


Just a comment...

On Thu, 25 Feb 1999 22:05:22 GMT, in
<[EMAIL PROTECTED]>, in sci.crypt
[EMAIL PROTECTED] (John Savard) wrote:

>[...]
>Rolling dice, or picking lottery balls - those are classical physical
>processes. If they're sufficient to produce what is accepted as "true
>randomness", then, why wouldn't an _equally detailed_ computer
>simulation be acceptable?

Any computer has only finite resolution for objects, positions,
velocities and flaws.  At the level of discussion, Nature is far
different.  There are no 64-bit reals in Nature.  And every Natural
impact picks up information from a level we do not and can not
represent.  This multiplies through other interactions, so there is a
continuing influx of quantum-level randomness.  This is also why
Natural chaos is fundamentally different from computed chaos.  

---
Terry Ritter   [EMAIL PROTECTED]   http://www.io.com/~ritter/
Crypto Glossary   http://www.io.com/~ritter/GLOSSARY.HTM


------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list (and sci.crypt) via:

    Internet: [EMAIL PROTECTED]

End of Cryptography-Digest Digest
******************************
