Cryptography-Digest Digest #39, Volume #9         Fri, 5 Feb 99 12:13:03 EST

Contents:
  Re: Metaphysics Of Randomness (R. Knauer)
  Re: Metaphysics Of Randomness (R. Knauer)
  Re: On a Method of Session Key Generation (revised) ("Trevor Jackson, III")
  Re: Threat Models: When You Can't Use a One-Time Pad ([EMAIL PROTECTED])
  Re: Threat Models: When You Can't Use a One-Time Pad ("Trevor Jackson, III")
  Re: Metaphysics Of Randomness ("Tony T. Warnock")
  Re: Random numbers generator and Pentium III (John Briggs)
  Re: *** Where Does The Randomness Come From ?!? *** ("Tony T. Warnock")
  Re: RSA implementation (Doug Stell)

----------------------------------------------------------------------------

From: [EMAIL PROTECTED] (R. Knauer)
Subject: Re: Metaphysics Of Randomness
Date: Fri, 05 Feb 1999 13:00:58 GMT
Reply-To: [EMAIL PROTECTED]

On 3 Feb 1999 09:23:12 -0500, [EMAIL PROTECTED] (Patrick Juola)
wrote:

>>I'm going to remind you just one more time
>>that dividing random strings into True Random Numbers and Pseudo Random
>>Numbers creates a distinction without a difference since no test exists to
>>distinguish them.

>It's not a distinction without a difference.  You're just doing the
>wrong test.  There's a test to distinguish Chryslers from Fords.
>But it's not a test you're going to perform by measuring the gas mileage.

If the crypto-grade randomness of a given set of numbers is based on
the generation process and not the numbers themselves, then how do you
propose to test those numbers themselves to show that the generation
process is crypto-grade secure? IOW, since any given set of numbers
could have been produced by a PRNG or by a TRNG, how is testing the
numbers themselves going to tell you which RNG produced them?

I realize that you can test them and obtain a certain level of
confidence regarding the generation process, but that assumes that you
have tested a very large number of such sets. There is always the
possibility that a set of numbers from a poor-security PRNG passed the
tests and another set from a crypto-secure TRNG failed the tests even
if the level of confidence is high.

How many tests does it take to get the level of confidence required
for crypto-grade security, either provably secure or secure in terms
of work effort? After all, only infinitely long numbers *must* possess
statistical properties of randomness, such as no bias.

Your level of confidence is still a probabilistic measure which only
tells you what is *expected* to happen if you carried the tests out to
infinity. As you know from Bayesian inference, the inference process
can shift from one hypothesis to another as the amount of test
data increases. How do you know that you did not stop your tests short
of learning something better about the presumed randomness of the
generator? How do you set the limits properly to know with confidence
that a generator cannot be broken cryptanalytically?

Perfectly secure generators can produce numbers which do not pass bias
tests, and perfectly insecure generators can produce numbers which do
pass bias tests. Sure, in the limit of very large numbers these tests
converge on an absolute measure of randomness in terms of bias. But
how do you perform such a large amount of testing in a practical
situation?
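For concreteness, the kind of bias test under discussion can be sketched as a simple monobit (frequency) test; the 3-standard-deviation threshold below is an illustrative choice, not a standard:

```python
import math

def monobit_bias_test(bits, z_max=3.0):
    """Frequency test: is the count of 1-bits within z_max standard
    deviations of the expected n/2 for an unbiased source?"""
    n = len(bits)
    ones = sum(bits)
    # Under the unbiased hypothesis, ones ~ Binomial(n, 0.5):
    # mean n/2, standard deviation sqrt(n)/2.
    z = abs(ones - n / 2) / (math.sqrt(n) / 2)
    return z <= z_max

# A heavily biased stream fails, while a perfectly periodic (and
# cryptographically worthless) stream passes -- illustrating the
# point that such tests give only statistical confidence.
print(monobit_bias_test([1] * 1000))    # biased: fails
print(monobit_bias_test([0, 1] * 500))  # periodic but balanced: passes
```

Note that the passing example is exactly the 0101... pattern debated elsewhere in this thread: bias tests measure frequency, not generation process.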

All we hear in this regard is that there are these wonderful
statistical tests out there that anyone can use easily that perform
all this wonderful mathematical magic, but when challenged to produce
these magical tests and defend them as being so wonderful, we never
see any presented.

That smacks of Snake Oil.

Bob Knauer

"Sometimes it is said that man cannot be trusted with the government
of himself.  Can he, then, be trusted with the government of others?"
--Thomas Jefferson


------------------------------

From: [EMAIL PROTECTED] (R. Knauer)
Subject: Re: Metaphysics Of Randomness
Date: Fri, 05 Feb 1999 13:14:53 GMT
Reply-To: [EMAIL PROTECTED]

On 3 Feb 1999 12:13:08 -0500, [EMAIL PROTECTED] (Patrick Juola)
wrote:

>Formally, the definition should read :

>i) For all finite lengths N, a TRNG should be capable of outputting
>all possible N-bit sequences equiprobably.
>(the second clause, beginning "a TRNG" is implicitly universally
>quantified -- you can rewrite it as "all TRNGs" if you like.)

>Instead, you're interpreting it as

>ii) For any TRNG, there exists a finite length N such that the TRNG
>is capable of outputting all possible N-bit sequences equiprobably.

I am not interpreting it like that.

>First, i) implies ii), as you can easily verify.  However, ii) does not
>necessarily imply i).  The first interpretation  is the stronger
>condition and the necessary one.

I do not recall ever saying that the definition was limited to any
particular value of N.

But to satisfy the formalists among us, let's come up with a clearer
statement:

+++++
A TRNG must be capable of producing all possible sequences of finite
length equiprobably.
+++++

>>Who ever said that an actual string was "allowed" or not. Re-read the
>>specification for the TRNG - it applies to the generation process, not
>>any actual string.

>Sorry, you are correct.  Rephrasing : A generator that only generates the
>periodic sequence 010101010101010101010... is NOT a TRNG in the sense
>defined above.

I never said it was. The sequence you just gave is infinitely long,
and an infinite sequence cannot be random while containing such obvious order.

>It's fairly easy to construct an algorithmic generator such that
>condition ii) holds -- or even such that condition ii) holds for
>all strings less than a given (finite) length.

You have keyed on the word "given" in a way I never intended it. But
for the purposes of formal clarity let's see if the definition above
meets with your approval - which I repeat here:

+++++
A TRNG must be capable of producing all possible sequences of finite
length equiprobably.
+++++

Bob Knauer

"Sometimes it is said that man cannot be trusted with the government
of himself.  Can he, then, be trusted with the government of others?"
--Thomas Jefferson


------------------------------

Date: Fri, 05 Feb 1999 08:44:27 -0500
From: "Trevor Jackson, III" <[EMAIL PROTECTED]>
Subject: Re: On a Method of Session Key Generation (revised)

R. Knauer wrote:

> On Thu, 04 Feb 1999 18:35:48 GMT, [EMAIL PROTECTED] (Terry Ritter) wrote:
>
> >>2) There cannot be any reasons, since correlation cannot be removed
> >>algorithmically. Once the cryptanalyist discovers your method you are
> >>no better off than before you used it.
>
> >This is clearly false when the algorithm does not retain all of the
> >input information.
>
> Nothing in crypto is "clearly" anything, especially when it comes to
> discussions of crypto-grade randomness.
>
> Which part of the paragraph above are you objecting to, the first
> sentence or the second sentence?
>
> Do you agree that algorithms do not produce randomness, that is, the
> output of an algorithm is no more random than the input?

The first sentence above is false.  The answer to the question immediately
above is "yes".  There is no conflict here because there are two classes of
algorithms to consider.

An algorithm that folds, spindles, and mutilates a data set, but leaves the
data set size unchanged cannot increase the total entropy in the data.  It
can decrease the total entropy in the data.  An algorithm that manipulates
a data set into a smaller size cannot increase the total entropy in the
data.  It can decrease the total entropy in the data.  However, it can
increase the entropy per bit of data to the ceiling of 100%, depending on
the ratio of initial/final data set size.

This last capability is one of the useful properties of hashes.  They do
not create new entropy, but they distill the existing entropy of a data set
down to a smaller size and thus higher density.

I believe there to be two logical steps in this kind of process.  Most
hashes perform both steps simultaneously, but separating them may simplify
analysis.  The first logical step is to fold, spindle, and mutilate the
input data in such a way as to represent the input data in a form where
each bit of the representation data depends equally on each bit of the
input data in sequence. The final qualifier, "in sequence," is necessary to
eliminate trivial manipulations that are sensitive to any change in the
input values, but insensitive to any change in the order of the input.

I'm sure the step described above can be formalized more rigorously, but
let's take the data representation loosely described above as containing
all of the entropy present in the input data.  Please note that this can be
verified by the fact that, in principle, step one can be reversed.   Thus
both the order and the disorder available in the input data can be
preserved.  Step one can be lossless.

The second logical step is to shrink down the data representation while
preserving the entropy it contains.  A sampling method is required.
Something as simple as folding the excess data into a smaller
representation with XOR should suffice.  Alternatively, one could simply
take the first data elements, discarding the remainder.  I suggest that the
subsample obtained this way is an adequate representation of the disorder
present at the beginning of step two.  Please note that, by definition,
step two is non-invertible (lossy).
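A toy sketch of the two logical steps (the running-XOR mix and the 16-byte output size are illustrative assumptions, not a proposal for real use):

```python
import hashlib

def step_one_mix(data: bytes) -> bytes:
    """Step one: a lossless, order-sensitive re-representation.
    A running XOR chain is invertible (data[i] = out[i] ^ out[i-1])
    and changes when the input order changes -- a toy stand-in for
    the mixing a real hash performs."""
    out, acc = bytearray(), 0
    for b in data:
        acc ^= b
        out.append(acc)
    return bytes(out)

def step_two_fold(rep: bytes, out_len: int = 16) -> bytes:
    """Step two: a lossy shrink that XOR-folds the excess into a
    smaller buffer, raising entropy per bit without creating any."""
    out = bytearray(out_len)
    for i, b in enumerate(rep):
        out[i % out_len] ^= b
    return bytes(out)

# In practice a hash such as SHA-256 performs both steps at once:
text = b"some low-entropy English text, repeated, repeated, repeated"
print(step_two_fold(step_one_mix(text)).hex())
print(hashlib.sha256(text).digest()[:16].hex())
```

The separation makes the entropy argument visible: step one can be undone (nothing is lost), while step two discards data and with it the input's correlations.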

The critical difference between the output of step one and that of step two
is this:  The output of step one contains all of the correlations present
in the original input data.  Thus, were the input data text, one would not
have a suitable RNG at the completion of any possible step one.  The output
of step two does not contain the correlations present in the input data.
Thus, no matter how poor the input data is (barring zero entropy which, by
the definition of entropy as the output of a logarithmic function, cannot
exist) the entropy in it can be represented in a form suitable for use as
an RNG.


------------------------------

From: [EMAIL PROTECTED]
Subject: Re: Threat Models: When You Can't Use a One-Time Pad
Date: Fri, 05 Feb 1999 13:54:48 GMT

In article <79ed8f$cf2$[EMAIL PROTECTED]>,
  [EMAIL PROTECTED] wrote:
> Case 1
> Unencrypted OTP
> If there is no covert physical access, just ensure you destroy bits of OTP you
> use, or overwrite used bits with new random ones (preferable - make it less
> obvious). Once it's gone people cannot decrypt the old messages.
>
> Case 2 Encrypted OTP. If you encrypt an OTP with, say, 56-bit DES, an
> attacker will have to do a brute-force search of the 56-bit DES keyspace,
> XOR every result with a sample encrypted message, and slide it bit by bit
> until the suspects turn up. If the encrypted OTP is replaced as it is used,
> this could be very difficult if the current starting point is unknown.
>
> I know I've asked about PCBC before. I've got to check out the attack on PCBC
> mentioned to see if it will work if the "plaintext" is an unknown random
> string which is to be XOR'ed with a known encrypted string at an unknown
> point.
>
> Link.
> p.s. Yo Scott.. Here's where you do your advertising bit :).
>

  Sorry, I have not been following your thread, and I would not normally
answer more questions about OTPs, since the subject is a never-ending
tale. And even I get tired of it. But PCBC chaining, especially
3 or more passes of "wrapped PCBC," is basically what is used to
provide the "all or nothing encryption" properties of the famous
scottNu encryption methods. I consider the bit shifting at each
pass part of this. In my codes I am using every possible single-
cycle S table of a given size. For example, in scott19u I use a 19-by-19-
bit lookup table. I seriously doubt that even the NSA, which has the
largest historical references on encryption, has seen anyone use
a lookup table of this exact size, since it was not possible to
do this in a timely manner even a few years ago. But you
could use this multipass chaining method with proven old block
ciphers like DES. Since 56 bits for a key is kind of ridiculous,
and since one should use at least 3 passes anyway, why not use a
different key for each pass? That way one can have a very simple 3DES
with "wrapped PCBC" that has 168 bits of key space. This should even
give people like Mr. B.S. a feeling of security.


David Scott

P.S. Or get the real thing with source code. Is this how you
wanted me to plug it? Oh, by the way, it is free, but you need
to know C if you want to modify it or compile it yourself.


http://cryptography.org/cgi-bin/crypto.cgi/Misc/scott19u.zip
http://members.xoom.com/ecil/index.htm

============= Posted via Deja News, The Discussion Network ============
http://www.dejanews.com/       Search, Read, Discuss, or Start Your Own    

------------------------------

Date: Fri, 05 Feb 1999 09:08:06 -0500
From: "Trevor Jackson, III" <[EMAIL PROTECTED]>
Subject: Re: Threat Models: When You Can't Use a One-Time Pad

[EMAIL PROTECTED] wrote:

> Trevor Jackson, III ([EMAIL PROTECTED]) wrote:
> : The second issue is that the use of lesser security to protect the pad may
> : be a bit stronger than that same security used to protect the plaintext.
> : I am unable to formulate this with the rigor I desire, but, FWIW, I'll
> : sketch the fundamental concept.  The use of an OTP as an intermediate
> : cipher enlarges the unicity distance of the system past the length of the
> : message.
>
> In practice, you probably would be right. A very slight bit of additional
> security is provided, since an attacker would have to match the correct
> one-time-pad sheet to the correct encrypted file.
>
> But assuming no additional means of _encryption_ is employed, I assume
> that the encrypted file is stored with an unencrypted pad number.
>
> Now, if we encrypt the one-time-pad with a symmetric algorithm, and do the
> same thing to the enciphered message, when solving both ciphers, we're
> working back to pure random text, and so we have to look at correlations
> between two plaintexts which have no redundancy individually. Is this
> harder than cracking the two ciphers, applied one after the other, to the
> plaintext?
>
> I'd tend to think it might be, but when I mentioned this, I've heard a
> contrary opinion voiced.

Well, assuming separate keys we've doubled the work, creating a natural form of
MITM attack.  Assuming the same key I think it reduces to the case described
below.

>
>
> If one of the two files - the pad or the ciphertext - is encrypted by an
> additional method, then the improvement in security, if any, is much
> smaller, in my opinion.
>
> But by 'more secure', in *either* case, I mean the work factor is
> (possibly) greater. The _unicity_ distance is not affected, since a
> brute-force search for plaintext will require trying exactly the same
> number of keys, and verifying plaintext will only take a trivial step.

Yes.  My comments were not correct.  I was concentrating on the intermediate
files which is invalid because the analyst can simply complete the, as you
state, trivial operation to reveal the inner plaintext.


------------------------------

From: "Tony T. Warnock" <[EMAIL PROTECTED]>
Subject: Re: Metaphysics Of Randomness
Date: Fri, 05 Feb 1999 08:12:26 -0700
Reply-To: [EMAIL PROTECTED]

R. Knauer wrote:

> You cannot require the TRNG to generate the sequences, only be capable
> of generating the sequences. Some sequences that are possible may
> never be generated in a million years.
>
> In fact, forcing the TRNG to actually generate all possible sequences
> destroys its indeterminate nature. The whole point behind the TRNG,
> which gives it its provable security, is that no one can ever know how
> it is actually going to behave.

The temporal behavior is an important part of any RNG. The probability of
a subsequence must not be a function of how far along the main sequence
you have gone. My example of Champernowne's number shows that a
computation that provably generates all k-bit sequences with the proper
frequency (1/2^k) cannot be random because some million bit subsequences
cannot be generated in the first million bits.
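Champernowne's number in base 2 (the concatenation 1 10 11 100 101 ...) makes the point concrete; the 100,000-bit prefix below is an arbitrary illustrative length:

```python
from itertools import count

def champernowne_bits(n):
    """First n bits of the binary Champernowne sequence:
    the concatenation of 1, 10, 11, 100, 101, 110, 111, ..."""
    bits = []
    for k in count(1):
        bits.extend(int(c) for c in bin(k)[2:])
        if len(bits) >= n:
            return bits[:n]

def longest_zero_run(bits):
    """Length of the longest run of consecutive 0-bits."""
    best = cur = 0
    for b in bits:
        cur = cur + 1 if b == 0 else 0
        best = max(best, cur)
    return best

bits = champernowne_bits(100_000)
# Asymptotically every k-bit pattern occurs with frequency 1/2^k,
# yet the sequence is fully predictable -- and, as noted above, the
# first n bits can never be a run of n zeros:
print(longest_zero_run(bits))
```

Every binary representation starts with a 1, so zero runs stay short (roughly the bit length of the largest integer reached) even though all patterns eventually appear with the right frequency.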

Tony


------------------------------

From: [EMAIL PROTECTED] (John Briggs)
Subject: Re: Random numbers generator and Pentium III
Date: 5 Feb 1999 15:43:43 GMT
Reply-To: [EMAIL PROTECTED]

In article <[EMAIL PROTECTED]>, [EMAIL PROTECTED] (R. Knauer) 
writes:
>On 3 Feb 1999 12:00:44 -0500, [EMAIL PROTECTED] (Patrick Juola)
>wrote:
>
>>>Anyway, all numbers produced by a properly functioning TRNG are valid
>>>output, so the number you just alluded to is a valid output, yet you
>>>would have shut the TRNG down if it had occurred.
>
>>Yes, but the chances of that happening are smaller than the chance
>>of the generator being hit by lightning.
>
>Not if it is caused by a shorted output.

Then it's not a properly functioning TRNG, is it?

        John Briggs                     [EMAIL PROTECTED]

------------------------------

From: "Tony T. Warnock" <[EMAIL PROTECTED]>
Crossposted-To: sci.philosophy.meta,sci.physics,sci.physics.relativity,sci.skeptic
Subject: Re: *** Where Does The Randomness Come From ?!? ***
Date: Fri, 05 Feb 1999 08:13:41 -0700
Reply-To: [EMAIL PROTECTED]

Along these lines, it is amusing to design a Turing machine that will simulate
the EPR paradox.

Tony


------------------------------

From: [EMAIL PROTECTED] (Doug Stell)
Subject: Re: RSA implementation
Date: Fri, 05 Feb 1999 15:30:00 GMT

On Fri, 05 Feb 1999 11:53:44 +0100, Florent Dudan
<[EMAIL PROTECTED]> wrote:

>> If it always gives the same sequence of numbers, then it is not
>> suitable for cryptography. This type of PRNG may give you statistical
>> randomness. However, a cryptographically strong PRNG must make it
>> impossible to predict the next number from all of the previous numbers
>> it has generated or, as mentioned below, anything else.

What you describe is neither random nor pseudo-random, as understood
by cryptographers. It may be PR from a statistical standpoint, but that
is useless cryptographically.

>I know it was not a very good idea but I really cannot use the Smalltalk
>random generator, it is given in every implementation of this language.
>By the way you must remember that everyone using your application in
>Smalltalk may have access to your code !!!

We generally assume this is true with any software that is not
physically protected.

>So it is first based on trust, that an owner would not distribute the
>random generator....

...and that only the good guys have a copy of the application.

>> You need to seed your PRNG with a lot more entropy than time to the
>> nearest millisecond. As pointed out, this mistake has been made before
>> and the results were disastrous and embarrassing.
>
>So what do you propose as a good random generator ? 
>Do you have source codes to propose (by the way my project won't be
>commercialized !). 

There has been a lot of discussion about true random number
generation, which usually relies on hardware generators.

There has also been a lot of talk and literature on pseudo-random
number generators and where/how to obtain the entropy to seed them.
The standard PRNG is built around a pair of hashes.

Here's another PRNG, built around the discrete log problem.
P1 and P2 are large primes or could be the same prime
G1 and G2 are generators in Z*P1 and Z*P2 respectively
X0 is the initial seed

Inner loop, a free running random permuter of itself

        Xn = G1^(Xn-1) mod P1

Outer loop, a permutation on the output of the inner loop

        Rn = G2^Xn mod P2

I suggest using only about half of Rn and discarding the other half.

Entropy can be added as time goes on by hashing it and XORing it with
the current Xn.

>Remember also that in Smalltalk you cannot based your random generator
>on how the compiler (as in C) does make roundings of numbers (i.e.
>9.87566 to 10) !!

I wouldn't consider using floating-point math for cryptographic
purposes.

>> I'm not sure what you are asking here, but I can guess.
>
>You guess it more or less well...
>
>> I guess that you are breaking a long plaintext into blocks that will
>> be RSA-encrypted separately. We generally avoid doing that, because
>> the algorithm is slow. Instead, we use conventional encryption for the
>> message with a randomly chosen key and RSA-encrypt only a single
>> block containing the conventional/symmetric key.
>
>In fact I do not really divide the text into groups; I pack with binary
>shifts all the first ASCII codes into a LargeInteger, testing each time
>I add an ASCII code that the LargeInteger isn't > n. Then, when I cannot
>add any more ASCII codes to the number, I compute its modular
>exponentiation...

Since I guessed correctly, I also guess that it is SLOW. This method
is not generally used, due to its low speed. We normally use faster
encryption and limit our RSA operations to a single block.
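The packing just described amounts to the following greedy scheme (names are illustrative); note that the resulting blocks have variable lengths, which is precisely the decoding problem raised next:

```python
def pack_blocks(message: bytes, n: int):
    """Greedily shift ASCII codes into a growing integer while it
    stays below the RSA modulus n; when the next code would push it
    past n, close the block and start a new one."""
    blocks, cur = [], 0
    for byte in message:
        candidate = (cur << 8) | byte
        if candidate >= n:
            blocks.append(cur)
            candidate = byte
        cur = candidate
    if cur:
        blocks.append(cur)
    return blocks

# Each block would then be RSA-encrypted separately (slow, as noted).
print(pack_blocks(b"AB", n=1 << 20))  # one block: 0x4142 = 16706
print(pack_blocks(b"AB", n=100))      # two blocks: [65, 66]
```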

>Then I take the following ASCII codes and so on until the end... So
>the exact number of characters in a group is never fixed...
>
>The problem is that it gives me (after encryption) LargeIntegers of
>different sizes, and I don't know how I can append them to form a unique
>sequence of numbers and thereafter subdivide this sequence to recover
>the LargeIntegers that composed it.

It sounds like your worry is really about the leading zeros that may
come out of the RSA calculation and the ability to identify the block
boundaries in the output. In that case, you can either

1. keep the zeros so that all blocks are the same length, or
2. encode the length of each block, such as with ASN.1.
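Option 1 can be sketched as follows (the helper names are illustrative; option 2 would replace the fixed width with an explicit length prefix such as ASN.1 DER):

```python
def encode_fixed(blocks, n: int) -> bytes:
    """Option 1: pad every block to the byte width of the modulus n,
    so leading zeros survive concatenation and boundaries are implicit."""
    width = (n.bit_length() + 7) // 8
    return b"".join(b.to_bytes(width, "big") for b in blocks)

def decode_fixed(data: bytes, n: int):
    """Split the concatenation back into integers on fixed boundaries."""
    width = (n.bit_length() + 7) // 8
    return [int.from_bytes(data[i:i + width], "big")
            for i in range(0, len(data), width)]

n = (1 << 61) - 1             # stand-in modulus for the sketch
blocks = [5, 123456789, 1]    # includes "short" values with leading zeros
assert decode_fixed(encode_fixed(blocks, n), n) == blocks
```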

doug


------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list (and sci.crypt) via:

    Internet: [EMAIL PROTECTED]

End of Cryptography-Digest Digest
******************************
