Cryptography-Digest Digest #369, Volume #9 Sat, 10 Apr 99 19:13:04 EDT
Contents:
Re: Not a PGP Expert (Scott Fluhrer)
Re: Test vector repository--specifically, help with a broken Blowfish (Boris Kazak)
Re: True Randomness & The Law Of Large Numbers (Terry Ritter)
Re: True Randomness & The Law Of Large Numbers (Herman Rubin)
Re: True Randomness & The Law Of Large Numbers (R. Knauer)
Re: True Randomness & The Law Of Large Numbers (R. Knauer)
Re: True Randomness & The Law Of Large Numbers (R. Knauer)
Re: True Randomness & The Law Of Large Numbers (R. Knauer)
----------------------------------------------------------------------------
From: Scott Fluhrer <[EMAIL PROTECTED]>
Subject: Re: Not a PGP Expert
Date: Sat, 10 Apr 1999 18:39:18 GMT
In article <[EMAIL PROTECTED]>,
Anthony Stephen Szopa <[EMAIL PROTECTED]> wrote:
>Not a PGP Expert
>
>Say someone publishes his/her PGP public key and 1,000 people use it and
>send him/her messages over the Internet using PGP and this public key.
>
>Now suppose the NSA has this person's Internet line bugged. The NSA
>also has the public key. Can the encrypted messages from these 1,000
>users of this public key have their messages decrypted by the NSA (or
>anyone else, for that matter) since they have the public key and the PGP
>software, too?
If those 1,000 messages make PGP insecure, then PGP is already insecure.
After all, if the NSA finds it useful, it could generate 1,000 (or a
trillion) messages to that same public key, without bothering to bug any
lines.
In general, any public key encryption algorithm must be proof against
a chosen plaintext attack. After all, the attacker can encrypt vast
mounds of plaintext of his own choosing.
For that matter, even a secret key encryption algorithm that could be
broken by an attacker with access to 1,000 messages would, by modern
standards, be considered hopelessly flawed.
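To make the chosen-plaintext point concrete, here is a toy sketch:
textbook RSA with a deliberately tiny modulus and no padding. The
numbers and names are illustrative assumptions, not PGP's actual
parameters, but the shape of the attack is exactly why real systems
randomize the session key before public-key encryption.

/* Toy guess-and-verify attack on a deterministic public-key cipher.
   Textbook RSA with a tiny modulus -- NOT how PGP works internally,
   but it shows what an attacker can do with only the public key.   */
#include <stdio.h>

static unsigned long modexp(unsigned long b, unsigned long e,
                            unsigned long m)
{
    unsigned long r = 1;
    b %= m;
    while (e) {
        if (e & 1) r = (r * b) % m;
        b = (b * b) % m;
        e >>= 1;
    }
    return r;
}

int main(void)
{
    /* public key: n = 61 * 53, e = 17 (illustrative only) */
    unsigned long n = 3233, e = 17;
    unsigned long secret = 42;                 /* sender's plaintext */
    unsigned long intercepted = modexp(secret, e, n);
    unsigned long guess;

    /* the eavesdropper tries candidate plaintexts with the PUBLIC key */
    for (guess = 0; guess < n; guess++)
        if (modexp(guess, e, n) == intercepted) {
            printf("recovered plaintext: %lu\n", guess);
            break;
        }
    return 0;
}

Run it and the loop recovers the plaintext from nothing but the public
key and the intercepted ciphertext.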
--
poncho
------------------------------
From: Boris Kazak <[EMAIL PROTECTED]>
Subject: Re: Test vector repository--specifically, help with a broken Blowfish
Date: Sat, 10 Apr 1999 11:48:57 -0400
Reply-To: [EMAIL PROTECTED]
Nathan Kennedy wrote:
> Now, maybe broken PC compilers only have 16 bit ints, but I have about 0
> interest in portability to them. Cockroaches outnumber humans more than
> 100 to one, but I'm not interested in them either. I dispute your 99%
> figure anyway.
>
> Right now, I'm just trying to get a working, test vector-passing Blowfish.
> Then I'll make a pretty one that will compile on x86 GNU/Linux, Sparcs,
> Alphas, Suns, and Crays. But not PC's running Borlando Turbo C++ for DOSO.
>
> Nate
======================
Assuming you are right and the sizeof(unsigned int) in your
compiler is actually 4 (although I'd test it just to
be 100% sure...),
the descriptions of BLOWFISH are not very specific
about the byte ordering that produces the desired results.
So your:
#define blowround(l,r,n) l ^= P[n]; \
r ^= ((S[0][l>>24]+S[1][(l>>16)&255])^S[2][(l>>8)&255])+S[3][l&255];
could be any of the following:
#define blowround(l,r,n) l ^= P[n]; \
r ^= ((S[3][l>>24]+S[2][(l>>16)&255])^S[1][(l>>8)&255])+S[0][l&255];
or:
#define blowround(l,r,n) l ^= P[n]; \
r ^= ((S[0][l&255]+S[1][(l>>8)&255])^S[2][(l>>16)&255])+S[3][l>>24];
(most probable...)
or:
#define blowround(l,r,n) l ^= P[n]; \
r ^= ((S[3][l&255]+S[2][(l>>8)&255])^S[1][(l>>16)&255])+S[0][l>>24];
Then, since you actually never swapped L and R, these 2 lines:
*R=xL^P[16]; /* I'm not so sure about where */
*L=xR^P[17]; /* xL and xR are here... */
should probably read:
*R=xR^P[16];
*L=xL^P[17];
or:
*R=xR^P[17];
*L=xL^P[16];   (in the "blow" function,
although your 2 versions should also be tested)
and as well:
*L = xR ^ P[0];
*R = xL ^ P[1];
should probably read:
*L = xL ^ P[0];
*R = xR ^ P[1];   (in the "suck" function, this is almost certain)
The routine that XORs the key into the P-array will
work in your particular case, where the key is exactly
8 bytes long; for keys of other lengths it does not
conform to the description.
BLOWFISH cyclically repeats the key bytes to fill the
whole 18-word P-array, whatever the initial key length is.
So BINGO would be extended as BINGOBINGOBINGO...
and not as BINGOxxxBINGOxxxBINGOxxx... which would be
the case in your program.
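For reference, here is a minimal sketch of that key-mixing step,
assuming a 32-bit unsigned long and the usual 18-entry P-array
(the function and variable names are mine, purely illustrative):

/* XOR the key into the P-array, cycling through the key bytes,
   so a short key repeats as BINGOBINGOBINGO... with no padding. */
void mix_key(unsigned long P[18], const unsigned char *key, int keylen)
{
    int i, j = 0, k;
    unsigned long data;

    for (i = 0; i < 18; i++) {
        data = 0;
        for (k = 0; k < 4; k++) {
            data = (data << 8) | key[j];
            j = (j + 1) % keylen;        /* wrap around the key */
        }
        P[i] ^= data;
    }
}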
However, the key extension is the least of your concerns, since the
first thing you have to do is to make the program
pass the test with the 00000000 key.
Please keep us posted.
Best wishes BNK
------------------------------
From: [EMAIL PROTECTED] (Terry Ritter)
Subject: Re: True Randomness & The Law Of Large Numbers
Date: Sat, 10 Apr 1999 19:39:41 GMT
On Sat, 10 Apr 1999 01:29:54 GMT, in
<[EMAIL PROTECTED]>, in sci.crypt
[EMAIL PROTECTED] (R. Knauer) wrote:
>On Fri, 09 Apr 1999 23:44:22 GMT, [EMAIL PROTECTED] (Terry Ritter) wrote:
>
>>Nah, "correlation" is very acceptable terminology. "Bit-bias" is a
>>specific case of strings of length 1 correlating to constants.
>
>>In general, we can assume strings of arbitrary length, with arbitrary
>>correlating functions (linear, or nonlinear in all its forms), against
>>constants, string positions, or ever-more-complex constructions. Any
>>predictable relationship reasonably can be called a "correlation."
>
>I have seen that claim on a couple of occasions here, namely that
>1-bit bias is just another form of correlation.
>
>Could you explain that further, because it is not obvious.
The term "correlation" expresses the idea that values may occur
together, vary together or in opposition, or be otherwise related in
some way beyond that expected by chance. This concept is widely used
in Science, Engineering, Mathematics, and ordinary speech. I suspect
that it predates formal Statistics in that the workings of Science
would seem to require such a concept. The specialized term
"correlation coefficient" of course represents a particular
statistical measure.
This particular use of the term "bias" expresses the concept of a
constant difference from an expectation, such as a numerical offset
from the expected value in a statistical measure. I have experienced
and tracked to its source just such a bias in the "Runs Up / Runs
Down" test described in Knuth II: When used on small symbol sets, the
Knuth expectations systematically differ from the statistic values
measured from random data. This is a statistical "bias" or difference
due to the use of a real approximation to a discrete population.
The ratio of the occurrence counts of two symbols can be a statistical
measure, and if that systematically differs from what we expect from a
random distribution, then we have a bias from randomness. A measured
difference from the 0.5 bit-value expectation would be a "bias,"
although "bias" need not apply specifically to bits.
Strings can be tested for "correlation" to external constant
substrings. Presumably the test would be some sort of bit-by-bit
comparison and count of bit-similarity, but the form of the test is
unimportant. If the test substring is one bit, and the overall string
has bit bias, we expect that to reveal itself as bit correlations and
anti-correlations.
I would call the overall effect from such correlation-detection a
"bias" toward or away from the substring. But when trying to denote
all possible defects with a single term, "bias" (to me) would not
describe the important case where bits in particular positions oppose
each other while leaving the overall bit-counts generally unchanged.
That is "correlation" in another form, and is exactly the sort of
weakness we expect to find in random generator. We certainly do not
want to leave that out, so I would use "correlation" as a one-term
description for randomness flaw.
Individual bits are presumably not special. If we are very concerned
about bit-bias, we should also be concerned about 2-bit bias and n-bit
bias -- that is, the individual symbols composed of bits, at each
possible bit offset. Whereas 1-bit bias will not detect correlations
between adjacent or nearby bits, n-gram analysis may. We might find
that for some symbol sizes, perhaps in some positions, the symbol
population would have systematic nonuniformity (some symbol counts --
the correlation with a particular symbol -- showing bias). Yet the
overall concept of what is actually being detected might be best
described as a "correlation." So by extension one might use the
natural n-bit term also when talking about 1-bit symbols.
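As a small demonstration of why 1-bit bias alone is too weak a net,
consider this sketch (the alternating stream is a deliberately
pathological stand-in for a generator, chosen purely for
illustration): the stream 0101... passes any bit-count test
perfectly, yet every bit is completely predictable from its
neighbor, and digram (2-bit symbol) counts expose that at once.

/* Bit counts vs. digram counts on a perfectly bit-balanced but
   totally correlated stream (0101...).                          */
#include <stdio.h>

int main(void)
{
    static int bits[10000];
    int i, ones = 0, digram[4] = {0, 0, 0, 0};

    for (i = 0; i < 10000; i++)
        bits[i] = i & 1;                   /* 0,1,0,1,... */

    for (i = 0; i < 10000; i++)
        ones += bits[i];
    for (i = 0; i + 1 < 10000; i++)
        digram[(bits[i] << 1) | bits[i + 1]]++;

    printf("ones: %d of 10000 (no 1-bit bias)\n", ones);
    printf("digrams 00 01 10 11: %d %d %d %d\n",
           digram[0], digram[1], digram[2], digram[3]);
    /* 00 and 11 never occur: total adjacent-bit correlation,
       invisible to any test of the overall bit frequency.     */
    return 0;
}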
All that said, I find excessive concern about particular terms used in
ordinary technical conversation somewhat disturbing. The context is
often not "what did you mean by that" but instead "you misused that
term," as though supposedly-correct usage somehow also implies correct
thought. It does not. Technical terms acquire shades of meaning as
they are used by working communities, and a good way to find current
usage is to inspect the recent literature. But some comments on usage
sound like virgins precisely defining sex.
---
Terry Ritter [EMAIL PROTECTED] http://www.io.com/~ritter/
Crypto Glossary http://www.io.com/~ritter/GLOSSARY.HTM
------------------------------
From: [EMAIL PROTECTED] (Herman Rubin)
Subject: Re: True Randomness & The Law Of Large Numbers
Date: 10 Apr 1999 09:05:39 -0500
In article <[EMAIL PROTECTED]>,
R. Knauer <[EMAIL PROTECTED]> wrote:
>On 7 Apr 1999 12:42:47 -0500, [EMAIL PROTECTED] (Herman
>Rubin) wrote:
>>"Honest" random bits would be such that for any n bits, all sequences
>>are equally likely.
>That's what we have been calling true randomness. Maybe full
>randomness would have been a more descriptive term.
That is what YOU have been calling true randomness. By doing so,
you exclude ALL of the actual random processes taking place in
the universe.
..................
>>From Archimedes on, physical models were done by using mathematical
>>objects. In some cases, the modeling is incomplete, and only parts of
>>the physical system are modeled. But modeling of the real world is
>>only done by using a mathematical model and a dictionary.
>You don't seem to appreciate the heavy empirical content in physics.
The empirical content is in deciding which models are acceptable.
It is extremely important.
Observations only enable one to choose from the vast collection of
mental models available. It is even the case that people postulate
mental models without knowing their properties.
--
This address is for information only. I do not claim that these views
are those of the Statistics Department or of Purdue University.
Herman Rubin, Dept. of Statistics, Purdue Univ., West Lafayette IN47907-1399
[EMAIL PROTECTED] Phone: (765)494-6054 FAX: (765)494-0558
------------------------------
From: [EMAIL PROTECTED] (R. Knauer)
Subject: Re: True Randomness & The Law Of Large Numbers
Date: Sat, 10 Apr 1999 21:19:43 GMT
Reply-To: [EMAIL PROTECTED]
On 10 Apr 1999 09:33:02 -0500, [EMAIL PROTECTED] (Herman
Rubin) wrote:
>Equiprobability is an unattainable goal for a physical
>process. What is attainable is a reasonable certainty
>that this is "sufficiently closely" approximated.
Are you saying that certain quantum states are not equiprobable in
physical reality, as in the Stern-Gerlach experiment?
If the beam of particles in a magnetic field gradient splits into up
and down spin states, what causes one direction to be preferred over
the other?
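As I understand the standard QM account, nothing prefers either
direction: the Born rule fixes the split. A toy sketch of my own (not
from any of the references here) of the outcome probabilities for a
spin-1/2 prepared at polar angle theta and measured along z:

/* Born rule for a Stern-Gerlach measurement along z of a spin-1/2
   prepared at polar angle theta: P(up) = cos^2(theta/2).  The
   equiprobable case is exactly theta = pi/2, a beam polarized
   transverse to the field gradient.                               */
#include <stdio.h>
#include <math.h>

int main(void)
{
    const double PI = 3.14159265358979;
    double theta;

    for (theta = 0.0; theta <= PI + 1e-9; theta += PI / 4.0) {
        double up = cos(theta / 2.0) * cos(theta / 2.0);
        printf("theta = %5.2f  P(up) = %.3f  P(down) = %.3f\n",
               theta, up, 1.0 - up);
    }
    return 0;
}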
>And the informal use also requires independence, which may
>be harder to approximate.
If they are not independent spin states, then what causes them to be
dependent, namely what are they dependent on?
Bob Knauer
"I am making this trip to Africa because Washington is an international
city, just like Tokyo, Nigeria or Israel. As mayor, I am an international
symbol. Can you deny that to Africa?"
- Marion Barry, Mayor of Washington DC
------------------------------
From: [EMAIL PROTECTED] (R. Knauer)
Subject: Re: True Randomness & The Law Of Large Numbers
Date: Sat, 10 Apr 1999 21:31:02 GMT
Reply-To: [EMAIL PROTECTED]
On 10 Apr 1999 10:31:57 -0500, [EMAIL PROTECTED] (Herman
Rubin) wrote:
>You are confusing random with coming from a particular model
>or class of models.
Isn't the uniform Bernoulli process a classical model for a random
process, like a fair coin toss?
But whatever, I am relying on the specification for a TRNG to
represent my notion of true randomness, and that specification is not
based on any model, only the ensemble distribution of sequences
generated by a TRNG.
That is a statement of the fact that if a process is capable of
generating a flat distribution (which is binomially distributed in
terms of 1-bit groups), it is also capable of generating true random
numbers for purposes of provably secure crypto, since a cryptanalyst
has no reason to accept any one number over any other number as the
correct key to decrypt a given OTP cipher.
>Markov processes are not independent.
Somewhere (perhaps Feller) I thought I read that a UBP is the same as
a particular kind of Markov chain. Maybe I misread it.
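Whatever the source, the connection is easy to state: a two-state
Markov chain whose two transition rows are identical is exactly a
Bernoulli process, because the next bit's distribution no longer
depends on the current bit. A toy sketch of my own (rand() stands in
for the noise source, purely for illustration):

/* A two-state Markov chain degenerates to a uniform Bernoulli
   process when both rows of the transition matrix are (1/2, 1/2). */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* p[s] = probability the next bit is 1, given current bit s */
    double p[2] = {0.5, 0.5};  /* equal rows => independent fair bits */
    int bit = 0, i, ones = 0;

    srand(1);
    for (i = 0; i < 100000; i++) {
        bit = ((double)rand() / RAND_MAX < p[bit]) ? 1 : 0;
        ones += bit;
    }
    printf("fraction of ones: %f\n", ones / 100000.0);
    /* with p[0] != p[1] the chain is genuinely Markov: successive
       bits are dependent even if the overall bit frequency is 1/2 */
    return 0;
}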
In any event, I have not stated what classical theoretical model is
suitable to generating true random numbers, since it is impossible to
do it algorithmically for a classical calculation device.
I have alluded to the quantum computer programmed to calculate true
random numbers, but you are reluctant to follow up on my comments
regarding that.
Bob Knauer
"I am making this trip to Africa because Washington is an international
city, just like Tokyo, Nigeria or Israel. As mayor, I am an international
symbol. Can you deny that to Africa?"
- Marion Barry, Mayor of Washington DC
------------------------------
From: [EMAIL PROTECTED] (R. Knauer)
Subject: Re: True Randomness & The Law Of Large Numbers
Date: Sat, 10 Apr 1999 21:41:18 GMT
Reply-To: [EMAIL PROTECTED]
On 10 Apr 1999 10:11:35 -0500, [EMAIL PROTECTED] (Herman
Rubin) wrote:
>There is no TRUE random number
>generator as Knauer defines it.
The TRNG is not my definition. It is a specification that arose from
the prevailing consensus of discussions here.
That TRNG specification is for a flat distribution over unique
sequences, which gives a binomial distribution for 1-bit groups.
What you seem to be saying is that there can be no TRNG capable of
generating a uniform binomial distribution, i.e., p = q = 1/2. I agree
with you if you limit that to classical TRNGs; even the radioactive
TRNG will have an exceedingly small amount of bias due to deadtime
effects. BTW, Feller has a writeup about Geiger counters and
deadtimes.
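To illustrate the deadtime mechanism, here is a toy simulation of my
own: a Poisson source, a simple non-paralyzable deadtime, and bits
taken from the parity of the counts in fixed windows. All rates and
constants are invented for illustration and rand() stands in for the
physics; the point is only that deadtime perturbs the count
distribution, leaving a residual bias which is indeed exceedingly
small.

/* Bits from the parity of Geiger counts per window, with and
   without a non-paralyzable deadtime (all parameters invented). */
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

static double expo(double rate)        /* exponential interarrival */
{
    return -log((rand() + 1.0) / ((double)RAND_MAX + 2.0)) / rate;
}

int main(void)
{
    double rate = 100.0, window = 0.1, tau;
    int trial;

    srand(1);
    for (trial = 0; trial < 2; trial++) {
        double t = 0.0, last = -1.0e9, edge = window;
        long count = 0, ones = 0, bits = 0;

        tau = trial ? 0.002 : 0.0;     /* without, then with deadtime */
        while (bits < 200000L) {
            t += expo(rate);           /* next true decay event */
            while (t >= edge) {        /* close any finished windows */
                ones += count & 1;
                bits++;
                count = 0;
                edge += window;
            }
            if (t - last >= tau) {     /* detector alive: register it */
                count++;
                last = t;
            }
        }
        printf("deadtime %.3f: fraction of ones %f\n",
               tau, (double)ones / bits);
    }
    return 0;
}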
You continually ignore my allusions to the quantum computer programmed
to calculate true random numbers, as described in Williams &
Clearwater's book on quantum computing, with a full chapter dedicated
to True Randomness. Those authors maintain that only a quantum
computer can generate true random numbers.
>The problem is not an easy one. It is necessary to consider it a
>decision problem, with the various consequences considered simultaneously.
Sounds a bit like QM to me.
Bob Knauer
"I am making this trip to Africa because Washington is an international
city, just like Tokyo, Nigeria or Israel. As mayor, I am an international
symbol. Can you deny that to Africa?"
- Marion Barry, Mayor of Washington DC
------------------------------
From: [EMAIL PROTECTED] (R. Knauer)
Subject: Re: True Randomness & The Law Of Large Numbers
Date: Sat, 10 Apr 1999 22:15:24 GMT
Reply-To: [EMAIL PROTECTED]
On 10 Apr 1999 10:22:22 -0500, [EMAIL PROTECTED] (Herman
Rubin) wrote:
>>a) Random sampling is a necessary condition for the validity of
>>statistical tests. If the population distribution is not randomly
>>sampled, then the statistical test is invalid - which means nothing
>>can be determined from it whether it is passed or it is failed.
>This is if one is sampling from a finite population.
That seems to say that you agree with my statement above.
>One can do statistical testing under any probability model whatever.
That is an example of the circularity that Kolmogorov pointed out in
those quotes I have given here from Li & Vitanyi:
+++++
"In everyday language we call random those phenomena where we cannot
find a regularity allowing us to predict precisely their results.
Generally speaking, there is no ground to believe that random
phenomena should possess any definite probability. Therefore we should
distinguish between randomness proper (as absence of any regularity)
and stochastic randomness (which is the subject of probability
theory). There emerges the problem of finding reasons for the
applicability of the mathematical theory of probability to the real
world".
--A. N. Kolmogorov (as quoted in Li & Vitanyi, p. 52, op. cit.)
"The frequency concept based on the limiting frequency as the number
of trials increases to infinity does not contribute anything to
substantiate the application of the results of probability theory to
real practical problems where we always have to deal with a finite
number of trials."
--A. N. Kolmogorov (as quoted in Li & Vitanyi, p. 55, op. cit.)
+++++
>That government testing standards are what SHOULD be done is not
>the issue.
I do not understand what you are driving at.
All I intended with that statement was to repeat the FIPS-140
procedure for sampling, not pass judgement on it. We chose FIPS-140 as
an example of approved statistical testing since no one would come up
with a test of their own.
Someone mentioned Chi Square once, but never followed up on it after
repeated requests to do so. What a test for variance has to do with
true random numbers is a bit obscure, anyway. Best stick with 1-bit
bias for starters.
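For concreteness, here is a sketch of the 1-bit test in question, the
FIPS 140-1 monobit test: gather 20,000 bits and pass iff the number
of ones X satisfies 9654 < X < 10346. The bit-unpacking and the demo
input are my own framing:

/* FIPS 140-1 monobit test: 20,000 bits, pass iff 9654 < X < 10346 */
#include <stdio.h>

int monobit(const unsigned char stream[2500])   /* 20,000 bits */
{
    int i, b, ones = 0;

    for (i = 0; i < 2500; i++)
        for (b = 0; b < 8; b++)
            ones += (stream[i] >> b) & 1;
    return ones > 9654 && ones < 10346;         /* 1 = pass */
}

int main(void)
{
    unsigned char s[2500];
    int i;

    for (i = 0; i < 2500; i++)
        s[i] = 0x55;   /* 0101...: perfectly balanced, so it passes,
                          which is exactly my complaint: passing
                          says nothing about true randomness       */
    printf("monobit: %s\n", monobit(s) ? "pass" : "fail");
    return 0;
}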
>>c) The claim of the Monobit Test is that if it is not passed, then the
>>distribution of bits produced by the TRNG process is determined not to
>>be random within reasonable certainty. It is that claim which I
>>challenge.
>I do not need the test,
Neither do I, but that is irrelevant. The experts on here swear by
those tests, so I introduced them to have something to critique.
Nobody else would volunteer a test, so I had to pick one from the
experts' repertoire.
>and it is the wrong problem.
Please clarify what you mean by that, since I think it is the wrong
testing methodology too.
>The question is whether it is CLOSE to being a TRNG.
It is tempting to equate statistical significance with closeness to
being a TRNG, as if there is some kind of measurement being conducted.
We know that there can be no statistical tests that pass judgement on
how good a TRNG is. The candidate TRNG could pass the tests with a
99.999% significance and still be a very poor TRNG.
Testing of a TRNG is not a measurement in the traditional sense.
Randomness and non-randomness are nominal entities, not real measures
along an interval.
>>d) The result obtained, namely that the population distribution is not
>>random within reasonable certainty,
>What population?
The population of all sequences in the ensemble.
>Populations are only relevant when one is sampling
>from a finite population.
The TRNG specification calls for a finite population. That was done
deliberately to avoid nondiscrete distributions, the kind that lend
themselves to probability measures.
>Consider probability models instead, and
>these do not involve such objects, although the term lingers on.
That requires that you choose a model for the TRNG. What is the model
you choose? A finite uniform Bernoulli process?
I have not seen any prevailing consensus for that as the model TRNG
process. I think most people realize that a fair coin is impossible to
construct, much less flip in a fair manner.
>One of the areas of concern in statistics is that of inference from
>a single realization of a stochastics process.
I have said that all along: You cannot infer the properties of a TRNG
from a single instance of the TRNG output. Nor can you from a sample
of outputs that is infinitesimally small compared to the size of the
ensemble of all possible sequences.
Are you agreeing with me?
Bob Knauer
"I am making this trip to Africa because Washington is an international
city, just like Tokyo, Nigeria or Israel. As mayor, I am an international
symbol. Can you deny that to Africa?"
- Marion Barry, Mayor of Washington DC
------------------------------
** FOR YOUR REFERENCE **
The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:
Internet: [EMAIL PROTECTED]
You can send mail to the entire list (and sci.crypt) via:
Internet: [EMAIL PROTECTED]
End of Cryptography-Digest Digest
******************************