Cryptography-Digest Digest #318, Volume #9 Thu, 1 Apr 99 12:13:04 EST
Contents:
Re: True Randomness & The Law Of Large Numbers (R. Knauer)
Re: True Randomness & The Law Of Large Numbers (R. Knauer)
New York Times article on Differential Fault Analysis (Jean Widson Petit-Frere)
Newbie intro: what modern crypto systems assume the enemy knows (Sundial Services)
Re: My Book "The Unknowable" (Neil Nelson)
Re: New and recent crypto books (Bruce Schneier)
Re: True Randomness & The Law Of Large Numbers (Peter Pearson)
Re: Live from the Second AES Conference (Bruce Schneier)
Re: North Korean A3 code (John Savard)
----------------------------------------------------------------------------
From: [EMAIL PROTECTED] (R. Knauer)
Subject: Re: True Randomness & The Law Of Large Numbers
Date: Thu, 01 Apr 1999 15:28:10 GMT
Reply-To: [EMAIL PROTECTED]
On Thu, 01 Apr 1999 02:57:14 GMT, Dave Knapp <[EMAIL PROTECTED]> wrote:
>Thus, the repeated measurement of
>the position of a particle undergoing a random walk has a very different
>statistical distribution than would measurements of independent
>particles undergoing the walk.
That is the same as saying that the time average of one particle has
nothing to do with the ensemble average of all particles.
>Then you _do_ understand that your objections to statistical testing of
>random-number generators are groundless?
No, I do not.
I brought the random walk into the discussion to point out that
statistical tests cannot be deemed reliable methods of characterizing
true randomness. Also, if the random walk is not a suitable model of a
TRNG (and I never said it was or was not), then why do people claim
that the uniform Bernoulli process which underlies it is suitable?
After all, people talk about the random coin toss as a suitable model
of true randomness. And all one is doing with the random walk is
modeling 1-bit bias for the UBP, which is what Sn measures.
>Unless you are pointing out the well-known fact that one cannot use the
>moving average of the random-number generator as if it were independent.
The quantity Sn / n which you allude to is used by Feller to derive
the weak law of large numbers. He states that
P{ |(Sn / n) - p| < eps} -> 1 as n -> infinity.
You can find it on page 152 of the 3rd edition.
Isn't it interesting that something as fundamental as the law of large
numbers is constructed from something you claim is so "correlated"?
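To make Feller's statement concrete, here is a minimal Python simulation of a fair coin (my sketch, not anything from Feller or this thread), watching Sn/n settle toward p = 1/2:

```python
import random

def relative_frequency(n, p=0.5, seed=1):
    """Simulate n Bernoulli(p) trials and return Sn / n."""
    rng = random.Random(seed)
    sn = sum(1 for _ in range(n) if rng.random() < p)
    return sn / n

# Per the weak law, |Sn/n - p| concentrates near 0 as n grows.
for n in (100, 10_000, 1_000_000):
    print(n, abs(relative_frequency(n) - 0.5))
```

With this (arbitrary) seed, the deviation shrinks as n grows, which is exactly the convergence in probability that the weak law asserts.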
>For _independent_ bitstreams, a TRNG has no bias, and a measurement of
>significant bias would indicate a potential problem.
I fully agree with your statement in principle. However, I disagree
with the use of statistical tests in making that measurement, since I
do not believe that tests conducted on an infinitesimally small sample
are adequate to characterize the properties of a TRNG with reasonable
certainty.
It takes the full ensemble average to uncover lack of bias, and for a
keystream of length n, that means you must sample something of the
order 2^n sequences, which is impossible. If you are willing to relax
the requirement from absolute certainty to reasonable certainty, you
can reduce the number of sequences you must test. But you can never
reduce the test sample to the infinitesimally small size that
statistical testing purports to make inferences from. That inference
is based on a model of pseudorandomness, which is not adequate for
characterizing the true randomness of a TRNG process.
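For what it's worth, the trade-off between certainty and sample size can be made quantitative with a standard concentration inequality. The Hoeffding bound below is my addition, not anything the posters derive; it estimates how many bits are needed to pin the 1-bit bias down to within eps with confidence 1 - delta:

```python
import math

def hoeffding_samples(eps, delta):
    """Bits needed so that |Sn/n - p| <= eps holds with probability
    >= 1 - delta, from the Hoeffding bound delta >= 2*exp(-2*n*eps**2)."""
    return math.ceil(math.log(2 / delta) / (2 * eps ** 2))

# Relaxing the tolerance (eps) or the confidence (delta) shrinks the sample.
for eps, delta in [(0.01, 0.001), (0.01, 0.05), (0.05, 0.05)]:
    print(eps, delta, hoeffding_samples(eps, delta))
```

Note what such a bound does and does not buy: tens of thousands of bits suffice to bound the bias of the process model, but that is a far weaker statement than sampling the 2^n-sized ensemble of sequences discussed above.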
>No decent statistician would use repeated measurements of the bias of a
>_given_ bitstream, re-using previous data, and claim that it would be a
>useful measure of randomness.
>In other words, you are attacking a strawman that doesn't exist.
I am attacking no such thing.
I have seen repeated comments on sci.crypt that bit bias is the most
important thing in determining the non-randomness of a process. These
people claim that you can determine bit bias from the output of a
TRNG. That means they are measuring (Sn / n) and equating it to p for
large n, claiming that if (Sn / n) = p = 1/2, then the TRNG is
reasonably certain not to be non-random. Some even go so far as to
claim that (Sn / n) = 1/2 is proof that the TRNG is random, but these
people do not understand the difference between a necessary and a
sufficient condition.
So don't give me this "strawman" crap. I have been on sci.crypt far
too long to be accused of that. Just go back to the archives and find
out for yourself how many people have made the assertion that a
measurement of (Sn / n) = 1/2 rules out non-randomness.
>Once again, I suggest that before you criticize statistical analyses of
>random number generators, you try to learn _something_ about the
>statistical analyses you are criticizing.
First someone has to demonstrate in a convincing manner that
statistics can even be used to describe true randomness. It is
interesting to note that such demonstration has not been forthcoming
thus far. No one has convincingly demonstrated that statistics has
anything to do with true randomness, other than the *appearance* of
randomness as in pseudorandomness.
BTW, your comment above about not repeatedly sampling the same
sequence does not seem to be correct. If someone were to generate 1
million sequences of 20 bits length, and calculate the bias of each
one, and then do an ensemble average, how would that differ from the
calculation of (Sn) or (Sn / n)? The bias of the ensemble is the sum
of the individual biases of each sequence. I chose those numbers
because 2^20 = 1 million (approximately).
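The equivalence asked about here is easy to check numerically. A small sketch (mine; scaled down to 1000 sequences purely for speed) showing that the ensemble average of per-sequence biases equals Sn/n over the concatenated stream, when all sequences have equal length:

```python
import random

rng = random.Random(42)
num_seqs, seq_len = 1000, 20   # scaled down from the post's 10**6 sequences
seqs = [[rng.randint(0, 1) for _ in range(seq_len)] for _ in range(num_seqs)]

# Ensemble average of the per-sequence biases...
ensemble_bias = sum(sum(s) / seq_len for s in seqs) / num_seqs
# ...equals Sn/n computed over the whole concatenated stream.
stream_bias = sum(sum(s) for s in seqs) / (num_seqs * seq_len)

print(ensemble_bias, stream_bias)
```

So for equal-length sequences the two calculations agree exactly (up to floating-point rounding), which is the point of the question above.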
BTW, I just ordered that book by Trivola from the public library and
with any kind of luck I will get it in time for the weekend. For the
record it should be noted that until recently I too was a proponent of
the position that statistical tests can be used to demonstrate that a
process was not truly random.
It was not until I read Li & Vitanyi, and now Feller, that I
challenged that assertion. If it can be demonstrated that I am wrong,
I will gladly return to the position I held for so long previously.
But I need to be convinced in a clear and certain manner - I will not
merely accept bluster and pontification as reasons to change my
position.
Bob Knauer
"The laws in this city are clearly racist. All laws are racist.
The law of gravity is racist."
- Marion Barry, Mayor of Washington DC
------------------------------
From: [EMAIL PROTECTED] (R. Knauer)
Subject: Re: True Randomness & The Law Of Large Numbers
Date: Thu, 01 Apr 1999 15:39:18 GMT
Reply-To: [EMAIL PROTECTED]
On Thu, 01 Apr 1999 11:25:20 +0100, Mok-Kong Shen
<[EMAIL PROTECTED]> wrote:
>We have discussed many times in the past on the issue of doing (any)
>tests but unfortunately it hasn't been cleared up.
I am beginning to suspect that it never will be cleared up.
>Hence I am
>taking the liberty to raise one basic point once again. If you do any
>tests involving gathering data from physical phenomena, some measuring
>apparatus is involved. That apparatus has certain (non-perfect)
>precision and has a certain range of errors. So the results are only
>acceptable with certain level of confidence and one is thus back to
>statistics, as far as I can see. (To determine the precision
>of the apparatus itself needs the invocation of the theory of statistical
>tests!)
I have never claimed that statistics is invalid in every regard. For
someone who used to do basic research on physical measurements of
stochastic processes, such a claim would be completely out of character.
I am claiming that statistics is not valid for characterizing a
process as not truly random with reasonable certainty.
I really wish people would get that fact straight, namely that just
because I criticize the use of statistics in one regard does not mean
I am criticizing statistics in all regards.
It is only in regard to characterizing non-randomness that I call into
question the validity of statistical methods, and then only when it is
claimed that one can have reasonable certainty about non-randomness. I
readily accept the diagnostic role that statistics plays in uncovering
*possible* non-randomness by comparing the output stream to a model
for pseudo-randomness.
If a TRNG were to output a run of 100 zeros, I would shut it down
immediately and run diagnostics on its subsystems to see if it is
malfunctioning. That's because a run of 100 zeros is very
non-pseudorandom. But it does not mean, with reasonable certainty,
that the TRNG is malfunctioning. To determine that, one must diagnose
the actual TRNG, not just its output.
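For scale, the probability that a fair uniform Bernoulli process emits a particular run of 100 zeros can be computed directly (a back-of-the-envelope sketch of mine, not a claim about any particular TRNG):

```python
from fractions import Fraction

# Probability that a fair UBP emits 100 zeros in a row at a given position.
p_run = Fraction(1, 2) ** 100
print(float(p_run))        # on the order of 1e-30

# Expected number of independent 100-bit windows before seeing it once:
print(2 ** 100)
```

That is why the run justifies diagnostics: a fair process *can* produce it, but the event is so improbable that a hardware fault is the better first hypothesis.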
Please, let's try to avoid this happening again - where I am falsely
accused of claiming that all statistics is wrong. I never said that, I
never even implied that. I know better than to do that.
<jeez>
Bob Knauer
------------------------------
From: Jean Widson Petit-Frere <[EMAIL PROTECTED]>
Subject: New York Times article on Differential Fault Analysis
Date: Thu, 01 Apr 1999 10:52:24 -0500
I am looking for a New York Times article (fall of 1996) describing a
fault-based attack, developed by Bellcore researchers, on the security
of computing devices. The attack was referred to as Differential Fault
Analysis.
Would someone point me to the right place to find it (on the net)? I
have already tried the New York Times and Bellcore (Telcordia) web sites.
Thanks.
------------------------------
Date: Thu, 01 Apr 1999 09:01:53 -0700
From: Sundial Services <[EMAIL PROTECTED]>
Reply-To: [EMAIL PROTECTED]
Subject: Newbie intro: what modern crypto systems assume the enemy knows
Anyone can invent an algorithm that scrambles text so that it looks like
garbage (and might be unrecoverable with casual effort). But this is
rather like locking your door each night, trusting its security without
ever stopping to think how someone -might- break in. (There might be an
open window, etc.) To obtain real security, you have to consider what
someone might do to defeat the cipher and recover your information. The
assumptions made by modern cryptographers are aggressive indeed!
* The attacker is a professional cryptanalyst.
* The attacker knows, or can determine, what algorithm was used and knows
  all about how it works. He can experiment with it at his leisure.
* The attacker can surf the Internet and explore all kinds of strategies
  and possible attacks against copies of your data, without your knowledge!
* The attacker has thousands of messages at his disposal. Megabytes. Gigs.
* The attacker can guess all or part of the plaintext of the messages.
* The attacker can generate and encrypt or decrypt his own "chosen
  plaintext" messages and study the result.
* The attacker has a lot of computational power at his disposal, and the
  willingness to use it.
* People make mistakes. Cryptanalysts exploit those mistakes.
* No cryptosystem can provide "absolute security." A message can be
  recovered given time. The system must protect the message "long enough."
  Every system design is a compromise.
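To see why these assumptions doom most home-grown designs, consider a toy cipher (my illustration, not from the original post) that XORs every byte with one secret byte. Under the known-plaintext assumption alone, the key falls out of a single plaintext/ciphertext byte pair:

```python
def xor_cipher(data: bytes, key: int) -> bytes:
    """'Home-grown' cipher: XOR every byte with a single secret byte."""
    return bytes(b ^ key for b in data)

secret_key = 0x5A
ciphertext = xor_cipher(b"ATTACK AT DAWN", secret_key)

# Known-plaintext attack: the attacker knows the algorithm and guesses
# (correctly) that the message starts with 'A'.
recovered_key = ciphertext[0] ^ ord("A")
print(xor_cipher(ciphertext, recovered_key))   # b'ATTACK AT DAWN'
```

The attacker never needed the key in advance; knowing the algorithm plus one plausible plaintext byte was enough.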
Suddenly that "home-grown" cryptosystem of yours might not sound so good
after all. You're probably better off using, or even licensing, a
cryptosystem with known characteristics that is (a) good enough for your
application; and (b) known to be correctly implemented.
This is similar to the advice to "look at your home as a burglar
would." You assess the attacks that you need to defend against
(possibly enlisting the help of a professional in the field), you select
a well-made lock from a known manufacturer, AND you use it diligently
and correctly... and for legal purposes. Suddenly your home, your data,
your messages, become unlikely to be broken into. Your data is
reasonably secure... AND you are not merely speculating about this; you
actually know.
------------------------------
From: Neil Nelson <[EMAIL PROTECTED]>
Crossposted-To: sci.math,sci.physics,sci.logic
Subject: Re: My Book "The Unknowable"
Date: Thu, 01 Apr 1999 16:07:43 GMT
In article <OhM$[EMAIL PROTECTED]>,
Paul Healey wrote:
[ Materialism for-itself is vulgar, so vulgar materialism is
[ for-itself - the principles which belong to it cannot be discerned
[ by its own schema. It presents itself as if this is the way things
[ actually are, but in fact it is only concerned with the appearance
[ of things. It is just a more modern variant of empiricism, as Hegel
[ points out ! Both are ethically suspect, just as transcendental
[ logic is, in that they fail to ground their schemas.
Materialism as physical science would not appear to be an adequate
subject of vulgarity. Materialism as a set of values for the
accumulation of things in describing a person's behavior or stated
purpose could be a subject of vulgarity. People are vulgar; physical
objects or their study to gain useful information would not be vulgar.
[ Nothing is unknowable, precisely because it is nothing more than a
[ word. [...] What Gorgias, like many a modern logician, fails to do,
[ is differentiate between what a thing is in-itself and that which it
[ belongs to. Constructing a schema which can do this, which is
[ categorically consistent, I claim has a ground, whereas an
[ arbitrary set of axioms is merely a correspondence with what
[ things are in-themselves. This is why, axiomatic deductive models
[ of reasoning fail to capture the dynamic nature of reasoning as it
[ is related to language; you can have a dialogue with someone else,
[ precisely because you have some idea, or can work out and learn what
[ they mean. This requires, that what exists has to be knowable.
A presumption of science is that there is that which is not yet known
but is knowable, but there would not appear to be any necessary
argument that what is not yet known, if there be any, can become
knowable.
The thing-in-itself as against the set of properties (that which it
belongs to) that describe the thing would not appear to be a necessary
distinction, or rather the properties would be primary as the
distinction of things can only be made by properties. Thingness can
be seen as only for the purpose of attaching properties and without
any meaning or use in itself. I.e., it is sufficient to speak of
groups of properties without regard to any underlying thing.
Neil Nelson
------------------------------
From: [EMAIL PROTECTED] (Bruce Schneier)
Subject: Re: New and recent crypto books
Date: Thu, 01 Apr 1999 16:22:40 GMT
On Wed, 31 Mar 1999 14:44:57 GMT, [EMAIL PROTECTED] wrote:
>In article <[EMAIL PROTECTED]>,
> [EMAIL PROTECTED] (CryptoBook) wrote:
>>
>> Classical Crypto Books is pleased to announce that the following new and
>recent
>> crypto books are in stock and ready for immediate shipment.
>
>Commercial advertising is a no-no. Please don't do it again.
I think it's nice to know what books are being published. He only
sends these emails once in a while, and you're free to order the books
from your favorite bookstore. Think of it more as a public service
than an advertisement.
Bruce
**********************************************************************
Bruce Schneier, President, Counterpane Systems Phone: 612-823-1098
101 E Minnehaha Parkway, Minneapolis, MN 55419 Fax: 612-823-1590
Free crypto newsletter. See: http://www.counterpane.com
------------------------------
From: [EMAIL PROTECTED] (Peter Pearson)
Subject: Re: True Randomness & The Law Of Large Numbers
Date: Wed, 31 Mar 1999 17:42:21 GMT
In article <[EMAIL PROTECTED]>,
R. Knauer <[EMAIL PROTECTED]> wrote:
>On 31 Mar 1999 02:16:01 GMT, [EMAIL PROTECTED] (Bryan G. Olson; CMSC (G))
>wrote:
>
>>Not so. As n grows large, a greater and greater fraction of
>>the ink molecules will be within 0.05*n of where they started.
>
>That defies physical intuition, as well as the rules for mixing
>entropy. Everyone knows that if I open a perfume bottle in the middle
>of the room, the odor will spread all over the room with time.
Bryan Olson is correct in saying that as the number of steps, n,
increases, a larger fraction of random walks of length n will be
within 0.05*n of the origin. If you bear in mind that the mean
distance from the origin increases as sqrt(n), which for large
n increases more slowly than 0.05*n -- or *any* positive constant
times n -- this is not surprising.
R. Knauer is correct in saying that perfume molecules will eventually
diffuse throughout the whole room. There is no contradiction with
what Bryan said, since the physical analogue of the situation Bryan
was discussing is a room whose walls recede at a constant rate. The
experiment R. Knauer describes reflects the proportion of random
walks whose distances from the origin exceed some *fixed* value,
and this proportion does indeed tend to 100%.
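A quick simulation (my sketch, with arbitrary walk counts and step counts) shows both effects at once: the fraction of walks ending within 0.05*n of the origin grows with n, while the fraction within a fixed distance of 10 shrinks:

```python
import random

def walk_endpoints(n_steps, n_walks, seed=7):
    """Endpoints of n_walks simple +/-1 random walks of n_steps steps."""
    rng = random.Random(seed)
    return [sum(rng.choice((-1, 1)) for _ in range(n_steps))
            for _ in range(n_walks)]

props, fixed = {}, {}
for n in (100, 400, 1600):
    ends = walk_endpoints(n, 2000)
    props[n] = sum(abs(e) <= 0.05 * n for e in ends) / len(ends)  # grows
    fixed[n] = sum(abs(e) <= 10 for e in ends) / len(ends)        # shrinks
    print(n, props[n], fixed[n])
```

The mean displacement scales like sqrt(n), so a proportional window 0.05*n eventually swallows nearly all walks, while any fixed window captures fewer and fewer -- the two statements Bryan and R. Knauer were each making.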
- Peter
------------------------------
From: [EMAIL PROTECTED] (Bruce Schneier)
Subject: Re: Live from the Second AES Conference
Date: Thu, 01 Apr 1999 16:25:25 GMT
On 01 Apr 1999 09:56:48 +0200, Jaap-Henk Hoepman <[EMAIL PROTECTED]>
wrote:
>[EMAIL PROTECTED] (Bruce Schneier) writes:
>
>> >: IBM's Pankaj Rohatgi explained how he got all 128 bits of
>> >: a Twofish key after only 50 (that is 50 not 2^50) uses of a smart
>> >: card!
>> >
>> >I wonder how secure some of the other ciphers would be, if the kind of
>> >optimizations Bruce suggested for fitting Twofish on a smart card were
>> >applied to them. That is, if it were possible.
>>
>> He said in his talk that every cipher is vulnerable. We've done this
>> sort of work, too, and we have found that you can't defend against
>> these types of attack with the algorithm. You can do some things with
>> the implementation and some things with the hardware, but basically
>> you need to defend in the protocol layer.
>
>I'm not sure I understand how a defense in the protocol layer would prevent a
>DPA style attack. Could you give an example?
Sure. When building smart-card based systems, I try very hard to make
sure all secrets within a device can be known by the person holding
the device.
Bruce
------------------------------
From: [EMAIL PROTECTED] (John Savard)
Subject: Re: North Korean A3 code
Date: Thu, 01 Apr 1999 17:04:08 GMT
[EMAIL PROTECTED] (Jim Dunnett) wrote, in part:
>The Chinese also have a system which codes a subset of the
>ideograms of Mandarin into 5-figure (letter?) groups.
I know there's a system that takes Chinese characters into four-digit
groups, and the book Codebreaker in the Far East shows that the system
also has a three-letter version. There is also a conversion of Chinese
telegraph code into two 8-bit characters, but it is seldom used in
comparison to GuoBiao and Big 5.
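The four-digit-group scheme is just a code-book lookup, which a toy sketch makes clear. The mappings below are illustrative placeholders of mine, NOT entries from the real Chinese telegraph code book:

```python
# Toy code book: each character maps to a four-digit group, in the spirit
# of the Chinese telegraph code described above. Values are made up.
toy_codebook = {"中": "0123", "国": "0456", "电": "0789", "报": "1011"}

def encode(text):
    """Map each character to its four-digit group (toy table only)."""
    return " ".join(toy_codebook[ch] for ch in text)

print(encode("中国"))   # "0123 0456"
```

A three-letter variant would work the same way with 26^3 possible groups instead of 10^4.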
John Savard (teneerf is spelled backwards)
http://members.xoom.com/quadibloc/index.html
------------------------------
** FOR YOUR REFERENCE **
The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:
Internet: [EMAIL PROTECTED]
You can send mail to the entire list (and sci.crypt) via:
Internet: [EMAIL PROTECTED]
End of Cryptography-Digest Digest
******************************