Cryptography-Digest Digest #362, Volume #9 Fri, 9 Apr 99 12:13:05 EDT
Contents:
Re: Live from the Second AES Conference (Matthias Bruestle)
Re: True Randomness & The Law Of Large Numbers (Dave Knapp)
Re: True Randomness & The Law Of Large Numbers (Mok-Kong Shen)
Re: True Randomness & The Law Of Large Numbers ("Douglas A. Gwyn")
Re: Announce - ScramDisk v2.02h (Andrew Haley)
Re: Douglas A. Gwyn : True Jerk ("Douglas A. Gwyn")
Re: Douglas A. Gwyn : True Jerk (R. Knauer)
Re: True Randomness & The Law Of Large Numbers ("Trevor Jackson, III")
More Diffie-Hellman musings... (Peter Gunn)
Re: True Randomness & The Law Of Large Numbers (John Briggs)
Re: DES or RSA Program source (C langage) (RSAEuro General)
Re: True Randomness & The Law Of Large Numbers (R. Knauer)
Re: KL-43? (John Savard)
test ([EMAIL PROTECTED])
----------------------------------------------------------------------------
From: [EMAIL PROTECTED] (Matthias Bruestle)
Subject: Re: Live from the Second AES Conference
Date: Thu, 8 Apr 1999 11:02:57 GMT
Mahlzeit
Ian Goldberg ([EMAIL PROTECTED]) wrote:
> Yes. HINDE, for example.
Is there documentation about this available somewhere on the net?
Thanks
endergone Zwiebeltuete
--
PGP: SIG:C379A331 ENC:F47FA83D I LOVE MY PDP-11/34A, M70 and MicroVAXII!
--
I changed instruments.
Piano?
The blunderbuss.
------------------------------
From: Dave Knapp <[EMAIL PROTECTED]>
Subject: Re: True Randomness & The Law Of Large Numbers
Date: Wed, 31 Mar 1999 01:49:21 GMT
"R. Knauer" wrote:
>
> On Tue, 30 Mar 1999 21:28:05 GMT, Dave Knapp <[EMAIL PROTECTED]> wrote:
>
> >Notice the "Xi" in
> >that equation: it says that Xi+1 depends on Xi. That is the
> >_definition_ of correlation.
>
> Not in my book.
>
> The underlying process for the random walk is the UBP, and it is
> completely uncorrelated. Each step is completely independent of all
> other steps, including the one preceding it.
>
> Sn measures the bias of the path after n steps - it is the net
> difference between steps in one direction versus steps in the other
> direction. There is no "correlation" involved.
Let me make sure I get this straight: you are claiming that Sn and Sn+1
are uncorrelated?
> Nowhere have I said I was going to use the Sn of the random walk to
> make a TRNG keystream. Nowhere. It is the sequence X1X2X3...Xn that is
> used as the keystream.
You claimed that statistical analysis of the TRNG bitstream was
analogous to measuring Sn, and that, because of the properties of S as a
function of n, such analysis would have no meaning.
Let me ask once again, just to be sure I see it: are you _really_
claiming that Sn is independent for each n in a given bitstream?
If you are, then it's no _wonder_ you think statistics is useless.
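(The disagreement is easy to check numerically. A minimal seeded Python sketch, mine rather than either poster's, estimates the correlation between consecutive partial sums of i.i.d. +/-1 steps: the steps are uncorrelated, but Sn and Sn+1 are not, since S_{n+1} = S_n + X_{n+1}.)

```python
import random

random.seed(1)
xs, ys = [], []
for _ in range(2000):
    # 100 independent, unbiased +/-1 steps (the "UBP")
    steps = [random.choice((-1, 1)) for _ in range(100)]
    s_n = sum(steps[:99])       # S_99
    xs.append(s_n)
    ys.append(s_n + steps[99])  # S_100 = S_99 + X_100

def corr(a, b):
    # plain sample correlation coefficient
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b)) / n
    va = sum((x - ma) ** 2 for x in a) / n
    vb = sum((y - mb) ** 2 for y in b) / n
    return cov / (va * vb) ** 0.5

r = corr(xs, ys)
print(round(r, 3))   # close to 1; theory gives sqrt(99/100) ~ 0.995
```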
-- Dave
------------------------------
From: Mok-Kong Shen <[EMAIL PROTECTED]>
Subject: Re: True Randomness & The Law Of Large Numbers
Date: Fri, 09 Apr 1999 11:19:35 +0200
R. Knauer wrote:
>
> On Wed, 07 Apr 1999 19:05:29 +0200, Mok-Kong Shen
> <[EMAIL PROTECTED]> wrote:
>
> >If entirely independent of statistics then based on what? (Note e.g.
> >the calibration of equipments also depends on statistics.)
>
> Not necessarily. Most of the TRNG is composed of digital circuitry, so
> the tests are not analogue.
Did I say 'analogue'? I wanted to know the concrete scientific
basis (the theories) of the tests you use.
>
> >Perhaps on
> >simple visual inspection of the apparatus and the free thoughts of the
> >inspector??
>
> Absolutely not. You must calibrate your equipment. But that does not
> mean you have to use statistical inference. And even if it does, I
> have never had a quarrel with using such methods where applicable.
>
> I am calling statistics into question where it is not applicable,
> namely a reasonably certain determination of non-randomness.
If you calibrate your equipment, the precision is determined by
statistics and you have no absolute certainty, only a result
at a certain confidence level. So everything you measure, including
the random sequence you obtain, is not absolutely exactly that of the
real physical event but can deviate from it by a certain amount
according to a figure associated with a certain confidence level.
Now please tell me how SURE you are of obtaining true randomness
without statistics, even on the assumption that the physical event
being measured were absolutely truly random according to some definition.
>
> >What and how do you choose, when you are given several sequences
> >without knowing their origin??
>
> You can't.
Then how do you solve the practical problem of encryption that
needs such sequences? Simply give up and do nothing or what?
>
> Randomness is a property of the generation process, not samples of the
> generation process. Only the full ensemble will tell you with absolute
> certainty if the TRNG is truly random.
>
> >Please give a concrete and
> >understandable description of YOUR approach that leads to an
> unambiguous decision making in practice.
>
> I did that several times with the radioactive TRNG. You will have to
> consult the archives using keywords like my name, "TRNG",
> "radioactive", "detector", "dead time", "radioisotope", "true
> randomness".
There were never concrete, exact descriptions; all were vague, like
inspecting the engineering designs employing experts. But decisions
from experts without scientifically sound theories of
measurement, which essentially depend on statistics, are simply
not dependable and hence useless.
>
> Quantum Mechanics claims to be a complete system. If you know the wave
> function for a system, then you know all you can ever hope to know
> about the system. The wave function is the determinant part. The
> random part can never be known, since to know it you would have to
> know all the constituents that make it up, and measuring those would
> disturb the system enough to alter the thing you want to know.
>
> >In ALL branches of physics there are idealizations, I believe.
>
> QM has no idealizations at its foundations. There are idealizations
> with specific applications, but QM itself is not an idealization.
Now obtaining a random sequence from a quantum experiment IS a
'specific application'. Are there idealizations there or not??
M. K. Shen
------------------------------
From: "Douglas A. Gwyn" <[EMAIL PROTECTED]>
Subject: Re: True Randomness & The Law Of Large Numbers
Date: Fri, 09 Apr 1999 11:16:31 GMT
"Trevor Jackson, III" wrote:
> superiority of hand coding is wrong. In fact, I'd be willing to bet
> real money that I can design an instruction set that no human can code
> as well as a compiler.
It doesn't have to be "obfuscated". There are actual, reasonable
architectures for which optimization technology usually beats
"hand-coded" assembler. This can happen, for example, with wide
instruction words where several computational threads are occurring
in parallel, or for large register sets, or other situations in
which there are mathematical methods of optimization that are not
intuitive.
------------------------------
From: [EMAIL PROTECTED] (Andrew Haley)
Subject: Re: Announce - ScramDisk v2.02h
Date: 9 Apr 1999 11:10:56 GMT
[ Newsgroups list trimmed ]
Lincoln Yeoh ([EMAIL PROTECTED]) wrote:
: I like the idea of superencryption too, and I don't know why so few
: people seem to like it. So far I have not had a good answer to how
: an attacker would know if he or she has succeeded.
The answer is simple. Kerckhoffs' maxim says that your attacker knows
the cryptosystem you're using, but does not know the key. If you're
using superencryption, your attacker knows which systems you're using.
Of course, your attacker must now analyze the compound cipher, which
is almost certainly harder than attacking a single cipher.
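(As a toy illustration of the compound-cipher idea, my own sketch rather
than anyone's fielded system: two independent keystream layers are stacked,
each a hypothetical SHA-256 counter-mode stream. The helper names and keys
are made up; with XOR layers, applying the same two layers again decrypts.)

```python
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    # counter-mode keystream from SHA-256 (illustrative only, not vetted)
    out = b""
    ctr = 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def superencrypt(data: bytes, k1: bytes, k2: bytes) -> bytes:
    inner = xor(data, keystream(k1, len(data)))      # first cipher layer
    return xor(inner, keystream(k2, len(inner)))     # second cipher layer

pt = b"attack at dawn"
ct = superencrypt(pt, b"key-one", b"key-two")
# XOR layers are involutions, so re-applying both layers decrypts
assert superencrypt(ct, b"key-one", b"key-two") == pt
```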
Andrew.
------------------------------
From: "Douglas A. Gwyn" <[EMAIL PROTECTED]>
Subject: Re: Douglas A. Gwyn : True Jerk
Date: Fri, 09 Apr 1999 11:21:53 GMT
So R. Knauer is calling me names?
What does that say for the quality of his arguments?
(N.B. I didn't originate the Subject in the "R. Knauer: True Jerk"
thread. It is inherited in follow-ups, just like this one is.)
------------------------------
From: [EMAIL PROTECTED] (R. Knauer)
Subject: Re: Douglas A. Gwyn : True Jerk
Date: Fri, 09 Apr 1999 11:55:21 GMT
Reply-To: [EMAIL PROTECTED]
On Fri, 09 Apr 1999 11:21:53 GMT, "Douglas A. Gwyn" <[EMAIL PROTECTED]>
wrote:
>(N.B. I didn't originate the Subject in the "R. Knauer: True Jerk"
>thread. It is inherited in follow-ups, just like this one is.)
Then don't post to it.
Bob Knauer
"I am making this trip to Africa because Washington is an international
city, just like Tokyo, Nigeria or Israel. As mayor, I am an international
symbol. Can you deny that to Africa?"
- Marion Barry, Mayor of Washington DC
------------------------------
Date: Fri, 09 Apr 1999 08:27:30 -0400
From: "Trevor Jackson, III" <[EMAIL PROTECTED]>
Subject: Re: True Randomness & The Law Of Large Numbers
Douglas A. Gwyn wrote:
>
> "Trevor Jackson, III" wrote:
> > superiority of hand coding is wrong. In fact, I'd be willing to bet
> > real money that I can design an instruction set that no human can code
> > as well as a compiler.
>
> It doesn't have to be "obfuscated". There are actual, reasonable
> architectures for which optimization technology usually beats
> "hand-coded" assembler. This can happen, for example, with wide
> instruction words where several computational threads are occurring
> in parallel, or for large register sets, or other situations in
> which there are mathematical methods of optimization that are not
> intuitive.
I've worked in both. VLIW or horizontal microcode does not require
mathematical optimization. They are why basket weaving is taught at MIT.
;-) Even heavy conditional pipelining can be dealt with manually.
Large register sets (~1000) are painful to keep track of, but, unless
you have stack frame windowing, not that difficult.
Typically the coder's knowledge of the expected behavior of the code
dominates the optimizer's low-level techniques. This is the basis for
most of the claims of superiority of hand coding over compiler
optimization. I expect it to be invalidated when the coder cannot keep a
mental image of the ALU's state in his head.
------------------------------
From: Peter Gunn <[EMAIL PROTECTED]>
Subject: More Diffie-Hellman musings...
Date: Fri, 09 Apr 1999 12:32:45 +0100
No one has commented on my proposal to avoid the man-in-the-middle
attack by simply creating a key by hashing a secret shared between the
client & server with the DH value...
A->B: y1=(2^x1)%p
B->A: y2=(2^x2)%p
A: key=H(secret,(y2^x1)%p)
B: key=H(secret,(y1^x2)%p)
thinking on this a bit more, and wanting to use a user specific secret,
how about the following...
x1 is a random number
x2 is a random number
A is the Client.
B is the Server.
H() is a one way crypto hash function (SHA or similar)
p a large safe prime
key is the key used for the symmetric block cipher
U is the userid (perhaps just the user's name??)
A->B: y1=(2^x1)%p, H(U)
B->A: y2=(2^x2)%p, H(U,y2)
A: key=H(U,(y2^x1)%p)
B: key=H(U,(y1^x2)%p)
So, what happens is...
1) The client generates a random number (x1), works out
y1=(2^x1)%p and sends it along with SHA(userid) to
the server.
2) Server looks up list of users keyed by SHA(userid)...
disconnects client if it is not recognised, otherwise
generates a random number (x2) and returns y2=(2^x2)%p
to the client, along with SHA(userid,y2). Server works out key for
block cipher using SHA(userid,(y1^x2)%p).
3) Client calculates SHA(userid,y2) and disconnects if
the value is wrong. Client works out key for block cipher
from SHA(userid,(y2^x1)%p)
Then the server and client encrypts all the traffic using their
shared key. If this works, its possible to have a secure
client/server network connection that avoids the man in
the middle based simply on userid, which the server needs
to keep a list of anyway.
If the user list needs to be public, then a short password
could be associated with the user, and "userid+password"
could be used in place of userid (U) above.
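(Under the stated assumptions, the exchange can be sketched in Python.
This is my own sketch of the proposal, not tested code: p = 10007 is a
toy safe prime far too small for real use, and H is taken to be SHA-256
over the stringified, concatenated arguments.)

```python
import hashlib
import secrets

# Toy parameters for illustration only; a real deployment needs a
# large (>= 1024-bit) safe prime.
p = 10007          # safe prime: (p - 1) // 2 = 5003 is also prime
g = 2

def H(*parts) -> bytes:
    # stand-in for the one-way hash H() in the proposal
    h = hashlib.sha256()
    for part in parts:
        h.update(str(part).encode())
    return h.digest()

U = "alice"        # userid, known to both client and server

# A -> B: y1 = (g^x1) % p, together with H(U)
x1 = secrets.randbelow(p - 2) + 1
y1 = pow(g, x1, p)

# B -> A: y2 = (g^x2) % p, together with H(U, y2)
x2 = secrets.randbelow(p - 2) + 1
y2 = pow(g, x2, p)

# both sides mix the shared userid into the DH value
key_A = H(U, pow(y2, x1, p))   # client
key_B = H(U, pow(y1, x2, p))   # server
assert key_A == key_B          # (y2^x1) % p == (y1^x2) % p
```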
Comments please :-)
ttfn
PG.
------------------------------
From: [EMAIL PROTECTED] (John Briggs)
Subject: Re: True Randomness & The Law Of Large Numbers
Date: 9 Apr 99 07:51:43 -0400
In article <[EMAIL PROTECTED]>, [EMAIL PROTECTED] (John
Briggs) writes:
> In article <[EMAIL PROTECTED]>, [EMAIL PROTECTED] (R. Knauer)
>writes:
>> I think it is clear that if you could test the whole ensemble, you
>> would have certain knowledge. And it is clear that if the sample size
>> is too small, that your knowledge is very poor. I am looking for the
>> knowledge criterion I call "reasonable certainty". I am leaving that
>> open for you to specify. Once you specify it, then you should be able
> to come up with a sample size for a given test. Just keep in mind that
>> the gaussian falls off very slowly, so each increase in significance
>> costs exponentially more in sample size.
>
> It is not clear what you are talking about here. Increase in significance
> need not cost exponentially more in sample size.
[my demonstration/clarification snipped]
> So rather than sample size being an exponential function of significance,
> we've got significance being something like an exponential function
> of sample size.
Since R. Knauer has responded to this article and has snipped the text
above without response, we can apparently take it that he agrees that
I was right and he was wrong and increasing significance does not
require exponentially increasing sample size.
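(The relationship can be illustrated numerically. In this sketch, my own
and not from either poster, a monobit-style ones count with a fixed bias
eps in Pr(bit = 1) = 0.5 + eps has a z-score growing like sqrt(n), so the
two-sided tail probability shrinks roughly like exp(-2*eps^2*n): the
significance improves exponentially with sample size, not the other way
round.)

```python
import math

eps = 0.01                              # fixed bias of the generator
pvals = []
for n in (1_000, 10_000, 100_000):
    # mean shift n*eps over standard deviation sqrt(n)/2
    z = 2 * eps * math.sqrt(n)
    # two-sided normal tail probability
    p = math.erfc(z / math.sqrt(2))
    pvals.append(p)
    print(n, round(z, 2), p)
```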
Hot dog! A point on which consensus has been reached.
John Briggs [EMAIL PROTECTED]
------------------------------
From: [EMAIL PROTECTED] (RSAEuro General)
Subject: Re: DES or RSA Program source (C langage)
Date: Fri, 09 Apr 1999 12:41:59 GMT
Reply-To: [EMAIL PROTECTED]
On Sat, 27 Mar 1999 16:08:07 +0100, BERTINI Fabien
<[EMAIL PROTECTED]> wrote:
>Do you know where we can find the source code in C langage of the RSA or
>DES system?
>I will be very happy if anybody could help me or send me the adress
>where I can find it.
You could have a look at RSAEuro which can be found at
http://www.reapertech.com/RSAEuro/
Regards
RSAEuro Team.
============================================================================
RSAEURO: [EMAIL PROTECTED]
RSAEURO Bugs: [EMAIL PROTECTED]
Tel: +44 (0)370 566687
Http: http://www.reapertech.com/RSAEuro/
RSAEURO - Copyright (c) J.S.A.Kapp 1994-1997.
============================================================================
RSAEURO - Cryptography for the World.
Reaper Technologies - Computer Security Specialists
Note:
All Unsolicited Email (SPAM) sent to Reaper Technologies email
addresses will result in the sender being billed for all
resources used in processing this mail.
------------------------------
From: [EMAIL PROTECTED] (R. Knauer)
Subject: Re: True Randomness & The Law Of Large Numbers
Date: Fri, 09 Apr 1999 14:11:04 GMT
Reply-To: [EMAIL PROTECTED]
On 9 Apr 99 07:51:43 -0400, [EMAIL PROTECTED] (John Briggs)
wrote:
>Since R. Knauer has responded to this article and has snipped the text
>above without response, we can apparently take it that he agrees that
>I was right and he was wrong and increasing significance does not
>require exponentially increasing sample size.
As I pointed out earlier, I was referring to the fact that the
gaussian is an exponential which falls off very slowly, which means
that in order to gain more significance you have to pay an
exponentially greater price in terms of sample size.
I did not intend for that comment to be a precise analytical
statement, but just a qualitative observation. I do recall reading in
Triola's elementary statistics book that, in general, to double the
level of confidence of a measurement you must get an order of
magnitude larger sample.
I do not care to chase down that specific reference, since I did not
intend anything quantitatively precise by my original comment. I agree
with your analysis for a specific test statistic, so let's move on,
unless you think there is something fundamental to be gained by
further discussion.
>Hot dog! A point on which consensus has been reached.
We should be so lucky, eh.
I notice we haven't gotten any consensus on some of the questions I
asked recently. Now that I have your attention, maybe you would like
to comment on them.
I focus entirely on the so-called "Monobit Test" in FIPS-140, which I
reproduce here (we can take up other tests later):
+++++
A single bit stream of 20,000 consecutive bits of output from the
generator is subjected to each of the following tests. If any of the
tests fail, then the module shall enter an error state.
The Monobit Test
1.Count the number of ones in the 20,000 bit stream. Denote this
quantity by X.
2.The test is passed if 9,654 < X < 10,346.
+++++
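(The test as quoted is mechanical to implement; a minimal Python sketch,
mine, with a generator stood in by the stdlib secrets module, follows. As
a side note, under the null hypothesis the count has mean 10,000 and
standard deviation sqrt(20000 * 0.25) ~ 70.7, so the quoted bound is
roughly a 4.9-sigma window.)

```python
import secrets

def monobit_test(bits) -> bool:
    # FIPS-140 Monobit Test: count the ones in a 20,000-bit stream
    assert len(bits) == 20000
    x = sum(bits)
    return 9654 < x < 10346

# sanity checks: a perfectly balanced stream passes, a constant one fails
assert monobit_test([0, 1] * 10000)
assert not monobit_test([1] * 20000)

# a stream from the OS randomness source is overwhelmingly likely to pass
stream = [secrets.randbits(1) for _ in range(20000)]
print(monobit_test(stream))
```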
Here are my questions.
Assuming that the sample consists of 20,000 individual samples of one
bit each (which is what "monobit" means):
What is the exact statistical test being used and what are its
parameters?
Mario Triola, the statistical expert recommended by one poster here,
states that parametric tests cannot be used to determine true
randomness. Is FIPS-140 employing a parametric test, and if so, what
is the justification?
But here is the real problem to be solved:
a) Random sampling is a necessary condition for the validity of
statistical tests. If the population distribution is not randomly
sampled, then the statistical test is invalid - which means nothing
can be determined from it, whether it is passed or failed.
b) Producing 20,000 bits in a continuous sequence is the method of
sampling chosen for FIPS-140 tests (see above).
c) The claim of the Monobit Test is that if it is not passed, then the
distribution of bits produced by the TRNG process is determined not to
be random within reasonable certainty. It is that claim which I
challenge.
d) The result obtained, namely that the population distribution is not
random within reasonable certainty, means that condition "a" above is
not met, since the process which generated the population distribution
is the exact same process used to produce the sample. If the process
that generates the population distribution is not random, as the test
declares, then the same process which produces the sample is not
random either.
e) Therefore, because the sampling procedure is not random, the
Monobit Test is invalid, and the results are also invalid. You can't
have it both ways - either the RNG is random, in which case the test
would be passed, or it is not random, in which case the test is
invalid because the same process which produces the distribution also
produces the sample.
The usual assumption is that one can interpret failure to pass
statistical tests as a determination that the TRNG cannot produce true
random numbers, when in fact the tests are invalidated because the
TRNG did not produce a random sample.
For the Monobit Test, the sample and the distribution are the same
thing because they are produced by the same process. There is no way
to distinguish one from the other. If the distribution is not random,
neither is the sample, in which case the Monobit Test is invalid,
since it is a statistical test which requires a random sample. But the
determination that the distribution is not random came from the
failure to pass an invalid test, and that determination is invalid
too.
I await your reply (or anyone else's reply) to these questions.
Bob Knauer
"I am making this trip to Africa because Washington is an international
city, just like Tokyo, Nigeria or Israel. As mayor, I am an international
symbol. Can you deny that to Africa?"
- Marion Barry, Mayor of Washington DC
------------------------------
From: [EMAIL PROTECTED] (John Savard)
Subject: Re: KL-43?
Date: Fri, 09 Apr 1999 15:22:23 GMT
Dan <[EMAIL PROTECTED]> wrote, in part:
>Does anyone know what encryption the KL-43 uses??
Probably no one who is allowed to tell you...that sounds like a designation
used by the U.S. military for a current or recent encryption device.
John Savard (teneerf is spelled backwards)
http://members.xoom.com/quadibloc/index.html
------------------------------
From: [EMAIL PROTECTED]
Subject: test
Date: Sat, 10 Apr 1999 00:58:48 +0900
test
------------------------------
** FOR YOUR REFERENCE **
The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:
Internet: [EMAIL PROTECTED]
You can send mail to the entire list (and sci.crypt) via:
Internet: [EMAIL PROTECTED]
End of Cryptography-Digest Digest
******************************