Cryptography-Digest Digest #20, Volume #9 Tue, 2 Feb 99 11:13:11 EST
Contents:
Re: Entropy and SHA1 (Bloefeldt)
Re: *** Where Does The Randomness Come From ?!? *** (Marty Fouts)
What's the newest on MD4? ( Mr. Anssi Bragge)
Re: Crypt Info ???? (Jekman321)
Re: Truth, theoremhood, & their distinction (R. Knauer)
Encryption for telemedicine (Themos Dassis)
Re: Random numbers generator and Pentium III (R. Knauer)
Re: *** Where Does The Randomness Come From ?!? *** (R. Knauer)
Re: RNG Product Feature Poll (Herman Rubin)
Re: Japanese Purple encryption
----------------------------------------------------------------------------
From: Bloefeldt <[EMAIL PROTECTED]>
Subject: Re: Entropy and SHA1
Date: Tue, 02 Feb 1999 03:02:49 -1000
Eric W Braeden wrote:
>
> What, if anything, can be stated about the effect of changing
> the entropy of the INPUT to SHA1 on its output? What
> I want here is the effect of changing the total entropy
> or its density in the input to SHA1 and not that hashing
> functions destroy entropy.
>
> Eric
Entropy calculations deal with the probabilities of certain variables
occurring. You can control the variables that are hashed, and SHA1 will
translate the inputs into outputs in a repeatable mapping. If you
define the input alphabet to have a small range of choices, like a-z, and
if you use the input alphabet symbols with certain probabilities, then
the SHA1 output symbols that correspond to those symbols will have the
same entropy as the input symbols. Only 26 distinct output values will
occur, since SHA1 is deterministic and uses no variable key.
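That determinism is easy to check (a minimal Python sketch using the
standard hashlib module; the 26-letter alphabet mirrors the a-z example
above):

```python
import hashlib
import string

# Hash each symbol of a 26-letter input alphabet with SHA1.
digests = {c: hashlib.sha1(c.encode()).hexdigest() for c in string.ascii_lowercase}

# The mapping is deterministic (and collision-free here), so exactly 26
# distinct outputs occur -- the output distribution can carry no more
# entropy than the input distribution over these 26 symbols.
print(len(set(digests.values())))  # -> 26
```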
------------------------------
Crossposted-To: sci.skeptic,sci.philosophy.meta
Subject: Re: *** Where Does The Randomness Come From ?!? ***
From: Marty Fouts <[EMAIL PROTECTED]>
Date: 01 Feb 1999 17:52:51 -0800
>>>>> Tom Norback pounded silicon into:
> Marty Fouts wrote in message ...
>> I tend to agree. Non-determinism is a satisfactory answer, but
>> the more interesting question is about non-causality. In the
>> expanded formulation above, non-causality would be represented by
>> any C_(n+1) that is _not_ a member of U' (= F(C_(n))).
>> Non-determinism is obvious from QT. But is non-causality?
>>
> Over its proper range I think QT is entirely causal in the way you
> define causal. For a particular measurable property QT yields a
> probability weighted set of possible results. Measurement will
> always yield a result from within that set. So far at least this
> has always been the case.
Well, we are ignoring whether n is a natural number or an integer, of
course. If we suppose n to be natural, there is a 'first cause'
missing, which we can avoid by claiming that 'the big bang did it',
that is, that QT doesn't apply until the temperature causes the
forces to start to manifest, and so the 0th element isn't properly a
member of the universal set. I find that satisfactory, although some
don't.
> Your C_(n), however, is more general than a particular measurable
> property. It seems to imply something like: "the actual value
> of_every_measurable property of the system at time (n)".
That is the intent.
> According to QT that sort of C_(n) can't exist. For every
> property of the system with an actual value there is a
> complementary property of the system with no determinate value.
I suppose one could replace individual values with conjugate-pair
products and use ranges rather than single numerical values, thus the
property of a conjugate pair is known only as the product described as
a range. This preserves HUP and leaves the state definable.
> (There is a further problem if C_(n) implies that a real system is
> or can be exhaustively described with a finite number of
> properties.)
Now *that* is a can of worms I don't want to get into in detail, but a
sketch of a potential work around is that we can assume that the total
energy in the universe is finite and thus can be represented by a
finite number of mass/energy quanta. If we assume that each quantum
has a finite number of parameters (that's the handwave, of course),
then the configuration C _can_ be described with a finite number of
properties. (Remembering, of course, that we have chosen a particular
frame of reference.)
> When Bohr was asked for the complementary property of "truth" he
> thought about it for a while and then said "clarity".
Cool. Do you have a reference for that quote? That's a Bohr-ism I
didn't know before and like a lot.
> Later,
> Tom
--
that is all
------------------------------
From: [EMAIL PROTECTED] ( Mr. Anssi Bragge)
Subject: What's the newest on MD4?
Date: 02 Feb 1999 12:04:49 +0100
I read some papers from the RSA website, mainly about the PKCS
stuff. I came across MD4 as being on the faster edge of
development, but perhaps not collision free. At least MD4
with 2 rounds is not safe, and with 3 rounds it is less certain,
according to these papers.
Those papers were dated 1993. Since then I haven't seen here or
elsewhere any debate about MD4/5. I've only read this group and
talk.politics.crypto for about a year or so, so I can't say whether
this is an ancient and already-forgotten subject.
What's the situation with MD4 now? Everyone seems to be
talking only about MD5 on a daily basis.
abe
--
Anssi Bragge
UBS AG, Messages & File transfer systems I46S
Bahnhofstrasse 45, CH-8045 Zuerich, Switzerland
Tel: +41 1 236 0485 / Fax: +41-1-236 41 41 / GSM: +41-76-388 7722
------------------------------
From: [EMAIL PROTECTED] (Jekman321)
Subject: Re: Crypt Info ????
Date: 2 Feb 1999 11:49:32 GMT
>From: fungus <[EMAIL PROTECTED]>
>Date: 2/2/99 2:11 AM Eastern Standard Time
>Message-id: <[EMAIL PROTECTED]>
>
>
>
Fungus wrote........
>How often will you be using the software? How much computer experience
>do you have?
I'll be using it for all personal info on my HD as well as all e-mail
correspondence. I have been doing desktop support for 2.5 yrs now, most of it
in a medical environment. Working on the Y2K issue now; extensive replacement
and data transfers will be involved. Business is good : )
>There are plenty of free programs around which will
>probably do the same job. Many of them are regarded by us as better
>because they don't hide their algorithms behind "company secrets"
>(there are very few secrets in cryptography...) and there are no
>clueless marketing men to deal with.
Could you suggest a few as well as inform me what to stay clear of..?? I
always said that if you want to know something, go to the ones who know and
ask.
>"Forensic attack" is a buzz phrase invented by the marketing people
>of that company. (Short enough?)
Thanxxxx
>> Can Uncle Sammy & the SS obtain my "key" with a "key" retrieval
>> program..??
>
>Not if the program is any good.
>
>
>> Is there anything out there to just prevent OLE Uncle Sammy from
>> sticking his nose into my hd, e-mail, or business..?? Or @ least make
>> it a serious pain in their ass to have even tried..??
>>
>
>Yes, plenty of things (some of them very simple) can make this impossible.
>What type of computer and operating system do you have? What kind of
>usage are we looking at?
>
Compaq DeskPro with a P-166. Current O/S is WIN-95 (still waiting on the
pros/cons of 98)
>--
><\___/>
>/ O O \
>\_____/ FTB.
------------------------------
From: [EMAIL PROTECTED] (R. Knauer)
Crossposted-To: sci.math,comp.theory
Subject: Re: Truth, theoremhood, & their distinction
Date: Tue, 02 Feb 1999 11:46:14 GMT
Reply-To: [EMAIL PROTECTED]
On Mon, 01 Feb 1999 20:06:57 -0500, Nicol So <[EMAIL PROTECTED]>
wrote:
>No. "Reality" is the "possible world" described by an interpretation.
What if there is an interpretation but it is incorrect - does that
mean there is still Reality?
Is the incorrect interpretation of a Correct Interpretation also
Reality?
What if there is no interpretation - does that mean there is no
Reality?
Bob Knauer
"Sometimes it is said that man cannot be trusted with the government
of himself. Can he, then, be trusted with the government of others?"
--Thomas Jefferson
------------------------------
From: Themos Dassis <[EMAIL PROTECTED]>
Subject: Encryption for telemedicine
Date: Tue, 02 Feb 1999 14:59:42 +0200
Reply-To: [EMAIL PROTECTED]
I am working in a European project in telemedicine and trying to
identify the security components that will be needed.
The system consists of LAN's connected with ISDN reserved lines.
The LAN's will be Ethernets, while on the ISDN the EURO-ISDN protocol
will be used. The data communicated is sensitive patient data, and
quite large (around 70 Mbytes). The project is not focused on
security, but we have to implement some kind of encryption.
I was thinking of an ISDN modem with symmetric encryption hardware
incorporated into it, but I was told that such a solution is outdated
and that a public-key solution with a TTP would suit my problem best.
What kind of solution would you propose?
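[For context, the "public key with a TTP" suggestion usually amounts to a
hybrid scheme: wrap a random symmetric session key with the recipient's
public key, then encrypt the bulk patient data symmetrically. A toy
Python sketch follows; the tiny RSA modulus and the SHA1-counter stream
are illustrative stand-ins only, not vetted algorithms or real key sizes.]

```python
import hashlib
import os

# Textbook RSA with toy primes (p=61, q=53 => n=3233, e=17, d=2753).
# Real systems use a vetted library and much larger keys.
n, e, d = 3233, 17, 2753

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Symmetric layer: XOR the data with a SHA1-in-counter-mode
    keystream (a sketch of a stream cipher, not a vetted design)."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out.extend(hashlib.sha1(key + counter.to_bytes(8, "big")).digest())
        counter += 1
    return bytes(b ^ k for b, k in zip(data, out))

# Sender: random session key, wrapped with the recipient's public key;
# the 70 Mbyte bulk data would go through the fast symmetric layer.
session_key = os.urandom(16)
wrapped = [pow(b, e, n) for b in session_key]   # toy RSA, byte at a time
ciphertext = keystream_xor(session_key, b"patient record ...")

# Recipient: unwrap the session key with the private key, then decrypt.
recovered_key = bytes(pow(c, d, n) for c in wrapped)
print(keystream_xor(recovered_key, ciphertext))  # -> b'patient record ...'
```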
Themos Ntasis
------------------------------
From: [EMAIL PROTECTED] (R. Knauer)
Subject: Re: Random numbers generator and Pentium III
Date: Tue, 02 Feb 1999 13:00:17 GMT
Reply-To: [EMAIL PROTECTED]
On Tue, 02 Feb 1999 05:49:44 -0500, "Trevor Jackson, III"
<[EMAIL PROTECTED]> wrote:
>> So if a finite number fails your test, does that mean it isn't random?
>It means that we have some confidence (probability) that it is not random. The
>confidence is defined statistically.
This statement is yet another prime example of the kind of sophistry
that comes from people who attempt to characterize randomness from
statistical tests of the output of a number generator.
Let's say you want to buy a TRNG but first you require that it be
certified provably secure. So the person selling you the TRNG tells
you that he will perform statistical tests. Let's call that test T1.
In T1 he claims to have found random strings and non-random strings.
Let's call them "good" strings and "bad" strings respectively.
According to the sophistry of statistical testing of the output, you
want as many "good" strings and as few "bad" strings as possible from
your TRNG.
In T1 we find a certain number of strings that are bad because they
failed the statistical tests and a certain number of strings that are
good because they passed the statistical tests. Let that ratio be:
(bad/good) = r1.
Since you are still skeptical about this whole procedure, you require a
second test T2 to be performed which yields the ratio:
(bad/good) = r2.
Because r2 is not equal to r1, you get suspicious and require a third
test T3:
(bad/good) = r3
....
(bad/good) = r_n
After n such tests you begin to see that the ratio (bad/good) is
bounded on the lower and upper extremes by RL = min (r_i) and RU = max
(r_i), so you have a high degree of confidence that:
RL <= (bad/good) <= RU.
Let's say you are SO confident that you assign a probability of one to
that confidence. IOW, that range above has probability one of being
correct:
Pr (RL <= (bad/good) <= RU) = 1
where Pr (x) is the probability of x.
Since you know nothing about how the numbers are being generated, you
may just as well assume that the distribution of ratios is symmetric
around the mean of the bounds.
That implies that
Pr (RL <= (bad/good) <= 1/2(RL+RU)) = 1/2
and
Pr (1/2(RU+RL) <= (bad/good) <= RU) = 1/2
where 1/2(RU+RL) is the mean of the bounds RL and RU.
Now you take the RNG to another person and he submits it to
statistical tests in the same manner as above and he gets identical
results to those above, but this time he considers the ratio
(good/bad) instead of (bad/good).
Using the distribution above, namely
1/RU <= (good/bad) <= 1/RL
we get
Pr (1/RU <= (good/bad) <= 1/2(1/RU+1/RL)) = 1/2
and
Pr (1/2(1/RU+1/RL) <= (good/bad) <= 1/RL) = 1/2
But this contradicts the results obtained earlier from the ratio
(bad/good) since the means are not the same when you go back to the
original ratio (*). Clearly something is wrong here.
This is known as Bertrand's Paradox (see Li and Vitanyi, op. cit.), and
illustrates what can happen when you attempt to infer the distribution
of the generator from probabilistic tests of the output.
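The inconsistency can be checked with a little arithmetic (the bounds
RL = 0.5 and RU = 2.0 below are made-up values for illustration):

```python
# Toy Bertrand-paradox check: a symmetric assumption about the ratio
# (bad/good) contradicts the same assumption about (good/bad).
RL, RU = 0.5, 2.0                 # illustrative bounds on (bad/good)

mid_direct = (RL + RU) / 2        # assumed median of (bad/good)
mid_inverse = (1/RU + 1/RL) / 2   # assumed median of (good/bad)

# Mapping the inverse-ratio midpoint back to a (bad/good) value does
# not recover the direct midpoint, so the two "uniform" assumptions
# describe different distributions of the same quantity.
print(mid_direct, 1 / mid_inverse)  # -> 1.25 0.8
```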
Probability only tells you about an event *after it happens*. It
cannot tell you about the generation process. For a TRNG to be
provably secure (not just probabilistically secure) you must do an
analysis on the generator itself. If you try to infer the
characteristics of the generator from statistical tests of the output,
you can get contradictory results depending on how the measurements
are conducted.
One of the things that is wrong with statistical testing of the output
is that it assumes that there are so-called "bad" strings. But that is
not possible, since for a TRNG to be provably secure it must be
capable of generating *ALL* possible strings of a given finite length
equiprobably.
If *ALL* strings must be "good" for a TRNG to be provably secure,
then none of them can be "bad". DUH! Tests which purport to uncover
"bad" strings in the output are nonsense tests. Statistical tests
for randomness ("good") and non-randomness ("bad") of the output
strings do not pertain to crypto-grade random number generation for
the (provably secure) OTP cryptosystem.
If you allow someone to sell you a TRNG based on statistical testing
of the output, you just bought a Snake Oil Generator.
Bob Knauer
(*) A numerical example is given in Li and Vitanyi, page 316 (2nd Ed.)
"Sometimes it is said that man cannot be trusted with the government
of himself. Can he, then, be trusted with the government of others?"
--Thomas Jefferson
------------------------------
From: [EMAIL PROTECTED] (R. Knauer)
Crossposted-To: sci.skeptic,sci.philosophy.meta
Subject: Re: *** Where Does The Randomness Come From ?!? ***
Date: Tue, 02 Feb 1999 13:19:25 GMT
Reply-To: [EMAIL PROTECTED]
On Tue, 02 Feb 1999 03:51:25 GMT, "Tom Norback" <[EMAIL PROTECTED]>
wrote:
>> > When Bohr was asked for the complementary property of "truth" he
>> > thought about it for a while and then said "clarity".
>>Cool. Do you have a reference for that quote? That's a Bohr-ism I
>>didn't know before and like a lot.
>It's from a footnote in Steven Wienberg's "Dreams of a Final Theory". I
>don't own the book or I'd give you the page number. (If I recall, one of
>the chapters made a remarkably good case for reductionism).
According to Li and Vitanyi in their book on Kolmogorov Complexity,
there are several inference schemes that have been proposed over the
course of history, from Epicurus to William of Occam to Thomas Bayes
to A.N. Kolmogorov. None of them passes rigorous mathematical
inspection except the inferences drawn from Kolmogorov's algorithmic
complexity, to which Solomonoff and Chaitin have also contributed
substantially. Even Occam's Razor can lead to incorrect results.
According to this theory, Bohr's statement is valid only if it implies
algorithmic complexity theory. K-complexity, K(x), is measured as the
length of the shortest program which can produce x (and then halt) on
a universal computing machine, without any inputs.
If x is simple, then it has a regularity which can be exploited in
fabricating a short algorithm to produce it. Otherwise if x is
irregular, that is it is complex, then the shortest program will have
to contain x in its entirety. If N is the length of x:
Regular : K(x) << N
Complex : K(x) ~ N
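Compression gives a crude, computable stand-in for this distinction (a
Python sketch; zlib yields only an upper bound on K(x), not the true
complexity, and the seeded PRNG merely stands in for an irregular string):

```python
import random
import zlib

# A highly regular string compresses to far fewer than N bytes, while a
# pseudorandom string of the same length barely compresses at all --
# roughly the Regular (K(x) << N) vs Complex (K(x) ~ N) cases above.
N = 1000
regular = b"a" * N
random.seed(0)                        # fixed seed for repeatability
complex_ = bytes(random.getrandbits(8) for _ in range(N))

print(len(zlib.compress(regular)))    # tiny: the "shortest program" is short
print(len(zlib.compress(complex_)))   # near N: no regularity to exploit
```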
If that is what Bohr meant, then he anticipated Turing and all that
followed in algorithmic complexity theory.
Bob Knauer
"Sometimes it is said that man cannot be trusted with the government
of himself. Can he, then, be trusted with the government of others?"
--Thomas Jefferson
------------------------------
From: [EMAIL PROTECTED] (Herman Rubin)
Subject: Re: RNG Product Feature Poll
Date: 2 Feb 1999 10:27:58 -0500
In article <[EMAIL PROTECTED]>,
R. Knauer <[EMAIL PROTECTED]> wrote:
>On 31 Jan 1999 16:50:38 -0500, [EMAIL PROTECTED] (Herman
>Rubin) wrote:
>>>Since any decay occurs at random, the interval between any two of them
>>>is random. Intervening events, like other decays, are totally
>>>irrelevant to the randomness of the two events that do get measured.
>>The intervals are random, but may or may not be independent.
>How can the time at which one radioactive decay is detected be
>dependent the time another decay is detected? Notice that I said
>"detected" not decayed.
The time at which one radioactive decay occurs can affect whether
another is even detected. This makes the detection times AT BEST
a renewal process, and it can be even worse.
>If you are alluding to undetected events, due to the detector being
>unable to detect two closely spaced decays, those undetected events do
>not cause other events, the ones that are detected, to be dependent on
>one another. The events that are detected are still completely random
>in time.
If by completely random in time, you mean a Poisson process, the answer
is no. If one uses an analog-threshold detector, it can get quite
complicated, and not even be a renewal process. That is, the times
between detections may not even be independent.
>>Also, how are you using the device to generate the random bits?
>>If you are waiting a length of time and using parity of the number
>>counted, the bias of the bit produced is not zero, and is improved
>>by having even a substantial amount of dead time.
>The method we have been discussing is written up on the HotBits site:
>http://www.fourmilab.ch/hotbits/
Briefly stated, one measures the time between two detected events and
compares it to the time between the next two detected events. Half the
time the comparison produces a 1 bit if the first interval is longer
than the second and a 0 bit if it is shorter, and the other half the
time it produces the reverse. This eliminates any bias in the
measurements.
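[A simulation of that debiasing scheme, for the curious. This is a
sketch modeled on the description above, not the actual HotBits code:
the exponential inter-arrival times and the alternating comparison
sense are assumptions.]

```python
import random

random.seed(42)  # fixed seed so the run is repeatable

def interval_comparison_bits(n_bits, rate=1.0):
    """Compare successive pairs of inter-detection intervals, flipping
    the sense of the comparison on alternate pairs to cancel bias."""
    bits = []
    flip = 0
    while len(bits) < n_bits:
        t1 = random.expovariate(rate)   # first inter-detection interval
        t2 = random.expovariate(rate)   # second inter-detection interval
        if t1 == t2:                    # discard ties (vanishingly rare)
            continue
        bit = 1 if t1 > t2 else 0
        bits.append(bit ^ flip)         # alternate the comparison sense
        flip ^= 1
    return bits

bits = interval_comparison_bits(10000)
print(sum(bits) / len(bits))  # close to 0.5: the comparison debiases
```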
I have just looked at this; the numbers should be reasonably good, but
not that good. One problem is that of measuring the time; this can
be seen to produce biases and failure of independence on the order of
10^{-4}. This should not be a problem for cryptography, but would be
for statistical use.
>>I have a report on this.
>Please share it with us.
>>However, I suggest that several runs be taken, and the results
>>XOR'ed. One cannot reasonably test randomness to more than one
>>part in 10^3, if that.
>What do you mean by "test randomness"? If you are talking about
>statistical tests on the output strings then those tests do not
>characterize the TRNG adequately. The best they might do is give you a
>level of confidence, which is not good enough for the provable
>security of the OTP cryptosystem.
There is always Murphy's Law to worry about. One might barely be
able to test at a high enough level for cryptographic use, but
this would not be good enough for statistical use, where 10^10
to 10^15 bits are commonly used now.
--
This address is for information only. I do not claim that these views
are those of the Statistics Department or of Purdue University.
Herman Rubin, Dept. of Statistics, Purdue Univ., West Lafayette IN47907-1399
[EMAIL PROTECTED] Phone: (765)494-6054 FAX: (765)494-0558
------------------------------
From: [EMAIL PROTECTED] ()
Subject: Re: Japanese Purple encryption
Date: 2 Feb 99 15:20:56 GMT
Wipunxit Wiechcheu [or Dave Williams?] ([EMAIL PROTECTED]) wrote:
: A German Enigma Machine recently went up for auction in the Washington,
: DC area. The starting bid was at $20,000 - no one nibbled to the best
: of my knowledge.
: More recently, I saw a NEMA rotor machine go at auction for $780.
Not a bad price. I wonder if the old OMI machines, being sold
commercially, are available at an even lower price?
: M-209s also show up and sometimes get big $$$$ for some reason.
Interesting. I've been told that Hagelin machines, such as the C-36, were
one of the almost "affordable" types of historical cipher machines.
John Savard
------------------------------
** FOR YOUR REFERENCE **
The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:
Internet: [EMAIL PROTECTED]
You can send mail to the entire list (and sci.crypt) via:
Internet: [EMAIL PROTECTED]
End of Cryptography-Digest Digest
******************************