Cryptography-Digest Digest #121, Volume #9 Mon, 22 Feb 99 13:13:02 EST
Contents:
Re: Testing Algorithms (Patrick Juola)
Crypt for FTP Protocol ([EMAIL PROTECTED])
rfc1319 algorithm != reference impl. ???? (Adam Worrall)
Re: Testing Algorithms ("Trevor Jackson, III")
Re: Randomness of coin flips (R. Knauer)
Re: Looking for proof on whitening (Jerry Leichter)
Re: Testing Algorithms (Patrick Juola)
Re: Randomness of coin flips (R. Knauer)
IDEA test vectors? (Dominik Werder)
Re: True Randomness (R. Knauer)
----------------------------------------------------------------------------
From: [EMAIL PROTECTED] (Patrick Juola)
Subject: Re: Testing Algorithms
Date: 22 Feb 1999 09:35:49 -0500
In article <[EMAIL PROTECTED]>,
Trevor Jackson, III <[EMAIL PROTECTED]> wrote:
>Coen Visser wrote:
>
>> fungus <[EMAIL PROTECTED]> writes:
>> >Vegeta-the original Super Saia-jin wrote:
>>
>> [...]
>>
>> >> As far as security goes, key length helps, but it won't make it secure, I
>> >> think that DES proves that, it's a joke, and a bad one at that.
>> >>
>> >
>> >So you think a 256 bit key will eventually be brute forced?
>>
>> Why not? Who can look 50 years into the future or 100 years or 200 years?
>
>You may be underestimating the steepness of exponentials. 2^256 is a fearsome
>number. Rather than envisioning a machine (or machines) doing trial
>decryptions, try envisioning simply counting up to 2^256.
>
>At foreseeable PC speeds of around 1GHz (2^30) you'd need 2^226 machine-seconds.
>If you envision machines a billion times faster you need 2^196 machine seconds.
On the other hand, Moore's Law suggests that machine speed doubles
about every eighteen months. This means that a machine a billion
times faster (2^30) will take only about 45 years to develop.
A machine 2^200 times faster than current machines -- which *can*
brute-force 56-bit keys -- will take about 300 years to develop,
according to Moore's Law.
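If you want to check the arithmetic, here's a minimal sketch,
assuming the classic 18-month doubling period:
======={ moore.c (sketch) }=======
/* Back-of-the-envelope check of the figures above; a sketch, not a
   forecast.  Assumes speed doubles every 18 months (1.5 years). */
#include <stdio.h>

int main(void)
{
    const double years_per_doubling = 1.5;
    printf("2^30 speedup:  %.0f years\n",  30.0 * years_per_doubling);
    printf("2^200 speedup: %.0f years\n", 200.0 * years_per_doubling);
    return 0;   /* prints 45 and 300 */
}
======={ End }=======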
Given that people have, for the past 20 years, routinely been claiming
that "Moore's Law cannot hold much longer due to fundamental physical
limitations," it's starting to look like betting against Moore's Law
is not a safe bet.
-kitten
------------------------------
From: [EMAIL PROTECTED]
Subject: Crypt for FTP Protocol
Date: Mon, 22 Feb 1999 14:13:01 GMT
My users have repeatedly asked if I could add crypt/cipher support to
my FTPD, so I've finally got around to looking at it ;) Basically I'm
trying to find an easy-to-use crypt library that would do something
usable in this case.
I've looked at "ssh" of course, and if you only wanted to encrypt the
control session it would be fairly straightforward to tunnel it, but
since the data sessions need to be encrypted as well it gets more
complicated. (Far beyond your average user, I suspect?) Although ssh2
comes with what looks like a library of sorts, ssh2's licence makes it
pretty much useless in that sense. The "sftp" that comes with it
breaks the FTP protocol too much to be really usable (it still does
the basic ssh challenge before it drops down into FTP-style
protocols).
The problem is, I'm rather a newbie to ciphers. I got the impression
that "ssh" uses RSA public-key encryption to send across a session
key, which is then used with IDEA for the remainder of the session.
Would this be sufficient? Are there better options? PGP? (or is that
just RSA again, showing my ignorance :) )
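As I understand it, the shape of the thing would be roughly the
skeleton below. All the function names are made up for illustration -
they are not from any real library:
======={ hybrid.c (skeleton) }=======
/* Hypothetical skeleton of an ssh-style hybrid scheme: a public-key
   step moves a random session key, then a fast symmetric cipher
   protects the bulk data.  Every name here is a placeholder. */

extern void make_session_key(unsigned char *key, int len);
extern int  rsa_wrap_key(const void *server_pubkey,
                         const unsigned char *key, int len,
                         unsigned char *out);
extern void send_bytes(int fd, const unsigned char *buf, long len);

void start_encrypted_session(int ctrl_fd, const void *server_pubkey)
{
    unsigned char key[16], wrapped[256];
    int n;

    make_session_key(key, sizeof key);          /* random 128-bit key */
    n = rsa_wrap_key(server_pubkey, key, sizeof key, wrapped);
    send_bytes(ctrl_fd, wrapped, n);            /* public-key step    */

    /* ...from here on, every control- and data-connection buffer
       goes through the symmetric cipher with the session key... */
}
======={ End }=======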
The idea is to add something "not too complex" - to make merging with
existing FTPD and FTP client code fairly straightforward.
Well, basically, any information to aid me and further my knowledge will be
appreciated :)
Lund
------------------------------
From: Adam Worrall <[EMAIL PROTECTED]>
Subject: rfc1319 algorithm != reference impl. ????
Date: Mon, 22 Feb 1999 13:51:41 GMT
In section 3.2 of rfc1319 ('The MD2 Message Digest Algorithm'),
some pseudo-code for generating an initial checksum is given.
(Note that this checksum is appended to the data before the
main message digest algorithm is brought to bear.)
In the middle of the loop (shown below), the pseudo-code in the
RFC shows a byte of the checksum being assigned a value from a
lookup table.
In the reference implementation, supplied in the RFC itself, the
relevant code *XORs* the current value of the byte with the
value from the lookup table.
I've looked around and seen no mention of this, but it seems
pretty clear that the algorithm does not agree with the reference
implementation. Other implementations all follow the reference,
by the way ;)
The relevant excerpts are shown below; line 137 of the rfc
corresponds to line 186 of the C source code.
======={ rfc1319:~131=140 }=======
131 /* Process each 16-word block. */
132 For i = 0 to N/16-1 do
133
134 /* Checksum block i. */
135 For j = 0 to 15 do
136 Set c to M[i*16+j].
137 Set C[j] to S[c xor L].
138 Set L to C[j].
139 end /* of loop on j */
140 end /* of loop on i */
======={ End }=======
======={ md2c.c:182=187 }=======
182 /* Update checksum.
183 */
184 t = checksum[15];
185 for (i = 0; i < 16; i++)
186 t = checksum[i] ^= PI_SUBST[block[i] ^ t];
187
======={ End }=======
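To convince myself I wasn't misreading it, I knocked up the quick
demonstration below. The table S is just an arbitrary permutation
standing in for the real MD2 pi table - that's enough to show the
two rules diverge from the second block onward:
======={ md2diff.c (demo) }=======
/* Run the two checksum update rules side by side on the same input.
   Rule (a) is the RFC pseudo-code (plain assignment); rule (b) is
   the reference implementation (XOR-assignment).  They agree on the
   first block, then diverge, because (b) folds the old checksum
   byte back in. */
#include <stdio.h>

static unsigned char S[256];    /* stand-in for the pi table */

int main(void)
{
    unsigned char msg[32], a[16] = {0}, b[16] = {0}, La = 0, Lb = 0;
    int i, j;

    for (i = 0; i < 256; i++)   /* any fixed permutation of 0..255 */
        S[i] = (unsigned char)(171 * i + 53);
    for (i = 0; i < 32; i++)
        msg[i] = (unsigned char)i;

    for (i = 0; i < 2; i++)     /* two 16-byte blocks */
        for (j = 0; j < 16; j++) {
            unsigned char c = msg[i * 16 + j];
            a[j]  = S[c ^ La];  La = a[j];  /* (a) pseudo-code */
            b[j] ^= S[c ^ Lb];  Lb = b[j];  /* (b) reference   */
        }

    printf("pseudo-code C[0] = %02x, reference C[0] = %02x\n",
           a[0], b[0]);
    return 0;
}
======={ End }=======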
Am I mad ?
Is this a "Yawn, heard that eighteen years ago." ?
Do I win a small prize for spotting inconsistencies ?
- Adam Worrall
------------------------------
Date: Mon, 22 Feb 1999 09:21:13 -0500
From: "Trevor Jackson, III" <[EMAIL PROTECTED]>
Subject: Re: Testing Algorithms
Coen Visser wrote:
> fungus <[EMAIL PROTECTED]> writes:
> >Vegeta-the original Super Saia-jin wrote:
>
> [...]
>
> >> As far as security goes, key length helps, but it won't make it secure, I
> >> think that DES proves that, it's a joke, and a bad one at that.
> >>
> >
> >So you think a 256 bit key will eventually be brute forced?
>
> Why not? Who can look 50 years into the future or 100 years or 200 years?
You may be underestimating the steepness of exponentials. 2^256 is a fearsome
number. Rather than envisioning a machine (or machines) doing trial
decryptions, try envisioning simply counting up to 2^256.
At foreseeable PC speeds of around 1GHz (2^30) you'd need 2^226 machine-seconds.
If you envision machines a billion times faster you need 2^196 machine seconds.
If you envision finding the solution in a century you need 2^164 machines
(because pi seconds is a nanocentury). If each machine has 256 atoms of
silicon forming the counter you need 2^13 AMU (256 atoms * 32 AMU/atom
[IIRC]) per machine, or 2^177 AMU for the whole mess. That is 1e53 AMU.
The entire planet earth is only about 2e51 AMU, so you'd need to take apart 50
planets down to the atomic level to build your machines just to COUNT to 2^256
in a century. I don't think we'll be able to do this in the next 100 years.
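If you want to check the arithmetic yourself, do it in log2 space so
nothing overflows. A quick sketch (physical constants approximate;
carrying the fractions through gives a figure nearer 70 earths than
50, but the conclusion is unchanged):
======={ count256.c (sketch) }=======
/* Sanity-check the arithmetic above in log2 space.  All quantities
   are held as base-2 exponents, so nothing overflows a double. */
#include <stdio.h>
#include <math.h>

int main(void)
{
    double machine_s = 256.0 - 30.0;         /* 2^226 machine-seconds */
    double century_s = log2(100 * 365.25 * 86400.0);  /* ~2^31.6 sec  */
    double machines  = machine_s - 30.0 - century_s;  /* 1e9x, 100 yr */
    double total_amu = machines + log2(256.0 * 32.0); /* 2^13 AMU ea. */
    double earth_amu = log2(5.97e24 / 1.66e-27);      /* earth in AMU */

    printf("machines needed: 2^%.1f\n", machines);
    printf("silicon needed:  2^%.1f AMU (earth = 2^%.1f AMU)\n",
           total_amu, earth_amu);
    printf("earths needed:   about %.0f\n", exp2(total_amu - earth_amu));
    return 0;
}
======={ End }=======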
> I think a better question is, *can* we make ciphers that have a 256 bit key
> that can't be broken using significantly less than those 256 bits. I think
> we can not (yet).
>
> Regards,
>
> Coen Visser
------------------------------
From: [EMAIL PROTECTED] (R. Knauer)
Subject: Re: Randomness of coin flips
Date: Mon, 22 Feb 1999 15:38:06 GMT
Reply-To: [EMAIL PROTECTED]
On 21 Feb 1999 16:47:29 -0500, [EMAIL PROTECTED] (Herman Rubin)
wrote:
>It is possible to define anything in mathematics.
"Anything"?
How about defining something that is contradictory, like a square
circle?
>This does not mean it exists in the real world.
That is still an open metaphysical issue, one which depends crucially
on your definition of the "real world" - but is outside the range of
suitable topics here in sci.crypt.
>A Bernoulli process need not have p=1/2.
I never said it did.
>But does that mean that there is such an attainable animal?
Sure there is.
The bits of Chaitin's Halting Probability, Omega, arise from a uniform
Bernoulli process (p=1/2), namely from inspecting his exponential
Diophantine equation, or equivalently, inspecting the halting behavior
of universal Turing machines.
The bits of Omega are as random as fair coin tosses, and the sequence
is Borel normal. That makes it a uniform Bernoulli process.
>One can come close; this is
>what I have been pointing out. And as there are things that can
>go wrong, it is necessary to test for them.
I have never disagreed with that statement. What I have disagreed with
is the methods of testing.
I do not believe that you can adequately characterize the performance
of a RNG solely by testing its output. True random sequences have NO
computable properties; therefore there is nothing present in a random
sequence from which to decide its randomness or non-randomness.
The best you can hope for is some kind of early warning signal from
such tests, which, on a purely probabilistic basis, gives you a reason
to check the TRNG with internal diagnostics. For example, it is well
accepted that if a TRNG starts outputting all 1s or all 0s, something
*could* well be wrong. But that is as far as one can go using
probabilistic measures on the output alone.
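To be concrete about what I mean by an early warning signal, a check
like the one below is about the most that output testing can
legitimately deliver. This is a sketch; failing it means "go run the
internal diagnostics", not "the generator is non-random":
======={ warn.c (sketch) }=======
#include <stddef.h>
#include <math.h>

/* Flag output whose ones-count drifts more than ~3 standard
   deviations from what a fair (p = 1/2) source would produce.
   Under p = 1/2 the ones-count of n bits has mean n/2 and
   standard deviation sqrt(n)/2. */
int early_warning(const unsigned char *buf, size_t nbytes)
{
    size_t i, ones = 0, n = 8 * nbytes;
    int b;

    for (i = 0; i < nbytes; i++)
        for (b = 0; b < 8; b++)
            ones += (buf[i] >> b) & 1;

    return fabs((double)ones - n / 2.0) > 3.0 * sqrt((double)n) / 2.0;
}
======={ End }=======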
>It is possible to get crypto-grade randomness without the existence
>of a TRNG being possible.
We agree that a "perfect" TRNG is impossible, therefore a real world
TRNG will have some small amount of imperfection - but it is of a
level such that it does not permit any significant amount of
information leakage in the ciphers it produces.
>It is hard to get full channel capacity,
>and a slight deviation from true randomness is acceptable.
Yes, indeed. That's how the real world operates.
But my point is that one cannot use that as an excuse to implement
flawed tests.
>That one has a low probability that the interceptor will be able
>to figure out the message does not mean it should not be tested.
>While all 0's, or even 90% 0's, is unlikely, a human has an
>excellent chance of reading the message if such is used.
I ask you to prove that statement.
And what do you mean by "unlikely"?
The inventors of the Enigma certainly believed that it was "unlikely"
that anyone was ever going to break it, especially when they added the
4th rotor. Yet Churchill was reading field dispatches before the field
marshals had them decrypted for themselves.
This book by Li & Vitanyi has been a real eye-opener in terms of
accepted beliefs about probability theory, especially randomness. And
I look forward to studying the book by Feller that they reference
constantly.
>A wheel does not have to be perfect to work. Neither does a TRNG,
I agree - and have never given anything in disagreement when
addressing the practical side of generating random numbers.
>or even a PRNG,
Here is where I begin to question things.
One of the recent threads on sci.crypt was about using computation,
like digit generation of transcendentals, as a substitute for random
numbers, and whether the result would be crypto-grade secure. We never
came to a consensus, so perhaps you can explain how you can provably
decide, by theory or experiment, that the output of certain PRNGs is
crypto-grade secure.
I have repeatedly suggested using random numbers so produced to
construct a series of test ciphers from real messages, and try to
break them using the best techniques available. I imagine in that way
one could calculate a level of confidence, not based solely on
probability theory, but based on inferential models such as Bayesian
induction (IOW, pac-measures, where "pac" means probably approximately
correct - which is not the same as the law of large numbers, or other
stochastic measures.)
>have to be perfect to accomplish the job. Testing
>TRNGs is likely to be easier than PRNGs, because physical processes
>are not going to have long-range dependencies. However, physical
>processes are not perfect.
I challenge that statement in the case of radioactive decay. Prove to
me that radioactive decay is not a perfectly random process.
There are absolutely no interference terms between the individual
radioisotopes, so there is no possibility of interdependence. Each
nucleus behaves completely independently of the others.
The fact that one must be extremely careful when designing detection
circuits has nothing to do with the fact that radioactive decay is
completely random.
>It still requires testing to make sure that the devices work.
Of course it does - any experimentalist knows that.
But that is no excuse to implement snake oil tests.
>If the
>message does not have the same type of peculiarities as the RNG,
>the combination will be hard to crack, and a slightly increased
>proportion of 1's will not compromise the security.
It depends on how those excess ones are distributed.
>But one can come up with a fair
>number of them adequate for SOME testing.
Define what you mean by "SOME testing".
>And XORing the results
>of different generated threads at different times should be much
>safer than the output of one physical device producing one series.
Please prove that statement.
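For the record, the construction you are describing is mechanically
just the sketch below. The code shows what the combination *is*, not
that it is safer:
======={ combine.c (sketch) }=======
#include <stddef.h>

/* XOR-combine two independently generated streams.  If either input
   is uniformly distributed and independent of the other, so is the
   output; the disputed question is what the combination buys you
   beyond that. */
void xor_combine(unsigned char *out, const unsigned char *a,
                 const unsigned char *b, size_t n)
{
    size_t i;
    for (i = 0; i < n; i++)
        out[i] = a[i] ^ b[i];
}
======={ End }=======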
Bob Knauer
"If the allegations by Monica Lewinsky are true, if the allegations by Paula
Jones are true, if the allegations by Kathleen Willey are true, and if the
allegations by Juanita Broaddrick are true, then there is a particularly
important resident of Pennsylvania Avenue who needs a lot of professional help."
--Steve Dunleavy, New York Post, 2/20/99
------------------------------
From: Jerry Leichter <[EMAIL PROTECTED]>
Subject: Re: Looking for proof on whitening
Date: Mon, 22 Feb 1999 10:21:31 -0500
| I believe that it has been proven that applying
| whitening [1] to a block cipher makes brute force
| searches more difficult
Yes.
| by a factor of 2**N (where
| N is the block size).
No; not even close.
| Does anyone have a pointer to the paper where
| this is proven?
http://wwwcsif.cs.ucdavis.edu/~rogaway/papers/desx.ps
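For reference, the construction analyzed there is DESX, schematically
as below (des_encrypt() is a placeholder, not a library call). If I
recall the result correctly, the gain against key search depends on
how many plaintext/ciphertext pairs the attacker holds, which is why
it is nowhere near a flat factor of 2**N.
======={ desx.c (schematic) }=======
#include <stddef.h>

void des_encrypt(unsigned char block[8],
                 const unsigned char key[8]);   /* stub */

/* DESX: C = k2 XOR DES_k(P XOR k1) */
void desx_encrypt(unsigned char block[8], const unsigned char key[8],
                  const unsigned char k1[8], const unsigned char k2[8])
{
    int i;
    for (i = 0; i < 8; i++) block[i] ^= k1[i];  /* pre-whitening  */
    des_encrypt(block, key);
    for (i = 0; i < 8; i++) block[i] ^= k2[i];  /* post-whitening */
}
======={ End }=======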
-- Jerry
------------------------------
From: [EMAIL PROTECTED] (Patrick Juola)
Subject: Re: Testing Algorithms
Date: 22 Feb 1999 10:42:56 -0500
In article <7arsl1$51j$[EMAIL PROTECTED]>, <[EMAIL PROTECTED]> wrote:
>What is it that drives people to make these wild claims and speculations
>without doing the arithmetic? Computers can not continue to get faster
>indefinitely.
The fact that doomsayers have been predicting physical limits to the
maximum speed of computers for 20 years now, and the successive violation
of these physical "limits" has become routine.
This whole discussion could be transplanted thirty years into the
past, with people claiming -- and dragging fairly high-powered physics
into the claims -- that computers simply couldn't go much faster than
the technology of 1969.
What is it about this particular cry of "Wolf" that makes it more
plausible now than in 1969?
-kitten
------------------------------
From: [EMAIL PROTECTED] (R. Knauer)
Subject: Re: Randomness of coin flips
Date: Mon, 22 Feb 1999 15:50:38 GMT
Reply-To: [EMAIL PROTECTED]
On 21 Feb 1999 20:06:16 -0500, [EMAIL PROTECTED] (Herman
Rubin) wrote:
>>> So I can make a statement such as "there is only one chance in 1,000
>>> that a uniform Bernoulli process would have produced such a skewed
>>> distribution" and therefore reject the hypothesis that it was
>>> produced by a u.B.p.
>>... with more than 99.9% confidence? To do this right, you should
>>also consider what your a priori estimate was for the process to be
>>uniform Bernoulli. (For example, if you *know with certainty* that
>>it really *is* a u.B.p., your a posteriori estimate will still be
>>certainty that it *is* a u.B.p.)
>My prior probability that it is a uniform Bernoulli process is ZERO.
>All that can be hoped for is that it is close to such a
>process. This is generally the situation; the observations
>almost never have the assumed properties, let alone the
>hypothesis being tested. The question is, can it reasonably be
>taken to be close enough?
Let me see if I understand what you just said, because at first glance
you seem to be saying exactly what I have been saying about
statistical tests on the output of a TRNG.
You are saying that the best that can be expected is that a process
which is purportedly a uniform Bernoulli process (p = 1/2) is very
close to being such a process (i.e., p ~= 1/2). I can appreciate that,
although I pointed out earlier that Chaitin's Omega is a perfect uBp.
Then you say that "observations almost never have the assumed
properties". I take that to mean that the output of a TRNG (i.e., the
"observations") almost never has the characteristics of ideal
stochastic processes (i.e., the "assumed properties").
Now we are getting somewhere - that is exactly what I have been saying
all along, and why I believe that statistical tests on the output of a
TRNG are worthless other than as diagnostic warnings.
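For concreteness, the "one chance in 1,000" figure from earlier in
this thread comes out of the usual normal approximation. A sketch of
the computation, not an endorsement of the methodology:
======={ pvalue.c (sketch) }=======
#include <stddef.h>
#include <math.h>

/* Two-sided tail probability of seeing `ones` ones in n bits if the
   source really were a uniform Bernoulli process (p = 1/2).  A
   result below 0.001 is the "one chance in 1,000" rejection level.
   E.g. 5300 ones in 10000 bits gives about 2e-9: reject p = 1/2. */
double bernoulli_pvalue(size_t ones, size_t n)
{
    double z = ((double)ones - n / 2.0) / (sqrt((double)n) / 2.0);
    return erfc(fabs(z) / sqrt(2.0));
}
======={ End }=======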
Finally you say that the question is whether it can be close enough,
and we both agree that for a TRNG the output can be sufficiently
random, although not perfect, to be crypto-grade. I am still looking
for a way to characterize that near-perfect result, and I have
suggested using the output to build test ciphers from real messages
and then trying to break them using inference techniques, from which
presumably one can calculate the pac-measure of unbreakability (as
opposed to the purely statistical measure obtained from probabilistic
considerations of the TRNG output only).
Have I interpreted your comments correctly?
Bob Knauer
"If the allegations by Monica Lewinsky are true, if the allegations by Paula
Jones are true, if the allegations by Kathleen Willey are true, and if the
allegations by Juanita Broaddrick are true, then there is a particularly
important resident of Pennsylvania Avenue who needs a lot of professional help."
--Steve Dunleavy, New York Post, 2/20/99
------------------------------
From: [EMAIL PROTECTED] (Dominik Werder)
Subject: IDEA test vectors?
Date: Mon, 22 Feb 1999 16:21:42 GMT
HI!
Where can I find test vectors for my IDEA implementation?
cya!
DoMiNik
------------------------------
From: [EMAIL PROTECTED] (R. Knauer)
Subject: Re: True Randomness
Date: Mon, 22 Feb 1999 17:19:04 GMT
Reply-To: [EMAIL PROTECTED]
On Mon, 22 Feb 1999 13:20:28 GMT, [EMAIL PROTECTED]
(Steven Runyeard) wrote:
>How on earth do you prove something is random?
By analyzing the process that creates the randomness.
>Surely things only
>appear random when we are unaware of the rules governing their
>behaviour.
That does not agree with the fundamental tenets of Quantum Mechanics,
unless you believe in hidden variables.
>Sure, you can say it's unpredictable but that doesn't mean
>it's random.
Define random as you are using it in that context.
I am using the definition for crypto-grade randomness which says that
a random number is one produced by a process that is capable of
generating all possible finite sequences equiprobably.
Equiprobable means independent and equidistributed. It has been
suggested that a uniform Bernoulli process meets that requirement,
although some people have objected to that for reasons I have
forgotten.
Regarding your comment on unpredictability, I would prefer using the
term "equi-predictable" in terms of using a TRNG to produce random
numbers. That is a stronger statement than can be made for a PRNG
which can only output computable sequences. A TRNG can output
uncomputable sequences as well.
Bob Knauer
"If the allegations by Monica Lewinsky are true, if the allegations by Paula
Jones are true, if the allegations by Kathleen Willey are true, and if the
allegations by Juanita Broaddrick are true, then there is a particularly
important resident of Pennsylvania Avenue who needs a lot of professional help."
--Steve Dunleavy, New York Post, 2/20/99
------------------------------
** FOR YOUR REFERENCE **
The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:
Internet: [EMAIL PROTECTED]
You can send mail to the entire list (and sci.crypt) via:
Internet: [EMAIL PROTECTED]
End of Cryptography-Digest Digest
******************************