Cryptography-Digest Digest #264, Volume #9       Mon, 22 Mar 99 04:13:05 EST

Contents:
  Re: IDEA algorithm
  Re: Random Walk (Shawn Willden)
  Re: pRNG that is "predictable to the left"? (Scott Fluhrer)
  Re: Requesting Opinions, Clarifications and Comments on Schneier's Statements 
(sb5309)
  Re: Does PGP use salt when hashing passphrases? ([EMAIL PROTECTED])
  Re: Random Walk (karl malbrain)
  Re: Newbie,  what a stupid term... (Myself)
  Common Modulus Attack ("_DEFAULT")
  Re: Requesting Opinions, Clarifications and Comments on Schneier's Statements 
("Trevor Jackson, III")
  Re: Newbie,  what a stupid term... (Shawn Willden)
  Re: Random Walk ("Douglas A. Gwyn")
  Re: To break 40-bit DES (Dean Povey)
  Re: Random Walk ("Douglas A. Gwyn")
  Re: Common Modulus Attack (Boris Kazak)

----------------------------------------------------------------------------

From: [EMAIL PROTECTED] ()
Subject: Re: IDEA algorithm
Date: 22 Mar 99 01:07:22 GMT

[EMAIL PROTECTED] wrote:
: I found an IDEA source (may be old, at
: http://rschp2.anu.edu.au:8080/idea.html) and I have 2 questions.

: 1)  Are multiplications suppose to be mod 65537?

Yes. Because 65,537 is prime, multiplication mod 65,537 is invertible, so
long as zero is excluded and only the numbers 1 to 65,536 are used. Since
that gives exactly 65,536 distinct values, they map neatly onto 16-bit
quantities: IDEA uses 0 to stand for 65,536, which wouldn't fit in an
unsigned 16-bit integer.
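
For concreteness, here is a minimal Python sketch of that convention (my own
illustration, not code from the posted source):

```python
# IDEA-style multiplication: 16-bit values, 0 encodes 65,536,
# arithmetic done mod the prime 65,537.
def idea_mul(a: int, b: int) -> int:
    a = 0x10000 if a == 0 else a        # decode 0 -> 65536
    b = 0x10000 if b == 0 else b
    r = (a * b) % 0x10001               # multiply mod 65537
    return 0 if r == 0x10000 else r     # re-encode 65536 -> 0

# Because 65,537 is prime, every encoded value has a multiplicative
# inverse, so the operation is invertible on all 65,536 values.
```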

John Savard

------------------------------

Date: Sun, 21 Mar 1999 18:13:15 -0700
From: Shawn Willden <[EMAIL PROTECTED]>
Subject: Re: Random Walk

"R. Knauer" wrote:

> So, there you have it - not only do statistical tests fail in general
> for true random number generation processes, such as the uniform
> one-dimensional random walk, which is based on a uniform Bernoulli
> process, but they pass processes that are not even crypto-grade
> random, such as the digit expansion of pi.

Some questions:

(1) Why are you using a random walk as the model for a random number
generation process?  It's a rather poor one, as it imposes severe
constraints on the relationship between one output and the next.  Further,
it contains a concept of infinite memory which should definitely *not* be
a characteristic of a RNG.

(2) If you are not, in fact, looking at the output of the random walk
process, but rather trying to point to some attributes of the underlying
random variable by analyzing a random walk produced by that variable, why
are you doing this, and can you make your point more clearly?  I
understood the quotes from the book regarding the supposed
counter-intuitiveness of the behavior of random walk processes, but your
conclusion is not obvious to me.  I'm not saying you're wrong, just that
you haven't made your argument well.

(3) What does the fact that decimal expansions of transcendentals have
certain statistical properties have to do with whether or not those same
properties are to be expected in good RNGs?  For an RNG to be any good it
has to produce output that is unpredictable.  That is, an attacker who has
access to the first n bytes of the stream should be able to predict
nothing about the next m bytes of the stream.  A well-known sequence will
fail this test.  So will a sequence the attacker can monitor.  So will a
stream that exhibits any detectable biases.  Statistical tests partially
address the latter requirement.  They have nothing whatsoever to do with
the first two.

Shawn.



------------------------------

From: Scott Fluhrer <[EMAIL PROTECTED]>
Subject: Re: pRNG that is "predictable to the left"?
Date: Mon, 22 Mar 1999 03:56:05 GMT

In article <[EMAIL PROTECTED]>,
        [EMAIL PROTECTED] ("Steve Myers") wrote:

>Scott Fluhrer wrote in message <7d1t5p$[EMAIL PROTECTED]>...
>>As a counterexample, consider the generator:
>>
>>  X(n+1) = SHA( X(n) )
>>
>>
>>One-wayness by itself does not guarantee unpredictability.
>>However, if you tighten up the definition, making a few additional
>>conditions on top of one-wayness to prevent such pathological
>>conditions, you should get the result.  I could formalize these
>>additional conditions, but I won't take the time :-)
>
>    Sorry, I wasn't trying to be a smart aleck when I made the "I won't
>take the time" comment; I was just pressed for time in making my response.
>I don't see a way, however, of making non-significant changes to any of the
>standard definitions of collision freedom which would allow the
>above-mentioned counterexample not to work.  Perhaps you could
>elaborate.
Well, how about:

 1) Every image in the set has the same number of preimages.  This is to
    prevent an attacker from making statistical inferences about
    the previous output.  This also rules out your prepending a 0000,
    because the image consisting of all 1's does not have any preimage.

    However, if the number of images is the same as the number of preimages,
    this is obviously equivalent to: "SHA is a one-way permutation".

 2) There is no efficient method of generating partial information about
    the pre-image from the image (for obvious reasons).

>
>The requirement I would impose on my hash is that the distribution on the
>range of the SHA be near uniform, which is quite common, thus the
>"universal" in universal one-way hash function. Another question about your
>example
>X(n+1)=SHA(X(n)):

>Surely the size of X(n+1) is smaller than X(n),
Why do you say that?  All I specified was that SHA was a one-way function
(permutation), with a few extra requirements.  What is it in the definition
of a one-way function that says that the output must be smaller than the
input?  Even if the algorithm is the Secure Hash Algorithm, that takes
a 160-bit input to a 160-bit output.
>                                                so how do you extend the
>size of X(n+1) to use it in the next iteration? If it is not, then SHA can
>be the identity function, and satisfy all reasonable requirements of a
>collision free hash. This is clearly very predictable.
However, the identity function does not satisfy all reasonable requirements
of a one-way function, in that it is not that difficult to invert.

>
>I'm not convinced, there is nothing in the proof that requires the bits to
>be bits per se. The only requirement is that the alphabet size be of
>_constant_ size, and not dependent on N, which in your example is the case.
>The proof will still go through, and therefore the fact that you have to
>look at these blocks rather than bits does not matter.
Look closer at my example.  It turns out that my alphabet size really isn't
fixed.  Here's why:

As N increases, the sequence is supposed to be stronger; that is, it takes a
larger circuit to recognize.  However, for any one-way function with a fixed
M, there is a circuit of size O(2**M) that will recognize it.  So, as N
increases, M must also increase.  However, M is also the size of our
alphabet.


Or, if you still aren't convinced, explain how the original poster's example
of:

  X(N) = X(N-1) ** d mod p*q

can be predicted to the left without the knowledge of p or q (or otherwise
breaking RSA).  It certainly isn't pseudo-random.
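
A toy run of that example (my own deliberately tiny primes, nowhere near
secure): stepping right needs only d, while stepping left amounts to knowing
e, which without p and q means breaking RSA.

```python
p, q = 61, 53                        # toy primes, far too small for real use
n = p * q                            # 3233
e = 17
d = pow(e, -1, (p - 1) * (q - 1))    # computing d requires knowing p and q

x = 123
states = [x]
for _ in range(4):
    x = pow(x, d, n)                 # X(N) = X(N-1) ** d mod n
    states.append(x)

# Stepping left: X(N-1) = X(N) ** e mod n.  But an attacker who sees only
# the states must recover e, which is as hard as factoring n.
assert pow(states[2], e, n) == states[1]
```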

-- 
poncho


------------------------------

From: sb5309 <[EMAIL PROTECTED]>
Subject: Re: Requesting Opinions, Clarifications and Comments on Schneier's Statements
Date: Mon, 22 Mar 1999 11:37:58 +0800
Reply-To: [EMAIL PROTECTED]

Gentlemen, thank you for your replies.

sb5309 wrote:

> In Bruce Schneier's book "Applied Cryptography", 1996, he writes on page
> 16 :
>
> "A one-time pad might be suitable for a few short messages, but it will
> never work for a 1.54 Mbps communication channel".
>
> Questions :
>
> (a) What has a one-time pad got to do with the high-bandwidth
> communication channel, given that, in Mr. Schneier's opinion, it is
> unsuitable for long messages ?
>
> or
>
> (b) From what I can understand from Mr. Schneier's passages, where
> there is heavy traffic, a one-time pad is not practical because it is very
> expensive to generate and deliver message keys (and that is why a one-time
> pad is unsuitable for long messages). How does this point relate to
> "1.54 Mbps communication channel" ?
>
> Thanks.




------------------------------

From: [EMAIL PROTECTED]
Crossposted-To: comp.security.pgp.discuss
Subject: Re: Does PGP use salt when hashing passphrases?
Date: Mon, 22 Mar 1999 04:04:55 GMT

In article <[EMAIL PROTECTED]>,
  [EMAIL PROTECTED] () wrote:
> Maybe it would be for some purposes. But it isn't used by PGP, because it
> couldn't be used the way it is set up.
>
> In PGP, you want to be able to decode a key file or a conventionally
> encrypted file knowing ONLY the passphrase. If salt were used to produce
> the key from the passphrase, you would have to know the salt value too.
>
> For passwords, the computer keeps both the hash and the salt, so it can

But PGP could always store the salt value plaintext in the keyfile.
e.g.
encrypt key using encrypt/hash(passphrase XOR salt)
store result + salt.
decrypt using hash(passphrase XOR salt).

Are there any problems with this approach? The salt doesn't seem to be leaking
any information.

Would salts add any security at all? Would precomputing passphrases be worth
it?
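
A sketch of the scheme described above (my own illustration, not PGP's
actual format; concatenation is used here instead of XOR to sidestep
length mismatches between salt and passphrase):

```python
import hashlib
import os

def derive_kek(passphrase: bytes, salt: bytes) -> bytes:
    # Key-encryption key derived from passphrase plus salt; the salt
    # is stored in the clear next to the encrypted key material.
    return hashlib.sha1(salt + passphrase).digest()

salt = os.urandom(8)
kek = derive_kek(b"my passphrase", salt)
# ... encrypt the private key with kek, then store (ciphertext, salt) ...
# Decryption re-derives the same kek from the stored salt:
assert derive_kek(b"my passphrase", salt) == kek
```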

Cheerio,
Link.

============= Posted via Deja News, The Discussion Network ============
http://www.dejanews.com/       Search, Read, Discuss, or Start Your Own    

------------------------------

From: karl malbrain <[EMAIL PROTECTED]>
Subject: Re: Random Walk
Date: Mon, 22 Mar 1999 03:59:43 GMT

In article <[EMAIL PROTECTED]>,
  [EMAIL PROTECTED] wrote:
> On Sun, 21 Mar 1999 19:53:51 GMT, karl malbrain <[EMAIL PROTECTED]>
> wrote:

> If you have to get health care, you should become indigent so you will
> not be made more ill just to get all of your money. Also, the best
> health care is at the teaching hospitals and they are usually run by
> county govt and are free if you are indigent.
>
> The surest way to contract a major illness or get into a condition of
> poor health is to have lots of money from insurance. If you are not
> ill before you get treatment, you sure as hell will be afterwards.
>

No, sorry, but illness is NOT contracted from having any sum of money.  As
an example of the reverse: check the TUBERCULOSIS infection rate among those
WITHOUT enough money to afford adequate VENTILATION/HOUSING.  And I believe
that you can check CDC-ATLANTA for bacterial transmission rates to hospital
in-patients (assuming that was YOUR reference at all.)

> >If you'll take the same ANALOGY to the OTP distribution problem, you'll get
> >the same results.
>
> So what? What is your point - that we should not use the OTP as a
> platform for discussing crypto-grade randomness? Perhaps so, because
> there is another serious problem with the classical OTP, namely the
> very weak mixing of plaintext and keystream. No cipher should ever be
> as susceptible to a known/given plaintext attack as the OTP
> ciphersystem.

Again, I'm sorry, but this makes ZERO sense.  By definition the OTP protects
only ONE known/given plaintext/ciphertext PAIRWISE ORDER PLEASE.

> But what would you propose in its place, if the purpose is to serve as
> a platform for discussion of crypto-grade randomness? No matter which
> platform you chose, people are going to think there is some weakness
> with the platform - yet that is not the issue.

I do not propose anything in its place.  As I've stated previously in posts to
BRUCE_S, I'll settle for any method with NON-LINEAR keystream vis-a-vis the
plaintext.  If you could give me OTP, I'd take it.
>
> The main reason for which I use the OTP in discussions of randomness
> is that it is a proveably secure system. But that is only because of
> two things other than the randomness of the keypad: 1) the fact that
> the keypad is used only once; 2) the fact that the keypad is the same
> length as the plaintext.

I think you're MIS-CHARACTERIZING the concept of KEYPAD.  Please read
KEYSTREAM instead.

>
> Maybe the term "one time equal length truly random pad" (OTELTRP)
> would be more appropriate. You decide - I don't think it really makes
> any difference for those who have been following these discussions of
> crypto-grade randomness.

Again, the same mis-characterization of EQUAL-LENGTH as above.  OTP by
definition is of ANY ARBITRARY LENGTH AT ALL -- YOU CUT, I'LL CHOOSE.
>
> >And again to the CAESARS -- not everyone was GIVEN the GIFT of learning to
> >read in the first place.  TENSOR CALCULUS was as applicable then as now.
>
> Yes, but it was not known at that time.

That is total BULLSHIT.  The mechanics of MATHEMATICS have been widely known
since HUNTER-GATHERERS became FARMERS, 5K or so years ago.

I might suggest you catch up on missed titles: CONNECTIONS by James Burke.
Karl M

>
> Bob Knauer
>
> "If you think health care is expensive now, wait until it's FREE!"
>
>


------------------------------

From: [EMAIL PROTECTED] (Myself)
Subject: Re: Newbie,  what a stupid term...
Date: Mon, 22 Mar 1999 05:10:53 GMT

On 21 Mar 99 20:14:47 GMT, thermal and electromagnetic action caused
[EMAIL PROTECTED] ()'s brain to produce the following pseudorandom
thought:

>The next time the password is typed in, the computer can check that it is
>right, because in the password file the computer has kept both a copy of
>the one-way-scrambled password and salt combination and a copy of the salt
>value.

So if the salt value is saved, what's the point in computing it in the
first place? I've been trying to understand this for some time, and I
think I must be missing some basic point. Does it just throw in extra
bits so it has something longer to hash, because of a weakness in the
hash function?

>Hence, with the salt and the password, it just does the scrambling over
>again, to see that it gets the same scrambled result as before. It doesn't
>try to unscramble, because that can't be done.

How would this be different than if the salt had been left out
completely, and the password been hashed by itself?

Cluelessly,
-Myself-

------------------------------

From: "_DEFAULT" <[EMAIL PROTECTED]>
Subject: Common Modulus Attack
Date: Sun, 21 Mar 1999 23:57:17 -0600

Thanks for reading this post!

I was reading Bruce's book where he talked about Common Modulus Attack on
RSA:
c1 = m ** e1 mod n
c2 = m ** e2 mod n
e1 and e2 are relatively prime, the cryptanalyst knows n, e1, e2, c1, and
c2.
The extended Euclidean algorithm can find r and s:
r * e1 + s *e2 = 1
assume r is negative, extended Euclidean algorithm can be used to find c1
** -1, then
[(c1 ** -1) ** -r] * [c2 ** s] = m mod n

My question is: does c1 ** -1 always exist?  If the cryptanalyst knows c1,
why does he need to use the extended Euclidean algorithm to find c1 ** -1 ?
I tried to use the following numbers but I can't recover the message m:
m = 13, e1 = 2, e2 = 3, n = 5

then I had
c1 = 4, c2 = 2
r = -1, s = 1
but it doesn't lead to the right m, if you know what mistake I made or what
I mis-understand, please let me know, thanks!





------------------------------

Date: Sun, 21 Mar 1999 14:53:39 -0500
From: "Trevor Jackson, III" <[EMAIL PROTECTED]>
Subject: Re: Requesting Opinions, Clarifications and Comments on Schneier's Statements

sb5309 wrote:
> 
> In Bruce Schneier's book "Applied Cryptography", 1996, he writes on page
> 16 :
> 
> "A one-time pad might be suitable for a few short messages, but it will
> never work for a 1.54 Mbps communication channel".
> 
> Questions :
> 
> (a) What has a one-time pad got to do with the high-bandwidth
> communication channel, given that, in Mr. Schneier's opinion, it is
> unsuitable for long messages ?

Mr. Schneier is doing something extremely dangerous: predicting the
future.  The focus of his assertion is that it is silly to use 1.54 Mbps
of key material, which is what an OTP on a 1.54 Mbps comm channel would
require.

The arithmetic works out to about 17 gigabytes per day, which is pretty
silly.  Today.
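
The arithmetic, spelled out (using the 1.544 Mbps T1 rate):

```python
bits_per_second = 1.544e6                   # T1 line rate
seconds_per_day = 86400
key_bytes_per_day = bits_per_second * seconds_per_day / 8
print(round(key_bytes_per_day / 1e9, 1))    # ~16.7 GB of fresh pad per day
```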

> 
> or
> 
> (b) From what I can understand from Mr. Schneier's passages, where
> there is heavy traffic, a one-time pad is not practical because it is very
> expensive to generate and deliver message keys (and that is why a one-time
> pad is unsuitable for long messages). How does this point relate to
> "1.54 Mbps communication channel" ?

For every bit of data you need a bit of key (pad).  That means a fast
channel requires a fast key source, which is unrealistically expensive.
> 
> Thanks.



------------------------------

Date: Sun, 21 Mar 1999 23:57:38 -0700
From: Shawn Willden <[EMAIL PROTECTED]>
Subject: Re: Newbie,  what a stupid term...

Myself wrote:

> [Description of salting password hashes elided]
>
> How would this be different than if the salt had been left out
> completely, and the password been hashed by itself?

Suppose an attacker (Doctor Evil) looked at the password file and noticed:

dr-evil:0eX1wGqCWEsk:525:525:Doctor Evil:/home/dr-evil:/bin/bash
....
root:0eX1wGqCWEsk:0:0:root:/root:/bin/bash

"Ah hah!" he chortles wickedly, "The encrypted form of root's password is the
same as the encrypted form of my password.  Therefore, root's password must
also be 'shhhh!', or at least something that hashes to the same value!"  Dr.
Evil then logs on as root and proceeds to perpetrate his dastardly deeds.

More usefully, an attacker could take a dictionary, hash every word in it and
then just match the resulting hashes against the contents of a system's
password file.  He's almost guaranteed to get a few hits.

The salt makes all of this significantly harder by ensuring that there are a
large number of different encryptions of a given password.  If users then
choose reasonably good passwords, the attacker will find it very difficult to
guess them.
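
A small sketch of the difference salting makes (my own illustration; a
modern hash is used here, whereas classic Unix crypt() used a DES-based
scheme):

```python
import hashlib
import os

def store_password(password, salt=None):
    # Returns (salt, hash) as a password file would store them.
    salt = os.urandom(16) if salt is None else salt
    return salt, hashlib.sha256(salt + password.encode()).digest()

s1, h1 = store_password("shhhh!")
s2, h2 = store_password("shhhh!")
assert h1 != h2        # same password, different entries in the file,
                       # so equal hashes no longer reveal equal passwords
# Verification simply re-runs the hash with the stored salt:
assert store_password("shhhh!", s1)[1] == h1
```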

Shawn.


------------------------------

From: "Douglas A. Gwyn" <[EMAIL PROTECTED]>
Subject: Re: Random Walk
Date: Mon, 22 Mar 1999 07:19:29 GMT

"R. Knauer" wrote:
> Oh, come on - read Feller, fer chrissakes. It's all there.

I did read Feller, certainly before you did, probably before you
were born.  It was a classic work, widely used as a general
reference for basic applications of probability theory in Agency
articles decades ago.  And I take exception to your distortion
of the legitimate point he was making.

------------------------------

From: [EMAIL PROTECTED] (Dean Povey)
Subject: Re: To break 40-bit DES
Date: 22 Mar 1999 07:16:44 GMT

[EMAIL PROTECTED] writes:


>> There is no 40 bit DES - It is defined to have a 56 bit key.
>>
>> --


SSL has a cipher mode with 40-bit DES.  It is just 56-bit DES, but 16 bits of
the key material are exchanged in the clear.


--
Dean Povey,         | e-m: [EMAIL PROTECTED]     | Cryptozilla:
Research Scientist  | ph:  +61 7 3864 5120       |  www.cryptozilla.org/
Security Unit, DSTC | fax: +61 7 3864 1282       | Oscar - PKI Toolkit:
Brisbane, Australia | www: security.dstc.edu.au/ |  oscar.dstc.qut.edu.au/

------------------------------

From: "Douglas A. Gwyn" <[EMAIL PROTECTED]>
Subject: Re: Random Walk
Date: Mon, 22 Mar 1999 08:12:10 GMT

"Trevor Jackson, III" wrote:
> Could you please amplify your description of this or provide a
> reference?  It sounds extremely interesting.

I may write it up later, if I get my web site off the ground,
but there is a quick summary in a man page in my I-hat package
that you can find in various crypto archives around the net.
The classic book is "Information Theory and Statistics" by
Solomon Kullback, 416 pp. Dover paperback reprint July 1997,
ISBN 0486696847.  That basically develops the theoretical
foundation, and if you're not familiar with conjugate
distributions, exponential families, etc. you might prefer
"The Information in Contingency Tables" by D.V. Gokhale and
Solomon Kullback, 365 pp. Marcel Dekker 1978, ASIN 082477261X.

Quoting from the blurb on the latter:
Minimum discrimination information (MDI) estimation is an
approach to analyzing contingency tables and other types of
categorical or count data.  This approach leads naturally to
general log-linear models.  It enables one to find estimates
of cell-entries under various hypotheses or models and to
test the hypotheses or models.  This approach provides
indication of outlier cells and estimates of parameters and
their asymptotic covariance matrix.  It is a unified,
practical approach, and MDI estimates are, in general, best
asymptotically normal.  The analysis of information involves
a design matrix, a set of regression parameters, and
associated explanatory variables.

The formulation in terms of contingency tables isn't as
limiting as you might think, since most cryptanalytic
hypotheses can readily be set up in such terms.

The actual computation of an information statistic turns out
to be extremely simple, basically summing various n*log(n)
terms with appropriate signs depending on whether or not the
n represents a constraint among other variables summed with
opposing sign; the result is the information in favor of a
hypothesis over the alternative.  The d.f.s sum even more
directly, and are useful in relating twice the information
statistic to its asymptotic chi-square behavior.  (Note that
the sample size is taken care of automatically; as I said
before, the statistic definitely does not assume an infinite
sample.)  In ITS, Kullback proves that the statistic has
maximal discriminating power, which is one of its many
virtues.  Another is that info.stats. (and corresponding d.f.)
from independent sources are combined additively to produce
the info.stat. for the aggregate.  Thus, information from,
say, partial cryptanalysis can be combined with HUMINT,
weather patterns, a priori likelihood estimates, or other
dissimilar sources in order to correctly take into account
how all that information bears on the likelihood of some
hypothesis.  This is basically an evolution of methods
pioneered by the British during WWII and successively refined
by generations of cryptomathematicians.  MDI is finding much
application these days in fields such as pattern recognition.
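
To make the n*log(n) recipe concrete, here is my own sketch for a two-way
contingency table (equivalent to 2*sum(O*ln(O/E)) for the independence
hypothesis; cell counts and the grand total enter with one sign, the
marginal totals with the other):

```python
import math

def info_stat(table):
    # 2*I for independence in a contingency table, asymptotically
    # chi-square with (rows-1)*(cols-1) degrees of freedom.
    def nlogn(n):
        return n * math.log(n) if n > 0 else 0.0
    rows = [sum(r) for r in table]
    cols = [sum(c) for c in zip(*table)]
    N = sum(rows)
    cells = sum(nlogn(x) for r in table for x in r)
    return 2 * (cells - sum(map(nlogn, rows))
                      - sum(map(nlogn, cols)) + nlogn(N))

print(round(info_stat([[10, 20], [20, 10]]), 2))   # -> 6.8
```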

> In fact (to paraphrase your leading sentence above) statistical
> tests evaluate *order*.

I prefer to say that such a test is a test of how well a specific
simple model fits the observed data (or vice versa).  "Lack of
discernable order" is not a testable model, which explains why
actual tests are for the alternate hypothesis "a specific form
of detectable order".  We can relate the results to confidence
levels a la Fisher, but that isn't necessary from the likelihood-
ratio point of view (which is nicely compatible with information
statistics, as it happens).

------------------------------

From: Boris Kazak <[EMAIL PROTECTED]>
Subject: Re: Common Modulus Attack
Date: 22 Mar 1999 08:05:00 GMT
Reply-To: [EMAIL PROTECTED]

_DEFAULT wrote:
> 
> Thanks for reading this post!
> 
> I was reading Bruce's book where he talked about Common Modulus Attack on
> RSA:
> c1 = m ** e1 mod n
> c2 = m ** e2 mod n
> e1 and e2 are relatively prime, the cryptanalyst knows n, e1, e2, c1, and
> c2.
> The extended Euclidean algorithm can find r and s:
> r * e1 + s *e2 = 1
> assume r is negative, extended Euclidean algorithm can be used to find c1
> ** -1, then
> [(c1 ** -1) ** -r] * [c2 ** s] = m mod n
> 
> My question is: does c1 ** -1 always exist?  If the cryptanalyst knows c1,
> why does he need to use extended Euclidean algorithm to find c1 ** -1 ?  I
> tried to use the following number but I can't recover the message m:
> m = 13, e1 = 2, e2 = 3, n = 5
> 
> then I had
> c1 = 4, c2 = 2
> r = -1, s = 1
> but it doesn't lead to the right m, if you know what mistake I made or what
> I mis-understand, please let me know, thanks!
==========================
If you are going to remain within the bounds of modular
arithmetic (mod n), forget about negative numbers.
If a negative result emerges in a subtraction, add the modulus;
then -1 = n-1 .  Sic...
                Best wishes     BNK
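
For what it's worth, the attack does work once m < n; note the original
poster's m = 13 with n = 5 can only ever recover m mod n = 3.  A toy run
(my own hypothetical numbers, not from the thread):

```python
n = 3233                      # toy modulus 61 * 53; m must be below n
e1, e2 = 7, 11                # relatively prime exponents
m = 42
c1, c2 = pow(m, e1, n), pow(m, e2, n)

def egcd(a, b):
    # Extended Euclid: returns (g, x, y) with a*x + b*y == g.
    if b == 0:
        return a, 1, 0
    g, x, y = egcd(b, a % b)
    return g, y, x - (a // b) * y

g, r, s = egcd(e1, e2)        # r*e1 + s*e2 == 1
assert g == 1
# c1 ** -1 exists exactly when gcd(c1, n) == 1; if it didn't, c1 would
# share a factor with n, which would reveal the factorization anyway.
if r < 0:
    recovered = (pow(pow(c1, -1, n), -r, n) * pow(c2, s, n)) % n
else:
    recovered = (pow(c1, r, n) * pow(pow(c2, -1, n), -s, n)) % n
assert recovered == m         # m ** (r*e1 + s*e2) == m ** 1 == m
```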

------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list (and sci.crypt) via:

    Internet: [EMAIL PROTECTED]

End of Cryptography-Digest Digest
******************************
