Cryptography-Digest Digest #295, Volume #9       Sun, 28 Mar 99 16:13:03 EST

Contents:
  --- sci.crypt charter: read before you post (weekly notice) (D. J. Bernstein)
  True Randomness & The Law Of Large Numbers (R. Knauer)
  Re: On Moduli that are not quite kosher... (Ted Kaliszewski)
  Re: Encryption and the Linux password files (Lincoln Yeoh)
  Re: RSA key distribution (DJohn37050)
  Re: Live from the Second AES Conference (Terry Ritter)
  Re: Live from the Second AES Conference (Terry Ritter)
  Re: compare RSA and D-Hellman ("Sassa")
  Rohatgi Presentation at AES2 (William hugh Murray)
  Re: Live from the Second AES Conference (Richard Outerbridge)

----------------------------------------------------------------------------

From: [EMAIL PROTECTED] (D. J. Bernstein)
Crossposted-To: talk.politics.crypto
Subject: --- sci.crypt charter: read before you post (weekly notice)
Date: 28 Mar 1999 06:00:40 GMT

sci.crypt               Different methods of data en/decryption.
sci.crypt.research      Cryptography, cryptanalysis, and related issues.
talk.politics.crypto    The relation between cryptography and government.

The Cryptography FAQ is posted to sci.crypt and talk.politics.crypto
every three weeks. You should read it before posting to either group.

A common myth is that sci.crypt is USENET's catch-all crypto newsgroup.
It is not. It is reserved for discussion of the _science_ of cryptology,
including cryptography, cryptanalysis, and related topics such as 
one-way hash functions.

Use talk.politics.crypto for the _politics_ of cryptography, including
Clipper, Digital Telephony, NSA, RSADSI, the distribution of RC4, and
export controls.

What if you want to post an article which is neither pure science nor
pure politics? Go for talk.politics.crypto. Political discussions are
naturally free-ranging, and can easily include scientific articles. But
sci.crypt is much more limited: it has no room for politics.

It's appropriate to post (or at least cross-post) Clipper discussions to
alt.privacy.clipper, which should become talk.politics.crypto.clipper at
some point.

There are now several PGP newsgroups. Try comp.security.pgp.resources if
you want to find PGP, c.s.pgp.tech if you want to set it up and use it,
and c.s.pgp.discuss for other PGP-related questions.

Questions about microfilm and smuggling and other non-cryptographic
``spy stuff'' don't belong in sci.crypt. Try alt.security.

Other relevant newsgroups: misc.legal.computing, comp.org.eff.talk,
comp.org.cpsr.talk, alt.politics.org.nsa, comp.patents, sci.math,
comp.compression, comp.security.misc.

Here's the sci.crypt.research charter: ``The discussion of cryptography,
cryptanalysis, and related issues, in a more civilised environment than
is currently provided by sci.crypt.'' If you want to submit something to
the moderators, try [EMAIL PROTECTED]

---Dan

------------------------------

From: [EMAIL PROTECTED] (R. Knauer)
Subject: True Randomness & The Law Of Large Numbers
Date: Sun, 28 Mar 1999 16:35:59 GMT
Reply-To: [EMAIL PROTECTED]

As Professor William Feller of Princeton University states in his book
on probability theory (op. cit.), p.152ff:

[emphasis his]

+++++
*Warning*: It is usual to read into the law of large numbers things
which it definitely does not imply. If Peter and Paul toss a perfect
coin 10,000 times, it is customary to expect that Peter will be in the
lead roughly half the time. *This is not true*. In a large number of
*different* coin-tossing games it is reasonable to expect that at any
*fixed* moment heads will be in the lead in roughly half of all cases.
But it is quite likely that the player who ends at the winning side
has been in the lead for practically the whole duration of the game.
Thus, contrary to widespread belief, the time average for any
individual game has nothing to do with the ensemble average at any
given moment.
+++++

That last sentence gives the fundamental reason why I claim that
statistical testing is not valid for deciding either true randomness
or non-randomness of a TRNG process. Consider this example.

You build a TRNG and output an n-bit sequence. According to what is
being claimed, you cannot submit that sequence to statistical tests to
determine if the TRNG is either truly random or not truly random. The
best that you can hope for is that such statistical tests will give
you a diagnostic warning that the TRNG *might* be random or
non-random.

Here is what the law of large numbers is all about. It is not about
any one particular sequence from one particular TRNG - that is the
time average as the number of bits increases. The ensemble average, on
the other hand, which the law of large numbers addresses, is obtained
by outputting n-bit sequences from N separate but identical TRNGs.

Here is an experiment to illustrate that point: Imagine a piece of
square tubing with one side removed, which is much longer than the
dimension of the sides. IOW, you have a long narrow U-shaped trough.
Seal off the ends and put water in it. Now place a tiny drop of ink at
the center of the trough, and observe as the ink diffuses through the
water.

Each ink molecule undergoes a random walk caused by the many random
collisions with water molecules. After a large number of steps the ink
has spread out - so stop the experiment when the ink is just about to reach
the ends (you have to do that because otherwise the ink will "reflect"
off the ends and confuse the results prior to that event).

Now consider the sequence of steps for one given ink molecule to be the
same as the output sequence of one particular TRNG, and the collection
of all ink molecules as the ensemble of many TRNGs. Here are two
observations:

1) Most individual sequences (i.e., random walk paths) end up a
considerable distance from the origin. IOW, the spatial distribution
is relatively flat although it is really a Gaussian. IOW, most
individual paths are "abnormal" in terms of the mean behavior of the
ensemble.

2) There is a symmetry to the distribution about the mean, namely the
center where you placed the initial drop of ink. For each "abnormal"
path on the left, there is an identical "abnormal" path on the right,
which together give an ensemble average at the mean, i.e., the center.
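
To make this concrete, here is a minimal simulation sketch (Python, with
arbitrary illustrative parameters): many independent random walks stand in
for the ensemble of TRNGs, the ensemble average comes out near the starting
point, yet a typical individual walk ends up far from it.

    import random

    # Illustrative sketch of the ink-drop experiment (parameters are arbitrary):
    # each walk is one "TRNG sequence", the collection of walks is the ensemble.
    n_steps = 1_000    # steps per walk (one n-bit output sequence)
    n_walks = 10_000   # walks in the ensemble (N identical TRNGs)

    finals = []
    for _ in range(n_walks):
        position = 0
        for _ in range(n_steps):
            position += random.choice((-1, 1))   # one random collision/step
        finals.append(position)

    # The ensemble average sits near the origin (the initial drop of ink) ...
    print("ensemble average:", sum(finals) / n_walks)
    # ... while a typical individual walk ends on the order of sqrt(n_steps)
    # away from it, i.e. most individual paths look "abnormal".
    print("mean |final position|:", sum(abs(x) for x in finals) / n_walks)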

In terms of random number generation, statistical tests based on the
law of large numbers cannot tell you anything about any particular
sequence. All they can relate to is the ensemble average taken from
sequences of a large ensemble of identical TRNGs.

As a specific example, consider a TRNG that generates a run of 100
zeros. If you apply statistical tests to that run you might conclude
that the TRNG is not truly random, because there is only a 1/2^100
chance for that to happen. But such thinking is incorrect.

The correct thinking is as follows. If you were to generate N
sequences of 100 bits each using N identical TRNGs, the probability
that there is a run of 100 ones which balances a corresponding run of
100 zeros will tend to unity as N becomes large. That does not say
that there will be NO runs of 100 zeros (or 100 ones), it just says
that there will be an equal number of runs of each type as N gets
large.

That's the same as saying that the ink will spread out to the left as
much as it will spread out to the right. It does not say that the
ink will stay close to the origin for all time.

Since even for 100-bit runs the minimum ensemble is astronomically
large - of order 2^100 - and since it is literally impossible to apply
statistical tests to that many sequences, I conclude that statistical
testing for randomness or non-randomness of a TRNG is impossible, and
the best we can hope for is to use statistical tests as diagnostic
warnings.

I realize that the Strong Law Of Large Numbers puts quantitative
limits on the confidence one can assume for a given ensemble size. But
that does not preclude the existence of a large fraction of "abnormal"
sequences in that ensemble. All it tells you about is the
pseudorandomness of an ensemble, not the true randomness or
non-randomness.

Even for 10-bit sequences, where a run of 10 zeros is not all that
unlikely, ensemble averages to test for true randomness require on the
order of 1 million sequences of 10 bits. And 10 bits is not very many
bits for a typical keystream in crypto.
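
As a rough numeric sketch of that 10-bit case (illustrative only, not a
statistical test suite): drawing about a million 10-bit sequences, the counts
of the all-zeros and all-ones patterns come out roughly equal, even though
neither count is zero.

    import random

    # Sketch of the 10-bit ensemble: roughly a million 10-bit sequences.
    N = 1_000_000
    all_zeros = all_ones = 0
    for _ in range(N):
        word = random.getrandbits(10)
        if word == 0:
            all_zeros += 1
        elif word == 0b1111111111:
            all_ones += 1

    # Each extreme pattern is expected about N / 2**10 ~ 977 times; the two
    # counts balance, but runs of 10 zeros certainly do occur.
    print(all_zeros, all_ones)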

Using statistical tests to characterize either the true randomness or
non-true-randomness of a true random number generator is snake oil for
any reasonably sized keystream.

I welcome any civilized comments on this. Flames will be ignored.

Bob Knauer

"What right does Congress have to go around making laws just because
they deem it necessary?"
- Marion Barry, Mayor of Washington DC

------------------------------

From: Ted Kaliszewski <[EMAIL PROTECTED]>
Subject: Re: On Moduli that are not quite kosher...
Date: Sun, 28 Mar 99 12:04:31 -0500

Thank you for the clarification. What I failed to note in the original
posting is that the constraining condition on the generation of psp
moduli is of the form gcd(p-1, q-1) = 2^k * p1*p2*p3*...*pj, where the p's
are all odd primes. In the case when gcd(p-1, q-1) = 2^k, as with the
Fermat numbers, the modulus can still be identified as a psp but factoring
it is not easy. I can factor F6 quite easily with a multiplier of 1071, but
the next number is resisting my effort to factor it. Of course, I am
impatient and work with a modest processor (16 MHz!), and chances are that
someone with a more capable computer would zero in on the needed multiplier.
Incidentally, I did try to factor your modulus, but it took me a couple of
hours just to find out that it was not a psp for bases 2-29. Then I gave up.
That's life!
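
For concreteness, here is a minimal sketch (illustrative only, assuming the
"multiplier" above is the k in the classical form k*2^(n+2) + 1 that every
odd prime factor of F_n, n >= 2, must take); a short search over such
candidates turns up 1071 for F6:

    # Search for a factor of F6 = 2**64 + 1 among candidates k*2**(n+2) + 1.
    n = 6
    F = 2**(2**n) + 1
    step = 2**(n + 2)          # 256 for F6

    for k in range(1, 100_000):
        candidate = k * step + 1
        if F % candidate == 0:
            print("k =", k, "factor =", candidate, "cofactor =", F // candidate)
            break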

------------------------------

From: [EMAIL PROTECTED] (Lincoln Yeoh)
Subject: Re: Encryption and the Linux password files
Date: Sun, 28 Mar 1999 17:45:47 GMT
Reply-To: [EMAIL PROTECTED]

On Wed, 24 Mar 99 06:22:45 GMT, [EMAIL PROTECTED] (Chris Monico) wrote:

>Any box with a competent sys admin has a shadowed passfile, and not 
>simply the default /etc/password.

Maybe. IMO the standard passwd file thingy is only a problem when you have
users of different security privileges with access to the same system.

So I don't think it's really a problem if the box only has very few users
and all the users are already the administrators (e.g. a firewall).

Because if someone can get a shell on the system remotely, even with
telnetd and most other local access services disabled, I doubt a shadowed
passfile will help that much.

But on a multiuser shared system yeah, shadow passwords are a must.

Cheerio,

Link.
****************************
Reply to:     @Spam to
lyeoh at      @[EMAIL PROTECTED]
pop.jaring.my @ 
*******************************

------------------------------

From: [EMAIL PROTECTED] (DJohn37050)
Subject: Re: RSA key distribution
Date: 25 Mar 1999 14:01:06 GMT

As Bob said, the bankers chose to exclude the possibility of certain classes of
attacks, at a small cost, rather than allow a very small chance that they would
work.  This was a conservative choice that they should be allowed to make.
Don Johnson

------------------------------

From: [EMAIL PROTECTED] (Terry Ritter)
Subject: Re: Live from the Second AES Conference
Date: Sun, 28 Mar 1999 18:14:39 GMT


On Sun, 28 Mar 1999 11:21:37 GMT, in <[EMAIL PROTECTED]>,
in sci.crypt [EMAIL PROTECTED] (Bruce Schneier) wrote:

>[...]
>a 6805 used to be an expensive processor that
>powered your desktop computer, 

That was never true:  I was in the NMOS microprocessor design group at
Mot when the 6805 was designed.  

The 6805 was specifically intended to be the smallest (lowest cost)
6800-like processor that we wanted to build.  It was generally
targeted at the "cost-sensitive" automotive people who wanted an even
cheaper single-chip solution than the 6801.  Many different versions
included various forms of on-chip parallel I/O, timers, serial ports,
etc.  One smart-card design had a serial ALU.

The 6805 came out *after* both the 6809, which I helped design, and
the 68000, which started the 16-bit era for Mot.  Thus, the 6805 never
was "an expensive processor," and as far as I know, nobody was foolish
enough to use it as the heart of a desktop computer.  

The 6805 started out cheap.  

---
Terry Ritter   [EMAIL PROTECTED]   http://www.io.com/~ritter/
Crypto Glossary   http://www.io.com/~ritter/GLOSSARY.HTM


------------------------------

From: [EMAIL PROTECTED] (Terry Ritter)
Subject: Re: Live from the Second AES Conference
Date: Sun, 28 Mar 1999 18:14:46 GMT


On Sun, 28 Mar 1999 11:21:37 GMT, in <[EMAIL PROTECTED]>,
in sci.crypt [EMAIL PROTECTED] (Bruce Schneier) wrote:

>[...]
>The one sloppy part of this analysis was that he assumed that XORs are
>easier to balance and hence more secure than ADD.  This is, of course,
>nonsense...since ADD can be built out of XORs.

Sorry.  ADD (integer addition) cannot be built out of XOR's.  

ADD requires (e.g.) an AND function as a carry.  AND cannot be built
from XOR.  
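
A minimal sketch (Python, purely to make the point explicit) of addition
built the way hardware does it: XOR alone gives only the carry-free sum
bits, and it is the AND that generates the carries.

    def add_via_xor_and(a, b, width=32):
        """Unsigned addition using only XOR (sum bits) and AND (carry bits)."""
        mask = (1 << width) - 1
        while b:
            carry = (a & b) << 1   # AND produces the carry into the next position
            a = (a ^ b) & mask     # XOR produces the sum bits without the carry
            b = carry & mask
        return a

    assert add_via_xor_and(1234, 5678) == 1234 + 5678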

---
Terry Ritter   [EMAIL PROTECTED]   http://www.io.com/~ritter/
Crypto Glossary   http://www.io.com/~ritter/GLOSSARY.HTM


------------------------------

From: "Sassa" <[EMAIL PROTECTED]>
Subject: Re: compare RSA and D-Hellman
Date: Wed, 24 Mar 1999 21:26:49 +0200
Reply-To: [EMAIL PROTECTED]

hi

> It is still believed that the conjecture that D-H is a difficult to
> break as it is to do a discrete log is true. Therefore, the algorithm
> has no known weaknesses, but it is possible to pick a weak prime
> modulus.

_prime_ modulus? i thought i could pick _any_ modulus, even one not so
prime ;)

or should i pick a prime modulus for more security? or is it easy to break
D-H if the modulus is not prime?


> SKIP is one of many protocols that use the D-H key agreement protocol.

aha, i see.


> There are hundreds of public key algorithms, if you count all the
> variants. So, you could ask the same about any of them.

yes, i could ask question about any of them.

i thought one should devise a new encryption standard if existing
algorithms do not fit one's requirements.


> D-H is a key agreement algorithm and the basis of a lot of other
> algorithms. It did not provide a reversible encryption or signature
> capability.

why can't someone generate, say, a 56-bit D-H key and use the shared secret
for DES? that way one can exchange keys in public, still securely enough,
and then use those keys for DES, or for any other algorithm that demands
symmetric keys.
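
That is essentially how D-H gets used with a symmetric cipher. A toy sketch
(illustrative parameters only; the prime, generator, and key-derivation step
are placeholders, and a real system would use vetted values):

    import secrets

    # Toy Diffie-Hellman exchange; p is a Mersenne prime used only for illustration.
    p = 2**127 - 1
    g = 5                                   # small base, illustration only

    a = secrets.randbelow(p - 3) + 2        # Alice's secret exponent
    b = secrets.randbelow(p - 3) + 2        # Bob's secret exponent

    A = pow(g, a, p)                        # exchanged in public
    B = pow(g, b, p)                        # exchanged in public

    k_alice = pow(B, a, p)                  # both sides compute the same secret
    k_bob = pow(A, b, p)
    assert k_alice == k_bob

    # The shared secret would then be reduced (e.g. hashed) to a 56-bit DES key,
    # or to a key for any other symmetric algorithm.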


> That's [D-H] what prompted the development of RSA and ElGamal.

and so i thought too: D-H prompted RSA.


> RSA and D-H simply serve very different purposes.

not so different, from my point of view. the main purpose is to create an
algorithm for sharing keys publicly, although RSA also includes an encryption
(and, what is even more essential, decryption :) algorithm.


>>>...RSA, so i'd like to know, how
>>>fast it enciphers.

> In general, the time required is roughly proportional to the number
> of bits in the exponent plus the number of those bits that are 1's.
> Since most implementations use a small public exponent, the public
> operation is very fast, while the private operation is slow.
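
That cost model is visible in the usual square-and-multiply loop; a minimal
sketch (illustrative, not any particular library's routine):

    def modexp(base, exponent, modulus):
        """Right-to-left binary (square-and-multiply) modular exponentiation."""
        result = 1
        base %= modulus
        while exponent:
            if exponent & 1:                       # one multiply per 1 bit
                result = (result * base) % modulus
            base = (base * base) % modulus         # one squaring per exponent bit
            exponent >>= 1
        return result

    # Small public exponents (e.g. 65537 = 0b1_0000_0000_0000_0001, only two 1
    # bits) make the public operation cheap; the private exponent is full-size.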

ok, how long does it take, say, a P-100 to encipher, say, 100K? on average.


> D-H uses the same math function, modular exponentiation. However, the
> first half of the calculation is often done with a small base, which
> makes that calculation a little faster.

i generate 128 _Byte_ D-H keys within ~20 seconds on AMD 5x (about 100MHz).

are these keys secure, if i take the modulus n = 2^(128*8) - as you see, not
prime at all?

i do not use any prime. does my implementation have a way to be broken?



--
   Sassa

Apiary Inc.
  ______
@()(_)
/\\

[EMAIL PROTECTED]



------------------------------

From: William hugh Murray <[EMAIL PROTECTED]>
Subject: Rohatgi Presentation at AES2
Date: Sun, 28 Mar 1999 20:21:28 GMT

Bruce Schneier wrote:
> 
> On 23 Mar 99 09:32:56 GMT, [EMAIL PROTECTED] () wrote:
> >This reminds me of something I've noticed. Although Magenta had serious
> >problems, the cryptanalysis of it was based on a weakness in its key
> >schedule. The only other possible problem with it is that the S-box,
> >although nonlinear, seems a bit too simple.
> 
> There are a bunch of other problems with the security.  I believe
> one--the massive number of fixed points--is mentioned in our paper.
> Honestly, there are many other ways of attacking Magenta.  It's just
> not worth spending the time writing it up.
> 
> Magenta now replaces McGuffin as my favorite toy cipher to give
> budding cryptanalysts.
> 
> >:     IBM's Pankaj Rohatgi explained how he got all 128 bits of
> >:     a Twofish key after only 50 (that is 50 not 2^50) uses of a smart
> >:     card!
> >
> >I wonder how secure some of the other ciphers would be, if the kind of
> >optimizations Bruce suggested for fitting Twofish on a smart card were
> >applied to them. That is, if it were possible.
> 
> He said in his talk that every cipher is vulnerable.  We've done this
> sort of work, too, and we have found that you can't defend against
> these types of attack with the algorithm.  You can do some things with
> the implementation and some things with the hardware, but basically
> you need to defend in the protocol layer.

Principle: Any phenomenon of the implementation (e.g., time or power
used) that is a function of the key, including its intended use, leaks
information about the key.  
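
A toy illustration of that principle (a sketch with made-up function names,
not anyone's real code): a comparison whose running time depends on how much
of a secret matches leaks information about the secret; a constant-time
comparison does not.

    import hmac

    def leaky_equal(secret: bytes, guess: bytes) -> bool:
        # Returns early at the first mismatch, so the running time is a
        # function of how many leading bytes of the guess are correct.
        if len(secret) != len(guess):
            return False
        for s, g in zip(secret, guess):
            if s != g:
                return False
        return True

    def constant_time_equal(secret: bytes, guess: bytes) -> bool:
        # Timing is independent of where (or whether) the inputs differ.
        return hmac.compare_digest(secret, guess)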

I cannot speak for Dr. Rohatgi, but IBM understands this full well. 
They have always asserted that the algorithm is only a necessary, but
not sufficient, part of the encryption solution.  They have always
recognized the importance to security of both the protocol and the
implementation.  That is one reason why they offered a suite of
protocols, mostly ignored, with the DES.  It is one reason that they
only offer tamper resistant encryption modules.  It is one reason why
they invented the control vector.  It is one reason they developed the
Common Cryptographic Architecture. It is one reason why their cards will
not talk in the clear or encrypt an arbitrary value.  

Given that, I wonder why this presentation was given, or why, if it had to
be given, it was silent on this point.  I must confess to some
disappointment, not to say suspicion of the motive.  Perhaps it is only
an omission in an otherwise excellent report.

You and I both know that it is incredibly easy to make gross mistakes in
both protocols and implementations, that "encryption is harder than it
seems."  We also know that such mistakes are more vulnerable to
discovery and exploitation than those in algorithms.  However, good
practices in protocols and implementations are easier to codify, and
variances from those practices easier to detect and correct for.  

> 
> >If you compare all the ciphers on the same compiler ... and I remember at
> >a computer chess competition, someone suggested that it wouldn't be a bad
> >idea to require the entrants all to use C, so that it wouldn't be a
> >competition of assembly-language coding skills.
> 
> Then you're comparing compiler efficiency.  What we did in our
> comparison is assume best possible ASM.  We didn't implement
> everything, but we gave all algorithms the benefits of the doubt.
> 
> Bruce
> **********************************************************************
> Bruce Schneier, President, Counterpane Systems     Phone: 612-823-1098
> 101 E Minnehaha Parkway, Minneapolis, MN  55419      Fax: 612-823-1590
>            Free crypto newsletter.  See:  http://www.counterpane.com


------------------------------

From: [EMAIL PROTECTED] (Richard Outerbridge)
Subject: Re: Live from the Second AES Conference
Date: Sun, 28 Mar 1999 15:42:43 -0500

=====BEGIN PGP SIGNED MESSAGE=====

1999-03-28 15:40:59 EDT

In article <[EMAIL PROTECTED]>, [EMAIL PROTECTED]
(Bruce Schneier) wrote:

>Out of the 170 or so participants, 104 filled the survey out. 
>The first column is the number of people who left the space blank.
>Then, the number of people who said yes, this algorithm should
>go into the second round.  Then, the number of people who were
>unsure.  Then, the number of people who said no.  The next
>column contains the yes votes minus the no votes; this gives
>the algorithm a relative rank in the ordering.
>
>                blank   yes     ???     no      yes-no  rank    
>Rijndael          7     77      19       1     +76       1
>RC6               4     79      15       6     +73       2
>Twofish           9     64      28       3     +61       3
>Mars              5     58      35       6     +52       4
>Serpent           6     52      39       7     +45       5
>E2               11     27      53      13     +14       6
>CAST-256         12     16      58      18      -2       7
>SAFER+           13     20      47      24      -4       8
>DFC              12     22      43      27      -5       9
>Crypton          14     16      43      31     -15      10
>DEAL             10      1      22      71     -70      11
>HPC              12      1      13      78     -77      12
>Magenta           9      1      10      84     -83      13
>Frog             11      1       6      86     -85      14 tie
>LOKI97           10      1       7      86     -85      14 tie
>
>What is interesting is that the algorithms divide pretty neatly
>into five categories:

On the basis of NIST's straw poll I think there are only three
categories.

Must consider:
Rijndael, RC6

Should consider:
Twofish, Mars, Serpent, E2, CAST-256, Safer+, DFC, Crypton

Don't Consider:
All the rest.

This isn't much different from your average academic bell curve.

So what's interesting to me is the inverse relationship: roughly as many
of the respondents felt that only Rijndael and RC6 should be seriously
considered as felt that everything beyond Crypton should, on the yes-no
count, be seriously excluded.  Everything in between is wait-and-see, as
far as I can see.

outer

=====BEGIN PGP SIGNATURE=====
Version: PGP Personal Privacy 6.0.2

iQCVAwUBNv6UBNNcQg4O6q8hAQFicAP8CYA4SoMtZhjD76er2r4IB+4SPWIXsfKk
m3Yw+T88WQkNIE+HY80eyJdMNXnInVAZjetf6sU/GRkUbcYYvENfyJDdXRla9CoA
lmOoZA5Cdo3GkwOPaM34tbxoFO0yIOO8Owkl2dk9k2gHU34GsldrSCsd9MZgFTHd
WVRJ+Q28TqI=
=yfq/
=====END PGP SIGNATURE=====

-- 
"Just an eccentric soul with a curiosity for the bizarre."
PGPpubkey 1024/0EEAAF21 1994/07/23 <[EMAIL PROTECTED]>
Fingerprint = 6A89 D49F D3DA 12E4  040A 273B F383 0127

------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list (and sci.crypt) via:

    Internet: [EMAIL PROTECTED]

End of Cryptography-Digest Digest
******************************
