Cryptography-Digest Digest #298, Volume #9       Mon, 29 Mar 99 17:13:03 EST

Contents:
  Re: Live from the Second AES Conference (John Savard)
  Re: Live from the Second AES Conference (John Savard)
  Re: Live from the Second AES Conference (John Savard)
  Re: What did Gilbert Vernam look like? (John Savard)
  Re: GOOD PRIME GENERATOR (GPG) (Thomas W. Kellar)
  Re: How do I determine if an encryption algorithm is good enough? ("Harv")
  Re: Is initial permutation in DES necessary? (DJohn37050)
  Re: Random Walk (R. Knauer)
  Re: Live from the Second AES Conference ("Trevor Jackson, III")
  Re: How do I determine if an encryption algorithm is good enough? ("Trevor Jackson, III")
  Re: GOOD PRIME GENERATOR (GPG) ("Trevor Jackson, III")
  Re: Encryption and the Linux password files (Sundial Services)
  Re: Live from the Second AES Conference (David Wagner)
  Re: Tripple DES key length. (Mickey McInnis)
  Re: GOOD PRIME GENERATOR (GPG) (Thomas W. Kellar)
  Re: How do I determine if an encryption algorithm is good enough? (R. Knauer)
  Re: True Randomness & The Law Of Large Numbers (R. Knauer)

----------------------------------------------------------------------------

From: [EMAIL PROTECTED] (John Savard)
Subject: Re: Live from the Second AES Conference
Date: Mon, 29 Mar 1999 16:21:03 GMT

[EMAIL PROTECTED] (Bruce Schneier) wrote, in part:

>E2 is one
>of the ciphers I really like, and I will be disappointed if it does
>not make it into the second round.

I remember looking at E2, and I found the description understandable, and
the cipher (like LOKI-97) was quite DES-like.

But because the PDF document describing it had such large copyright notices
on every page, I was worried there would be an objection to my describing
it on my web page. (The other AES candidates I've left out were omitted
mostly because I found their descriptions hard to follow. MAGENTA I
eventually figured out, and described in a post, but I omitted it because
of its misfortune. Since you've told me its design is instructive, I may
reconsider.)

John Savard (teneerf is spelled backwards)
http://members.xoom.com/quadibloc/index.html

------------------------------

From: [EMAIL PROTECTED] (John Savard)
Subject: Re: Live from the Second AES Conference
Date: Mon, 29 Mar 1999 16:15:16 GMT

[EMAIL PROTECTED] (Bruce Schneier) wrote, in part:

>There are a bunch of other problems the the security.  I believe
>one--the massive number of fixed points--is mentioned in our paper.
>Honestly, there are many other ways of attacking Magenta.  It's just
>nor worth spending the time writing it up.

>Magenta now replaces McGuffin as my favorite toy cipher to give
>budding cryptanalysts.

Clearly, I'll have to take another look. The design certainly seemed
impressive.

quoting me:
>>If you compare all the ciphers on the same compiler ... and I remember at
>>a computer chess competition, someone suggested that it wouldn't be a bad
>>idea to require the entrants all to use C, so that it wouldn't be a
>>competition of assembly-language coding skills.

>Then you're comparing compiler efficiency.  What we did in our
>comparison is assume best possible ASM.  We didn't implement
>everything, but we gave all algorithms the benefits of the doubt.

If everybody's C code is compiled on the same compiler, one may be
comparing optimizations or something, but one isn't comparing compilers.

John Savard (teneerf is spelled backwards)
http://members.xoom.com/quadibloc/index.html

------------------------------

From: [EMAIL PROTECTED] (John Savard)
Subject: Re: Live from the Second AES Conference
Date: Mon, 29 Mar 1999 16:29:55 GMT

[EMAIL PROTECTED] (Terry Ritter) wrote, in part:

>On Sun, 28 Mar 1999 11:21:37 GMT, in <[EMAIL PROTECTED]>,
>in sci.crypt [EMAIL PROTECTED] (Bruce Schneier) wrote:

>>[...]
>>The one sloppy part of this analysis was that he assumed that XORs are
>>easier to balance and hence more secure than ADD.  This is, of course,
>>nonsense...since ADD can be built out of XORs.

>Sorry.  ADD (integer addition) cannot be built out of XOR's.  

>ADD requires (e.g.) an AND function as a carry.  AND cannot be built
>from XOR.  

I can't think of a way to build ADD from XOR, but if one also allows a
table lookup, one can build XOR from ADD without using AND, NOT, or any
other logic operation.
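
For concreteness, here is one such construction as a small C sketch. It is
only one possible arrangement (and it uses three lookups rather than one),
but at run time the combining step is nothing except ADD and table indexing:
a first table spreads the bits of each nibble into separate two-bit slots,
the ADD mixes them without any carry crossing a slot, and a second table
reads off the low bit of every slot, which is exactly the XOR.

#include <stdio.h>

static unsigned spread[16];    /* nibble -> its bits placed in separate 2-bit slots */
static unsigned compact[256];  /* sum of two spread nibbles -> low bit of each slot */

static void build_tables(void)
{
    /* Table construction uses only arithmetic (/2, %2, *), no logic ops. */
    for (unsigned x = 0; x < 16; x++) {
        unsigned s = 0, v = x, place = 1;
        for (int i = 0; i < 4; i++) {
            s += (v % 2) * place;   /* drop bit i into slot i */
            v /= 2;
            place *= 4;             /* next 2-bit slot */
        }
        spread[x] = s;
    }
    for (unsigned s = 0; s < 256; s++) {
        unsigned out = 0, v = s, place = 1;
        for (int i = 0; i < 4; i++) {
            out += (v % 2) * place; /* keep only the low bit of each slot */
            v /= 4;
            place *= 2;
        }
        compact[s] = out;
    }
}

/* XOR of two 4-bit values via one ADD and three table lookups. */
static unsigned xor4(unsigned a, unsigned b)
{
    return compact[spread[a] + spread[b]];
}

int main(void)
{
    build_tables();
    for (unsigned a = 0; a < 16; a++)
        for (unsigned b = 0; b < 16; b++)
            if (xor4(a, b) != (a ^ b)) {   /* the ^ here is only a check */
                printf("mismatch at %u,%u\n", a, b);
                return 1;
            }
    printf("xor4() agrees with a^b for all 4-bit inputs\n");
    return 0;
}

The same trick scales up by widening the tables, or by working a nibble at
a time.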

John Savard (teneerf is spelled backwards)
http://members.xoom.com/quadibloc/index.html

------------------------------

From: [EMAIL PROTECTED] (John Savard)
Subject: Re: What did Gilbert Vernam look like?
Date: Mon, 29 Mar 1999 16:33:03 GMT

monterey <[EMAIL PROTECTED]> wrote, in part:

>I am trying to find an image file of Gilbert Vernam and/or Joseph
>Mauborgne without any success. Has anyone ever run across a picture of
>Vernam? If so could you refer me to the book/website?

I'm going to have to look at my copy of Kahn's _The Codebreakers_: I
_thought_ there was a picture of Vernam in it.

John Savard (teneerf is spelled backwards)
http://members.xoom.com/quadibloc/index.html

------------------------------

From: [EMAIL PROTECTED] (Thomas W. Kellar)
Subject: Re: GOOD PRIME GENERATOR (GPG)
Date: 29 Mar 1999 17:37:29 GMT

[EMAIL PROTECTED] wrote:
>Issue 1) You claim that I discovered nothing.  Nothing is a strong word. 
>I'll grant you that I came up with nothing *mathematically* new, but I did
>come up with a new way of doing something old. You don't have to like it. 
>It doesn't even have to be better in any way.  It just has to be
>algorithmically *different*, which it is, unless you can point to someone's
>work performing the same steps.  If you can't produce such past work, and
>continue to claim I discovered nothing, you are then just as guilty of that
>which you claim cranks are guilty of.
>

There is a theorem that states if two proofs A and B prove
the same theorem T, (and presuming they are correct), then
proofs A and B are isomorphic.  I.e., identical.

Now if your algorithm produces the same output as another
algorithm then this is uninteresting.  While you might
give different while/gotos/ifthenelse/case/begin/end or
whatever you happen to use in your language, the algorithms
would be isomorphic.  What mathematicians are looking for
are New and Different things.  While, perhaps, your
algorithm might run faster, that is not material.

Thomas
--
Freelance System Programming.  http://www.fsp.com


------------------------------

From: "Harv" <[EMAIL PROTECTED]>
Subject: Re: How do I determine if an encryption algorithm is good enough?
Date: Mon, 29 Mar 1999 11:43:14 -0800

Ludvig Strigeus <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]...
> Hi!
>
> > <<How can I test if this algorithm is easily crackable?>>
> > General rule: if you don't have years of experience making
> > cryptographic systems, then it is definitely easily crackable. (Even
> > then, odds are not so good. :-D  )
>
> Maybe this one I've created is hard to crack, even though I have no
> cryptographic experience.
>
> You can't assume that it's bad, without examining it first.

Why not? Extraordinary claims require extraordinary evidence. As the
inventor of the algorithm, it is your responsibility to produce that
evidence. Your algorithm may well be strong however...

1. There are many trusted strong algorithms.
2. Many people post weak algorithms, claim they are strong, and then demand
that other people prove it so.

Why should I spend my valuable time proving you wrong? I can save myself
weeks by using another algorithm that has been well analyzed and is trusted.

Now, if you came up with an interesting algorithm, that would be different.
Say a good method for secure, anonymous, non-counterfeitable cash, or an
easy method for public keys and digital signatures.


Harv
[EMAIL PROTECTED]
Remove the Some to send email.




------------------------------

From: [EMAIL PROTECTED] (DJohn37050)
Subject: Re: Is initial permutation in DES necessary?
Date: 29 Mar 1999 13:35:42 GMT

The latest FIPS 46 describing DES says it can be done in software or hardware. 
Previous versions did say only hardware.
Don Johnson

------------------------------

From: [EMAIL PROTECTED] (R. Knauer)
Subject: Re: Random Walk
Date: Mon, 29 Mar 1999 20:04:45 GMT
Reply-To: [EMAIL PROTECTED]

On Mon, 29 Mar 1999 10:57:46 -0600, Jim Felling
<[EMAIL PROTECTED]> wrote:

>I would allow them to determine when my TRNG is malfunctioning --
>however, they would not be the sole piece of evidence considered, and
>would also not be regarded as being 100% authoritative.

I think you are saying that you would use statistical tests for
diagnostic purposes, not to make a determination of the proper
functioning of the TRNG.

OK, if that is the case, let's get down to specifics. I sell you a
TRNG that you plan to use to generate keystreams of length n bits. For
the sake of discussion lets say that n = 10^6. That is, you will
generate one million bits, and submit that sequence to statistical
tests before proceeding to build your one time pad.

Exactly what statistical tests do you plan to conduct on that
keystream that you believe will tell you that the TRNG is
malfunctioning?
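
To make the question concrete, here is the kind of thing I mean, as a
minimal C sketch: the simple monobit frequency test applied to an n = 10^6
bit sample (the rand() call below is only a stand-in for the generator
under test, emphatically not a TRNG). Passing a test like this tells you
next to nothing; the question is which battery of such tests you believe
would actually tell you the TRNG is malfunctioning.

#include <stdio.h>
#include <stdlib.h>
#include <math.h>      /* compile with -lm */

int main(void)
{
    const long n = 1000000;   /* keystream length in bits */
    long ones = 0;

    srand(1);                 /* placeholder bit source, NOT a TRNG */
    for (long i = 0; i < n; i++)
        ones += rand() & 1;

    /* Under the fair-coin hypothesis, ones ~ Normal(n/2, n/4), so the
       z statistic should usually fall within about +-3. */
    double z = (ones - n / 2.0) / sqrt(n / 4.0);
    printf("ones = %ld of %ld, z = %.2f %s\n", ones, n, z,
           fabs(z) > 3.0 ? "(gross bias)" : "(no gross bias detected)");
    return 0;
}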

Bob Knauer

"What right does Congress have to go around making laws just because
they deem it necessary?"
- Marion Barry, Mayor of Washington DC

------------------------------

Date: Mon, 29 Mar 1999 13:33:48 -0500
From: "Trevor Jackson, III" <[EMAIL PROTECTED]>
Subject: Re: Live from the Second AES Conference

LSI-11 uber alles! (esp. unibus junk)

Your attack would work only if it were unnecessary.  If the same key is
used for both the inner and outer encryption it may be weaker than
either cipher alone.  But given a message encrypted with the inner
cipher, one must use the original key for the outer encipherment in
order to get a weaker ciphertext.

But if you have the key used for the inner cipher, you don't need to
weaken the cipher text in order to get the plain text.  You'd just
decrypt the message with the inner cipher.


Matthias Bruestle wrote:
> 
> Mahlzeit
> 
> Bruce Schneier ([EMAIL PROTECTED]) wrote:
> > I agree with this, and I suspect that most cryptographers would.
> > While it cannot be proven that a cascade of A B is no more secure than
> > A, and could possibly be weaker than A, in reality that just won't
> > happen.  Your analysis is sound.
> 
> If A/B is weaker than A alone, doesn't that mean, that an attacker can
> weaken something which is encrypted by A by encrypting it again with B?
> Then A/B would again be at least as secure as A.
> 
> Mahlzeit
> 
> endergone Zwiebeltuete
> 
> --
> PGP: SIG:C379A331 ENC:F47FA83D      I LOVE MY PDP-11/34A, M70 and MicroVAXII!
> --
> Take my mind
> All the way
> The darkside calls
> I shan't resist



------------------------------

Date: Mon, 29 Mar 1999 13:45:03 -0500
From: "Trevor Jackson, III" <[EMAIL PROTECTED]>
Subject: Re: How do I determine if an encryption algorithm is good enough?

Patrick Juola wrote:
> 
> In article <[EMAIL PROTECTED]>,
> R. Knauer <[EMAIL PROTECTED]> wrote:
> >On Fri, 26 Mar 1999 15:44:37 -0800, Andrew Carol <[EMAIL PROTECTED]>
> >wrote:
> >
> >>The odds greatly favor that people with experience in a field will
> >>typically do better than those who have none.
> >
> >I agree with you in principle, but disagree with you in practice.
> >
> >Science has progressed only on the basis of disagreement with those
> >who claim experience.
> 
> Re-read your Kuhn.  The vast majority of scientific development
> occurs in periods of "normal science" by skilled people refining
> the existing paradigm.  And for every successful new paradigm
> proposed, there are literally thousands that don't stand up because
> they can't account for the facts.
> 
> The gadflies lose.  Every so often, one gets lucky and is remembered.

Ben Franklin's observation is germane here: Reasonable men adapt themselves
to their circumstances.  Unreasonable men adapt their circumstances to
themselves.  Thus all progress is due to unreasonable men.

I agree that the vast majority of scientific *activity* takes place as
refinements and incremental improvements of existing concepts.  But the
vast majority of *advancement* takes place in leaps.  For example,
Maxwell's equations unifying a vast collection of observations cannot
really be characterized as a refinement.

We can probably agree that these leaps occur most often in areas where
even the current paradigm is unable to account for the facts.  In that
situation everyone becomes a gadfly because a new paradigm is required as
a basis for any advancement.

As for being remembered, "when it's time to railroad, people will
railroad".  For example, Leibniz and Newton both developed calculus
based on infinitesimals because "it was time to
integrate/differentiate".  Being remembered requires luck.  Advancing
science requires hard work.



------------------------------

Date: Mon, 29 Mar 1999 13:52:24 -0500
From: "Trevor Jackson, III" <[EMAIL PROTECTED]>
Subject: Re: GOOD PRIME GENERATOR (GPG)

Thomas W. Kellar wrote:
> 
> [EMAIL PROTECTED] wrote:
> >Issue 1) You claim that I discovered nothing.  Nothing is a strong word.
> >I'll grant you that I came up with nothing *mathematically* new, but I did
> >come up with a new way of doing something old. You don't have to like it.
> >It doesn't even have to be better in any way.  It just has to be
> >algorithmically *different*, which it is, unless you can point to someone's
> >work performing the same steps.  If you can't produce such past work, and
> >continue to claim I discovered nothing, you are then just as guilty of that
> >which you claim cranks are guilty of.
> >
> 
> There is a theorem that states if two proofs A and B prove
> the same theorem T, (and presuming they are correct), then
> proofs A and B are isomorphic.  I.e., identical.
> 
> Now if your algorithm produces the same output as another
> algorithm then this is uninteresting.  While you might
> give different while/gotos/ifthenelse/case/begin/end or
> whatever you happen to use in your language, the algorithms
> would be isomorphic.  What mathematicians are looking for
> are New and Different things.  While, perhaps, your
> algorithm might run faster, that is not material.

You write as if there were only one type of mathematician, that being the
theoretical kind.  There is another kind, called *applied*
mathematicians.  At the doctoral level they are generally not well
respected because their theses do not fill a ream of paper, more
likely 30-50 pages.

Speed is very definitely material to applied math.  And even in
theoretical math, performance is used to characterize algorithms, or
constraints upon them, via O() notation and complexity class (P, NP, etc.).

The only place speed is truly irrelevant is in mathematical proofs.


------------------------------

Date: Mon, 29 Mar 1999 13:52:29 -0700
From: Sundial Services <[EMAIL PROTECTED]>
Reply-To: [EMAIL PROTECTED]
Subject: Re: Encryption and the Linux password files

Lincoln Yeoh wrote:
> 
> On Wed, 24 Mar 99 06:22:45 GMT, [EMAIL PROTECTED] (Chris Monico) wrote:
> 
> >Any box with a competent sys admin has a shadowed passfile, and not
> >simply the default /etc/password.
> 
> Maybe. IMO the standard passwd file thingy is only a problem when you have
> users of different security privileges with access to the same system.
> 
> So I don't think it's really a problem if the box only has very few users
> and all the users are already the administrators (e.g. a firewall).
> 
> Coz if someone can get a shell on the system remotely even with the telnetd
> and most other local access services disabled I doubt a shadowed passfile
> will help that much.
> 
> But on a multiuser shared system yeah, shadow passwords are a must.
>

>
> 
> I'm rather startled to find that Linux still uses a login password
> scheme that is based on a publicly-available password file that is
> easily replaced or substituted.  Systems have been "taken over" in that
> way.

>The shadow password system alleviates this, but wasn't copyright
>compatible with the GNU GPL copyright (Shadow Suite has a BSD style
>copyright) - so it was left out of Linux distibutions.


The various piecings-together (?) of newsgroup replies eventually
bring me back to my original comment.  Linux does not use the "good"
security.  And since Linux is used all over the Net to run Apache
servers...  "bzzt, Houston, we have a problem."
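
To make the concern concrete, here is a minimal C sketch (mine, with a
made-up word list) of why a world-readable hash matters: anyone who can
read it can feed crypt() a dictionary offline, with no login prompts or
rate limits in the way. On Linux this may need <crypt.h> and linking with
-lcrypt.

#define _XOPEN_SOURCE 500
#include <stdio.h>
#include <string.h>
#include <unistd.h>   /* declares crypt(); some systems want <crypt.h> instead */

int main(void)
{
    /* Pretend this hash was read from a world-readable passwd file;
       here we just generate it ourselves for the demonstration. */
    char stored[64];
    char *h = crypt("secret", "ab");
    if (h == NULL) { perror("crypt"); return 1; }
    strncpy(stored, h, sizeof stored - 1);
    stored[sizeof stored - 1] = '\0';

    /* Offline dictionary attack against the stored hash. */
    const char *guesses[] = { "password", "letmein", "secret", NULL };
    for (int i = 0; guesses[i] != NULL; i++) {
        char *trial = crypt(guesses[i], stored);  /* salt is taken from stored */
        if (trial != NULL && strcmp(trial, stored) == 0) {
            printf("cracked: \"%s\"\n", guesses[i]);
            return 0;
        }
    }
    printf("no match in word list\n");
    return 0;
}

With a shadowed /etc/shadow the hashes are simply not readable by an
ordinary user, so there is nothing to feed into this loop.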

------------------------------

From: [EMAIL PROTECTED] (David Wagner)
Subject: Re: Live from the Second AES Conference
Date: 29 Mar 1999 13:02:09 -0800

In article <[EMAIL PROTECTED]>, Jim Gillogly  <[EMAIL PROTECTED]> wrote:
> I don't know whose standard answer this is, but it has to be
> totally bogus for immediately obvious reasons: if A.B is
> E2.RC6 and it's weaker than either of them, then you
> can mount an effective attack on E2 by encrypting it with
> RC6, and vice versa.  This can be done by the cryptanalyst
> in possession of A-ciphertext, and doesn't need to have been
> done by the sender.

You might want to check out
  ``Cascade Ciphers: The Importance of Being First''
    in J. Cryptology vol. 6 1993 pp. 55--61.
The authors show that A.B (encrypting with A first, then B second)
is at least as secure as A, but not necessarily as secure as B (under
some threat models, anyway -- in particular, under ciphertext-only
probable-plaintext attack, A.B might be weaker than B, if you are
especially unlucky).

Also, one requires the assumption that A and B are keyed independently,
which raises the question: what key schedule should we use for A.B?
Should we use A's key schedule or B's key schedule, or something totally
new (and of unknown security)?  The best answer isn't entirely clear.
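
For illustration only, here is a toy C sketch of the construction under
discussion: two made-up stage ciphers stand in for A and B (they are not
E2, RC6 or any real candidate), and the one thing the sketch insists on is
that each stage gets its own, separately drawn key rather than sharing one
key schedule.

#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

/* Toy stage A: add-and-rotate rounds on a 32-bit block. */
static uint32_t a_encrypt(uint32_t x, uint32_t ka)
{
    for (int i = 0; i < 8; i++) {
        x += ka;
        x = (x << 7) | (x >> 25);
    }
    return x;
}

/* Toy stage B: xor-and-multiply rounds (odd multiplier, so invertible mod 2^32). */
static uint32_t b_encrypt(uint32_t x, uint32_t kb)
{
    for (int i = 0; i < 8; i++)
        x = (x ^ kb) * 2654435761u;
    return x;
}

/* The cascade A.B: encrypt with A first, then with B, under independent keys. */
static uint32_t cascade_encrypt(uint32_t x, uint32_t ka, uint32_t kb)
{
    return b_encrypt(a_encrypt(x, ka), kb);
}

int main(void)
{
    uint32_t ka = (uint32_t)rand();   /* two independent key draws,          */
    uint32_t kb = (uint32_t)rand();   /* not one key schedule feeding both   */
    printf("E_B(E_A(0x01234567)) = %08lX\n",
           (unsigned long)cascade_encrypt(0x01234567u, ka, kb));
    return 0;
}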

------------------------------

From: [EMAIL PROTECTED] (Mickey McInnis)
Subject: Re: Tripple DES key length.
Date: 29 Mar 1999 20:23:51 GMT
Reply-To: [EMAIL PROTECTED]

In article <[EMAIL PROTECTED]>, [EMAIL PROTECTED] 
(DJohn37050) writes:
|> ...
|> Because of the meet in the middle attack, a 3 DES key use of TDES has an
|> effective strength of 112 bits to the known best attack.
|>...

I think it's a bit of an overstatement to refer to a meet-in-the-middle
attack as if it is practical.  The storage of 2**56 trial runs, and/or
searching through 2**56 trials to compare against makes
it highly impractical to actually do a meet-in-the-middle attack.

Or is there some implementation of a meet-in-the-middle attack
against even 2-DES that doesn't involve (2**56)+ storage elements?
That's 512 Million Gigabytes, if you store 64 bits per trial.

However, I do agree it's a good idea to act as if a meet-in-the-middle
attack is practical and make sure your key length is long enough, even
if a meet-in-the-middle attack is not actually practical.
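
For what it's worth, here is a toy C sketch of the meet-in-the-middle idea
scaled down to 16-bit keys, so the table is 2**16 entries instead of 2**56.
The stage cipher is a made-up invertible function, not DES; the structure
of the attack is the same: tabulate the middle values from the plaintext
side, then decrypt one stage from the ciphertext side and look for a
collision.

#include <stdio.h>
#include <string.h>

static unsigned rotl16(unsigned x, int r) { return ((x << r) | (x >> (16 - r))) & 0xFFFF; }
static unsigned rotr16(unsigned x, int r) { return ((x >> r) | (x << (16 - r))) & 0xFFFF; }

/* Toy 16-bit block cipher with a 16-bit key -- a stand-in, not DES. */
static unsigned toy_encrypt(unsigned x, unsigned k)
{
    for (int i = 0; i < 4; i++)
        x = (rotl16(x ^ k, 5) + k) & 0xFFFF;
    return x;
}
static unsigned toy_decrypt(unsigned x, unsigned k)
{
    for (int i = 0; i < 4; i++)
        x = rotr16((x - k) & 0xFFFF, 5) ^ k;
    return x;
}

static int head[65536], nxt[65536];   /* the 2**16-entry "meet" table */

int main(void)
{
    /* The secret keys, known here only so the demo can be checked. */
    const unsigned k1 = 0xBEEF, k2 = 0x1234;
    /* Two known plaintext/ciphertext pairs under double encryption. */
    const unsigned p[2] = { 0x0000, 0xA5A5 };
    unsigned c[2];
    for (int i = 0; i < 2; i++)
        c[i] = toy_encrypt(toy_encrypt(p[i], k1), k2);

    /* Forward half: tabulate E_a(p[0]) for every candidate inner key a. */
    memset(head, -1, sizeof head);
    for (unsigned a = 0; a < 65536; a++) {
        unsigned mid = toy_encrypt(p[0], a);
        nxt[a] = head[mid];
        head[mid] = (int)a;
    }

    /* Backward half: D_b(c[0]) must land on the same middle value. */
    for (unsigned b = 0; b < 65536; b++) {
        unsigned mid = toy_decrypt(c[0], b);
        for (int a = head[mid]; a != -1; a = nxt[a])
            if (toy_encrypt(toy_encrypt(p[1], (unsigned)a), b) == c[1])
                printf("candidate key pair: k1=%04X k2=%04X\n", (unsigned)a, b);
    }
    return 0;
}

At 16 bits a side this finishes in well under a second; at 56 bits a side
the same "meet" table is the 2**56-entry store discussed above.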


|> Don Johnson

------------------------------

From: [EMAIL PROTECTED] (Thomas W. Kellar)
Subject: Re: GOOD PRIME GENERATOR (GPG)
Date: 29 Mar 1999 21:20:26 GMT

I would not argue with what you say.  I remember
spending weeks optimizing the Sieve of Eratosthenes
in assembler and enjoying it.  As a programmer I 
find good and fast algorithms interesting. As a
person, I hate to see reader torres wasting
his time.
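
For reference, a plain, deliberately unoptimized C version of the sieve is
only a few lines; the weeks of work all go into making it fast.

#include <stdio.h>
#include <string.h>

#define LIMIT 100   /* print the primes up to here */

int main(void)
{
    static unsigned char composite[LIMIT + 1];   /* 0 = not yet struck out */

    memset(composite, 0, sizeof composite);
    for (long p = 2; p * p <= LIMIT; p++)
        if (!composite[p])
            for (long m = p * p; m <= LIMIT; m += p)   /* strike multiples of p */
                composite[m] = 1;

    for (long p = 2; p <= LIMIT; p++)
        if (!composite[p])
            printf("%ld ", p);
    printf("\n");
    return 0;
}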

Thomas

Trevor Jackson, III ([EMAIL PROTECTED]) wrote:
: Thomas W. Kellar wrote:
: > There is a theorem that states if two proofs A and B prove
: > the same theorem T, (and presuming they are correct), then
: > proofs A and B are isomorphic.  I.e., identical.
: > Now if your algorithm produces the same output as another
: > algorithm then this is uninteresting.  While you might
: > give different while/gotos/ifthenelse/case/begin/end or
: > whatever you happen to use in your language, the algorithms
: > would be isomorphic.  What mathematicians are looking for
: > are New and Different things.  While, perhaps, your
: > algorithm might run faster, that is not material.
: You write as if there were only one type of mathematician, that being the
: theoretical kind.  There is another kind, called *applied*
: mathematicians.  At the doctoral level they are generally not well
: respected because their theses do not fill a ream of paper, more
: likely 30-50 pages.
: Speed is very definitely material to applied math.  And even in
: theoretical math, performance is used to characterize algorithms, or
: constraints upon them, via O() notation and complexity class (P, NP, etc.).
: The only place speed is truly irrelevant is in mathematical proofs.

--
Freelance System Programming.  http://www.fsp.com


------------------------------

From: [EMAIL PROTECTED] (R. Knauer)
Subject: Re: How do I determine if an encryption algorithm is good enough?
Date: Mon, 29 Mar 1999 20:11:49 GMT
Reply-To: [EMAIL PROTECTED]

On Mon, 29 Mar 1999 11:43:14 -0800, "Harv" <[EMAIL PROTECTED]>
wrote:

>Extraordinary claims require extraordinary evidence.

Occam's Razor states otherwise. It is usually the less extraordinary
claims that require the more extraordinary evidence.

There are several experiments that students of modern physics can
perform at a university laboratory which are quite simple to set up
and do, yet the results of those experiments are very extraordinary.

Bob Knauer

"What right does Congress have to go around making laws just because
they deem it necessary?"
- Marion Barry, Mayor of Washington DC

------------------------------

From: [EMAIL PROTECTED] (R. Knauer)
Subject: Re: True Randomness & The Law Of Large Numbers
Date: Mon, 29 Mar 1999 20:30:09 GMT
Reply-To: [EMAIL PROTECTED]

On Mon, 29 Mar 1999 12:42:57 -0600, Jim Felling
<[EMAIL PROTECTED]> wrote:

>That statement of probability is correct in the overall (ensemble) view,
>but in the view of what actually happened it is in error.

Which is what I have been claiming all along, from the time I quoted
Li & Vitanyi to the most recent quotes from Feller.

Can you figure out why others disagree?

>However, If such a string came out of my TRNG I would immediately and
>thoroughly check my machine, because the odds of my machine having visited
>that corner of sample space by chance are substantially lower than the
>odds that somehow somewhere there is a glitch in my machine's physical
>componentry or my implementation of a TRNG.

As I pointed out in general terms earlier, "abnormal" strings actually
make up a very significant fraction of the ensemble. That's because
the Gaussian distribution becomes very flat for a large number of
steps.

Without doing actual calculations, consider the fraction of sequences
that are close to the "normal", namely close to the mean expectation
value. Let's say that you calculate the fraction of sequences that are
within +- 5% of the mean for a large number of steps. That fraction
is very small compared to the fraction of sequences outside +- 5%.

Therefore the TRNG has a significant likelihood of outputting a
sequence that deviates from the norm. For sequences just outside +- 5%
of the norm, the 1-bit bias is greater than 55 to 45, which is a
significant amount of bias. All other sequences farther outside +- 5%
of the norm have a 1-bit bias much greater than that.

Just remember the ink-water depiction of the random walk, and ask
yourself if the sequence you just generated is the path which resulted
in an ink particle that travelled far away from the origin. As you can
see from the diffusion picture, there are very many particles that
have travelled a significant distance from the origin, so it is quite
likely that the sequence you just generated is one of them. Such a
sequence has a high amount of 1-bit bias.
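
The picture can be made concrete with a short simulation; the C sketch
below uses an ordinary PRNG purely as a stand-in, so it illustrates the
walk itself and proves nothing about true randomness.

#include <stdio.h>
#include <stdlib.h>
#include <math.h>      /* compile with -lm */

int main(void)
{
    const long n = 1000000;   /* steps per walk: one 10^6-bit "keystream" */
    const int  walks = 100;   /* number of walks ("ink particles") simulated */
    double sum_abs = 0.0;

    srand(12345);             /* placeholder randomness, NOT a TRNG */
    for (int w = 0; w < walks; w++) {
        long pos = 0;
        for (long i = 0; i < n; i++)
            pos += (rand() & 1) ? 1 : -1;   /* each bit moves the particle +/-1 */
        sum_abs += labs(pos);
    }
    /* For a fair walk the typical endpoint distance grows like sqrt(n). */
    printf("average |displacement| after %ld steps: %.1f  (sqrt(n) = %.1f)\n",
           n, sum_abs / walks, sqrt((double)n));
    return 0;
}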

>> Even for 10 bit sequences, where runs of 10 zeros is not all that
>> unlikely, ensemble averages to test for true randomness requires of
>> order 1 million sequences of 10 bits. And 10 bits is not very many
>> bits for a typical keystream in crypto.

As I pointed out earlier, there is an error in the paragraph above. It
should be 20 bits, not 10 bits. That is, 2^20 is the same order as
10^6.

Bob Knauer

"What right does Congress have to go around making laws just because
they deem it necessary?"
- Marion Barry, Mayor of Washington DC

------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list (and sci.crypt) via:

    Internet: [EMAIL PROTECTED]

End of Cryptography-Digest Digest
******************************
