Cryptography-Digest Digest #432, Volume #9       Wed, 21 Apr 99 01:13:03 EDT

Contents:
  Re: Question on confidence derived from cryptanalysis. (Jim Gillogly)
  Re: BEST ADAPTIVE HUFFMAN COMPRESSION FOR CRYPTO (SCOTT19U.ZIP_GUY)
  On Being Earnest (John Savard)
  Another TEA paper (GTEA and XTEA) ([EMAIL PROTECTED])
  Re: On Being Earnest ([EMAIL PROTECTED])
  Re: PGP=NSA (what is it about crypto?) ([EMAIL PROTECTED])
  Re: Thought question:  why do public ciphers use only simple ops like shift and XOR? 
(Boris Kazak)
  Re: Dynamic Data Dependant Key Schedule ([EMAIL PROTECTED])
  Re: Question on confidence derived from cryptanalysis. (Terry Ritter)

----------------------------------------------------------------------------

From: Jim Gillogly <[EMAIL PROTECTED]>
Subject: Re: Question on confidence derived from cryptanalysis.
Date: Tue, 20 Apr 1999 18:50:48 -0700
Reply-To: [EMAIL PROTECTED]

I think Terry Ritter's right to be concerned about having essentially
everyone move to a single new cipher.  If the danger isn't obvious,
consider the analogy with biological systems, where a species with no
genetic diversity can be wiped out by a single virus incident.  Or with
computer systems, where something like Melissa can cause widespread
annoyance and some down time because almost everyone is using the
same operating system and office software suite.

I also agree with him that a careful concatenation of ciphers can
help limit the damage.  I think we may disagree on what kinds of
ciphers would be most appropriate as choices for concatenation,
since I prefer ciphers that good analysts have tried and failed to
break over ciphers that nobody with cryptanalytical experience has
looked at.  I define a good analyst as someone who has broken a
difficult system.

However, I (like John Savard) think Terry overstates some issues.
Here's a case in point:

Terry Ritter wrote:
> Your position, dare I state it, is that you *can* estimate the
> capabilities of your Opponents.

In another article he wrote:
> But the only thing being "measured" here is the open, academic
> analysis.  The *real* experts do not play this way.  We thus have no
> way to understand their capabilities.  The strength value measured on
> academics cannot apply to the real problem.  

These and similar remarks suggest that a conservative threat analysis
must regard the opponents as god-like in their cryptanalytic
capabilities.  Of course in the limit this isn't useful, since we
would have no more confidence in a concatenation of ciphers against
an opponent like this than we would in a single cipher.

However, we do have ways to estimate the capabilities of the opponents.
I suggest that the government cryptologic agencies of the US and UK
represent conservative surrogates for the cryptological skills of the
strongest opponents, and we have seen several unclassified examples of
times when they were less than perfect.

In one case (factoring circa 1973) the UK agency was no further
advanced than the academic community, and academic advances in that
field were made shortly thereafter.  In two other cases the US agency
made embarrassingly public blunders (the Clipper checksum exploited
by Matt Blaze, and the SHA/SHA-1 botch that they noticed and fixed
themselves) that would not have been made if they were omniscient.
I don't include Biham's work suggesting SKIPJACK is not a conservative
design, since we don't know that it has to be -- for all we know, there
are wads of supporting theorems that it's precisely as strong as it needs
to be for its size.  We do have a couple of other cases of classified
discoveries and corresponding unclassified ones: IBM's differential
cryptanalysis (15 years) and CESG's non-secret encryption (4 years).
There are also training exercises (the Zendian Problem and a British
special intelligence course) which anyone can use to compare their skills
with advanced cipher school students of the 1960s.  The latter does not,
of course, give the peak strength of the best cryppies, but does suggest
a starting point for the curve.  Finally, we have retired NSA cryppie
Robert H. Morris's remarks at Crypto '95, where he said that by the
middle to late 1960's cryptanalysis had become less cost-effective than
other methods of gaining the information.  One may choose to disbelieve
him, but I don't.

In any case, we do have some data points on the capabilities of the
strongest potential opponents, and assuming they're perfect would be
overly conservative.

-- 
        Jim Gillogly
        30 Astron S.R. 1999, 00:51
        12.19.6.2.5, 1 Chicchan 13 Pop, Ninth Lord of Night

------------------------------

From: SCOTT19U.ZIP_GUY <[EMAIL PROTECTED]>
Subject: Re: BEST ADAPTIVE HUFFMAN COMPRESSION FOR CRYPTO
Date: Tue, 20 Apr 1999 21:57:51 GMT

In article <[EMAIL PROTECTED]>,
  Mok-Kong Shen <[EMAIL PROTECTED]> wrote:
> SCOTT19U.ZIP_GUY wrote:
>
> >  For those of you interested in the best in "Encryption" and
> > "Compression for use with Encryption" take a look at my site.
> > There are links to download all the software used. However until
> > someone posts the lastest version of scott19u.zip it will be
> > available only in the US or to those tricky enough to use
> > there brains to download it.
>
> Could you at least answer some questions of non-US people who
> can't legally look at US crypto stuffs? Here are my questions:
>

 You are right US made crypto stuff is not legal to give away
unless you are politically connected. But even scott16u is
available out side the US. Scott19u is very simalar except that
it is based on 19bit fields instead of 16bits. This is the
max field size that allows a key to fully fit on a 1.44 meg
floppy.

> I suppose scott19u.zip, if it is similar to a version having
> another number and available outside, did not have compression
> features. Where have you put the compression? Just a run before
> the previous version of the program or is compression tightly
> incorporated into the previous encryption algorithm? If the later,
> how (in principle) and why (is it advantage to do so)?
>
> M. K. Shen
>

  No I have not put compression in the encryption program yet.
My main interest is in encryption but the US government has
taken that freedom away from honest americans. I do plan to
learn Spanish and maybe move to Mexcio where one can still
freely give away what one codes. Or if a company hires me maybe
some of it would be sold or incorparated in other products.

 But since compression does not in general have a secret key
one is free to write and give away compression programs. Since
part of the security of encrypted text relies on the entropy
of the file being encrypted. If one is sending ascii data it
would be nice to compress the data first. The problem is that
most compression methods invovle headers or other tell tell
signs that give information to an attacker. So it is best to
use a method that leaves no traces in the compressed file.

 One way to help achive this goal is to do compression in
such away that "any file" could be the result of an actual
compression. That means for every finite file there is a one
to one mapping from the compressed file to the uncompressed
file. One misconception that many people have is that when
one compresss it does not always result in a smaller file.
 One chooses a method that usually results in a smaller file.
The method I choose to use initially was one based on
Huffman adaptive compression. I found some source code that
was public domain and fixed it to work with DJGPP GNU C for
a PC.
 But the method had headers(well trailers at end) that would
give an attacker trying to break the encryption method lots
of information so that one could easyly home in on correct
solution. The first thing I did was to chage the code so
that this header information was no longer used. I also
got rid of the not used yet token and started with a full
tree. This makes for better compression.
 There are several hex example at my site. I then suggested
and less aggressive huffman for a reverse pass and showed
how one could get something close to all or nothing encryption
the source code is all at my site. Since it is not illegal
yet. And is there for people to think so that one uses his
brain when uses compression before encryption.
 If I move to MExcio I will write from scratch a verson of
both 16U and 19u with this and maybe more options. So that
free people in the world can have something other that what
big brother wants you to use. Your files are your business
not theres. Why should we one use only the NSA approved
junk out there.

David A. Scott

--
http://cryptography.org/cgi-bin/crypto.cgi/Misc/scott19u.zip
http://members.xoom.com/ecil/index.htm
NOTE EMAIL address is for SPAMERS
to email me use address on WEB PAGE

============= Posted via Deja News, The Discussion Network ============
http://www.dejanews.com/       Search, Read, Discuss, or Start Your Own    

------------------------------

From: [EMAIL PROTECTED] (John Savard)
Subject: On Being Earnest
Date: Tue, 20 Apr 1999 17:01:23 GMT

The lead article in the latest issue of Bruce Schneier's electronic
newsletter CRYPTO-GRAM (not to be confused with _The Cryptogram_, the
newsletter of the ACA) will, doubtless, be percieved as a red flag in the
debate raging in another thread (Thought question: Why do most public
ciphers...).

However much I may want to percieve that article as controversial, though,
I must also acknowledge the simple truth that, by and large, the article is
stating the truth.

There is an awful lot of "snake oil" out there. Sometimes it seems as if
every week brings news of someone who has just come up with a new
unbreakable cipher - and, naturally, while products incorporating this
revolutionary discovery are being offered for sale, the method has to be
kept a secret. Usually, of course, the people who do this are sufficiently
bad at inspiring confidence in themselves that they claim few victims.

On what basis, therefore, can I object to an article urging people to play
safe, and stick with designs that have a good reputation among the
recognized experts in the field? Can I stand up and say it's all right to
be reckless, or that I and my buddies know better than everyone else? Of
course not.

But the reason I think Bruce's comments, so undeniably true for the most
part, will still invite controversy is because I believe that he has
overstated his case in a few areas.

For one thing, I don't think we can say that triple-DES is a fully
satisfactory solution to all symmetric-key encryption needs. What else is
there that is entirely free of the taint of "newness"?

For another, I think there is a vast difference between someone going off
and designing a cipher based on an "entirely new principle", which is
supposed to be secure despite being 10-100 times faster than DES, and
designing a "new" cipher in a _conservative_ manner, making use of the
lessons learned and the constructions used in the well-established ciphers,
but adding additional constructs to further frustrate analysis. Of course,
standard precautions - like a key schedule that prevents one part of the
cipher from revealing information about subkeys in other parts of the
cipher - have to be taken in such designs.

Advising caution is only prudent: but to become too categorical is
dangerous as well.

John Savard ( teenerf<- )
http://members.xoom.com/quadibloc/index.html

------------------------------

From: [EMAIL PROTECTED]
Subject: Another TEA paper (GTEA and XTEA)
Date: Wed, 21 Apr 1999 02:20:30 GMT

Today I finished (well pretty much) my paper on GTEA (Generalized TEA) where I
explored the structure of TEA and variations.  Today I wrote a mini-paper
(XTEA) on implementations and design motivations of 3 variations of TEA.  In
each I used my 'Dynamic Key Scheduling' (? is it my idea? I dunno) and the TEA
ciphers.

The first proposal XTEA-1 is a 64-bit block cipher which resembles X-TEA (by
the TEA group) but uses the better key schedule.  The other two proposals are
variations of this.

If you have the time, I invite you to check out my mini (but detailed enough)
mini-paper on XTEA (not to be confused hopefully with X-TEA by the TEA group).

It's in RTF format (better then text and HTML...) at

http://members.tripod.com/~tomstdenis/xtea.rtf

I may write a mini paper on what my Dynamic Key Scheduling is, and how it
could be applied.  As Jon (Jon Savard? or is it Jack?  I am sorry!!!) pointed
out, it could be also similar to permutating the plaintext register in the
given round, but this design is to enhance the confusion sequence, not the
diffusion sequence.

Thanks for your time,
Tom

============= Posted via Deja News, The Discussion Network ============
http://www.dejanews.com/       Search, Read, Discuss, or Start Your Own    

------------------------------

From: [EMAIL PROTECTED]
Subject: Re: On Being Earnest
Date: Wed, 21 Apr 1999 02:14:32 GMT


> I think big companies will be far more conservative than little
> ones.  The big guys will opt for the old-tried-and-true ciphers,
> the little guys will take bigger risks and check out the new ones.
> A few little guys will grow to be big, and they'll keep on using
> the same ciphers.  That will make those "new" ciphers "old", and
> trusted.  I agree with you, but I think using something new and
> not fully proven requires more guts.  The more you have to lose,
> the more conservative you'll have to be.

I think not.  Counterpane has found many bugs with microsofts VPN systems.  I
think if the customer is really concern with security they should investigate
a little.  Products which are open source should generally be trusted more
then closed source products.

Generally there is nothing wrong with new ciphers.  Ciphers with good theory,
and proper design processes should be considered for what they are.  But these
'unbreakable' repeated XOR ciphers, should not.  A lot of times, good sound
ciphers are produced, which may have flaws, and fixing them may be simple (TEA
and PES for example).  If the authors honestly put some thought into it, give
them some time (read their papers, etc...) otherwise why bother?

Tom

============= Posted via Deja News, The Discussion Network ============
http://www.dejanews.com/       Search, Read, Discuss, or Start Your Own    

------------------------------

From: [EMAIL PROTECTED]
Subject: Re: PGP=NSA (what is it about crypto?)
Date: Wed, 21 Apr 1999 02:23:57 GMT


> The net is free for anyone to use.  Every newsgroup has it's
> lunatics, just like every town has its drunks, flakes and
> know-it-alls.  Rather than get mad the best approach is pity.
> Toss them a quarter every now and then.  If they pick it up,
> you know you're dealing with something smarter than a dog.
> If they know they can buy a clue, you've found someone smarter
> than a monkey.  Some are trainable, some aren't.
>
> Chill out man, stress shortens your life.  I find laughter
> a much simpler route.

I agree with him.  Sometimes you can help people.  If the user is a complete
newbie, they may make stupid (***) claims or questions because they don't know
better. Show them the way, and be happy.  If they are being rude or obnoxious,
then just ignore them.  They usually get kicks out of the negative attention,
and if they get none, they will leave.

Love, Peace and happiness (or something like that, I am a kid of the eighties
not the sixties.... :) )

Tom

============= Posted via Deja News, The Discussion Network ============
http://www.dejanews.com/       Search, Read, Discuss, or Start Your Own    

------------------------------

From: Boris Kazak <[EMAIL PROTECTED]>
Subject: Re: Thought question:  why do public ciphers use only simple ops like shift 
and XOR?
Date: Tue, 20 Apr 1999 19:53:16 -0400
Reply-To: [EMAIL PROTECTED]

Jerry Coffin wrote:
> If it's written in Forth, I'll pass, thanks anyway.  It's been many
> years since the last time I tried to work in Forth at all, and from
> what I remember, it's probably something that you have to either use a
> lot, or you might as well forget it completely.
===================
No, it's plain conventional C, even without Assembler. It is one of my 
"essays" on the subject of *drunken* ciphers, where you set up a lot
of S-boxes deriving them from the key, and then encrypt using the 
plaintext-dependent path through these S-boxes. so that each plaintext
will follow the maze along its own unique path. Quite entertaining...
BTW, key scheduling uses the same modular multiplication already present
in the program.
> 
> Then again, I suppose many people would say the same about C, C++ and
> Scheme, all of which I use fairly regularly.  Scheme (or almost any
> LISP-like language) supports working with large integers, which tends
> to be handy when you're dealing with factoring and such.
> 
> >   Thanks for your courtesy  Best wishes        BNK
> 
> Likewise, especially when I posted something as boneheaded as I did...

------------------------------

From: [EMAIL PROTECTED]
Subject: Re: Dynamic Data Dependant Key Schedule
Date: Tue, 20 Apr 1999 23:42:45 GMT


> Rotating the subkey based on the other half of the block isn't really that
> much different than rotating the data; the two could be made almost
> equivalent through a suitable analysis in all likelihood.
>
> But I *absolutely* agree that making the key schedule dynamically variable
> is a good idea.

Because there is no preset order for the use of the actual keys.  Check out my
paper (GTEA).  I am gonna write a paper on how to expand TEA to use various
techniques and avoid the already known attacks.

It also means you can get lots of subkeys without actually storing them.

Tom

============= Posted via Deja News, The Discussion Network ============
http://www.dejanews.com/       Search, Read, Discuss, or Start Your Own    

------------------------------

From: [EMAIL PROTECTED] (Terry Ritter)
Subject: Re: Question on confidence derived from cryptanalysis.
Date: Wed, 21 Apr 1999 05:04:20 GMT


On Tue, 20 Apr 1999 18:50:48 -0700, in <[EMAIL PROTECTED]>, in
sci.crypt Jim Gillogly <[EMAIL PROTECTED]> wrote:

>I think Terry Ritter's right to be concerned about having essentially
>everyone move to a single new cipher.  If the danger isn't obvious,
>consider the analogy with biological systems, where a species with no
>genetic diversity can be wiped out by a single virus incident.  Or with
>computer systems, where something like Melissa can cause widespread
>annoyance and some down time because almost everyone is using the
>same operating system and office software suite.

I think I have a right to cheer at this agreement with my major point.



>I also agree with him that a careful concatenation of ciphers can
>help limit the damage.  

And then I cheer again at this agreement with part of my proposed
solution package.  


>I think we may disagree on what kinds of
>ciphers would be most appropriate as choices for concatenation,
>since I prefer ciphers that good analysts have tried and failed to
>break over ciphers that nobody with cryptanalytical experience has
>looked at.  I define a good analyst as someone who has broken a
>difficult system.

Then I assume you are willing to make the services of such an analyst
available free of charge and without delay.  The way it is now, one
cannot get such analysis unless one is a particular type of person,
working in a few selected environments, and with particular types of
design.  Having inherited a democracy, I am unwilling to give that up
for supposed advantages which, in the limit, do not give us what we
want anyway.  I think people should be able to select their own
ciphers based on any criteria they want, including superstition and
innuendo.  


>However, I (like John Savard) think Terry overstates some issues.
>Here's a case in point:
>
>Terry Ritter wrote:
>> Your position, dare I state it, is that you *can* estimate the
>> capabilities of your Opponents.
>
>In another article he wrote:
>> But the only thing being "measured" here is the open, academic
>> analysis.  The *real* experts do not play this way.  We thus have no
>> way to understand their capabilities.  The strength value measured on
>> academics cannot apply to the real problem.  
>
>These and similar remarks suggest that a conservative threat analysis
>must regard the opponents as god-like in their cryptanalytic
>capabilities.  

If that is what you take from these comments (in their proper
context), I am not surprised that you call my position overstated.
However, you have exaggerated my position.  

In particular, I doubt I have ever said the Opponents are "god-like."
As far as I can recall, the only people I have accused of being
"god-like" are the crypto gods who seem to be able to predict:  1) the
future strength of a cipher, based on past tests; and  2) the
capabilities of unknown Opponents, based on the capabilities of known
academics.  

>Of course in the limit this isn't useful, since we
>would have no more confidence in a concatenation of ciphers against
>an opponent like this than we would in a single cipher.

And so, clearly, I do not so assume.  Since I do not assume that an
Opponent has unlimited capabilities, this comment strongly
misrepresents my arguments.  

But what *are* we to assume?  Even a *modest* "value" for Opponent
capabilities is also "not useful" to us.  This is because it is
(virtually) impossible to *measure* knowledge, experience, and
innovation.  And then it is impossible to *measure* cipher strength.
So we first don't know the difficulty of the problem, and then don't
know the capabilities our Opponents can bring to the solution.  This
naturally leave us in a quandary, even *without* assuming unlimited
capabilities.  The problem is *not* that we should assume reasonable
value for Opponent capabilities, the problem is that *any* such values
and their implications are unknown, uncalibrated, and unuseful.  

I suggest that this whole line of inquiry (into cipher strength and
Opponent strength) is a waste of time.  Since we know that
single-cipher failures are possible, we can work to fix that.  Since I
assume the triple-cipher scheme will work, it is clear that I do not
assume unlimited Opponent capabilities.  I do assume that whatever
capabilities they do have will be stressed far harder with
multi-ciphering than single ciphering.  I think this is a reasonable
assumption.  

Moreover, by using a wide variety of ciphers, we act to limit the
amount of data disclosed by any break that does occur.  I do assume
that this will reduce the attraction of cryptanalysis, by limiting the
eventual payoff.  Again, I think this a reasonable assumption.  


>However, we do have ways to estimate the capabilities of the opponents.
>I suggest that the government cryptologic agencies of the US and UK
>represent conservative surrogates for the cryptological skills of the
>strongest opponents, and we have seen several unclassified examples of
>times when they were less than perfect.
>
>In one case (factoring circa 1973) the UK agency was no further
>advanced than the academic community, and academic advances in that
>field were made shortly thereafter.  In two other cases the US agency
>made embarrassingly public blunders (the Clipper checksum exploited
>by Matt Blaze, and the SHA/SHA-1 botch that they noticed and fixed
>themselves) that would not have been made if they were omniscient.
>I don't include Biham's work suggesting SKIPJACK is not a conservative
>design, since we don't know that it has to be -- for all we know, there
>are wads of supporting theorems that it's precisely as strong as it needs
>to be for its size.  We do have a couple of other cases of classified
>discoveries and corresponding unclassified ones: IBM's differential
>cryptanalysis (15 years) and CESG's non-secret encryption (4 years).
>There are also training exercises (the Zendian Problem and a British
>special intelligence course) which anyone can use to compare their skills
>with advanced cipher school students of the 1960s.  The latter does not,
>of course, give the peak strength of the best cryppies, but does suggest
>a starting point for the curve.  Finally, we have retired NSA cryppie
>Robert H. Morris's remarks at Crypto '95, where he said that by the
>middle to late 1960's cryptanalysis had become less cost-effective than
>other methods of gaining the information.  One may choose to disbelieve
>him, but I don't.
>
>In any case, we do have some data points on the capabilities of the
>strongest potential opponents, and assuming they're perfect would be
>overly conservative.

First, none of this tells us about the future.  Yet all operation of a
cipher takes place in the future, after that cipher is designed.
Unless we have a reasonable way to predict future capabilities, we are
necessarily forced into conservative measures.  

Next, I think it is dangerous to assume our Opponents are the
intelligence services we know.  In another message I suggested that if
the problem was only NSA (the way it is now), we would not have much
of a problem.  But NSA is only an *example* of an Opponent, and not
necessarily even the most advanced example in particular areas of the
technology.  We having intractable problems in making any serious
extrapolations from this data.  Again I suggest that this avenue is
both unfruitful and dangerous.  

---
Terry Ritter   [EMAIL PROTECTED]   http://www.io.com/~ritter/
Crypto Glossary   http://www.io.com/~ritter/GLOSSARY.HTM


------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list (and sci.crypt) via:

    Internet: [EMAIL PROTECTED]

End of Cryptography-Digest Digest
******************************

Reply via email to