Cryptography-Digest Digest #366, Volume #9       Sat, 10 Apr 99 02:13:02 EDT

Contents:
  Re: Wanted: Why not PKzip? ([EMAIL PROTECTED])
  What is the best whitening algorithm ? (David Kuestler)
  Re: Announce - ScramDisk v2.02h (Terry Ritter)
  Re: Stupid Question (Nathan Kennedy)
  Re: Announce - ScramDisk v2.02h ("Harvey Rook")

----------------------------------------------------------------------------

From: [EMAIL PROTECTED]
Subject: Re: Wanted: Why not PKzip?
Crossposted-To: comp.security.misc
Date: 10 Apr 99 04:35:58 GMT

In sci.crypt David A Molnar <[EMAIL PROTECTED]> wrote:

> Does your .ZIP contain a MS word or other Office file? Then if I know
> your Ethernet address, I know the ID Office attaches to it to render it
> 'unique'. More than eight bytes. Whoops.

> Does your .ZIP contain an .EXE file which happens to include a well-known
> C library?  Even a predictable sequence of instructions? 

> Do you have a plain text file, no headers or anything, in the ZIP ?
> Well, say it's a letter you begin with "Dear Ali"  and you sent the 
> zip to [EMAIL PROTECTED]

It's not that bad. First the file is compressed, then encrypted. The
"plain text" is the compressed (before encryption) version of the file (if
you have the original file, you can compress it using PKZip to get the
"plain text" -- one has to be sure one uses the same compression, though
-- the default for PKZip 2.6 (Windows) is different from that for PKZIP
2.04g, for example).

An original file:

Dear Ali,
Here is...

and one:

Dear Ali,
Thanks for...

will compress with different characters at the start of the file (so you
may not have "plain text" even if you know the beginning of an original 
text file).
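A quick way to see this effect is to compress two letters that share an
opening line and compare the compressed bytes. The sketch below uses
Python's zlib (raw DEFLATE plus a 2-byte header) as a stand-in for PKZip's
compressor; the exact output bytes differ between implementations and
settings, which is precisely the point being made above. The letter texts
are invented for illustration.

```python
import zlib

# Two letters that share only their opening line.
letter_a = b"Dear Ali,\nHere is the report you asked for last week...\n"
letter_b = b"Dear Ali,\nThanks for the report you sent last week...\n"

comp_a = zlib.compress(letter_a, 9)
comp_b = zlib.compress(letter_b, 9)

# Length of the common prefix of the two compressed streams.
common = 0
for x, y in zip(comp_a, comp_b):
    if x != y:
        break
    common += 1

print(len(comp_a), len(comp_b), common)
```

Because DEFLATE builds its coding tables from the whole input, even the
shared "Dear Ali," line is encoded differently in the two streams, so an
attacker knowing only the opening of the original letter does not get
matching "plain text" for the compressed archive.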

Of course, if there is a known graphic or executable which had so little
redundancy that it is STORED (uncompressed) in the archive, then ...

------------------------------

From: David Kuestler <[EMAIL PROTECTED]>
Subject: What is the best whitening algorithm ?
Date: Sat, 10 Apr 1999 13:53:38 +1000

What is the best whitening algorithm for use in enhancing ECB mode ?

My apologies for rewording a previous post but it is apparent from
direct email that I didn't explain my intentions well enough.

I wish to enhance the random access properties of ECB mode encryption
using the block number (BN) for use in database and disk partition
applications.

The weaknesses of ECB mode that I am trying to cover are :
1) the same PT blocks encrypt to the same CT blocks.
2) CT block manipulation (substitution, insertion, removal, etc.)

The minimal :
E( BN ^ PT )
is vulnerable to sequences, e.g., where BN = PT the CT would be the
same.

Possibly the strongest whitening would be :
E( E(BN) ^ PT )
However, this doubles the processing.

I have been experimenting with :
E( ( ( LT[BN%251] ^ BN ) * 9223372036863164417ULL ) ^ PT )
where the lookup table (LT) is a 2-kbyte array of 251 long long ints
(64 bits each).
Initially LT is loaded by encrypting a predefined set of real random
numbers. The encryption forces LT to be key dependent and makes a chosen
PT attack (therefore CT block manipulation) impossible (assuming the
attacker does not have access to LT in memory).

9223372036863164417ULL is a prime with only three set bits (2^63 + 2^23 +
1), so the multiplication could actually be reduced to two shifts and two
adds.

The idea behind the lookup table is to efficiently change many bits.
The rest of the algorithm is designed to avoid  repetitions.

I believe this would be very efficient: it adds only about 10 operations
to the encryption cycle, more than adequately covers any patterns, and
forces blocks to be position dependent.
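The proposed whitening can be sketched in a few lines. Everything below
except the multiplier and the LT[BN%251] ^ BN structure is an assumption:
the toy "block cipher" standing in for E is an invertible placeholder, NOT
a real cipher, and the demo LT is a fixed table rather than the
key-dependent, encrypted-random table described above.

```python
MASK = (1 << 64) - 1
C = 9223372036863164417  # == 2**63 + 2**23 + 1; only three bits set

def rotl(x, r):
    # 64-bit rotate left
    return ((x << r) | (x >> (64 - r))) & MASK

def toy_encrypt(x, key=0x0123456789ABCDEF):
    # Placeholder for a real block cipher E; invertible but NOT secure.
    return rotl(x ^ key, 13) ^ ((key * 3) & MASK)

def whiten_encrypt(pt, bn, lt, encrypt=toy_encrypt):
    # E( ((LT[BN % 251] ^ BN) * C) ^ PT )
    w = ((lt[bn % 251] ^ bn) * C) & MASK
    return encrypt(w ^ pt)

# Demo table only; the real LT would hold encrypted true-random values.
lt = [(i * 0x9E3779B97F4A7C15) & MASK for i in range(251)]

ct1 = whiten_encrypt(0xDEADBEEF, 1, lt)
ct2 = whiten_encrypt(0xDEADBEEF, 2, lt)
```

Since C is odd, multiplication by C is a bijection mod 2^64, so distinct
whitening inputs stay distinct, and identical PT blocks at different block
numbers produce different CT blocks.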



------------------------------

From: [EMAIL PROTECTED] (Terry Ritter)
Subject: Re: Announce - ScramDisk v2.02h
Date: Sat, 10 Apr 1999 03:56:08 GMT


On Fri, 9 Apr 1999 18:58:49 -0700, in
<7emb5m$[EMAIL PROTECTED]>, in sci.crypt "Harvey Rook"
<[EMAIL PROTECTED]> wrote:

>My comments within...
>Terry Ritter <[EMAIL PROTECTED]> wrote in message
>news:[EMAIL PROTECTED]...
>>
>> On Fri, 9 Apr 1999 11:31:40 -0700, in
>> <7elhhc$[EMAIL PROTECTED]>, in sci.crypt "Harvey Rook"
>> <[EMAIL PROTECTED]> wrote:
>>
>> >>
>> >This is incorrect. By Kerckhoffs' maxim, you have to assume your attacker
>> >has a copy of your deciphering machine. If he has a copy of your
>deciphering
>> >machine, the attacker can figure out the algorithm you use to select
>> >ciphers.
>>
>> That is no more true than saying that the attacker can figure out the
>> message key or the initialization vector.  We assume the system has
>> some reliable random source to make such values.  Similar values
>> select what ciphers to use in what order.  And this should change
>> frequently.
>>
>
>The reliable random source must be communicated to both the encryptor,
>and the decryptor. Because it's transmitted or shared, you must assume
>the attacker has intercepted it. Because the attacker has intercepted
>it, the attacker knows what ciphers you are using.

That is false.  It is always necessary to transport keys in some way.
But whatever way this is done -- by courier, multiple channels, or
authenticated public key -- random message key values are known by both
ends.  That keying value can be used to select ciphers.  

In practice, I would expect background negotiation between the cipher
programs to occur with each message transmission, and produce a new
cipher set with each message or every couple of messages.


>> >Once he knows the algorithm used to select ciphers, super-encipherment
>> >only doubles or triples the amount of time needed to brute force. You'd
>> >be much better off adding an extra byte to your key.
>>
>> The issue is far beyond increasing the keyspace, since that would take
>> a huge number of ciphers.  The issue instead is about spreading
>> messages among many different ciphers, thus forcing The Opponents to
>> "keep up" by identifying, acquiring, attacking, and breaking a
>> continuing flow of new ciphers.  This forces The Opponents to invest
>> far more, and breaking any of these gains them far less.  Note the
>> contrast to the current situation, where only a few main ciphers are
>> used, so only those need be broken.
>>
>
>Good ciphers cannot be instantly generated. They must be individually
>scrutinized and tested. If you don't do that, you are setting
>yourself up for failure.

Look around:  My guess is that we could probably point to hundreds of
cipher designs which now exist.  This is in an environment where there
is no recognition of the value of multiple ciphers, and no common
interface to handle plug-in cipher components.  Were the environment
to change, I think we could easily see 500 ciphers in a few years, and
tens of thousands of ciphers in a couple of decades.


>> >>[...]
>> >> Yes.  Even if each cipher used has known weaknesses, those may not be
>> >> exploitable in the multi-ciphering case.
>> >>
>> >It's only harder by one or two orders of magnitude.
>>
>> Well, let's see:  Suppose we have about 4K ciphers.  By selecting 3 at
>> random, we have about 4k**3 selections.  This is somewhat larger
>> than "two orders of magnitude."
>>
>> But, again, the issue is not really the increase in keyspace.  The
>> issue is that The Opponents have to increase their analysis budget by,
>> say, a factor of 1k.  And even when they are successful, they get at
>> most 1 message out of 1k.  And we reduce that to 1 in 1k*1k*1k by
>> using 3 layers of cipher.
>>
>
>Why is this not the same as increasing key space? The algorithms
>you use to choose the ciphers must be transmitted to the decryptor.
>Once the attacker intercepts this, he can deduce the ciphers used.

Certainly, the cipher selection amounts to a sort of key, but this
would be a one-time message key or initialization vector, as opposed
to a repeatedly-used user key.  


>Which would you rather trust, one well analyzed cipher with 140 bits of
>key, or 4096 ciphers that probably aren't well analyzed, 128 bits of key,
>and some mechanism to flip between the ciphers?

I take the 4096 ciphers, and I have explained why:  First, the
multi-cipher situation forces any Opponent whatsoever to keep up in a
process which is vastly more expensive for them than for us.  Next, we
divide our total traffic among many different ciphers.  So even if an
Opponent breaks a cipher, the best they can hope for is 1/n of the
traffic.  And in the multi-ciphering case, they don't even get that.
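The arithmetic behind these two claims can be checked directly; the 4096
ciphers and 3 layers are the figures used in this thread, and the rest is
just exponent bookkeeping:

```python
import math

ciphers = 4096   # "about 4K ciphers"
layers = 3       # 3 ciphers stacked per message

# Extra work factor from not knowing which ciphers were stacked, in what
# order (with repetition allowed): 4096**3 == 2**36 selections.
selections = ciphers ** layers
extra_bits = layers * math.log2(ciphers)

# Traffic partitioning: breaking one cipher yields at most 1/n of the
# single-ciphered traffic, and none outright in the stacked case.
fraction_per_cipher = 1 / ciphers

print(selections, extra_bits, fraction_per_cipher)
```

So the selection adds about 36 bits of uncertainty, which is indeed far
more than "two orders of magnitude", while each individual cipher carries
only 1/4096 of the traffic.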

As to "trust," you should be aware that there *is* no way to prove
strength, to measure strength, to build strength or test strength.
Any "trust" of a cipher is a delusion only.  For example, the best
cryptanalysis can do is find weakness, and this only helps if weakness
is actually found.  If no weakness is found, cryptanalysis does not
testify to strength.  

Personally, I think we are better off with user selection (e.g., "I
use what my friend uses," or "We bought a custom cipher subscription")
than any standard picked by a central authority.  


>> >Computers built 3 years from now will have enough power to compensate
>> >for this difference. Adding an extra byte to your key makes the problem
>> >harder by 4 to 8 orders of magnitude. This is much harder to attack, yet
>> >it's simpler and cleaner to analyze. Simple clean systems are much less
>> >likely to have holes.
>>
>> Using a layered structure is far, far simpler than trying to analyze a
>> cipher to the degree of predicting that one cipher is stronger than
>> another by n orders of magnitude.  Neither value is known, so we can
>> hardly compare them.
>>
>>
>> >Unexpected weaknesses in ciphers designed by good cryptographers (say
>> >Rivest, or Schneier) are very unlikely to appear.
>>
>> Sorry.  That is an insupportable statement (and of course unprovable).
>> Being a crypto god does not prevent mistakes.
>>
>
>Correct. However, many ciphers have a similar structure.
>An advance in discrete mathematics needed to crack one well
>designed cipher would most likely apply to many ciphers. If you
>have some kind of scheme that allowed you to generate thousands of
>ciphers, then it is very likely that the weakness would apply to all.

Certainly any cipher designer will have his bag of tricks.  One might
expect that some of those tricks would be risky, but hardly all.  And
the vast number of designers would be using many different approaches.
It seems extremely *un*likely that a single breakthrough would affect
them all.  Indeed, the unexpected break would be far, far more likely
(and far, far more serious) if we have just one standard cipher.  


>I think it's very unlikely that someone could generate 4096 ciphers that
>don't share a common weakness, and still properly analyze all of them.
>Even if they did, the selection of which ciphers to use is part of the key.
>Brute forcing an N-bit key plus K bits of which-cipher material takes
>O(2^(N+K)).  Brute forcing an N+K bit key also takes O(2^(N+K)).
>
>See, for every cipher you add, there is an equivalent scheme with only
>one cipher that has the same key size.

Only in the sense that the meta cipher has various components, but
these are unknown until they are written.  That meta cipher does not
exist until nobody is writing new ciphers anymore.  

>I dare you to name a relevant security breakdown in the past
>20 years that was not the result of poor key management, or poor protocol
>design. 

DES keyspace.

>We have good algorithms. Good security comes not from flipping
>between the algorithms, but from using the algorithms properly.

It is certainly true that this rarely happens now.  But
the opportunity is available for the future.


>> >Remember DES, after 25 years of analysis, is still only
>> >vulnerable to a brute force attack.
>>
>> Also insupportable:  DES may have a fast break right now, and if it
>> does, would those who have it have told you?  Yet you feel free to
>> assume that there is no break.  Good for you.  Now prove it.
>
>In the same vein, prove to me that a significant percentage of your
>4096 ciphers will be strong. Unless a cipher is widely scrutinized, by
>many intelligent people, you must assume it is weak. A solid analysis
>of 4096 ciphers would take decades. We need good security now.

First of all, ciphers should be selected by users, and your so-called
strength is up to them.  If a cipher is found weak in some academic
paper, the users may choose to disable that cipher from further use,
or not.  In contrast, if the one standard cipher is found weak, there
is no real alternative, and no process for making that change.  

Significant security is provided simply by partitioning the message
space into many different ciphers; by having a continuing flow of new
ciphers; and by making multi-ciphering an expected process.  This is
security added to any set of ciphers which may already exist.  

Any widely scrutinized cipher could be used in this process, and would
not be weakened by it.  The many-cipher situation can only improve
things, not weaken what is already there.  


>It is usually impossible to prove an impossibility. In the absence of
>proof you have to bet on the evidence. We have evidence that a
>few ciphers are strong. We also have evidence that most ciphers are
>weak.

In the absence of proof of strength, it is foolish to bet that a
particular cipher is strong.  The best bet is to use multiple ciphers
at the same time, change ciphers often, and be prepared to disable
particular ciphers at the first sign of trouble.  


>> >I'd be willing to bet that RC-6 and TwoFish will withstand
>> >the same scrutiny.
>>
>> But that is hardly science or even rational reasoning, is it?
>>
>
>The scientific principle is observe, hypothesize, experiment, repeat.
>I can observe that one cipher looks strong. I can form hypotheses about
>its strengths and weaknesses. I can test these hypotheses, and I can repeat
>until I trust.

No, you cannot test strength.  Neither can anyone else.  *That* is the
problem.  


>With thousands of ciphers, I probably could not observe that they all
>look strong. There are too many of them. The analysis would be
>overwhelming. How can I go from this state to hypothesizing that
>combining them would be secure?

By observing that any one cipher that you do accept can be part of a
stack.  That would give you 2 changing ciphers, the ability to switch
to another if you get bad news, and the knowledge that any hidden
break to your favorite cipher is protected by the rest of the stack.

---
Terry Ritter   [EMAIL PROTECTED]   http://www.io.com/~ritter/
Crypto Glossary   http://www.io.com/~ritter/GLOSSARY.HTM


------------------------------

From: Nathan Kennedy <[EMAIL PROTECTED]>
Subject: Re: Stupid Question
Date: Sat, 10 Apr 1999 12:59:55 +0800

matthew gauthier wrote:
> 
> Cubicdog <[EMAIL PROTECTED]> wrote:
> > This doesn't exactly fill me with warm fuzzy feelings. Digging
> > around, I see that NA doesn't bother with publishing the source
> > code for their products, so the newer PGP is no longer
> > subject to objective review. This sorta makes me glaze over
> > with a feeling not unlike inertia.
> 
> The pgp licenses are, frankly, a mess. However, there is still a
> freeware version, and the source is published, if only as a book.
> I've personally found www.pgpi.com to be a much better source of
> information.
> 
> On the bright side, pgp 5 is much better designed, and a lot more
> useful than 2.6.x. On the other hand, it's apparently impossible to
> get a source-code freeware version inside the us, and waiting for the
> book scanning keeps unix users perpetually behind.
> 
> > What the heck is going on? Is there some other product that
> > is not wholly owned by suits with a vested interest in appeasing
> > tyrants?
> 
> I doubt it, pgp 2.6 and earlier is basically an albatross around the
> neck of a free program, because of the use of RSA and IDEA. If NA would
> release timely freeware source, and loosen the license up some it would
> be a big help.

The solution, of course, is to ditch pgp for gpg, which is as close to free
of licensing issues as crypto software can be (GPLed, developed by
citizen-residents of free countries (I hate to say that as an American)). 
Compatible with PGPs >=5.0, or <5.0 with the rsa and idea patches.  See
www.gnupg.org

You can't trust software with a forbidding license run by a
proprietary-oriented company, whether or not the source is available.

Of course, waiting for binary releases leaves Windows users perpetually
behind ;).

The only possible reason not to switch to gpg is if you need 100% legacy
PGP compatibility.  And not even international PGP does that with the
legacy RSA/IDEA stuff.

Or you could use my own GPLed pubcrypt suite, which I wrote in an hour and
a half last night (Diffie-Hellman and TEA-32), but I don't think anyone
is interested in that. =)

Nate

------------------------------

From: "Harvey Rook" <[EMAIL PROTECTED]>
Subject: Re: Announce - ScramDisk v2.02h
Date: Fri, 9 Apr 1999 18:58:49 -0700

My comments within...
Terry Ritter <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]...
>
> On Fri, 9 Apr 1999 11:31:40 -0700, in
> <7elhhc$[EMAIL PROTECTED]>, in sci.crypt "Harvey Rook"
> <[EMAIL PROTECTED]> wrote:
>
> >>
> >This is incorrect. By Kerckhoffs' maxim, you have to assume your attacker
> >has a copy of your deciphering machine. If he has a copy of your
deciphering
> >machine, the attacker can figure out the algorithm you use to select
> >ciphers.
>
> That is no more true than saying that the attacker can figure out the
> message key or the initialization vector.  We assume the system has
> some reliable random source to make such values.  Similar values
> select what ciphers to use in what order.  And this should change
> frequently.
>

The reliable random source must be communicated to both the encryptor,
and the decryptor. Because it's transmitted or shared, you must assume
the attacker has intercepted it. Because the attacker has intercepted
it, the attacker knows what ciphers you are using.

>
> >Once he knows the algorithm used to select ciphers, super-encipherment
> >only doubles or triples the amount of time needed to brute force. You'd
> >be much better off adding an extra byte to your key.
>
> The issue is far beyond increasing the keyspace, since that would take
> a huge number of ciphers.  The issue instead is about spreading
> messages among many different ciphers, thus forcing The Opponents to
> "keep up" by identifying, acquiring, attacking, and breaking a
> continuing flow of new ciphers.  This forces The Opponents to invest
> far more, and breaking any of these gains them far less.  Note the
> contrast to the current situation, where only a few main ciphers are
> used, so only those need be broken.
>

Good ciphers cannot be instantly generated. They must be individually
scrutinized and tested. If you don't do that, you are setting
yourself up for failure.

> >>[...]
> >> Yes.  Even if each cipher used has known weaknesses, those may not be
> >> exploitable in the multi-ciphering case.
> >>
> >It's only harder by one or two orders of magnitude.
>
> Well, let's see:  Suppose we have about 4K ciphers.  By selecting 3 at
> random, we have about 4k**3 selections.  This is somewhat larger
> than "two orders of magnitude."
>
> But, again, the issue is not really the increase in keyspace.  The
> issue is that The Opponents have to increase their analysis budget by,
> say, a factor of 1k.  And even when they are successful, they get at
> most 1 message out of 1k.  And we reduce that to 1 in 1k*1k*1k by
> using 3 layers of cipher.
>

Why is this not the same as increasing key space? The algorithms
you use to choose the ciphers must be transmitted to the decryptor.
Once the attacker intercepts this, he can deduce the ciphers used.

Which would you rather trust, one well analyzed cipher with 140 bits of
key, or 4096 ciphers that probably aren't well analyzed, 128 bits of key,
and some mechanism to flip between the ciphers?

>
> >Computers built 3 years from now will have enough power to compensate
> >for this difference. Adding an extra byte to your key makes the problem
> >harder by 4 to 8 orders of magnitude. This is much harder to attack, yet
> >it's simpler and cleaner to analyze. Simple clean systems are much less
> >likely to have holes.
>
> Using a layered structure is far, far simpler than trying to analyze a
> cipher to the degree of predicting that one cipher is stronger than
> another by n orders of magnitude.  Neither value is known, so we can
> hardly compare them.
>
>
> >Unexpected weaknesses in ciphers designed by good cryptographers (say
> >Rivest, or Schneier) are very unlikely to appear.
>
> Sorry.  That is an insupportable statement (and of course unprovable).
> Being a crypto god does not prevent mistakes.
>

Correct. However, many ciphers have a similar structure.
An advance in discrete mathematics needed to crack one well
designed cipher would most likely apply to many ciphers. If you
have some kind of scheme that allowed you to generate thousands of
ciphers, then it is very likely that the weakness would apply to all.

I think it's very unlikely that someone could generate 4096 ciphers that
don't share a common weakness, and still properly analyze all of them.
Even if they did, the selection of which ciphers to use is part of the key.
Brute forcing an N-bit key plus K bits of which-cipher material takes
O(2^(N+K)).  Brute forcing an N+K bit key also takes O(2^(N+K)).

See, for every cipher you add, there is an equivalent scheme with only
one cipher that has the same key size.
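The equivalence claimed here is just exponent arithmetic; a toy exhaustive
count makes it concrete (the 8-bit key and 2-bit selector split below is an
illustrative assumption, not anything from the thread):

```python
# Searching an N-bit key combined with K bits of cipher selection costs
# the same as searching a single (N+K)-bit key.
N, K = 8, 2

work_split = (2 ** K) * (2 ** N)   # try every (cipher, key) pair
work_single = 2 ** (N + K)         # try every (N+K)-bit key

print(work_split, work_single)
```

Both searches enumerate the same number of candidates, which is the sense
in which cipher-selection bits are "equivalent" to key bits against brute
force (Ritter's counter-argument is that they differ against analytic
attacks, not against exhaustion).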

I dare you to name a relevant security breakdown in the past
20 years that was not the result of poor key management, or poor protocol
design. We have good algorithms. Good security comes not from flipping
between the algorithms, but from using the algorithms properly.

> >Remember DES, after 25 years of analysis, is still only
> >vulnerable to a brute force attack.
>
> Also insupportable:  DES may have a fast break right now, and if it
> does, would those who have it have told you?  Yet you feel free to
> assume that there is no break.  Good for you.  Now prove it.

In the same vein, prove to me that a significant percentage of your
4096 ciphers will be strong. Unless a cipher is widely scrutinized, by
many intelligent people, you must assume it is weak. A solid analysis
of 4096 ciphers would take decades. We need good security now.


It is usually impossible to prove an impossibility. In the absence of
proof you have to bet on the evidence. We have evidence that a
few ciphers are strong. We also have evidence that most ciphers are
weak.

> >I'd be willing to bet that RC-6 and TwoFish will withstand
> >the same scrutiny.
>
> But that is hardly science or even rational reasoning, is it?
>

The scientific principle is observe, hypothesize, experiment, repeat.
I can observe that one cipher looks strong. I can form hypotheses about
its strengths and weaknesses. I can test these hypotheses, and I can repeat
until I trust.

With thousands of ciphers, I probably could not observe that they all
look strong. There are too many of them. The analysis would be
overwhelming. How can I go from this state to hypothesizing that
combining them would be secure?

Harv
RedRook At Some Yahoo Dot Com
Remove the Some to send email.






------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list (and sci.crypt) via:

    Internet: [EMAIL PROTECTED]

End of Cryptography-Digest Digest
******************************
