Cryptography-Digest Digest #885, Volume #11      Mon, 29 May 00 09:13:00 EDT

Contents:
  Re: Math problem (P=NP) prize and breaking encryption ("Axel Lindholm")
  Re: Math problem (P=NP) prize and breaking encryption (David A Molnar)
  Re: Is OTP unbreakable?/Station-Station (Tim Tyler)
  Visual Cryptography--Naor and Shamir article (John Bailey)
  Re: Hill's algorithm (Mark Wooding)
  Re: Smart TC1 optimizations? (Mark Wooding)
  Re: Math problem (P=NP) prize and breaking encryption (David A Molnar)
  Re: Smart TC1 optimizations? (tomstd)
  Re: RIP Bill 3rd Reading in Parliament TODAY 8th May (Mark Evans)
  Re: Magnetic Remanence on hard drives. (jungle)
  Re: Math problem (P=NP) prize and breaking encryption ("Scott Fluhrer")
  Re: RIP Bill 3rd Reading in Parliament TODAY 8th May (Mark Evans)
  Re: On dynamic random selection of encryption algorithms (Mok-Kong Shen)
  Re: No-Key Encryption (Mok-Kong Shen)

----------------------------------------------------------------------------

From: "Axel Lindholm" <[EMAIL PROTECTED]>
Subject: Re: Math problem (P=NP) prize and breaking encryption
Date: Mon, 29 May 2000 13:21:33 +0200

"root" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]...
> Hi;
>
> In the paper last week there was an article about 7 math
> problems for the 21st century. One of them (P = NP) was
> supposed to allow easy breaking of encryption.
>
> I went to the site (www.claymath.org) and looked at the
> problem description. They suggested that if someone could
> solve a problem like Hamilton Circuit (path through nodes
> visiting each one only once, ending at starting point), then
> the safety of encryption is gone.
>
> Is this hype?  Would it still take a lot of work?  How would
> someone use a solution for Hamilton Circuit to break RSA
> for instance?
>
> I'm certainly not an expert in encryption, but I was curious
> how such an unrelated problem could have anything to do with
> breaking encryption.
>
> Thanks,
>
> stan
>

There are, today, two categories of problems: P and NP. To understand why RSA
would become useless, we look at which category the RSA problem belongs to. It
is believed that factoring huge numbers into huge prime parts is NPC
(NP-complete).

What is NPC then?
These are the problems that we don't know how to solve quickly; the only
algorithms we know for solving problems of this type have so-called
exponential worst-case complexity. This means there is no known fast way to
compute a solution, but you can check whether a guessed solution is correct
in polynomial time. This tells us that NPC actually is part of NP, even
though not every NP problem has exponential worst-case complexity.
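
To make that asymmetry concrete, here is a small illustrative sketch (toy
graph, hypothetical function name): verifying a proposed Hamiltonian circuit
takes one polynomial-time pass over the tour, even though no polynomial-time
algorithm is known for finding one.

```python
# Illustrative sketch: checking a guessed Hamiltonian circuit is cheap
# (one pass over the tour), while no fast way to *find* one is known.
def is_hamiltonian_circuit(edges, n, tour):
    if sorted(tour) != list(range(n)):          # visit every node exactly once
        return False
    hops = zip(tour, tour[1:] + tour[:1])       # consecutive hops, wrapping round
    return all((u, v) in edges or (v, u) in edges for u, v in hops)

square = {(0, 1), (1, 2), (2, 3), (3, 0)}       # a 4-node cycle graph
print(is_hamiltonian_circuit(square, 4, [0, 1, 2, 3]))   # -> True
print(is_hamiltonian_circuit(square, 4, [0, 2, 1, 3]))   # -> False
```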

So what does this have to do with RSA?
To understand this we must know what P is. P is the class of problems which
can be solved in polynomial time; these are our common problems, and
they're quite easily solved. It's a fact that P, just like NPC, is a part of
NP. Now suppose that P = NP; what does that tell us? Since NPC is part of NP,
NPC would then be part of P, so every NPC problem would have an algorithm
that solves it in polynomial time!

Now the problem is to find out whether P = NP, and this can be done by trying
to solve certain problems. Consider the SAT problem, which has the property
that every other NP problem can be reduced and translated into SAT in
polynomial time. If the SAT problem itself can be solved in polynomial time,
then every NP problem can be too! That means that if you can solve the SAT
problem in polynomial time, you've shown that P in fact equals NP, which
would show that there is a way to solve RSA problems in polynomial time. A
problem solvable in polynomial time is considered easy, and that would make
the RSA system useless.

// Axel Lindholm




------------------------------

From: David A Molnar <[EMAIL PROTECTED]>
Subject: Re: Math problem (P=NP) prize and breaking encryption
Date: 29 May 2000 11:20:40 GMT

Douglas A. Gwyn <[EMAIL PROTECTED]> wrote:
> David A Molnar wrote:
>> It's more than that. A constructive proof that P = NP which
>> took the form of a very efficient (i.e. low polynomial time)
>> algorithm for solving any problem in NP would change the world.
>> Not just in cryptography, either.

> I don't think a nonconstructive proof of P=NP would make much
> difference in practice, and I rather doubt that a single

That's the point I thought I was making -- a non-constructive proof of P =
NP would leave us with the theoretical result, but no algorithm which
we didn't already have. So in practice we would have more or less what
we do now, but with a lot less optimism. 

> algorithm could be "very efficient" as you have defined it for
> *all* problems in NP.  (In fact I think I could prove not --
> basically that would imply that all P problems are low-P.)

OK. I suppose I should have written that, in that case, no very efficient
algorithm is found for factoring and discrete log, despite a proof of
P = NP. Thanks.


> Remember that this sort of "complexity" is only in the limit
> of infinitely large N, and is worst-case, and requires exact
> solutions. 

Yes, thanks for pointing that out. The Impagliazzo paper I noted
addresses possible riffs on "worst-case". The other two caveats are still
well worth considering, especially the fact that these results
tend to apply in the limit (something I know you've pointed out before).

There was a remark in a class I was taking in fall to the effect of
"you can figure out how large 'sufficiently large' has to be if you 
recast everything in boolean circuits." Having seen a little bit of
boolean circuit complexity last term, that seems plausible but not 
exactly straightforward. 


> We already know of an efficient algorithm for
> *closely approximating* the solution for at least one NPC
> problem.  For some applications, that is good enough.

Yeah. I have seen a few approximation algorithms and non-approximability
results, but what is approximable and what is not doesn't make any sense
to me yet. That is, I don't have a good idea why some NPC problems are
much more approximable than others. Nor do I have a good notion of how
approximable factoring is, although I'm not sure what an approximation
algorithm would mean exactly -- spits out numbers which are within
epsilon of the real factors?

Thanks,
-David

------------------------------

From: Tim Tyler <[EMAIL PROTECTED]>
Subject: Re: Is OTP unbreakable?/Station-Station
Reply-To: [EMAIL PROTECTED]
Date: Mon, 29 May 2000 11:30:49 GMT

Joaquim Southby <[EMAIL PROTECTED]> wrote:
: Guy Macon, [EMAIL PROTECTED] writes:

:>[...] If I use any of the standard authentication protocols,
:>someone who knows my plaintext but not my key and who can
:>intercept my ciphertext and replace it with his own cannot
:>send a message that looks like I sent it.  In the case of
:>checksum followed by OTP encryption, he can.  This is the
:>classic man-in-the middle attack combined with the classic
:>known/chosen plaintext attack.  Good security systems resist
:>these attacks, singly or in combination.  OTP doesn't.
:
: Perhaps the arguments against your statements are springing from the fact
: that you denigrate OTP using attack scenarios that are somewhat unusual. 
: The attack you described on OTP entails finding plaintext that matches a
: particular cyphertext that you have managed to intercept and also prevent
: from reaching the intended receiver.  That's some set of circumstances. 

There's nothing terribly unusual about chosen plaintext attacks - or
man-in-the-middle attacks.

The ability to process information by intercepting all communication
through a cable provides room for a man-in-the-middle.

...and known-plaintexts have been used by cryptanalysts extensively since
the art was born.

I don't see any coherent arguments against Guy's summary.
-- 
__________  Lotus Artificial Life  http://alife.co.uk/  [EMAIL PROTECTED]
 |im |yler  The Mandala Centre   http://mandala.co.uk/  UART what UEAT.

------------------------------

From: [EMAIL PROTECTED] (John Bailey)
Subject: Visual Cryptography--Naor and Shamir article
Date: Mon, 29 May 2000 11:45:16 GMT

Visual Cryptography
Moni Naor and Adi Shamir. given at Eurocrypt '94
http://www.wisdom.weizmann.ac.il/~naor/PAPERS/vis.ps
(quoting)
In this paper we consider a new type of cryptographic scheme, which
can decode concealed images without any cryptographic computations.
The scheme is perfectly secure and very easy to implement. We extend
it into a visual variant of the k out of n secret sharing problem, in
which a dealer provides a transparency to each one of the n users; 
any k of them can see the image by stacking their transparencies, but
any k minus 1 of them gain no information about it.
(end quote)

A web page set up earlier to demonstrate the technique described by
Naor and Shamir in their 1994 article has been improved to include
reference links, a description of the exact image manipulation
technique which produces the encrypted result, and a reference to
Spread Spectrum Modulation as an alternative theoretical basis.
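
For readers who want to experiment, here is a minimal toy sketch of the
2-out-of-2 case in the spirit of the scheme (one pixel expands to a pair of
subpixels per share; function names are hypothetical):

```python
# Toy 2-out-of-2 visual secret sharing sketch: each secret pixel expands
# to a pair of subpixels on each transparency.
import random

def share_pixel(secret_bit):
    # For a white (0) pixel the shares carry identical subpixel pairs;
    # for a black (1) pixel they carry complementary pairs.
    r = random.randint(0, 1)
    s1 = (r, 1 - r)
    s2 = s1 if secret_bit == 0 else (1 - r, r)
    return s1, s2

def stack(p1, p2):
    # Stacking transparencies acts as a subpixel-wise OR.
    return tuple(a | b for a, b in zip(p1, p2))

secret = [0, 1, 1, 0]
shares = [share_pixel(bit) for bit in secret]
# White pixels stack to one dark subpixel, black pixels to two:
recovered = [0 if sum(stack(s1, s2)) == 1 else 1 for s1, s2 in shares]
print(recovered)             # -> [0, 1, 1, 0]
# Either share alone is a uniformly random pair, revealing nothing.
```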

http://www.frontiernet.net/~jmb184/demo.html
John

------------------------------

From: [EMAIL PROTECTED] (Mark Wooding)
Subject: Re: Hill's algorithm
Date: 29 May 2000 11:29:05 GMT

Benjamin Goldberg <[EMAIL PROTECTED]> wrote:
> Mok-Kong Shen wrote:
>
> > The problem with Hill's method lies more in the fact that, if a pair
> > of plaintext and ciphertext is available, then the matrix can be
> > recovered.

My brain's not been working well recently, but, assuming it's accurate
right now, this is wrong: you need n linearly independent plaintext/
ciphertext pairs, where n is the size of your plaintext and ciphertext
vectors; if you have fewer than this, then the system of equations isn't
fully determined.  (Of course, if you've recovered k < n pairs, you can
decrypt any message which is spanned by the k vectors you've already
got.)
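
To make the recovery concrete, a sketch for the n = 2 case with hypothetical
numbers: given two linearly independent plaintext/ciphertext column pairs,
the key matrix falls out as K = C * P^-1 (mod 26).

```python
# A sketch with hypothetical numbers: recovering a 2x2 Hill key matrix from
# two linearly independent plaintext/ciphertext pairs, working mod 26.

def inv_mod26(a):
    # scalar inverse mod 26 (exists only when gcd(a, 26) == 1)
    for x in range(1, 26):
        if (a * x) % 26 == 1:
            return x
    raise ValueError("not invertible mod 26")

def mat_mul(A, B):
    # 2x2 matrix product mod 26
    return [[sum(A[i][k] * B[k][j] for k in range(2)) % 26
             for j in range(2)] for i in range(2)]

def mat_inv(A):
    # 2x2 matrix inverse mod 26 via the adjugate
    d = inv_mod26((A[0][0] * A[1][1] - A[0][1] * A[1][0]) % 26)
    return [[( A[1][1] * d) % 26, (-A[0][1] * d) % 26],
            [(-A[1][0] * d) % 26, ( A[0][0] * d) % 26]]

K = [[3, 3], [2, 5]]                   # the secret key matrix
P = [[7, 2], [4, 19]]                  # columns: two known plaintext vectors
C = mat_mul(K, P)                      # the matching ciphertext columns
K_recovered = mat_mul(C, mat_inv(P))   # K = C * P^-1 (mod 26)
print(K_recovered)                     # -> [[3, 3], [2, 5]]
```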

However, what Mok-Kong Shen doesn't seem to understand is the difference
between a matrix multiplication as a cipher by itself, which is what
Hill did, and as a component in a modern block cipher.

Use of a matrix multiplication as a component in a block cipher is not
Hill's cipher, just as use of modular addition as a component in a block
cipher isn't Caesar's cipher.

> How?  Please keep in mind that I'm not using a "pure" Hill's cipher.
> I'm doing 8 rounds of whitening, matrix multiplication, and diffusion.

Indeed.  Where did you get that shift/xor diffusion step from?

-- [mdw]

------------------------------

From: [EMAIL PROTECTED] (Mark Wooding)
Subject: Re: Smart TC1 optimizations?
Date: 29 May 2000 11:47:09 GMT

tomstd <[EMAIL PROTECTED]> wrote:

> As you can see a p=1/64 goes to another p=1/64 and onto a p=6/256.
> What I want is p=>2/256 to move to a lower p=<5/256 value, and to
> avoid anything going upwards for example p=<5/256 to p=>5/256.
> 
> If my bit perm does this it should be resistant to differential
> cryptanalysis right?

It looks like you're trying to design your S-box and permutation
separately, or at least in order.  I think that's the wrong strategy:
you should design both together, so that they complement[1] and
strengthen each other.

From the point of view of differential cryptanalysis, the real danger in
a substitution/bit-permutation structure is low-Hamming-weight output
differences.  Once you have a one-bit output difference, that can only
affect one S-box in the next round -- the permutation has no ability to
spread differences on its own.

This is why MDS matrices are such a good idea.  If you have an n x n MDS
matrix, any nontrivial difference through the matrix will have at least
n + 1 elements active on both sides.  You can use this fact to prove
useful things about the number of active S-boxes which must be active in
any r-round characteristic.

Anyway, back to the point in hand: if you're really just playing with
bit permutations, I think you want to design your S-box so that
low-Hamming-weight output differences have low probability, and
particularly ensure that differentials with low-weight input and output
differences have low probability.
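
As a concrete aside, probabilities like p = 6/256 are entries of the S-box's
difference distribution table. A sketch of computing its worst entry (toy
random S-box for illustration, not TC1's actual one):

```python
# Sketch: probabilities like p = 6/256 are entries of an S-box's difference
# distribution table; this computes the worst (largest) entry, which a
# differential attack tries to chain through the rounds.
import random

def ddt_max(sbox):
    n = len(sbox)
    worst = 0
    for din in range(1, n):                  # every nonzero input difference
        counts = [0] * n
        for x in range(n):
            counts[sbox[x ^ din] ^ sbox[x]] += 1
        worst = max(worst, max(counts))
    return worst                             # best differential has p = worst/n

sbox = list(range(256))
random.Random(1).shuffle(sbox)               # a random bijective 8x8 S-box
print(ddt_max(sbox), "/ 256")
```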

Another thing you could consider is a DES-like expansion permutation,
which would enable a single bit change to affect multiple S-boxes in the
next round.

[1] That's not like DES's complementation property.  By the way, did
    anyone mention that TC1 has one of these?  In fact, it's worse than
    that.  Let d be any 32-bit XOR difference.  Then applying d to each
    word of a plaintext and key causes the ciphertext to have a
    difference of d too.  This improves brute-force key search by a
    factor of 2^32.  XORing in round constants won't help here; adding
    them might.
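
A tiny illustration of that last sentence (the principle only, not TC1
itself): XOR with a round constant commutes with any XOR difference d,
while modular addition generally destroys it.

```python
# Illustration: XOR with a constant commutes with any XOR difference d,
# while addition mod 2^32 generally does not (constants are arbitrary).
MASK = 0xFFFFFFFF
c = 0x9E3779B9            # an arbitrary 32-bit round constant
d = 0x80000001            # an arbitrary XOR difference
x = 0x12345678

xor_step = lambda v: v ^ c
add_step = lambda v: (v + c) & MASK

assert xor_step(x ^ d) == xor_step(x) ^ d      # d passes straight through
print(hex(add_step(x ^ d) ^ add_step(x)))      # -> 0x80000003, not d
```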

-- [mdw]

------------------------------

From: David A Molnar <[EMAIL PROTECTED]>
Subject: Re: Math problem (P=NP) prize and breaking encryption
Date: 29 May 2000 11:40:34 GMT

Axel Lindholm <[EMAIL PROTECTED]> wrote:
> There are, today, two categories of problems: P and NP. To understand why RSA
> would become useless, we look at which category the RSA problem belongs to. It
> is believed that factoring huge numbers into huge prime parts is NPC
> (NP-complete).

Actually, it is not known that factoring is NP-complete. 
It is known that factoring is in NP and in coNP, which leads some people
to conjecture that factoring is *not* NP-complete, but instead somehow
easier. Although no one knows how easy, of course. 

> What is NPC then?
> These are the problems that we don't know how to solve quickly; the only
> algorithms we know for solving problems of this type have so-called
> exponential worst-case complexity.

The best known algorithm for factoring is subexponential. 

Thanks, 
-David

------------------------------

Subject: Re: Smart TC1 optimizations?
From: tomstd <[EMAIL PROTECTED]>
Date: Mon, 29 May 2000 04:59:45 -0700

In article <[EMAIL PROTECTED]>, [EMAIL PROTECTED]
(Mark Wooding) wrote:
>tomstd <[EMAIL PROTECTED]> wrote:
>
>> As you can see a p=1/64 goes to another p=1/64 and onto a p=6/256.
>> What I want is p=>2/256 to move to a lower p=<5/256 value, and to
>> avoid anything going upwards for example p=<5/256 to p=>5/256.
>>
>> If my bit perm does this it should be resistant to differential
>> cryptanalysis right?
>
>It looks like you're trying to design your S-box and permutation
>separately, or at least in order.  I think that's the wrong strategy:
>you should design both together, so that they complement[1] and
>strengthen each other.

The problem is that finding SAC-fulfilling non-linear S-boxes takes a
while, so I decided to make them first and then find good bit
permutations.

>From the point of view of differential cryptanalysis, the real danger in
>a substitution/bit-permutation structure is low-Hamming-weight output
>differences.  Once you have a one-bit output difference, that can only
>affect one S-box in the next round -- the permutation has no ability to
>spread differences on its own.

You mean strict avalanche?  The S-boxes fulfill the property (at least
they should) that flipping one input bit causes four bits to change on
the output with a probability of 1/2.

However, given that an attack can use two-bit differences, I should
change that so that two-bit input differences cause at least three bits
of output difference.  That should help prevent differential attacks.

>This is why MDS matrices are such a good idea.  If you have an n x n MDS
>matrix, any nontrivial difference through the matrix will have at least
>n + 1 elements active on both sides.  You can use this fact to prove
>useful things about the number of active S-boxes which must be active in
>any r-round characteristic.
>
>Anyway, back to the point in hand: if you're really just playing with
>bit permutations, I think you want to design your S-box so that
>low-Hamming-weight output differences have low probability, and
>particularly ensure that differentials with low-weight input and output
>differences have low probability.

I want to use bit permutations to keep the algorithm simple on all
platforms.  In smart cards I can just store the 8x8 S-boxes instead of
the 8x32 ones.

>Another thing you could consider is a DES-like expansion permutation,
>which would enable a single bit change to affect multiple S-boxes in the
>next round.

Expansion takes too much time.  I want this to be as fast as, say,
Blowfish.

>[1] That's not like DES's complementation property.  By the way, did
>    anyone mention that TC1 has one of these?  In fact, it's worse than
>    that.  Let d be any 32-bit XOR difference.  Then applying d to each
>    word of a plaintext and key causes the ciphertext to have a
>    difference of d too.  This improves brute-force key search by a
>    factor of 2^32.  XORing in round constants won't help here; adding
>    them might.

I can just as easily add them.

Thanks for the heads up.

Tom




------------------------------

From: Mark Evans <[EMAIL PROTECTED]>
Crossposted-To: 
uk.media.newspapers,uk.legal,alt.security.pgp,alt.privacy,uk.politics.parliament,uk.politics.crime,talk.politics.crypto,alt.ph.uk,alt.conspiracy.spy,alt.politics.uk,uk.telecom
Subject: Re: RIP Bill 3rd Reading in Parliament TODAY 8th May
Date: Mon, 29 May 2000 13:01:54 +0100

David Boothroyd <[EMAIL PROTECTED]> wrote:
> In article <[EMAIL PROTECTED]>, Bob
> <[EMAIL PROTECTED]> wrote:
>> > The proposals in the Bill are exactly the same as the ones Labour suggested
>> > before the election so there really isn't anything for anyone to get
>> > worked up about. 
>> 
>> They didn't exactly go out of their way to publicise this gross human
>> rights violation though, did they.

> It is not a human rights violation. The s.19 certificate states that the
> Bill complies with all the UK's human rights obligations.

Could this simply be the equivalent of a piece of spam stating "This
is not spam", though?

-- 
Mark Evans
St. Peter's CofE High School
Phone: +44 1392 204764 X109
Fax: +44 1392 204763

------------------------------

From: jungle <[EMAIL PROTECTED]>
Subject: Re: Magnetic Remanence on hard drives.
Date: Mon, 29 May 2000 08:16:15 -0400

you are creating another environment for the challenge ...

the file copy to floppy & the wipe occurred on the same drive ...
in my company no one stored ASCII text files on floppy ...
the user had a 1 MB ScramDisk container ...
recovery of 1% of the data for 1 mil is just a funny proposition ...

the acceptable bet would be :
================================
you recover the floppy data, you get 1 mil ...
you DO NOT recover it, you pay 1 mil ...

the next point is, how much time do you need for this TINY FLOPPY ?

[EMAIL PROTECTED] wrote:
> 
> Try this one.
> 
> Let me choose the make and model of floppy drive used.  Both will be
> commercial off-the-shelf models.  We'll put one drive in Computer A and
> put the other drive in Computer B.  Allow me to "age" either drive
> by running it through duty cycles equivalent to normal usage over time.
> Allow me to format a floppy on either drive.  Now put the floppy in
> drive A and fill it up with several ASCII data files.  Now take that
> floppy to Computer B and run a 3 pass PGP wipe.
> 
> Give the disk to me.  If I can retrieve one percent of the data on the
> disk, you give me a million bucks.  If I fail, then the cost of retrieval
> is on me.
> 
> The requirement to protect state secrets is orders of magnitude more
> stringent than the ability to recover lost data.  In the former, any
> small piece of data can be very damaging to national security.  In the
> latter, the recovery usually must be a significant percentage of the
> loss.
> 
> Furthermore, I am certain that if the military can retrieve a 3 pass PGP
> wipe, the intelligence windfall from the capability will FAR OUTWEIGH
> any commercial benefit.
> 
> If the information on that floppy allows your enemy to take
> millions from you, would you really feel safe with just a 3 pass PGP
> wipe?  If your answer is yes, how about taking me up on my
> offer?  Floppies cost pennies.  If you really want to protect the
> information on that floppy, burn it.
> 
> In article <[EMAIL PROTECTED]>,
>   jungle <[EMAIL PROTECTED]> wrote:
> > in the past, several of the market data recovery companies refused to
> > attempt to recover data wiped by pgp 3 pass from floppy disk ...
> >
> > without even trying to attempt it ...
> >
> > this is just A PAPER, PAPER FANTASY ... or academic dislocation ...
> >
> > I have to this day the floppy that has been wiped [ by accident ] by
> > one of the company employees ...
> >
> > do you like to help to recover data from it ?
> > it is just a floppy disk ... wiped only with 3 passes under pgp ...
> >
> > Jonathan Thornburg wrote:
> > >
> > > In article <[EMAIL PROTECTED]>,
> > > Trevor L. Jackson, III <[EMAIL PROTECTED]> wrote:
> > > >Do you have a reference handy?
> > >
> > > It's not a commercial service, but
> > >
> > > > ## http://www.cs.auckland.ac.nz/~pgut001/pubs/secure_del.html
> > > >
> > > >         Secure Deletion of Data from Magnetic and Solid-State Memory
> > > >
> > > >                                Peter Gutmann
> > > >                        Department of Computer Science
> > > >                            University of Auckland
> > > >                          [EMAIL PROTECTED]
> > > >
> > > >    This paper was first published in the Sixth USENIX Security Symposium
> > > >             Proceedings, San Jose, California, July 22-25, 1996
> > >
> > > explains how/why data recovery *is* both possible and practical,
> > > in some detail.
> >
> >
> 



------------------------------

From: "Scott Fluhrer" <[EMAIL PROTECTED]>
Subject: Re: Math problem (P=NP) prize and breaking encryption
Date: Mon, 29 May 2000 05:02:07 -0700


Boris Kolar <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]...
>
>
> David Blackman wrote:
>
> > Scott Contini wrote:
> > >
> >
> > > Here's my 2 cents:
> > >
> > > Factoring is certainly not NP-complete (though a proof of this
> > > does not exist).  We can factor in sub-exponential time, but the
> > > fastest algorithms for solving general NP-complete problems take
> > > exponential time.
> > >
> > > Scott
> >
> > I think there is a proof that if factoring is NP-complete, then either
> > P=NP, or the extended Riemann hypothesis fails. I vaguely remember this
> > is referenced in Garey and Johnson (but i've lost my copy). A lot of
> > people would consider this to prove that factoring is not NP-complete,
> > although technically it is not quite such a proof.
> >
> > It hinges on factoring being (almost certainly) in co-NP.
>
> Factoring is in fact in the intersection of NP and co-NP.

To make this correct statement more explicit, both of these decision
problems have polynomially verifiable proofs:

  Given two integers n and a, do there (not) exist integers b > 1 and c
  such that b*c = n and b < a?

In the positive case, a proof can consist of (b, c).

In the negative case, a proof can consist of the factorization of n, along
with primality proofs of all the factors.
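
Sketched as code (hypothetical function name), the positive-case verifier
really is trivial: the certificate (b, c) is checked with one
multiplication and two comparisons.

```python
# Sketch of the positive-case verifier: the certificate (b, c) is checked
# in polynomial time with one multiplication and two comparisons.
def verify_factor_below(n, a, b, c):
    return b > 1 and b < a and b * c == n

print(verify_factor_below(91, 10, 7, 13))    # -> True  (7*13 = 91, 7 < 10)
print(verify_factor_below(91, 7, 7, 13))     # -> False (witness not below a)
```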

> All problems that are relevant to public-key cryptography are there.

Question: is that true for all known public-key cryptosystems, or is it
known to be inherent to public-key cryptography?

> The problem:
>     NP and co-NP =? P
> is even more important, than P =? NP. It's widely believed, that the
answer
> is NO (the NO answer would also imply P /= NP).
>
> Finding a problem, that is (NP and co-NP)-complete, would also be very
> important for cryptography.

Actually, I didn't think we knew of a problem that was known to be in both
NP and co-NP but not in P, even if you make the initial assumption that
NP != coNP.

--
poncho




------------------------------

From: Mark Evans <[EMAIL PROTECTED]>
Crossposted-To: 
uk.media.newspapers,uk.legal,alt.security.pgp,alt.privacy,uk.politics.parliament,uk.politics.crime,talk.politics.crypto,alt.ph.uk,alt.conspiracy.spy,uk.telecom
Subject: Re: RIP Bill 3rd Reading in Parliament TODAY 8th May
Date: Mon, 29 May 2000 13:11:50 +0100

Adrian Kennard <[EMAIL PROTECTED]> wrote:
> David Boothroyd wrote:
>>...
>> I thought you said you were too young. The Poll Tax was replaced because
>> Conservative MPs realised it was too unpopular. The idea that the police
>> being able to demand that encrypted data (about which they have a reasonable
>> suspicion) be decrypted is in some way unreasonable is absurd.

> The idea that the police may have unfounded suspicion.

Or that the police may be corrupt, attempting to cover up
a mistake or criminal action, etc.

> The idea that the individual may not wish to disclose a key
> which can then be used to decode everything they have ever
> received regardless of relevance, and sign things with their name, etc.

Thus also rendering digital signatures potentially useless.

There is also the same issue as comes up with key escrow.
What level of security will be applied to the data? 

> The idea that (a) if you dont have it then you can simply
> say that - i.e. anyone gets off and the law is stupid, or (b)
> that you have to somehow prove you dont have it, which is
> just about impossible, and so anyone can easily be framed
> by anybody else.

One of the "protests" is to attempt to do this to Jack Straw.

> I cannot see it helping anything. For a criminal, if you are asked
> to hand over a key that means that you know they dont have better
> evidence - all the more reason not to incriminate yourself (basic
> right, isn't it?)

But a good reason to lead them on a wild goose chase.

The supposed targets of this legislation are organised gangs
and terrorists.  Exactly the sort of people who consider they are
fighting a "war" against the police and thus know all about
centuries-old tactics.

-- 
Mark Evans
St. Peter's CofE High School
Phone: +44 1392 204764 X109
Fax: +44 1392 204763

------------------------------

From: Mok-Kong Shen <[EMAIL PROTECTED]>
Subject: Re: On dynamic random selection of encryption algorithms
Date: Mon, 29 May 2000 14:45:15 +0200



Dan Kaminsky wrote:

> "Mok-Kong Shen" <[EMAIL PROTECTED]> wrote:
>
> > Let there be a set of n block ciphers that are deemed
> > appropriate for being used in multiple encryptions with
> > m constituent ciphers (m1 <= m <= m2). Let a PRNG be
> > chosen by the communication partners and a secret session
> > key be given. We'll use the key as the seed to the PRNG
> > to generate pseudo-random number sequences required below.
> >
> > For each block of plaintext to be encrypted, first
> > generate a random number m in [m1, m2]. Then randomly
> > choose m algorithms from the available set of n
> > algorithms. Subsequently perform a random permutation
> > of the ordering of these to determine the sequential order
> > with which the m algorithms are to be placed in the stack
> > (eventually avoiding, if desired, the case that two
> > consecutive algorithms in the stack are the same). Finally
> > generate m keys that are required by these algorithms to do
> > the encryption of the given block of plaintext.
>
> Is your system more robust, in that a compromise of any given algorithm only
> defeats blocks encoded with that algorithm when used in ECB mode, and only
> defeats n/m messages when used in any form of chained mode?  Or is it less
> secure, since the increased complexity reduces the security of the total
> message to that of the weakest algorithm?

There are a number of issues addressed. First is the use of multiple
encryptions. Terry Ritter has recently aptly argued for that. Using a
PRNG to determine the stack has the convenience of not needing a
separate negotiation of its composition. If you don't like cipher
stacks, i.e. you use m=1, you still have a diversity and reduce the
amount processed by any one algorithm. Because the key is variable,
those techniques of attack that need huge amounts of materials
processed by the same key are no longer feasible. In case the block
sizes are equal, you can use whichever mode you want (simply
consider a cipher stack as a single algorithm). I assume that the
algorithms (and the stack of algorithms) are equally strong or that
the difference in strength is not an issue for your application, i.e. you
can be satisfied with the weakest of the ensemble of algorithms that
you have selected and are consequently certainly not unhappy
to occasionally use some of the stronger ones.
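
As an illustration of the selection step (hypothetical cipher names, and
Python's random.Random stands in for whatever cryptographic PRNG the
partners agreed on; sampling without replacement also sidesteps the
consecutive-duplicate question):

```python
# Toy sketch of the per-block selection: both partners seed the same PRNG
# from the shared session key, so no separate negotiation of the stack
# composition is needed.  Cipher names here are placeholders.
import random

CIPHERS = ["A", "B", "C", "D", "E"]      # the ensemble of n algorithms
M1, M2 = 2, 4                            # bounds m1 <= m <= m2

def select_stack(prng):
    m = prng.randint(M1, M2)                       # random stack depth
    stack = prng.sample(CIPHERS, m)                # m algorithms, in order
    keys = [prng.getrandbits(128) for _ in stack]  # one key per algorithm
    return list(zip(stack, keys))

prng = random.Random("shared session key")         # seeded from the session key
print(select_stack(prng))
```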

> In thinking about it, I believe I've come to the root of your problem, which
> you try to avoid by specifying the generation of m keys for m ciphers:  If
> each cipher uses the same key, a compromise on *any* cipher will compromise
> *every* cipher.  So you get around this by having additional keying
> material--essentially, more bits.
>
> If I'm going to have more bits, it's generally presumable that it's more
> secure to have those bits thoroughly mixed rather than segmented by
> subcipher.  Cracking one DES key does not grant you a third of a 3DES
> encrypted message--for a reason!

You can use more key bits by designing block ciphers that utilize the
larger amount of key bits (compare AES with DES) or you can do
in the way like 3DES, i.e. build up on block ciphers that are already
available without new designs. You can view my proposal as a
generalization of the idea underlying 3DES, which is a very special
case of a cipher stack employing constant keys. Note that my
proposal is flexible in two respects. First, you can choose the
ensemble of n algorithms. Second, you can choose your PRNG; in
particular, it is rather easy to use longer seeds at any time, if
requirement of that arises.

M. K. Shen





------------------------------

From: Mok-Kong Shen <[EMAIL PROTECTED]>
Subject: Re: No-Key Encryption
Date: Mon, 29 May 2000 14:46:52 +0200



Mok-Kong Shen wrote:

> The scheme is probably what initiated the idea of current public key
> systems. It apparently has no specific name and is described on p.63 of
>
>      A. Salomaa, Public-Key Cryptography. Springer-Verlag, 1990.

It occurs to me that there might be something that could be further
discussed. If one could generate high quality bit sequences and
could avoid manipulations, e.g. through appending an encrypted
hash, and the transmission cost were low enough, couldn't the
scheme be a useful one in practice? (Note that the problem of
transport of OTP is here circumvented.) What are the vulnerable
points? Thanks in advance.
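
One vulnerable point worth noting: if the commutative "lock" in a
three-pass no-key exchange is instantiated as XOR with a random pad (a
natural reading of "high quality bit sequences"), a passive eavesdropper
recovers the message by XORing the three transmissions together. A sketch
(hypothetical values):

```python
# Classic pitfall, sketched: a three-pass no-key exchange built on XOR pads
# leaks the message to anyone who records all three transmissions.
import secrets

msg = int.from_bytes(b"attack at dawn", "big")
a = secrets.randbits(112)     # sender's secret pad
b = secrets.randbits(112)     # receiver's secret pad

t1 = msg ^ a                  # pass 1: sender -> receiver
t2 = t1 ^ b                   # pass 2: receiver -> sender
t3 = t2 ^ a                   # pass 3: sender removes own pad
# The receiver computes t3 ^ b == msg -- but so can the eavesdropper:
print((t1 ^ t2 ^ t3).to_bytes(14, "big"))   # -> b'attack at dawn'
```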

M. K. Shen


------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list (and sci.crypt) via:

    Internet: [EMAIL PROTECTED]

End of Cryptography-Digest Digest
******************************
