Cryptography-Digest Digest #791, Volume #10      Sat, 25 Dec 99 03:13:00 EST

Contents:
  Re: compression & encryption (Guy Macon)
  Re: compression & encryption (SCOTT19U.ZIP_GUY)
  Re: DES Problem! (John Savard)
  Re: financial crypto & e-cash milis/newsgroup (David A Molnar)
  Re: Are PGP primes truly verifiable? (David A Molnar)
  Re: Are PGP primes truly verifiable? (Greg)
  Re: ElGamal Opinions, Please (lcs Mixmaster Remailer)
  Re: compression & encryption ("John E. Gwyn")

----------------------------------------------------------------------------

From: [EMAIL PROTECTED] (Guy Macon)
Subject: Re: compression & encryption
Date: 24 Dec 1999 15:54:46 EST

In article <[EMAIL PROTECTED]>, [EMAIL PROTECTED] (John E. Gwyn) 
wrote:
>
>Kenneth Almquist wrote:
>> If your encryption is strong enough to resist a known plaintext
>> attack, then it doesn't matter if your compression algorithm is
>> bijective or not.
>
>That's almost right.  It is in principle *possible* for some
>compression scheme to "resonate" with the outer encryption and
>weaken it, but that is most unlikely.

Correct me if I am wrong (Clueless Newbie alert), but it would seem
to follow that blindly running the same encryption scheme (especially
with the same key) twice might very well have such "resonance".
I would imagine that this varies according to encryption method.
It also seems to me that an OTP would be immune to such "resonance".
Am I understanding this correctly?  Has anyone ever analysed running
popular encryption schemes 3 times?  I thought I had read about "Triple
DES" and Cyphersaber-2 doing this, but my memory may be flawed...


------------------------------

From: [EMAIL PROTECTED] (SCOTT19U.ZIP_GUY)
Subject: Re: compression & encryption
Date: Fri, 24 Dec 1999 22:12:08 GMT

In article <83v0a5$f2p$[EMAIL PROTECTED]>, [EMAIL PROTECTED] (Kenneth Almquist) 
wrote:
>[EMAIL PROTECTED] (SCOTT19U.ZIP_GUY) wrote:
>> In article <83ofas$omq$[EMAIL PROTECTED]>, [EMAIL PROTECTED] (Kenneth
> Almquist) wrote:
>>> So in most applications an attacker can get sufficient information that
>>> additional information provided by a compression algorithm will not make
>>> an appreciable difference.  The solution is to use an encryption algorithm
>>> which can resist a known plaintext attack.
>
>>   Sure one should try to use secure methods which resist a known
>> plaintext attack.  But there are newer plaintext attacks every day and
>> many that are not published in the open literature.  How do you protect
>> against them?  You can't.
>
>You can encrypt twice, using two different algorithms.  This will
>protect you if one of the algorithms is broken.  It may even protect
>you if both algorithms are broken, because it does not allow an
>attacker to apply a known plaintext attack to either algorithm.
>
>Compressing using a nonbijective algorithm and then encrypting twice
>may be faster than compressing using a bijective algorithm and then
>compressing once.  That is because some of the compression algorithms
>which provide the best combination of speed and compression ratio
>(such as the one used by gzip) are not bijective.
     The only reason some of the best methods at present are not
bijective is because designers have not given this much thought.
Compression is not the same as encryption, so there has been
little work in this area.
>
>I believe that except in special circumstances, it is a mistake to
>assume that an opponent will not be able to mount a known plaintext
>attack.  For this reason, I believe that the apparent improvement
>in security offered by bijective compression is an illusion.  If
>your encryption is strong enough to resist a known plaintext attack,
>then it doesn't matter if your compression algorithm is bijective or
>not.  If your encryption algorithm is vulnerable to a known plaintext
>attack, then the chances are that a determined attacker will obtain
>a known plaintext an break your cipher, so it is not likely to matter
>whether you use bijective encryption.  Sure, it's *possible* for
>bijective compression to stop an attack on a cryptosystem, just
>as it's possible that prefixing the encrypted text with an obscene
>message will stop an attacker with delicate sensibilities.  My point
>is that "possible" is not the same as "probable."  Designers of
   But what you seem to lack knowledge of is that if a crypto system is
vulnerable to certain sorts of plaintext attack and you're dumb enough to use
compression that is not bijective, then you open the system up
to a ciphertext-only attack, which is far worse. Once you learn more
about crypto maybe you will be able to understand the difference.
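
To make that concrete, here is a minimal sketch (a toy one-byte XOR
"cipher" stands in for a weak cipher; the key and message are made up).
The fixed gzip header gives the attacker a test that needs no known
plaintext at all:

import gzip

# Toy single-byte XOR "cipher" -- a stand-in for any weak cipher.
def xor(data, key):
    return bytes(b ^ key for b in data)

compressed = gzip.compress(b"the secret message")  # begins with \x1f\x8b
ciphertext = xor(compressed, 0x5A)                 # "encrypt" under key 0x5A

# Ciphertext-only attack: try every key and look for the gzip magic
# bytes that a non-bijective compressor stamps on every output.
for key in range(256):
    if xor(ciphertext, key)[:2] == b"\x1f\x8b":
        print("key recovered:", hex(key))
        break

With a bijective compressor there is no such header: every candidate key
yields a decompressible string, and the test above tells the attacker
nothing.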

>cryptographic systems do better to concentrate their efforts on
>other approaches.
>                                Kenneth Almquist
>
>
>P.S.  Now that I've posted this, is it too late for me to apply
>      for a patent on the use of obscenity as a cryptographic
>      technique?  :-)

   It might be.  Why don't you try to get a patent on it?  I am sure the
patent office has gone through a dumbing down of personnel like
most of the rest of government since Clinton got in power.

David A. Scott
--

SCOTT19U.ZIP NOW AVAILABLE WORLD WIDE
http://www.jim.com/jamesd/Kong/scott19u.zip
                    
Scott famous encryption website NOT FOR WIMPS
http://members.xoom.com/ecil/index.htm

Scott rejected paper for the ACM
http://members.xoom.com/ecil/dspaper.htm

Scott famous Compression Page WIMPS allowed
http://members.xoom.com/ecil/compress.htm

**NOTE EMAIL address is for SPAMMERS***

I leave you with this final thought from President Bill Clinton:

   "The road to tyranny, we must never forget, begins with the destruction of the 
truth." 

------------------------------

From: [EMAIL PROTECTED] (John Savard)
Subject: Re: DES Problem!
Date: Fri, 24 Dec 1999 16:57:28 GMT

"Daniel Suberviola" <[EMAIL PROTECTED]> wrote, in part:

>Round 15: 00001101011110100100111101000100 00111100101110100000001100000011
>USING KEY 000101100110110101010101011110101011010011101101

>Round 16: 00111100101110100000001100000011 11011111101010111010100101100011
>USING KEY 010111100110110101010101011010101111010010101111

>*** DECRYPTION

>Round 0: 00111100101110100000001100000011 11011111101010111010100101100011
>(Round 0 is the original ciphertext after the initial permutation.)

>Round 1: 11011111101010111010100101100011 11000101101111100001110001010111
>USING KEY 010111100110110101010101011010101111010010101111

The Round 16 output and the input to Round 1 of decipherment are the
same, so the IP and IIP are working OK.

After Round 1, the LH is unchanged, and the RH is modified.

And after Round 16, relative to the swapped Round 15 output, again
the LH is unchanged and the RH is modified.

So the trouble is that

11000101101111100001110001010111 =
f(subkey16,11011111101010111010100101100011) xor
00111100101110100000001100000011

in decipherment, while

11011111101010111010100101100011 =
f(subkey16,00111100101110100000001100000011) xor
00001101011110100100111101000100

is what happened in the last encipherment step.

Well, that shows what the problem is. You're using the wrong half of
the block as the input to the f-function in your first decipherment
round. Maybe you forgot that swapping halves is omitted after round 16
in DES.
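
The structure is easier to see in code.  Here is a schematic Feistel
sketch (the f() below is an arbitrary stand-in, not the real DES
f-function; the test values are just the Round 15 halves from above):

# Schematic Feistel network; f() is a stand-in, NOT the DES f-function.
def f(subkey, half):
    return (half * 31 + subkey) & 0xFFFFFFFF

def encrypt(L, R, subkeys):
    for k in subkeys[:-1]:
        L, R = R, L ^ f(k, R)            # rounds 1..15: swap halves
    return L ^ f(subkeys[-1], R), R      # round 16: swap is omitted

def decrypt(L, R, subkeys):
    L = L ^ f(subkeys[-1], R)            # undo round 16 first -- no swap,
    for k in reversed(subkeys[:-1]):     # and f() gets the correct half
        L, R = R ^ f(k, L), L            # un-swap, then undo each round
    return L, R

subkeys = list(range(1, 17))
L, R = 0x0D7A4F44, 0x3CBA0303
assert decrypt(*encrypt(L, R, subkeys), subkeys) == (L, R)

Feed the wrong half to f() in the first decipherment round, or add a
swap where none belongs, and the assert fails in exactly the way your
trace shows.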

John Savard (jsavard<at>ecn<dot>ab<dot>ca)
http://www.ecn.ab.ca/~jsavard/crypto.htm

------------------------------

From: David A Molnar <[EMAIL PROTECTED]>
Subject: Re: financial crypto & e-cash milis/newsgroup
Date: 25 Dec 1999 01:35:49 GMT

Arrianto Mukti Wibowo <[EMAIL PROTECTED]> wrote:
> Hello,

> Does anybody know any mailing list or newsgroup for people who are interested
> in financial cryptography, especially e-cash?

Bob Hettinga runs the "digital commerce society of boston" list.
Despite the title, it may be useful even if you do not live in
Boston/Cambridge. 

http://www.shipwright.com/dcsb.html

contains info for signing up. 

Sometimes it has interesting original articles; at other times it just 
forwards financial crypto posts from cypherpunks and coderpunks. 

-David Molnar

------------------------------

From: David A Molnar <[EMAIL PROTECTED]>
Crossposted-To: talk.politics.crypto
Subject: Re: Are PGP primes truly verifiable?
Date: 25 Dec 1999 02:12:45 GMT

Greg <[EMAIL PROTECTED]> wrote:
> Since PGP primes are getting to be very large now (to keep
> up with demand for strength against newer computers), it
> seems to me that they are becoming more impossible to
> verify as true primes.

> Is this a reasonable assumption?

It's not clear to me whether you're worried about verifying primes 
for use in key generation, or whether you're worried about certifying the
"strength" of a given public key. Each situation has a different notion of
"verify."

Other posts in this thread have brought up the first situation. From what
I can tell, longer primes mean that the primality tests will run longer,
but not too much longer. It will still be possible to find 2048-bit RSA
keys if we need them. 
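
For a feel of the cost, here is a bare-bones Miller-Rabin sketch
(illustrative only; real key generators add trial-division sieving and
other refinements):

import random

def is_probable_prime(n, rounds=40):
    """Composite -> always False; "prime" is wrong with prob <= 4**-rounds."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d, s = d // 2, s + 1             # write n - 1 = d * 2**s, d odd
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False                 # a is a witness: n is composite
    return True

Each pow() is roughly cubic in the bit length with schoolbook
arithmetic, so doubling the prime size costs on the order of 8x per
exponentiation -- slower, but hardly prohibitive.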

In the second situation, for RSA, it's not nearly as straightforward. There
are ways to show that a given number "n" really is the product of two
primes. I'll list some papers below. As far as I can tell, this does
take some time (although still polynomial, so "feasible"). I don't know
of anyone having implemented these proofs, so I can't be as quantitative
as I would like about the performance. 

> With ECC, a private key is simply an integer (any value will do)
> and the public key is always a valid result of a point multiply.
> To be specific, ECC keys are all valid.  Yet, PGP and other IFC
> keys are getting to the point where verifying their validity
> of strength is impossible.

It does seem to take work to show positively that an RSA key is "well
formed." Not impossible, but possibly more work than you or I want
to go through.  

Just a few things (if you don't mind me poking a bit):

Do we need to make a distinction between "valid -- crypto works"
and "valid -- not a weak key"? 

Do you consider the choice of curve part of the choice of key?
Is there a difference between my publishing an RSA public key
which has a special, easy-to-factor form, and my registering an
ECC curve which is supersingular or otherwise flawed?
Does it not matter because I should be using a NIST curve anyway?

Where can I find the proof that all ECC keys are equally strong?
(for which system?)

What do you need to do to convince me that your ECC key possesses enough
"validity of strength"?

That being said, here are some papers I've seen on "How To Prove This
RSA Key Really Is Of The Right Form":

 M. Liskov and R. D. Silverman, A Statistical Limited-Knowledge Proof
   for Secure RSA Keys.
http://grouper.ieee.org/groups/1363/contrib.html (scroll down)

 R. Gennaro, D. Micciancio, and T. Rabin, An Efficient Non-Interactive
 Statistical Zero-Knowledge Proof System for Quasi-Safe Prime Products,
   Proceedings of the Fifth ACM Conference on Computer and Communications
   Security, 1998
link to paper also at 
http://www.counterpane.com/biblio/all-authors-M.html

Same page also has:
 J. Camenisch and M. Michels, Proving in Zero-Knowledge that a Number
   is the Product of Two Safe Primes, EUROCRYPT '99, LNCS v. 1592,
   pages 106-121, Springer Verlag, 1999. 


There are probably more. I'd love to hear about such papers, and
especially whether anyone has implemented these proofs.

Anyway, hope this sheds some light on how "impossible" or not verifying
RSA keys is. It is a problem, but looks solvable. The question is whether
you're willing to pay for the solution. :-)

Thanks, 
-David Molnar

------------------------------

From: Greg <[EMAIL PROTECTED]>
Crossposted-To: talk.politics.crypto
Subject: Re: Are PGP primes truly verifiable?
Date: Sat, 25 Dec 1999 06:28:00 GMT


> It's not clear to me whether you're worried about
> verifying primes for use in key generation, or whether
> you're worried about certifying the "strength" of a given
> public key. Each situation has a different notion of "verify."

In my opinion, and please interject if you feel I am in error,
IFC requires true primes to gain the full strength
of the key size.  If a supposed prime is found not to be a true
prime, then the key can be cracked significantly faster than
its user expects.
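
As a toy illustration (absurdly small numbers, purely to show the
failure mode): if a composite slips through as one of the "primes", the
modulus inherits small factors that an attacker finds instantly.

# Suppose key generation wrongly accepted p = 91 as prime (91 = 7 * 13).
p, q = 91, 103                   # q really is prime
n = p * q                        # published modulus: 9373

# Trial division cracks n at once, because the bogus "prime"
# leaked small factors into the modulus.
f = next(d for d in range(2, n) if n % d == 0)
print(f, n // f)                 # 7 1339 -- n is 7 * 13 * 103, not p * q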

Since the whole issue of PGP, for example, is safety in
privacy, in my mind, I keep asking myself (and now all of
you), "Why would anyone trust an encryption system they
cannot verify is working as advertised?"

> I can tell, longer primes means that the primality tests
> will run longer, but not too much longer. It will still
> be possible to find 2048-bit RSA keys if we need them.

From what you and others have said thus far, the answer is,
"No, we cannot guarantee it 100%."  So then I have to ask,
"What are the tangible risks if it is not truly a prime?"

> Just a few things (if you don't mind me poking a bit):

But I appreciate that you do...

> Do we need to make a distinction between "valid -- crypto works"
> and "valid -- not a weak key"?

My understanding is that if the key is weak then the crypto does
not work as well: it does not work as advertised.  So I believe
the answer is yes.

> Do you consider the choice of curve part of the choice of key ?

No.  The curve is usually chosen before software deployment.
PGP keys, on the other hand, are chosen, like ECC keys, on the
fly.  The only difference is that PGP keys are not 100% guaranteed
even with extensive effort, while ECC keys are 100% guaranteed
with zero effort.
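
To illustrate (a textbook toy curve, y^2 = x^3 + 2x + 3 over GF(97),
absurdly small and purely for show): every integer a user picks yields
a valid public point, with nothing left to verify.

# Toy curve y^2 = x^3 + 2x + 3 mod 97, base point G = (3, 6).
p, a, b, G = 97, 2, 3, (3, 6)

def add(P, Q):
    if P is None: return Q                # None is the point at infinity
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, p - 2, p) % p   # Fermat inverse
    else:
        lam = (y2 - y1) * pow(x2 - x1, p - 2, p) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def mul(d, P):                            # double-and-add
    R = None
    while d:
        if d & 1:
            R = add(R, P)
        P, d = add(P, P), d >> 1
    return R

# ANY private integer gives a public key on the curve -- nothing to test.
for d in range(1, 20):
    pt = mul(d, G)
    assert pt is None or (pt[1]**2 - pt[0]**3 - a*pt[0] - b) % p == 0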

> Is there a difference between my publishing an RSA public key
> which has a special, easy-to-factor form , and my registering an
> ECC curve which is supersingular or otherwise flawed?

Two answers:

Yes.  Unlike the RSA public key, the curve can have years of
testing before it is published.  RSA public keys developed on
the fly are only as good as the short tests they receive.

And no.  There is the possibility that both are weak and that this
is only discovered after use.  The damage can be incredibly extensive.

> Does it not matter because I should be using a NIST curve anyway?

I would have to say that it should matter less as (1) the curve
has been challenged for a lengthy period of time without success
(e.g., the Certicom challenges) and (2) ECC itself becomes better
understood as a science.

> Where can I find the proof that all ECC keys are equally strong?
> (for which system?)
> What do you need to do to convince me that your ECC key possesses
> enough "validity of strength"?

I have two answers here.  First, the validity of strength of
any ECC key rests on demonstrating the cyclic nature of the
curve over the underlying field.  In my mind, that demonstrates
the raw key space to be searched and shows that every integer must
be tested to find the one corresponding to the point in question.
That is the science as we know it today.  To say, "We just found
that the curve Greg published has a flaw and we can use a much
shorter search of the key space!" invites a similar claim in
another time and place for IFC in general.  That is playing
"what if".

Second, if you need more proof, mathematical proof, then this
whole debate comes down to mathematically proving the
intractability of IFC and ECC, which cannot be done for either.
Therefore, I would never assume that someone would think that
I was trying to lead the conversation to that point.  My
question was simply how easy it is, for IFC versus ECC, to
guarantee that a key is as strong as the crypto system
advertises, and whether that is a significant issue (a
significant risk).  I think the key word here is "advertised".
In that context, consider what is advertised for IFC and then
for ECC, and tell me if you think that IFC keys in the 2k range
are really delivering as advertised every time for PGP or an
RSA product.

> Anyway, hope this sheds some light on how "impossible"
> or not verifying RSA keys is. It is a problem, but looks
> solvable. The question is whether you're willing to pay
> for the solution. :-)

I disagree.  In my mind, what I see is a vast difference in
the effort of finding a valid key ON THE FLY: nearly impossible
versus an immediate guarantee.  I am not convinced this is not
an issue for IFC.  In my mind, IFC has no real future with
the key sizes becoming what they are, but I am open to
debate on this.


--
The only vote that you waste is the one you never wanted to make.
RICO- we were told it was a necessary surrender of our civil liberties.
Asset Forfeiture- the latest inevitable result of RICO.
http://www.ciphermax.com/book


Sent via Deja.com http://www.deja.com/
Before you buy.

------------------------------

Date: 25 Dec 1999 06:40:45 -0000
From: lcs Mixmaster Remailer <[EMAIL PROTECTED]>
Subject: Re: ElGamal Opinions, Please

Daniel Bleichenbacher writes, regarding ElGamal:

> Using OAEP is definitely better than PKCS-1 v1.5 to prevent chosen-
> ciphertext attacks. Note that an attacker can take any ElGamal ciphertext
> (g^k, M*y^k) and compute (g^k, (c*M)*y^k) without knowing M. 
> Hence, this is a multiplicative property similar to the one in RSA,
> and the same method can be used to attack ElGamal with a chosen-ciphertext
> attack as can be used to attack RSA.
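
To make the quoted malleability concrete, a toy sketch with artificially
small parameters (p = 467, g = 2, made-up keys; illustrative only):

# ElGamal over a tiny group: prime p = 467, generator g = 2.
p, g = 467, 2
x = 127                          # private key
y = pow(g, x, p)                 # public key

# Encrypt M under y with ephemeral k: ciphertext is (g^k, M * y^k).
M, k = 100, 213
c1, c2 = pow(g, k, p), (M * pow(y, k, p)) % p

# The attacker scales the second component by c without knowing M:
c = 3
forged = (c1, (c * c2) % p)

def decrypt(c1, c2):
    return (c2 * pow(c1, p - 1 - x, p)) % p   # c1^(-x) via Fermat

assert decrypt(*forged) == (c * M) % p        # decrypts to 300 = 3 * 100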

One question is, how "optimal" is OAEP when used in this application?
The proof of optimality was done in the context of RSA encryption.

More fundamentally, when using ElGamal to encrypt a session key, is it
wise to use OAEP on the key, or is it sufficient to just use the raw
session key (which will typically be < 256 bits)?

> I'd strongly recommend using the IEEE P1363 standard. 
> (http://grouper.ieee.org/groups/1363/).

Does this discuss ElGamal encryption?  It does not seem to address
issues specific to this purpose, for example possible leakage with
small subgroups.

There seems to be not nearly as much "lore" and advice available to the
ElGamal implementor as there is with RSA.


------------------------------

From: "John E. Gwyn" <[EMAIL PROTECTED]>
Subject: Re: compression & encryption
Date: Sat, 25 Dec 1999 01:23:17 -0600

Guy Macon wrote:
> [EMAIL PROTECTED] (actually Douglas A. Gwyn) wrote:
> >It is in principle *possible* for some
> >compression scheme to "resonate" with the outer encryption and
> >weaken it, but that is most unlikely.
> Correct me if I am wrong (Clueless Newbie alert) but it would seem
> to follow that blindly running the same encryption scheme (especially
> with the same key) twice might very well have such "resonance".

Indeed.

> I would imagine that this varies according to encryption method.

Yes.

> It also seems to me that an OTP would be immune to such "resonance".

An *ideal* OTP doesn't weaken any system it is concatenated with.

> Am I understanding this correctly?  Has anyone ever analysed running
> popular encryption schemes 3 times?  I thought I had read about "Triple
> DES" and Cyphersaber-2 doing this, but my memory may be flawed...

3DES (with 112-bit key) is definitely stronger (in the minimum
expected work factor sense) than DES (56-bit key), but not as
strong as a plain DES analogue with a 112-bit key.  Its main
practical advantage is that it is equivalent to plain DES when
used with a doubled 56-bit key (both halves equal).

If DES "formed a group" then 3DES would be much weaker.

I haven't studied Cyphersaber-2.

------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list (and sci.crypt) via:

    Internet: [EMAIL PROTECTED]

End of Cryptography-Digest Digest
******************************
