Cryptography-Digest Digest #64, Volume #13        Wed, 1 Nov 00 02:13:01 EST

Contents:
  Re: Is RSA provably secure under some conditions? (David A Molnar)
  Re: RSA Multiprime (David Hopwood)
  Re: Is RSA provably secure under some conditions? (Roger Schlafly)
  Re: End to end encryption in GSM (matt weber)
  Re: XOR based key exchange protocol - flawed? (Dan Oetting)
  Re: Is RSA provably secure under some conditions? (David Wagner)

----------------------------------------------------------------------------

From: David A Molnar <[EMAIL PROTECTED]>
Subject: Re: Is RSA provably secure under some conditions?
Date: 1 Nov 2000 04:53:05 GMT

John Feth <[EMAIL PROTECTED]> wrote:
> theorems, and even the definitions they give for use in their theorems are
> opaque.  I think the opacity is due to the notation they use and the lack
> of an appendix that makes clear how the notation works.

A few years back I wrote an expository paper for a tutorial here which
covered a tiny bit of the Random Oracle Methodology, Revisited paper.
You may find it at 
http://www.hcs.harvard.edu/~dmolnar/crypt-tutorial/tutorial.ps

My paper includes a section which defines some of the notation and tries
to make it more concrete. The notation actually does have motivation in
most instances, I think, even if the motivation is not stated in the
paper itself. 

Comments on the paper are appreciated...I'm sure that a few things are
still unclear. Also, the references and so on are a little out of date
by now.

> It seems to me that including in the paper an actual example of a
> cryptographic system that passed the random oracle test but failed to be
> secure in implementation would go a long way to providing insight into the
> perils of implementation.  Otherwise, I'm left feeling that I'm reading an
> abstraction about an abstraction.

Well, they *do* have an example. As you'll see, however, it's an
extremely contrived example because one of the steps in the system is
"output the secret key." 

It goes like this:

We're going to start out in the random oracle model. There is some random
oracle O() which gives random, but consistent, answers to all questions. 
Now let's build a signature scheme...

1) Pick a function ensemble F. The "function ensemble" can be thought of
   as a way of talking about all the different ways we could use SHA1 or
   some other hash function to make a "random-looking function."
   It's just a collection of functions. 
   The important parameter is a "seed" value, which uniquely picks out
   ONE of the functions in the function ensemble. 

   So concretely, a function ensemble F could consist of all functions
   which look like

        F(x) = SHA1(s || x)

   where s is the "seed" value. 
   Actually, this is a little misleading; there's no reason the functions
   in the ensemble have to look anything like each other. All that we
   care about is that we can pick out a function we want and evaluate
   that function (and do all this efficiently, of course).     

   Call the function picked out by the seed value s  f_s(). 

   Now define a relation R(x, y) = TRUE if and only if
                                        x is a function's seed
                                        y is f_x(x). 

   That is, R() is true only if you feed it the seed of a function
   and the function evaluated on its own seed. 
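For concreteness, the ensemble example and the relation R above can be
sketched in a few lines of Python (a sketch only; SHA1 stands in for the
ensemble exactly as in the f_s(x) = SHA1(s || x) example):

```python
import hashlib

def f(s: bytes, x: bytes) -> bytes:
    # One member of the ensemble F, picked out by the seed s:
    # f_s(x) = SHA1(s || x), as in the example above.
    return hashlib.sha1(s + x).digest()

def R(x: bytes, y: bytes) -> bool:
    # R(x, y) is TRUE iff y is f_x(x): the function whose seed is x,
    # evaluated on its own seed.
    return y == f(x, x)

s = b"some seed value"
assert R(s, f(s, s))           # a seed together with f_s(s) satisfies R
assert not R(s, b"\x00" * 20)  # any other 20-byte value (almost surely) fails
```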

2) Take any old signature scheme with signing algorithm S you like. 
   The new signature scheme has algorithm S'. 

        So for S' on message M:
                1. Check to see if R(M, O(M)) is true,
                where O(M) is the output of the random oracle on M. 
                
                2. If R(M, O(M)) is true, OUTPUT SECRET KEY.

                3. Else, sign the message normally using S(M). 


The point to notice is that for a "real" random oracle, R(M, O(M)) will
be true almost never. The chance that O(M) will just _happen_ to be
the same as the value of f_M(M) is negligible. This is because O(M)
returns some value picked completely at random, while f_M() is some
static object and f_M(M) is thus some single particular value. 

Now let's move away from the random oracle model and into the real world. 
Suppose you decide to use the function  f_s from ensemble F to implement
your random oracle. (why? Maybe you didn't look carefully enough at the
signature scheme).

Now suppose someone tries to sign the seed s; i.e. evaluate S'(s). 

What happens at step 2? Because S' is using the function ensemble F, it
uses M as the seed to pick out a particular function f_s.
It evaluates f_s(s). 
It checks R(s, f_s(s)).

But this is exactly when R(s, f_s(s)) is defined to be true! That's how
R() was set up back in step 1. So S' will now output the secret key of
the signature scheme. Oops.
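The whole break can be run end to end. Below is a toy sketch (all names
hypothetical; the SHA1-based f_s plays the ensemble, and the "normal"
signing step is only a placeholder for some secure underlying scheme S):

```python
import hashlib

SECRET_KEY = b"the signer's long-term secret key"

def f(s: bytes, x: bytes) -> bytes:
    # f_s(x) = SHA1(s || x): one member of the ensemble F.
    return hashlib.sha1(s + x).digest()

def R(x: bytes, y: bytes) -> bool:
    # TRUE iff y == f_x(x): x is a seed and y is that function
    # evaluated on its own seed.
    return y == f(x, x)

def S_prime(M: bytes, O) -> bytes:
    # The booby-trapped scheme S'; O is whatever implements the oracle.
    if R(M, O(M)):
        return SECRET_KEY                         # step 2: catastrophic leak
    return hashlib.sha1(SECRET_KEY + M).digest()  # step 3: stand-in for S(M)

# In the random oracle model O is truly random, so the trap essentially
# never fires.  Now implement O with f_s for some published seed s...
s = b"published seed"
O = lambda M: f(s, M)

# ...and ask for a signature on the seed itself:
assert S_prime(s, O) == SECRET_KEY   # R(s, f_s(s)) is true by construction
assert S_prime(b"any other msg", O) != SECRET_KEY
```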


This shows us that 
                1) If you propose some function ensemble F
                for implementing a "random oracle"
                for example:  f_s(x) = SHA1(s || x)

                2) I can come up with a signature scheme which is
                totally insecure for that ensemble F. 

Extremely annoying. But you could then change the function ensemble 
F and be OK, because R() was tailored for one particular F.
Much of the rest of the Random Oracle, Revisited paper is dedicated to
fixing that problem -- creating a signature scheme which will fail
no matter _what_ F you throw at it. 
I won't try to describe that now. Mostly because I don't fully understand
it now.


Anyway, you asked about "what dangers are there in implementation?"
Unfortunately, this example won't help much with that. Most people are
smart enough not to include an instruction in their signature scheme
which says "output my own secret key."

The paper casts various aspersions on the Fiat-Shamir family of signature
schemes, because they depend heavily on hash functions looking *very*
random. They never ended up making that precise, though. Ross Anderson,
oddly enough, in a paper on "The Classification of Hash Functions" 
addressed some of the same ideas. In fact, it seems to me that if
Anderson had pushed a little bit harder in the right direction, he might
have ended up with a similar result to Random Oracle Revisited -- in
1993.
(this does not in ANY way take away from the genius and creativity of
CGH, of course, in doing the hard work to make the construction work.
It is only an observation about what might have been. Plus as far as I
can tell, Anderson and CGH arrived at their conclusions independently)

(on the other hand, is it possible that we could build a signature scheme
for which the "output my own secret key" isn't obviously part of the
scheme...and only shows up in the real world? that is, how do we know
that an analogue of step 1) isn't 'hidden' in our favorite signature
schemes, and has escaped notice because we haven't been looking for it?)

What the example *does* do is show that the Random Oracle model has
limits. Previously, one could plausibly hold that a proof in the RO
model was just as good as one in the "standard" model. Now we know that
isn't always true - but we don't know when it's true and when it isn't. 
All we have is this extreme example. Figuring that out is going to be
interesting. 

In some sense, this seems like logic just after the discovery that the
halting problem is unsolvable. Indeed, the proof is reminiscent of the
halting problem proof. We have a strange result, and we don't know how
far down it goes. Maybe we could try to fix our model by forbidding
explicitly self-referential R() or "unnatural" signature schemes like the
S' above...but then what does that leave us? and how do we deal with it?
and will this ever have an impact on schemes we care about?

Lots of open questions, which I, at least, find oddly compelling. Still,
it seems unlikely that we'll get from here to a "don't do this while
implementing RSA" kind of result any time soon. 

-David

------------------------------

Date: Wed, 01 Nov 2000 03:40:40 +0000
From: David Hopwood <[EMAIL PROTECTED]>
Reply-To: [EMAIL PROTECTED]
Subject: Re: RSA Multiprime

=====BEGIN PGP SIGNED MESSAGE=====

Francois Grieu wrote:
> [EMAIL PROTECTED] (Scott Contini) wrote:
> > The only thing more ridiculous than Compaq patenting this is
> > RSA Security licensing the patent.
> 
> Agreed, if true.

Absolutely.

> > While (multiple primes) gives some speedups, it opens up the
> > (RSA) algorithm to possible new attacks: if a faster special
> > purpose algorithm than the elliptic curve method is invented,
> > then multi-prime RSA easily could become insecure.
> 
> Well, GNFS and even MPQS are faster than ECM for practical purposes,
> and all three are equally efficient against two-prime and
> multi-prime RSA. The product of 2 random 288-bit primes is just
> as hard to factor as the product of 3 random 192-bit primes, and this
> situation has not evolved in the last 20 years.

192-bit primes are definitely too short. Here's an excerpt from a
previous post of mine (which was considering a different cryptosystem
that relied on moduli with relatively small factors):

=====
The problem is that ECM can now factor numbers where the smallest factor
is about 54 decimal digits (~180 bits), and is very close to being able
to find factors of 60 digits (~200 bits).

According to a post by Paul Zimmermann, titled "New ECM record: up
to 60 digits", in sci.crypt on 5th January:

# On December 26, 1999, Nik Lygeros and Michel Mizony, two math researchers 
# from Lyon (France), found a prime factor of 54 digits of a 127-digit
# composite number with GMP-ECM, a free implementation of the Elliptic
# Curve Method (ECM). According to the table maintained by Richard Brent [1]
# this is the largest prime factor ever found by ECM. The previous record was
# hold by Conrad Curry with a 53-digit prime found in September 1998.
# [...]
# In a recent paper [8], Richard Brent extrapolates the ECM record to be
# of D digits at year about 9.3*sqrt(D)+1932.3. This would give a record
# of D=60 digits at year Y=2004. We strongly believe 60 digits will be
# reached before, perhaps already in 2002 or even this year!
#
# [1] ftp://ftp.comlab.ox.ac.uk/pub/Documents/techpapers/Richard.Brent/champs.txt
# [...]
# [8] Some Parallel Algorithms for Integer Factorisation, Euro-Par 99, cf
#     ftp://ftp.comlab.ox.ac.uk/pub/Documents/techpapers/Richard.Brent/rpb193.dvi.gz

I reckon that something like 6.9*sqrt(D)+1948 is a more realistic fit
(for public distributed attacks, not taking into account dedicated
hardware). That gives 77 digits (~256 bits) in about 2008.
[...] this means that 256 bits is not really long enough.
=====
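A quick sanity check on the two extrapolations quoted above (a sketch;
decimal digits are converted to bits via log2(10), roughly 3.32 bits per
digit):

```python
import math

# Brent's fit, as quoted: a record of D digits at year ~ 9.3*sqrt(D) + 1932.3
assert int(9.3 * math.sqrt(60) + 1932.3) == 2004   # 60 digits around 2004

# The alternative fit proposed above: 6.9*sqrt(D) + 1948
assert int(6.9 * math.sqrt(77) + 1948) == 2008     # 77 digits around 2008

# 77 decimal digits is roughly 256 bits
assert round(77 * math.log2(10)) == 256
```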

IMHO, as a conservative choice for long-term security, about 400 bits is
needed for the smallest factor of a modulus for cryptosystems based on
factoring.
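One way to read that figure for multiprime moduli (a hypothetical helper,
assuming k equal-size random primes in an m-bit modulus, so the smallest
factor is about m/k bits):

```python
def max_prime_count(modulus_bits: int, min_factor_bits: int = 400) -> int:
    # With k equal-size random primes, each factor is about
    # modulus_bits / k bits; keeping every factor at or above
    # min_factor_bits (the conservative floor suggested above) caps k.
    return modulus_bits // min_factor_bits

assert max_prime_count(1024) == 2   # 1024-bit modulus: two primes only
assert max_prime_count(2048) == 5   # 2048-bit modulus: at most five
```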

- -- 
David Hopwood <[EMAIL PROTECTED]>

Home page & PGP public key: http://www.users.zetnet.co.uk/hopwood/
RSA 2048-bit; fingerprint 71 8E A6 23 0E D3 4C E5  0F 69 8C D4 FA 66 15 01
Nothing in this message is intended to be legally binding. If I revoke a
public key but refuse to specify why, it is because the private key has been
seized under the Regulation of Investigatory Powers Act; see www.fipr.org/rip


=====BEGIN PGP SIGNATURE=====
Version: 2.6.3i
Charset: noconv

iQEVAwUBOf+QaDkCAxeYt5gVAQGYpAgAqB6d/VXXe4CqU3BLynrZOz8acJTVrSz/
Yuc+DAa82Qbh4VMGW4JBsOb1opURtpGip2gu2jdo9liPQBCYFurJHdJRWIOLQj6+
KE2493vVy4FMFo9UmJzfsRHQOMjuxbn/kNMwNzSTCRXB1o8B7kPSx5x4uuBLluYl
5QNRZfVTY1jav1j59B+auCKfKcEW+kjGr0HxxlT/UJi31RghGLqbpYdN5X/qPXEz
F2TgpIutpCj1oNxEJAh5iYCSAAJn7aW2dP8i1NRbresTb71RIyKPRC6J7mqhRBDh
9sBcmM+29IT3jVKDqN66GCU9uOaiJFmxCxoZpv7ufi2mU1VUPpGxPg==
=LMcJ
=====END PGP SIGNATURE=====


------------------------------

From: Roger Schlafly <[EMAIL PROTECTED]>
Subject: Re: Is RSA provably secure under some conditions?
Date: Tue, 31 Oct 2000 21:12:05 -0800

David Wagner wrote:
> Roger Schlafly  wrote:
> > Which is why these folks shouldn't be claiming that they have proofs.
> 
> Is it acceptable to claim you have a proof when what you really mean
> is that you can prove it secure under the assumption that factoring is
> hard?

I think that there is a big difference. The random oracle folks are
much worse.

If I claim that a system is secure based on a factoring assumption,
then the assumption is something like assuming that it is infeasible
to factor 1024-bit numbers. That assumption is either true or false.
If someone figures out how to (feasibly) factor 1024-bit numbers,
then you'll know that my proof is worthless. If someone breaks my
system (and my proof is valid), then someone must have acquired a
factoring capability.

Likewise, if I assume that SHA-1 is collision-free, then someone
could look for collisions in order to test the validity of my
assumptions.

But if I assume that SHA-1 is a random oracle, then there is no way for
anyone to tell whether or not that is a reasonable assumption.
Because in a literal sense, SHA-1 is not a random oracle, and
it doesn't even make much sense to talk about how close it is to
being a random oracle.

Consider this analogy. Suppose I am selling a safe. I could say:

1. It is secure because there are a zillion combinations.
2. No one can get in unless you reveal the combination.
3. Only someone with a blowtorch could get in.
4. Martians could not get in.

In this analogy, (1) is snake-oil and (2) is beyond what anyone
knows how to prove today. (3) is like a factoring assumption.
It doesn't guarantee security, but it tells you something about
who might break in. (4) is like a random oracle proof. It might
be interesting, but it doesn't relate to the threat very directly.


> - If yes, then why not under other assumptions, too?
> 
> - If no, then note that there are no encryption systems with proofs
>   (excluding the one-time pad), so your criticism is not specific to
>   the random oracle model but applies to a vast field of theoretical
>   work in cryptography.
> 
> My first reaction is that you may be over-simplifying a situation which
> is more complicated than you have given people credit for.  Proofs
> (perhaps better called "reductions", but let's stick with the common
> terminology) are useful, even if they require unproven assumptions, so
> long as the assumptions are reasonable and give us useful insights into
> the properties of the cryptosystems of interest.

------------------------------

From: matt weber <[EMAIL PROTECTED]>
Crossposted-To: alt.cellular.gsm
Subject: Re: End to end encryption in GSM
Date: Wed, 01 Nov 2000 05:36:51 GMT

On 1 Nov 2000 03:54:04 GMT, [EMAIL PROTECTED] (David Wagner)
wrote:

>Marcus AAkesson  wrote:
>>You are much better and cheaper off buying the product off the shelf.
>>There is a GSM phone with military-class end-to-end encryption
>>available, the Swedish Tiger. Cost is about USD 4000 / handset.
>
>Yes, but:
>
>1. How do you know whether it is secure or not?
>
>   The GSM consortium also claimed that their cellphones were secure,
>   but things didn't turn out quite so well for them.
>  
>   I see no technical details on the web site, nor any security reviews,
>   nor indeed anything else to boost confidence in their product, so why
>   should we believe the Sectra folks?  Is there any reason to believe
>   the Tiger should be treated as anything other than a device of unproven,
>   unknown security?
>   
>2. Lest you point to the purchases by the US DoD, let me first say this:
>   When I looked ago, the web site said that there are two different
>   products, one for military users and one for non-military users.
>  
>   Even if we accept for the sake of argument that the military product
>   has excellent security, that says nothing about the security of the
>   commercial product.

The commercial product's encryption capability will be severely
constrained by US export law. However, at the end of the day you have
to ask what you are attempting to protect against. 
It is like 40-bit versus 128-bit encryption.  In theory 128-bit
encryption is much more secure. The problem is that when you think about
who would care, you realize that the people who can easily defeat 40-bit
encryption are also going to be able to defeat 128-bit encryption. 



------------------------------

From: Dan Oetting <[EMAIL PROTECTED]>
Subject: Re: XOR based key exchange protocol - flawed?
Date: Tue, 31 Oct 2000 23:07:38 -0700

In article <[EMAIL PROTECTED]>, [EMAIL PROTECTED] 
wrote:

> 
> Mike Connell wrote:
> > Pa, Pb are RSA public keys. (X)Pi means data X encrypted under key Pi.
> > Xa, Xb are random blocks of data of the same size as the public keys.
> > 
> > 1. a -> b : Pa
> > 2. a <- a : Pb
>           ^ I assume this should be b
> 
> > 3. a -> b : (Xa)Pb
> 
> The number of passes could be reduced to 3 by eliminating message 1
> and making this "Pa, {Xa}Pb". (A can still go first by swapping A and B.)
> 
> > 4. a <- b : (Xb)Pa
> 
> In the other responses to this thread, there seems to be some confusion
> because the original description did not say whether public keys were
> certified or known before the protocol starts. There are three cases:
> 
> 1. If the public keys are certified, then messages 1 and 2 should be
>    explicitly sending certificates, not unsigned public keys.
> 
> 2. If the public keys are not certified, but they are known to be correct
>    for some other reason (for example because key fingerprints, i.e.
>    hashes, have been exchanged over an authentic channel), then it is
>    redundant to send them in the protocol. It would be more efficient
>    and clearer to send the fingerprints instead in messages 1 and 2
>    (since a mapping from a fingerprint to a public key is assumed to be
>    known).
> 
> 3. If the public keys are not certified or otherwise known to be correct,
>    then there is obviously a MITM attack.
> 
> Also note that the protocol does not provide forward secrecy, or explicit
> key confirmation. IMHO this makes it a bad choice for new applications
> even in cases 1 and 2.
> 

You also leave open the ability of B to control the outcome of the 
shared secret. This can easily be fixed, reducing the protocol to a 
2-step process with bidirectional communication: [Assuming the public 
keys are certified or already known and trusted]

1. a -> b : Pa,(Xa)Pa
   a <- b : Pb,(Xb)Pb

2. a -> b : (Xa)Pb
   a <- b : (Xb)Pa

In the first step A and B exchange which keys to use along with a 
commitment to the seeds to be exchanged. In the second step the seeds 
are exchanged. This two-step protocol seems so basic it probably already 
has a name. The protocol also expands easily to handle more than 2 
parties.
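Under those assumptions, the two steps can be sketched with textbook
(toy, unpadded) RSA, where the step-1 ciphertext under a party's own key
acts as a commitment that the other side later verifies by re-encrypting
the revealed seed (all parameters here are illustrative only):

```python
# Toy textbook RSA key pairs (tiny primes, no padding -- NOT for real use).
# (n, e) is the public key P, d the private exponent.
nA, eA, dA = 3233, 17, 2753    # A's pair: p=61, q=53
nB, eB, dB = 3127,  7,  431    # B's pair: p=59, q=53

Xa, Xb = 1234, 2222            # each party's random seed (< both moduli)

# Step 1:  a -> b : Pa, (Xa)Pa      a <- b : Pb, (Xb)Pb
cA = pow(Xa, eA, nA)           # A's commitment: Xa under A's own key
cB = pow(Xb, eB, nB)           # B's commitment: Xb under B's own key

# Step 2:  a -> b : (Xa)Pb          a <- b : (Xb)Pa
mAB = pow(Xa, eB, nB)
mBA = pow(Xb, eA, nA)

# B decrypts A's seed and checks it against the step-1 commitment, so A
# cannot change Xa after seeing anything from B (and vice versa).
Xa_at_B = pow(mAB, dB, nB)
assert pow(Xa_at_B, eA, nA) == cA
Xb_at_A = pow(mBA, dA, nA)
assert pow(Xb_at_A, eB, nB) == cB

shared = Xa_at_B ^ Xb_at_A     # e.g. XOR the seeds, per the thread's title
assert shared == Xa ^ Xb
```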

-- Dan Oetting

------------------------------

From: [EMAIL PROTECTED] (David Wagner)
Subject: Re: Is RSA provably secure under some conditions?
Date: 1 Nov 2000 06:27:20 GMT
Reply-To: [EMAIL PROTECTED] (David Wagner)

David A Molnar  wrote:
>Well, they *do* have an example. As you'll see, however, it's an
>extremely contrived example because one of the steps in the system is
>"output the secret key." 

Yes.  But, IMHO, this is not the weird part of the construction.
You can find this type of step in counter-examples all over.

What's _really_ noteworthy (IMHO) is the following feature of their
system.  They use a description of the hash function in the system,
and not just to implement the hash function: in fact they take the
description of the hash function, which is just a bit-string, and
hash it, a very weird-looking thing to do!

This is (IMHO) the extremely clever feature of their counter-example.
(and very contrived and unnatural, of course, but also brilliant)

So it suggests a principle: Don't use the description of the hash
function for doing anything other than computing the hash function
itself.

In hindsight, this principle is maybe not too surprising.  It is
already known that when doing pseudorandom-ness proofs, keys should
not be used to encrypt themselves (even if F_k(x) is a secure PRF,
F_k(k) might leak k).  Here we have some sort of an analog of that
idea, but now for hash functions: we have the principle that
descriptions of hash functions should not be used to hash themselves.
So this principle is not too shocking, given the previous experience
in pseudorandom-ness proofs -- but hindsight is always 20-20.
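The PRF analogy can be made concrete with the standard contrived
counter-example (a sketch; HMAC-SHA1 merely stands in for some secure
PRF G, and all names are hypothetical):

```python
import hashlib
import hmac

def G(k: bytes, x: bytes) -> bytes:
    # Stand-in for a secure PRF (HMAC-SHA1 here, purely for illustration).
    return hmac.new(k, x, hashlib.sha1).digest()

def F(k: bytes, x: bytes) -> bytes:
    # Contrived PRF: behaves exactly like G unless queried at x == k,
    # where it hands back the key itself.  An adversary who never queries
    # F_k at k can't tell F from G -- yet "encrypting the key under
    # itself" leaks everything.
    return k if x == k else G(k, x)

k = b"K" * 20                             # 20-byte key, matching output size
assert F(k, b"hello") == G(k, b"hello")   # ordinary queries look fine
assert F(k, k) == k                       # F_k(k) leaks k
```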

This matches the intuition behind random oracle schemes.  The random
oracle assumption models an intuitive feeling that the instantiation
of the hash function is chosen "independently" of the rest of the
scheme.  The counter-example they give somehow violates this intuitive
"independence" feature of real-life schemes, even if it doesn't
violate the precise statement of the random oracle assumption.

Of course, the problem the paper raises still remains: We have no
precise statement of what to do to make the random oracle assumption
work correctly 100% of the time (if indeed there is any such statement).
I don't mean to diminish this fact.

Still, despite its problems, the random oracle model appears to be a
powerful way to provide strong evidence that there are no structural
flaws in a cryptographic scheme, and the intuition for why this might
be so (as I described above) still seems roughly intact, IMHO.

------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list (and sci.crypt) via:

    Internet: [EMAIL PROTECTED]

End of Cryptography-Digest Digest
******************************
