Cryptography-Digest Digest #81, Volume #13 Fri, 3 Nov 00 08:13:01 EST
Contents:
Re: RSA vs. Rabin (Francois Grieu)
Re: Hardware RNGs (Paul Crowley)
Re: ECC choice of field and basis ("Michael Scott")
Re: Hardware RNGs (Paul Crowley)
Re: On introducing non-interoperability (Mok-Kong Shen)
Re: On introducing non-interoperability (Mok-Kong Shen)
Re: On introducing non-interoperability (Mok-Kong Shen)
Re: Really Strong Cipher Idea? (Mok-Kong Shen)
Re: Give it up? (Mok-Kong Shen)
Re: Q: Computations in a Galois Field (Mok-Kong Shen)
Re: End to end encryption in GSM ([EMAIL PROTECTED])
Re: Really Strong Cipher Idea? (John Savard)
Rijndael question ("Manuel Pancorbo")
Re: Detectable pattern in encoded steganographic images ("Peter Thorsteinson")
Re: Rijndael question ("Brian Gladman")
Re: Rijndael question (Tom St Denis)
Re: Rijndael question (Panu Hämäläinen)
Re: RSA vs. Rabin (Tom St Denis)
----------------------------------------------------------------------------
From: Francois Grieu <[EMAIL PROTECTED]>
Subject: Re: RSA vs. Rabin
Date: Fri, 03 Nov 2000 10:39:45 +0100
[EMAIL PROTECTED] (Jan Fedak) asked questions on Rabin vs RSA.
I suggest reading the Handbook of Applied Cryptography;
Rabin is explained in chapter 8, which is all online for free at
<http://www.cacr.math.uwaterloo.ca/hac/>
Tom St Denis <[EMAIL PROTECTED]> wrote:
> If you can find square roots, i.e a^2 = b^2 mod N (a != b) then
> you can factor N. Thus solving the square root (well I think
> this is how the proof goes, of course Bob will correct me) is as
> difficult as factoring.
The condition that will let one factor N knowing a,b
with a^2 = b^2 mod N is more complex than just a != b.
For a start, Tom meant a != b (mod N).
But there is also a+b != 0 (mod N).
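That factoring step can be sketched in a few lines of Python (a toy illustration with tiny numbers, not a real attack):

```python
from math import gcd

def factor_from_roots(a: int, b: int, n: int) -> int:
    """Given a^2 = b^2 (mod n) with a != +/-b (mod n),
    gcd(a - b, n) is a nontrivial factor of n."""
    assert (a * a - b * b) % n == 0
    f = gcd(a - b, n)
    assert 1 < f < n, "need a != +/-b (mod n)"
    return f

# Toy example: n = 3 * 5, and 7^2 = 2^2 = 4 (mod 15),
# while 7 != +/-2 (mod 15), so a factor pops out:
print(factor_from_roots(7, 2, 15))   # -> 5
```

If a = +/-b (mod N), then gcd(a - b, N) is N or 1 and nothing is learned, which is exactly why the extra conditions matter.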
> [EMAIL PROTECTED] (Jan Fedak) wrote:
>> RSA with low exponents is found insecure today.
I disagree. There is indeed progress towards showing that RSA with
a low exponent and without formatting may not be reducible to
factoring. But RSA without formatting is insecure for signature
applications and/or when the key is allowed to be reused often
enough, so every sensible application uses formatting.
And it appears a high public exponent e mostly helps to strengthen
somewhat otherwise questionable formatting.
Tom St Denis <[EMAIL PROTECTED]> answered:
> Rabin is insecure for various other reasons I would imagine.
> RSA is more convenient as well. You can easily perform either
> operation and you can do signatures, etc..
Bob would second me: an opinion on security is only as good as
the advisor's understanding of the problem.
One can sign with Rabin, and verifying the signature is almost
twice as fast as with RSA, which is convenient when verifying
with an 8 bit processor (e.g. portable terminal).
Rabin is secure when used with proper message formatting, similar
to what is necessary with RSA. Assuming some properties of the
formatting scheme, that factoring is hard, and maybe a few other
details, Rabin is even provably secure.
But the security of Rabin tends to be "all-or-nothing": bad formatting
[such as the one previously prescribed by the ISO/IEC 9796
standard, recently withdrawn, and not to be confused with
ISO/IEC 9796-x for x>1] will allow factoring the public key,
whereas with RSA it will "only" compromise some of the security
[often, an attack on either system also assumes things that do not
occur in practice, such as an attacker being able to choose the
messages].
Francois Grieu
------------------------------
From: Paul Crowley <[EMAIL PROTECTED]>
Subject: Re: Hardware RNGs
Date: Fri, 03 Nov 2000 09:46:49 GMT
Greggy wrote:
> You can never guess the bits from the RDTSC instruction.
>
> But I don't know why he would suggest Yarrow. The LSBs from the RDTSC
> are truly random in themselves.
In general, I'd recommend everyone always use Yarrow unless there's some
very good reason not to.
* you can have all the random bits you need, fast
* you can use all the sources you like
* you don't have to preprocess the source data to remove bias
* you can thus use *all* the entropy from the sources
* you get some robustness in case one of your assumptions about one of
the sources turns out to be wrong
(e.g. your assertion about the LSBs of the RDTSC - there are clearly
circumstances where they are *not* truly random, for example if you read
them in a tight loop)
Read the Yarrow paper (http://www.counterpane.com/yarrow.html) for more
on why this sort of thing is the Right Thing.
cheers,
--
__
\/ o\ [EMAIL PROTECTED]
/\__/ http://www.cluefactory.org.uk/paul/
------------------------------
From: "Michael Scott" <[EMAIL PROTECTED]>
Subject: Re: ECC choice of field and basis
Date: Fri, 3 Nov 2000 09:49:54 -0000
"Scott Contini" <[EMAIL PROTECTED]> wrote in message
news:8tq5pl$off$[EMAIL PROTECTED]...
>
> My personal experience is that GF(p) and GF(2^m) are about the same
> speed: depending on the operation (public key/private key) and some
> other factors which I should not discuss, you may get one faster than
> the other but the times (for me) have always been within a factor of 2
> of each other. BTW my comparisons were done on a Pentium pro where the
> GF(p) implementation had assembly code, but the GF(2^m) implementation
> did not since we were unable to improve on the compiler's optimisation
> for this case.
>
Well I wouldn't disagree with that, only I would regard a factor of 2 as
significant. Check out the relative timings on http://indigo.ie/~mscott and
http://www.eskimo.com/~weidai/benchmarks.html
Mike Scott
------------------------------
From: Paul Crowley <[EMAIL PROTECTED]>
Subject: Re: Hardware RNGs
Date: Fri, 03 Nov 2000 09:55:06 GMT
Alan Rouse wrote:
> Hashing does not increase entropy, whether one pass or multiple.
No, of course not. However, at least it doesn't *reduce* entropy until
you already have enough for your state to be unguessable, unlike many
preprocessing techniques suggested for entropy sources. It's also much
harder to get wrong. If you use Yarrow, all you have to get right is
the expected entropy per bit of each of your sources, and then it will
safely produce only unguessable bits.
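The accumulate-then-threshold idea can be sketched like this (a toy illustration only, not Yarrow itself, which also specifies reseed control and output generation; the class and names here are made up):

```python
import hashlib

class TinyPool:
    """Toy entropy accumulator: hash in raw samples, credit the
    *estimated* entropy per sample, and refuse to emit output
    until enough has been collected."""
    def __init__(self, threshold_bits: float = 128) -> None:
        self._h = hashlib.sha256()
        self._credited = 0.0
        self._threshold = threshold_bits

    def add_sample(self, data: bytes, est_entropy_bits: float) -> None:
        self._h.update(data)   # hashing never *loses* input entropy
        self._credited += est_entropy_bits

    def output(self) -> bytes:
        if self._credited < self._threshold:
            raise RuntimeError("not enough estimated entropy yet")
        return self._h.digest()
```

The only security-critical number the user supplies is the entropy estimate per sample, which is exactly the point made above.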
Now that AES and SHA-256 have been announced, I'm hoping that a
specification for Yarrow-256 is in the works and that we can encourage
everyone who needs unguessable bits for keys etc to use it.
--
__
\/ o\ [EMAIL PROTECTED]
/\__/ http://www.cluefactory.org.uk/paul/
------------------------------
From: Mok-Kong Shen <[EMAIL PROTECTED]>
Subject: Re: On introducing non-interoperability
Date: Fri, 03 Nov 2000 10:52:42 +0100
David Schwartz wrote:
>
> Mok-Kong Shen wrote:
>
> > The idea is to obtain a system that differs from the
> > one the opponent has at hand. If the key scheduling
> > is modified, he wouldn't be able to do brute force,
> > for example.
>
> Let's start with a few definitions:
>
> 1) Key (for symmetric crypto): Whatever is kept secret between the two
> parties using the cryptosystem
>
> 2) Brute Force (for symmetric crypto): An attach wherein one tries
> every possible key.
>
> Now, do you still believe that modifying the key scheduling prevents a
> brute force attack?
To brute force you have to set up the algorithm. To do the
EFF kind of work, you have to have the hardware
implementation. But since you don't know the modification,
you can't start your job. If you consider the additional
secret material for doing the modification as part of
the (effective) 'key' and do brute force, then you have a
longer key to brute force and have to spend
correspondingly more effort. Brute force cannot in general
be (absolutely) 'prevented' as such. It is however
rendered much more difficult by a modification that
is unknown to the attacker.
M. K. Shen
------------------------------
From: Mok-Kong Shen <[EMAIL PROTECTED]>
Subject: Re: On introducing non-interoperability
Date: Fri, 03 Nov 2000 10:52:29 +0100
wtshaw wrote:
>
> Is not a permutation of the round keys something to do with keyspace?
You have a one-to-one mapping from the encryption key
to the set of round keys. That mapping remains the same.
The keyspace is neither enlarged nor diminished. If
you want to do brute force, you have the same effort,
if you know the permutation. For applying other analysis
techniques, you also have the same effort, if you know
the permutation. Note that it is implicitly assumed
that the round keys are equivalent in quality (a
consequence of good design) so that the permutation
evidently cannot reduce the security. The enhancement
in security comes from the fact that the opponent
doesn't know the permutation. This is natural, since
the permutation means in effect adding some bits
to the proper encryption key.
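A toy sketch of that last point (the hash-based schedule below is invented purely for illustration and is not any real cipher's key schedule): permuting r round keys with a secret permutation adds at most log2(r!) bits of secret material.

```python
import hashlib
from math import factorial, log2

def round_keys(key: bytes, rounds: int = 16) -> list:
    """Toy key schedule: round key i = SHA-256(key || i), truncated."""
    return [hashlib.sha256(key + bytes([i])).digest()[:8]
            for i in range(rounds)]

def permuted_schedule(key: bytes, perm: list) -> list:
    """Reorder the round keys with a secret permutation of 0..r-1."""
    rks = round_keys(key, len(perm))
    return [rks[p] for p in perm]

# The secret permutation is worth at most log2(r!) extra key bits:
extra_bits = log2(factorial(16))   # about 44.25 bits for 16 rounds
```

Note that the set of round keys is unchanged; only their order carries the additional secret, which is why the base keyspace is neither enlarged nor diminished.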
M. K. Shen
------------------------------
From: Mok-Kong Shen <[EMAIL PROTECTED]>
Subject: Re: On introducing non-interoperability
Date: Fri, 03 Nov 2000 10:52:34 +0100
Scott Fluhrer wrote:
>
> Mok-Kong Shen <[EMAIL PROTECTED]> wrote:
> >
> > Bryan Olson wrote:
> > >
> > > Mok-Kong Shen wrote:
> > > >
> > > > Interoperability is the generally acknowledged benefit of
> > > > standardization and is commonly to be strived at as best
> > > > as possible. However, for a non-trivial amount of crypto
> > > > applications, where there is a fixed communication path
> > > > between two given partners, the interoperability needs
> > > > only to exist between these alone, without requiring the
> > > > same desirable property being extended to any third party.
> > > > In fact, to the contrary, it is evidently very much to
> > > > their benefit, if the opponent's system turns out to be
> > > > not interoperable with theirs.
> > >
> > > That's what cryptosystems do. They're designed so that
> > > control of the keys provides exactly the interoperability
> > > and non-interoperability desired. No hack-on obscurity
> > > required.
> >
> > The idea is to obtain a system that differs from the
> > one the opponent has at hand. If the key scheduling
> > is modified, he wouldn't be able to do brute force,
> > for example.
>
> Ahhh... security through obscurity... What a concept.
Suppose that instead of the original encryption key K
the designer uses a longer key K1, which is a concatenation
of K and S. He uses K just as in the original design but
uses S to make the modification to the key scheduling I
described, resulting in a new key scheduling. Do you still
call this (modified) new design security through obscurity?
(Note that the design of the key scheduling is his business.)
M. K. Shen
------------------------------
From: Mok-Kong Shen <[EMAIL PROTECTED]>
Subject: Re: Really Strong Cipher Idea?
Date: Fri, 03 Nov 2000 10:52:49 +0100
Simon Johnson wrote:
>
> It's unlikely that a 256-bit cipher will ever be broken in this
> universe, but it may be with highly parallel processors across
> universes in some obscure time in the future :P
My figure of 256 is just for example purposes. You could
substitute it with 64 or any other number. In other words,
the topic is intended to go in a different direction from
the minimum key length of ciphers, which is certainly also
an interesting topic by itself.
M. K. Shen
------------------------------
From: Mok-Kong Shen <[EMAIL PROTECTED]>
Subject: Re: Give it up?
Date: Fri, 03 Nov 2000 10:52:21 +0100
Tom St Denis wrote:
>
>
> > But known-plaintext attack (the zero bytes mean that
> > part of the plaintext is known) is one of the commonly
> > considered attacks, isn't it? Note also that my contrived
> > example was meant only to give some intuitive feeling to
> > conceive why a compressed sequence with reduced redundancy
> > is better for crypto processing.
>
> Yes, but known-plaintexts attacks normally require much more then a
> single block. Just because you have 1,000,000 known plaintext blocks
> in something like Rijndael doesn't mean you are one step closer to
> learning the key or the message (again assuming the message blocks are
> not part of the known space).
Another poster has already answered this point. Do
you agree that this known information does render the
chance of analysis a bit higher, even if in practical
cases that chance may still not be sufficient to effect
a break? Note that DES has been broken in practice by brute
force, not by sophisticated, theoretically interesting
techniques, as far as I know.
>
> > This is in my view not the right type of argument against
> > 1-1. If you have an extremely secure block cipher, then
> > by definition you don't have to care about anything else
> > in the processing, whether you use compression or not. But
> > that's not the point. The proponents of 1-1 claim (if I
> > understand correctly) that, if you use compression followed
> > by a cipher that is susceptible to brute force, then 1-1
> > helps.
>
> But if it is a finite algorithm it is breakable in a finite number of
> steps. So I can *ALWAYS* brute force a 1-1 or non-1-1 system with
> virtually the same ease.
Whether the word 'same' exactly applies I am not quite sure.
(But see also my words at the end of this post.) As to your
first sentence, yes, anything other than the ideal OTP can
be broken by brute force in principle, I believe, including
a PK with one million digits.
>
> > Please note that I don't consider myself on the side of
> > proponents of 1-1 but rather on the opposite side. So
> > I am not defending their position, nor, for reasons
> > already mentioned, countering their position here.
> > (To be honest, I got sort of fed up with discussions
> > on the issue and haven't yet recovered from that.) I
> > suggest that you, if you really want to discuss 1-1 in
> > depth, create a new thread with a clear and explicit
> > title line about 1-1 to attract the attention of those
> > discussion partners that are properly interested in the
> > issue. Good luck!
>
> To be honest I agree that using compression is a good idea. It makes
> the messages smaller, and in some contrived cases can reduce the number
> of alike blocks (which is why we use block chaining anyways). I doubt
> alot of "security" is provided by a good codec.
>
> At anyrate the proponents of 1-1 are mindless anyways. If they claim
> reducing redundancy is a key to security (in the plaintext, obviously
> we want todo the same to the ciphertext) then a better codec is needed,
> not some contrived huffman codec.
I have said that I am neither defending nor countering 1-1
here. (I don't know when I'll eventually join a
discussion on that topic again.) So I'll leave it to others
who are interested to comment on your opinions.
M. K. Shen
------------------------------
From: Mok-Kong Shen <[EMAIL PROTECTED]>
Subject: Re: Q: Computations in a Galois Field
Date: Fri, 03 Nov 2000 11:11:24 +0100
Dumb question: For GF(2^m) with m sufficiently large, are
there specific tricks in programming that could speed up
multiplication/division? Thanks.
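The baseline I have in mind is plain shift-and-add multiplication with reduction by the field polynomial, e.g. for GF(2^8) with the AES polynomial x^8+x^4+x^3+x+1 as an example choice:

```python
def gf_mul(a: int, b: int, poly: int = 0x11B, m: int = 8) -> int:
    """Multiply in GF(2^m) with reduction polynomial `poly`
    (bit i of an operand is the coefficient of x^i)."""
    r = 0
    while b:
        if b & 1:
            r ^= a            # "addition" is XOR in characteristic 2
        b >>= 1
        a <<= 1               # multiply a by x
        if a >> m:            # reduce when the degree reaches m
            a ^= poly
    return r

# Well-known inverse pair in the AES field: 0x53 * 0xCA = 0x01
print(hex(gf_mul(0x53, 0xCA)))   # -> 0x1
```

Division is usually done as multiplication by an inverse obtained via the extended Euclidean algorithm or exponentiation; for small m, precomputed log/antilog tables are common, and for large m, word-level window/comb methods.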
M. K. Shen
------------------------------
From: [EMAIL PROTECTED]
Subject: Re: End to end encryption in GSM
Date: Fri, 03 Nov 2000 10:30:07 GMT
<snip>
> Also, you can use the cipher in cipher-feedback mode. This allows
> the cipher to automatically re-synchronise on errors.
A good value for error dispersion before synchronisation would be e.g.
64 bits. Probably not a problem regarding voice communication. Second
opinions anyone?
<snip>
> You can also convert a strong block cipher to a stream cipher and
> configure it to operate in cipher feedback mode (bit-by-bit feedback).
If I understand this correctly, Rijndael will be able to operate like
this when used in CFB mode.
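To make the resynchronisation concrete, here is a toy byte-wise (CFB-8) loop; the block cipher is replaced by a SHA-256-based stand-in (an assumption for illustration, not Rijndael), but the feedback structure and the self-recovery after one register-length of correct ciphertext are the real mechanism:

```python
import hashlib

BLOCK = 16  # bytes in the feedback shift register

def toy_block_encrypt(key: bytes, block: bytes) -> bytes:
    """Stand-in PRF for a real block cipher such as Rijndael."""
    return hashlib.sha256(key + block).digest()[:BLOCK]

def cfb8(key: bytes, iv: bytes, data: bytes, decrypt: bool = False) -> bytes:
    sr, out = iv, bytearray()
    for byte in data:
        ks = toy_block_encrypt(key, sr)[0]   # one keystream byte
        o = byte ^ ks
        out.append(o)
        fb = byte if decrypt else o          # feed back the *ciphertext* byte
        sr = sr[1:] + bytes([fb])
    return bytes(out)

key, iv = b"K" * 16, b"\x00" * 16
pt = bytes(range(48))
ct = cfb8(key, iv, pt)
assert cfb8(key, iv, ct, decrypt=True) == pt

# A 1-byte transmission error garbles that byte plus the next 16,
# then decryption re-synchronises on its own:
bad = bytearray(ct); bad[5] ^= 0xFF
dec = cfb8(key, iv, bytes(bad), decrypt=True)
assert dec[:5] == pt[:5] and dec[22:] == pt[22:]
```

With a 64-bit block and bit-level feedback, the garble window is one bit plus 64 bits, i.e. on the order of the 64 bits mentioned above.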
Kind regards,
/Marcus
Sent via Deja.com http://www.deja.com/
Before you buy.
------------------------------
From: [EMAIL PROTECTED] (John Savard)
Subject: Re: Really Strong Cipher Idea?
Date: Fri, 03 Nov 2000 10:50:54 GMT
On Thu, 02 Nov 2000 21:13:56 GMT, Simon Johnson
<[EMAIL PROTECTED]> wrote, in part:
>In article <[EMAIL PROTECTED]>,
> Mok-Kong Shen <[EMAIL PROTECTED]> wrote:
>>If I have 256 truly random bits, I can use
>> it as OTP to process 256 bits of plaintext and obtain
>> perfect security as defined by Shannon. If I use the same
>> 256 truly random bits as key to AES to process a few million
>> blocks of plaintext, how much diminution in security do I
>> have?
>256-bit keys are likely to be secure from brute force for all of the
>time the universe has to run.
Well, a quantum computer - which is, essentially, parallel computers
running in multiple universes - would make 256-bit keys look like
128-bit keys.
But using 256-bit keys to encipher 256-bit plaintext gives perfect
security of a particular kind: information-theoretic security.
Using a 256-bit key for a block cipher doesn't give you that
particular kind of security. Your security depends on the "work
factor" of cracking the block cipher. If there is no way of somehow
testing for _part_ of the key, then, as you've said, a 256-bit key
that can only be cracked by brute-force search is good enough for any
practical purpose.
What has happened, though, is that while I can say you've lost one
_kind_ of security but still have another _kind_, I can't put
quantitative numbers on them. I can't say 'this block cipher is as
hard to crack as X' in an absolute sense; I can only say that we don't
currently know a way to cryptanalyze it faster than brute force.
I can't even say when, if, or how soon such a way will be found.
So in leaving the one-time-pad, I can only judge my security by a
subjective impression that the cipher I am using seems very
complicated, and the only objective fact I have is that it wasn't
broken _yet_.
But while that is the fact, it may not justify despair.
John Savard
http://home.ecn.ab.ca/~jsavard/crypto.htm
------------------------------
From: "Manuel Pancorbo" <[EMAIL PROTECTED]>
Subject: Rijndael question
Date: Fri, 3 Nov 2000 12:35:11 +0100
In the Rijndael specification documents it is pointed out that decryption
is somewhat slower than encryption, above all in 8-bit environments.
I can imagine scenarios where this speed difference can be significant. For
example, in server-client applications, the server machine is often a
state-of-the-art computer whereas the client machine is an "out-of-market"
computer. In this case we are holding the spoon at the wrong end: the
fastest machine uses the fastest algorithm and the slower machine performs
the slower algorithm; moreover, on the client side speed is more critical
than on the server side.
So I wonder if the server machine could use the "decryption" algorithm as
encrypter, and thus the client machine could use the "encryption" algorithm
as decrypter, in order to balance the two working speeds. Does this lead to
security weaknesses in the process?
Finally, I suppose that algorithms other than Rijndael have this property:
decryption a bit slower than encryption. These algorithms are also affected
by this question.
Comments?
Manuel Pancorbo
------------------------------
From: "Peter Thorsteinson" <[EMAIL PROTECTED]>
Subject: Re: Detectable pattern in encoded steganographic images
Date: Fri, 03 Nov 2000 12:34:15 GMT
> This is a fairly obvious failure to understand stego. The purpose of
> steganography is to hide a separate message inside another message
> (encrypted text inside a bmp in this case). If you have access to the
> unaltered original, of course you can detect the presence of either
> corruption or steganography. The point comes from the fact that your
> assumed attacker does not have access to the cover message without the
> stego (the original bmp in this case). Hope this helps.
He didn't just say that he could detect a difference (duh). He said he
"found a very clear pattern" that showed up when he compared them. I guess
the question is "what is the pattern?" If it is indeed a pattern, then
perhaps it could be leveraged to detect steganographic usage. That would be
cool, because steganographic methods are only useful in cases where
their use is not suspected.
------------------------------
From: "Brian Gladman" <[EMAIL PROTECTED]>
Subject: Re: Rijndael question
Date: Fri, 3 Nov 2000 12:46:26 -0000
"Manuel Pancorbo" <[EMAIL PROTECTED]> wrote in message
news:8tu4dc$nl5$[EMAIL PROTECTED]...
> In the Rijndael specification documents it is pointed out that decryption
> is somewhat slower than encryption, above all in 8-bit environments.
>
> I can imagine scenarios where this speed difference can be significant. For
> example, in server-client applications, the server machine is often a
> state-of-the-art computer whereas the client machine is an "out-of-market"
> computer. In this case we are holding the spoon at the wrong end: the
> fastest machine uses the fastest algorithm and the slower machine performs
> the slower algorithm; moreover, on the client side speed is more critical
> than on the server side.
>
> So I wonder if the server machine could use the "decryption" algorithm as
> encrypter, and thus the client machine could use the "encryption" algorithm
> as decrypter, in order to balance the two working speeds. Does this lead to
> security weaknesses in the process?
There are also modes of data encryption where the underlying algorithm is
only used in the 'encryption' direction for both encryption and decryption.
Provided such modes are used the speed difference will not be a problem.
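Counter-style keystream generation is one such mode: both sides run only the forward transform and XOR the data with its output. A minimal sketch, with a SHA-256 stand-in for the block cipher's forward direction (an assumption for illustration, not Rijndael itself):

```python
import hashlib

def ctr_keystream(key: bytes, nonce: bytes, nbytes: int) -> bytes:
    """Generate keystream using only the *forward* block transform."""
    out = bytearray()
    counter = 0
    while len(out) < nbytes:
        block = nonce + counter.to_bytes(8, "big")
        out += hashlib.sha256(key + block).digest()  # E_k(nonce || ctr)
        counter += 1
    return bytes(out[:nbytes])

def ctr_crypt(key: bytes, nonce: bytes, data: bytes) -> bytes:
    """Encryption and decryption are the same XOR operation."""
    ks = ctr_keystream(key, nonce, len(data))
    return bytes(d ^ k for d, k in zip(data, ks))

msg = b"both directions use the same code"
ct = ctr_crypt(b"key", b"nonce123", msg)
assert ctr_crypt(b"key", b"nonce123", ct) == msg
```

Since the inverse cipher is never invoked, the encrypt/decrypt speed asymmetry of the block cipher disappears for both parties.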
In any event the load is often on the server since the client is only doing
processing for one connection whereas the server may be handling a very
large number of concurrent connections. In consequence it will not always
be a good idea to put the bigger load on the server.
It is also worth noting that the difference in cost of encryption and
decryption for Rijndael is only present when it is implemented in a
particular way. In low end environments it is much more likely that it will
be implemented in a way where the two speeds are the same.
Brian Gladman
------------------------------
From: Tom St Denis <[EMAIL PROTECTED]>
Subject: Re: Rijndael question
Date: Fri, 03 Nov 2000 12:41:52 GMT
In article <8tu4dc$nl5$[EMAIL PROTECTED]>,
"Manuel Pancorbo" <[EMAIL PROTECTED]> wrote:
> In the Rijndael specification documents it is pointed out that decryption
> is somewhat slower than encryption, above all in 8-bit environments.
>
> I can imagine scenarios where this speed difference can be significant. For
> example, in server-client applications, the server machine is often a
> state-of-the-art computer whereas the client machine is an "out-of-market"
> computer. In this case we are holding the spoon at the wrong end: the
> fastest machine uses the fastest algorithm and the slower machine performs
> the slower algorithm; moreover, on the client side speed is more critical
> than on the server side.
>
> So I wonder if the server machine could use the "decryption" algorithm as
> encrypter, and thus the client machine could use the "encryption" algorithm
> as decrypter, in order to balance the two working speeds. Does this lead to
> security weaknesses in the process?
Actually, Rijndael decryption is "slightly" slower, not a zillion times
slower. I would therefore suggest that your idea is not a good one,
for this sole reason:
1. The server computer has way more data to process than a single
dopey computer. Therefore the dopey computer can afford to use the
slightly slower decryption routine.
> Finally, I suppose that algorithms other than Rijndael have this
> property: decryption a bit slower than encryption. These algorithms
> are also affected by this question.
>
> Comments?
Focus on the priorities?
Tom
------------------------------
From: Panu Hämäläinen <[EMAIL PROTECTED]>
Subject: Re: Rijndael question
Date: Fri, 03 Nov 2000 14:51:07 +0200
Manuel Pancorbo wrote:
> So I wonder if the server machine could use the "decryption" algorithm as
> encrypter and thus the client machine should use the "encryption" algorithm
> as decrypter, in order to compensate both working speeds.
I think it is probable that after making this change the client will still
be slower than the server. There is quite a big gap in performance between
"out-of-market" processors and "state-of-the-art" processors.
-- Panu
------------------------------
From: Tom St Denis <[EMAIL PROTECTED]>
Subject: Re: RSA vs. Rabin
Date: Fri, 03 Nov 2000 12:45:37 GMT
In article <[EMAIL PROTECTED]>,
Francois Grieu <[EMAIL PROTECTED]> wrote:
> [EMAIL PROTECTED] (Jan Fedak) asked questions on Rabin vs RSA.
>
> I suggest reading the Handbook of Applied Cryptography;
> Rabin is explained in chapter 8, which is all online for free at
> <http://www.cacr.math.uwaterloo.ca/hac/>
I have read those chapters, I have the collection on my computer
already.
> Tom St Denis <[EMAIL PROTECTED]> wrote:
> > If you can find square roots, i.e a^2 = b^2 mod N (a != b) then
> > you can factor N. Thus solving the square root (well I think
> > this is how the proof goes, of course Bob will correct me) is as
> > difficult as factoring.
>
> The condition that will let one factor N knowing a,b
> with a^2 = b^2 mod N is more complex than just a != b.
> For a start, Tom meant a != b (mod N).
> But there is also a+b != 0 (mod N).
Yeah, but given a != b (mod N) you use a^2 - b^2 = 0 (mod N) to factor
N. I would imagine for the OP this is much more straightforward. We
could always crack out the GNFS and explain that too... but I don't
think that's on topic (and I don't understand it anyways :))
> > [EMAIL PROTECTED] (Jan Fedak) wrote:
> >> RSA with low exponents is found insecure today.
>
> I disagree. There is indeed progress towards showing that RSA with
> a low exponent and without formatting may not be reducible to
> factoring. But RSA without formatting is insecure for signature
> applications and/or when the key is allowed to be reused often
> enough, so every sensible application uses formatting.
> And it appears a high public exponent e mostly helps to strengthen
> somewhat otherwise questionable formatting.
Um... perhaps you should reply to the OP and not me?
> Tom St Denis <[EMAIL PROTECTED]> answered:
> > Rabin is insecure for various other reasons I would imagine.
> > RSA is more convenient as well. You can easily perform either
> > operation and you can do signatures, etc..
>
> Bob would second me: an opinion on security is only as good as
> the advisor's understanding of the problem.
>
> One can sign with Rabin, and verifying the signature is almost
> twice as fast as with RSA, which is convenient when verifying
> with an 8 bit processor (e.g. portable terminal).
> Rabin is secure when used with proper message formatting, similar
> to what is necessary with RSA. Assuming some properties of the
> formatting scheme, that factoring is hard, and maybe a few other
> details, Rabin is even provably secure.
> But the security of Rabin tends to be "all-or-nothing": bad formatting
> [such as the one previously prescribed by the ISO/IEC 9796
> standard, recently withdrawn, and not to be confused with
> ISO/IEC 9796-x for x>1] will allow factoring the public key,
> whereas with RSA it will "only" compromise some of the security
> [often, an attack on either system also assumes things that do not
> occur in practice, such as an attacker being able to choose the
> messages].
I got that notion from Knuth V2. I can get the exact page where he
says "The SQRT Box" is insecure for some ops. Perhaps Knuth is wrong
or I just can't read.
Tom
------------------------------
** FOR YOUR REFERENCE **
The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:
Internet: [EMAIL PROTECTED]
You can send mail to the entire list (and sci.crypt) via:
Internet: [EMAIL PROTECTED]
End of Cryptography-Digest Digest
******************************