Cryptography-Digest Digest #168, Volume #12       Thu, 6 Jul 00 07:13:00 EDT

Contents:
  Re: Implementing RSA................ (Dido Sevilla)
  Re: TRIPLE DES BENCHMARK (Hämäläinen Panu)
  Re: Hash and Entropy (David A Molnar)
  RC4 source code (Hämäläinen Panu)
  Re: Hash and Entropy (Benjamin Goldberg)
  Re: RPK ([EMAIL PROTECTED])
  Re: cray and time needed to attack (Jerry Coffin)
  Re: AONT (Mok-Kong Shen)
  Re: A simple all-or-nothing transform (Mok-Kong Shen)
  Re: Hash and Entropy (Mark Wooding)
  Re: Prime Numbers? (David Lancashire)
  Re: Compression & Encryption in FISHYLAND (Tim Tyler)
  Re: Prime Numbers? (Quisquater)

----------------------------------------------------------------------------

From: Dido Sevilla <[EMAIL PROTECTED]>
Subject: Re: Implementing RSA................
Date: Thu, 06 Jul 2000 15:00:59 +0800

"Daniele (Neo)" wrote:
> 
> -----BEGIN PGP SIGNED MESSAGE-----
> Hash: SHA1
> 
> Hi,
> I'm implementing RSA in JavaScript (yea, the same scripting language
> used in HTML pages.......).
> 
> Both Netscape and Microsoft documentation say that numbers supported
> in JavaScript fall within the range 1.79E-308 to 1.79E+308 (a
> typical *double* type specification in VB, Java, or other
> programming languages.............)
> 

This will *NOT* work!  Floating point arithmetic, although it has a
range of about 1e308, will not give you 308 digits' worth of precision.
You'll get at most 15 or 16 significant digits, which is woefully
inadequate.  To properly implement RSA or any other cryptographic
algorithm that works with large numbers, you have to create your own
data type for manipulating big integers.  None of the built-in data
types in ordinary languages has enough precision for the job.  Even an
int on a 64-bit computer like an Alpha or Itanium only gives you
numbers of up to 19 digits, whereas RSA needs monstrous numbers of 200
or 300 digits.  You'll have to cook up your own routines for the
manipulation of such immense numbers.
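
To see the problem concretely, here's a minimal C sketch (JavaScript
numbers behave the same way, since they too are IEEE-754 doubles):

#include <stdio.h>

int main(void)
{
    /* 2^53 is the last power of two at which an IEEE-754 double
       still represents every integer exactly. */
    double x = 9007199254740992.0;   /* 2^53 */

    printf("x     = %.0f\n", x);
    printf("x + 1 = %.0f\n", x + 1.0);   /* prints the same value;
                                            the 1 is lost to rounding */
    return 0;
}

Past sixteen digits or so, consecutive integers become
indistinguishable, so modular arithmetic on a 300-digit modulus is
hopeless without a bignum type.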

--
Rafael R. Sevilla <[EMAIL PROTECTED]>         +63 (2)   4342217
ICSM-F Development Team                         +63 (917) 4458925
University of the Philippines Diliman

------------------------------

From: Hämäläinen Panu <[EMAIL PROTECTED]>
Subject: Re: TRIPLE DES BENCHMARK
Date: 6 Jul 2000 07:13:21 GMT

Karim A <[EMAIL PROTECTED]> wrote:

: Where can I find TripleDES benchmark results on the internet?

http://www.esat.kuleuven.ac.be/~bosselae/fast.html
http://people.qualcomm.com/karn/code/des/index.html

                                -Panu-

------------------------------

From: David A Molnar <[EMAIL PROTECTED]>
Subject: Re: Hash and Entropy
Date: 6 Jul 2000 07:08:17 GMT

zapzing <[EMAIL PROTECTED]> wrote:
> The word "oracle" is certainly getting a lot of
> use these days. I wish people would use it for
> its original use, which was a theoretical (and
> probably impossible) device which could
> determine if a given program would terminate
> at some time or run forever.

Hm. The use I've seen which is closest to that is in the context of
"oracle Turing Machines," which are TMs equipped with a special "oracle
for <something>" that allows them to decide membership in a language
<something> in a single step. What that <something> is depends on the
oracle. So you're saying that the original use, and the one you wish it
still had, is the case where <something> is restricted to just <halting>?

Once you define the notion of "oracle Turing Machine" in that kind of
generality, you can investigate cases where the <something> is
impossible in practice (such as a <halting> language encoding all halting
TMs) as well as the cases which are "ok" in practice. Yet another way
to think of a "Random Oracle" is to consider it as an oracle
whose <something> is drawn uniformly at random from all the possible
<something>s (i.e. a "randomly chosen oracle"). (This raises the
question of what drawing uniformly from the set of all possible
languages means, of course, since that set is uncountable.)

I agree that the terminology is a little strange, especially at first.
I'm not sure what else to call them, though. Well, "impossible," maybe,
but I mean, something descriptive without being snide.

-David

------------------------------

From: Hämäläinen Panu <[EMAIL PROTECTED]>
Subject: RC4 source code
Date: 6 Jul 2000 07:34:06 GMT

Hi!

Does anyone know where I could find fast RC4 source
code (C-Code, portable or optimized for Pentiums)?
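
For reference, here is the basic "alleged RC4" (as posted anonymously
to sci.crypt in 1994) in portable C, written out from memory -- check
it against known test vectors before trusting it.  What I'm looking
for is something faster than this straightforward version:

typedef struct {
    unsigned char S[256];
    unsigned char i, j;
} rc4_ctx;

/* Key schedule: start from the identity permutation, then do
   key-driven swaps. */
void rc4_init(rc4_ctx *c, const unsigned char *key, unsigned keylen)
{
    unsigned i, j = 0;
    unsigned char t;

    for (i = 0; i < 256; i++)
        c->S[i] = (unsigned char)i;
    for (i = 0; i < 256; i++) {
        j = (j + c->S[i] + key[i % keylen]) & 0xff;
        t = c->S[i]; c->S[i] = c->S[j]; c->S[j] = t;
    }
    c->i = c->j = 0;
}

/* XOR the keystream over buf; encryption and decryption are the
   same operation. */
void rc4_crypt(rc4_ctx *c, unsigned char *buf, unsigned long len)
{
    unsigned char i = c->i, j = c->j, t;
    unsigned long n;

    for (n = 0; n < len; n++) {
        i = (unsigned char)(i + 1);
        j = (unsigned char)(j + c->S[i]);
        t = c->S[i]; c->S[i] = c->S[j]; c->S[j] = t;
        buf[n] ^= c->S[(unsigned char)(c->S[i] + c->S[j])];
    }
    c->i = i; c->j = j;
}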

                                -Panu-

------------------------------

From: Benjamin Goldberg <[EMAIL PROTECTED]>
Subject: Re: Hash and Entropy
Date: Thu, 06 Jul 2000 08:47:10 GMT

David A Molnar wrote:
[snip]
> Yet another way
> to think of a "Random Oracle" seems to be to consider it as an oracle
> whose <something> is drawn uniformly at random from all the possible
> <something>s (i.e. a "randomly chosen oracle") (this begs the question
> of whether the collection of all possible languages is countable,
> and what drawing from it means, of course).

I thought that a "random oracle" was an oracle that had a way of getting
Truly Random bits from somewhere in a single step.

------------------------------

From: [EMAIL PROTECTED]
Subject: Re: RPK
Date: Thu, 06 Jul 2000 08:53:02 GMT

In article <[EMAIL PROTECTED]>,
  tomstd <[EMAIL PROTECTED]> wrote:
> [EMAIL PROTECTED] wrote:
> >This public key cryptographic system seems to fit audio and
> video
> >application very well.
> >Does anyone know about "real" applications that use it ?
> >Does anyone work on trying to break it ? ( Just to know how
> robust this
> >system is and how reasonnable it is to use it).
> >Thank you
>
> why do you suggest it is suited for music?
My answer seems to have been lost, so I will try again to reply to you.
Based on what the authors say, its main advantage is that it works very
well with a connectionless protocol like UDP: you can lose bits of your
message and still be able to decode the rest. It is based on an
"on-the-fly" cipher/decipher synchronisation between the encrypting
source and the decrypting destination.
The fact is that UDP is faster than TCP, but not reliable. For sound or
video, we do not mind losing a bit or two from time to time; the user
probably won't notice it. But it is important to have a synchronisation
method between the encrypter and the decrypter. The RPK authors claim
they can do that with a very fast network protocol. Whether that is
true or not, I have no idea yet.
Cheers


>
> Tom


------------------------------

From: Jerry Coffin <[EMAIL PROTECTED]>
Subject: Re: cray and time needed to attack
Date: Thu, 6 Jul 2000 03:19:26 -0600

In article <8jv5t7$1ck$[EMAIL PROTECTED]>, [EMAIL PROTECTED] says...

[ ... ] 

> (1) Even assuming we can scrape together enough hardware/memory,
> solving
> the matrix *mod 2* will be 6-7 million times harder than that for
> RSA-512.  RSA-512 took ~10 days on a CRAY C90.  This problem does
> not run well in parallel.  So solving the matrix mod 2 will take
> 60 million days at a minimum.

...on a C90.  The C90 was a replacement for the Y-MP series, and if 
memory serves came out around the mid-1980s or so.  It's now 
basically obsolete.  If you want to make a marginally valid 
comparison, feel free to re-run your job on a T90 and report the 
results.  Then keep in mind that while T90's are at least reasonably 
current, they're still basically production machines and do NOT use 
the absolute state of the art today.

It appears that the MO4 version of the T90 (which appears to be the 
most recent version) uses something like 15 ns SRAMs for its main 
memory.  At the present time, you can get 3 ns SRAMs off the shelf 
from Motorola, and the L2 cache of a 1 GHz Pentium III or Athlon runs 
at a 2 ns cycle time.  Though it'll undoubtedly be a few years before 
it reaches production, technology is already known that can reduce 
that still further -- by close to two more orders of magnitude if 
money were no object at all.

Even though I realize this job doesn't parallelize particularly well, 
my understanding is that it IS possible to use parallel processing to 
reduce the time somewhat, so something like 16 or 32 processors would 
probably yield roughly a fourfold speedup.

When all those factors are taken into account, we're left with a 
result of around 4 to 5 years.  In case you'd forgotten, that's just 
about _exactly_ what I said to start with: the job would take years, 
but not decades or centuries.  You're making two fundamental 
mistakes: first, even though the C90 was about as fast as was 
practical at the time, it wasn't absolutely state of the art (Seymour 
Cray was known to comment that he used roughly decade-old technology).  
Second, the C90 was produced around 15 years ago, and rather than 
being state of the art, is now obsolete.

> I suggest you do the arithmetic.  I also suggest you not make further
> pronouncements until you study what is really involved.

Doing one's homework is always good advice.  This is the rare case 
where it would be better for you to receive (and take to heart) than 
to give.

-- 
    Later,
    Jerry.
 
The universe is a figment of its own imagination.

------------------------------

From: Mok-Kong Shen <[EMAIL PROTECTED]>
Subject: Re: AONT
Date: Thu, 06 Jul 2000 11:36:02 +0200



Benjamin Goldberg wrote:

> David Hopwood wrote:

> [snip]
>
> Erm, you're right that the efficiency is bad, but I don't know how to
> make it better.  I should point out that the hash function need not be
> cryptographically secure, but merely good ... I think CRC might even be
> sufficient.

As I mentioned in another thread, one good way of forcing the
opponent to work with the whole file is to use a good algorithm
that deals with the whole file from first principles. Otherwise, one
is in some sense doing 'patching-up' work on a scheme employing
a given block cipher whose strength one desires to improve. There
could be different ways of doing that. Rivest has given one. Yours
turns out to be quite computationally expensive. I gave one in
a recent thread using non-linear block chainings. Hopefully other
readers will be stimulated to post their good or much better schemes.

M. K. Shen



------------------------------

From: Mok-Kong Shen <[EMAIL PROTECTED]>
Subject: Re: A simple all-or-nothing transform
Date: Thu, 06 Jul 2000 11:36:14 +0200



David Hopwood wrote:

> Mok-Kong Shen wrote:
> > David Hopwood wrote:
> > > Mok-Kong Shen wrote:
> > > > David Hopwood wrote:
> > > >
> > > > > If the requirement for the function to be unkeyed is dropped, then
> > > > > "all-or-nothing transform" is the wrong term to use; instead use
> > > > > either "pseudo-random permutation" (for deterministic functions) or
> > > > > "semantically secure encryption scheme" (for non-deterministic
> > > > > functions).
> > > >
> > > > I don't share your view. All-or-nothing transform means that one
> > > > applies a procedure to make the encryption (component) processes
> > > > of a message so, that the opponent is forced to deal with the whole
> > > > of the ciphertext material and can't simply concentrate his efforts
> > > > to a part of it and thus achieve economy in his attack (which may even
> > > > be critical for his success).
>
> If the encryption scheme is secure then there is no feasible attack,
> regardless of how much of the ciphertext is available. I therefore don't
> see why it's useful to consider an "all-or-nothing encryption scheme"
> (even if it were adequately defined) to be better than any other secure
> scheme, for which there is *theoretically* sufficient information to
> determine part of the plaintext even if not all of the ciphertext is
> available, but for which no part of the plaintext can be found in
> practice.
>
> (Frankly, I don't see the point of the scheme in Rivest's 1997 paper; if
> the key length is artificially limited to be too small, it is naive to
> assume that multiplying the attacker's work by the message length will
> improve that sufficiently. Messages simply aren't long enough to make such
> a scheme secure, and even if they were, the attacker's work is only
> multiplied by the same factor as the user's. Also, since the paper was
> written, the premise behind limits on key sizes has been rendered moot by
> the relaxation of U.S. export regulations. There are other useful
> applications of AONTs, but most of them require an unkeyed function.)

If you have an encryption scheme that (you believe) is sufficiently
secure, then doing nothing more is indeed THE rational behaviour, for
anything else would contradict the premise. In the other case
you may desire improvements, if technically and economically
feasible. As I understand it, the idea of an all-or-nothing transform
is to introduce some processing steps into a scheme employing a
GIVEN block cipher such that virtually the whole message becomes
a (large) block, so that the task of the opponent becomes more
difficult. The block size of the given cipher may be felt to be too
small to provide adequate security, yet a cipher with double or
triple the block size may not be available, or one may not like to do
multiple encryptions, etc. (for whatever reasons, including very
subjective ones). Thus in my view the point of an all-or-nothing
transform is to provide a helping or add-on method. If you have
good encryption schemes that are designed from the start to
provide higher security, e.g. ciphers of large block size or ciphers
that (from first design principles) treat the message as a whole
block, then you certainly don't need an all-or-nothing transform.
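
For readers who haven't seen it, here is a from-memory sketch in C of
the package transform from Rivest's 1997 paper, with a toy stand-in
for the block cipher E (so this shows the structure only, and is in no
way a secure AONT; treat the details as my recollection rather than a
reference implementation):

/* Toy stand-in for a block cipher E_k(x) on 32-bit blocks.  NOT a
   real cipher -- just enough mixing to make the transform run. */
static unsigned long toy_E(unsigned long k, unsigned long x)
{
    unsigned long v = x ^ k;
    int i;
    for (i = 0; i < 4; i++)
        v = (v * 2654435761UL + k) & 0xffffffffUL;
    return v;
}

/* Package transform: pick a random inner key K'; output
       m'[i] = m[i] ^ E(K', i+1)          for i = 0..s-1, and
       m'[s] = K' ^ h[0] ^ ... ^ h[s-1],
   where h[i] = E(K0, m'[i] ^ (i+1)) and K0 is a fixed public key.
   Recovering K' requires every single m'[i] -- hence all-or-nothing. */
void package(const unsigned long *m, unsigned long *mp, int s,
             unsigned long kprime, unsigned long k0)
{
    int i;
    unsigned long last = kprime;
    for (i = 0; i < s; i++) {
        mp[i] = m[i] ^ toy_E(kprime, (unsigned long)(i + 1));
        last ^= toy_E(k0, mp[i] ^ (unsigned long)(i + 1));
    }
    mp[s] = last;
}

/* The inverse: first reconstruct K' from all s+1 blocks, then strip
   the masking. */
void unpackage(const unsigned long *mp, unsigned long *m, int s,
               unsigned long k0)
{
    int i;
    unsigned long kprime = mp[s];
    for (i = 0; i < s; i++)
        kprime ^= toy_E(k0, mp[i] ^ (unsigned long)(i + 1));
    for (i = 0; i < s; i++)
        m[i] = mp[i] ^ toy_E(kprime, (unsigned long)(i + 1));
}

The s+1 pseudo-message blocks are then encrypted with the given block
cipher in the ordinary way; K0 is public, and K' travels inside the
output.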

> > > For *any* secure encryption scheme, an adversary can't simply concentrate
> > > his efforts on a part of the ciphertext. Roughly speaking, if an
> > > adversary can feasibly recover any information about the plaintext given
> > > all of the ciphertext, the encryption scheme is not semantically secure.
> > > This obviously also holds if the adversary is given only part of the
> > > ciphertext (if a scheme is considered insecure when an adversary can break
> > > it given some amount of information, it must also be considered insecure
> > > when he/she can break it given a strict subset of that information).
> > >
> > > This means that what you appear to be requiring of a keyed AONT can't be
> > > objectively distinguished from what is required of any other semantically
> > > secure encryption scheme. That's why I claim that it is clearer to use
> > > one of the terms "all-or-nothing transform" (which must be unkeyed),
> > > "pseudo-random permutation", or "semantically secure encryption scheme".
> > > All of these have formal definitions of security, the first given in
> > > [Boyko99], and the other two in [BDJR97], for example.
> >
> > People that brute force a block cipher work only on a couple of
> > blocks, don't they?
>
> A secure block cipher will not be breakable by brute-force; that's one of
> the simplest attacks to defend against, by making the key sufficiently
> long.

Covered above.

> However, if you really want to analyse whether information about the
> plaintext can be found by a computationally unbounded attacker when not
> all of the ciphertext is available, that can be done by assuming that the
> key is known, as I did in my first reply to this thread. That gives the
> same result as assuming that the key can be brute-forced, and so there is
> no loss of generality in taking AONTs to be unkeyed.

It is anyway my implicit assumption that a computationally unbounded
opponent doesn't exist. Assuming an omnipotent opponent might be
useful for the purpose of manufacturing scientific papers, though I am
not sure even of that.

> [snip]
> > > > As to the second half of your sentence, I wonder how I am going to
> > > > classify my use of two non-linear block chaining steps (see a follow-up
> > > > of mine of 3rd. Jul), which also forces the opponent to treat the entire
> > > > message, according to your terminology.
> > >
> > > I wouldn't try to classify what security properties it has unless I had
> > > time to analyse it thoroughly, which I don't.
> >
> > Independent of whether my scheme is any good, it plainly shows
> > that your classification scheme is inappropriate.
>
> No, it doesn't. Your scheme is an encryption scheme, not an AONT. Whether
> it is semantically secure or not, I've no idea.

Rivest's encryption scheme is 'his all-or-nothing processing steps' + a
common block cipher. I integrated the steps required for producing
the all-or-nothing effect WITH the block cipher (through non-linear
block chainings). (It's much like the way the hardware needed for
ISDN can sit outside or inside one's computer.)

M. K. Shen


------------------------------

From: [EMAIL PROTECTED] (Mark Wooding)
Subject: Re: Hash and Entropy
Date: 6 Jul 2000 10:24:59 GMT

Benjamin Goldberg <[EMAIL PROTECTED]> wrote:

> I thought that a "random oracle" was an oracle that had a way of
> getting Truly Random bits from somewhere in a single step.

The definition I've usually come across works like this:

Let \Omega be the set of all functions from the set of finite bitstrings
to the set of infinite bitstrings, i.e., all functions of the form
H: { 0, 1 }^* \to { 0, 1 }^\infty.  When we use such a function, we tend
to just take the first few bits of the result, and the amount of the
result we need is usually obvious from context.  For example, if I
compute x' = x (+) H(y) then I only want the first |x| bits of the
result of H(y).

A `random oracle' will compute a function chosen randomly and uniformly
from \Omega.  It will always use the same function, but it won't tell
you what the function is.  Both the `good guys' and the adversary are
given access to the oracle.

The idea is that we can prove various useful things about systems which
use random oracles and then, when we want to actually use such systems,
replace the oracles with things based on hash functions.  The reality
isn't quite so rosy, but it's not too bad.

You could possibly think of the oracle as something which:

  * given an input it's not seen before, responds with a string of
    random data; and

  * given an input it has seen before, responds with the same string it
    responded with last time.

I think I prefer the one above, though.
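
That second view is just lazy sampling with memoisation.  A toy sketch
in C, purely for illustration (rand() stands in for a source of truly
random bits, and the names are made up):

#include <stdlib.h>
#include <string.h>

#define OUT_LEN 16              /* bytes of output callers want */

struct entry {
    char *input;
    unsigned char output[OUT_LEN];
    struct entry *next;
};

static struct entry *memo = NULL;

/* Lazy random oracle: fresh random bits for a new query, the
   memoised answer for a repeated one. */
const unsigned char *oracle(const char *input)
{
    struct entry *e;
    int k;

    for (e = memo; e != NULL; e = e->next)
        if (strcmp(e->input, input) == 0)
            return e->output;   /* seen before: same answer */

    e = malloc(sizeof *e);
    e->input = strdup(input);
    for (k = 0; k < OUT_LEN; k++)
        e->output[k] = (unsigned char)(rand() & 0xff);
    e->next = memo;
    memo = e;
    return e->output;           /* new query: fresh random answer */
}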

Phil Rogaway has some good stuff on his web pages, including the
original paper which introduced random oracles to us, and the first
appearances of OAEP and PSS.  See http://www.cs.ucdavis.edu/~rogaway/.

-- [mdw]

------------------------------

From: David Lancashire <[EMAIL PROTECTED]>
Subject: Re: Prime Numbers?
Date: Thu, 06 Jul 2000 06:36:21 -0400

> Worse, I believe it's been proven that no algebraic function can generate
> exclusively prime numbers. If such a function were to exist, the main
> benefit would be that instead of having to algorithmically, and (generally)
> fallibly, determine a number to be prime over quite a bit of effort, instead
> we could simply compute an algebraic function and a large prime would
> magically appear. Also the race to see who could find the largest prime
> would become an abusive search with people finding the nth prime with some
> absurdly large number for n.
>                 Joe

well... there's always Euclid's theorem (c. 300 BC):

Let n be an arbitrary integer.

    prime = n! + 1
    (find an integer divisible by all positive integers less than itself, and
simply add one....)

This doesn't generate all possible primes, but it's guaranteed to produce only
primes.  That being said, it's a violently bad algorithm for all practical
purposes.  Nothing like wiping out vast portions of the keyspace to make
cryptanalysis easier, eh?

As I understand from Mr. Schneier's Applied Cryptography, most encryption
programs simply generate a random number, and run it through several
probabilistic tests.  I'm not very familiar with these, but have heard that the
"Rabin-Miller" test is one of the simplest to implement. [see:
http://mathworld.wolfram.com/Rabin-MillerStrongPseudoprimeTest.html]
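
A single round of Rabin-Miller is indeed short.  Here is a sketch in C
for word-sized candidates (n below 2^32, so the intermediate products
fit in an unsigned long long); RSA-sized candidates need bignum
routines, and one repeats the test with several random bases to shrink
the error probability below any desired bound:

/* Compute (b^e) mod n by square-and-multiply. */
static unsigned long long powmod(unsigned long long b,
                                 unsigned long long e,
                                 unsigned long long n)
{
    unsigned long long r = 1;
    b %= n;
    while (e > 0) {
        if (e & 1)
            r = (r * b) % n;
        b = (b * b) % n;
        e >>= 1;
    }
    return r;
}

/* One Rabin-Miller round for odd n > 2 and base 1 < a < n-1.
   Returns 0 if n is definitely composite, 1 if n passes (each
   passing round leaves at most a 1/4 chance that n is composite). */
int rabin_miller_round(unsigned long long n, unsigned long long a)
{
    unsigned long long d = n - 1, x;
    int s = 0, i;

    while ((d & 1) == 0) {      /* write n-1 = d * 2^s with d odd */
        d >>= 1;
        s++;
    }
    x = powmod(a, d, n);
    if (x == 1 || x == n - 1)
        return 1;
    for (i = 1; i < s; i++) {
        x = (x * x) % n;
        if (x == n - 1)
            return 1;
    }
    return 0;                   /* definitely composite */
}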

--David



------------------------------

From: Tim Tyler <[EMAIL PROTECTED]>
Subject: Re: Compression & Encryption in FISHYLAND
Reply-To: [EMAIL PROTECTED]
Date: Thu, 6 Jul 2000 10:03:14 GMT

Kurt Shoens <[EMAIL PROTECTED]> wrote:
: In article <[EMAIL PROTECTED]>, Tim Tyler  <[EMAIL PROTECTED]> wrote:

:>For example, it's easy to visualise circumstances where the keyspace is
:>reduced so that a brute-force search would be possible, but fails for
:>lack of a halting criterion that uniquely establishes the correct plaintext.
:>This might happen (for example) if the plaintext is relatively short.

: So let's make sure I understand your point:  using 1-1 compression is
: useful in cases where:

: a) You've done such a poor job of key management that brute force is
:    practical
: ... and ...
: b) The uncompressed plaintext of the message is indistinguishable
:    from random

It /may/ be useful in these cases.  However, don't forget that I mentioned
this case as a counter-example to your statement that:

``If the keyspace is reduced sufficiently (say you know the key is one of
  these 1,000 alternatives), details of the compression or even the
  encryption algorithm itself aren't going to help materially.''

To think that inadequate key management is the *only* case where the
ability to rapidly reject bad keys helps does not appear to be correct.

Consider the case where the attacker has a known-plaintext attack on the
algorithm.  Here, sections of the key may be revealed even if the
management of the key has otherwise been excellent.

Having the plaintext "indistinguishable from random" is not necessary for
benefit to result either.  If it takes frequency analysis on a significant
volume of the message to spot a correct plaintext, then that's obviously
going to be slower than (say) the case of looking for a header of
all-zeros in the first block.

: There's really no recovery from poor key management.  If you give your
: adversaries the possibility of brute force search, you've lost the
: battle.  Why bother encrypting at all then?

Brute force is not enough if you're within the unicity distance - if you
don't have a good-enough halting criterion.

"Bad key management" reduces the search space.  It doesn't affect whether
the search is looking for something it knows how to recognise.  You can
still have some pretty good encryption with a key-space of only 2^8 keys.
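
A toy illustration of the halting-criterion point, using single-byte
XOR (obviously not real encryption) and a crude printable-ASCII
recogniser:

#include <stdio.h>
#include <ctype.h>

/* Crude halting criterion: accept a candidate plaintext if every
   byte is printable ASCII. */
static int looks_like_text(const unsigned char *p, int len)
{
    int i;
    for (i = 0; i < len; i++)
        if (!isprint(p[i]))
            return 0;
    return 1;
}

int main(void)
{
    const unsigned char msg[3] = { 'y', 'e', 's' };
    unsigned char ct[3], pt[3];
    int key, i, hits = 0;

    for (i = 0; i < 3; i++)             /* "encrypt" under key 0x5a */
        ct[i] = msg[i] ^ 0x5a;

    for (key = 0; key < 256; key++) {   /* brute-force all 2^8 keys */
        for (i = 0; i < 3; i++)
            pt[i] = ct[i] ^ (unsigned char)key;
        if (looks_like_text(pt, 3))
            hits++;                     /* plausible: can't halt here */
    }
    printf("%d of 256 keys give printable plaintext\n", hits);
    return 0;
}

Far more than one key survives the test, so the search has no unique
answer to halt on: a three-byte message is well inside the unicity
distance even of this trivial cipher.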

*If* you were arguing that the 1-1 property is icing on the cake of
compression - and that compression is icing on the cake of encryption
(from a security POV) - then I might agree with you.

This material is mainly of significance because cryptographers try to
live close to the edge, and produce the best systems they know how to -
because competition means that if they don't, then other folks will.
-- 
__________  Lotus Artificial Life  http://alife.co.uk/  [EMAIL PROTECTED]
 |im |yler  The Mandala Centre   http://mandala.co.uk/  VIPAR GAMMA GUPPY.

------------------------------

From: Quisquater <[EMAIL PROTECTED]>
Subject: Re: Prime Numbers?
Date: Thu, 06 Jul 2000 13:10:11 +0200

>    prime = n! + 1
>    (find an integer divisible by all positive integers less than itself, and
>    simply add one....)

Oops!

n = 2: 2! + 1 = 3   a prime, OK
n = 3: 3! + 1 = 7   a prime, OK
n = 4: 4! + 1 = 25  a square
n = 5: 5! + 1 = 121 a square

conjecture ? :-)

------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list (and sci.crypt) via:

    Internet: [EMAIL PROTECTED]

End of Cryptography-Digest Digest
******************************
