Cryptography-Digest Digest #953, Volume #11       Tue, 6 Jun 00 04:13:00 EDT

Contents:
  Re: Statistics of occurrences of prime number sequences in PRBG output as  ("John A. 
Malley")
  Re: DVD encryption secure? -- any FAQ on it (jungle)
  Re: Question about recommended keysizes (768 bit RSA) (David Blackman)
  Re: Multiple exponentiation (lcs Mixmaster Remailer)
  Re: RSA Algorithm ("Joseph Ashwood")
  Re: Favorite Cipher Contest Entry (Runu Knips)
  Re: Multiple exponentiation ("Peter L. Montgomery")
  Re: Cipher design a fading field? (Mok-Kong Shen)
  Re: Statistics of occurrences of prime number sequences in PRBG output as  (Mok-Kong 
Shen)
  Some dumb questions (Mok-Kong Shen)

----------------------------------------------------------------------------

From: "John A. Malley" <[EMAIL PROTECTED]>
Subject: Re: Statistics of occurrences of prime number sequences in PRBG output as 
Date: Mon, 05 Jun 2000 21:53:24 -0700



Mok-Kong Shen wrote:
> 
>
> I don't understand your sentence 'there is no iterated or recursive
> mathematical function that generates all primes starting from a given
> input value'. One can write a program to generate successive primes.
> Wouldn't that contradict your statement?

Well, no, I don't think so - let me clarify. Let's use L'Ecuyer's
definition of a generator:

A generator is a structure Gen = { S, s[0], T, U, G } where S is a
finite set of states, s[0] is an element of S and is the initial state,
T: S -> S is the transition function, U is a finite set of output
symbols, and G : S -> U is the output function. 

A generator operates as follows: Start from the initial state s[0] (call
it the seed) and let the initial output of the generator be u[0] =
G(s[0]). Then for i = 1, 2, ... let the next state s[i] = T(s[i-1]) and
the next output be u[i] = G(s[i]) = G( T( s[i-1] )). 

The next state is derived from the current state. The next output is
derived from the current state. 

Now let's deliberately simplify the generator for a thought experiment. 

Let each state equal an m-bit number. The seed s[0] is some m-bit
number. Let the set U equal the set S (so each state is an output of
the generator, and each output of the generator is an m-bit number),
and let G be just the identity mapping from S to U.  Now
u[i] = s[i] = T( s[i-1] ), and since u[i] = s[i] we can say the next
state is s[i] = T( u[i-1] ).
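
To make this concrete, here is a minimal C sketch of such a
simplified generator (the particular T below, a 16-bit linear
congruential step, is just a stand-in I made up for illustration, not
anything from L'Ecuyer):

#include <stdio.h>

#define M    16                         /* m-bit state/output           */
#define MASK ((1u << M) - 1u)

/* A stand-in transition function T : S -> S on m-bit numbers.
   Any other m-bit mapping could be substituted here.                   */
static unsigned int T(unsigned int s)
{
    return (2053u * s + 13849u) & MASK; /* an LCG step, reduced mod 2^m */
}

int main(void)
{
    unsigned int s = 12345u & MASK;     /* the seed s[0]                */
    int i;
    for (i = 0; i < 10; i++) {
        printf("u[%d] = %u\n", i, s);   /* G is the identity: u[i] = s[i] */
        s = T(s);                       /* next state s[i+1] = T(s[i])  */
    }
    return 0;
}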

So the transition function T takes the previous output of this
generator (an m-bit number) and transforms it into the next output of
this generator (another m-bit number). The domain of T is at most the
set 0 to 2^m - 1, and likewise for the codomain.  However, the domain
and codomain can be smaller than these ranges (and the seed is assumed
to lie in the domain). The "feedback" of the previous state takes every
element of the codomain of T back to its same value in the domain of T
for the next iteration. The longest sequence we can expect from this
generator before it repeats itself is the number of elements
( <= 2^m ) in its codomain, in some permutation.  Output sequences
could repeat after fewer steps than the number of elements in T's
codomain.  Different T functions, relating different domains and
codomains, would produce different permutations of the ( <= 2^m )
elements of T's codomain.

What kind of transition functions T produce "long" sequences of prime
numbers in the output series s[i], mixed with long sequences of
composite numbers? Any? What are the characteristics - in terms of
domain, codomain and mathematical operations - of a transition
function T that takes a prime number as input and produces a prime
number as output, then takes that prime output as input and produces
another prime output, and again, and again... and then, at some prime
input, stops doing this and outputs a sequence of composite numbers?

like this  - 

 s[0] is composite. Apply T to s[0]. s[1] is prime. Apply T to s[1].
s[2] is prime. Apply T to s[2].  s[3] is prime. Apply T to s[3]. s[4] is
prime. Apply T to s[4]. s[5] is composite. Apply  T to s[5]. s[6] is
composite....

I conjecture that finding such T transition functions isn't easy when
you make T out of m-bit register shifts and logical functions of bit
values in the m-bit register (or other mathematical operations).
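
For what it's worth, here is the kind of quick experiment I have in
mind, as a minimal C sketch - it iterates one particular T that I made
up from shifts and XORs on a 16-bit register and reports the longest
run of consecutive prime outputs.  Nothing here proves anything; it is
just a way to poke at the conjecture for one T:

#include <stdio.h>

#define M    16
#define MASK ((1u << M) - 1u)

/* a made-up T built from shifts and XORs, xorshift-style */
static unsigned int T(unsigned int s)
{
    s ^= (s << 7) & MASK;
    s ^= (s >> 9);
    s ^= (s << 8) & MASK;
    return s & MASK;
}

static int is_prime(unsigned int n)
{
    unsigned int d;
    if (n < 2) return 0;
    for (d = 2; d * d <= n; d++)
        if (n % d == 0) return 0;
    return 1;
}

int main(void)
{
    unsigned int s = 1u;                /* seed s[0] */
    unsigned int run = 0, best = 0;
    unsigned long i;
    for (i = 0; i < 100000UL; i++) {
        run = is_prime(s) ? run + 1 : 0;
        if (run > best) best = run;
        s = T(s);                       /* s[i+1] = T(s[i]) */
    }
    printf("longest run of consecutive prime outputs: %u\n", best);
    return 0;
}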

BUT - no proof yet on my part to show this is true. It's a conjecture
that struck me as worth pursuing.  Before I sit down and work it
through, I thought I'd ask whether existing research/papers already
cover it, or whether there's some fundamental fact I missed that
renders the conjecture false.

> 
> Further in the other sentence, does the expression 'long sequence of
> primes' mean a sequence of primes without gaps, i.e. a number
> of successive primes, e.g. 7,11,13,17,19,23, in increasing or
> decreasing order? 

Yes, those are the kinds of sequences I had in mind - increasing or
decreasing, or a "burst" of successive primes in any order, appearing
in the keystream without gaps.

> I don't yet understand the logic underlying your claim
> that the probability you mentioned in a non-perfect sequence should
> be much lower than that in a perfect random sequence?

Depends on whether transition functions like the one just described
above actually exist. 

Given a k-bit string of (truly) random bits, where k is a multiple of
m, the expected probabilities of encountering 3 primes in a row in the
sequence, or 4 primes in a row, etc., can be calculated. If there are
no transition functions T that generate very long runs of primes in
the bitstream, or only ones that generate at most pairs or triplets of
primes in the output, etc., then the expected probabilities of
encountering 3 primes, 4 primes, 5 primes, etc. in a row in PRBG bit
streams are lower, or even zero.  Then encountering runs of 3, 4, 5,
etc. primes in the ciphertext stream becomes very significant: these
occurrences, unlikely for the PRBG keystream, would be due to the
characteristics of the plaintext itself, and would thus leak
information for the cryptanalyst to use.
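
To put rough numbers on it (a back-of-the-envelope estimate on my
part): by the prime number theorem a random m-bit number is prime with
probability roughly 1/(m ln 2), about 0.045 for m = 32.  If the words
really were independent and uniform, a run of 3 primes in consecutive
32-bit words would occur with probability around 0.045^3, roughly
9 * 10^-5 per position, and a run of 4 around 4 * 10^-6.  A systematic
departure of the keystream from figures like these is exactly the kind
of statistic the conjecture is about.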

A viable attack? A real situation? Don't know. But it piqued my curiosity. 
 

> Could you
> please explain a bit more, perhaps with a hypothetical example to
> illustrate? Thanks.

Hope this helped. 


> 
> M. K. Shen

------------------------------

From: jungle <[EMAIL PROTECTED]>
Subject: Re: DVD encryption secure? -- any FAQ on it
Date: Tue, 06 Jun 2000 01:29:40 -0400

yes ...

Travis Farral wrote:
> 
> One thing that has been missed so far is the fact that most DVD movies
> cost ~$30US while a blank DVD costs ~$50US (to the consumer anyway,
> correct me if I'm wrong) so that means that only the DVD mass-production
> houses would benefit from DVD piracy (in a bit-for-bit fashion).

yes ...



------------------------------

From: David Blackman <[EMAIL PROTECTED]>
Subject: Re: Question about recommended keysizes (768 bit RSA)
Date: Tue, 06 Jun 2000 15:46:03 +1000

"Thomas J. Boschloo" wrote:
> 
> Hi sci.crypt guys,
> 
> I'm mainly from a.p.a-s nowadays and yesterday I found the document
> <http://www.rsasecurity.com/rsalabs/bulletins/bulletin13.html>. In it it
> says "Further, 768 bits seems unreachable today, even utilizing the
> TWINKLE device, because of the difficulties in dealing with the matrix.
> However, we do believe that 768-bits might be breakable in about
> 10-years time. The fitted data from section III gives a date of 2019 for
> breaking such a key by public effort."

Sure, 768 bits is probably safe for the moment and certainly safer than
some other things you are doing.

The question to ask is: what would it cost you and the people you talk
to to go to 1024 bits? 2048 bits? more? I suggest you use the biggest
one where you won't notice the cost. I suspect 1024 will be fast enough
for your purposes, and larger keys might possibly be ok too.

When extra security is available for free, use it, even if it doesn't
seem relevant. And when you upgrade to 1024 bits or more, don't think
"Now I'm safe"; think "Now I'm probably safe from that one - what's the
next leak to plug?"

------------------------------

Date: 6 Jun 2000 06:00:40 -0000
From: lcs Mixmaster Remailer <[EMAIL PROTECTED]>
Subject: Re: Multiple exponentiation

> > Using algorithm 14.88 (pp 618, Handbook of Applied Cryptography), a
> > multiple exponentiation of the form a^x b^y can be done in 1.2
> > exponentiation instead of 2.
> > 
> > My question is how to do the following?
> > a^x, a^x b^y
> > 
> > I've been told that it can be done in 1.2 exponentiation instead of 2.2
> > exponentiation by deriving a^x from a^x b^y.
>
> You can clearly improve on 2.2 by computing a^x and b^y separately and
> multiplying!  But I can't see any improvement beyond that.

You also pretty clearly can't improve on 2.0.  If you could, you
could then compute b^y by dividing (multiplying by the inverse, if
this is modular arithmetic).  Divisions take asymptotically the
same time as multiplications.  So this would let you compute two
independent exponentiations, a^x and b^y, in less than twice the
time to compute one exponentiation, which is impossible.
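
For reference, here is a minimal C sketch of the simultaneous
("Shamir's trick") exponentiation that the 1.2 figure refers to - one
shared squaring chain, with a multiply by a, b or ab depending on the
current bit pair.  This is a generic illustration written from memory,
not a transcription of HAC 14.88, and the small modulus is arbitrary,
chosen only so the products fit in 64 bits:

#include <stdio.h>

typedef unsigned long long u64;

static u64 mulmod(u64 x, u64 y, u64 p)  /* fine while p < 2^32 */
{
    return (x * y) % p;
}

/* a^x * b^y mod p with one shared square-and-multiply chain */
static u64 simul_exp(u64 a, u64 x, u64 b, u64 y, u64 p)
{
    u64 ab = mulmod(a, b, p);
    u64 r = 1;
    int i;
    for (i = 63; i >= 0; i--) {
        int xi = (int)((x >> i) & 1), yi = (int)((y >> i) & 1);
        r = mulmod(r, r, p);            /* one squaring per bit position */
        if (xi && yi)      r = mulmod(r, ab, p);
        else if (xi)       r = mulmod(r, a, p);
        else if (yi)       r = mulmod(r, b, p);
    }
    return r;
}

int main(void)
{
    u64 p = 1000003ULL;                 /* arbitrary small modulus */
    printf("%llu\n", simul_exp(5, 123456, 7, 654321, p));
    return 0;
}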






------------------------------

From: "Joseph Ashwood" <[EMAIL PROTECTED]>
Subject: Re: RSA Algorithm
Date: Tue, 6 Jun 2000 00:01:41 -0700

The problem is that, as you have described it, you are
performing an encryption (which should not be compressible,
according to what I've stated, and that is what's in
question), then artificially expanding it after the fact,
and later compressing out some of the added expansion; it
suffers from the same problem as Tom's. I think I can end
this, though, by changing my statement: if you take the
input to an isolated strong encryption method, the output
cannot be compressed to smaller than the input. Can we
agree to that, and agree to differ on what happens in
certain special cases?
                Joe


"Douglas A. Gwyn" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]...
> wtshaw wrote:
> > ... "Joseph Ashwood" <[EMAIL PROTECTED]> wrote:
> > > I think you need a restatement of one of crypto's
basic
> > > lemmas, the output of a strong cipher MUST be
> > > indistinguishable from a random number. ...
>
> > ... The appearance of ciphertext can be non-random, yet
even be
> > indecipherable.
>
> Perhaps it would help to give a simple counterexample:
> Suppose the ciphertext from some system X meets JA's
criterion.
> Consider a new ciphertext consisting alternately of 0 bits
and
> the system-X ciphertext bits.  Clearly the result is easy
to
> distinguish from a random bit stream, yet the underlying
> plaintext is just as secure as using system X alone.
>
> > ... Do you work for the government, or do you just
believe their
> > propaganda?
>
> I don't think either would be a fair inference from what
the fellow
> said.
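
A tiny C sketch of the interleaving in that counterexample, purely
illustrative (the "system X" ciphertext below is a made-up constant):
the point is only that inserting a 0 bit before every ciphertext bit
makes the stream trivially distinguishable from random while leaving
the underlying security untouched.

#include <stdio.h>

/* expand each ciphertext byte into two bytes, inserting a 0 bit
   before every ciphertext bit: c7 c6 ... c0  ->  0 c7 0 c6 ... 0 c0 */
static void interleave_zeros(const unsigned char *ct, size_t n,
                             unsigned char *out)
{
    size_t i;
    int b;
    for (i = 0; i < n; i++) {
        unsigned int hi = 0, lo = 0;
        for (b = 0; b < 4; b++) {
            hi |= ((ct[i] >> (4 + b)) & 1u) << (2 * b);
            lo |= ((ct[i] >> b) & 1u) << (2 * b);
        }
        out[2 * i]     = (unsigned char)hi;
        out[2 * i + 1] = (unsigned char)lo;
    }
}

int main(void)
{
    unsigned char ct[] = { 0xDE, 0xAD, 0xBE, 0xEF }; /* stand-in "system X" output */
    unsigned char out[8];
    size_t i;
    interleave_zeros(ct, sizeof ct, out);
    for (i = 0; i < sizeof out; i++)
        printf("%02X ", out[i]);        /* every other bit of the result is 0 */
    putchar('\n');
    return 0;
}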





------------------------------

Date: Tue, 06 Jun 2000 09:44:23 +0200
From: Runu Knips <[EMAIL PROTECTED]>
Subject: Re: Favorite Cipher Contest Entry

[EMAIL PROTECTED] wrote:
> I was reviewing the ciphers in the cipher contest and listing them in
> favorite order. [...] What are other peoples favorites?

I agree that Storin has had the nicest structure of all the ciphers so
far; I especially liked the nifty 'x ^= x >> 12' in it.

------------------------------

From: "Peter L. Montgomery" <[EMAIL PROTECTED]>
Subject: Re: Multiple exponentiation
Date: Tue, 6 Jun 2000 07:58:41 GMT

In article <[EMAIL PROTECTED]> 
lcs Mixmaster Remailer <[EMAIL PROTECTED]> writes:
>> > Using algorithm 14.88 (pp 618, Handbook of Applied Cryptography), a
>> > multiple exponentiation of the form a^x b^y can be done in 1.2
>> > exponentiation instead of 2.
>> > 
>> > My question is how to do the following?
>> > a^x, a^x b^y
>> > 
>> > I've been told that it can be done in 1.2 exponentiation instead of 2.2
>> > exponentiation by deriving a^x from a^x b^y.
>>
>> You can clearly improve on 2.2 by computing a^x and b^y separately and
>> multiplying!  But I can't see any improvement beyond that.
>
>You also pretty clearly can't improve on 2.0.  If you could, you
>could then compute b^y by dividing (multiplying by the inverse, if
>this is modular arithmetic).  Divisions take asymptotically the
>same time as multiplications.  So this would let you compute two
>independent exponentiations, a^x and b^y, in less than twice the
>time to compute one exponentiation, which is impossible.

    It is possible for some exponent pairs.  For example, 
consider x = y = -1.  You can compute a^(-1) = b/ab and b^(-1) = a/ab
with one inversion and three multiplications.  If an inversion costs
more than three multiplications, this is an improvement.
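
A minimal C sketch of that trick on toy numbers (the single inversion
is done here via Fermat's little theorem since the modulus is assumed
prime; any inversion routine would do):

#include <stdio.h>

typedef unsigned long long u64;

static u64 mulmod(u64 x, u64 y, u64 p) { return (x * y) % p; } /* ok while p < 2^32 */

static u64 powmod(u64 g, u64 e, u64 p)
{
    u64 r = 1;
    g %= p;
    while (e) {
        if (e & 1) r = mulmod(r, g, p);
        g = mulmod(g, g, p);
        e >>= 1;
    }
    return r;
}

int main(void)
{
    u64 p = 1000003ULL, a = 12345, b = 67890;      /* p is prime, a, b nonzero mod p */
    u64 ab_inv = powmod(mulmod(a, b, p), p - 2, p); /* the one inversion: (ab)^(-1)  */
    u64 a_inv  = mulmod(b, ab_inv, p);              /* a^(-1) = b / (ab)             */
    u64 b_inv  = mulmod(a, ab_inv, p);              /* b^(-1) = a / (ab)             */
    printf("%llu %llu\n", mulmod(a, a_inv, p), mulmod(b, b_inv, p)); /* prints 1 1   */
    return 0;
}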

    For a more complicated example, assume you are given 
A = a^3 and B = b^5, and want a, b.
Find the exponent for -1/15.  [That is, if working modulo a prime p,
with GCD(p-1, 3*5) = 1, compute -1/15 mod (p-1).]  Then compute

             T1 = A^5 * B^3 = a^15 * b^15
             T2 = T1^(-1/15) = 1/ab 
             T3 = A^2 * T2^5 * B = a
             T4 = A^3 * T2^9 * B^2 = b

There are complications when T1 is not invertible (e.g., A = 0).
Most of the time we achieve a cube root and a fifth root 
by taking a (-1/15)-th power and a few extra multiplications.
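
To check the arithmetic on a toy case (my numbers, not part of the
original example): take p = 23, so p-1 = 22 and GCD(22, 15) = 1; the
exponent -1/15 mod 22 is 19, since 15*19 = 285 = 13*22 - 1.  With
a = 5 and b = 7 the recipe above does recover a and b:

#include <stdio.h>

typedef unsigned long long u64;

static u64 powmod(u64 g, u64 e, u64 p)
{
    u64 r = 1;
    g %= p;
    while (e) {
        if (e & 1) r = (r * g) % p;
        g = (g * g) % p;
        e >>= 1;
    }
    return r;
}

int main(void)
{
    u64 p = 23, a = 5, b = 7;
    u64 A = powmod(a, 3, p), B = powmod(b, 5, p);        /* givens: A = a^3, B = b^5 */
    u64 e = 19;                                          /* -1/15 mod (p-1)          */
    u64 T1 = (powmod(A, 5, p) * powmod(B, 3, p)) % p;    /* a^15 * b^15              */
    u64 T2 = powmod(T1, e, p);                           /* 1/(ab)                   */
    u64 T3 = (powmod(A, 2, p) * powmod(T2, 5, p) % p) * B % p;               /* = a  */
    u64 T4 = (powmod(A, 3, p) * powmod(T2, 9, p) % p) * powmod(B, 2, p) % p; /* = b  */
    printf("T3 = %llu (a = %llu), T4 = %llu (b = %llu)\n", T3, a, T4, b);
    return 0;
}
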
-- 
E = m c^2.  Einstein = Man of the Century.  Why the squaring?

        [EMAIL PROTECTED]    Home: San Rafael, California
        Microsoft Research and CWI

------------------------------

From: Mok-Kong Shen <[EMAIL PROTECTED]>
Subject: Re: Cipher design a fading field?
Date: Tue, 06 Jun 2000 10:09:32 +0200



"Douglas A. Gwyn" well:

> Mok-Kong Shen wrote:
> > The problem is, I guess, that the knowledge can't be perfect and
> > often leaves much to be desired, e.g. what concerns the resources
> > of the opponents or certain yet unproved propositions related to
> > number theory, etc.
>
> That's not really an obstacle; a large number of theorems take
> the form "assuming X, then Y".

Well, what if one has no way at all to verify whether X is true, and
has to 'believe' that it is, as in religion? The sheer number of
people believing something to be true (but without any proof) can be a
very important matter in e.g. politics, but in my humble view it
shouldn't play a role in science.

M. K. Shen



------------------------------

From: Mok-Kong Shen <[EMAIL PROTECTED]>
Subject: Re: Statistics of occurrences of prime number sequences in PRBG output as 
Date: Tue, 06 Jun 2000 10:09:23 +0200



"John A. Malley" wrote:

> Mok-Kong Shen wrote:
> >
> > I don't understand your sentence 'there is no iterated or recursive
> > mathematical function that generates all primes starting from a given
> > input value'. One can write a program to generate successive primes.
> > Wouldn't that contradict your statement?
>
> Well, no, I don't think so - let me clarify. Let's use L'Ecuyer's
> definition of a generator:

[snip]

We are talking just about an arbitrary (to be defined) function, not
about any randomness property of it, right? Now a function on a finite
discrete domain, here of integer values, can simply be defined by a
look-up table, and it is obvious that you can prescribe the iteration
sequence of the function, starting from a certain given initial value,
to be anything you want - in particular a sequence containing a
specific bunch of given primes, and that exactly in the locations you
want. Do I miss something here?
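
To make the look-up-table point concrete, a minimal C sketch (with toy
numbers of my own choosing): T is just an array, and its orbit from
the seed is forced to visit whatever sequence one writes down - here
composite, prime, prime, prime, prime, composite, exactly the pattern
in your example:

#include <stdio.h>

#define N 16   /* the finite domain: 4-bit numbers 0..15 */

int main(void)
{
    /* the sequence the generator is required to produce, chosen freely:
       4 (composite), then the primes 7, 11, 13, 3, then the composite 8 */
    unsigned int wanted[6] = { 4, 7, 11, 13, 3, 8 };
    unsigned int T[N];
    unsigned int i, s;

    for (i = 0; i < N; i++) T[i] = i;              /* fill T arbitrarily elsewhere */
    for (i = 0; i + 1 < 6; i++) T[wanted[i]] = wanted[i + 1];
    T[8] = 4;                                      /* close the cycle              */

    s = wanted[0];                                 /* seed s[0]                    */
    for (i = 0; i < 8; i++) {
        printf("%u ", s);
        s = T[s];                                  /* s[i+1] = T(s[i])             */
    }
    putchar('\n');                                 /* prints: 4 7 11 13 3 8 4 7    */
    return 0;
}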

I have discussed the above in the context of your generator, which
has a finite period of its output. The wording of the sentence quoted,
if interpreted 'exactly', means generating ALL primes. That is ALSO
possible, as I said previously. One can easily write a program in C
or any other language to generate the next prime from a given prime,
so with sufficient computing time one gets to any large prime one
wants. And a program is a function, isn't it?
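
For instance, a trivial sketch (trial division only, so practical just
for small primes; for cryptographic sizes one would use a
probabilistic primality test instead):

#include <stdio.h>

static int is_prime(unsigned long n)
{
    unsigned long d;
    if (n < 2) return 0;
    if (n % 2 == 0) return n == 2;
    for (d = 3; d * d <= n; d += 2)
        if (n % d == 0) return 0;
    return 1;
}

/* the next prime strictly greater than n */
static unsigned long next_prime(unsigned long n)
{
    do { n++; } while (!is_prime(n));
    return n;
}

int main(void)
{
    unsigned long p = 2;
    int i;
    for (i = 0; i < 10; i++) {          /* iterate the "function" p -> next_prime(p) */
        printf("%lu ", p);
        p = next_prime(p);
    }
    putchar('\n');                      /* 2 3 5 7 11 13 17 19 23 29 */
    return 0;
}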

M. K. Shen


------------------------------

From: Mok-Kong Shen <[EMAIL PROTECTED]>
Subject: Some dumb questions
Date: Tue, 06 Jun 2000 10:13:10 +0200


I would like very much to know how to 'concretely' answer the
following certainly dumb questions:

1. If a bit sequence has a certain known small bias in
   frequency but is uncorrelated, how can one exploit
   that fact to analyse messages encrypted with xor?

2. If an ideal OTP is misused, in that it is used a small
   number n of times, how is one going to attack, if
   absolutely no known plaintext is available?

Thanks.

M. K. Shen




------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list (and sci.crypt) via:

    Internet: [EMAIL PROTECTED]

End of Cryptography-Digest Digest
******************************
