Cryptography-Digest Digest #166, Volume #14      Tue, 17 Apr 01 08:13:01 EDT

Contents:
  Re: Publishing is *hard* (Mok-Kong Shen)
  Re: Note on combining PRNGs with the method of Wichmann and Hill (Mok-Kong Shen)
  >>>> See the Latest Cryptology Theories Here, Why Silverman is DEAD Wrong!<<<< 
(Mathman)
  Re: Note on combining PRNGs with the method of Wichmann and Hill (Mok-Kong Shen)
  Re: Lorentz attractor... (Gerhard Wesp)
  Re: NSA is funding stegano detection (Mok-Kong Shen)
  Re: NSA is funding stegano detection (Mok-Kong Shen)
  Re: Note on combining PRNGs with the method of Wichmann and Hill ("Brian Gladman")
  Re: Rabin-Miller prime testing (Simon Josefsson)
  Stream combination without the use of dynamic tables. (David Formosa (aka ? the 
Platypus))
  Re: HOW THE HELL DO I FIND "D"?!?! (John Savard)
  Re: Lorentz attractor... (John Savard)

----------------------------------------------------------------------------

From: Mok-Kong Shen <[EMAIL PROTECTED]>
Subject: Re: Publishing is *hard*
Date: Tue, 17 Apr 2001 10:09:36 +0200



"John A. Malley" wrote:
>  
> (I put up a URL to a draft of the paper last year here in the group -
> http://www.homeworlds.com/papers/SECLCG.pdf )
[snip]

A very dumb question: does the article mean that there
is an indication of a probable weakness in ElGamal (for
otherwise such an attack would be inconceivable), or
else that, for ANY secure cipher, encrypting the output
of a poor PRNG can be problematic? If it is the second
case, then passing the output of a common (statistically
good) PRNG to, say, AES would result in a sequence that
could be cracked, which I would suppose to be a new and
rather significant result.

M. K. Shen

------------------------------

From: Mok-Kong Shen <[EMAIL PROTECTED]>
Crossposted-To: sci.crypt.random-numbers
Subject: Re: Note on combining PRNGs with the method of Wichmann and Hill
Date: Tue, 17 Apr 2001 10:17:13 +0200



Bryan Olson wrote:
> 
> Mok-Kong Shen wrote:
> 
> >I'd like to add something to make my last paragraph more
> >understandable: if one of the streams gets a factor of 1.0
> >(and it is uniform), isn't everything then (rigorously)
> >theoretically o.k. on that particular issue?
> 
> Of course not.  The theorem was:
> | as long as the streams are independent, if any of the
> | streams are uniform then the sum is uniform.

The PRNGs are assumed to be independent (I forgot to
say that explicitly) and uniform. Now one stream gets
the factor 1.0, so it is uniform. The others are
not uniform. So according to the theorem the modular
sum is uniform, isn't it? (As I said elsewhere, the
continuous case can be considered the limiting case
of the discrete case, whose proof we discussed
some time back. There is in fact a rigorous proof
of the continuous case that doesn't use that limiting
process. I found the paper one day by chance in Math.
Rev. but unfortunately didn't note down the reference.)
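
(A quick empirical check of the theorem: a minimal C++ sketch, with
names and constants of my own choosing, that combines a uniform stream
with a deliberately skewed one and bins the mod-1 sum. It uses the C
library rand() for both streams purely for illustration; successive
calls are not truly independent, but as noted that shouldn't matter
much here.)

#include <cstdio>
#include <cstdlib>
#include <cmath>

// deliberately non-uniform stream: squaring a uniform variate piles
// probability mass toward 0
double skewed(void)  { double u = std::rand() / (RAND_MAX + 1.0); return u * u; }

// (approximately) uniform stream on [0, 1)
double uniform(void) { return std::rand() / (RAND_MAX + 1.0); }

int main(void)
{
    unsigned long bins[10] = { 0 };
    for(long i = 0; i < 1000000; ++i)
    {   // mod-1 sum of one uniform and one skewed stream
        double r = std::fmod(uniform() + skewed(), 1.0);
        bins[(int)(10.0 * r)]++;
    }
    // each bin should come out near 100000 despite the skewed input
    for(int i = 0; i < 10; ++i)
        std::printf("[%.1f:%.1f) -> %lu\n", i / 10.0, (i + 1) / 10.0, bins[i]);
    return 0;
}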

M. K. Shen

------------------------------

From: Mathman <[EMAIL PROTECTED]>
Subject: >>>> See the Latest Cryptology Theories Here, Why Silverman is DEAD Wrong!<<<<
Date: Tue, 17 Apr 2001 03:36:49 -0500

www.mediaboy.net/1010100-1100001-1111010/gahk/


------------------------------

From: Mok-Kong Shen <[EMAIL PROTECTED]>
Crossposted-To: sci.crypt.random-numbers
Subject: Re: Note on combining PRNGs with the method of Wichmann and Hill
Date: Tue, 17 Apr 2001 11:01:08 +0200



Mok-Kong Shen wrote:
> 
> Bryan Olson wrote:
> >
> > Mok-Kong Shen wrote:
> >
> > >I'd like to add something to make my last paragraph more
> > >understandable: if one of the streams gets a factor of 1.0
> > >(and it is uniform), isn't everything then (rigorously)
> > >theoretically o.k. on that particular issue?
> >
> > Of course not.  The theorem was:
> > | as long as the streams are independent, if any of the
> > | streams are uniform then the sum is uniform.
> 
> The PRNGs are assumed to be independent (I forgot to
> say that explicitly) and uniform. Now one stream gets
> the factor 1.0, so it is uniform. The others are
> not uniform. So according to the theorem the modular
> sum is uniform, isn't it? (As I said elsewhere, the
> continuous case can be considered the limiting case
> of the discrete case, whose proof we discussed
> some time back. There is in fact a rigorous proof
> of the continuous case that doesn't use that limiting
> process. I found the paper one day by chance in Math.
> Rev. but unfortunately didn't note down the reference.)

Addendum: The scheme of Wichmann and Hill is intended
to get a more uniform stream from a number of imperfectly
uniform streams. The assumption I made above that
the PRNGs are uniform is for discussion of the
theoretical point you raised, which I quote below:

   The modification destroys an important property of 
   the basic combination method: as long as the streams 
   are independent, if any of the streams are uniform 
   then the sum is uniform.

So in that situation we assume that there are uniform
streams to start with. Note that we are actually
splitting hairs. The practical situation is what
Wichmann and Hill treated, namely imperfect PRNGs
that are not very uniform. They don't use multipliers
(which is equivalent to using multipliers of 1.0
throughout); I use some multipliers in the vicinity of
1.0. There are thus small deviations from their scheme,
but on average the effect should somehow cancel out.
The original output from Wichmann and Hill in the
practical case is not 'perfect' anyway, so it can't
matter much if a little more imperfection is
introduced. (The PRNGs with the multipliers could be
considered to be 'given' but slightly poorer PRNGs.) To
get better results, one should preferably use more PRNGs
in the combination.
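
(For reference, a minimal C++ sketch of the original Wichmann-Hill
combination: three small LCGs whose scaled outputs are summed mod 1.
The constants are those of the published AS 183 algorithm; the wrapper
and seeds are mine.)

#include <cstdio>
#include <cmath>

// Wichmann-Hill (AS 183): three small LCGs, outputs scaled to [0,1)
// and summed mod 1; the combination is far more uniform than any of
// the three component generators alone
struct WichmannHill
{
    int x, y, z;                         // seeds in [1, 30000]

    double next(void)
    {
        x = 171 * x % 30269;
        y = 172 * y % 30307;
        z = 170 * z % 30323;
        return std::fmod(x / 30269.0 + y / 30307.0 + z / 30323.0, 1.0);
    }
};

int main(void)
{
    WichmannHill g = { 1, 2, 3 };
    for(int i = 0; i < 5; ++i) std::printf("%f\n", g.next());
    return 0;
}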

M. K. Shen

------------------------------

From: [EMAIL PROTECTED] (Gerhard Wesp)
Subject: Re: Lorentz attractor...
Date: 17 Apr 2001 09:44:41 GMT
Reply-To: [EMAIL PROTECTED]

In article <[EMAIL PROTECTED]>,
Douglas A. Gwyn <[EMAIL PROTECTED]> wrote:
>I think you must mean use of chaos, because the problem for use in
>encryption *is* the existence of attractors.
>
>You won't find much written on the use of kittens in encryption,
>either, for much the same reasons.

Is this a claim that the existence of these cute little four-footed
balls of fur is a problem for encryption???

;-)

-g
-- 
Due to the cost-cutting measures on online expenses, at least one vowel
is to be omitted, effective immediately, from every word of more than
three letters.

------------------------------

From: Mok-Kong Shen <[EMAIL PROTECTED]>
Crossposted-To: comp.security.misc,talk.politics.crypto
Subject: Re: NSA is funding stegano detection
Date: Tue, 17 Apr 2001 12:10:17 +0200



Bernd Eckenfels wrote:
> 
> In comp.security.misc Mok-Kong Shen <[EMAIL PROTECTED]> wrote:
> > I have a 'naive' logical problem. If such changes are
> > not perceivable by the naked eye, doesn't it mean that
> > one can have a class of pictures (centering around
> > a particular one) that are the same for the naked eye
> > but have differences in LSB of the Fourier coefficients?
> 
> The problem is, for example, if you take a GIF of a drawing made with
> a graphics program, this is a very bad hiding place for LSB noise,
> since you will even see the pixel faults with the naked eye. If you
> use a jpg scanned from paper you will have natural pixel faults which
> are detectable by frequency analysis.
> 
> A simple fictitious example: using a Panasonic scanner with software
> ScanXY can be detected because the scanner has a typical red/blue peak
> in the frequency spectrum (I just made that up). Suppose it is known
> that this scanner with this software can never generate the color
> #aabbbbff because of a microcode problem. If you now have the color
> #aabbbbfe and add 1 to the LSB, you have *poof* made your alteration
> of the picture obvious. Just one possible example.

I know too little about image processing, so allow me
a few dumb questions: Are there devices/software that are
free of such restrictions in the spectrum? Is it possible
to have an option of lossless compression with the common
image compression algorithms, i.e. can I exactly
recover the original pixels? Thanks.
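
(For concreteness, the LSB embedding under discussion amounts to
something like the following minimal C++ sketch on a raw, losslessly
stored pixel buffer; this is an illustration, not any particular
tool's method.)

#include <cstdint>
#include <cstddef>

// write one message bit into the least significant bit of each cover
// byte; the result only survives if the image is stored losslessly
void embed_lsb(std::uint8_t* pixels, std::size_t n_pixels,
               const std::uint8_t* msg, std::size_t n_bits)
{
    for(std::size_t i = 0; i < n_bits && i < n_pixels; ++i)
    {
        std::uint8_t bit = (msg[i / 8] >> (i % 8)) & 1;
        pixels[i] = (std::uint8_t)((pixels[i] & 0xFE) | bit);
    }
}

// recover the i-th hidden bit
std::uint8_t extract_lsb(const std::uint8_t* pixels, std::size_t i)
{
    return pixels[i] & 1;
}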

M. K. Shen

------------------------------

From: Mok-Kong Shen <[EMAIL PROTECTED]>
Crossposted-To: comp.security.misc,talk.politics.crypto
Subject: Re: NSA is funding stegano detection
Date: Tue, 17 Apr 2001 12:17:50 +0200



Walter Roberson wrote:
> 
> <[EMAIL PROTECTED]> wrote:
> [EMAIL PROTECTED] (Walter Roberson) wrote:
> :>One can hide data even in gross-level picture elements.
> :>"If I'm wearing a tie in the picture, add one to all numbers in the
> :>hidden message. If I'm holding a dog, negate everything. If
> :>I'm looking at my watch, please send more beer hidden in the next
> :>shipment."
> 
> :No s**t! I'm looking at my watch, OK?
> 
> Good thing for me you're holding a dog too ;-)
> 
> : But this is neither strong
> :encryption, nor steganograhy.  It is a simple (and possibly effective)
> :pre-agreed visual code.
> 
> You are right that the watch/beer example is just a pre-agreed visual
> code, but the other two examples modify the more traditional
> steganographic content, and thus must themselves be considered
> part of the "hidden information" to be decoded. I would claim this
> makes these kinds of gross visual elements themselves steganographic.
> 
> One could, more generally, create a complete algebra of picture elements,
> in which shape, color, orientation, perspective location (in
> front of or behind other items), and location in the image, could
> all encode bits or higher-level meanings.
> 
> The order of considering the elements could vary, so the same resulting
> image could encode multiple different messages depending on the current
> key (e.g., this time ignore the dog and the tie and spiral outwards from
> the hat for the other visual elements; tomorrow, read the diagonals
> starting from the dog and ignore the hat). When the number
> of significant elements in a given key is small relative to the number
> of elements that might be significant in some key or other,
> one has more freedom to compose images that are more "natural".

If one is ready to go that 'deep', I wonder whether
something similar isn't possible with texts. That is,
if one uses a certain (key) word, then some action
is applied to part of the message, and so on, quite
parallel to what you described above. On the other
hand, that seems to have already been employed by
ordinary people.

M. K. Shen

------------------------------

From: "Brian Gladman" <[EMAIL PROTECTED]>
Crossposted-To: sci.crypt.random-numbers
Subject: Re: Note on combining PRNGs with the method of Wichmann and Hill
Date: Tue, 17 Apr 2001 12:04:11 +0100

"Mok-Kong Shen" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]...

[snip]
> > If the multiplier is less than one there will be numbers close to
> > 1.0 that cannot be output.  And if the multiplier is greater than
> > 1.0, numbers above 1.0 will add to the numbers close to zero when
> > the 'mod 1.0' is taken, so numbers in this range will be twice as
> > probable as higher numbers.
> >
> > My gut feeling is that when multiple generators are used with
> > multipliers close to 1.0 the uneven distribution will be more
> > complex, but it will still be there unless the multipliers used are
> > chosen very carefully to restore uniformity (I am not sure what the
> > conditions on the multipliers would be to achieve this - e.g. if
> > the multipliers on two combined generators add up to 2.0, is the
> > result then uniform?)
>
> First I want to correct what I wrote previously: In what
> is quoted at the top, the phrase in the parentheses
> '(more exactly .....)' should be deleted, because for
> the present issue this restriction is not necessary.
>
> Please note, as I wrote in another previous post, that
> for 'exact' argumentation one of the multipliers has
> to be 1.0.

But I was commenting on your original post that did not have this
constraint.  I agree that when two generators are combined, both multipliers
have to be different from 1.0 for the non-uniformity to show up.

> Now, you said you were employing your gut feeling. Let
> me appeal to your gut feeling to consider the following:
>
> Let r1 and r2 be two uniform random variables in [0, 1).
> Consider r3 = 1.0*r1 + 1.0*r2 mod 1. I suppose you
> don't have difficulty in accepting that r3 is uniform.
> Now consider r4 = 1.0*r1 + 0.999999*r2 mod 1. According
> to what you wrote previously, r4 would be very
> non-uniform. Do you still have that feeling? How about
> replacing 0.999999 with a value that increases and becomes
> infinitely close to 1.0, though not exactly equal to it
> (we ignore the fact that one can't do that on a computer)?
> Do you think that there is sort of a 'jump' phenomenon?
> If still yes, you could do an experimental check if you
> have a numerical library like NAG available, which has
> a good standard uniform pseudo-random number generator.
> You can use two different seeds for getting r1 and r2.
> (They are, exactly speaking, not independent, but that
> shouldn't matter too much here.) You can then do a
> chi-square test to see whether you get the very (large)
> non-uniformity you conjectured. Otherwise, you have
> to do some math to see if you can prove your claim.

If you read my earlier posts you will see that I have already said that
adding two generators with multipliers of 1.0 and taking the result 'mod
1.0' gives a uniform output.  This was never in doubt.  The issue is not
this but rather what happens when two such generators are added with
multipliers that are both close to, but different from, 1.0.

For example, here is the result of 10,000,000 trials for each of 3 PRNGs,
two uniform generators, A and B, and a third C, which is (0.9 * A + 1.1 * B)
mod 1.0:

   range           gen A      gen B      gen C
[0.0:0.1)  ->     999970     999545     958872
[0.1:0.2)  ->     999207    1002146    1009123
[0.2:0.3)  ->     998672     999233    1009761
[0.3:0.4)  ->    1000023     999415    1009993
[0.4:0.5)  ->    1000984     999239    1009305
[0.5:0.6)  ->    1001546    1000377    1010543
[0.6:0.7)  ->     998803     998860    1010898
[0.7:0.8)  ->    1000290     999234    1008644
[0.8:0.9)  ->    1001218    1000259    1011547
[0.9:1.0)  ->     999287    1001692     961314

Notice that the first and last intervals for the combined generator (C)
are significantly less populated than the other eight: each holds only
about 0.96 times the count expected from a uniform generator, while the
middle eight intervals each hold about 1.01 times the expected count.

> (I am quite sure that you can't, for the reason I wrote previously.)

I provide the C++ code below so that you can check this effect out for
yourself.

   Brian Gladman

=============================================================
// My thanks to George Marsaglia for the PRNGs
//
// Generator A: mwc  in [0.0:1.0)
// Generator B: cong in [0.0:1.0)
// Generator C: (0.9 * mwc + 1.1 * cong) mod 1.0

#include <iostream>
#include <iomanip>
#include <cstdint>      // the generators rely on 32-bit wrap-around, so
                        // fixed-width types are used instead of unsigned long
using std::cout;
using std::setw;

const unsigned int trials = 10000000;

const double two_p32 = 65536.0 * 65536.0;
const double mult_1  = 0.9;
const double mult_2  = 1.1;

// NOTE: code depends on mult_1 + mult_2 being less than 3.0

// Marsaglia's multiply-with-carry generator (two 16-bit MWC sequences
// concatenated into a 32-bit result)
double mwcg(void)
{   static std::uint32_t w = 521288629, z = 362436069;
    z = 36969 * (z & 65535) + (z >> 16);
    w = 18000 * (w & 65535) + (w >> 16);
    return ((z << 16) + w) / two_p32;
}

// Marsaglia's 32-bit linear congruential generator
double cong(void)
{   static std::uint32_t jcong = 380116160;
    return (jcong = 69069 * jcong + 1234567) / two_p32;
}

// the combined generator C: (mult_1 * A + mult_2 * B) mod 1.0
double comb(void)
{   double r = mult_1 * mwcg() + mult_2 * cong();
    return (r >= 1.0 ? r >= 2.0 ? r - 2.0 : r - 1.0 : r);
}

int main(void)
{   unsigned long   c_mwcg[10], c_cong[10], c_comb[10];
    unsigned long   s_mwcg = 0, s_cong = 0, s_comb = 0;
    unsigned int    i;

    for(i = 0; i < 10; ++i) c_mwcg[i] = c_cong[i] = c_comb[i] = 0;

    // count how many outputs of each generator fall in each tenth of [0,1)
    for(i = 0; i < trials; ++i) c_mwcg[(int)(10.0 * mwcg())]++;
    for(i = 0; i < trials; ++i) c_cong[(int)(10.0 * cong())]++;
    for(i = 0; i < trials; ++i) c_comb[(int)(10.0 * comb())]++;

    for(i = 0; i < 10; ++i)
    {
        cout << "\n// [0." << i << (i < 9 ? ":0." : ":1.0) -> ");
        if(i < 9) cout << i + 1 << ") -> ";
        cout << setw(9) << c_mwcg[i];
        cout << setw(9) << c_cong[i];
        cout << setw(9) << c_comb[i];
        s_mwcg += c_mwcg[i];
        s_cong += c_cong[i];
        s_comb += c_comb[i];
    }

    // sanity check: every sample must have been counted exactly once
    if(s_mwcg != trials || s_cong != trials || s_comb != trials)
        cout << "\n*** error ***";

    std::cout << "\n\n";
    return 0;
}
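
(To quantify the deviation, one can compute a chi-square statistic
over the ten bins; a minimal sketch, not part of the program above,
using the generator C counts from the run reported earlier. With 9
degrees of freedom, values above about 21.7 reject uniformity at the
1% significance level.)

#include <cstdio>

// chi-square statistic for ten equiprobable bins
double chi_square(const unsigned long count[10], double expected)
{   double sum = 0.0;
    for(int i = 0; i < 10; ++i)
    {   double d = count[i] - expected;
        sum += d * d / expected;
    }
    return sum;
}

int main(void)
{   // bin counts for generator C from the run reported above
    unsigned long c[10] = {  958872, 1009123, 1009761, 1009993, 1009305,
                            1010543, 1010898, 1008644, 1011547,  961314 };
    std::printf("chi-square = %.1f\n", chi_square(c, 1000000.0));
    return 0;
}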




------------------------------

From: Simon Josefsson <[EMAIL PROTECTED]>
Subject: Re: Rabin-Miller prime testing
Date: 17 Apr 2001 13:07:53 +0200

Bryan Olson <[EMAIL PROTECTED]> writes:

> 2. Do one base-2 Miller-Rabin (or Fermat) test.
> 
> 3. Do several random-base Miller-Rabin tests, or a Lucas 
>    test.
> 
> In practice, step 2 dominates the run time, since it usually 
> rejects many candidates.
> 
> Alternately, we can generate provable primes in not much 
> more time, but there's no practical advantage.

Depends on the threat model -- if you're verifying someone else's
numbers for primality, you can be deceived if you're only using
Miller-Rabin or similar methods.

Even when generating keys yourself, prime "witnesses" are useful if you
don't completely trust the application or, especially, the randomness
source.  IMHO applications should include a prime witness with primes
they generate, if for no other reason than to prove the integrity of
the application and randomness source.
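
(For concreteness, a minimal C++ sketch of the Miller-Rabin structure
described above for 64-bit candidates; it assumes a gcc/clang-style
unsigned __int128 for the modular arithmetic, and rand() stands in
for a proper random source.)

#include <cstdint>
#include <cstdlib>

// one Miller-Rabin round to base a: returns false if n is certainly
// composite, true if n is a probable prime to this base
bool miller_rabin_round(std::uint64_t n, std::uint64_t a)
{
    std::uint64_t d = n - 1;
    unsigned s = 0;
    while((d & 1) == 0) { d >>= 1; ++s; }    // n - 1 = d * 2^s, d odd

    unsigned __int128 x = 1, b = a % n;      // a^d mod n, square-and-multiply
    for(std::uint64_t e = d; e; e >>= 1)
    {   if(e & 1) x = x * b % n;
        b = b * b % n;
    }
    if(x == 1 || x == n - 1) return true;
    for(unsigned r = 1; r < s; ++r)
    {   x = x * x % n;
        if(x == n - 1) return true;
    }
    return false;
}

// the pipeline described above: reject even n, then a fixed base-2
// round (which rejects most composites cheaply), then random bases
bool probably_prime(std::uint64_t n, int rounds)
{
    if(n < 4) return n == 2 || n == 3;
    if((n & 1) == 0) return false;
    if(!miller_rabin_round(n, 2)) return false;
    for(int i = 0; i < rounds; ++i)
    {   std::uint64_t a = 2 + std::rand() % (n - 3);   // base in [2, n-2]
        if(!miller_rabin_round(n, a)) return false;
    }
    return true;
}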

------------------------------

From: [EMAIL PROTECTED] (David Formosa (aka ? the Platypus))
Subject: Stream combination without the use of dynamic tables.
Reply-To: dformosa@[202.7.69.25]
Date: Tue, 17 Apr 2001 11:01:43 GMT

I was thinking on the train and I thought up a simple algorithm that
allows you to combine two (or more) streams with slightly better
avalanche properties than xor.  At the moment its avalanche property
is quite low, but I think with some tweaking I can get it so that a
bit change affects at least two bits in the next iteration.

The idea is to help mask any statistical patterns in the two streams,
which could be a plaintext and a cryptostream, or two cryptostreams.

Given two streams X_n and Y_n, and an S_n that has been initialized
with an IV (a direct C++ rendering is sketched below):

  C_n     <- X_n xor Y_n xor S_n
  S_(n+1) <- (X_n and S_n) or (Y_n and bitwisenot S_n)
  rotate_left S_(n+1)
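
(A minimal sketch of the three lines above, operating on bytes; the
word size and the rotation amount of 1 are my own choices, since the
post leaves them open.)

#include <cstdint>

// the combiner above on bytes: S selects bits from X where S is 1 and
// from Y where S is 0 (a bitwise multiplexer), then S is rotated
struct Combiner
{
    std::uint8_t s;                                    // initialize from an IV

    std::uint8_t step(std::uint8_t x, std::uint8_t y)
    {
        std::uint8_t c = x ^ y ^ s;                    // C_n = X_n xor Y_n xor S_n
        std::uint8_t next = (x & s) | (y & (std::uint8_t)~s);
        s = (std::uint8_t)((next << 1) | (next >> 7)); // rotate left by 1
        return c;
    }
};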




-- 
Please excuse my spelling as I suffer from agraphia. See
http://dformosa.zeta.org.au/~dformosa/Spelling.html to find out more.
Free the Memes.

------------------------------

From: [EMAIL PROTECTED] (John Savard)
Subject: Re: HOW THE HELL DO I FIND "D"?!?!
Date: Tue, 17 Apr 2001 11:12:27 GMT

d is the reciprocal of e modulo (p-1)(q-1), and you need to know p and
q to find (p-1)(q-1); it isn't a simple function of pq.
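
(In code, d comes from the extended Euclidean algorithm; a minimal
C++ sketch with a small worked example.)

#include <cstdint>

// extended Euclid: returns d with (e * d) mod phi == 1, or 0 when
// gcd(e, phi) != 1 and no inverse exists
std::int64_t mod_inverse(std::int64_t e, std::int64_t phi)
{
    std::int64_t r0 = phi, r1 = e, t0 = 0, t1 = 1;
    while(r1 != 0)
    {
        std::int64_t q = r0 / r1;
        std::int64_t r2 = r0 - q * r1;  r0 = r1;  r1 = r2;
        std::int64_t t2 = t0 - q * t1;  t0 = t1;  t1 = t2;
    }
    if(r0 != 1) return 0;                 // e not invertible mod phi
    return t0 < 0 ? t0 + phi : t0;
}

// e.g. p = 61, q = 53, e = 17: phi = 60 * 52 = 3120, and
// mod_inverse(17, 3120) == 2753, since 17 * 2753 = 46801 = 15*3120 + 1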

John Savard
http://home.ecn.ab.ca/~jsavard/crypto.htm

------------------------------

From: [EMAIL PROTECTED] (John Savard)
Subject: Re: Lorentz attractor...
Date: Tue, 17 Apr 2001 11:16:38 GMT

On Sun, 15 Apr 2001 20:00:01 GMT, "ClaudeVMS" <[EMAIL PROTECTED]>
wrote, in part:

>I am looking for a source of objective information on using
>attractors in a cryptographic algorithm.

Basically, the problem is that the formulas for attractors really
aren't more chaotic than those used in insecure PRNGs. Using
constructs derived from chaos theory is therefore not believed to
add anything to security; secure ciphers of this type could be
built, but they would be very inefficient compared to more
conventional ones.

John Savard
http://home.ecn.ab.ca/~jsavard/crypto.htm

------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list by posting to sci.crypt.

End of Cryptography-Digest Digest
******************************
