Cryptography-Digest Digest #199, Volume #14      Sat, 21 Apr 01 11:13:00 EDT

Contents:
  Re: Censorship Threat at Information Hiding Workshop (Mok-Kong Shen)
  Re: Note on combining PRNGs with the method of Wichmann and Hill (Mok-Kong Shen)
  Re: newbie: cryptanalysis of pseudo-random OTP? (Leonard R. Budney)
  Re: Note on combining PRNGs with the method of Wichmann and Hill (Mok-Kong Shen)
  Re: Can this be done? (John Wasser)
  Re: Cryptanalysis Question: Determing The Algorithm? (Leonard R. Budney)
  Better block cipher pre/post whiten step ("Tom St Denis")
  Re: The Extended Euclidian Algorithm (The original, not modular!) ("Alexis Machado")
  Re: newbie: cryptanalysis of pseudo-random OTP? ("Osugi Sakae")
  Re: "UNCOBER" = Universal Code Breaker (Joe H Acker)
  Re: "UNCOBER" = Universal Code Breaker (Joe H Acker)

----------------------------------------------------------------------------

From: Mok-Kong Shen <[EMAIL PROTECTED]>
Subject: Re: Censorship Threat at Information Hiding Workshop
Date: Sat, 21 Apr 2001 14:22:55 +0200



Jonas wrote:
> 
> It seems the US record industry tries to prevent publication
> of an academic paper next week at the Information Hiding
> Workshop in Pittsburgh:
> 
>   http://cryptome.org/sdmi-attack.htm

From this one couldn't help wondering whether there aren't 
analogous situations where instead of the phrase 
'commercial interest' there is 'national (security) interest'. 
As is well known, decades ago there were ideas around asking 
editors of journals to submit crypto-relevant manuscripts 
to the authorities for review.

M. K. Shen

------------------------------

From: Mok-Kong Shen <[EMAIL PROTECTED]>
Crossposted-To: sci.crypt.random-numbers
Subject: Re: Note on combining PRNGs with the method of Wichmann and Hill
Date: Sat, 21 Apr 2001 14:27:28 +0200



Brian Gladman wrote:
> 
> "Mok-Kong Shen" <[EMAIL PROTECTED]> wrote:
> 
> [snip]
> > of the general readers, I like to request you to present your results in a
> terse manner as follows:
> [snip]
> 
> I only provided the chi-squared values in order to show that this was not
> a sensible thing to do.  In consequence I do not intend to do any more work
> on this because I consider it unnecessary to apply such techniques to data
> that can be seen by simple inspection to be distributed very differently to
> that which is expected.  However, there is nothing to stop you doing further
> work on this if you wish since I have already provided all the basic
> information that is needed.
> 
> But a few minutes of high school maths is all that is necessary to show that
> with:
> 
> (1) two random variables (A) and (B) that are uniformly distributed on
> [0.0:1.0);
> (2) two numbers, a and b, both of which are close to, but not equal to, 1.0;
> (3) a third variable (C) =  {a * (A) + b * (B)} mod 1.0;
> 
> the random variable (C) is not uniformly distributed on [0.0:1.0).
> 
> You originally disputed this but it seems, later, that you might have
> recognised that the distribution would only be uniform if at least one of
> the multipliers was 1.0.
> 
> Do you still dispute my claim that the variable (C), as defined above, has a
> non-uniform distribution on [0.0:1.0)?

Do you claim that high-school math could exempt you from the
established practice of statistics? You have already
used the chi-square test, but in a way that I would say is
totally unacceptable in practice. What do your previous
results show if you use, instead of confidence levels
of 0.50 and 0.75, the value 0.95? Are the generators
you used sufficiently random at all?
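
For anyone who wants to check the claim about (C) empirically, a minimal
simulation is enough (a sketch of mine, not code from this thread; it
assumes Python's random.random() as the uniform source and uses 0.9 for
both multipliers):

import random
from collections import Counter

# Histogram of C = (a*A + b*B) mod 1.0 for multipliers near, but not
# equal to, 1.0.  The bin counts show whether C is uniform on [0.0:1.0).
a, b = 0.9, 0.9                      # illustrative values
N, BINS = 1_000_000, 10

counts = Counter()
for _ in range(N):
    c = (a * random.random() + b * random.random()) % 1.0
    counts[int(c * BINS)] += 1

expected = N / BINS
chi2 = sum((counts[i] - expected) ** 2 / expected for i in range(BINS))
print([counts[i] for i in range(BINS)])
print("chi-square, 9 degrees of freedom:", round(chi2, 1))

With these multipliers the last two bins come out visibly heavier than
the others, which is the non-uniformity under dispute.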

M. K. Shen

------------------------------

Subject: Re: newbie: cryptanalysis of pseudo-random OTP?
From: [EMAIL PROTECTED] (Leonard R. Budney)
Date: 21 Apr 2001 08:49:40 -0400

"Osugi Sakae" <[EMAIL PROTECTED]> writes:

> In article <[EMAIL PROTECTED]>, "Leonard R. Budney"
> <[EMAIL PROTECTED]> wrote:
> 
>> Your analogy with a Caesar cipher is a good one, though. If the key
>> sequence is sufficiently non-random, then you could read a message by
>> "guessing" the key bits using a similar bias to the cipher. Guesses
>> can be checked against a dictionary, so the entire crack can be
>> automated if the PRNG is lousy enough.
> 
> I don't understand "the key bits using a similar bias to the cipher".
> What does that mean?

I mean this. Suppose your PRNG was incredibly rotten, and emitted "1" 75%
of the time. Then I can emit key streams where "1" occurs 75% of the time,
and by pure coincidence my keystream will agree with yours about 62.5%
of the time (.75 * .75 + .25 * .25 = .625).
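
A few lines of code make that figure concrete (just a sketch; both biased
streams are simulated with Python's random module):

import random

N = 100_000
# Two independent keystreams, each emitting "1" 75% of the time.
yours = [1 if random.random() < 0.75 else 0 for _ in range(N)]
mine  = [1 if random.random() < 0.75 else 0 for _ in range(N)]

agree = sum(y == m for y, m in zip(yours, mine)) / N
print("agreement:", agree)    # close to .75*.75 + .25*.25 = .625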

By doing that several times, I can produce a bunch of "guesses" about
your plain text, each of which is about 62.5% correct. But messages
generally contain such high redundancy that knowing 62.5% is probably
enough to deduce the rest of the message, especially after multiple
guesses.

This is similar to the "coincidence counting" method for analyzing a
Caesar cipher or a polyalphabetic cipher. Coincidence counting means
putting one copy of the message next to a second copy, then sliding
the second copy by one byte, two bytes, etc., and counting coincidences
each time. When two same-language texts are side by side, enciphered
with the same substitution cipher, the rate of coincidences jumps way
up--for exactly the same reason outlined above.
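
In code, the sliding comparison looks roughly like this (a sketch;
ciphertext is assumed to be a Python bytes object):

def coincidence_counts(ciphertext, max_shift=20):
    # For each shift, count the positions where the text agrees with a
    # copy of itself slid along by that many bytes.
    counts = {}
    for shift in range(1, max_shift + 1):
        pairs = zip(ciphertext, ciphertext[shift:])
        counts[shift] = sum(a == b for a, b in pairs)
    return counts

For a polyalphabetic cipher the counts jump at shifts that are multiples
of the key period, for exactly the reason given above.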

Using similar reasoning, there is an elementary test for such a crummy
PRNG: just compress the enciphered message. If the PRNG is heavily
biased, then the redundancies in the original message will show through,
and the ciphertext will compress somewhat. With a true OTP, the data
looks totally random, and will not compress.
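
The compression test is only a couple of lines (a sketch, again assuming
a bytes object):

import zlib

def looks_biased(ciphertext):
    # Ciphertext made with a good keystream should not compress; with a
    # heavily biased keystream, plaintext redundancy shows through.
    return len(zlib.compress(ciphertext, 9)) < len(ciphertext)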

Which offers two more points to ponder. (1) Compressing before encrypting
helps frustrate such a childish attack, by removing redundancy beforehand.
And (2) a PRNG must at least pass some minimal statistical tests, like the
Diehard tests, before being used, to catch such trivial weaknesses.

As an exercise, I would encourage you to play with some Gutenberg text and
see that you can at least defeat the trivial attack above. Your suggested
method isn't a bad start, or you could convert all letters before M to 1,
and all letters after M to 0.

The result will still contain bias, but there are some elementary ways of
reducing bias. The simplest is Von Neumann's: look at pairs of bits. Throw
away pairs where the bits are equal. Then emit 1 if the pair is 01,
and 0 if the pair is 10. It's an elementary exercise to prove that if
the input stream consists of independent but biased bits, then the output
stream is random and the probabilities of 0 and 1 are exactly 50%.
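
Von Neumann's extractor is only a few lines (a sketch over a list of 0/1
values):

def von_neumann(bits):
    # Examine non-overlapping pairs; drop 00 and 11, map 01 -> 1, 10 -> 0.
    out = []
    for i in range(0, len(bits) - 1, 2):
        a, b = bits[i], bits[i + 1]
        if a != b:
            out.append(1 if (a, b) == (0, 1) else 0)
    return out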

You might also simply try Von Neumann's trick on the raw Gutenberg
text. Run Diehard over the results of both methods. See if the keystream
compresses. If you detect a strong bias, use the keystream anyway and
then "crack" an ASCII-coded message which was encrypted against it.
(Use /dev/urandom for your guess-keystream, and introduce a bias to
match your original keystream.)

Have fun!

Len.


-- 
Everything you say here is wrong.
                                -- Dan Bernstein

------------------------------

From: Mok-Kong Shen <[EMAIL PROTECTED]>
Crossposted-To: sci.crypt.random-numbers
Subject: Re: Note on combining PRNGs with the method of Wichmann and Hill
Date: Sat, 21 Apr 2001 14:49:41 +0200



Mok-Kong Shen wrote:
> 
> Brian Gladman wrote:
> >
> > "Mok-Kong Shen" <[EMAIL PROTECTED]> wrote:
> >
> > [snip]
> > > of the general readers, I like to request you to present your results in a
> > terse manner as follows:
> > [snip]
> >
> > I only provided the chi-squared values in order to show that this was not
> > a sensible thing to do.  In consequence I do not intend to do any more work
> > on this because I consider it unnecessary to apply such techniques to data
> > that can be seen by simple inspection to be distributed very differently to
> > that which is expected.  However, there is nothing to stop you doing further
> > work on this if you wish since I have already provided all the basic
> > information that is needed.
> >
> > But a few minutes of high school maths is all that is necessary to show that
> > with:
> >
> > (1) two random variables (A) and (B) that are uniformly distributed on
> > [0.0:1.0);
> > (2) two numbers, a and b, both of which are close to, but not equal to, 1.0;
> > (3) a third variable (C) =  {a * (A) + b * (B)} mod 1.0;
> >
> > the random variable (C) is not uniformly distributed on [0.0:1.0).
> >
> > You originally disputed this but it seems, later, that you might have
> > recognised that the distribution would only be uniform if at least one of
> > the multipliers was 1.0.
> >
> > Do you still dispute my claim that the variable (C), as defined above, has a
> > non-uniform distribution on [0.0:1.0)?
> 
> Do you claim that high-school math could exempt you from the
> established practice of statistics? You have already
> used the chi-square test, but in a way that I would say is
> totally unacceptable in practice. What do your previous
> results show if you use, instead of confidence levels
> of 0.50 and 0.75, the value 0.95? Are the generators
> you used sufficiently random at all?

Addendum: I neglected to answer your last points. In
the follow-up requesting you to repeat the experiments,
I suggested that you use c1 and c2 (both different
from 1.0 in general) to investigate whether these
lead to a result that is non-uniform, as you have
repeatedly claimed (i.e. I didn't suggest that you use 
1.0 and c, but two values both different from 1.0). Does 
that answer your points?

M. K. Shen

------------------------------

Subject: Re: Can this be done?
From: John Wasser <[EMAIL PROTECTED]>
Date: Sat, 21 Apr 2001 13:21:35 GMT

In article <3ae12b5e$0$15023$[EMAIL PROTECTED]>, Mark Lomas
<[EMAIL PROTECTED]> wrote:

> If Bob receives two messages, one from Alice and one from Eve, each
> containing a different public key, followed by similar messages from both
> Alice and Eve all of which are signed as you suggest, how can he tell
> whether Eve is duplicating Alice's messages or Alice duplicating Eve's?

He can't.  That was not a requirement.  The only requirement was that
Bob be able to determine that a set of messages all came from a source
that knew some unspecified secret.  There was no requirement that Bob
be able to tell who the source was.  If Carl intercepts all messages to
Bob, changes the text, and creates his own hash, Bob will still be able
to tell that all the messages came from the same holder(s) of some
unspecified secret (Carl's private key).

> You might also consider the case where Alice and Eve send identical
> messages (i.e. both claim to know the same private key).  Apart from
> looking at the times at which each message is delivered, is there any
> way to determine which of them really knows the key?

Again, not a requirement.  Bob can tell that whoever originally created
the hash knew the unspecified secret.  If Eve doesn't know the secret
she cannot create messages, she can only repeat what Alice has sent.


> "John Wasser" <[EMAIL PROTECTED]> wrote in message
> news:[EMAIL PROTECTED]...
> > [[ This message was both posted and mailed. ]]
> >
> > Alice sends a message containing her public key (or she can add it to
> > one or more of the messages).  Alice hashes each message and encrypts
> > the hash with her private key.
> >
> > Bob decrypts the hash with the known  public key to find that the
> > decrypted hash matches the message.  This proves that all messages were
> > sent by someone who knows the private key that matches the public key.
> >
> > The public key
> >
> > In article <[EMAIL PROTECTED]>,
> > Julian Morrison <[EMAIL PROTECTED]> wrote:
> >
> > > Here's a scenario:
> > >
> > > Alice sends messages to Bob. The messages are sent in clear, but Alice
> > > includes a "check hash" with each message that allows Bob to ascertain
> > > that (1) the message matches its hash, and (2) all the messages were
> > > generated by someone who knew some unspecified secret, said secret being
> > > provably the same for all the messages.
> > >
> > > HOWEVER, Bob does not know this secret, he and Alice do not exchange any
> > > information (the flow of data is solely from Alice to Bob), nor can he
> > > nor anyone else listening in determine this secret. And, no-one without
> > > the secret can forge new hashes that falsely seem to have been created by
> > > Alice.
> > >
> > > The result being: all the messages are proven to come from the same
> > > place, despite that place remaining anonymous.
> > >
> > > Can this be done? If so, how?

------------------------------

Subject: Re: Cryptanalysis Question: Determing The Algorithm?
From: [EMAIL PROTECTED] (Leonard R. Budney)
Date: 21 Apr 2001 09:29:37 -0400

I can only give general answers; somebody else will have to give the 21st
century answer.

[EMAIL PROTECTED] (pjf) writes:
> How does one determine what algorithm was used...

Generally, an intercept lives in a context. The NSA doesn't usually find
encrypted telegrams lying on the sidewalk, and ask, "I wonder what cipher
this might be?" Instead, we generally know in advance that we're dealing
with a German intercept, which went from a U-boat to naval high command,
at 6am yesterday. That's a *pile* of information to start with! We already
know what sort of cipher was probably used, and have a large collection of
intercept traffic made with the same cipher. So one short answer is,
"traffic analysis".

However, there is another answer which works for newbies who post
"decrypt this!" messages to sci.crypt. Assuming they've posted a large
enough message, you should do a quick "coincidence count" against the
ciphertext. You'll catch many ciphers immediately, because they often
turn out to be Caesar ciphers or polyalphabetic substitutions.

The NSA no doubt has a bestiary of bad ciphers, including all hand
ciphers, where the effort of breaking is so trivial that they could
simply run a random message through all of them by brute-force, and
they probably succeed most of the time. (With the advent of PGP, GPG,
etc., hopefully that is changing.)

Len.


-- 
> you dislike Vixie's API

``The existing interface is crap.'' ---Paul Vixie

------------------------------

From: "Tom St Denis" <[EMAIL PROTECTED]>
Subject: Better block cipher pre/post whiten step
Date: Sat, 21 Apr 2001 13:43:38 GMT

Here's a neat idea.  Instead of just XORing a key against the block
before/after the main core of the cipher (as in most modern ciphers), why not
use a pair-wise decorrelated function?  Unlike COCONUT98, where the function
was put in the middle of the cipher, here you put it at the ends, i.e.

(L,R) = plaintext

L = (K0 * L) + K1
R = (K2 * R) + K3
main cipher
L = (K4 * L) + K5
R = (K6 * R) + K7

ciphertext = (L,R)

Where the * and + are GF(2^W) operations (we assume that K0,K2,K4,K6 are not
zero).  To decrypt we simply find the inverses of K0,K2,K4,K6 and do the
operations in reverse.
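
To make the field arithmetic unambiguous, here is a toy version with W=8
and the AES polynomial x^8+x^4+x^3+x+1 (only an illustration of the idea;
this scheme as described would use a degree-32 irreducible polynomial, and
the key words below are arbitrary values):

POLY = 0x11B          # x^8 + x^4 + x^3 + x + 1, W = 8

def gf_mul(x, y, w=8, poly=POLY):
    # carry-less multiply with reduction, i.e. "*" in GF(2^W)
    r = 0
    while y:
        if y & 1:
            r ^= x
        y >>= 1
        x <<= 1
        if x >> w:
            x ^= poly
    return r

def gf_inv(x, w=8):
    # brute force is fine for the 2^8-element toy field
    return next(y for y in range(1, 1 << w) if gf_mul(x, y) == 1)

def whiten(L, R, K):          # K = (K0, K1, K2, K3); K0, K2 nonzero
    return gf_mul(K[0], L) ^ K[1], gf_mul(K[2], R) ^ K[3]

def unwhiten(L, R, K):        # invert: strip K1/K3, multiply by K0^-1, K2^-1
    return gf_mul(gf_inv(K[0]), L ^ K[1]), gf_mul(gf_inv(K[2]), R ^ K[3])

L, R = 0x3A, 0xC4
K = (0x57, 0x13, 0x8E, 0x61)
assert unwhiten(*whiten(L, R, K), K) == (L, R)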

The benefit of this is that it makes the core cipher immune to first-order
differential and linear attacks since you don't know the input or output
differences that hit the core of the cipher.  As long as the core doesn't
commute with the whitening steps this should be secure.

If speed is critical you can precompute the GF mults too (a 32x32 mult takes
4 KB of table memory).

Comments please.
--
Tom St Denis
---
http://tomstdenis.home.dhs.org



------------------------------

From: "Alexis Machado" <[EMAIL PROTECTED]>
Crossposted-To: alt.sources.crypto,sci.math
Subject: Re: The Extended Euclidian Algorithm (The original, not modular!)
Date: Sat, 21 Apr 2001 11:26:12 -0300
Reply-To: "Alexis Machado" <[EMAIL PROTECTED]>


"The Death" <[EMAIL PROTECTED]> wrote in message
news:9bp5i0$etb$[EMAIL PROTECTED]...
> Where can i find the extended euclidian algorithm to find x and y (given a
> and b) such that xa + yb = GCD(a,b) ???
>
> 10x in advanced,
>                    The Death
>

Hi,

I could not understand what you meant by "The original, not modular!".

I hope the following recursive function can help you.

(******************************************)
type Vector = array [0..2] of Integer;

{ Called as G((1,0,a), (0,1,b)).  Each triple (x,y,v) keeps the  }
{ invariant x*a + y*b = v, so the result R satisfies             }
{ R[0]*a + R[1]*b = R[2] = GCD(a,b).                             }
function G (P, Q: Vector): Vector;
var
   d, i: Integer;
begin
   if P[2] = 0 then
       G := Q
   else begin
       d := Q[2] div P[2];
       for i := 0 to 2 do        { Q := Q - d*P, component-wise }
           Q[i] := Q[i] - d * P[i];
       G := G (Q, P)
   end
end;
(******************************************)

Calling the function:

   R := G ( (1, 0, a), (0, 1, b) )

The result satisfies

   R[0] * a + R[1] * b = R[2] = GCD (a, b)

so when GCD(a,b) = 1, R[0] is the inverse of a modulo b and R[1] is
the inverse of b modulo a.
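
For anyone who wants to check it quickly, here is a direct transcription
into Python (my sketch, with a small worked example):

def G(P, Q):
    # P and Q are (x, y, v) triples maintaining x*a + y*b = v
    if P[2] == 0:
        return Q
    d = Q[2] // P[2]
    Q = tuple(q - d * p for q, p in zip(Q, P))
    return G(Q, P)

a, b = 240, 46
R = G((1, 0, a), (0, 1, b))
print(R)                          # (-9, 47, 2)
assert R[0] * a + R[1] * b == R[2] == 2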

---
Alexis



------------------------------

From: "Osugi Sakae" <[EMAIL PROTECTED]>
Subject: Re: newbie: cryptanalysis of pseudo-random OTP?
Date: Sat, 21 Apr 2001 23:47:28 +0900
Reply-To: [EMAIL PROTECTED]

In article <[EMAIL PROTECTED]>, "Leonard R. Budney"
<[EMAIL PROTECTED]> wrote:


> "Osugi Sakae" <[EMAIL PROTECTED]> writes:

>> I don't understand "the key bits using a similar bias to the cipher".
>> What does that mean?
> 
> I mean this. Suppose your PRNG was incredibly rotten, and emitted "1"
> 75% of the time. Then I can emit key streams where "1" occurs 75% of the
> time, and by pure coincidence my keystream will agree with yours about
> 62.5% of the time (.75 * .75 + .25 * .25 = .625).

[snip of what seems to be an excellent explanation]

> Have fun!
> 
> Len.

Thank you, this is easy to understand and very interesting. I shall look into
it.

--
Osugi Sakae


====== Posted via Newsfeeds.Com, Uncensored Usenet News ======
http://www.newsfeeds.com - The #1 Newsgroup Service in the World!
=======  Over 80,000 Newsgroups = 16 Different Servers! ======

------------------------------

From: [EMAIL PROTECTED] (Joe H Acker)
Subject: Re: "UNCOBER" = Universal Code Breaker
Date: Sat, 21 Apr 2001 16:53:18 +0200

Joseph Ashwood <[EMAIL PROTECTED]> wrote:

> "Joe H Acker" <[EMAIL PROTECTED]> wrote in message
> news:[EMAIL PROTECTED]...
> > If the random source is truly random, it doesn't do any harm when some
> > of its output is discarded, except for performance slowdown. So yes,
> > tests are useful to (a) continually test whether the hardware has failed
> > or does appear to work correctly, and (b) to prevent against very very
> > bad luck when a true random source happens to output the complete volume
> > of Shakespeare's Macbeth (very unlikely)
> 
> 
> Actually it can do a great deal of harm. A short, rather extreme
> demonstration:
> take a perfect random number generator that generates binary
> throw away all the output bits that are 1
> Is the sequence predictable?
> Yes it is (it's all 0). So there is an extreme where removing bits from even
> a perfect random number generator weakens the output, so the statement is
> not completely correct, although when the tossing is done in certain ways it
> will not harm the system.

I don't quite understand your statement. I wasn't claiming that
discarding sequences in general can't do any harm, I was claiming that
discarding sequences that fail basic tests for randomness can't do any
harm. Take your own example: suppose the tRNG starts to output all
zeroes. After some number of zeroes in sequence (a few hundred or
thousand), the probability that this is actually random is very low
(though of course it can be), while the probability that the tRNG
hardware has failed is significantly high. So signalling failure, or
waiting until some (large) blocks of output pass the basic tests again
and discarding the all-zero sequence, cannot do any harm except a
performance slowdown. That's because the subsequent output cannot
depend on the previous output that has been discarded (otherwise it
wouldn't be a tRNG).
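
As a rough sketch of what "discard blocks that fail a basic test" can look
like (my illustration only: trng_read is a hypothetical read function, and
the +/-212 acceptance band is just an approximate 3-sigma bound on 20000
bits, not a figure taken from any particular standard):

def monobit_ok(block, tolerance=212):
    # Count the 1 bits in a 2500-byte (20000-bit) block; a healthy
    # source should be near 10000.
    ones = sum(bin(byte).count("1") for byte in block)
    return abs(ones - 10000) <= tolerance

def sound_blocks(trng_read, max_failures=8):
    # Yield only blocks that pass the test.  Discarding a failed block
    # cannot hurt: a tRNG's later output does not depend on it.
    # Persistent failures mean broken hardware -- stop rather than
    # fall back to something less secure.
    failures = 0
    while True:
        block = trng_read(2500)
        if monobit_ok(block):
            failures = 0
            yield block
        else:
            failures += 1
            if failures >= max_failures:
                raise RuntimeError("tRNG appears to have failed")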

When the basic tests indicate failure for a longer period without the
source recovering, it's important that the whole system does not
switch to a less secure default mode, but instead indicates overall
failure and refuses to work.

Discarding whole random sequences can *never* do any harm to a tRNG. You
were talking about *filtering* them---changing them heuristically, which
is a completely different issue and indeed not correct.

Regards,

Erich

------------------------------

From: [EMAIL PROTECTED] (Joe H Acker)
Subject: Re: "UNCOBER" = Universal Code Breaker
Date: Sat, 21 Apr 2001 17:01:37 +0200

Joseph Ashwood <[EMAIL PROTECTED]> wrote:

> "Joe H Acker" <[EMAIL PROTECTED]> wrote in message
> news:[EMAIL PROTECTED]...
> > If the random source is truly random, it doesn't do any harm when some
> > of its output is discarded, except for performance slowdown. So yes,
> tests are useful to (a) continually test whether the hardware has failed
> > or does appear to work correctly, and (b) to prevent against very very
> > bad luck when a true random source happens to output the complete volume
> > of Shakespeare's Macbeth (very unlikely)
> 
> 
> Actually it can do a great deal of harm. A short, rather extreme
> demonstration:
> take a perfect random number generator that generates binary
> throw away all the output bits that are 1
> Is the sequence predictable?

You may not filter the sequence heuristically. I was talking about
testing large sequences and, when they fail the test, discarding them
*completely*. Discarding an output sequence of a tRNG completely can
never do any harm, except a performance slowdown, given that the
discarded sequence is large enough and not just 1 bit in length. That's
provable.

Regards,

Erich

------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list by posting to sci.crypt.

End of Cryptography-Digest Digest
******************************
