Cryptography-Digest Digest #360, Volume #9        Thu, 8 Apr 99 17:13:03 EDT

Contents:
  Re: True Randomness & The Law Of Large Numbers ([EMAIL PROTECTED])
  Re: True Randomness & The Law Of Large Numbers ("Trevor Jackson, III")
  Re: True Randomness & The Law Of Large Numbers ("Trevor Jackson, III")
  Re: True Randomness & The Law Of Large Numbers (R. Knauer)
  Re: Douglas A. Gwyn : True Jerk (R. Knauer)
  Re: Security of LFSR on noisy channel. (Chris Monico)
  Re: Comments to DOJ re NICS (Jim Patrick)
  Re: RC4 test (Jim Gillogly)
  Re: Security of LFSR on noisy channel. (Chris Monico)
  No QUADIBLOC 99 just yet... (John Savard)
  Re: Brute force encryption cracker (Jim Dunnett)
  Re: Stupid Question (Jim Gillogly)
  Re: True Randomness & The Law Of Large Numbers (John Briggs)

----------------------------------------------------------------------------

From: [EMAIL PROTECTED]
Subject: Re: True Randomness & The Law Of Large Numbers
Date: Thu, 08 Apr 1999 08:01:09 GMT

[EMAIL PROTECTED] wrote:
> [EMAIL PROTECTED] wrote:
>
> >That is _not_ how you defined set #1. You defined #1 as the
> >set shown with reasonable certainty to be non-random.
>
> I am taking a literal view of the efficacy of those tests.

What view you happen to be taking this minute is not the
point. You had a definition of #1, and then you claimed #1
meant something else.

> They are
> either reasonably certain to disclose non-randomness or they are
> worthless.

> I consider any tests that cannot detect a property with reasonable
> certainty to be worthless. What good is it to find out that your wife
> is half pregnant?

Most of us have granted that statistical tests are not
reasonably certain to disclose non-randomness.  Your claim
of worthlessness does not follow.

> >You seem to be confusing the distinction between #1 and #2
> >with the distinction between #a and #b.  The relations we
> >can assert with confidence are that #1 is a subset of #a
> >and #b is a subset of #2.
>
> I expect the tests to be able to distinguish between #a and #b, or
> otherwise it is worthless. Sets #1 and #2 as you have defined them are
> meaningless once you define #a and #b.
>
> Up to now we have not had an absolute standard for true random number
> generation, but soon we will. It will be a quantum computer programmed
> to calculate true random numbers.
>
> >Are you _trying_ not to understand?
>
> I am a Devil's Advocate, not a student. It is not my role to express
> understanding here, only to challenge claims made by others.

And you've done a stunningly poor job of it.  Your claims,
at least where you disagreed with most others, now stand
refuted.  All you have to cling to is your opinion that
statistical hypothesis tests are worthless.

> I may
> understand a hell of a lot more than you are giving me credit for. But
> it is not really important for you to know my level of understanding
> - it is only important for you to answer my challenges.

Which of course I've done.  You have refused to answer even
the simplest challenges.  Three times already I've asked you
to quote where FIPS 140 says what you claimed it says.  All
three times you snip and merely repeat your own opinions on
how things are.

> They admit of tests for the *appearance* of randomness, but not tests
> for what they call true randomness, which is a quantum mechanical
> concept, not a mathematical concept.

_You_ defined true randomness as a mathematical concept.


> >The tests are reliable in
> >that they do not place truly random generators (generators in
> >set #b) in set #1.
>
> You don't know even that. If a set of tests are worthless, their
> results will be, uh... random.

That's an error on your part.  You said they were worthless
if we can't be reasonably certain they detect non-randomness.
Now you say that since they're worthless, when they do detect
non-randomness we can't be reasonably certain that they are
correct.  That's a nonsense argument; your criterion for
worthlessness changed in the middle.


> >They do often place non-truly random
> >generators (those in set #a) in set #2.  That amounts to
> >unreliability only if one is so foolish as to mistake
> >membership in set #2 as an assertion the generator is truly
> >random.
>
> Then you agree with me that those tests cannot determine set
> membership properly.

Why translate a precise statement into a vague statement?
Tests can reliably determine that some generators are in set
#a (non-random).  They cannot reliably determine that
generators are in set #b.


> Why not take your favorite one and defend it? Take the Run Test or the
> Chi Square Test or whatever you consider the "best" test. Present a
> detailed analysis and let's see you defend your claims that such tests
> are highly reliable.

In another thread I showed that a test based on the binomial
distribution of bit counts could show that a generator fails
your true randomness criteria.  This directly disproves claims
you made about the inapplicability of the test.
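
For concreteness, here is a minimal sketch of such a test in Python
(the normal approximation, the threshold, and all names here are my
own illustration, not the exact test from that thread):

  import math
  import random

  def bit_count_test(bits, alpha=0.01):
      # Under the randomness hypothesis the number of ones in n bits
      # is Binomial(n, 1/2), approximately normal for large n.
      n = len(bits)
      ones = sum(bits)
      sigma = math.sqrt(n) / 2.0            # std dev of the ones count
      z = abs(ones - n / 2.0) / sigma
      p = math.erfc(z / math.sqrt(2.0))     # two-sided tail probability
      return p < alpha, p

  # A generator biased toward 55% ones fails almost immediately:
  biased = [1 if random.random() < 0.55 else 0 for _ in range(10000)]
  print(bit_count_test(biased))             # (True, p far below 0.01)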

> My challenge has always been along the lines of the size of the sample
> compared to the population size. I fully agree that those tests will
> detect non-randomness (and therefore randomness) for extremely large
> samples of order the size of the ensemble. But for infinitesimally
> small samples compared to the size of the ensemble, I am not buying
> it, no matter how much you claim the Central Limit Theorem says
> otherwise.

I think I mentioned the Central Limit Theorem once to correct
something you had said about it.


> If I have a keystream of any decent size, then the number of sequences
> of that size in the ensemble is astronomically large compared to any
> possible sample size that you could hope ever to test properly.

So test what you can.  If the bit count is not binomially
distributed, then the sequences are not uniformly
distributed.  The number of bits _is_ large enough to do
tests that may reject the randomness hypothesis with high
reliability.  For example, consider the case you yourself
introduced in which n = 1,000,000 and we count cases in
which the number of ones is more than 0.05*n from the mean.
What's the probability that a truly random generator, again
by your definitions, produces such a case in a few thousand
tries?
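
Here is that arithmetic, worked out (my own back-of-envelope check,
using the same normal approximation):

  import math

  n = 1000000
  sigma = math.sqrt(n) / 2          # 500
  z = 0.05 * n / sigma              # 100 standard deviations
  p = math.erfc(z / math.sqrt(2))   # underflows to 0.0 in doubles;
  print(z, p)                       # the true value is ~1e-2174

So no: a truly random generator will essentially never produce such a
case, whether you try a few thousand times or a few trillion.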

> >That shows _you_ didn't follow the reasoning and _you_ don't
> >see the worth of the tests.
>
> The tests are inconclusive and no amount of mental gymnastics will
> overcome that fact.

Are you hoping if you just keep repeating that some
justification will magically appear?

Oh, one more time,
  Bob Knauer:
  | : Yet people still insist that if their pet RNG
  | : passes such and such a statistical test, it is truly random.
  | : Hell, the famous FIPS-140 says that. Have you ever read FIPS-140?
  Bryan Olson:
  | Can you quote where FIPS-140 says that?

--Bryan

============= Posted via Deja News, The Discussion Network ============
http://www.dejanews.com/       Search, Read, Discuss, or Start Your Own    

------------------------------

Date: Thu, 08 Apr 1999 09:54:43 -0400
From: "Trevor Jackson, III" <[EMAIL PROTECTED]>
Subject: Re: True Randomness & The Law Of Large Numbers

Douglas A. Gwyn wrote:
> 
> "Trevor Jackson, III" wrote:
> > Penrose made much of this issue in his book on artificial intelligence.
> > In principle there is supposed to be something special about quantum
> > events that makes them distinguishable from non-quantum events.  He
> > argues that brains are based on, or subject to, this special quality
> > that machines (computers) cannot simulate.
> 
> I was very disappointed with "The Emperor's New Mind".
> Penrose has done some remarkable technical work,
> but that doesn't seem to carry over to philosophy.
> For example, he cites approvingly Searle's "Chinese Room"
> argument (the essence of which is: how can a system
> understand Chinese if none of the cogs in the machine
> understands Chinese).

My reaction was much the same.  Most of the book is brilliant, but
wrong.

------------------------------

Date: Thu, 08 Apr 1999 10:02:20 -0400
From: "Trevor Jackson, III" <[EMAIL PROTECTED]>
Subject: Re: True Randomness & The Law Of Large Numbers

Eric Smith wrote:
> 
> "Trevor Jackson, III" <[EMAIL PROTECTED]> writes:
> > Penrose made much of this issue in his book on artificial intelligence.
> > In principle there is supposed to be something special about quantum
> > events that makes them distinguishable from non-quantum events.  He
> > argues that brains are based on, or subject to, this special quality
> > that machines (computers) cannot simulate.
> 
> It really aggravated me that he presented this as a postulate with no
> convincing evidence, and then proceeded to use this as a basis for a "proof"
> that an intelligent computer can't be made.  Never mind that if we wanted to,
> we could fairly easily make even an otherwise conventional computer (i.e., not
> a quantum computer) that had a quantum randomness based random number
> generator as a peripheral.
> 
> It seemed to me that Penrose was waving quantum randomness around like it was
> a magic wand that could be used to prove or disprove anything.  I was recently
> in an argument with someone who insisted on using Godel's Incompleteness
> Theorem in much the same way.  He claimed that compilers will never be able
> to produce object code as efficient as the best human hand-coded assembly,
> because of GIT.  When I asked how GIT was even relevant, he told me to think
> about it and I should be able to figure it out.  I guess that's what Penrose
> intended us to do regarding the connection between quantum randomness and
> intelligence.

Well, I'm not arrogant enough to claim the ability to resolve
intelligence versus quantum activity, but I do know a bit about
compilers and hand coding.  I believe the person making the claim of
superiority of hand coding is wrong.  In fact, I'd be willing to bet
real money that I can design an instruction set that no human can code
as well as a compiler.

There are annual contests for the most obfuscated code, usually C, that
a person can generate.  My favorite is a program that plays tic-tac-toe
by rewriting its source code into both a representation of the board,
with Xs and Os, and into the program to run to determine the next move.

Consider the possibility of an obfuscated instruction set (Intel is the
clear leader at the moment).  We could make the instruction set
arbitrarily complex (whisper: random?) so that *only* a compiler could
generate working object code.

------------------------------

From: [EMAIL PROTECTED] (R. Knauer)
Subject: Re: True Randomness & The Law Of Large Numbers
Date: Thu, 08 Apr 1999 14:30:44 GMT
Reply-To: [EMAIL PROTECTED]

On Thu, 08 Apr 1999 10:02:20 -0400, "Trevor Jackson, III"
<[EMAIL PROTECTED]> wrote:

>There are annual contests for the most obfuscated code, usually C, that
>a person can generate.

I am sure Microsoft is barred from participating, because otherwise
they would always win.

Bob Knauer

Signature files traditionally present comments from a population with
an IQ over 100. I thought it would be appropriate to present comments
from the other half of the population, namely the one with an IQ under
100. After all, they do make up 50% of the human condition. Never mind
that they also tend to become politicians. Here is a representative
example:

"People blame me because these water mains break, but I ask you, if the water mains
didn't break, would it be my responsibility to fix them then? WOULD IT!?!"
- Marion Barry, Mayor of Washington DC


------------------------------

From: [EMAIL PROTECTED] (R. Knauer)
Subject: Re: Douglas A. Gwyn : True Jerk
Date: Thu, 08 Apr 1999 14:32:32 GMT
Reply-To: [EMAIL PROTECTED]

On Thu, 08 Apr 1999 08:01:53 GMT, "Douglas A. Gwyn" <[EMAIL PROTECTED]>
wrote:

>"Douglas A. Gwyn" wrote:
>> hapticz wrote:
>> > truth is relative.
>> And you're asserting that as an absolute?
>> If not, it's self-defeating.
>
>I guess I should add: If so, it's self-contradictory.

Is there an echo in here?

That's what I said.

Bob Knauer

Signature files traditionally present comments from a population with
an IQ over 100. I thought it would be appropriate to present comments
from the other half of the population, namely the one with an IQ under
100. After all, they do make up 50% of the human condition. Never mind
that they also tend to become politicians. Here is a representative
example:

"People blame me because these water mains break, but I ask you, if the water mains
didn't break, would it be my responsibility to fix them then? WOULD IT!?!"
- Marion Barry, Mayor of Washington DC


------------------------------

From: [EMAIL PROTECTED] (Chris Monico)
Subject: Re: Security of LFSR on noisy channel.
Date: Thu, 08 Apr 99 06:18:58 GMT

In article <[EMAIL PROTECTED]>,
   <[EMAIL PROTECTED]> wrote:
>If I have a long (64 bit+) LFSR, and you want to read it, but
>can only read the bits with lots of noise, so that you can only
>get a little better than random chance (say 501 guesses out of 1000).
>
>Is there a way of doing this?
>Does anyone know of any upper or lower bounds on the complexity of
>this?
>
>

See Shannon's original papers ("A Mathematical Theory of
Communication", I think). As for lower bounds on the complexity, I
don't know whether that's known. In those papers, Shannon shows the
best that can be achieved with unlimited computing power. But to
investigate the complexity, you would want to look at the complexity
of the various decoders (trellis, Viterbi, Berlekamp-Massey,
majority-logic) that might be applicable to the code you would use
on this channel (I think the complexity of most common decoders is
well studied and probably known). I bet that for this channel you
could find a rate-1/16 code that would do fairly well and be easy to
decode (polynomial time).
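
For illustration only, here is a toy rate-1/16 repetition code with
majority-logic decoding, in Python (my own sketch, not a
recommendation; as my follow-up post on capacity points out, no
rate-1/16 code can help when the crossover probability is this close
to 1/2):

  import random

  def bsc(bits, p):
      # Flip each bit independently with crossover probability p.
      return [b ^ (random.random() < p) for b in bits]

  def encode(bits, r=16):
      return [b for b in bits for _ in range(r)]

  def decode(received, r=16):
      # Majority vote over each block of r repeated bits (ties -> 0).
      return [int(sum(received[i:i + r]) > r // 2)
              for i in range(0, len(received), r)]

  msg = [random.randint(0, 1) for _ in range(1000)]
  noisy = bsc(encode(msg), p=0.2)   # moderate noise: works well
  errors = sum(a != b for a, b in zip(msg, decode(noisy)))
  print(errors)                     # typically a handful out of 1000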


------------------------------

From: [EMAIL PROTECTED] (Jim Patrick)
Crossposted-To: talk.politics.guns
Subject: Re: Comments to DOJ re NICS
Date: Thu, 08 Apr 1999 12:26:44 GMT
Reply-To: [EMAIL PROTECTED]

On Wed, 07 Apr 1999 17:14:08 -0700,  in talk.politics.guns  Eric
Williams <[EMAIL PROTECTED]> wrote:

>> I'm as much against this illegal extension of NICS as you are, but I really
>> don't see this as being practical. Unless you want to insist that the FFL
>> has to have a fax to receive the NICS approval, which would probably not be
>> acceptable to many of them.
>
>I think there wouldn't be much of a problem getting a random-looking
>string of letters and digits written down correctly, people would tend
>to check those things over and make sure they got them right, and we're
>only talking about 12 characters or so for the signature (which could
>probably be shortened if need be by only using a thumbnail).  Retailers
>handle more than that much information all the time for credit card
>verifications.
>
>I'm more worried about the possibility that all the data entered into
>the signature-generating computer didn't exactly match the info on the
>form, due to typos, mis-spellings, poor handwriting, etc.  One way of
>handling this might be to fax the output from the computer, including an
>echo of the input, back to the FFL for attachment to the original form. 
>That way any mis-match could be accounted for.  If a fax machine is too
>much a burden, the same information could be sent by follow-up mail.

The easy (duuh) solution is to put the database query online.
Have the whole thing run by computers, eliminating transcription,
coffee breaks, daily meetings, etc.

It requires one telephone line and one US-based computer; printer
optional.  No transcription errors ever.

And it means that individuals can check their own record accuracy.




     Jim P

"A fear of weapons is a sign of retarded sexual and emotional maturity."
-Sigmund Freud, General Introduction to Psychoanalysis (1952)

------------------------------

From: Jim Gillogly <[EMAIL PROTECTED]>
Subject: Re: RC4 test
Date: Thu, 08 Apr 1999 05:32:13 -0700

[EMAIL PROTECTED] wrote:
> 
> > > I am willing to help (except giving the key...).
> >
> > OK, how about some plaintext, to make up for that 2^28 space.  Does
> > it end with a CRLF?  How many spaces in it?
> 
> The first three letters are 'RC4'.

Also, was it encrypted with the buggy version?  Bryan Olson pointed
out a problem with your swapping procedure.

BTW, to check out your RC4 implementation you can build a CipherSaber
and verify the results at http://ciphersaber.gurus.com .
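
For reference, here is a minimal alleged-RC4 (KSA plus PRGA) in
Python, with the swap done the standard way (my own sketch, written
from the widely circulated description, so treat it as illustrative):

  def rc4(key, n):
      # Key-scheduling algorithm (KSA).
      S = list(range(256))
      j = 0
      for i in range(256):
          j = (j + S[i] + key[i % len(key)]) % 256
          S[i], S[j] = S[j], S[i]   # the swap uses both i and j
      # Pseudo-random generation algorithm (PRGA).
      i = j = 0
      out = []
      for _ in range(n):
          i = (i + 1) % 256
          j = (j + S[i]) % 256
          S[i], S[j] = S[j], S[i]
          out.append(S[(S[i] + S[j]) % 256])
      return bytes(out)

  # Well-known test vector: key "Key" should give a keystream
  # beginning eb 9f 77 81 b7 34 ca 72 a7.
  print(rc4(b"Key", 9).hex())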

-- 
        Jim Gillogly
        17 Astron S.R. 1999, 12:30
        12.19.6.1.12, 1 Eb 5 Uayeb, Fifth Lord of Night

------------------------------

From: [EMAIL PROTECTED] (Chris Monico)
Subject: Re: Security of LFSR on noisy channel.
Date: Thu, 08 Apr 99 06:24:56 GMT

In article <[EMAIL PROTECTED]>,
   <[EMAIL PROTECTED]> wrote:
>If I have a long (64 bit+) LFSR, and you want to read it, but
>can only read the bits with lots of noise, so that you can only
>get a little better than random chance (say 501 guesses out of 1000).
>
>Is there a way of doing this?
>Does anyone know of any upper or lower bounds on the complexity of
>this?
>
>

Oh, wait a second. If a binary symmetric channel (BSC) is a good
model of your channel, there is little hope. The capacity of a BSC
approaches zero as the crossover probability approaches 1/2. If the
crossover is measurably higher or lower than 1/2 you're OK, but at
(or this close to) 1/2 the channel capacity is quite low. See
Shannon's papers.
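
To put a number on that (my own arithmetic): the BSC capacity is
C = 1 - H2(p), with H2 the binary entropy function, so for the
501-out-of-1000 channel in the original question:

  import math

  def h2(p):
      return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

  def bsc_capacity(p):
      return 1.0 - h2(p)

  print(bsc_capacity(0.1))     # ~0.53 bits per channel use
  print(bsc_capacity(0.499))   # ~2.9e-06 bits per channel use

Even with an ideal code, that's on the order of 350,000 channel uses
per recovered bit.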


------------------------------

From: [EMAIL PROTECTED] (John Savard)
Subject: No QUADIBLOC 99 just yet...
Date: Thu, 08 Apr 1999 16:02:34 GMT

After rereading the paper on the boomerang attack, to see if I could find
any comment on generalizing it further to divide a block cipher into more
than two parts, I noticed, in the description of COCONUT 98, attacked with
this method, that there was a key-dependent bit-transpose in the middle of
the cipher. This was no hindrance at all to the boomerang attack.

The original QUADIBLOC block cipher had, to muddy the waters, a *fixed* bit
transpose in the middle of the cipher. Hence, I thought I had better review
the design, to find out if it had been made obsolete.

The use of the S-box S2 in the cipher - and in association with the bit
transpose in the middle - seems to be enough to make differential
cryptanalysis very difficult. The key schedule, even without key
augmentation, is not too awful. But the symmetric use of S1 is also a
potential weakness: again, the double SP nature of the f-function seems to
provide strength.

The main problem with QUADIBLOC still seems to be, not that it is insecure,
but that it is not a particularly efficient design.

John Savard (teneerf is spelled backwards)
http://members.xoom.com/quadibloc/index.html

------------------------------

From: [EMAIL PROTECTED] (Jim Dunnett)
Subject: Re: Brute force encryption cracker
Date: Thu, 08 Apr 1999 19:41:26 GMT
Reply-To: Jim Dunnett

On Wed, 07 Apr 1999 16:18:47 -0700, Sundial Services
<[EMAIL PROTECTED]> wrote:

>CSRA wrote:
>> 
>> Hi.
>> 
>> Are there any DES/RSA/ or PGP brute force key finders out there?
>> I realize that for large key sizes it would take a zillion years on a home
>> PC but I would like to decode say a PGP message with say a < 8 bit key
>> size.
>> Are there any out there?
>
>
>If you mean 8 bits, as in one byte, then there are only 256
>possible keys and you certainly don't mean that.
>
>If you meant 8 bytes, then there are bezillions of possible keys and you
>could be stone-dead of old age before you find one even if you were born
>yesterday.

Or he may be lucky and hit the right one in the first ten
he tries!

-- 
Regards, Jim.                | I am suspicious about those who love
olympus%jimdee.prestel.co.uk | humanity. I've noticed they become
dynastic%cwcom.net           | terribly worried about Hottentots but
nordland%aol.com             | don't actually like people they know.
marula%zdnetmail.com         | -  Brian Walden. 
Pgp key: pgpkeys.mit.edu:11371


------------------------------

From: Jim Gillogly <[EMAIL PROTECTED]>
Subject: Re: Stupid Question
Date: Thu, 08 Apr 1999 13:50:13 -0700

Cubicdog wrote:
> pardon my ignorance. I'm certain that there must be a
> webpage or FAQ that addresses my concerns, so a simple
> pointer will be most appreciated.

Check out comp.security.pgp.discuss -- your questions are
answered there frequently.

> reestablish my crypto identity. So, I go poking about on the
> web for PGP, and lo, It (pgp) now belongs lock-stock and
> barrel to Network Associates, where (and I quote) "Virtually

True.

> This doesn't exactly fill me with warm fuzzy feelings. Digging
> around, I see that NA doesn't bother with publishing the source
> code for their products, so the newer PGP is no longer
> subject to objective review.

False.  They print the source in scannable books with checksums,
and Stale and his merry band scan them in overseas.  See
http://www.pgpi.com for the results.  NAI has not broken faith
in this regard.

I'm still waiting for the Unix version of the new stuff, but
am doing just fine with 2.6.2, 5.0i, and some homebrew stuff
of my own written up from the OpenPGP spec in RFC 2440 -- another
new and wondrous innovation that happened as you were netless.

-- 
        Jim Gillogly
        17 Astron S.R. 1999, 20:46
        12.19.6.1.12, 1 Eb 5 Uayeb, Fifth Lord of Night

------------------------------

From: [EMAIL PROTECTED] (John Briggs)
Subject: Re: True Randomness & The Law Of Large Numbers
Date: 8 Apr 99 16:34:10 -0400

In article <[EMAIL PROTECTED]>, [EMAIL PROTECTED] (R. Knauer) 
writes:
> I think it is clear that if you could test the whole ensemble, you
> would have certain knowledge. And it is clear that if the sample size
> is too small, that your knowledge is very poor. I am looking for the
> knowledge criterion I call "reasonable certainty". I am leaving that
> open for you to specify. Once you specify it, then you should be able
> to some up with a sample size for a given test. Just keep in mind that
> the gaussian falls off very slowly, so each increase in significance
> costs exponentially more in sample size.

It is not clear what you are talking about here.  Increase in significance
need not cost exponentially more in sample size.

Say, for example, we start with a baseline test on a sample size of
10,000 bits.  And this test triggers at the 99% confidence level
if the observed bias is outside the range of 48.7 to 51.3 percent.
(I think these figures are correct for a 10,000 bit sample).

Now suppose we want to increase this significance.

With 20,000 bits an observed bias outside the range of 48.7 to 51.3
is only about 0.03% likely.  That's a 99.97% confidence level.

At 30,000 bits the confidence level rises to 99.9993%

So rather than sample size being an exponential function of significance,
we've got significance being something like an exponential function
of sample size.  (For a fixed bias band the tail probability shrinks
roughly like exp(-c*n), since the z-score grows like sqrt(n).)

Perhaps you were talking about discrimination requiring exponentially
increasing sample sizes as the gap (48.7 to 51.3 above) is tightened
around 50%.  But the words you used indicated otherwise.
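
Here is the arithmetic behind those figures, under the normal
approximation to the binomial (my own check; exact binomial values
differ only slightly):

  import math

  def confidence(n, hi=0.513):
      # Probability a fair generator stays inside the symmetric band
      # (1 - hi) .. hi, i.e. 48.7% to 51.3% ones for hi = 0.513.
      sigma = math.sqrt(n) / 2.0           # std dev of the ones count
      z = (hi - 0.5) * n / sigma
      return 1.0 - math.erfc(z / math.sqrt(2.0))

  for n in (10000, 20000, 30000):
      print(n, confidence(n))
  # 10000 -> ~0.9907, 20000 -> ~0.9998, 30000 -> ~0.999993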

> I am also still concerned on the meta-mathematical level with the
> (apparent) circular reasoning of assuming that a sampling procedure is
> random in order for the tests to be valid and then concluding that the
> sampling procedure is not random because the sample failed the tests.

> That tells me that either the notion of sample is confused or the
> notion of random is confused, as they apply to the notion of sampling
> and generation. For example, in FIPS-140, the authors tell you to
> gather a sample of 20,000 contiguous bits. Is 20,000 the sample size?
> If so, then you are testing 1-bit properties since the sample space is
> now {0,1}. But that is not what our distribution is about. It is about
> sequences of much larger size than 1 bit. So is the 20,000 bit
> sequence just one sample? If so, that is a very small sample size.

What difference do you think it makes whether you consider the sample
to be large or small?  The likelihood of finding 34 consecutive zero
bits in 20,000 bits is the same whether you think of it as 20,000
single bit strings or as one 20,000 bit string.
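
A quick estimate of that likelihood (my own arithmetic, a union bound
that is accurate at these odds): the expected number of length-34
all-zero windows in 20,000 fair bits is

  n, r = 20000, 34
  print((n - r + 1) * 2.0 ** -r)    # about 1.2e-06

-- about one in a million either way you slice the sample.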

> My purpose is to generate a prevailing consensus so we can move on to
> topics more closely related to crypto.

Hint:  Devil's advocates don't generate consensus.  They disrupt it.
Admittedly you do a fine job of disruption.

        John Briggs                     [EMAIL PROTECTED]

------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list (and sci.crypt) via:

    Internet: [EMAIL PROTECTED]

End of Cryptography-Digest Digest
******************************
