Cryptography-Digest Digest #364, Volume #9        Fri, 9 Apr 99 20:13:03 EDT

Contents:
  Re: True Randomness & The Law Of Large Numbers (John Briggs)
  Re: Announce - ScramDisk v2.02h ("Harvey Rook")
  Re: Wanted: Why not PKzip? (Jim Dunnett)
  Re: Recommendation of books? ("Steven Alexander")
  Re: True Randomness & The Law Of Large Numbers (R. Knauer)
  Re: Douglas A. Gwyn : True Jerk (R. Knauer)
  Comments on FileLockE? (Lee Jorgensen)
  Re: True Randomness & The Law Of Large Numbers (Terry Ritter)
  Re: Test vector repository--specifically, help with a broken Blowfish implementation. (Jerry Coffin)
  Re: True Randomness & The Law Of Large Numbers (Jerry Coffin)
  Re: Wanted: Why not PKzip? ([EMAIL PROTECTED])

----------------------------------------------------------------------------

From: [EMAIL PROTECTED] (John Briggs)
Subject: Re: True Randomness & The Law Of Large Numbers
Date: 9 Apr 99 14:37:58 -0400

In article <[EMAIL PROTECTED]>, [EMAIL PROTECTED] (R. Knauer) 
writes:
> On 9 Apr 99 07:51:43 -0400, [EMAIL PROTECTED] (John Briggs)
> wrote:
> 
>>Since R. Knauer has responded to this article and has snipped the text
>>above without response, we can apparently take it that he agrees that
>>I was right and he was wrong and increasing significance does not
>>require exponentially increasing sample size.
> 
> As I pointed out earlier, I was referring to the fact that the
> gaussian is an exponential which falls off very slowly, which means
> that in order to gain more significance you have to pay an
> exponentially greater price in terms of sample size.

And that statement is, as I pointed out, false.

> I did not intend for that comment to be a precise analytical
> statement, but just a qualitative observation. I do recall reading in
> Triola's elementary statistics book that, in general, to double the
> level of confidence of a measurement you must get an order of
> magnitude larger sample.

That recollection is, as it stands, false.

What Triola probably said was that to double the accuracy of a
measurement at a fixed confidence level you need to quadruple the
sample size.

To add one significant figure to the accuracy at a fixed confidence
level (multiplying accuracy by 10) you need to multiply the sample
size by 100.  It's a quadratic relationship.

That's not the same thing as increasing the confidence level.
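
In C, the relationship is easy to see by printing the margin of error
z*sqrt(p(1-p)/n) as n grows (a sketch; p = 0.5 and the 95% level,
z = 1.96, are assumed):

    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        /* Margin of error for a sample proportion at 95% confidence.
           Each 100x increase in n buys one more decimal digit of
           accuracy: a quadratic relationship. */
        double p = 0.5, z = 1.96;
        long n;
        for (n = 100; n <= 1000000; n *= 100)
            printf("n = %7ld   margin = %.5f\n",
                   n, z * sqrt(p * (1.0 - p) / n));
        return 0;
    }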

> I do not care to chase down that specific reference, since I did not
> intend anything quantitatively precise by my original comment. I agree
> with your analysis for a specific test statistic, so let's move on,
> unless you think there is something fundamental to be gained by
> further discussion.

Your statement is not quantitatively wrong.  It's qualitatively wrong.

Or, more likely, as above, the result of a confusion about which
quantity Triola was talking about.

> I focus entirely on the so-called "Monobit Test" in FIPS-140, which I
> reproduce here (we can take up other tests later):
> 
> +++++
> A single bit stream of 20,000 consecutive bits of output from the
> generator is subjected to each of the following tests. If any of the
> tests fail, then the module shall enter an error state.
> 
> The Monobit Test 
> 
> 1.Count the number of ones in the 20,000 bit stream. Denote this
> quantity by X.
> 
>  2.The test is passed if 9,654 < X < 10,346.
> +++++
> 
> Here are my questions.
> 
> Assuming that the sample consists of 20,000 individual samples of one
> bit each (which is what "monobit" means):
> 
> What is the exact statistical test being used and what are its
> parameters?

What is the test?  You just described it yourself.  Seems pretty
clear and unambiguous.  It takes 20,000 bits of input and produces
1 bit of output.  What more do you want?
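
If it helps, the whole test is a few lines of C (a sketch only;
FIPS-140 does not specify a packing, so assume the 20,000 bits arrive
packed 8 per byte in 2,500 bytes):

    /* FIPS 140-1 monobit test: count the ones in 20,000 bits.
       Assumes the stream is packed 8 bits per byte in 2500 bytes. */
    int monobit_test(const unsigned char bits[2500])
    {
        int i, b, x = 0;                  /* X = number of one bits */
        for (i = 0; i < 2500; i++)
            for (b = 0; b < 8; b++)
                x += (bits[i] >> b) & 1;
        return x > 9654 && x < 10346;     /* 1 = pass, 0 = fail */
    }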

What are its parameters?

   Let's assume that you mean that the test defines a random variable
   with a distribution that is a function of the joint distribution
   of each of the 20,000 individual input variables.  And you're asking
   about the parameters that describe the distribution of this resulting
   random variable.  Ok.  Even a non-expert like myself can provide some
   answers on that basis.

   Suppose that the 20,000 individual input samples are independent
   and each has a distribution such that 0 and 1 are equally likely.

   Then the probability of failing the test is roughly 0.000001
   And the probability of passing is roughly 0.999999
   If we assign values of 1 (for pass) and 0 (for fail) then
   we have the distribution:  {0 with probability 0.000001
                               1 with probability 0.999999}

   That distribution has a mean of 0.999999
   And, if my calculations are correct, its standard deviation is .001

   Suppose that the 20,000 individual input samples are independent
   but biased so that 1 is favored over 0 by a 51/49 ratio.

   Then the probability of failing the test is roughly 0.020
   And the probability of passing is roughly 0.980
   Which gives us the distribution:  {0 with probability 0.020
                                      1 with probability 0.980}

   That distribution has a mean of 0.980 and standard deviation .14

There.  Parameters.
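
For anyone who wants to check those figures, here is a sketch using the
normal approximation with continuity correction (erfc() is the C math
library's complementary error function; the exact binomial sum would
differ slightly this far into the tails):

    #include <stdio.h>
    #include <math.h>

    /* Standard normal CDF via erfc(). */
    static double Phi(double z)
    {
        return 0.5 * erfc(-z / sqrt(2.0));
    }

    /* P(9654 < X < 10346) for X ~ Binomial(20000, p), using the
       normal approximation with continuity correction. */
    static double pass_prob(double p)
    {
        double n  = 20000.0;
        double mu = n * p;
        double sd = sqrt(n * p * (1.0 - p));
        return Phi((10345.5 - mu) / sd) - Phi((9654.5 - mu) / sd);
    }

    int main(void)
    {
        printf("p = 0.50: pass = %.7f\n", pass_prob(0.50)); /* ~0.9999990 */
        printf("p = 0.51: pass = %.7f\n", pass_prob(0.51)); /* ~0.98 */
        return 0;
    }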

> Mario Triola, the statistical expert recommended by one poster here,
> states that parametric tests cannot be used to determine true
> randomness. Is FIPS-140 employing a parametric test, and if so, what
> is the justification?

Asked and answered.  Many times by many folks.

Because the FIPS-140 tests don't determine true randomness.  Nor
do they claim to.

They can diagnose (with some conditional probability of error) certain
classes of nonuniform distributions.


Probably time I clammed up for another couple of months.  No sense
contributing further to the trollage.  Later.

        John Briggs                     [EMAIL PROTECTED]

------------------------------

From: "Harvey Rook" <[EMAIL PROTECTED]>
Subject: Re: Announce - ScramDisk v2.02h
Date: Fri, 9 Apr 1999 11:31:40 -0700


Terry Ritter <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]...
>
> On 9 Apr 1999 11:10:56 GMT, in <7ekn80$kc1$[EMAIL PROTECTED]>, in
> sci.crypt [EMAIL PROTECTED] (Andrew Haley) wrote:
>
> >The answer is simple.  Kerckhoff's maxim says that your attacker knows
> >the cryptosystem you're using, but does not know the key.  If you're
> >using superencryption, your attacker knows which systems you're using.
>
> That's fine if you always use the same ciphers in the same order.  But
> if the ciphers are dynamically selected by keying, or just dynamically
> selected frequently by communications under cipher, the attacker does
> *not* know "which systems you're using."  Kerckhoff's maxim does not
> apply.
>
This is incorrect. By Kerckhoff's maxim, you have to assume your attacker
has a copy of your deciphering machine. If he has a copy of your
deciphering machine, the attacker can figure out the algorithm you use
to select ciphers. Once he knows the algorithm used to select ciphers,
super-encipherment only doubles or triples the amount of time needed to
brute force. You'd be much better off adding an extra byte to your key.
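
The arithmetic is worth making explicit (a sketch; the 56-bit key size
is a hypothetical stand-in):

    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        /* Tripling the ciphering roughly triples the brute-force
           work; one extra key byte multiplies it by 2^8 = 256. */
        double one_cipher = pow(2.0, 56.0);   /* hypothetical 56-bit key */
        printf("one cipher:        %.3g trials\n", one_cipher);
        printf("three ciphers:     %.3g trials\n", 3.0 * one_cipher);
        printf("one extra byte:    %.3g trials\n", pow(2.0, 64.0));
        return 0;
    }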

> I suggest that each communication include a small encrypted control
> channel, over which a continuous conversation of what ciphers to use
> next takes place.  This would be an automatic negotiation, somewhat
> like occurs in modern modems, from cipher selections approved by the
> users (or their security people).
>
> >Of course, your attacker must now analyze the compound cipher, which
> >is almost certainly harder to do than attacking a single cipher.
>
> Yes.  Even if each cipher used has known weaknesses, those may not be
> exploitable in the multi-ciphering case.
>
It's only harder by one or two orders of magnitude. Computers built 3
years from now will have enough power to compensate for this difference.
Adding an extra byte to your key makes the problem harder by a factor of
2^8 = 256, well over two orders of magnitude. This is much harder to
attack, yet it's simpler and cleaner to analyze. Simple, clean systems
are much less likely to have holes.

Unexpected weaknesses in ciphers designed by good cryptographers (say
Rivest or Schneier) are very unlikely to appear. Remember that DES,
after more than 20 years of analysis, is still vulnerable in practice
only to a brute force attack. I'd be willing to bet that RC6 and
Twofish will withstand the same scrutiny.

Security breaks down because of bad passwords and poor protocols, not
because of cipher weakness. Plan your system accordingly, or you are
deluding yourself.

Harv
RedRook At Zippy The Yahoo Dot Com
Remove the Zippy The to send email.




------------------------------

From: [EMAIL PROTECTED] (Jim Dunnett)
Crossposted-To: comp.security.misc
Subject: Re: Wanted: Why not PKzip?
Date: Fri, 09 Apr 1999 19:00:18 GMT
Reply-To: Jim Dunnett

On 8 Apr 99 22:31:14 GMT, [EMAIL PROTECTED] wrote:

>In sci.crypt Green Adair <[EMAIL PROTECTED]> wrote:
>> Why would you not want to use PKZIP with a good pass_phrase?
>
>Because its encryption is incredibly weak (known plaintext).

Yes...but it will compress the file and give it a little
entropy.

And it's useful for archiving several files together.
(With or without a password).

-- 
Regards, Jim.                | I am suspicious about those who love
olympus%jimdee.prestel.co.uk | humanity. I've noticed they become
dynastic%cwcom.net           | terribly worried about Hottentots but
nordland%aol.com             | don't actually like people they know.
marula%zdnetmail.com         | -  Brian Walden. 
Pgp key: pgpkeys.mit.edu:11371


------------------------------

From: "Steven Alexander" <[EMAIL PROTECTED]>
Subject: Re: Recommendation of books?
Date: Fri, 9 Apr 1999 14:33:11 -0700

"Applied Cryptography", "Handbook of Applied Cryptography", and
"Cryptography:Theory and Practice"

find them at amazon.com or your local book store.

-steven

[EMAIL PROTECTED] wrote in message
<7ejlhi$8a$[EMAIL PROTECTED]>...
>Hello, could someone recommend a few books which are thorough in discussing
>all known authentication protocols, key exchange protocols, etc...
>
>Thanks.
>
>
>
>--
>---------------------------------------------------------------------
>www.clark.net/pub/sinecto/index.html (optimized dsp/math/image libs.)
>TMS320C3x/C4x/TMS320C6x, PowerPC, Pentium, Alpha, NT/Linux/Solaris/AIX
>



------------------------------

From: [EMAIL PROTECTED] (R. Knauer)
Subject: Re: True Randomness & The Law Of Large Numbers
Date: Fri, 09 Apr 1999 21:42:31 GMT
Reply-To: [EMAIL PROTECTED]

On 9 Apr 99 14:37:58 -0400, [EMAIL PROTECTED] (John Briggs)
wrote:

>What is the test?  You just described it yourself.  Seems pretty
>clear and unambiguous.  It takes 20,000 bits of input and produces
>1 bit of output.  What more do you want?

The name of the kind of statistical test being used.

>What are its parameters?
>
>   Let's assume that you mean that the test defines a random variable
>   with a distribution that is a function of the joint distribution
>   of each of the 20,000 individual input variables.  And you're asking
>   about the parameters that describe the distribution of this resulting
>   random variable.  Ok.  Even a non-expert like myself can provide some
>   answers on that basis.

It sure took long enough.

>   Suppose that the 20,000 individual input samples are independent
>   and each has a distribution such that 0 and 1 are equally likely.

That is the assumption of a uniform Bernoulli process.

>   Then the probability of failing the test is roughly 0.000001

I assume that comes from the Standard Normal (z) Distribution.

>   And the probability of passing is roughly 0.999999

>   If we assign values of 1 (for pass) and 0 (for fail) then
>   we have the distribution:  {0 with probability 0.000001
>                               1 with probability 0.999999}
>   That distribution has a mean of 0.999999

Good grief, now we are getting distributions of distributions. Where
will it ever end?

You seem to be saying that there is a distribution for the validity of
the original distribution. That seems highly irregular at first
glance.

>   And, if my calculations are correct, its standard deviation is .001

The standard deviation for what random variable?

>   Suppose that the 20,000 individual input samples are independent
>   but biased so that 1 is favored over 0 by a 51/49 ratio.

>   Then the probability of failing the test is roughly 0.020
>   And the probability of passing is roughly 0.980
>   Which gives us the distribution:  {0 with probability 0.020
>                                      1 with probability 0.980}

>   That distribution has a mean of 0.980 and standard deviation .14

>There.  Parameters.

So what do they relate to? What is being parameterized? What is the
random variable of that parameterized distribution?

>> Mario Triola, the statistical expert recommended by one poster here,
>> states that parametric tests cannot be used to determine true
>> randomness.

I should have said: "parametric tests cannot be used to determine
non-randomness."

We all know that there are no tests of any kind to determine
randomness.

>> Is FIPS-140 employing a parametric test, and if so, what
>> is the justification?

>Asked and answered.  Many times by many folks.

But not answered correctly.

>Because the FIPS-140 tests don't determine true randomness.  Nor
>do they claim to.

See comment above.

>They can diagnose (with some conditional probability of error) certain
>classes of nonuniform distributions.

You use the word "diagnose". I have used the same term several times
myself. When I use it, I mean that one cannot determine with
reasonable certainty that a TRNG is not random. All statistical tests
can do is warn you that the TRNG is very likely to be non-random.

The distinction is whether such a diagnosis is to serve as a criterion
for outright rejection, or to serve as a diagnostic warning which
prompts further action separate from statistical testing.

>Probably time I clammed up for another couple of months. 

Well, you will leave some questions unanswered above.

I note that you did not address my comments about the circular nature
of deciding non-randomness with statistical tests.

> No sense contributing further to the trollage.

There is no *primary* intent to troll. If, however, the so-called
experts decide to step in it, that's their problem.

Is every question you can't answer trollish to you?

Bob Knauer

"I am making this trip to Africa because Washington is an international city, just 
like Tokyo,
Nigeria or Israel.  As mayor, I am an international symbol.  Can you deny that to 
Africa?"
- Marion Barry, Mayor of Washington DC


------------------------------

From: [EMAIL PROTECTED] (R. Knauer)
Subject: Re: Douglas A. Gwyn : True Jerk
Date: Fri, 09 Apr 1999 21:12:01 GMT
Reply-To: [EMAIL PROTECTED]

On Fri, 9 Apr 1999 11:31:10 -0700, "Dann Corbit"
<[EMAIL PROTECTED]> wrote:

>Your choice of thread title suggests what about *you*?

It is you who needs to take a trip to the archives.

Some twit posted a header like the one above but with my name. I
ignored it. Gwyn deliberately posted to it to propagate it. I
responded in kind.

Get your facts straight before you stick your nose in someone else's
affairs. I am willing to let the matter end here, or we can continue
to propagate this post. The choice is yours.

Bob Knauer

"I am making this trip to Africa because Washington is an international city, just 
like Tokyo,
Nigeria or Israel.  As mayor, I am an international symbol.  Can you deny that to 
Africa?"
- Marion Barry, Mayor of Washington DC


------------------------------

From: Lee Jorgensen <[EMAIL PROTECTED]>
Subject: Comments on FileLockE?
Date: Fri, 09 Apr 1999 17:32:20 -0500

Has anyone looked at, or reviewed FileLockE from Zephyr Technologies?
(http://www.zephyrtech.com)

I'm thinking of purchasing it; however, I've become wary of 'snake
oil' type products.

The security appears sound, but then again, I'm no cryptographer.

If anyone has any information on this program, I'd appreciate it.

--
Lee Jorgensen, Programmer/Analyst - Bankoe Systems, Inc.
mailto:[EMAIL PROTECTED]  <-- reverse moc
mailto:[EMAIL PROTECTED]  <-- reverse ten



------------------------------

From: [EMAIL PROTECTED] (Terry Ritter)
Subject: Re: True Randomness & The Law Of Large Numbers
Date: Fri, 09 Apr 1999 23:44:22 GMT


On Fri, 9 Apr 1999 17:02:15 -0600, in
<[EMAIL PROTECTED]>, in sci.crypt
[EMAIL PROTECTED] (Jerry Coffin) wrote:

>In article <[EMAIL PROTECTED]>, 
>[EMAIL PROTECTED] says...
>
>[ ... ] 
>
>> >Now, from a viewpoint of cryptanalysis, we can divide the generator 
>> >used into one of two groups: those for which we are able to correlate 
>> >numbers, and those for which we can't.
>> 
>> Bit bias plays an important role in true random number generation too.
>> Normally you do not lump bit bias in with correlation because they are
>> not related.
>
>Yes, my terminology was poor.  I'd intended to give the idea of the 
>cryptanalyst being able to infer any sort of predictable 
>characteristics, regardless of the exact nature of the predictability 
>found.

Nah, "correlation" is very acceptable terminology.  "Bit-bias" is a
specific case of strings of length 1 correlating to constants. 

In general, we can assume strings of arbitrary length, with arbitrary
correlating functions (linear, or nonlinear in all its forms), against
constants, string positions, or ever-more-complex constructions.  Any
predictable relationship reasonably can be called a "correlation."
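
For the length-1 case, a measurement sketch (assuming the stream is
stored one bit per byte; the helper name is arbitrary):

    /* Estimate bit bias and a lag-1 statistic for a bit stream stored
       one bit per byte (each element 0 or 1).  For independent,
       unbiased bits, both results should hover near 0.5. */
    void bit_stats(const unsigned char *bit, long n,
                   double *bias, double *lag1)
    {
        long i, ones = 0, eq = 0;
        for (i = 0; i < n; i++) {
            ones += bit[i];
            if (i > 0 && bit[i] == bit[i - 1])
                eq++;                 /* equal adjacent pair */
        }
        *bias = (double)ones / n;
        *lag1 = (double)eq / (n - 1);
    }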

---
Terry Ritter   [EMAIL PROTECTED]   http://www.io.com/~ritter/
Crypto Glossary   http://www.io.com/~ritter/GLOSSARY.HTM


------------------------------

From: [EMAIL PROTECTED] (Jerry Coffin)
Subject: Re: Test vector repository--specifically, help with a broken Blowfish implementation.
Date: Fri, 9 Apr 1999 17:02:06 -0600

In article <[EMAIL PROTECTED]>, [EMAIL PROTECTED] says...

[ ... ] 

> Sorry, but the C and C++ standard only says that  sizeof(int) <= sizeof(long).
> Compile and run the statement

Both the C and C++ standards give minimum ranges for each integer 
type.  Though given in terms of ranges they must be able to represent, 
they basically come out to:
type            bits
char            8
short           16
int             16
long            32

Of course, implementations are free to increase any of these as 
they see fit.  For example, the compilers I know of for Crays use 8 
bits for char, and 64 bits for everything else.  They must retain the 
basic ordering so any char must be able to fit in an int, any int in a 
long, and so on.  It's arguable that int must be larger than char -- 
though never specified directly, it's difficult or impossible to make 
all the I/O stuff work correctly otherwise.
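
A quick sketch to see what a given implementation actually provides
(note that sizeof times CHAR_BIT counts storage bits, which may exceed
the usable value range):

    #include <stdio.h>
    #include <limits.h>

    int main(void)
    {
        /* The standard's minimums are 8/16/16/32 bits; any
           particular platform is free to exceed them. */
        printf("char : %d bits\n", CHAR_BIT);
        printf("short: %d bits\n", (int)(sizeof(short) * CHAR_BIT));
        printf("int  : %d bits\n", (int)(sizeof(int)   * CHAR_BIT));
        printf("long : %d bits\n", (int)(sizeof(long)  * CHAR_BIT));
        return 0;
    }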



------------------------------

From: [EMAIL PROTECTED] (Jerry Coffin)
Subject: Re: True Randomness & The Law Of Large Numbers
Date: Fri, 9 Apr 1999 17:02:15 -0600

In article <[EMAIL PROTECTED]>, 
[EMAIL PROTECTED] says...

[ ... ] 

> >Now, from a viewpoint of cryptanalysis, we can divide the generator 
> >used into one of two groups: those for which we are able to correlate 
> >numbers, and those for which we can't.
> 
> Bit bias plays an important role in true random number generation too.
> Normally you do not lump bit bias in with correlation because they are
> not related.

Yes, my terminology was poor.  I'd intended to give the idea of the 
cryptanalyst being able to infer any sort of predictable 
characteristics, regardless of the exact nature of the predictability 
found.

------------------------------

From: [EMAIL PROTECTED]
Crossposted-To: comp.security.misc
Subject: Re: Wanted: Why not PKzip?
Date: Fri, 09 Apr 1999 23:52:57 GMT

Our programs Navaho Lock and Navaho Zipsafe both offer compression and
encryption, and they are easy to use, fast, and secure. You can drag any
file, folder, digital image, or even an executable into the drop area,
and it will be encrypted and compressed.

Check them out at

http://www.navaholock.com



In article <[EMAIL PROTECTED]>,
  Jim Dunnett wrote:
> On 8 Apr 99 22:31:14 GMT, [EMAIL PROTECTED] wrote:
>
> >In sci.crypt Green Adair <[EMAIL PROTECTED]> wrote:
> >> Why would you not want to use PKZIP with a good pass_phrase?
> >
> >Because its encryption is incredibly weak (known plaintext).
>
> Yes...but it will compress the file and give it a little
> entropy.
>
> And it's useful for archiving several files together.
> (With or without a password).
>
> --
> Regards, Jim.                | I am suspicious about those who love
> olympus%jimdee.prestel.co.uk | humanity. I've noticed they become
> dynastic%cwcom.net           | terribly worried about Hottentots but
> nordland%aol.com             | don't actually like people they know.
> marula%zdnetmail.com         | -  Brian Walden.
> Pgp key: pgpkeys.mit.edu:11371
>
>

============= Posted via Deja News, The Discussion Network ============
http://www.dejanews.com/       Search, Read, Discuss, or Start Your Own    

------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list (and sci.crypt) via:

    Internet: [EMAIL PROTECTED]

End of Cryptography-Digest Digest
******************************
