Cryptography-Digest Digest #383, Volume #9       Tue, 13 Apr 99 07:13:03 EDT

Contents:
  Re: True Randomness & The Law Of Large Numbers (R. Knauer)
  Re: a question about time and DES cracking ("Douglas A. Gwyn")
  Re: Adequacy of FIPS-140 (Terry Ritter)
  Re: SCRAMDISK question ("Sam Simpson")
  Re: PGP HowTo ("Speedy")
  Re: Enhanced ECB mode any thoughts ? (David Kuestler)
  Re: Adequacy of FIPS-140 (R. Knauer)
  Re: True Randomness & The Law Of Large Numbers (R. Knauer)

----------------------------------------------------------------------------

From: [EMAIL PROTECTED] (R. Knauer)
Subject: Re: True Randomness & The Law Of Large Numbers
Date: Mon, 12 Apr 1999 19:39:48 GMT
Reply-To: [EMAIL PROTECTED]

On 12 Apr 1999 12:23:00 -0500, [EMAIL PROTECTED] (Herman
Rubin) wrote:

>If one does not believe in physical probability, one does not believe
>in physical probability.

Can I quote you on that? :-)

>Einstein claimed, "God does not play dice with the universe."

And Big Al was wrong, too. The Universe is just one big crap shoot at
the quantum level when it comes to measurement. There are no hidden
variables. Locality is not observed. Systems do remain entangled
across space-like separations, as if influences traveled faster than
light.

The reason is that the result of a given measurement cannot be known
in advance, because in order to determine it, one would have to
measure the state of the system, which would then change the result
one determined. Quantum Randomness is intimately tied in with the
Unknown in general meta-mathematical terms. See Chaitin.

>I would not accept the stated standards as adequate for statistical
>purposes.

I smell a consensus beginning to form here, sports fans.

Are the rest of you taking notes - or are you going to accuse me again
of having fabricated the consensus once it emerges?

>I want much better randomness than they promise, and
>I would test much harder, and I still would not accept the output
>of a single device stream, as it is impossible to test it adequately.

That is what I have been saying all along, based on the comments of Li
& Vitanyi, Kolmogorov, Feller and Williams & Clearwater.

>Even if a decay source is "perfect", the transcribing device has its
>problems, and these are not the problems of dead time.

What other problems do you consider significant besides dead time? We
assume that pulse pileup and discriminator problems are lumped into
what we call the dead time problem, which is really a general
detection problem not limited to quenching the detector.

>The chi squared test was originally a test for the statistical fit of
>cell frequencies to theoretical probabilities.  This test does not 
>have exactly the chi squared distribution under the hypothesis, but
>does have this distribution asymptotically.

But what is its applicability to randomness?
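
For concreteness, the classical cell-frequency test Rubin describes
might be sketched in C like this. The byte-sized cells and the
function name are my own illustration, not anything from the thread,
and the critical value to compare against is left to a table:

    #include <stdint.h>
    #include <stddef.h>

    /* Chi-squared goodness-of-fit of observed byte frequencies to
       the uniform distribution.  Under the uniformity hypothesis the
       statistic is only asymptotically chi-squared, here with 255
       degrees of freedom, which is Rubin's point above. */
    double chi_squared_bytes(const uint8_t *buf, size_t n)
    {
        size_t count[256] = {0};
        double expected = (double)n / 256.0;
        double chi2 = 0.0;

        for (size_t i = 0; i < n; i++)
            count[buf[i]]++;
        for (int c = 0; c < 256; c++) {
            double d = (double)count[c] - expected;
            chi2 += d * d / expected;
        }
        return chi2;  /* compare to a chi-squared(255) critical value */
    }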

>>It is tempting to equate statistical significance with closeness to
>>being a TRNG, as if there is some kind of measurement being conducted.

>This is the typical misuse of statistics by those who have not had
>decent statistical theory.

Understand that I did not mean to imply that I agreed with that
statement.

In fact I most assuredly do not agree with it, because
randomness/non-randomness is a nominal consideration and cannot be
measured on some scale or interval, like the salaries of teachers in
New York or the skid distance of snow tires.

> I suspect the statistics book you have
>cited conveys the same impression.

Actually it did not in any way. Triola avoids the issue of randomness
testing like the plague.

>The term "statistical significance"
>should be eliminated from the vocabulary; its meaning is limited 
>and not relevant to what one is doing.

I'll drink to that!

>This is correct.  What should be done instead is to construct tests
>which poor RNGs would fail with high probability.  This is not that
>hard to do, but forces large sample sizes.

Is there an echo in here? That is exactly what I said a couple weeks
ago. I smell a consensus building.

>A good approximation to
>a TRNG (notice that I did not say a TRNG)

I think you may have typoed since you DID say TRNG.

But I get your point. We are talking about classical devices, and they
can never meet the TRNG specification perfectly, any more than a round
object can meet the circularity specification perfectly.

> should pass it with large
>probability, and a poor approximation should fail it with large
>probability.  I do not think this possible for a direct use of a
>single generated sequence; the sample size needed would be well 
>beyond feasibility.

Yeah, something of the order of the size of the ensemble, perhaps, or
some reasonable fraction of it.

When you run simulations, how large is the sample in general terms
compared to some typical parameter of the process you are trying to
simulate? How do you know that you have an adequate simulation in
terms of sample size?

>>Testing of a TRNG is not a measurement in the traditional sense.
>>Randomness and non-randomness are nominal entities, not real measures
>>along an interval.

>It is, if one has a long enough sequence, or considers that one
>has a poor measurement.

I meant for finite sequences. An infinite sequence must be Borel
normal, which is a quantitative entity. (But see below for further
comments.)

>But we are not going to get sequences of
>length 10^30, or even 10^20.  Sequences of 20,000 are too short for
>any kind of consideration.

I have been saying that exact thing for some months now. I am pleased
that finally a True Expert is concurring with me.

>The "ensemble" is a construct in the absence of a belief in
>probability.

Isn't that a bit harsh? When I studied statistical mechanics long ago,
I recall using ensemble averages along with probability concepts, e.g.
the "grand canonical ensemble".

>It does not.  It involves a finite sequence.  The "population of
>sequences" is a purely mathematical construct.

Yes, to the extent that the ensemble is a purely mathematical concept.

>The full model is that there is a joint distribution of the
>observations.

To make absolutely certain that we all understand exactly what you
mean by the term "joint distribution", please give a concise
definition. I know it to mean one thing, namely P(A,B), the
probability of A and B occurring together (as distinct from the
conditional P(A|B), the probability of A given B). If that is what
you mean by it, why is it so important?

>This model is huge.

How huge is it?

>What one can test for is that
>the behavior of the physical process is close to what is wanted,
>ASSUMING that it changes slowly,

That is the so-called "adiabatic hypothesis", which is also used in
quantum mechanics for certain calculations.

>and that long-term dependencies can
>be ignored.  It is possible to come up with tests which will be very
>likely to accept a sequence which is random within an error of .001,
>and very likely to fail for a sequence which has an error of .01.
>If we then test something like eight sequences produced by different
>devices, or the same device at different times, and accept if at
>least 5 of them pass, and XOR the sequences to get one sequence, I
>would feel rather confident in using that sequence as having the
>desired probabilities of a TRNG within about one part in 10^10.

I wonder if you could use a quantum computer (assuming one
existed) programmed to calculate true random numbers as the standard,
and if so how would you go about the tests in general terms?
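
In general terms, the accept-and-XOR scheme Rubin describes above
might be sketched as follows; passes_test() stands for whatever
calibrated test is chosen and is assumed here, not specified, and the
function names are mine:

    #include <stdint.h>
    #include <stddef.h>

    /* Sketch of the scheme: test eight independently generated
       sequences, require at least five passes, and emit the XOR of
       all eight as the output stream. */
    extern int passes_test(const uint8_t *seq, size_t len);  /* assumed */

    int combine_eight(uint8_t *out, const uint8_t *seq[8], size_t len)
    {
        int passed = 0;

        for (int i = 0; i < 8; i++)
            passed += passes_test(seq[i], len);
        if (passed < 5)
            return 0;                 /* reject the whole batch */

        for (size_t j = 0; j < len; j++) {
            out[j] = 0;
            for (int i = 0; i < 8; i++)
                out[j] ^= seq[i][j];  /* XOR all eight sequences */
        }
        return 1;
    }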

>Notice that the tests would consider the probabilities of both
>kinds of errors, and use a defined measure of "good" and "bad".

I agree that those entities are nominal, if that is what you are
saying. Or maybe with this more elaborate methodology you are saying
that one CAN construct a scale for randomness.

>>Are you agreeing with me?

>No. 

Oh, well - I will take whatever agreement I can get when I can get it.

After all, I am only a lowly Informed Layman (tm), so I cannot expect
complete conformity with True Experts all the time.

> The law of large numbers is meaningful, but frequency 
>interpretations of probability are inadequate.

I have been saying that from the very beginning, ever since I read Li
& Vitanyi and they said that. Even Kolmogorov said that, and so did
Feller.

Folks, you are watching a consensus in formation.

Bob Knauer

"The contagious people of Washington have stood firm against
diversity during this long period of increment weather."
- Marion Barry, Mayor of  Washington DC


------------------------------

From: "Douglas A. Gwyn" <[EMAIL PROTECTED]>
Subject: Re: a question about time and DES cracking
Date: Wed, 07 Apr 1999 09:06:20 GMT

Pete wrote:
> i'm running john the ripper DES cracker on my home linux password file.
> i have to say that i'm extremely surprised to see how slowly the
> incremental cracker works.  it's been working for weeks, and i've only
> obtained about 10 cracked passwords.
> does anybody know any order of magnitude estimates on unix password hacking
> and time on a typical pentium II/300?   are we talking decades, years or
> months?

There is a substantial difference between reversing the (salted) DES
encryption, and hacking UNIX passwords.

The latter is usually done by *forward* encrypting a large
database of common passwords, usernames, etc., which covers
only a small subset of all possible inputs (a brute-force DES
cracker would have to search the entire input space).

If people pick good passwords, they're not going to be found by
that kind of password hacker, which typically runs in hours, at most.
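
As a rough sketch of what such a forward-encrypting cracker does,
assuming the classic 13-character DES-crypt hash format (the function
name and word-list handling are mine; crypt() is the standard UNIX
password hash, declared in <unistd.h> on most systems):

    #define _XOPEN_SOURCE
    #include <unistd.h>
    #include <string.h>
    #include <stddef.h>

    /* Hash each candidate word with the target's two-character salt
       and compare against the stored hash field. */
    const char *try_dictionary(const char *stored_hash,
                               char *const words[], size_t nwords)
    {
        char salt[3] = { stored_hash[0], stored_hash[1], '\0' };

        for (size_t i = 0; i < nwords; i++) {
            const char *h = crypt(words[i], salt);
            if (h != NULL && strcmp(h, stored_hash) == 0)
                return words[i];      /* password found */
        }
        return NULL;                  /* not in the dictionary */
    }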

If you have a DES cracker running on a single PII/300 that can
crack a DES encryption in only a day or two, that is remarkable.

------------------------------

From: [EMAIL PROTECTED] (Terry Ritter)
Subject: Re: Adequacy of FIPS-140
Date: Tue, 13 Apr 1999 05:53:09 GMT


On Mon, 12 Apr 1999 09:43:29 -0400, in <[EMAIL PROTECTED]>,
in sci.crypt "Trevor Jackson, III" <[EMAIL PROTECTED]> wrote:

>[...]
>I believe there is no serious dispute on the value of RNG diversity. 
>Since independent device streams are not expensive or difficult, any
>serious user should demand a composite stream.

You believe in error:  *I* see no need to build multiple "unknowable"
generators, provided the output is properly processed.  Physical RNG's
are not ciphers.  There is ample opportunity in a physically-random
design to discard vast amounts of entropy.  

In the conversations here there seems to be an assumption that our
goal is to build a "close to perfect" physical device, and then use
the unfiltered output directly.  That sounds like a recipe for
disaster to me.  It will be almost impossible to build a device that
good.  It is no wonder people are dubious: *I* am dubious.  

I believe a better way to approach this is to build a "satisfactory"
physical device (one with "some amount" of unknowable entropy), sample
it well, flatten the distribution, accumulate information, and throw
much of that away.  
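
One simple way to flatten a distribution and throw much of the raw
stream away, in the spirit Ritter describes, is classic von Neumann
debiasing. This is offered purely as an illustration of the general
idea, not as his specific design, and it assumes the raw bits are
independent, merely biased:

    #include <stdint.h>
    #include <stddef.h>

    /* Von Neumann debiasing: read raw bits in pairs, emit 0 for a 01
       pair and 1 for a 10 pair, and discard 00 and 11 pairs entirely.
       Independent-but-biased input comes out unbiased, at the cost of
       discarding most of the raw stream.  raw[i] holds one sample bit
       in its low-order bit. */
    size_t von_neumann(const uint8_t *raw, size_t nbits, uint8_t *out)
    {
        size_t n = 0;

        for (size_t i = 0; i + 1 < nbits; i += 2) {
            uint8_t a = raw[i] & 1, b = raw[i + 1] & 1;
            if (a != b)
                out[n++] = a;   /* 01 -> 0, 10 -> 1 */
        }
        return n;               /* number of output bits produced */
    }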

---
Terry Ritter   [EMAIL PROTECTED]   http://www.io.com/~ritter/
Crypto Glossary   http://www.io.com/~ritter/GLOSSARY.HTM


------------------------------

From: "Sam Simpson" <[EMAIL PROTECTED]>
Subject: Re: SCRAMDISK question
Date: Tue, 13 Apr 1999 08:37:18 +0100

=====BEGIN PGP SIGNED MESSAGE=====
Hash: SHA1

Yes.  We try to minimise the size of ScramDisk, so that it can be
easily carried on a floppy.


Regards,

- --
Sam Simpson
Comms Analyst
http://www.scramdisk.clara.net/ for ScramDisk hard-drive
encryption & Delphi Crypto Components.  PGP Keys available at the
same site.
If you're wondering why I don't reply to Sternlight, it's because
he's kill filed.  See http://www.openpgp.net/FUD for why!


Subscriber wrote in message
<[EMAIL PROTECTED]>...
>After downloading version 2.02h from the download
>page, I notice the zip file has two files within:
>
>1 - SdInstal.exe
>2 - sd.vxd
>
>When I first tried running the SdInstal program, it came
>up with some error screen about not finding the sd.vxd
>file. I followed the instructions about placing that file into
>my IOSUBSYS folder and rebooted, then I tried running
>the SdInstal program again. I was anticipating some
>type of install program to walk me through the installation,
>but instead, this file actually seems to be the program
>itself, not an installer. Is this correct and/or normal?
>
=====BEGIN PGP SIGNATURE=====
Version: 6.0.2ckt http://members.tripod.com/IRFaiad/

iQA/AwUBNxL0Ku0ty8FDP9tPEQKxgACcCJg3BbBUgvLKDBB4Xi2FroS4FsQAoLmm
VmhzPmK1A6TtaPkBIQo/pvMb
=xTyW
=====END PGP SIGNATURE=====




------------------------------

From: "Speedy" <[EMAIL PROTECTED]>
Subject: Re: PGP HowTo
Date: Tue, 13 Apr 1999 10:40:21 +0200


Ben wrote in message <[EMAIL PROTECTED]>...
>Hi is there any pgp-howto out there? I couldn't find in PGP's web site.
>
>I want to learn how to set key-block and about its security issue.
>regards


Try this: http://www.skuz.net/pgp4dummies/




------------------------------

From: David Kuestler <[EMAIL PROTECTED]>
Subject: Re: Enhanced ECB mode any thoughts ?
Date: Tue, 13 Apr 1999 20:22:12 +1000

[EMAIL PROTECTED] wrote:

> In article <[EMAIL PROTECTED]>,
>   David Kuestler <[EMAIL PROTECTED]> wrote:
> > [EMAIL PROTECTED] wrote:
> >
> > > In article <[EMAIL PROTECTED]>,
> > >   David Kuestler <[EMAIL PROTECTED]> wrote:
> > > > I am developing an enhancement to ECB mode using the block number (BN).
> > > >
> > > > The minimal :
> > > > E(BN^PT)
> > > > leaves a lot to be desired.
> > > >
> > > > Where as :
> > > > E(E(BN)^PT)
> > > > would double the processing.
> > > >
> > > > I have been experimenting with :
> > > > firstly create a 2kbyte lookup table (LT) of 251 real random long long
> > > > int ( 64 bits )
> > > > then :
> > > > E( ( ( LT[BN%251] ^ BN ) * 9223372036863164417ULL ) ^ PT )
> > > >
> > > > 9223372036863164417ULL is a prime with only three bits set, so the
> > > > multiplication could actually be reduced to two shifts and two adds.
> > > >
> > > > The idea behind the lookup table is to efficiently change most ( if not
> > > > all ) of the bits.
> > > > The rest is designed to avoid  repetitions.
> > > >
> > > > Any thoughts ?
> > > >
> > > >
> > >
> > >   Yes. I think it adds little to the encryption process. Since
> > > one of the "game rules" for breaking an encryption method is
> > > to assume the enemy has a copy of the program, your method adds
> > > very little to the encryption. Also if
> >
> > This is an enhancement to a mode not to a cypher itself.
> >
> > >
> > > two messages that are to be encrypted differ in only one block,
> > > then the encrypted output files would still only differ in one
> > >
> >
> > Yes this would still be the case !
> >
> > > block. This is not what one wants unless you wish to make the
> > > NSA's job of breaking your code very easy.
> > >
> >
> > It would rely on the strength of the cypher and this is not intended to
> > enhance a cypher, it is intended to enhance a mode.
> >
> > >  If you want an enhanced chaining method take a look at
> > > SCOTT19U.ZIP and look at how "wrapped PCBC" works.
> > >
> >
> > The advantage of ECB mode is that it allows access to blocks at random, as
> > in a database or disk partition.
> > The disadvantage is that repeated PT is obvious and block substitution is a
> > problem.
> > My intention is to enhance ECB mode to cover these weaknesses and be used in
> > random access situations.
> >
>
>    Then what you called the first choice is all you need. No reason
> to make it complicated. That is the job of the encryption program.
> But it would be nice to use a block size that matches some multiple
> of the disk block size you're using.
>

I assume you mean E(BN^PT) as the first choice. Though this covers repeated
blocks, it makes sequences obvious, e.g. (in binary):

   PT       BN
1000 ^ 0000 = 1000
1001 ^ 0001 = 1000
1010 ^ 0010 = 1000

The general idea behind my algorithm is to drastically reduce the possibility
of sequences and other patterns, while adding as little processing overhead as
possible.
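
For concreteness, here is roughly what that whitening step looks like
in C. E() stands for the underlying 64-bit block cipher and the table
fill is left abstract; the function names are mine, and none of this
is a vetted design:

    #include <stdint.h>

    #define WHITEN_PRIME 9223372036863164417ULL  /* 2^63 + 2^23 + 1 */

    extern uint64_t LT[251];            /* 2 kbyte table of true random words */
    extern uint64_t E(uint64_t block);  /* underlying block cipher, assumed */

    /* Mix the block number with a table entry, then multiply by the
       sparse prime; with only three bits set, the multiply reduces to
       two shifts and two adds (mod 2^64). */
    static uint64_t whiten(uint64_t bn)
    {
        uint64_t x = LT[bn % 251] ^ bn;
        return (x << 63) + (x << 23) + x;   /* x * WHITEN_PRIME mod 2^64 */
    }

    uint64_t encrypt_block(uint64_t pt, uint64_t bn)
    {
        return E(whiten(bn) ^ pt);
    }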

I actually reposted this thread under the subject 'What is the best whitening
algorithm ?' in an effort to explain myself better.

In the reposting I also made another enhancement where I encrypted the lookup
table once before using it. This would prevent anyone pre-calculating the
whitening block in a known/chosen PT attack.

As was pointed out in a direct email, my algorithm is still vulnerable to
position dependent block replay, but it appears immune to block substitution
(copying/moving/swapping), insertion and deletion.


> > > Thanks For Asking
> > > David Scott
> > >
> > > http://cryptography.org/cgi-bin/crypto.cgi/Misc/scott19u.zip
> > > http://members.xoom.com/ecil/index.htm
> > >
> > > -----------== Posted via Deja News, The Discussion Network ==----------
> > > http://www.dejanews.com/       Search, Read, Discuss, or Start Your Own
> >
> > Thanks for your response.
> >
> >
>
> http://cryptography.org/cgi-bin/crypto.cgi/Misc/scott19u.zip
> http://members.xoom.com/ecil/index.htm
>
> -----------== Posted via Deja News, The Discussion Network ==----------
> http://www.dejanews.com/       Search, Read, Discuss, or Start Your Own


------------------------------

From: [EMAIL PROTECTED] (R. Knauer)
Subject: Re: Adequacy of FIPS-140
Date: Tue, 13 Apr 1999 10:50:24 GMT
Reply-To: [EMAIL PROTECTED]

On Tue, 13 Apr 1999 05:53:09 GMT, [EMAIL PROTECTED] (Terry Ritter) wrote:

>I believe a better way to approach this is to build a "satisfactory"
>physical device (one with "some amount" of unknowable entropy), sample
>it well, flatten the distribution, accumulate information, and throw
>much of that away.  

How can you be sure that in carrying out your program above you are
not changing the keystream into something that can be analyzed by a
cryptanalyst?

In the book "Stream Ciphers and Number Theory" I do not recall the
authors mentioning any such program. I realize that means little, but
it does give one pause. Here are 3 mathematicians struggling to come
up with unbreakable PRNGs, like such esoterica as cyclotomic
generators, and according to your program all they had to do was hash
the livin' bejeezuz out of ROT13.

If your methodology were satisfactory, it would be a relatively simple
matter to generate keystreams from almost any source as long as there
was some randomness available. All that would be necessary is to
"distill" that randomness by massive applications of your program.

There seems to be something not quite right here, although I am not
trained to discover it. Perhaps you could give us the necessary
justification to believe that what you propose actually delivers what
is promised.

I hope you are right, because then one could use Internet-based text
streams for keys. You index the text source based on some widely known
pointer like the last digits of world market closes, and hash that
stream to get a provably secure key.

Bob Knauer

"The contagious people of Washington have stood firm against
diversity during this long period of increment weather."
- Marion Barry, Mayor of  Washington DC


------------------------------

From: [EMAIL PROTECTED] (R. Knauer)
Subject: Re: True Randomness & The Law Of Large Numbers
Date: Tue, 13 Apr 1999 11:09:07 GMT
Reply-To: [EMAIL PROTECTED]

On Tue, 13 Apr 1999 03:06:47 GMT, Dave Knapp <[EMAIL PROTECTED]> wrote:

>> Are the rest of you taking notes - or are you going to accuse me again
>> of having fabricated the consensus once it emerges?

>We'll accuse you of fabricating it,

I would expect no less from someone like you.

>since your position (that
>statistical tests are of no use in determining whether a RNG is "good
>enough") is very different from Herman's (that the specific FIPS tests
>are not enough for him).

You did not listen to what he said. And I never said that ALL
numerical tests are inadequate. In fact, if you had paid attention to
what I have said in the past, you would know that I believe an
adequately large sample and proper kinds of tests are the ticket to
successful determination of non-randomness.

I mentioned earlier on more than one occasion that as a first
estimate, the sample size would have to be of the order of the
ensemble size. That was just a safe estimate, not intended to be a
firm position. It was intended to emphasize that infinitesimally small
samples compared to the ensemble size are not adequate.

But Herman Rubin has recast this whole matter into a new light, one of
the experimental mathematician. That offers hope in coming to grips
with this problem of determination of randomness or non-randomness in
real world terms. It is exactly the way an experimental physicist
would resolve a dispute among theorists - take the problem into the
lab.

>That doesn't smell anything at all like a consensus.  Herman is still
>firmly in the camp that believes that statistical tests are indeed
>valuable in testing RNG's, while (unless you've admitted you were
>completely wrong and I missed it) you are not.

You simply did not listen to what he had to say, and now you are
trying to weasel out from under that.

He and I have both said, each in our own way, that standard small
sample tests are not adequate for determining the randomness or
non-randomness of candidate RNGs. There are serious questions about
such simple tests as those for 1-bit bias (e.g. the Monobit Test) and
there are serious questions about using such a small sample size as
20,000 bits for such tests. All those tests do is give you the
*appearance* of being random or non-random.

Quit trying to obfuscate the issues here. No one has ever said that
numerical testing is inadequate - just that simplistic, small sample,
standard statistical tests based on normal distribution properties,
such as 1-bit bias, are inadequate.

Since no one would stick their neck out and propose a specific test, I
took the first one in a widely acknowledged reference article, the
FIPS-140 Monobit Test. If you have one to propose, then by all means
propose it and defend it as an adequate test of randomness or
non-randomness. The FIPS-140 Long Run Test is still up for grabs.
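
For reference, the FIPS-140-1 Monobit Test in question is simple to
state in code: take a 20,000-bit sample, count the ones, and pass if
the count lies strictly between 9,654 and 10,346. A minimal sketch
(the function name is mine):

    #include <stdint.h>
    #include <stddef.h>

    #define FIPS_BITS  20000
    #define FIPS_BYTES (FIPS_BITS / 8)

    /* FIPS-140-1 Monobit Test: pass iff 9654 < ones < 10346 over a
       single 20,000-bit sample. */
    int fips_monobit(const uint8_t sample[FIPS_BYTES])
    {
        int ones = 0;

        for (size_t i = 0; i < FIPS_BYTES; i++)
            for (int b = 0; b < 8; b++)
                ones += (sample[i] >> b) & 1;
        return ones > 9654 && ones < 10346;   /* 1 = pass, 0 = fail */
    }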

Until you are willing to put your neck on the line, you are just
blustering in the background.

Bob Knauer

"The contagious people of Washington have stood firm against
diversity during this long period of increment weather."
- Marion Barry, Mayor of  Washington DC


------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list (and sci.crypt) via:

    Internet: [EMAIL PROTECTED]

End of Cryptography-Digest Digest
******************************
