Cryptography-Digest Digest #306, Volume #9       Wed, 31 Mar 99 00:13:05 EST

Contents:
  Re: freeware implementation of one-time pad? (Wayne D. Hoxsie Jr.)
  Re: True Randomness & The Law Of Large Numbers (Bryan G. Olson; CMSC (G))
  Re: Norton diskreet ([EMAIL PROTECTED])
  Re: client authentication (Bryan G. Olson; CMSC (G))
  Re: Does anyone know how to solve vigenere and tranposition cipher? ("Douglas A. 
Gwyn")
  Re: Live from the Second AES Conference ("Douglas A. Gwyn")
  Re: True Randomness & The Law Of Large Numbers ("Trevor Jackson, III")
  Re: Wanted: free small DOS Encryption program ("Trevor Jackson, III")
  Re: freeware implementation of one-time pad? ("Douglas A. Gwyn")
  Re: ---- Two very easy secret key Cryptosystems (kctang8)
  Re: True Randomness & The Law Of Large Numbers ("Douglas A. Gwyn")
  Re: What is fast enough? (Bruce Schneier)
  Re: True Randomness & The Law Of Large Numbers ("Trevor Jackson, III")
  Re: Random Walk ("Douglas A. Gwyn")
  Re: My Book "The Unknowable" ("Trevor Jackson, III")

----------------------------------------------------------------------------

From: [EMAIL PROTECTED] (Wayne D. Hoxsie Jr.)
Subject: Re: freeware implementation of one-time pad?
Date: 31 Mar 1999 00:26:25 GMT
Reply-To: [EMAIL PROTECTED]

=====BEGIN PGP SIGNED MESSAGE=====

In article <7drcb0$6v1$[EMAIL PROTECTED]>, Charles Blair wrote:
>   It should be easy to write something in which the user gives the
>plaintext and the pad, and the program creates the cyphertext, with
>the user bearing responsibility for the pad being random and used
>only once.  Has anyone made available a ``standard'' implementation?

The de facto standard is exclusive-or (xor).

=====BEGIN PGP SIGNATURE=====
Version: PGPfreeware 5.0i for non-commercial use
Charset: cp850

iQEVAwUBNwFdDhvLRvkTi87hAQGbcgf+IfP9IEJmRfwsq+SHZENocX3PbPpLtSlH
3ZERRzFrDD28Ok7cbKqgGJeP44wnCLuI7Cd74YWe1Y8P3YrJKRWm7AB2QdoOgZI4
Z/+bCWQYhQc3i7zamU0miTWBEPHZpcqGLyc/IHiNh9GFwfQkFMGqyaN7uN2BAx+L
4pykwxe9oZ+OkVj6/eFvTh3eaNxggcw7++LP3+zHPMSEI1TUr97IwO798v7R4nn2
qfzDxoi/GvujW+TI7q6FzExa60cM98JuOoHOYqglXfrD4K4n3BHe079Y2sETvMlL
8YOePuwNXLBOFCbKA9HsnPxROSA+GF4Hh/ZZjhdZX2n5yvuKOBikjw==
=lIvL
=====END PGP SIGNATURE=====

-- 
Wayne D. Hoxsie Jr. KG9ME    | Small wheel turn by the fire and rod,
[EMAIL PROTECTED]        | Big wheel turn by the grace of God,
http://www.hoxnet.com        | Every time that wheel turn 'round,
PGP Key ID 138BCEE1          | Bound to cover just a little more ground.

------------------------------

From: [EMAIL PROTECTED] (Bryan G. Olson; CMSC (G))
Subject: Re: True Randomness & The Law Of Large Numbers
Date: 31 Mar 1999 02:16:01 GMT


R. Knauer ([EMAIL PROTECTED]) wrote:

: Calculate the frequency of particles that are within +- 5% of the mean
: in n steps for the uniform Bernoulli process. The mean for the random
: walk is zero, namely the origin. And the extremes are at +- n.

: Compare that frequency with that for all the rest of the sequences.
: For reasonably large n, the frequency of sequences that are within +-
: 5% of the mean (the origin) is small. Most of the sequences are
: outside +- 5%. The reason is that the Gaussian profile has flattened
: out considerably. Appeal to the physical intuition of diffusion. Most
: of the ink particles have diffused to a location outside +- 5% of the
: origin. 

Not so.  As n grows large, a greater and greater fraction of
the ink molecules will be within 0.05*n of where they started.

: For example, let's say that we watch the ink particles diffuse 1
: million steps. The extremes are at +- 1 million. Therefore the inner
: range we are considering (+- 5%) is at +- 10,000 from the origin,
: which is not all that small.

Have you done that experiment?  You could try simulating it
with a PRNG.  Generate bits, and send a particle left for
each zero and right for each 1.  After a million bits, what
do you think the chance is that the particle will be farther
than 10,000 units from the starting point?

: Even a particle at the boundary of that narrow range has a 1-bit bias
: of 10,000 bits, which is not trivial. And only a small fraction have
: that property - all the rest have a 1-bit bias in excess of 10,000
: bits.

Only a small fraction?  I expect the vast majority to be within
10000 of the center.

You seem to be confusing probability and frequency.  The bias
is in the probability of generating a 1 versus a 0 bit.  The number
of ones and zeros actually generated is the frequency.

: IOW, the vast majority of sequences have a very large bias. It is
: little consolation that the percentage bias is small. It is still very
: large in each individual case.

So we shouldn't reject a candidate RNG for generating a sequence
with a different number of ones and zeros.  Fine, nobody does.

--Bryan


------------------------------

From: [EMAIL PROTECTED]
Subject: Re: Norton diskreet
Date: 31 Mar 1999 02:23:19 GMT

Well, what algorithm does Norton Diskreet use? If it's something like
XOR I can do this for you (and I'll do it for free ...), if it's, like,
DES, then, well ... you're screwed. 

>
>I have a file encrypted with Norton Diskreet and would like to decrypt
>it but I (of course) forgot the password. Is there a program that can
>decrypt the file or is there someone that would decrypt it for me. I
>can't afford to pay the price AccessData charges but I am willing to pay
>a small fee (if theres no other way) if someone decrypts this file for
>me.
>Please e-mail me if you can help me.
>
>Thanks in advance!
>
>Borut

------------------------------

From: [EMAIL PROTECTED] (Bryan G. Olson; CMSC (G))
Subject: Re: client authentication
Date: 31 Mar 1999 03:26:07 GMT

denis bider ([EMAIL PROTECTED]) wrote:
: In article <7dl35n$ifh$[EMAIL PROTECTED]>, [EMAIL PROTECTED] says...
: > : 1. The client sends the server its name, C, and its certificate, A<<C>>:
: > : 2. The server checks the validity of the client's certificate, then 
: > : 3. The client generates some random data, rC. The client then computes a 
: > : 4. The server computes the hash of rS and rC, decrypts the hash it 

: > I'd suggest the server should encrypt a symmetric key using the
: > client's public key, and require the client to MAC subsequent
: > messages with that key.  The problem with the current protocol
: > is that the challenge/response is only bound to the meaningful 
: > messages by the fact that they go over the same channel.

: So if I get you right then what you're saying is that the individual 
: messages aren't coupled together well enough?

: Hm... but there are only three messages exchanged:
:      Cli-->Srv:  C, A<<C>>
:         <--      rS
:         -->      rC, Cs[h(rS, rC)]

Those messages are cryptographic overhead.  What about the
messages that motivated the client and server to set up
this connection?  What the MAC does is ensure that meaningful
messages that follow the challenge come from the party
that holds the client's private key.

: I don't see how any of these messages could later be maliciously reused. 
: How would coupling the messages with MACs improve the security of the 
: protocol?

Recall that my objection was that a party, say Eve, who could
spoof the server, which admittedly isn't easy, could also spoof
the client.  Here's how:

    Eve spoofs the server, and when the client connects, Eve
    connects to the server claiming to be the client.

    The client sends C, A<<C>>, which Eve passes to the server,
    the server sends rS which Eve passes to the client, and the
    client sends rC, Cs[h(rS, rC)] which Eve passes to the
    server.

    Eve drops the connection to the client and sends whatever
    messages she wants to the server, and the server thinks
    they come from the client.

The problem with the protocol is that the meaningful messages
are bound to the challenge only by going over the same channel.
The MAC binds them to the client's ability to decrypt with the
private key.  Eve can keep playing the man-in-the-middle game,
but it won't be any fun for her, since the only messages she
can get the server to accept are those that actually did come
from the client.

--Bryan


------------------------------

From: "Douglas A. Gwyn" <[EMAIL PROTECTED]>
Subject: Re: Does anyone know how to solve vigenere and tranposition cipher?
Date: Wed, 31 Mar 1999 02:40:52 GMT

John Savard wrote:
> Well, for Vigenere or Beaufort, there are two things you can do:
> look for repeated strings in the text, and find the displacement
> between them.

This is known as a "Kasiski analysis", after the cryptographer who
first described it.

> Failing that, slide the message against itself, finding when the
> number of individual letter matches is higher.

This is a simple application of the principle of Coincidence,
first described by Friedman.

> For transposition, the methods are more complicated.

Usually, one proceeds via multiple anagramming.  There are some
tricks not widely known, such as the "hat diagram", that can
facilitate the process.

All these are described in the MilCryp series, which is
essential reading for anyone cryptanalyzing classical systems
(and I would maintain, many of its lessons are valuable even
if one is attacking modern systems).

------------------------------

From: "Douglas A. Gwyn" <[EMAIL PROTECTED]>
Subject: Re: Live from the Second AES Conference
Date: Wed, 31 Mar 1999 02:45:07 GMT

Robert Harley wrote:
> Is the AES process supposed to choose a very fast algorithm that is
> somewhat secure or a very secure algorithm that is somewhat fast?
> I sincerely hope it is the latter but if the discussions being
> reported are anything to go by, it looks like the process is off
> track.

I think we're seeing the standard academic problem, attention being
focussed on what we know how to measure rather than on something
more relevant but much harder to measure.

------------------------------

Date: Tue, 30 Mar 1999 22:58:01 -0500
From: "Trevor Jackson, III" <[EMAIL PROTECTED]>
Subject: Re: True Randomness & The Law Of Large Numbers

Dave Knapp wrote:
> 
> "R. Knauer" wrote:
> >
> > On Tue, 30 Mar 1999 09:16:45 GMT, Dave Knapp <[EMAIL PROTECTED]> wrote:
> >
> > >Your great argument is based on _this_?  That correlated measurements
> > >don't give the same answer as uncorrelated measurements?
> > >So you are claiming that true random numbers are correlated how?
> >
> > You have just disclosed for all to see that you obviously do not have
> > a clue as to what we are discussing here. How on earth you can claim
> > that a uniform Bernoulli process is "correlated" is beyond me.
> >
> > So that others will not possibly get confused by your deliberate
> > obfuscation (although I certainly do not know how anyone could be that
> > stupid), I point out that the random walk model is a way of depicting
> > bias in actual sequences. The distance a particle is from the origin
> > is a direct measure of the net excess of steps in one direction. The
> > fact that many particles do migrate away from the origin has
> > absolutely nothing to do with any "correlation".
> 
> The distance from the origin is indeed a correlated property,
> especially, as you claim, that one must look at it as a time series.
> 
> Please, please, please go read a first-year book on statistics, OK?  And
> if you've already tried that, go re-read it and try to _understand_ it.
> 
> Your great insight seems to be this:
> 
>    If you follow a _particular particle_ in time, it tends to be, on
> average, further from the origin than an ensemble of particles created
> using the distribution of distances from the origin at a particular
> time.

Actually, his original model where the distribution of particles is
symmetrical about the injection point produces an ensemble average that
is expected to match the injection point.  This is a structural
artifact.

A more interesting distribution starts with the injection point at one
end of the channel.  In this case the extrema are the (few) particles
that make it all the way and the (many) particles that are at the origin
at the completion of the diffusion.

> 
> And the above is indeed true.  But what you don't seem to understand is
> this:
> 
>    The _cause_ of this behavior is correlation: the position of a
> _given_ particle as a function of time is a highly correlated value.
> Let the position of the particle at time i be Xi and the random variable
> for its motion be theta.  Then Xi+1 = Xi + theta.  Notice the "Xi" in
> that equation: it says that Xi+1 depends on Xi.  That is the
> _definition_ of correlation.
> 
>    So the statistical properties you are trumpeting as proof that
> statistics is useless in characterizing random number generators is
> based upon a correlation effect.  If you _really_ cannot understand
> this, then I guess I have no choice but to give up, as you would in that
> case be inhabiting some universe different from the one in which we
> live, but I believe you can understand it.
> 
>    The correlation effect to which I refer is also the cause of the
> coin-tossing example you gave earlier.
> 
>    Therefore, I ask again: what correlation in random numbers do you
> claim causes them to be resistant to statistical analysis?  Since I
> claim that truly random numbers exhibit no correlation, I claim your
> entire argument is specious.
> 
>   -- Dave

------------------------------

Date: Tue, 30 Mar 1999 23:03:01 -0500
From: "Trevor Jackson, III" <[EMAIL PROTECTED]>
Crossposted-To: comp.security.misc
Subject: Re: Wanted: free small DOS Encryption program

This definition sets an interesting criterion for *intensely*
personal encryption technology.  Perhaps we could expand this wish
list to the point of conducting a Personal Encryption Standard (PES)
contest analogous to the AES conducted by NIST.


Milton Pomeroy wrote:
> 
> Wanted - a free DOS-based encryption program which is small, fast,
>          strong and friendly
> 
> Explanation
> 
> I want recommendations of encryption software to store small amounts of
> sensitive information (up to 30kbytes) for my own use - i.e. I encrypt it,
> and I decrypt it.  Since I plan to carry the encrypted datafile and
> encryption software on floppy disk and use it on various PCs (some of which
> may not be owned by me), I plan to use it from DOS (don't want to load it on
> PC, don't want any temporary decrypted data left on the PC's hard-disk).  The
> PCs will be running DOS, Win95/8, or WinNT.  Typically, I'd run it from the
> floppy in a DOS-Window.
> 
> The mandatory requirements therefore are:
> 
>   (1) runs from DOS (and DOS-prompt in a DOS-Window)
> 
>   (2) freeware/public domain
> 
>   (3) be accessible to someone (like me in Australia) who is outside USA
>         (no export restrictions)
> 
>   (4) works in 450kBytes or less of RAM
> 
>   (5) already compiled i.e. an EXE or COM version is available
>        (I don't want the uncertainty of my doing a poor compilation
>        resulting in a poor security implementation)
> 
>   (6) EXE or COM file must be small - less than say 80kbytes
>        (if it's large, like PGP for DOS at around 400kbytes, it takes
>        over 10sec to load from floppy-drive)
> 
>   (7) fast execution - less than say 5 seconds to load from floppy
>       and complete encryption/decryption of up to 30kBytes of data
>       on a 486-66
> 
>   (8) can run from a floppy with any temporary files being stored on the
>       floppy
> 
>   (9) strong (at least 80-bit-key strength)
> 
>  (10) user-ready incl enough documentation to be used directly without doing
>         programming, compilation etc
> 
>  (11) algorithm has some pedigree - i.e. has widespread degree of respect
>         in the crypto community
> 
>  (12) implementation (inc. compilation) should have some pedigree - i.e. has
>            widespread degree of respect
> 
> -----------== Posted via Deja News, The Discussion Network ==----------
> http://www.dejanews.com/       Search, Read, Discuss, or Start Your Own

------------------------------

From: "Douglas A. Gwyn" <[EMAIL PROTECTED]>
Subject: Re: freeware implementation of one-time pad?
Date: Wed, 31 Mar 1999 03:58:38 GMT

"Wayne D. Hoxsie Jr." wrote:
> In article <7drcb0$6v1$[EMAIL PROTECTED]>, Charles Blair wrote:
> >   It should be easy to write something in which the user gives the
> >plaintext and the pad, and the program creates the cyphertext, with
> >the user bearing responsibility for the pad being random and used
> >only once.  Has anyone made available a ``standard'' implementation?
> The de facto standard is exclusive-or (xor).

Try this (written in-line here; hasn't been tested):

/* otp.c -- implements XOR of two byte-stream files into another */
#include <stdio.h>
#include <stdlib.h>
int main(int argc, char *argv[]) {
        register int p, k;
        register FILE *pf, *kf, *cf;

        if (argc != 4) {
                fprintf(stderr,
                        "Usage: otp plainfile keyfile cipherfile\n"
                        "   or: otp cipherfile keyfile plainfile\n");
                return EXIT_FAILURE;
        }
        if ((pf = fopen(argv[1], "rb")) == NULL) {
                fprintf(stderr, "otp: can't open input file \"%s\"\n",
                        argv[1]);
                return EXIT_FAILURE;
        }
        if ((kf = fopen(argv[2], "rb")) == NULL) {
                fprintf(stderr, "otp: can't open key file \"%s\"\n",
                        argv[2]);
                return EXIT_FAILURE;
        }
        if ((cf = fopen(argv[3], "wb")) == NULL) {
                fprintf(stderr, "otp: can't open output file \"%s\"\n",
                        argv[3]);
                return EXIT_FAILURE;
        }
        while ((p = getc(pf)) != EOF)
                if ((k = getc(kf)) == EOF) {
                        if (ferror(kf))
                                fprintf(stderr, "otp: error reading"
                                        " key file \"%s\"\n",
                                        argv[2]);
                        else
                                fprintf(stderr, "otp: key file \"%s\""
                                        " has insufficient data\n",
                                        argv[2]);
                        return EXIT_FAILURE;
                } else if (putc(p ^ k, cf) == EOF) {
                        fprintf(stderr, "otp: error writing"
                                        " output file \"%s\"\n",
                                        argv[3]);
                        return EXIT_FAILURE;
                }
        if (ferror(pf)) {
                fprintf(stderr, "otp: error reading input file \"%s\"\n",
                        argv[1]);
                return EXIT_FAILURE;
        } else {
                fprintf(stderr,
                        "otp: the key file \"%s\" must not be reused\n",
                        argv[2]);
                return EXIT_SUCCESS;
        }
}

------------------------------

From: kctang8 <[EMAIL PROTECTED]>
Crossposted-To: sci.math.symbolic
Subject: Re: ---- Two very easy secret key Cryptosystems
Date: Wed, 31 Mar 1999 10:45:52 +0800

Fiji wrote:

> > kctang8 wrote:
> >>a,b,c and e denote positive integers.
> >>
> >> Please crack the following systems:
> >>
> >>(Q1)
> >>plaintext: [a,b,c]
> >>encryption: choose a secret number e,
> >>            cyphertext = [A,B,C] = [e*a, e*b, e*c]
> >>decryption: the partner know e,
> >>             get plaintext [a,b,c]= [A/e, B/e, C/e]
> >>

> Well if this were english text, it would be nothing but a substitution
> cipher which would allow one to do frequency analysis. This is not unlike
> the affine cipher.

But the point of system (Q1) is that _A_ secret number is used for
_EACH_ group of three components.  For more than 3 components, such
as 9 components, 3 _different_ secret numbers will be used.

Yours,   kctang8




------------------------------

From: "Douglas A. Gwyn" <[EMAIL PROTECTED]>
Subject: Re: True Randomness & The Law Of Large Numbers
Date: Wed, 31 Mar 1999 03:09:52 GMT

"R. Knauer" wrote:
> The underlying process for the random walk is the UBP, and it is
> completely uncorrelated. Each step is completely independent of all
> other steps, including the one preceding it.

Dave's not saying that the delta (step) is correlated with the
previous delta, but rather that the net position (sum of all
previous deltas) is correlated with the previous position.

------------------------------

From: [EMAIL PROTECTED] (Bruce Schneier)
Subject: Re: What is fast enough?
Date: Wed, 31 Mar 1999 03:13:50 GMT

On Tue, 30 Mar 1999 01:13:49 GMT, [EMAIL PROTECTED] wrote:

>Jack Lloyd and I are currently working on a cipher together.  I was just
>wondering (from the communities point of view) what is acceptable speeds?
>
>Right now, in the unoptimized C code, on a 233mhz Cyrix MII, I get between
>1.4MB/sec and 2.9MB/sec (32 rounds and 16 rounds respectively).
>
>Isn't anything above 1MB/sec considered fast enough? I mean my hd controller
>only works at 4.5MB/sec anyways!

First off, don't calculate speed in MB/sec, count it in clock cycles
per byte encrypted.  (It's a more general measure.)  The fastest AES
candidates encrypt at 15 clock cycles per byte on a Pentium-class
computer.  Stream ciphers are faster--RC4 encrypts at about 9 clock
cycles per byte.

I have applications where I would like a stream cipher that encrypts
at 2-3 clock cycles per byte, and a block cipher that encrypts at
about 10 clock cycles per byte, with no time for key setup.  These are
software numbers.  I have other requirements--speed, latency, and
number of gates--for hardware.

Bruce
**********************************************************************
Bruce Schneier, President, Counterpane Systems     Phone: 612-823-1098
101 E Minnehaha Parkway, Minneapolis, MN  55419      Fax: 612-823-1590
           Free crypto newsletter.  See:  http://www.counterpane.com

------------------------------

Date: Tue, 30 Mar 1999 22:28:50 -0500
From: "Trevor Jackson, III" <[EMAIL PROTECTED]>
Subject: Re: True Randomness & The Law Of Large Numbers

R. Knauer wrote:
> 
> On Tue, 30 Mar 1999 09:16:45 GMT, Dave Knapp <[EMAIL PROTECTED]> wrote:
> 
> >Your great argument is based on _this_?  That correlated measurements
> >don't give the same answer as uncorrelated measurements?
> >So you are claiming that true random numbers are correlated how?
> 
> You have just disclosed for all to see that you obviously do not have
> a clue as to what we are discussing here. How on earth you can claim
> that a uniform Bernoulli process is "correlated" is beyond me.
> 
> So that others will not possibly get confused by your deliberate
> obfuscation (although I certainly do not know how anyone could be that
> stupid), I point out that the random walk model is a way of depicting
> bias in actual sequences. The distance a particle is from the origin
> is a direct measure of the net excess of steps in one direction. The
> fact that many particles do migrate away from the origin has
> absolutely nothing to do with any "correlation".
> 
> Naive intuition attempts to confuse the time average of one sequence
> with the ensemble average of the entire collection of sequences. It is
> the latter that the law of large numbers addresses, not the former.
> Any given sequence can (and for most sequences does) have terribly
> "abnormal" statistical properties, yet the entire collection taken as
> an ensemble has perfectly normal statistical properties, thanks to the
> law of large numbers.
> 
> The problem then comes down to what size sample must one test
> statistically to get a reasonably correct characterization of the
> properties of a TRNG? If I am interested in the randomness of 10,000
> bit sequences, how many such 10,000 bit sequences must I test in
> aggregate before I can have some reasonable assurance that the TRNG is
> not malfunctioning?
> 
> Since there are 2^10000 possible sequences in the phase space of this
> generator (a number in base 10 so large my TI calculator can't compute
> it), what fraction of those 2^10000 sequences must I test
> statistically? Will 1% suffice? If so, then how can anyone reasonably
> expect me to test 2^10000/10^2 sequences, which is still an impossibly
> large number?

The simplest answer is to gather a sample as large as the expected usage
for analysis.  If the analysis is successful, the generator is adequate
and may be used.  But note that, since you've already gathered the
sample there's no reason to gather more data; simply use what you have.

This approach makes the distinction between testing the output and
testing the generation process extremely clear.

> 
> Put another way, how can anyone expect that testing even 1,000,000
> sequences of length 10,000 bits ever hope to characterize the TRNG
> correctly, even to some level of approximation. 1,000,000 sequences is
> such as small fraction of the 2^10000 sequences in the ensemble that
> such few sequences cannot ever possibly do a good job of representing
> the ensemble reasonably well.

This is exactly the right question.

If you take this issue to a professional statistician, along with the
desired confidence, you'll get a number that describes the size of the
sample set required to reach that degree of confidence.  If the degree
of confidence is to exclude all but one chance in 2^128 or 2^256 of a
"bad" result, and the population is 2^(10^4), there is a sample size
whose analysis will give us the desired confidence.

Any readers able to fill in this blank?


------------------------------

From: "Douglas A. Gwyn" <[EMAIL PROTECTED]>
Subject: Re: Random Walk
Date: Wed, 31 Mar 1999 03:32:07 GMT

"R. Knauer" wrote:
> I just looked up the term "point null hypothesis" in both volumes of
> William Feller's monumental work on probability, 3rd ed. (op. cit.). I
> cannot find it. Is there another term that is used to describe it? If
> not, please explain what you mean by that term.

Probability != Statistics.  As several have suggested, you should
study statistics if you want to work in this area.

The null hypothesis for a given statistical test is that which
the test attempts to falsify.  But a mere definition doesn't
convey the application nor importance of this concept, so look
this up in any reputable statistics textbook.

For example, if one wishes to test whether two distributions
are correlated, the null hypothesis would be that they are not
correlated.  The main thing about a suitable null hypothesis is
that it permits exact calculation of what is expected if the
null hypothesis were true, i.e. a simple model of behavior,
so we know what to test (namely, evidence against the model).

A "point" null hypothesis is one of the form, "parameter exactly
equals this specific value".  For example, "the average speed of
cars on I-95 is
exactly 57.000... mph".  As Herman noted, this is "untenable"
(unless one measures the *entire* population, but it is nearly
impossible to have formulated such a hypothesis that turns out
to be correct, unless the formulator already had measured the
entire population).  This is the basis for the rules you find for
choice of null vs. alternate hypotheses in introductory statistics
textbooks.

------------------------------

Date: Tue, 30 Mar 1999 23:49:41 -0500
From: "Trevor Jackson, III" <[EMAIL PROTECTED]>
Subject: Re: My Book "The Unknowable"

karl malbrain wrote:
> 
> In article <7dec5m$8gn$[EMAIL PROTECTED]>,
>   karl malbrain <[EMAIL PROTECTED]> wrote:
> > Under materialism, CHAOS is BROKEN DOWN inorder to LIQUIDATE RANDOMNESS (See
> > the <<equation>> above, the unusable part being the dividend).  It's the
> > REMAINDER that I'm after. This should read ABILITY to USE -- USEABILITY is
> > DETERMINED by MATTER'S admittance of INFORMATION -- see an electronics
> > definition of OPERATIONAL AMPLIFIERS.  Karl M
> 
> CORRECTION:  The QUOTIENT is LIQUIDATED as MYSTICISM.

This submission is a good example of a problem characteristic of most of
your posts.  It is clear that you're smoking some *really* great stuff,
but you aren't *sharing*!

------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list (and sci.crypt) via:

    Internet: [EMAIL PROTECTED]

End of Cryptography-Digest Digest
******************************
