Cryptography-Digest Digest #323, Volume #9 Thu, 1 Apr 99 23:13:03 EST
Contents:
Re: True Randomness & The Law Of Large Numbers (Dave Knapp)
Re: Random Walk (R. Knauer)
Re: True Randomness & The Law Of Large Numbers (R. Knauer)
Re: True Randomness & The Law Of Large Numbers (R. Knauer)
Re: North Korean A3 code (Eric Hildum)
Re: True Randomness & The Law Of Large Numbers (R. Knauer)
Re: True Randomness & The Law Of Large Numbers (R. Knauer)
Re: True Randomness & The Law Of Large Numbers ("Trevor Jackson, III")
Re: My Book "The Unknowable" (karl malbrain)
Re: Live from the Second AES Conference (rohatgi)
Re: Random Walk ("Douglas A. Gwyn")
Re: True Randomness & The Law Of Large Numbers (R. Knauer)
Re: True Randomness & The Law Of Large Numbers ("Douglas A. Gwyn")
----------------------------------------------------------------------------
From: Dave Knapp <[EMAIL PROTECTED]>
Subject: Re: True Randomness & The Law Of Large Numbers
Date: Fri, 02 Apr 1999 01:44:38 GMT
Herman Rubin wrote:
>
> In article <[EMAIL PROTECTED]>, Dave Knapp <[EMAIL PROTECTED]> wrote:
> >"R. Knauer" wrote:
>
> >> On Wed, 31 Mar 1999 01:49:21 GMT, Dave Knapp <[EMAIL PROTECTED]> wrote:
>
> >> >Let me make sure I get this straight: you are claiming that Sn and Sn+1
> >> >are uncorrelated?
>
> >> Define "correlated".
>
> >"Correlation," which is defined in any first-year book on statistics, is
> >the dependence of one value on another.
>
> This is incorrect. It is the signed extent of LINEAR dependence.
You're right. I was wrong.
> >A correlation of 0 means that the two values are independent.
>
> This is only true under certain strong assumptions about the joint
> distribution. One can have zero correlation and very high dependence.
>
> >Sn+1 depends, to some extent, on Sn; they are not completely
> >independent. Therefore they are correlated.
>
> Suppose that Sn is uniformly distributed from -1 to 1. Suppose
> the sign of Sn is changed to some random variable independent
> of Sn to form Sn+1. The correlation will be 0, but |Sn+1| = |Sn|.
An excellent point. Lack of correlation is a necessary but not sufficient
condition for independence.
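Rubin's construction is easy to check numerically. The sketch below (illustrative only; the variable names are mine) draws X uniform on [-1, 1], flips its sign with an independent fair coin to get Y, and confirms that the sample correlation is near zero even though |Y| = |X| holds exactly:

```python
import random

# Sketch of Rubin's example: Y = R * X, where X ~ Uniform(-1, 1) and R is an
# independent random sign. corr(X, Y) ~ 0, yet |Y| = |X| always, so X and Y
# are uncorrelated but strongly dependent.
random.seed(1)
n = 200_000
x = [random.uniform(-1.0, 1.0) for _ in range(n)]
y = [random.choice((-1.0, 1.0)) * xi for xi in x]

mx, my = sum(x) / n, sum(y) / n
cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
vx = sum((a - mx) ** 2 for a in x) / n
vy = sum((b - my) ** 2 for b in y) / n
corr = cov / (vx * vy) ** 0.5

print(abs(corr) < 0.05)                              # near-zero correlation
print(all(abs(a) == abs(b) for a, b in zip(x, y)))   # yet fully dependent
```

So a correlation estimate alone can never certify independence, which is exactly the distinction Rubin is drawing.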
Sorry about any misunderstanding; I was trying to explain correlation in
layman's terms, and apparently didn't do a very good job.
The thrust of my argument, however, is unaffected, no?
-- Dave
------------------------------
From: [EMAIL PROTECTED] (R. Knauer)
Subject: Re: Random Walk
Date: Fri, 02 Apr 1999 01:42:28 GMT
Reply-To: [EMAIL PROTECTED]
On Fri, 02 Apr 1999 00:17:27 GMT, "Douglas A. Gwyn" <[EMAIL PROTECTED]>
wrote:
>There have been researchers looking into such questions, however.
>There are actually some pretty good papers about entropy in QM
>available on the net.
I am aware of that. In fact I have cited some of them, and the books
that have compiled them. Cerf and Adami come to mind.
>A point that is generally agreed is that whatever is going on at
>the quantum superposition level, it is inconsistent with standard
>probability theory (a la Feller, for example), although it is
>usually described in probabilistic terms.
Can you explain what you believe is inconsistent with standard
probability theory?
Are you saying that true randomness cannot be modeled mathematically?
Hell, I have been arguing that for several months now.
BTW, I got your Triola book today, so in a few days I should be
"enabled". Hot Damn, eh!
I am going to focus my attention only on those issues that relate to a
determination of true randomness. I want to see the justification for
the presumption that statistics has found the holy grail of true
randomness.
Stay tuned.
>But this isn't the newsgroup for discussing physics.
It is a newsgroup that discusses true randomness, which is at the
heart of crypto. Whatever help we can get in that pursuit is welcome.
Bob Knauer
"The laws in this city are clearly racist. All laws are racist.
The law of gravity is racist."
- Marion Barry, Mayor of Washington DC
------------------------------
From: [EMAIL PROTECTED] (R. Knauer)
Subject: Re: True Randomness & The Law Of Large Numbers
Date: Fri, 02 Apr 1999 01:59:52 GMT
Reply-To: [EMAIL PROTECTED]
On Thu, 01 Apr 1999 16:03:46 -0600, Jim Felling
<[EMAIL PROTECTED]> wrote:
>remove biased from my above statement. A better phrasing of my assertion is that
>random walks ending more than 1% of N from 0 are so vanishingly rare that it is
>much more likely that what is being evaluated is not producing truly random
>output.
There are several examples in Feller that defy your "vanishingly rare"
hypothesis.
That is what I am basing my position on. In fact Feller's second
volume is a compendium of "fluctuations" from "normal behavior" for
true random processes, such as the UBP and its cousin, the uniform
random walk in one dimension.
Perhaps the UBP is not a proper model for a true random process. We
know that even the slightest deviation from p = 1/2 will cause the
Bernoulli process to go into the weeds. Las Vegas roulette creates
fortunes for casino owners and there the value of p is only 20/38,
which is a small deviation from 1/2.
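The drift from even that small deviation can be sketched numerically (an illustration under my own setup, not anything from Feller): for a Bernoulli walk that steps +1 with probability p and -1 otherwise, E[Sn] = n(2p - 1), which for the house's p = 20/38 is n/19.

```python
import random

# Drift of a biased Bernoulli walk with the roulette-style edge p = 20/38
# mentioned above. Each step is +1 with probability p, else -1, so the
# expected displacement after n steps is n * (2p - 1) = n / 19.
random.seed(7)
p = 20 / 38
n = 100_000
s = sum(1 if random.random() < p else -1 for _ in range(n))

expected = n * (2 * p - 1)          # about 5263 after 100,000 steps
sd = (4 * n * p * (1 - p)) ** 0.5   # about 316, tiny next to the drift
print(s, round(expected), round(sd))
```

The simulated endpoint lands within a few hundred of the expected drift, which is why a "small" bias reliably creates fortunes for casino owners over many trials.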
The good news is that if you can enhance the odds only a small amount
from "random" (p = 1/2), wonderful things can happen to you. Before I
break out into song about the ant and the rubber tree, let me
conjecture that even probability theory is flaky as all hell, like
non-linear chaos in classical physics.
How the world works in light of all these counter-intuitive notions is
testimony to the fact that the human brain is a quantum mechanical
device (cf. Roger Penrose). But that is another thread on another
forum (sci.philosophy.meta).
>> I would use it as a very strong diagnostic indication of a very likely
>> malfunctioning TRNG - but no more than that.
>Either I have a 1)defective TRNG that just fooled me on my examination or
>2) a working TRNG that generated statistically unlikely output.
The problem I am having is that I do not see the model you are using
to decide that such an output is "statistically unlikely". I maintain
that the model you are using only presents the *appearances* of true
randomness, and misses the real essence of true randomness as
exhibited in quantum physics.
>Given those are the only 2 possible hypotheses, Occam's razor would make me choose
>hypothesis 1 and I therefore would kick out.
I am not willing to accept that on the face of it. You need to defend
it in a more rigorous manner.
According to the exposition in Li & Vitanyi, the best you can hope for
by "PAC-learning" models is a probabilistic result. It takes something
different, something much more fundamental, to make a decision based
on "reasonable certainty".
Bob Knauer
"The laws in this city are clearly racist. All laws are racist.
The law of gravity is racist."
- Marion Barry, Mayor of Washington DC
------------------------------
From: [EMAIL PROTECTED] (R. Knauer)
Subject: Re: True Randomness & The Law Of Large Numbers
Date: Fri, 02 Apr 1999 02:14:03 GMT
Reply-To: [EMAIL PROTECTED]
On Thu, 01 Apr 1999 15:20:40 -0700, "Tony T. Warnock"
<[EMAIL PROTECTED]> wrote:
>One problem is that Bob Knauer is not using the "natural" measure of distance.
Hey, dude - I hate to tell you, but your old buddy, Bob Knauer, is
not doing anything other than quoting the Experts. It is Feller, not I,
who is doing the thing you claim is not being done.
I have admitted up front on so many occasions that I am not an Expert
that it has to be an incredible bore to most of the regulars on
sci.crypt. I am only a lowly Informed Layman (tm), attempting to stand
on the shoulders of giants.
I learned at an early age that the dumbest sophomore could easily ask
questions that the smartest professor could not answer. You gotta give
those 50% of the people with IQs under 100 their just due. :-)
Just strive not to be too condescending when you do it.
>The natural unit of distance is the standard deviation of the distribution. In
>the case being considered, this is Sqrt(N*P*(1-P)) where P is the probability of
>a step to the right and N is the number of steps; this yields a binomial
>distribution. The range of the distribution (the distance between the extremes)
>is proportional to N whereas the standard deviation is proportional to Sqrt(N).
>In a diffusion problem in one dimension, the number of particles remaining
>within a fixed number of units of the origin goes to zero, the number within a
>fixed percent of the origin (% of the range) goes to one, and the number within
>one standard deviation of the origin goes to about 68%. This is all in
>Feller.
Yes it certainly is. I oughta know since I have Feller's two volumes
right here. It looks as though I need to renew them instead of
returning them to the library.
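For what it's worth, the quoted figures check out in a quick simulation (a sketch with parameters of my own choosing): for an unbiased +/-1 walk of N steps, the endpoint's standard deviation is sqrt(N); about 68% of walks end within one SD of the origin, while essentially all end within 1% of N once N is large enough that 1% of N spans several SDs.

```python
import random

# Check the standard-deviation claims for an unbiased +/-1 walk.
# With N = 250,000 the endpoint SD is sqrt(N) = 500, and 1% of N = 2500
# is 5 SDs, so almost every walk ends within 1% of N.
random.seed(3)
N, trials = 250_000, 2_000
sd = N ** 0.5

# endpoint = (#heads - #tails) = 2 * popcount - N, drawn as N random bits
ends = [2 * bin(random.getrandbits(N)).count("1") - N for _ in range(trials)]
within_sd = sum(abs(e) <= sd for e in ends) / trials
within_1pct = sum(abs(e) <= 0.01 * N for e in ends) / trials

print(round(within_sd, 2))   # roughly 0.68
print(within_1pct)           # essentially 1.0
```

This is the point of measuring in SD units: a fixed fraction of N grows like N, while the spread grows only like sqrt(N).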
Now, please tell us what all this has to do with the fact, which I
maintain, that statistical tests are not valid in determining with
"reasonable certainty" that a TRNG is not truly random. What is the
most fundamental connection between statistics and true randomness?
Bob Knauer
"The laws in this city are clearly racist. All laws are racist.
The law of gravity is racist."
- Marion Barry, Mayor of Washington DC
------------------------------
Date: Fri, 02 Apr 1999 10:56:36 +0900
From: Eric Hildum <[EMAIL PROTECTED]>
Subject: Re: North Korean A3 code
That would explain why it is well known. Perhaps the transmission system used the
standard telegraphic encoding, while the message was enciphered using some other
system. Hard to judge from one newspaper report.... ;-)
Jim Dunnett wrote:
> On Wed, 31 Mar 1999 13:08:58 +0900, Eric Hildum
> <[EMAIL PROTECTED]> wrote:
>
> >In today's Japan Times, there was a discussion of a 1978 kidnapping of a
> >Japanese woman from Japan by North Korea [this is one of about a dozen suspected
> >cases over the last twenty-five years]. The article discussed a North Korean code
> >called "A3," described as a five digit number for each hangul (?) character.
> >Given the recent discussion on this newsgroup, it would seem to me that such a
> >code system would be relatively easy to break -- are there any references on the
> >internet to this system? I assume that as so much is known about this code that
> >it has in fact been broken....
>
> The Chinese also have a system which codes a subset of the
> ideograms of Mandarin into 5-figure (letter?) groups.
>
> It's hardly a cipher, merely a means of telegraphing ideograms!
>
> Perhaps the Korean system you refer to is no more than that.
>
> --
> Regards, Jim. | Da mihi castitatem et continentiam,
> olympus%jimdee.prestel.co.uk | sed noli modo.
> dynastic%cwcom.net |
> nordland%aol.com | - St. Augustine, 354 - 430
> marula%zdnetmail.com | (in Confessions, Book 8 Chap 7)
> Pgp key: pgpkeys.mit.edu:11371
--
===========================
Eric Hildum
[EMAIL PROTECTED]
------------------------------
From: [EMAIL PROTECTED] (R. Knauer)
Subject: Re: True Randomness & The Law Of Large Numbers
Date: Fri, 02 Apr 1999 02:35:31 GMT
Reply-To: [EMAIL PROTECTED]
On Thu, 01 Apr 1999 22:38:43 GMT, [EMAIL PROTECTED] wrote:
>Feller's point in no way implies that a large portion of molecules
>will be farther than 10,000 units from the mean after a million
>unbiased leftward/rightward events.
>In fact 10,000 units at n=1000000 is 20 standard deviations; 0.05*n
>is 100 standard deviations. Even with a gallon of perfume, we expect
>_no_ particles that far out.
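The arithmetic behind the quoted numbers is short, and worth making explicit (recomputed here as a sketch, using the binomial count SD the poster is working in):

```python
# Standard-deviation figures for n = 10^6, p = q = 0.5.
# The SD of the binomial count is sqrt(n*p*(1-p)) = 500, so 10,000 units
# is 20 SDs out and 0.05*n = 50,000 units is 100 SDs out, as quoted.
n, p = 1_000_000, 0.5
sd = (n * p * (1 - p)) ** 0.5
print(sd, 10_000 / sd, 0.05 * n / sd)   # 500.0 20.0 100.0
```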
Then how do you explain the weird "abnormal" things that Feller
discusses in the chapters on the random walk and the uniform Bernoulli
process? And what is his second volume about fluctuations all about?
Is he advancing snake oil, or is there something fundamental about
random processes he is trying to tell us?
Quantum mechanics is all about true randomness, yet after nearly a
full century, no one has come up with a satisfactory mathematical
model to explain the "collapse of the wave vector". I find that
astonishing. It tells me that the concept of true randomness, which
characterizes the collapse of the wave vector, is not capable of being
modeled mathematically, any more than Godel propositions, Turing's
halting problem or Chaitin's indeterminacy are capable of being
modeled mathematically.
IOW, I am challenging your contention that true randomness can even be
modeled mathematically, even to the extent that you can determine with
reasonable certainty that a process is not truly random based on
statistical tests.
I find it a bit curious that no one has challenged my contention that
if you could model true randomness using statistical models, that you
could then use them to filter the output of PRNGs to produce true
random sequences - which we know is impossible.
I have read most of what Greg Chaitin has written about the Unknown in
mathematics, and I am basing much of what I am claiming on those
meta-mathematical considerations - although it does not show in these
discussions. I am waiting to get past these stumbling blocks, so we can
really get down to business with regards to True Randomness.
>You wrote
Sorry, but I do not care to spend time on nitpicking. I have written
much, and I do not presume to be an Expert. I will leave you to spend
your time debating with your colleagues over points of pedantry.
>> Are you saying that a run of 100 zeros conclusively demonstrates that
>> a TRNG is malfunctioning? How about a run of 100 zeros in a sequence
>> of 10^9 bits?
>Yes. Reject the candidate TRNG.
No. The best you can decide is that the TRNG is *possibly* broken,
which then requires you to check it out.
>Yes, but let's solve one problem at a time. You want precise
>quantitative arguments, I ask at least that you follow those
>given. Under our binomial distribution with p=q=0.5 and
>n=1000000, the number 0.05*n is 100 standard deviations. We
>do not expect particles outside of 100, or even 20 standard
>deviations from the mean.
Yet Feller shows that a significant number of sequences are outside
that range. It all depends on how you pose the measurement.
> Your response to this quantitative
>argument was an example of a room full of perfume smell, even
>though you have no reason to equate the bounds on the room with
>the actual distance in question. Do you now see why the
>conclusion you drew from this example is wrong?
I have known for over 35 years that the Gaussian distribution falls
off very slowly. And I realize that the lone sequence at the far outer
reaches is the sequence that is all zeros or all ones, and that
compared to the rest of the sequences, it is very lonely out there.
What does that have to do with true randomness, in particular
statistical measures of true randomness?
Bob Knauer
"The laws in this city are clearly racist. All laws are racist.
The law of gravity is racist."
- Marion Barry, Mayor of Washington DC
------------------------------
From: [EMAIL PROTECTED] (R. Knauer)
Subject: Re: True Randomness & The Law Of Large Numbers
Date: Fri, 02 Apr 1999 02:48:49 GMT
Reply-To: [EMAIL PROTECTED]
On Thu, 01 Apr 1999 23:56:44 GMT, "Douglas A. Gwyn" <[EMAIL PROTECTED]>
wrote:
>That's because you're operating on false premises;
No, I am operating on the manifest fact that several people here have
stated certain things about how statistical tests can be used to
determine that a RNG is not truly random.
>apart from you yourself,
It is not I who am making those claims. It is not politically correct
to shoot the messenger. :-)
> nobody is suggesting
You clearly have not been a participant in these discussions.
>that "statistics can be used to describe true randomness".
Most people finally came to that conclusion in the discussions
back 1 1/2 years ago. Yet people still insist that if their pet RNG
passes such and such a statistical test, it is truly random.
Hell, the famous FIPS-140 says that. Have you ever read FIPS-140?
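For reference, the FIPS 140-1 monobit test amounts to the following (a sketch; the pass bounds 9,654-10,346 are quoted from memory and should be checked against the standard itself):

```python
import random

# FIPS 140-1 monobit test, as I recall it: take a 20,000-bit sample,
# count the ones, and pass iff 9,654 < count < 10,346.
def monobit(bits):
    ones = sum(bits)
    return 9_654 < ones < 10_346

random.seed(0)
sample = [random.getrandbits(1) for _ in range(20_000)]
print(monobit(sample))          # a healthy generator should pass
print(monobit([0] * 20_000))    # a stuck-at-zero generator fails
```

Note the test can only flag gross bias; passing it says nothing about "true randomness," which is exactly the dispute in this thread.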
>What you seemed to be disputing was the validity of
>statistical testing of RNG output, which is a radically
>different issue.
I am getting the distinct impression that you do not understand what I
am disputing.
>Statistical testing can be used to cast doubt on a claim
>(hypothesis, condition for correct operation, whatever)
>that a supposed RNG under test is a good one. It has
>been explained repeatedly how this is done in general,
>but nobody is likely to, nor should they, post a textbook
>on introductory statistics to give you all the gory
>details and examples.
Strange - you posted a textbook earlier which I got today. Are you
having a problem with short term memory? You might want to seek
professional help before you forget your name.
And why do you say that no one should post a text book? Is this some
kind of warfare to you, where you would not want to share resources?
I am beginning to get the feeling that you are a very disingenuous
person. I do not care for disingenuous people. My considerable
experience is that they are always charlatans.
But thanks for the reference anyway.
<plonk>
Bob Knauer
"The laws in this city are clearly racist. All laws are racist.
The law of gravity is racist."
- Marion Barry, Mayor of Washington DC
------------------------------
Date: Thu, 01 Apr 1999 20:42:52 -0500
From: "Trevor Jackson, III" <[EMAIL PROTECTED]>
Subject: Re: True Randomness & The Law Of Large Numbers
R. Knauer wrote:
>
> On Tue, 30 Mar 1999 22:58:01 -0500, "Trevor Jackson, III"
> <[EMAIL PROTECTED]> wrote:
>
> >A more interesting distribution starts with the injection point at one
> >end of the channel.
>
> You have the problem of reflection to deal with in that case.
There are reflections but they have no impact on the distribution if the
distance between the injection point and the end cap is zero.
>
> > In this case the extrema are the (few) particles
> >that make it all the way and the (many) particles that are at the origin
> >at the completion of the diffusion.
>
> After time the particles will have diffused into the water to form a
> rather flat distribution. After all, mixing entropy does increase,
> doesn't it?
>
> Bob Knauer
>
> "The laws in this city are clearly racist. All laws are racist.
> The law of gravity is racist."
> - Marion Barry, Mayor of Washington DC
------------------------------
From: karl malbrain <[EMAIL PROTECTED]>
Crossposted-To: sci.math,sci.physics,sci.logic
Subject: Re: My Book "The Unknowable"
Date: Fri, 02 Apr 1999 03:19:19 GMT
Paul Healey <[EMAIL PROTECTED]> wrote in message
news:OhM$[EMAIL PROTECTED]...
> In article, karl malbrain <[EMAIL PROTECTED]> writes,
>
> >UNKNOWABLE along this thread attempts to tie OWNERSHIP to INFORMATION. I
> >believe the original author's work mistakes MATERIAL ownership with
> >USABILITY (ability to apply information to material things)
>
> 'Tie', implies they can be unbound. Does not attempting to tie
> information with data, confound reason ? - one is the essence of the
> other. Essence cannot be applied to anything. Information is subjective,
> in that it requires data. Data is objective, in that you can own it or
> use it. Winning the game, or knowing its rules implies information is
> a piece of data. Likewise, data has no value without information.
> Your ability to play the game, depends on a method of reasoning that can
> be applied to it. Application implies a method, and this can be
> associated with a patent or copyright.
No. Data is exterior information, Reason is interior complexity. The point
being that one cannot OWN information as a thing in itself, anymore than
someone can OWN the air we all breathe. It's a naturally ACCUMULATING thing
for USABILITY. Application implies WORK, OWNERSHIP of application as a
thing in itself implies a patent or copyright. Where else can you base this
very newsgroup????
> >
> >
> >Here you fall into VULGAR MATERIALISM. Material is Paramount, not
> >Information (which accumulates as a function of TIME)
> >
>
> Materialism for-itself is vulgar, so vulgar materialism is for-itself -
> the principles which belong to it cannot be discerned by its own schema.
> It presents itself as if this is the way things actually are, but in
> fact it is only concerned with the appearance of things. It is just a
> more modern variant of empiricism, as Hegel points out ! Both are
> ethically suspect, just as transcendental logic is, in that they fail to
> ground their schemas. If 20th century logicians had more respect for
> philosophy, no doubt our universities would be less like monasteries and
> more like oracles.
No, Materialism as a thing in-itself is VULGAR. Historically, it's the
method used to bring down Rome by objectifying subjectivity. You reduce
someone to speech without subjects and impose redundancy instead -- you can
have it only if 50% of you can say it.
> (... snipped, I'm no philosopher ...) What Gorgias, like many a modern
> logician fails to do, is differentiate between what a thing is in-itself and that
> which it belongs to. Constructing a schema which can do this, which is
> categorically consistent, I claim has a ground, whereas an arbitrary
> set of axioms is merely a correspondence with what things are in-
> themselves. This is why, axiomatic deductive models of reasoning fail to
> capture the dynamic nature of reasoning as it is related to language;
> you can have a dialogue with someone else, precisely because you have
> some idea, or can work out and learn what they mean. This requires, that
> what exists has to be knowable.
I think you mean differentiating thing-in-itself objects from
thing-for-all-subjects. Yes, a schema is required to get any WORK done.
Read ORGANIZATION here. There's not much to go on in a newsgroup, except
perhaps to discern the LISP/SNOBOL machines from the authors. Karl M
============= Posted via Deja News, The Discussion Network ============
http://www.dejanews.com/ Search, Read, Discuss, or Start Your Own
------------------------------
From: rohatgi <[EMAIL PROTECTED]>
Subject: Re: Live from the Second AES Conference
Date: Thu, 01 Apr 1999 18:24:22 -0500
Bruce Schneier wrote:
> On 01 Apr 1999 09:56:48 +0200, Jaap-Henk Hoepman <[EMAIL PROTECTED]>
> wrote:
>
> >[EMAIL PROTECTED] (Bruce Schneier) writes:
> >
> >> >: IBM's Pankaj Rohatgi explained how he got all 128 bits of
> >> >: a Twofish key after only 50 (that is 50 not 2^50) uses of a smart
> >> >: card!
> >> >
> >> >I wonder how secure some of the other ciphers would be, if the kind of
> >> >optimizations Bruce suggested for fitting Twofish on a smart card were
> >> >applied to them. That is, if it were possible.
> >>
> >> He said in his talk that every cipher is vulnerable. We've done this
> >> sort of work, too, and we have found that you can't defend against
> >> these types of attack with the algorithm. You can do some things with
> >> the implementation and some things with the hardware, but basically
> >> you need to defend in the protocol layer.
> >
> >I'm not sure I understand how a defense in the protocol layer would prevent a
> >DPA style attack. Could you give an example?
>
> Sure. When building smart-card based systems, I try very hard to make
> sure all secrets within a device can be known by the person holding
> the device.
>
> Bruce
> **********************************************************************
> Bruce Schneier, President, Counterpane Systems Phone: 612-823-1098
> 101 E Minnehaha Parkway, Minneapolis, MN 55419 Fax: 612-823-1590
> Free crypto newsletter. See: http://www.counterpane.com
Actually, the question of good protocol design did come up during the question
and answer session after my talk at the second AES conference, where someone
asked whether DPA is even an issue in well-designed smart-card protocols where
the smart-card holds only the owner's secrets, so the owner has no reason to
attack the card.
My response to that question, and also to Bruce Schneier's posting above, is
that such protocol design is indeed very helpful in reducing the exposure by
eliminating one potential adversary, but it does not eliminate the exposure
unless further steps are taken. This is because the number of power samples an
adversary needs in order to attack a card (especially the cheaper varieties) is
quite small - usually 10-50 samples are enough - and this number is likely to
decrease as the attacks get more sophisticated.
This means that any protocol where an honest user interacts with another honest
entity (such as some other secure hardware device) via a rigged smart-card
reader is vulnerable, since 10-50 transactions is not an unusually large
number. For example, if the smart-card reader attached to the cash register in
a cafeteria is rigged, then over a period of a month almost all users would
have performed enough transactions, and the person who rigged the reader could
potentially obtain the secret keys of all the users. Note that smart-card
readers as they are designed and used today are not inside any physical
security boundary.
It should be noted, though, that it may be much cheaper to put in hardware
countermeasures which guard against power attacks mounted via rigged smart-card
readers than against attacks mounted by the smart-card owner.
- Pankaj Rohatgi
------------------------------
From: "Douglas A. Gwyn" <[EMAIL PROTECTED]>
Subject: Re: Random Walk
Date: Fri, 02 Apr 1999 03:35:49 GMT
"R. Knauer" wrote:
> On Fri, 02 Apr 1999 00:17:27 GMT, "Douglas A. Gwyn" <[EMAIL PROTECTED]>
> wrote:
> >A point that is generally agreed is that whatever is going on at
> >the quantum superposition level, it is inconsistent with standard
> >probability theory (a la Feller, for example), although it is
> >usually described in probabilistic terms.
> Can you explain what you believe is inconsistent with standard
> probability theory?
That should be obvious, but: Take the 2-slit experiment as the
canonical example (and Feynman says this is appropriate); try to
cast the quantities in terms of classical probabilities; you
quickly find that the assumption that going through each slit is
an independent event is wrong; so the only classical probabilistic
recourse is to treat these as dependent (correlated) events; but
when you attempt to work out the conditional probabilities you
can't get it to agree with the result of QM that adds complex
"probability amplitudes" then takes their squares as the
probabilities. So it is not even treatable using probability
theory; something else (other than incomplete information) is
involved in this form of variability.
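Gwyn's point shows up already in a toy calculation (the amplitudes below are made up purely for illustration): QM adds complex amplitudes and then squares, so the two-slit intensity carries an interference term that no assignment of classical probabilities, independent or conditional, can reproduce.

```python
import cmath

# Toy two-slit calculation. Classically one would add probabilities
# |a1|^2 + |a2|^2; QM adds the complex amplitudes first, producing an
# extra interference term 2*Re(a1 * conj(a2)) that depends on phase.
a1 = cmath.rect(1 / 2 ** 0.5, 0.0)   # amplitude through slit 1
a2 = cmath.rect(1 / 2 ** 0.5, 2.0)   # amplitude through slit 2, phase-shifted

classical = abs(a1) ** 2 + abs(a2) ** 2   # 1.0, independent of phase
quantum = abs(a1 + a2) ** 2               # 1 + cos(2.0), phase-dependent
print(round(classical, 3), round(quantum, 3))
```

Varying the phase of a2 sweeps the "quantum" value between 0 and 2 while the "classical" sum stays fixed at 1, which is the interference pattern in miniature.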
> Are you saying that true randomness cannot be modeled mathematically?
No; in fact what I *think* you mean by "true randomness" you have
attributed to a uniform Bernoulli process, and apparently also a
static 0-order Markov chain, which are simple to model
mathematically.
> BTW, I got your Triola book today, so in a few days I should be
> "enabled". Hot Damn, eh!
> I am going to focus my attention only on those issues that relate to a
> determination of true randomness. I want to see the justification for
> the presumption that statistics has found the holy grail of true
> randomness.
You might as well not bother, then. Triola was suggested as a
way to learn enough about statistical testing to have an
educated opinion on the subject. I don't recall that he ever
brought up the issue of "true randomness" as such. And in
any event, nobody here (other than you) has suggested "that
statistics has found the holy grail of true randomness".
What we have stated is that properly applied and interpreted
statistical testing of RNG output is a reasonable activity.
One doesn't learn statistics by "focussing on specific issues"
any more than one learns general mathematics by focussing on
the properties of the number "ten".
------------------------------
From: [EMAIL PROTECTED] (R. Knauer)
Subject: Re: True Randomness & The Law Of Large Numbers
Date: Fri, 02 Apr 1999 02:50:04 GMT
Reply-To: [EMAIL PROTECTED]
On Fri, 02 Apr 1999 00:03:12 GMT, "Douglas A. Gwyn" <[EMAIL PROTECTED]>
wrote:
>You know, you may be onto something there. This so-called "TRNG"
>seems to be defined by R. Knauer as an RNG that he somehow "knows"
>is functioning correctly (as an RNG), *no matter what the evidence*.
>Or, in Bayesian terms, if one is a priori *certain* (P==1) about
>a claim, then no matter what the likelihood ratio from observations,
>one is also a posteriori certain about the claim. But the rest of
>us have to work without a priori certainty, so the likelihood ratio
>*does* have relevance for *us*.
Let us know when your wife gets 95% pregnant.
Bob Knauer
"The laws in this city are clearly racist. All laws are racist.
The law of gravity is racist."
- Marion Barry, Mayor of Washington DC
------------------------------
From: "Douglas A. Gwyn" <[EMAIL PROTECTED]>
Subject: Re: True Randomness & The Law Of Large Numbers
Date: Fri, 02 Apr 1999 03:19:02 GMT
"R. Knauer" wrote:
> On Thu, 01 Apr 1999 23:56:44 GMT, "Douglas A. Gwyn" <[EMAIL PROTECTED]>
> wrote:
> >Statistical testing can be used to cast doubt on a claim
> >(hypothesis, condition for correct operation, whatever)
> >that a supposed RNG under test is a good one. It has
> >been explained repeatedly how this is done in general,
> >but nobody is likely to, nor should they, post a textbook
> >on introductory statistics to give you all the gory
> >details and examples.
> Strange - you posted a textbook earlier which I got today. Are you
> having a problem with short term memory? You might want to seek
> professional help before you forget your name.
No, I did *not* post a *textbook*. Have any of my posts been
hundreds of pages long? I did post the *name* of a textbook
author, one of whose introductory books (pretty much the same
except for depth of treatment) I suggested as a way to learn
about null hypotheses, etc. which you had been asking about.
The above quote was a direct response to your complaint that
nobody had explained the subject to you in any great detail.
> I am beginning to get the feeling that you are a very disengenuous
> person. I do not care for disengenuous people. My considerable
> experience is that they are always charlatins.
I will let the readership judge for themselves whether I
have been disingenuous, or a charlatan.
------------------------------
** FOR YOUR REFERENCE **
The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:
Internet: [EMAIL PROTECTED]
You can send mail to the entire list (and sci.crypt) via:
Internet: [EMAIL PROTECTED]
End of Cryptography-Digest Digest
******************************