Hi Glen, et al,

Thanks for cashing my $0.02 check. :-)

When I wrote that "but it doesn't have to be" I wasn't asserting that
probability theory is devoid of events.  Events are fundamental to
probability theory.  They are the outcomes to which probability is
assigned.  In a nutshell, the practice of probability theory is the mapping
of events--outcomes--from random processes to numbers, thus making the
practice purposefully mathematical.  And in this regard, we speak of a
mathematical entity dubbed a random variable in order to carry out the
calculus of probability and statistics.

A random variable is like any other variable in mathematics, but with
specific properties concerning the values it can take on.  A random
variable is considered "discrete" if it can take on only countably many
distinct, separate values [e.g., the sum of the values of a roll of seven
dice].  Otherwise, a random variable is considered "continuous" if it can
take on any value in an interval [e.g., the mass of an animal].  Either
way, a random variable is a real-valued function--a mapping of the outcomes
of a random process to the number line.
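
To make the discrete case concrete, here is a minimal Python sketch (the
names are my own, not standard): the random variable maps each outcome of a
roll of seven dice to the sum of the faces, a countable set of values.

```python
import itertools

# Sample space: all ordered outcomes of rolling seven six-sided dice.
outcomes = list(itertools.product(range(1, 7), repeat=7))

# The random variable maps each outcome to a real number -- here, the
# sum of the faces -- giving a discrete set of values from 7 to 42.
values = sorted({sum(o) for o in outcomes})
print(len(outcomes), values[0], values[-1])  # 279936 7 42
```

Note the mapping is many-to-one: many distinct outcomes share the same sum,
which is exactly why the sums have different probabilities.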

This is arguably a long, roundabout way of explaining [muck?!] why I said
"but it doesn't have to be ... time."  Time doesn't have to be involved:
the random variable does not have to be distributed in time, though it
often is, such as in reliability theory--for example, the probability that
a device will survive for at least T cycles or months.

Yes to your and Grant's notion that thinking in terms of probability spaces
is a good way of thinking about probability and statistics and this
mapping, as mathematically we are doing convolutions of distributions
[spaces?] when modeling independent, usually identically distributed random
trials [activities].  But let's not confuse the mathematical modeling with
the selection process of, say, picking four of a kind from a deck of 52
cards.  All we are interested in doing is mapping the outcomes--events--to
possibilities over which the probabilities sum or integrate to unity.  The
activity gets mapped in the treatment of the random variable in the mapping
[e.g., the number of trials].  So, for example,
rolling 6s six times in a row is not a function of time, but of six
discrete, independent and identically distributed trials. For the computed
probability, in this case, it doesn't matter how long it took to roll the
dice six times.
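
That calculation is trivially checkable; a sketch:

```python
from fractions import Fraction

# Six independent, identically distributed rolls of a fair die:
# the probabilities multiply, and the elapsed time never enters
# the calculation -- only the number of trials does.
p_six = Fraction(1, 6)
p_six_in_a_row = p_six ** 6
print(p_six_in_a_row)  # 1/46656
```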

I am thinking that this is the way your "opponent" is thinking about the
problem and suspect that he has been formally trained to see it this way.
Not the only way but a classical way.

When Eric talks about the historic difference between scientists,
mathematicians, and statisticians practicing probability theory and
statistics, note that these differences quickly disappeared when the idea
of *uncertainty* bubbled up into the models found in the fields of physics,
economics, measurement theory, decision theory, etc.  No longer could the
world be completely described by the classical system-dynamics models.
Perhaps even before Gauss (the late 1700s)--a polymath to be sure--error
terms were starting to be added to their equations and had to be estimated.

As to my language of "when" an event occurs with some calculated
likelihood, it can be a description or a prediction.  The researcher may be
asking, as Nick is [kind of?] asking in the other thread: what is the
likelihood of my getting this many 1s in a row if the process is supposedly
generating discrete random numbers between, say, one and five?  In this
case, a *psychologically* unexpected event has happened.  Or, in planning
his experiment in advance, he may just want to set a halting threshold for
deeming suspect any machine that gives him the same number N times in a
row.  In that case, the event hasn't happened but has a finite potential
for happening, and we want to detect it if it happens ... too much.
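
A sketch of that kind of halting check, assuming the machine emits values
uniformly from one to five (the function names and the crude union-bound
threshold are my own illustration, not anything from the thread):

```python
def longest_run(seq):
    """Length of the longest run of consecutive identical values."""
    best = run = 1
    for prev, cur in zip(seq, seq[1:]):
        run = run + 1 if cur == prev else 1
        best = max(best, run)
    return best

def suspicious(seq, n_values=5, alpha=1e-4):
    """Flag a sequence whose longest run is improbably long.

    For a run of length k starting at a fixed position, each of the
    k-1 repeats has probability 1/n_values; multiplying by the number
    of possible starting positions gives a crude union bound on the
    chance of seeing such a run anywhere in the sequence.
    """
    k = longest_run(seq)
    starts = max(len(seq) - k + 1, 1)
    p_bound = starts * (1 / n_values) ** (k - 1)
    return p_bound < alpha

print(suspicious([1, 1, 1, 1, 1, 1, 1, 1, 2, 3]))  # True
```

The event needn't have happened yet; the threshold is set in advance from
the probabilities alone.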

> Those "events" don't _happen_.  They simply _are_.


This bit seems more philosophical than something a statistician would
likely [no pun intended] worry about. Admittedly, my choice of
words--throughout my post--could have been more precise, but I would not
have said that "events simply are."  When discussing the nature of time in
a "block universe," maybe that could be said, but I would have been in
Henri Bergson's corner [to my peril, of course] in the 1922 debate between
Bergson and Albert Einstein on the subject of time. :-) Curiously,
Bergson's idea of time is coming back--see *Time Reborn* (2013) by Lee
Smolin.  But this is likely not what you meant. However, you are an
out-of-the-closet Platonist by your own admission. No worries; I have
friends who are Platonists, most of them mathematicians or philosophers or
people who believe the brain to be a computer, but not typically
computational scientists and certainly not cognitive scientists. :-) No
such thing as computational philosophy ... yet. Hmmm.

BTW, a random variable--continuous or discrete--does not have to be
uniformly distributed, but you want that in a stream of equally likely
numbers input into a Monte Carlo simulation, or when inverting
probability distributions [not all are invertible, as you say] to also feed
a Monte Carlo simulation. I am pretty sure you have done this, from past
discussions.
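
For the invertible case, a minimal inverse-transform sketch, assuming an
exponential distribution (one whose CDF does invert in closed form; the
function name is my own):

```python
import math
import random

def sample_exponential(rate, rng):
    """Inverse-transform sampling: push a uniform(0,1) variate through
    the inverse CDF.  For Exp(rate), F(x) = 1 - exp(-rate * x), so
    F^{-1}(u) = -ln(1 - u) / rate."""
    u = rng.random()  # the uniformly distributed input stream
    return -math.log(1.0 - u) / rate

rng = random.Random(42)
samples = [sample_exponential(2.0, rng) for _ in range(100_000)]
mean = sum(samples) / len(samples)
print(round(mean, 2))  # ~0.5, i.e. 1/rate, as expected for Exp(2)
```

This is why the uniformity of the input stream matters: any bias in the
uniform variates propagates straight through the inverse CDF.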

In response to Grant, I would say that we are way beyond the times when we
could easily distinguish between mathematicians, statisticians, and
scientists. We have computational biology, computational physics,
computational economics, computational finance, etc., all of which have
elements of computational statistics.  Computational statistics--a subset
of the field in which I practiced--is a rising and inclusive field.  Take a
look at the curriculum at George Mason University, for example. I mean, can
we say that we need to distinguish between mathematicians and statisticians
anymore? To be a statistician these days you need to be able to derive
maximum likelihood estimators, for example.  To be a mathematician ...
well, this is from the University of Oxford:


All over the world, human beings create an immense and ever-increasing
> volume of data, with new kinds of data regularly emerging from science and
> industry. A new understanding of the value of these data to society has
> emerged, and, with it, a new and leading role for Statistics. In order to
> produce sensible theories and draw accurate conclusions from data,
> cutting-edge statistical methods are needed. These methods use advanced
> mathematical ideas combined with modern computational techniques, which
> require expert knowledge and experience to apply. A degree in Mathematics
> and Statistics equips you with the skills required for developing and
> implementing these methods, and provides a fascinating combination of deep
> and mathematically well-grounded method-building and wide-ranging applied
> work with data.


Finally and relatedly, I have been trying to follow Nick's evolving query
to the forum, but it seems--to me--like he is looking for a way to *prove*
that a generator of numbers is *not* random.  As someone else has already
mentioned, one cannot really do this, that is, *prove* that a sequence is
*not* random ... almost like trying to prove that God does not exist.  When
you think of it, a series of 100 rolls of a die that are all fives, say, is
exactly as likely as ANY other specific sequence of rolls: (1/6)^100.  So
you can't conclude anything about randomness by just looking at the
numbers.  Humans are not good at differentiating between randomness and
mere chaos.
Essentially, anything is possible ... but "how likely?" is the right
question.

A good--and simple--way [the Diehard tests are not simple] to sense a lack
of randomness in a stream of numbers is to compare the observed results
with the theoretical (random) distribution using the chi-squared statistic.
The Poker Test fits this criterion of simple but effective.  If the number
generator is dealing hands, say--like with four 8s--in a proportion that is
not at all likely, then one should be suspicious.  But you could not say
that it could never happen.  The chi-squared distribution is positively
skewed with a tail that goes to infinity, but the thickness of the tail
decreases with more trials [hands], i.e., more degrees of freedom.  It's a
pretty cool way to do this and is easily accomplished computationally.
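
A sketch of the Poker Test along these lines (the function names are mine;
the pattern probabilities are the classical values for five digits drawn
from 0-9):

```python
import random
from collections import Counter

def hand_pattern(hand):
    """Classify a 5-digit 'hand' by its multiset of repeat counts,
    e.g. four 8s and a 3 -> (1, 4)."""
    return tuple(sorted(Counter(hand).values()))

def chi_squared_poker(digits, expected_probs):
    """Chi-squared statistic comparing observed hand-pattern counts
    with the theoretical (random) proportions.  Larger values mean
    the stream looks less like the random ideal."""
    hands = [tuple(digits[i:i + 5]) for i in range(0, len(digits) - 4, 5)]
    observed = Counter(hand_pattern(h) for h in hands)
    n = len(hands)
    return sum((observed.get(p, 0) - n * q) ** 2 / (n * q)
               for p, q in expected_probs.items())

# Theoretical pattern probabilities for 5 digits from 0..9.
probs = {
    (1, 1, 1, 1, 1): 0.3024,  # all different
    (1, 1, 1, 2):    0.5040,  # one pair
    (1, 2, 2):       0.1080,  # two pair
    (1, 1, 3):       0.0720,  # three of a kind
    (2, 3):          0.0090,  # full house
    (1, 4):          0.0045,  # four of a kind
    (5,):            0.0001,  # five of a kind
}

rng = random.Random(0)
digits = [rng.randrange(10) for _ in range(50_000)]
stat = chi_squared_poker(digits, probs)
print(stat)  # small for a good generator (6 degrees of freedom)
```

A stream of nothing but 7s, by contrast, sends the statistic through the
roof--which is exactly the "suspicious proportion" signal described above.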

Hope this clarifies a few things at least. Sorry for the long explanation.
I guess I cannot help myself ... :-(

Cheers,

Robert W.

On Wed, Dec 14, 2016 at 1:40 PM, Grant Holland <grant.holland...@gmail.com>
wrote:

> And I completely agree with Eric. But we can language it real simply and
> intuitively by just looking at what a probability space is. For further
> simplicity let's keep it to a finite probability space. (Neither a finite
> nor an infinite one says anything about "time".)
>
> A finite probability space has 3 elements: 1) a set of sample points
> called "the sample space", 2) a set of events, and 3) a set of
> probabilities *for the events*. (An infinite probability space is
> strongly similar.)
>
> But what is this "set of events"? That's the question that is being
> discussed on this thread. It turns out that the events for a finite space
> are nothing more than *the set of all possible combinations of the sample
> points*. (Formally the event set is something called a "sigma algebra",
> but no matter.) So, an event can be thought of as simply *any combination
> of the sample points*.
>
> Notice that it is the events that have probabilities - not the sample
> points. Of course it turns out that each of the sample points happens to be
> a  (trivial) combination of the sample space - therefore it has a
> probability too!
>
> So, the events already *have* probabilities by virtue of just being in a
> probability space. They don't have to be "selected", "chosen" or any such
> thing. They "just sit there" and have probabilities - all of them. The
> notion of time is never mentioned or required.
>
> Admittedly, this formal (mathematical) definition of "event" is not
> equivalent to the one that you will find in everyday usage. The everyday
> one *does* involve time. So you could say that everyday usage of "event"
> is "an application" of the formal "event" used in probability theory. This
> confusion between the everyday "event" and the formal "event" may be the
> root of the issue.
>
> Jus' sayin'.
>
> Grant
>
> On 12/14/16 11:36 AM, glen [image: ☣] wrote:
>
> Ha!  Yay!  Yes, now I feel like we're discussing the radicality 
> (radicalness?) of Platonic math ... and how weird mathematicians sound (to 
> me) when they say we're discovering theorems rather than constructing them. 
> 8^)
>
> Perhaps it's helpful to think about the "axiom of choice"?  Is a "choosable" 
> element somehow distinct from a "chosen" element?  Does the act of choosing 
> change the element in some way I'm unaware of?  Does choosability require an 
> agent exist and (eventually) _do_ the choosing?
>
>
>
> On 12/14/2016 10:24 AM, Eric Charles wrote:
>
> Ack! Well... I guess now we're in the muck of what the heck probability and 
> statistics are for mathematicians vs. scientists. Of note, my understanding 
> is that statistics was a field for at least a few decades before it was 
> specified in a formal enough way to be invited into the hallows of 
> mathematics departments, and that it is still frequently viewed with 
> suspicion there.
>
> Glen states: /We talk of "selecting" or "choosing" subsets or elements from 
> larger sets.  But such "selection" isn't an action in time.  Such "selection" 
> is an already extant property of that organization of sets./
>
> I find such talk quite baffling. When I talk about selecting or choosing or 
> assigning, I am talking about an action in time. Often I'm talking about an 
> action that I personally performed. "You are in condition A. You are in 
> condition B. You are in condition A." etc. Maybe I flip a coin when you walk 
> into my lab room, maybe I pre-generated some random numbers, maybe I look at 
> the second hand of my watch as soon as you walk in, maybe I write down a 
> number "arbitrarily", etc. At any rate, you are not in a condition before I 
> put you in one, and whatever it is I want to measure about you hasn't 
> happened yet.
>
> I fully admit that we can model the system without reference to time, if we 
> want to. Such efforts might yield keen insights. If Glen had said that we can 
> usefully model what we are interested in as an organized set with 
> such-and-such properties, and time no where to be found, that might seem 
> pretty reasonable. But that would be a formal model produced for specific 
> purposes, not the actual phenomenon of interest. Everything interesting that 
> we want to describe as "probable" and all the conclusions we want to come to 
> "statistically" are, for the lab scientist, time dependent phenomena. (I 
> assert.)
>
>
>
> ============================================================
> FRIAM Applied Complexity Group listserv
> Meets Fridays 9a-11:30 at cafe at St. John's College
> to unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com
> FRIAM-COMIC http://friam-comic.blogspot.com/ by Dr. Strangelove
>