The EDSTAT traffic after the initial submission by Dennis Roberts on
4/7/2000 interested me. A lot of good thoughts on teaching a fundamental
concept.
His proposal resulted in a total of 117 messages up to 4/27/2000. This
may be a record for comments on a single theme. It struck a chord w
I am a graduate student in an engineering program which emphasizes
statistical methods for process improvement and/or product
development. I have found that I love applying statistical methods for
process/product development and testing. I would not even mind a
company that is developing software
From: Herman Rubin <[EMAIL PROTECTED]>
Newsgroups: sci.stat.consult, sci.stat.edu, sci.stat.math
> In article ,
> Greg Heath <[EMAIL PROTECTED]> wrote:
> >Date: Fri, 28 APR 2000 00:00:45 GMT
> >From: [EMAIL PROTECTED]
>
> ..
Date: Fri, 28 APR 2000 16:04:34 -0400
From: Rich Ulrich <[EMAIL PROTECTED]>
> On Fri, 28 Apr 2000 03:31:45 -0400, Greg Heath
> <[EMAIL PROTECTED]> wrote:
>
> < snip, various >
> > My simulation currently assumes that the residuals are Gaussian. If
> > this is a bad assumption, I need to know
On Fri, 28 Apr 2000 03:31:45 -0400, Greg Heath
<[EMAIL PROTECTED]> wrote:
< snip, various >
> My simulation currently assumes that the residuals are Gaussian. If
> this is a bad assumption, I need to know ASAP to prevent higher level
> decision makers from making some very costly mistakes.
Hello to all from hot Austin,
I like Minitab too for all-purpose stats stuff. The graphing is great,
and the Monte Carlo abilities are very good too. However, my vote (at
least from an applied psych area covering cognitive psych,
psycholinguistics, measurement, speech-language pathology, and a
On 27 Apr 2000 13:24:01 -0700, [EMAIL PROTECTED] (Robert McGrath)
wrote:
> I am looking for a formula for kappa that applies for very special
> circumstances:
>
> 1) Two raters rated each event, but the raters varied across event.
> 2) The study involved 100 subjects, each of whom generated app.
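For designs where the raters vary across events, one standard option (offered here as a suggestion, not something proposed in the thread) is Fleiss' (1971) kappa, which requires only that each subject receive the same number of ratings, not the same raters. A minimal Python sketch:

```python
def fleiss_kappa(ratings):
    """ratings[i][j] = number of raters assigning subject i to category j.
    Works when the raters differ across subjects, as long as every
    subject receives the same number of ratings."""
    N = len(ratings)                          # number of subjects
    n = sum(ratings[0])                       # ratings per subject
    k = len(ratings[0])                       # number of categories
    # Marginal proportion of all ratings falling in each category.
    p_j = [sum(row[j] for row in ratings) / (N * n) for j in range(k)]
    # Per-subject observed agreement among its raters.
    P_i = [(sum(c * c for c in row) - n) / (n * (n - 1)) for row in ratings]
    P_bar = sum(P_i) / N                      # mean observed agreement
    P_e = sum(p * p for p in p_j)             # chance agreement
    return (P_bar - P_e) / (1 - P_e)
```

With two ratings per event this matches the varying-rater setup described above; for modeling the rater facet explicitly, generalizability theory is the more general tool.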
General question,
I've seen two descriptions of "logarithmic distribution".
One is related to the frequency of leading digits, known as Benford's law
(digit 1 occurs more frequently than 2, 2 more frequently than 3, etc.); one
explanation is that it arises as a mixture of distributions.
The other description is a 2-p
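For the first sense, Benford's law gives an explicit leading-digit distribution, P(d) = log10(1 + 1/d). A small Python sketch (the function name is mine):

```python
import math

def benford_pmf(d):
    """Probability that the leading digit is d (1..9) under Benford's law."""
    return math.log10(1 + 1 / d)

# Digit 1 leads about 30.1% of the time, and the frequencies
# decrease monotonically toward digit 9.
probs = {d: benford_pmf(d) for d in range(1, 10)}
```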
see http://www.e-academy.com ... for lots of software ... including minitab
at 'rental' prices ...
At 02:04 PM 4/28/00 -0400, Donald F. Burrill wrote:
>On Fri, 28 Apr 2000 [EMAIL PROTECTED] wrote:
>
>> I need to find a statistical software package. Most of my statistical
>> work has been done u
On Fri, 28 Apr 2000, Arvind Shah wrote:
> I have an UNBALANCED nested (also called hierarchical) design with
> Factor A being fixed and the Factor B (within A) random. So my ANOVA
> has the line entries (for source): A, B(A), Error (or within cell) and
> total. I am looking for the expected m
On Fri, 28 Apr 2000, EAKIN MARK E wrote:
> Besides independent normal errors with mean zero and constant
> variance, some (many?) econometric text books do make the assumption
> that the independent variables are uncorrelated. For example see
>
> Gujarati, Damodar (1988), _Basic Econometrics 2n
- Forwarded message from Debasmit Mohanty -
I think now is the time to decide: "Do we accept DATA MINING as a part of
statistics, or do we keep neglecting this field as before?"
I am sure there are quite a few statistics students like me who feel that Data
Mining is very much
On Fri, 28 Apr 2000 [EMAIL PROTECTED] wrote:
> I need to find a statistical software package. Most of my statistical
> work has been done using Microsoft Excel. This has worked out fine;
> however, I need to find a more heavy-duty package but nothing
> overwhelming. I perform some simple statis
At 11:09 AM 4/28/00 -0500, EAKIN MARK E wrote:
>
>Besides independent normal errors with mean zero and constant
>variance, some (many?) econometric text books do make the assumption that
>the independent variables are uncorrelated. For example see
>
>Gujarati, Damodar (1988), _Basic Econometrics 2
In article <8e7etv$msp$[EMAIL PROTECTED]>, <[EMAIL PROTECTED]> wrote:
>Hi,
>Could anybody tell me how to write the density of the binomial
>distribution when x=0 is not observed? Will the MLE of p be different from
>X-bar in the case of a truncated binomial? How about the variance and the
>bias of
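For the zero-truncated case, the density is the ordinary binomial pmf renormalized by 1 - (1-p)^n, and the MLE of p is indeed no longer X-bar/n: it solves mean(x) = n*p / (1 - (1-p)^n). A stdlib-only Python sketch (function names are mine, and bisection is just one of several ways to solve the score equation):

```python
import math

def truncated_binom_pmf(x, n, p):
    """Zero-truncated binomial: P(X = x | X > 0), for x = 1..n."""
    full = math.comb(n, x) * p ** x * (1 - p) ** (n - x)
    return full / (1 - (1 - p) ** n)

def mle_p(data, n):
    """MLE of p for the zero-truncated binomial.  Setting the score to
    zero gives  mean(data) = n*p / (1 - (1-p)**n), whose right-hand
    side is increasing in p, so bisection applies."""
    mean = sum(data) / len(data)
    lo, hi = 1e-9, 1 - 1e-9
    for _ in range(200):
        mid = (lo + hi) / 2
        if n * mid / (1 - (1 - mid) ** n) < mean:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```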
I have an UNBALANCED nested (also called hierarchical) design with Factor A
being fixed and the Factor B (within A) random. So my ANOVA has the line
entries (for source): A, B(A), Error (or within cell) and total. I am
looking for the expected mean squares and approaches for computing
confidence
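Even in the unbalanced case, the nested sums of squares still decompose exactly; it is the expected mean squares (and hence the synthetic F tests and Satterthwaite degrees of freedom) that become messy. A minimal Python sketch of the SS decomposition (the data layout and names are my own illustration, not from the thread):

```python
def nested_anova_ss(data):
    """data[a][b] = list of replicate observations for level b of
    random factor B nested in level a of fixed factor A; groups may
    be unbalanced.  Returns (SS_A, SS_B_within_A, SS_error), which
    sum exactly to the total SS about the grand mean."""
    all_obs = [x for a in data for b in a for x in b]
    grand = sum(all_obs) / len(all_obs)
    ss_a = ss_ba = ss_e = 0.0
    for a in data:
        a_obs = [x for b in a for x in b]
        a_mean = sum(a_obs) / len(a_obs)
        ss_a += len(a_obs) * (a_mean - grand) ** 2
        for b in a:
            b_mean = sum(b) / len(b)
            ss_ba += len(b) * (b_mean - a_mean) ** 2
            ss_e += sum((x - b_mean) ** 2 for x in b)
    return ss_a, ss_ba, ss_e
```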
I have been following the discussion on Data Mining blooper for a while.
Being a first year graduate student in statistics, my comments on this issue
might sound premature. Nevertheless, I would put forward my observations.
What I have learnt so far from my interaction with the statisticians in
Besides independent normal errors with mean zero and constant
variance, some (many?) econometric text books do make the assumption that
the independent variables are uncorrelated. For example see
Gujarati, Damodar (1988), _Basic Econometrics 2nd edition_, McGraw Hill, p.
166
Mark Ea
Ed,
Was the spec written with an understanding of the measurement resolution?
Why not ask whoever wrote the spec?
I have been following numerous discussions through other sources about
design, gd&t, and metrology. Miscommunication is a major problem.
Statistics won't help you decide what the perso
> "The Player may choose to play exactly the same rules
> as the Dealer is REQUIRED to play; or the Player may choose some of the
> other
> options. Since the Player has more choices or options in play than does
the
> Dealer, why does the Dealer have the statistical advantage? It seems to
me
> th
Clip from earlier message...
"The Player may choose to play exactly the same rules
as the Dealer is REQUIRED to play; or the Player may choose some of the
other
options. Since the Player has more choices or options in play than does the
Dealer, why does the Dealer have the statistical advantage?
On 27 Apr 2000 13:50:24 -0700, [EMAIL PROTECTED] (Donald F.
Burrill) wrote:
[ ... ]
> (3) It is true for Blackjack, unlike nearly all other Las Vegas-type
> games, that a variable strategy on the part of the Player can change the
> statistical advantage to the Player's side. It should not su
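The usual answer to the question is sequencing: the Player settles first, so a Player bust loses immediately, even on hands where the Dealer would also have busted. A toy Monte Carlo sketch makes the asymmetry visible; the infinite deck, aces counted as 1, and both sides hitting to 17 are all simplifications of mine, not real casino rules:

```python
import random

def play_hand(rng, hit_until=17):
    """Draw until total >= hit_until from an infinite deck, with aces
    counted as 1 (a simplification).  May return a bust total > 21."""
    total = 0
    while total < hit_until:
        total += min(rng.randint(1, 13), 10)  # 10, J, Q, K all count 10
    return total

def house_edge(n_hands=100_000, seed=1):
    """Both sides play the Dealer's fixed rule; ties push.  The Player
    still loses on average because a Player bust loses at once, even
    when the Dealer would also have busted."""
    rng = random.Random(seed)
    net = 0
    for _ in range(n_hands):
        p = play_hand(rng)
        if p > 21:
            net -= 1            # Player busts first: Dealer never plays
            continue
        d = play_hand(rng)
        if d > 21 or p > d:
            net += 1
        elif p < d:
            net -= 1
    return net / n_hands
```

With both sides playing identically, every outcome is symmetric except the double bust, which goes to the Dealer; the simulated edge comes out clearly negative for the Player.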
Paul Bernhardt writes:
>True, but card counters abound. Last month's (April 2000) Discover
>Magazine had an article on gambling and mentioned a newly developed card
>counting strategy that you don't need to be a genius to execute
>effectively. I have a buddy who has placed in a Vegas Blackjack
In article ,
Greg Heath <[EMAIL PROTECTED]> wrote:
>Date: Fri, 28 APR 2000 00:00:45 GMT
>From: [EMAIL PROTECTED]
...
>One variable, 20 measurements per second, 26.25 seconds (526 measurements).
>The 1/e dec
I respectfully disagree with Michael Wyatt. I come from an academic
background and now work outside of academia, except for the occasional
course here or there. I too report to a manager or managers, depending on
the circumstances. But my experiences have not been the same as his. I am
constantly
I need to find a statistical software package. Most of my statistical
work has been done using Microsoft Excel. This has worked out fine;
however, I need to find a more heavy-duty package but nothing
overwhelming. I perform some simple statistical work but would like to
begin to use a more power
I think I would consider using generalizability theory for this problem.
Shavelson and Webb have a good book out on the subject, published by Sage.
On Thu, 27 Apr 2000, Robert McGrath wrote:
> I am looking for a formula for kappa that applies for very special
> circumstances:
>
> 1) Two raters
...And it extends even further. Many of us who toil in areas outside of
academia have our work and productivity "supervised" by managers or
directors who have little or no training in statistics, beyond a survey
course. They receive the flashy brochures and read the ads that promise
analytical sof
I am sorry for the confusion. English is not my native language, and sometimes
I am not precise enough.
What I meant by the term "error" was the statistical error of a measurement.
I am interested in the statistical relevance of the measurement (the confidence
interval that the measured value is cor
Date: Fri, 28 APR 2000 00:00:45 GMT
From: [EMAIL PROTECTED]
> > 1. Randomly draw, with replacement, 526 measurements.
>
> You are only justified in resampling in this way if you know that all
> your observations are iid. I didn't quite follow your problem, but it
> sounds as though the iid assumption i
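The iid caveat matters here because the radar samples are closely spaced in time and therefore serially correlated. A stdlib-only Python sketch contrasting the naive iid bootstrap with a moving-block bootstrap; the block length and function names are my choices for illustration:

```python
import random

def bootstrap_means(data, n_boot=1000, seed=0):
    """Naive iid bootstrap of the sample mean: valid only when the
    observations are independent and identically distributed."""
    rng = random.Random(seed)
    n = len(data)
    return [sum(rng.choices(data, k=n)) / n for _ in range(n_boot)]

def block_bootstrap_means(data, block=20, n_boot=1000, seed=0):
    """Moving-block bootstrap: resamples contiguous blocks so that
    short-range serial correlation (e.g. a 1/e decorrelation time of
    a few samples) is preserved inside each block."""
    rng = random.Random(seed)
    n = len(data)
    starts = list(range(n - block + 1))
    means = []
    for _ in range(n_boot):
        sample = []
        while len(sample) < n:
            s = rng.choice(starts)
            sample.extend(data[s:s + block])
        means.append(sum(sample[:n]) / n)
    return means
```

For correlated data, the naive bootstrap tends to understate the variance of the mean, while the block version keeps the within-block dependence and gives a more honest spread.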
Date: Thu, 27 APR 2000 17:17:05 -0400
From: Rich Ulrich <[EMAIL PROTECTED]>
> On Wed, 26 Apr 2000 20:43:02 -0400, Greg Heath
> <[EMAIL PROTECTED]> wrote:
>
> > Can you help or lead me to the appropriate reference?
> >
> > I have 526 radar measurements evenly sampled over 26.25 sec (i.e., pulse