Anne Archibald wrote:
2009/11/29 Dr. Phillip M. Feldman pfeld...@verizon.net:
> All of the statistical packages that I am currently using and have used
> in the past (Matlab, Minitab, R, S-plus) calculate standard deviation
> using the sqrt(1/(n-1)) normalization, which gives a result that is
> unbiased when sampling from a [...]
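For context: `numpy.std` and `numpy.var` default to the 1/n divisor, but both take a `ddof` ("delta degrees of freedom") keyword, and `ddof=1` reproduces the 1/(n-1) convention of the packages Dr. Feldman lists. A minimal sketch (the sample values are arbitrary):

```python
import math

import numpy as np

x = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])

# NumPy's default divides the summed squared deviations by n.
print(np.std(x))

# ddof=1 switches the divisor to n - 1, matching Matlab, Minitab, R, S-plus.
print(np.std(x, ddof=1))

# The same n-1 result computed by hand:
n = x.size
s2 = ((x - x.mean()) ** 2).sum() / (n - 1)
print(math.sqrt(s2))
```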
On Sun, Dec 6, 2009 at 9:21 AM, josef.p...@gmail.com wrote:
On Sun, Dec 6, 2009 at 11:01 AM, Colin J. Williams c...@ncf.ca wrote:
> [snip]
> What's the best estimate? That's the main question.
Estimators differ in their (sample or posterior) distribution,
especially bias and variance. Stein [...]
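Josef's bias-versus-variance point can be made concrete with a small simulation: among the divisors n-1, n, and n+1, the unbiased n-1 estimator actually has the *largest* mean squared error for normal data. A hedged sketch (sample size, replication count, and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps, true_var = 5, 100_000, 1.0
samples = rng.standard_normal((reps, n))

# Sum of squared deviations from the sample mean, per replication.
ss = ((samples - samples.mean(axis=1, keepdims=True)) ** 2).sum(axis=1)

est = {"n-1": ss / (n - 1), "n": ss / n, "n+1": ss / (n + 1)}
for label, e in est.items():
    bias = e.mean() - true_var
    mse = ((e - true_var) ** 2).mean()
    print(f"divisor {label}: bias {bias:+.3f}, MSE {mse:.3f}")
```

The n-1 divisor comes out essentially unbiased, while n+1 trades a downward bias for the smallest MSE, which is the flavor of tradeoff Stein-type estimators exploit.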
Colin J. Williams wrote:
> When one has a smallish sample size, what gives the best estimate of the
> variance?
What do you mean by best estimate?
Unbiased? Smallest standard error?
In the widely used Analysis of Variance (ANOVA), the degrees of freedom
are reduced for each mean estimated, [...]
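The ANOVA remark is the same idea as `ddof`: every group mean that gets estimated removes one degree of freedom, so the pooled within-group variance divides by N - k rather than N. A hand-rolled sketch (the three groups below are made-up data):

```python
import numpy as np

# Three made-up groups; estimating each group mean costs one df.
groups = [np.array([4.0, 5.0, 6.0]),
          np.array([7.0, 9.0]),
          np.array([1.0, 2.0, 3.0, 4.0])]

n_total = sum(len(g) for g in groups)   # N = 9 observations
k = len(groups)                         # k = 3 estimated means
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)

# Pooled within-group variance: divide by N - k, not N.
ms_within = ss_within / (n_total - k)
print(ms_within)
```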
On 04-Dec-09 05:21 AM, Pauli Virtanen wrote:
> On Fri, 2009-12-04 at 11:19 +0100, Chris Colbert wrote:
> > Why can't the divisor constant just be made an optional kwarg that
> > defaults to zero?
> It already is an optional kwarg that defaults to zero.
> Cheers,
I suggested that 1 (one) would be a better default, but Robert Kern told
us that it won't happen.
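As Pauli says, the divisor is already controllable: `numpy.std` and `numpy.var` take `ddof`, the divisor used is `n - ddof`, and `ddof=0` is the default. A quick check (arbitrary data):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
n = x.size

# Omitting ddof is the same as ddof=0: the divisor is n.
assert np.var(x) == np.var(x, ddof=0)

# In general the divisor is n - ddof.
ss = ((x - x.mean()) ** 2).sum()
for ddof in (0, 1, 2):
    assert np.isclose(np.var(x, ddof=ddof), ss / (n - ddof))
```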
On 04-Dec-09 07:18 AM, yogesh karpate wrote:
@ Pauli and @ Colin:
Sorry for the late reply. I was busy with some other assignments.
# As far as normalization by (n) is concerned, it's a common assumption
that the population is normally distributed and the population size is
fairly large enough to fit the normal [...]
Colin J. Williams wrote:
> I suggested that 1 (one) would be a better default, but Robert Kern told
> us that it won't happen.
I don't even see the need for this keyword argument, as you can always
multiply the variance by n/(n-1) to get what you want.
Also, normalization by n gives the ML [...]
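Sturla's rescaling is a one-liner, and the 1/n result he refers to is the maximum-likelihood estimate for a normal population. A quick check with arbitrary data:

```python
import numpy as np

x = np.array([3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0])
n = x.size

var_n = np.var(x)            # 1/n divisor: the ML estimate under normality
var_nm1 = np.var(x, ddof=1)  # 1/(n-1) divisor: the unbiased estimate

# Multiplying the default result by n/(n-1) recovers the unbiased one.
assert np.isclose(var_n * n / (n - 1), var_nm1)
```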
Why can't the divisor constant just be made an optional kwarg that
defaults to zero?
It won't break any existing code, and will let everybody who wants the
other behaviour have it.
On Thu, Dec 3, 2009 at 1:49 PM, Colin J. Williams c...@ncf.ca wrote:
> Yogesh,
> Could you explain the rationale for this choice please? [...]
yogesh karpate wrote:
> # As far as normalization by (n) is concerned, it's a common assumption
> that the population is normally distributed and the population size is
> fairly large enough to fit the normal [...]
This is getting OT, as I'm not making any comment on numpy's
implementation, but...
On 03-Dec-09 00:35 AM, yogesh karpate wrote:
> The thing is that the normalization by (n-1) is done for the no. of
> samples [...]
Yogesh,
Could you explain the rationale for this choice please?
Colin W.
The thing is that the normalization by (n-1) is done for a number of
samples of about 20 or 23 (not sure about the exact number, but sure that
it isn't greater than 25); below that we use normalization by n.
Regards
~ymk
___
NumPy-Discussion mailing list
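Whatever the claimed cutoff, the gap between the two normalizations is just the factor n/(n-1), which fades quickly as n grows; this sketch (arbitrary normal samples, arbitrary seed) makes the point:

```python
import numpy as np

rng = np.random.default_rng(1)
for n in (5, 25, 100, 1000):
    x = rng.standard_normal(n)
    # The two estimates differ exactly by the factor n / (n - 1).
    ratio = np.var(x, ddof=1) / np.var(x)
    print(f"n={n:4d}  n/(n-1)={n / (n - 1):.4f}  ratio={ratio:.4f}")
```

At n=5 the estimates differ by 25%; at n=1000, by 0.1%.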
Colin J. Williams wrote:
> Where the distribution of a variate is not known a priori, then I
> believe that it can be shown that the n-1 divisor provides the best
> estimate of the variance.
Have you ever been shooting with a rifle?
What would you rather do:
- Hit 9 or 10, with a bias to the [...]