While most other Python packages (e.g. scipy and pandas) calculate the 
standard deviation with the default "ddof=1" (i.e. they compute the sample 
standard deviation), the NumPy implementation uses the default "ddof=0" 
(the population standard deviation).
Personally, I cannot think of many applications where calculating the 
standard deviation with ddof=0 is desired. In addition, I feel that there 
should be consistency between standard packages such as numpy, scipy, and pandas.
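For concreteness, here is a small sketch of the current behaviour. Both
libraries compute sqrt(sum((x - mean)**2) / (n - ddof)); only the default
ddof differs (output values below are rounded):

    import numpy as np
    import pandas as pd

    x = np.array([1.0, 2.0, 3.0, 4.0])

    # NumPy defaults to ddof=0, the population standard deviation:
    print(np.std(x))           # 1.118 = sqrt(5.0 / 4)

    # Passing ddof=1 gives the sample standard deviation:
    print(np.std(x, ddof=1))   # 1.291 = sqrt(5.0 / 3)

    # pandas defaults to ddof=1, matching the sample formula:
    print(pd.Series(x).std())  # 1.291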

I am wondering: is there a good reason to stick with "ddof=0" as the default 
for "std", or would others agree with my suggestion to change the default to 
"ddof=1"?

Thomas

---
Prof. (FH) PD Dr. Thomas Haslwanter
School of Applied Health and Social Sciences
University of Applied Sciences Upper Austria
FH OÖ Studienbetriebs GmbH
Garnisonstraße 21
4020 Linz/Austria
Tel.: +43 (0)5 0804 -52170
Fax: +43 (0)5 0804 -52171
E-Mail: thomas.haslwan...@fh-linz.at
Web: me-research.fh-linz.at
or work.thaslwanter.at
