Tim Hochberg wrote:
It would probably be nice to expose the Kahan sum and maybe even the raw_kahan_sum somewhere.

David M. Cooke wrote:
What about using it for .sum() by default? What is the speed hit anyway? In any case, having it available would be nice.

I'm on the fence on using the array dtype for the […]
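For context, Kahan (compensated) summation carries a running correction term that re-injects the low-order bits each addition rounds away. A minimal Python sketch of the textbook algorithm as described on Wikipedia -- illustrative only, not Tim's raw_kahan_sum implementation, which isn't shown in this thread:

    import numpy as np

    def kahan_sum(a):
        # Accumulate in the array's own precision (float32 assumed here),
        # carrying a compensation term for the bits each add rounds away.
        s = np.float32(0.0)    # running sum
        c = np.float32(0.0)    # running compensation (accumulated rounding error)
        for x in a:
            y = x - c          # subtract the error carried from the last step
            t = s + y          # big + small: low-order bits of y are lost here
            c = (t - s) - y    # (t - s) is what was really added; c is its excess over y
            s = t
        return s

In float32 this typically tracks a float64 accumulator to within a few ulps, at the cost of roughly four times as many additions.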
David M. Cooke wrote:
Conclusions:
- If you're going to calculate everything in single precision, use Kahan summation. Using it in double precision also helps.
- If you can use a double-precision accumulator, it's much better than any of the techniques in single precision only.
- for […]
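Those conclusions are easy to sanity-check. A quick sketch (not David's benchmark script) comparing naive left-to-right float32 accumulation -- which is how the reductions worked at the time -- against the same loop with a float64 accumulator:

    import numpy as np

    a = np.full(10**6, np.float32(0.1))

    def naive_sum(a, acc):
        # plain sequential accumulation in the dtype 'acc'
        s = acc(0.0)
        for x in a:
            s = acc(s + x)
        return s

    # every element is exactly float32(0.1), so the true sum is known
    exact = 10**6 * float(np.float32(0.1))
    print(abs(naive_sum(a, np.float32) - exact))   # substantial error
    print(abs(naive_sum(a, np.float64) - exact))   # essentially exact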
Sebastian Haase wrote:
I know that having too much knowledge of the details often makes one forget what the newcomers will do and expect.

Robert Kern wrote:
Please be more careful with such accusations. Repeated frequently, they can become quite insulting.

Sebastian Haase wrote:
I did not mean to insult […]
Sebastian Haase wrote:
This was not supposed to be a scientific statement -- I'm (again) thinking of our students, who do not always appreciate the full complexity of computational numerics, data types, and such.

Robert Kern wrote:
They need to appreciate the complexity of computational numerics if they are […]
On Wed, Sep 20, 2006 at 03:01:18AM -0500, Robert Kern wrote:
Let me offer a third path: the algorithms used for .mean() and .var() are substandard. There are much better incremental algorithms that entirely avoid the need to accumulate such large (and therefore precision-losing) […]
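The incremental algorithms in question are in the family of Welford's method: one pass, constant memory, no large partial sums to lose precision in. Robert's actual code is clipped from the archive; a standard textbook sketch (assumes non-empty input):

    def incremental_mean_var(a):
        # Welford's one-pass algorithm: maintain a running mean and a
        # running sum of squared deviations from that mean.
        mean = 0.0
        m2 = 0.0
        n = 0
        for x in a:
            n += 1
            delta = x - mean
            mean += delta / n
            m2 += delta * (x - mean)   # note: uses the *updated* mean
        return mean, m2 / n            # population variance, like numpy's ddof=0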
Sebastian Haase wrote:
The best I can hope for is a sound default for most (practical) cases... I still think that 80-bit vs. 128-bit vs. 96-bit is rather academic for most people ... most people seem to use only float64, and then there are some that use float32 (like us) ...

I fully agree with […]
Tim Hochberg wrote:
[Sorry, this version should have less munged formatting, since I clipped the comments. Oh, and the Kahan sum algorithm was grabbed from Wikipedia, not MathWorld.]
Sebastian Haase wrote:
Hello all,
I just had someone from my lab coming to my desk saying:
"My god - SciPy is really stupid. An array with only positive numbers claims to have a negative mean !!?"
I was asking about this before ... the reason was of course that her array was of dtype […]
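The dtype is clipped from the message above; an int32 accumulator is one way the symptom can be reproduced exactly, sketched here:

    import numpy as np

    a = np.full(70000, 40000, dtype=np.int32)   # all entries positive
    s = a.sum(dtype=np.int32)    # 2.8e9 overflows the 32-bit accumulator
    print(s)                     # -1494967296: wrapped around to negative
    print(s / a.size)            # hence a "negative mean" from positive data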
On Tuesday 19 September 2006 15:48, Travis Oliphant wrote:
Sebastian Haase wrote:
<snip>
can we please change dtype to default to float64 !?

The default is float64 now (as long as you are not using numpy.oldnumeric). I suppose, more appropriately, we could reduce over float for integer […]
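Reducing over float is indeed what rescues integer input: in current numpy, mean() on an integer array already accumulates in float64, and sum() accepts an explicit accumulator dtype:

    import numpy as np

    a = np.full(70000, 40000, dtype=np.int32)
    print(a.mean())                  # accumulates in float64: 40000.0
    print(a.sum(dtype=np.float64))   # explicit float64 accumulator: 2800000000.0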
Sebastian Haase wrote:
(I don't know how to say this for complex types !? Are real and imag treated separately / independently here ?)

Yes. For mean(), there's really no alternative. Scalar variance is not a well-defined concept for complex numbers, but treating the real and imaginary parts […]
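Concretely (in current numpy, at least): the mean falls out componentwise, and the variance is computed as the mean of |z - mean|**2, which equals the variance of the real parts plus the variance of the imaginary parts:

    import numpy as np

    z = np.array([1 + 2j, 3 - 1j, -2 + 0.5j])
    # mean(): real and imaginary parts average independently
    print(z.mean(), z.real.mean() + 1j * z.imag.mean())
    # var(): mean of |z - z.mean()|**2 == var(real) + var(imag)
    print(z.var(), z.real.var() + z.imag.var())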
Charles R Harris wrote:
Speed depends on the architecture. Float is a trifle slower than double on my Athlon64, but faster on PPC750. I don't know about other machines. I think there is a good argument for accumulating in double and converting to float for output if required.

Yes there is.
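At the Python level, that policy -- accumulate in double, convert to float for output -- is a one-liner today, whatever a C implementation would do internally; a sketch:

    import numpy as np

    a = np.random.default_rng(0).random(10**7, dtype=np.float32)
    # accumulate in double precision, hand back a float32 result
    m = np.float32(a.mean(dtype=np.float64))
    print(m, m.dtype)   # float32 output, float64 accumulation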
Sebastian Haase wrote:
I still would argue that getting a good (smaller rounding errors) answer should be the default -- if speed is wanted, then *that* could still be specified explicitly using dtype=float32 (which would also be a possible choice for int32 input).

So you are […]
Sebastian Haase wrote:
We are only talking about people that will a) work […]