Hi,
The following example demonstrates a rather unexpected result:

>>> import numpy
>>> x = numpy.array(complex(1.0, 1.0), numpy.object)
>>> print x.real
(1+1j)
>>> print x.imag
0
Shouldn't real and imag return an error in such a situation?
Thanks,
Mike
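The surprise above appears to come from the object dtype: NumPy treats an object array as a non-complex array, so .real returns the array itself and .imag falls back to zeros rather than inspecting each element. A minimal sketch of the expected behavior when a native complex dtype is used instead (assuming any reasonably recent NumPy):

```python
import numpy as np

# With a native complex dtype, real and imag behave elementwise as expected.
x = np.array(complex(1.0, 1.0))  # dtype is complex128
print(x.real)  # 1.0
print(x.imag)  # 1.0
```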
Hi,
Are there any plans to add support for decimal floating point
arithmetic, as defined in the 2008 revision of the IEEE 754 standard
[0], in numpy?
Thanks for any info.
Best wishes,
Mike
[0] http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=4610935&tag=1
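For context, Python's standard-library decimal module already provides decimal floating point arithmetic (it follows the General Decimal Arithmetic Specification, which tracks the decimal formats of IEEE 754-2008), so it can serve as a scalar reference point even though it is not vectorized like NumPy. A small sketch:

```python
from decimal import Decimal

# Binary floats cannot represent 0.3 or 0.1 exactly, so the result is a
# small nonzero residue; decimal arithmetic gives exactly zero here.
print(0.3 / 3.0 - 0.1)                                   # tiny nonzero value
print(Decimal('0.3') / Decimal('3.0') - Decimal('0.1'))  # 0.0
```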
On Wed, 8 Sep 2010 22:20:30 +0200, Sandro Tosi wrote:
On Wed, Sep 8, 2010 at 22:10, Michael Gilbert
michael.s.gilb...@gmail.com wrote:
Here is an example:

>>> 0.3/3.0 - 0.1
-1.3877787807814457e-17
>>> mpmath.mpf('0.3')/mpmath.mpf('3.0') - mpmath.mpf('0.1')
mpf
Hi,
I've been using numpy's float96 class lately, and I've run into some
strange precision errors. See the example below:

>>> import sys
>>> import numpy
>>> numpy.version.version
'1.5.0'
>>> sys.version
'3.1.2 (release31-maint, Jul 8 2010, 01:16:48) \n[GCC 4.4.4]'
>>> x = numpy.array([0.01], numpy.float32)
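The snippet is cut off, but a likely culprit (as the reply below also suggests): a Python literal like 0.01 is parsed to a 64-bit float first, so casting to a wider type cannot restore the lost digits, and a float32 array loses even more. A small sketch using float32 vs. float64:

```python
import numpy as np

# 0.01 has no exact binary representation; float32 keeps fewer bits of it
x32 = np.float32(0.01)
print('%.20f' % np.float64(x32))   # float32 rounding error is clearly visible
print('%.20f' % np.float64(0.01))  # float64 is much closer, but still inexact
```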
On Wed, 1 Sep 2010 21:15:22 + (UTC), Pauli Virtanen wrote:
Wed, 01 Sep 2010 16:26:59 -0400, Michael Gilbert wrote:
I've been using numpy's float96 class lately, and I've run into some
strange precision errors.
[clip]
>>> x = numpy.array([0.01], numpy.float96)
[clip]
I would expect
On Thu, 8 Apr 2010 09:06:16 +0200, ioannis syntychakis wrote:
Thanks for all your answers.
Now I can get all the values above 150, but I would also like to have
their positions in the matrix.
Example:

[[ 1.  4.  5.  6.  7.  1.]
 [ 2.  5.  7.  8.  9.  3.]
 [ 3.  5.  7.  1.  3.  7.]]

So, if I now look for
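A sketch of one way to get the positions as well as the values, assuming the data is a 2-D NumPy array like the example above (threshold 5 here so the small example has matches):

```python
import numpy as np

a = np.array([[1., 4., 5., 6., 7., 1.],
              [2., 5., 7., 8., 9., 3.],
              [3., 5., 7., 1., 3., 7.]])

positions = np.argwhere(a > 5)  # one (row, col) pair per matching entry
values = a[a > 5]               # the matching values, in the same (C) order

print(positions)
print(values)
```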
On Wed, 7 Apr 2010 16:40:24 +0200, ioannis syntychakis wrote:
Hello everybody,
I am new to this mailing list and to Python, but I am working on something and I
need your help.
I have a very big matrix. What I want is to search that matrix for values
above (for example) 150. If there are
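The question is cut off, but the usual NumPy idiom for selecting values above a threshold is a boolean mask; a minimal sketch (the random matrix here is just a stand-in for the big one):

```python
import numpy as np

a = np.random.randint(0, 300, size=(1000, 1000))  # stand-in for the big matrix
mask = a > 150   # boolean array, True where the condition holds
above = a[mask]  # 1-D array of all values above 150
print(above.size, 'values above 150')
```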
Hi,
I am applying Monte Carlo to a problem involving mixed deterministic
and random values. In order to avoid a lot of special handling and
corner cases, I am using numpy arrays full of a single value to
represent the deterministic quantities.
Anyway, I found that the standard deviation
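The message is cut off, but the constant-array approach itself is easy to sanity-check; a minimal sketch (the values and sizes are illustrative):

```python
import numpy as np

const = np.full(10000, 2.5)                 # deterministic quantity as an array
noise = np.random.normal(0.0, 1.0, 10000)   # random quantity
total = const + noise

print(const.std())  # 0.0: a constant array has no spread
print(total.std())  # close to 1.0: only the random part contributes variance
```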
hi,
when using numpy.random.multivariate_normal, would it make sense to warn
the user that they have entered a non-physical covariance matrix? I was
recently working on a problem and getting very strange results until I
finally realized that I had actually entered a bogus covariance matrix.
its
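For what it's worth, later NumPy versions grew exactly this kind of guard: multivariate_normal accepts a check_valid argument ('warn', 'raise', or 'ignore'), added well after this thread (around NumPy 1.10, if I recall correctly). A sketch:

```python
import numpy as np

mean = np.zeros(2)
bad_cov = np.array([[1.0, 2.0],
                    [2.0, 1.0]])  # eigenvalues 3 and -1: not positive semidefinite

try:
    np.random.multivariate_normal(mean, bad_cov, check_valid='raise')
except ValueError as err:
    print('rejected:', err)
```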
On Thu, 26 Mar 2009 16:56:13 -0700 Lutz Maibaum wrote:
Hello,
I just started to use python and numpy for some numerical analysis. I
have a question about the definition of the inverse Fourier transform.
The user guide gives the formula (p. 180):
x[m] = Sum_k X[k] exp(j 2 pi k m / n)
where
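The question is cut off, but the usual point of confusion with that formula is the 1/n normalization: numpy.fft.ifft includes it, so the book's sum matches NumPy's ifft up to that factor. A quick numerical check:

```python
import numpy as np

n = 8
X = np.random.random(n) + 1j * np.random.random(n)
k = np.arange(n)

# Direct evaluation of x[m] = (1/n) * Sum_k X[k] exp(j 2 pi k m / n)
direct = np.array([(X * np.exp(2j * np.pi * k * m / n)).sum() / n
                   for m in range(n)])

print(np.allclose(direct, np.fft.ifft(X)))  # True: ifft carries the 1/n factor
```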
On Mon, 9 Mar 2009 18:21:45 -0400 Michael S. Gilbert wrote:
On Mon, 9 Mar 2009 21:45:42 +0100, Mark Bakker wrote:
Hello -
I tried to figure this out from the list, but haven't succeeded yet.
I have a simple FORTRAN binary file.
It contains:
1 integer
1 float
1 array with 16
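The description is cut off, but reading a sequential unformatted FORTRAN file usually comes down to the 4-byte record-length markers that most compilers (e.g. gfortran) write around each record. The marker size and endianness are compiler- and platform-dependent, so treat this as a sketch; the file name and exact field layout here are made up to mirror the description (1 integer, 1 float, 1 array of 16 floats):

```python
import struct
import numpy as np

def read_record(f):
    """Read one Fortran sequential record framed by 4-byte length markers."""
    n = struct.unpack('<i', f.read(4))[0]  # leading record length
    payload = f.read(n)
    f.read(4)                              # trailing record length (ignored)
    return payload

# Build a demo file with the assumed layout, then read it back.
with open('fortran_demo.bin', 'wb') as f:
    for payload in (struct.pack('<i', 7),
                    struct.pack('<f', 3.5),
                    np.arange(16, dtype='<f4').tobytes()):
        f.write(struct.pack('<i', len(payload)) + payload +
                struct.pack('<i', len(payload)))

with open('fortran_demo.bin', 'rb') as f:
    i = struct.unpack('<i', read_record(f))[0]
    x = struct.unpack('<f', read_record(f))[0]
    arr = np.frombuffer(read_record(f), dtype='<f4')

print(i, x, arr.shape)
```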
On Sun, 1 Mar 2009 16:12:14 -0500 Gideon Simpson wrote:
So I have some data sets of about 16 floating point numbers stored
in text files. I find that loadtxt is rather slow. Is this to be
expected? Would it be faster if it were loading binary data?
I have run into this as well.
On Sun, 1 Mar 2009 14:29:54 -0500 Michael Gilbert wrote:
I have rewritten loadtxt to be smarter about allocating memory, but
it is slower overall and doesn't support all of the original
arguments/options (yet).
I had meant to say that my version is slower for smaller data sets (when
you
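For comparison, a rough sketch of a text vs. binary round-trip using in-memory buffers; exact timings will vary, but np.load skips text parsing entirely, which is where loadtxt spends most of its time:

```python
import io
import time
import numpy as np

data = np.random.random((100000, 3))

# Text round-trip via savetxt/loadtxt
text_buf = io.BytesIO()
np.savetxt(text_buf, data)
text_buf.seek(0)
t0 = time.time()
from_text = np.loadtxt(text_buf)
text_time = time.time() - t0

# Binary round-trip via save/load (.npy format)
bin_buf = io.BytesIO()
np.save(bin_buf, data)
bin_buf.seek(0)
t0 = time.time()
from_bin = np.load(bin_buf)
bin_time = time.time() - t0

print('loadtxt: %.3fs  np.load: %.3fs' % (text_time, bin_time))
```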
According to wikipedia [1], some common Mersenne twister algorithms
use a linear congruential generator (LCG) to generate seeds. LCGs have
been known to produce poor random numbers. Does numpy's Mersenne
twister do this? And if so, is this potentially a problem?
Bruce Carneal did some tests of robustness and speed for various normal
generators. I don't know what his final tests showed for Box-Muller. IIRC,
it had some failures but nothing spectacular. The tests were pretty
stringent and based on using the erf to turn the normal distribution into a
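The description is cut off, but the test idea is straightforward: pushing normal samples through the normal CDF, Phi(x) = (1 + erf(x/sqrt(2)))/2, should yield uniform variates on [0, 1], which are easy to test. A rough sketch of the transform:

```python
import math
import numpy as np

z = np.random.normal(size=100000)
# Normal CDF via erf; for truly normal input the result is uniform on [0, 1]
u = 0.5 * (1.0 + np.vectorize(math.erf)(z / math.sqrt(2.0)))

print(u.mean())  # should be near 0.5
print(u.var())   # should be near 1/12 ~ 0.0833
```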
Exactly, change task_helper.py to

import os
import numpy as np

def task(x):
    print "Hi, I'm", os.getpid()
    return np.random.random(x)

and note the output:
Hi, I'm 16197
Hi, I'm 16198
Hi, I'm 16199
Hi, I'm 16199
[ 0.58175647 0.16293922 0.30488182
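The repeated PID and draws above are the likely symptom: forked workers inherit the parent's RNG state, and identical state yields identical streams. A minimal sketch of cause and remedy (the seed values are arbitrary):

```python
import numpy as np

# Two generators with the same state, as forked workers would have
a = np.random.RandomState(1234)
b = np.random.RandomState(1234)
print((a.random_sample(3) == b.random_sample(3)).all())  # True: identical streams

# Remedy: seed each worker differently, e.g. from os.urandom or the PID
c = np.random.RandomState(1234)
d = np.random.RandomState(5678)
print((c.random_sample(3) == d.random_sample(3)).any())  # False: distinct streams
```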
Hello,
I have been reading that there may be potential issues with the
Box-Muller transform, which is used by the numpy.random.normal()
function. Supposedly, since x1 and x2 are not independent variables,
the individual elements (corresponding to x1 and x2) of the
distribution
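For reference, a minimal sketch of the Box-Muller transform itself: the two outputs share the same radius term, which is the source of the independence concern, although in exact arithmetic the resulting pair is provably independent standard normal. A quick empirical check:

```python
import numpy as np

n = 100000
u1 = 1.0 - np.random.random(n)  # in (0, 1], keeps log() finite
u2 = np.random.random(n)

r = np.sqrt(-2.0 * np.log(u1))  # shared radius term
z1 = r * np.cos(2.0 * np.pi * u2)
z2 = r * np.sin(2.0 * np.pi * u2)

print(z1.mean(), z1.std())        # near 0 and 1
print(np.corrcoef(z1, z2)[0, 1])  # near 0: the pair is uncorrelated
```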