Hello,
else:
val11[i][j], val22[i][j] = integrate.quad(lambda x: F1(x)*F2(x), 0, pi)
But this calculation takes a very long time, theoretically about one hour.
Is there a better way to compute it quickly, something like
[ F(i) for i in xlist ]?
Thank you for your answers.
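One common way to speed this up is to replace the per-pair calls to integrate.quad with a fixed-order quadrature rule evaluated on arrays, so all integrals are computed in one vectorized contraction. A minimal sketch, assuming hypothetical integrand factors F1(x, i) and F2(x, j) (the originals are not shown in the post):

```python
import numpy as np
from numpy.polynomial.legendre import leggauss

# Hypothetical integrand factors -- replace with your own F1 and F2.
def F1(x, i):
    return np.sin((i + 1) * x)

def F2(x, j):
    return np.cos(x) ** j

# Fixed-order Gauss-Legendre rule mapped from [-1, 1] onto [0, pi].
nodes, weights = leggauss(50)
x = 0.5 * np.pi * (nodes + 1.0)
w = 0.5 * np.pi * weights

N = 4
# Evaluate each factor at all quadrature nodes at once, then contract
# over the node axis: val[i, j] = sum_k F1(x_k, i) * F2(x_k, j) * w_k
A = np.array([F1(x, i) for i in range(N)])   # shape (N, 50)
B = np.array([F2(x, j) for j in range(N)])   # shape (N, 50)
val = np.einsum('ik,jk,k->ij', A, B, w)
```

Unlike quad, a fixed rule gives no adaptive error estimate, so the order (50 here) has to be chosen large enough for the smoothness of the integrands.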
Chris Barker wrote:
for consistency with the rest of the numpy types
Then why do numpy.complex64(A), numpy.complex128(A),
numpy.uint8(A), ... all work with arrays? It is very convenient that
they do, and awkward that numpy.complex(A) is the only one that doesn't.
The syntax numpy.complex(A) seems to be the most natural and obvious
thing a user would want for casting an array A to complex values.
Expressions like A.astype(complex), array(A, dtype=complex) and
numpy.complex128(A) are less obvious, especially the last two,
which look a bit far-fetched.
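For reference, the spellings mentioned above that do accept arrays are sketched below; note that in current NumPy the numpy.complex name (which was just an alias for Python's builtin complex) has been removed entirely, so these are the portable forms:

```python
import numpy as np

A = np.array([1.0, 2.0, 3.0])

# Each of these casts the whole array to complex; unlike the old
# np.complex (a plain alias for the builtin complex, since removed),
# they all accept array arguments.
B1 = A.astype(complex)
B2 = np.array(A, dtype=complex)
B3 = np.complex128(A)
```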
if you want to write to a string, why not use .tostring()?
Because A.tostring() returns the binary data, while I would like the text
representation.
More precisely, I would like to use A.tofile(sep='\t').
Yes, this is a known shortcoming of .tofile().
Is it worth filing a bug report?
--
Ah, I found a workaround: savetxt() can work with a StringIO
- savetxt(file_buffer, A)
This is only a workaround. I still think A.tofile() should be capable of
writing into a StringIO.
--
O.C.
___
NumPy-Discussion mailing list
Hello,
As said in the subject, the following code produces an error. Is this normal?

from StringIO import StringIO   # Python 2
from numpy import r_

A = r_[1]
file_buffer = StringIO()
A.tofile(file_buffer)
IOError: first argument must be a string or open file
Hello,
I observe the following behavior:
numpy.r_[True, False]  ->  array([1, 0], dtype=int8)
numpy.r_[True]         ->  array([ True], dtype=bool)
I would expect the first line to give a boolean array:
array([ True, False], dtype=bool)
Is it normal? Is it a bug?
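As a workaround (the int8 promotion shown above comes from the old NumPy version in this thread), passing the values straight to array() preserves the boolean dtype:

```python
import numpy as np

# array() keeps dtype bool regardless of how r_ promotes scalars.
direct = np.array([True, False])
```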
--
O.C.
numpy.__version__ =
Hello,
I have two lists of numpy matrices, LM = [M_i, i=1..N] and LN = [N_i, i=1..N],
and I would like to compute the list of the products LP = [M_i * N_i,
i=1..N].
I can do :
P = []
for i in range(N):
    P.append(LM[i] * LN[i])
But this is not vectorized. Is there a faster solution ?
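One vectorized approach (requiring a much newer NumPy than the 1.1 of this thread) is to stack each list into a 3-D array and use batched matmul, which computes all N products in one call. A sketch with stand-in random matrices:

```python
import numpy as np

N = 5
# Stand-ins for the two lists of matrices (inner shapes must match).
rng = np.random.default_rng(0)
LM = [rng.random((3, 4)) for _ in range(N)]
LN = [rng.random((4, 2)) for _ in range(N)]

# Stacking each list into a 3-D array lets one matmul call compute
# all N products at once: P[i] == LM[i] @ LN[i].
P = np.matmul(np.array(LM), np.array(LN))   # shape (N, 3, 2)
```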
Can
Hello,
I have data files where the decimal separator is a comma. Can I import this
data with numpy.loadtxt?
Notes :
- I tried to set the locale LC_NUMERIC=fr_FR.UTF-8 but it didn't change
anything.
- Python 2.5.2, Numpy 1.1.0
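One approach that works without touching the locale is to give loadtxt a converter that rewrites the decimal comma per column. A sketch with hypothetical tab-separated data:

```python
import numpy as np
from io import StringIO

# Hypothetical tab-separated data using decimal commas.
data = "1,5\t2,25\n3,0\t4,75\n"

def comma_float(s):
    # Older loadtxt versions pass bytes to converters, newer ones pass str.
    if isinstance(s, bytes):
        s = s.decode()
    return float(s.replace(',', '.'))

A = np.loadtxt(StringIO(data), delimiter='\t',
               converters={0: comma_float, 1: comma_float})
```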
Have a nice day,
O.C.
Thank you for the answers,
I am now disturbed by this result :
In [1]: import numpy
In [2]: numpy.fromstring('abcd', dtype=float, sep=' ')
Out[2]: array([ 0.])
Shouldn't it raise a ValueError? (because 'abcd' is not a float)
Regards,
O.C.
Hello,
I would like to build a big ndarray by adding rows progressively.
I considered the following functions : append, concatenate, vstack and the like.
It appears to me that they all create a new array (which requires twice the
memory).
Is there a method for just adding a row to an ndarray in place?
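A common pattern that avoids repeated reallocation is to collect the rows in a Python list (amortised O(1) appends) and build the ndarray once at the end:

```python
import numpy as np

# Appending to a Python list is cheap; build the ndarray only once
# at the end instead of reallocating it on every row.
rows = []
for i in range(4):
    rows.append(np.arange(3) + i)   # stand-in for a computed row
A = np.vstack(rows)                  # single allocation, shape (4, 3)
```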
Shouldn't it raise a ValueError? (because 'abcd' is not a float)
I don't think so, but it shouldn't return a zero either.
That call should mean: scan this whitespace-separated string for as many
floating point numbers as it has. There are none, so it should return an
empty array.
Hello,
the following code drives Python into an endless loop:

import numpy
numpy.fromstring('abcd', dtype=float, sep=' ')

I think numpy.fromstring is lacking adequate error handling for
that case.
Is it a bug ?
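Until this is fixed in NumPy itself, a small wrapper (not part of NumPy's API, just a sketch of the error behaviour the poster expects) can give a clean ValueError instead of a hang or a bogus zero:

```python
import numpy as np

def floats_from_text(s):
    """Parse a whitespace-separated string of floats, raising ValueError
    on non-numeric input instead of looping or returning a bogus zero."""
    return np.array([float(tok) for tok in s.split()], dtype=float)
```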
Regards,
--
O.C.
Python 2.5.2
Debian Lenny