efiring wrote:
On 05/14/2010 11:03 AM, Dr. Phillip M. Feldman wrote:
[snip] It is perfectly reasonable to have an algorithm that uses values sorted along the last axis, even if that dimension sometimes turns out to be one.
Eric
Excellent point! I agree. Case closed.
Phillip
Robert Kern-2 wrote:
On Wed, May 12, 2010 at 20:19, Dr. Phillip M. Feldman
pfeld...@verizon.net wrote:
When operating on an array whose last dimension is unity, the default
behavior of argsort is not very useful:
In [6]: x = random.random((4,1))
In [7]: shape(x)
Out[7]: (4, 1)
In [8]:
Warren Weckesser-3 wrote:
A couple questions:
How many floats will you be storing?
When you test for membership, will you want to allow for a numerical
tolerance, so that if the value 1 - 0.7 is added to the set, a test for
the value 0.3 returns True? (0.3 is actually
When operating on an array whose last dimension is unity, the default
behavior of argsort is not very useful:
In [6]: x = random.random((4,1))
In [7]: shape(x)
Out[7]: (4, 1)
In [8]: argsort(x)
Out[8]:
array([[0],
       [0],
       [0],
       [0]])
In [9]: argsort(x, axis=0)
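The behavior under discussion is easy to reproduce: the default axis is the last one, so each length-1 row of a (4, 1) array sorts trivially to [0], while axis=0 gives the useful ordering. A minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.random((4, 1))

# Default: argsort along the last axis. Each row has length 1,
# so every row's sort order is trivially [0].
print(np.argsort(x))        # a (4, 1) column of zeros

# axis=0 sorts down the column, giving the actual ordering.
order = np.argsort(x, axis=0)
print(x[order[:, 0], 0])    # values rearranged into ascending order
```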
I have an application that involves managing sets of floats. I can use
Python's built-in set type, but a data structure that is optimized for
fixed-size objects that can be compared without hashing should be more
efficient than a more general set construct. Is something like this
available?
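One way to get the hash-free, tolerance-aware membership asked about here is a sorted array plus binary search; `contains` below is a hypothetical helper, not an existing library routine, and the absolute-tolerance scheme is only one possible design:

```python
import numpy as np

def contains(sorted_vals, x, tol=1e-9):
    """Approximate membership test on a sorted 1-D float array.

    Hypothetical helper: returns True if some stored value lies
    within `tol` of `x`. O(log n) per query via binary search.
    """
    i = np.searchsorted(sorted_vals, x)
    for j in (i - 1, i):
        if 0 <= j < len(sorted_vals) and abs(sorted_vals[j] - x) <= tol:
            return True
    return False

vals = np.sort(np.array([1 - 0.7, 0.5, 2.0]))
print(contains(vals, 0.3))   # True, even though 1 - 0.7 != 0.3 exactly
```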
--
Anne Archibald-2 wrote:
on a 32-bit machine,
the space overhead is roughly a 32-bit object pointer or two for each
float, plus about twice the number of floats times 32-bit pointers for
the table.
Hello Anne,
I'm a bit confused by the above. It sounds as though the hash table approach
When I issue the command
np.lookfor('bessel')
I get the following:
Search results for 'bessel'
---------------------------
numpy.i0
    Modified Bessel function of the first kind, order 0.
numpy.kaiser
    Return the Kaiser window.
numpy.random.vonmises
    Draw samples from a von Mises
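Of the hits above, only numpy.i0 is an actual Bessel function; the general Bessel families live in scipy.special rather than NumPy. A quick NumPy-only check of i0:

```python
import numpy as np

# numpy.i0 is the modified Bessel function of the first kind,
# order 0; I0(0) = 1 exactly, and I0(1) is about 1.26607.
vals = np.i0(np.array([0.0, 1.0]))
print(vals)
```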
Anne Archibald wrote:
2009/11/29 Dr. Phillip M. Feldman pfeld...@verizon.net:
All of the statistical packages that I am currently using and have used in the past (Matlab, Minitab, R, S-plus) calculate standard deviation using the sqrt(1/(n-1)) normalization, which gives a result
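The two conventions under discussion are both available in NumPy through the ddof parameter; the default is the 1/n normalization, and ddof=1 gives the sqrt(1/(n-1)) convention of Matlab and R:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])

# NumPy's default divides the sum of squared deviations by n:
print(np.std(x))           # sqrt(1.25)

# ddof=1 divides by n - 1, the Matlab/Minitab/R/S-plus convention:
print(np.std(x, ddof=1))   # sqrt(5/3)
```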
Pauli Virtanen-3 wrote:
I'd think that downcasting is different from dropping the imaginary part.
There are many ways (in fact, an unlimited number) to downcast from complex to real. Here are three possibilities:
- Take the real part.
- Take the magnitude (root-mean-square of the real
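The first two possibilities in the list correspond directly to existing NumPy operations; a small sketch of what each downcast produces:

```python
import numpy as np

z = np.array([3 + 4j, 1 - 1j])

real_part = z.real     # take the real part
magnitude = np.abs(z)  # take the magnitude |z|
# Other choices exist too, e.g. the phase angle via np.angle(z).
print(real_part, magnitude)
```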
Pauli Virtanen-3 wrote:
Nevertheless, I can't really regard dropping the imaginary part a
significant issue.
I am amazed that anyone could say this. For anyone who works with Fourier
transforms, or with electrical circuits, or with electromagnetic waves,
dropping the imaginary part is a
David Warde-Farley-2 wrote:
A less harmful solution (if a solution is warranted, which is for the Council of the Elders to decide) would be to treat the Python complex type as a special case, so that the .real attribute is accessed instead of trying to cast to float. There are
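The distinction being proposed is visible in plain Python: float() refuses a complex outright, while the .real attribute is always available. A minimal illustration:

```python
z = 3 + 4j

# Casting a Python complex to float raises TypeError.
raised = False
try:
    float(z)
except TypeError:
    raised = True

# The .real attribute extracts the real part without any cast.
print(raised, z.real)   # True 3.0
```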
Stéfan van der Walt wrote:
Would it be possible to, optionally, throw an exception?
S.
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion
I'm certain that it is
.
Dr. Phillip M. Feldman
--
View this message in context:
http://old.nabble.com/Assigning-complex-values-to-a-real-array-tp22383353p26705737.html
Sent from the Numpy-discussion mailing list archive at Nabble.com.
Robert Kern-2 wrote:
[snip]
Downcasting data is a necessary operation sometimes. We explicitly
made a choice a long time ago to allow this.
Robert Kern
This might be the time to recognize that that was a bad choice and reverse it. It is not clear to me why downcasting from complex to
explicitly downcast from complex to float, and that if he/she fails to do
this, that an exception be triggered.
Dr. Phillip M. Feldman
All of the statistical packages that I am currently using and have used in
the past (Matlab, Minitab, R, S-plus) calculate standard deviation using the
sqrt(1/(n-1)) normalization, which gives a result that is unbiased when
sampling from a normally-distributed population. NumPy uses the
I opened ticket #1302 to make the following enhancement request:
I'd like to see hstack and vstack promote 1-D arguments to 2-D when this is
necessary to make the dimensions match. In the following example, c_ works
as expected while hstack does not:
In [8]: x
Out[8]:
array([[1, 2, 3],
       [4,
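The promotion that c_ performs can be reproduced for hstack by adding the missing axis explicitly; a small sketch of the difference:

```python
import numpy as np

X = np.array([[1, 2], [3, 4]])
v = np.array([5, 6])             # 1-D

# np.c_ promotes the 1-D argument to a column, so shapes match:
print(np.c_[X, v])               # shape (2, 3)

# np.hstack needs the extra axis added by hand:
print(np.hstack([X, v[:, None]]))
```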
I've defined the following one-line function that uses numpy.where:
def sin_half_period(x): return where(0.0 <= x <= pi, sin(x), 0.0)
When I try to use this function, I get an error message:
In [4]: z=linspace(0,2*pi,9)
In [5]: sin_half_period(z)
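The error arises because a chained comparison like 0.0 <= x <= pi implicitly calls bool() on an intermediate boolean array, which is ambiguous. Combining two parenthesized comparisons elementwise with & works:

```python
import numpy as np

def sin_half_period(x):
    # Parenthesize each comparison and combine elementwise with &;
    # a chained 0.0 <= x <= np.pi would call bool() on an array and
    # raise "The truth value of an array ... is ambiguous".
    return np.where((0.0 <= x) & (x <= np.pi), np.sin(x), 0.0)

z = np.linspace(0, 2 * np.pi, 9)
print(sin_half_period(z))   # sin on [0, pi], zeros beyond pi
```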
I've been reading the online NumPy tutorial at the following URL:
http://numpy.scipy.org/numpydoc/numpy-10.html
When I try the following example, I get an error message:
In [1]: a = arange(10)
In [2]: a.itemsize()
---------------------------------------------------------------------------
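The error here is that itemsize is an attribute, not a method; the trailing parentheses try to call the integer it returns. Dropping them works:

```python
import numpy as np

a = np.arange(10)

# itemsize is an attribute giving bytes per element
# (platform-dependent for the default integer dtype).
print(a.itemsize)

# Calling it, as in a.itemsize(), raises
# TypeError: 'int' object is not callable.
```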
I have a 1-D array and would like to generate a list of indices for which a
given condition is satisfied. What is the cleanest way to do this?
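np.where with a boolean condition (or, equivalently, np.nonzero) returns exactly these indices; a minimal example:

```python
import numpy as np

x = np.array([1.0, -2.0, 3.0, 0.5, -1.0])

# Indices at which the condition holds:
idx = np.where(x > 0)[0]     # or np.nonzero(x > 0)[0]
print(idx)                   # [0 2 3]
print(idx.tolist())          # as a plain Python list
```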
I'd like to be able to make a slice of a 3-dimensional array, doing something
like the following:
Y= X[A, B, C]
where A, B, and C are lists of indices. This works, but has an unexpected
side-effect. When A, B, or C is a length-1 list, Y has fewer dimensions than
X. Is there a way to do the
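The dimension loss happens because plain fancy indexing broadcasts the three lists together; np.ix_ builds an open mesh so each list keeps its own axis, even when it has length 1. A small sketch:

```python
import numpy as np

X = np.arange(24).reshape(2, 3, 4)
A, B, C = [0], [1, 2], [3]

# Plain fancy indexing broadcasts A, B, C to a common shape,
# so the length-1 lists collapse and Y is 1-D:
print(X[A, B, C].shape)            # (2,)

# np.ix_ keeps one axis per index list, so Y stays 3-D:
print(X[np.ix_(A, B, C)].shape)    # (1, 2, 1)
```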
With Python/NumPy, is there a way to get the maximum element of an array and
also the index of the element having that value, at a single shot? (One can do this in Matlab via a statement like the following: [x_max, ndx] = max(x).)
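The NumPy equivalent is argmax followed by one indexing step, which costs a single pass plus an O(1) lookup:

```python
import numpy as np

x = np.array([3.0, 7.0, 2.0])

ndx = np.argmax(x)   # index of the maximum element
x_max = x[ndx]       # the maximum itself
print(x_max, ndx)    # 7.0 1
```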
Although I've used Matlab for many years and am quite new to Python, I'm
already convinced that the Python/NumPy combination is more powerful and
flexible than the Matlab base, and that it generally takes less Python code
to get the same job done. There is, however, at least one thing that is
I'm using the Enthought Python Distribution. When I define a matrix and
transpose it, it appears that the result is no longer a matrix (see below).
This is both surprising and disappointing. Any suggestions will be
appreciated.
In [16]: A=matrix([[1,2,3],[4,5,6],[7,8,9]])
In [17]:
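For reference, with the matrix as defined in In [16], both .T and np.transpose return an np.matrix in the NumPy versions I am aware of, so a type check like the one below may help localize where the matrix type is being lost (note that np.matrix is nowadays discouraged in favor of plain ndarrays):

```python
import numpy as np

A = np.matrix([[1, 2, 3], [4, 5, 6], [7, 8, 9]])

# The transpose of an np.matrix is still an np.matrix:
print(type(A.T))
print(A.T.shape)   # (3, 3)
```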