Michael Katz michaeladamkatz at yahoo.com writes:
Yes, thanks, np.in1d is what I needed. I didn't know how to find that.
Did you check in the documentation? If so, where did you check? Would you have
found it if it was in the 'See also' section of where()?
Ideally, I would like in1d to always be the right answer to this problem. It
should be easy to put in an if statement to switch to a kern_in()-type function
in the case of large ar1 but small ar2. I will do some timing tests and make a
patch.
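A sketch of the size-based switch described above (the wrapper name and the threshold are made up for illustration, not numpy API):

```python
import numpy as np

def in1d_auto(ar1, ar2):
    # Hypothetical wrapper: fall back to a Python-set membership loop
    # when ar1 is much larger than ar2, where the sort-based np.in1d
    # approach can lose; otherwise defer to np.in1d.
    ar1 = np.asarray(ar1)
    ar2 = np.asarray(ar2)
    if len(ar1) > 10 * len(ar2):          # threshold is a guess
        values = set(ar2.tolist())
        return np.fromiter((x in values for x in ar1),
                           dtype=bool, count=len(ar1))
    return np.in1d(ar1, ar2)

mask = in1d_auto(np.array([0, 1, 2, 5, 0]), np.array([0, 2]))
print(mask)  # [ True False  True False  True]
```

The real crossover point would have to come out of the timing tests.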
I uploaded a timing test and a patch
Gael Varoquaux gael.varoquaux at normalesup.org writes:
Let's say that you have a dataset that is in a 3D array, where axis 0
corresponds to days, axis 1 to hours of the day, and axis 2 to
temperature, you might want to have the mean of the temperature in each
day, which would be in current
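The day/hours example might look like this sketch (shapes invented): temperatures stored per day and hour, averaged over the hours axis:

```python
import numpy as np

# Hypothetical data: temperatures for 4 days x 24 hours.
temps = np.arange(4 * 24, dtype=float).reshape(4, 24)

# Mean temperature of each day: reduce over the hours axis (axis 1).
daily_mean = temps.mean(axis=1)
print(daily_mean.shape)  # (4,)
```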
Rob Speer rspeer at MIT.EDU writes:
It's not just about the rows: a 2-D datarray can also index by
columns, an operation that has no equivalent in a 1-D array of records
like your example.
rec['305'] effectively indexes by column. This is one of the main attractions
of structured/record arrays.
Robert Kern robert.kern at gmail.com writes:
Please install Fernando's datarray package, play with it, read its
documentation, then come back with objections or alternatives. I
really don't think you understand what is being proposed.
Well the discussion has been pretty confusing. For
Warren Weckesser warren.weckesser at enthought.com writes:
Benjamin Root wrote:
Brad, I think you are doing it the right way, but I think what is
happening is that the reshape() call on the sliced array is forcing a
copy to be made first. The fact that the copy has to be made twice
This is an intentional feature, not a bug.
Chris
Ah, ok, thanks. I missed the explanation in the doc string because I'm using
version 1.3 and forgot to check the web docs.
For the record, this was my bug: I read a fits binary table with pyfits. One of
the table fields was a chararray
This inconsistency is fixed in Numpy 1.4 (which included a major
overhaul of chararrays). in1d will perform the auto
whitespace-stripping on chararrays, but not on regular ndarrays of strings.
Great, thanks.
Pyfits continues to use chararray since not doing so would break
existing code
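The stripping difference can be seen by comparing a chararray against a plain string ndarray (a minimal sketch):

```python
import numpy as np

# chararray comparisons strip trailing whitespace before comparing...
c = np.char.array(['abc  ', 'xyz'])
print(c == 'abc')        # [ True False]

# ...while a regular ndarray of strings compares exactly.
plain = np.array(['abc  ', 'xyz'])
print(plain == 'abc')    # [False False]
```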
Shailendra shailendra.vikas at gmail.com writes:
Hi All,
I want to make a function which should be like this
code
coordinates1=(x1,y1) # x1 and y1 are the x- and y-coordinates of a large
number of points
coordinates2=(x2,y2) # similar to coordinates1
indices1,indices2=
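One hedged sketch of such a function (the helper name is invented): encoding each (x, y) pair as a complex number lets the 1-d set routines do the matching:

```python
import numpy as np

def match_points(coords1, coords2):
    # Hypothetical helper: return the indices, in each point set, of
    # the points that also occur in the other set.
    x1, y1 = coords1
    x2, y2 = coords2
    c1 = np.asarray(x1) + 1j * np.asarray(y1)
    c2 = np.asarray(x2) + 1j * np.asarray(y2)
    indices1 = np.nonzero(np.in1d(c1, c2))[0]
    indices2 = np.nonzero(np.in1d(c2, c1))[0]
    return indices1, indices2

i1, i2 = match_points(([0, 1, 2], [0, 1, 2]), ([1, 5, 2], [1, 5, 2]))
print(i1, i2)  # [1 2] [0 2]
```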
Eric Emsellem eemselle at eso.org writes:
Hi
I would like to test whether strings in a numpy S array are in a given list
but I don't manage to do so. Any hint is welcome.
===
# So here is an example of what I would like to do
# I
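One hint: np.in1d handles exactly this sort of membership test (a minimal sketch with made-up data):

```python
import numpy as np

# Which strings in the array occur in the given list?
names = np.array(['M87', 'NGC1068', 'M31', 'NGC4258'])
wanted = ['M31', 'M87']
mask = np.in1d(names, wanted)
print(mask)  # [ True False  True False]
```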
Nils Wagner nwagner at iam.uni-stuttgart.de writes:
Hi David,
you are right. It's a proprietary library.
I found a header file (*.h) including prototype
declarations of externally callable procedures.
How can I proceed ?
Apparently you can use ctypes to access fortran libraries. See
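The ctypes pattern looks roughly like this; since the proprietary Fortran library isn't available here, the sketch loads the C math library instead. For a Fortran library, remember that compilers usually append a trailing underscore to symbol names and pass every argument by reference (ctypes.byref, or POINTER types in argtypes):

```python
import ctypes
import ctypes.util

# Stand-in for the real shared library: the C math library.
libname = ctypes.util.find_library('m')
lib = ctypes.CDLL(libname)

# Declaring restype/argtypes is the important step -- without it,
# ctypes assumes int everywhere.
lib.cos.restype = ctypes.c_double
lib.cos.argtypes = [ctypes.c_double]
print(lib.cos(0.0))  # 1.0
```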
Pierre GM pgmdevlist at gmail.com writes:
It has been suggested (ticket #1262) to change the default dtype=float to
dtype=None in np.genfromtxt.
Any thoughts ?
I agree dtype=None should be default for the reasons given in the ticket.
How do we handle the backwards-incompatible change?
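A quick sketch of the difference the default makes (made-up data):

```python
import numpy as np
from io import StringIO

data = "1 red\n2 blue\n"

# With the current default dtype=float, the text column becomes nan:
a = np.genfromtxt(StringIO(data), dtype=float, encoding=None)
print(a[:, 1])  # [nan nan]

# dtype=None asks genfromtxt to guess a type per column, giving a
# structured array instead:
b = np.genfromtxt(StringIO(data), dtype=None, encoding=None)
print(b.dtype.names)  # ('f0', 'f1')
```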
Hi,
I've written some release notes (below) describing the changes to
arraysetops.py. If someone with commit access could check that these sound ok
and add them to the release notes file, that would be great.
Cheers,
Neil
New features
Improved set operations
Charles R Harris charlesr.harris at gmail.com writes:
People import these functions -- yes, they shouldn't do that -- and the python
builtin versions are overloaded, causing hard-to-locate errors.
While I would love less duplication in the numpy namespace, I don't think the
small gain here is
josef.pktd at gmail.com writes:
Good point. With the return_inverse solution, is unique() guaranteed
to give back the same array of unique values in the same (presumably
sorted) order? That is, for two arrays A and B which have elements
only drawn from a set S, is all(unique(A) ==
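A sketch confirming this: unique() returns the sorted distinct values, so two arrays drawn from the same element set give identical results, and the inverse indices reconstruct the input:

```python
import numpy as np

A = np.array([3, 1, 2, 3, 1])
B = np.array([2, 2, 1, 3])

# unique() sorts the distinct values, so same element set => same output.
uA, inverse = np.unique(A, return_inverse=True)
uB = np.unique(B)
print(uA)                        # [1 2 3]
print((uA == uB).all())          # True
print((uA[inverse] == A).all())  # True: inverse reconstructs A
```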
John [H2O] washakie at gmail.com writes:
What I am trying to do (obviously?) is find all the values of X that fall
within a time range.
Specifically, one point I do not understand is why the following two methods
fail:
--> 196 ind = np.where( (t1 < Y[:,0] < t2) ) # same result
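For reference, a chained comparison like `t1 < x < t2` needs Python to evaluate `bool(t1 < x)`, which is ambiguous for an array and raises ValueError; combining the two comparisons explicitly works (the names here are stand-ins, not the original data):

```python
import numpy as np

t1, t2 = 2.0, 8.0
x = np.array([1.0, 3.0, 5.0, 9.0])

# Combine the comparisons element-wise with & instead of chaining:
ind = np.where((t1 < x) & (x < t2))[0]
print(ind)  # [1 2]
```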
Pierre GM pgmdevlist at gmail.com writes:
What about
'formats':[eval(b) for b in event_format]
Should it fail, try something like:
dtype([(x,eval(b)) for (x,b) in zip(event_fields, event_format)])
At least you force the dtype to have the same number of names and formats.
You could use
data =
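A sketch of that zip-based construction with invented field names (using dtype format strings directly rather than eval'd type expressions):

```python
import numpy as np

# Hypothetical field names and format strings.
event_fields = ['time', 'value']
event_format = ['f8', 'i4']

# Pairing names with formats keeps the two lists the same length by
# construction:
dt = np.dtype(list(zip(event_fields, event_format)))
print(dt.names)  # ('time', 'value')
```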
Robert Cimrman cimrman3 at ntc.zcu.cz writes:
Hi Neil,
This sounds good. If you don't have time to do it, I don't mind having a go
at writing a patch to implement these changes (deprecate the existing
unique1d, rename unique1d to unique and add the set approach from the old
unique, and
David Cournapeau david at ar.media.kyoto-u.ac.jp writes:
(Continuing the discussion initiated in the neighborhood iterator thread)
Hi,
I would like to gather people's opinion on what to target for numpy
1.4.0.
Are there any other features people would like to put into numpy for
David Cournapeau cournape at gmail.com writes:
David Cournapeau wrote:
(Continuing the discussion initiated in the neighborhood iterator
thread)
- Chuck suggested to drop python 2.6 support from now on. I am
against it without a very strong and detailed rationale, because many
Shivaraj M S shivraj.ms at gmail.com writes:
Hello, I just came across the 'all' and 'alltrue' functions in fromnumeric.py.
They are one and the same. IMHO, alltrue = all would be sufficient.
Regards,
Shivaraj
There are other duplications too:
np.all
np.alltrue
np.any
np.sometrue
What about merging unique and unique1d? They're essentially identical for an
array input, but unique uses the builtin set() for non-array inputs and so is
around 2x faster in this case - see below. Is it worth accepting a speed
regression for unique to get rid of the function
Robert Cimrman cimrman3 at ntc.zcu.cz writes:
Hi,
I am starting a new thread, so that it reaches the interested people.
Let us discuss improvements to arraysetops (array set operations) at [1]
(allowing non-unique arrays as function arguments, better naming
conventions and
Robert Cimrman cimrman3 at ntc.zcu.cz writes:
I'd really like to see the setmember1d_nu function in ticket 1036 get into
numpy. There's a patch waiting for review that includes tests:
http://projects.scipy.org/numpy/ticket/1036
Is there anything I can do to help get it applied?
Robert Cimrman cimrman3 at ntc.zcu.cz writes:
Anne Archibald wrote:
1. add a keyword argument to intersect1d assume_unique; if it is not
present, check for uniqueness and emit a warning if not unique
2. change the warning to an exception
Optionally:
3. change the meaning of the
Thanks for the summary! I'm +1 on points 1, 2 and 3.
+0 for points 4 and 5 (assume_unique keyword and renaming arraysetops).
Neil
PS. I think you mean deprecate, not depreciate :)
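For reference, np.intersect1d does accept an assume_unique keyword along the lines of point 4; a minimal sketch:

```python
import numpy as np

a = np.array([1, 2, 3, 4])
b = np.array([3, 4, 5])

# assume_unique=True skips the internal call to unique(), which is
# safe (and faster) only when both inputs really are duplicate-free.
common = np.intersect1d(a, b, assume_unique=True)
print(common)  # [3 4]
```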
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
Hi all,
I posted this message couple of days ago, but gmane grouped it with an old
thread and it hasn't shown up on the front page. So here it is again...
I'd really like to see the setmember1d_nu function in ticket 1036 get into
numpy. There's a patch waiting for review that includes tests:
Robert Cimrman cimrman3 at ntc.zcu.cz writes:
Re-hi!
Robert Cimrman wrote:
Hi all,
I have added to the ticket [1] a script that compares the proposed
setmember1d_nu() implementations of Neil and Kim. Comments are welcome!
[1] http://projects.scipy.org/numpy/ticket/1036
I
Andrea Gavana andrea.gavana at gmail.com writes:
this should be a very easy question but I am trying to make a
script run as fast as possible, so please bear with me if the solution
is easy and I just overlooked it.
That's weird, I was trying to solve exactly the same problem a couple of
josef.pktd at gmail.com writes:
setmember1d is very fast compared to the other solutions for large b.
However, setmember1d requires that both arrays only have unique elements.
So it doesn't work if, for example, your first array is a data vector
with membership in different groups
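A sketch of exactly that case: np.in1d (the successor to setmember1d) accepts duplicates in the data vector (labels invented):

```python
import numpy as np

# A data vector with repeated group labels -- the case that
# setmember1d's unique-inputs requirement ruled out.
labels = np.array([0, 2, 1, 2, 0, 3])
groups = np.array([0, 2])
mask = np.in1d(labels, groups)
print(mask)  # [ True  True False  True  True False]
```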
Robert Kern robert.kern at gmail.com writes:
Do you mind if we just add you to the THANKS.txt file, and consider
you as a NumPy Developer per the LICENSE.txt as having released that
code under the numpy license? If we're dotting our i's and crossing
our t's legally, that's a bit more
Robert Cimrman cimrman3 at ntc.zcu.cz writes:
Hi Neil!
I would like to add your function to arraysetops.py - is it ok? Just the
name would be changed to setmember1d_nu, to follow the naming in the
module (like intersect1d_nu).
Thank you,
r.
That's fine! There's no licence
- Should we have a separate User manual and a Reference manual, or only
a single manual?
Are there still plans to write a 10 page 'Getting started with NumPy'
document? I think this would be very useful. Ideally a 'getting started'
document, the docstrings, and a reference manual is all the
Ok, thanks.
I meant the amount of vertical space between lines of text - for
example, the gaps between parameter values and their description, or
the large spacing between lines of text and the text boxes in
the examples section. If other people agree it's a problem, I thought
the
A new copy of the reference guide is now available at
http://mentat.za.net/numpy/refguide/
Is there a reason why there's so much vertical space between all of the text
sections? I find the docstrings much easier to read in the editor:
Thanks Joe for the excellent post. It mirrors my experience with
Python and Numpy very eloquently, and I think it presents a good
argument against the excessive use of namespaces. I'm not so worried
about N. vs np. though - I use the same method Matthew Brett suggests.
If I'm going to use, say,
I'm just a numpy user, but for what it's worth, I would much prefer to
have a single numpy namespace with a small as possible number of
objects inside that namespace. To me, 'as small as possible' means
that it only includes the array and associated array manipulation
functions (searchsorted,
Do we really need these functions in numpy? I mean it's just
multiplying/dividing the value by pi/180 (who knows why they're in the
math module..). Numpy doesn't have asin, acos, or atan either (they're
arcsin, arccos and arctan) so it isn't a superset of the math module.
I would like there to be
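For reference, the functions in question really are just ufunc forms of the pi/180 scaling (a minimal sketch):

```python
import numpy as np

angles_deg = np.array([0.0, 90.0, 180.0])

# np.radians is element-wise multiplication by pi/180; np.degrees
# is the inverse scaling.
r = np.radians(angles_deg)
print(np.allclose(r, angles_deg * np.pi / 180))  # True
print(np.degrees(r))  # [  0.  90. 180.]
```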