Nathaniel Smith <njs <at> pobox.com> writes:

> On Fri, Aug 27, 2010 at 1:35 PM, Robert Kern <robert.kern <at> gmail.com> 
wrote:
> > As valid gets larger, in1d() will catch up but for smallish sizes of
> > valid, which I suspect given the "non-numeric" nature of the OP's (Hi,
> > Brett!) request, kern_in() is usually better.
> 
> Oh well, I was just guessing based on algorithmic properties. Sounds
> like there might be some optimizations possible to in1d then, if
> anyone had a reason to care.
> 

Ideally, I would like in1d to always be the right answer to this problem. It
should be easy to put in an if statement that switches to a kern_in()-type
function in the case of a large ar1 but small ar2. I will do some timing tests
and make a patch.
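
To make the idea concrete, here is a rough sketch of the kind of dispatch I
have in mind. The kern_in() body is my reconstruction of the comparison-loop
approach discussed earlier in the thread, and the crossover threshold is just
a placeholder until the timing tests are done:

import numpy as np

def kern_in(ar1, ar2):
    # OR together an equality test against each element of ar2; work scales
    # as len(ar1) * len(ar2), but with very low overhead when ar2 is small.
    mask = np.zeros(len(ar1), dtype=bool)
    for v in ar2:
        mask |= (ar1 == v)
    return mask

def hybrid_in1d(ar1, ar2, small_threshold=10):
    # small_threshold is a placeholder -- the real crossover point should
    # come out of the timing tests.
    ar1 = np.asarray(ar1).ravel()
    ar2 = np.asarray(ar2).ravel()
    if len(ar2) < small_threshold:
        return kern_in(ar1, ar2)
    return np.in1d(ar1, ar2)  # the existing sort-based implementation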

Incidentally, the timing tests done when in1d was introduced only considered
the case where len(ar1) == len(ar2). In that case the current in1d is pretty
much always faster than kern_in().
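
For reference, something along these lines is what I plan to time; the array
sizes are arbitrary, and kern_in() is the comparison-loop sketch from above:

import timeit
import numpy as np

def kern_in(ar1, ar2):
    # comparison-loop membership test (as sketched above)
    mask = np.zeros(len(ar1), dtype=bool)
    for v in ar2:
        mask |= (ar1 == v)
    return mask

def bench(n1, n2):
    rng = np.random.RandomState(0)
    ar1 = rng.randint(0, 1000, size=n1)
    ar2 = rng.randint(0, 1000, size=n2)
    t_in1d = min(timeit.repeat(lambda: np.in1d(ar1, ar2), number=10, repeat=3))
    t_kern = min(timeit.repeat(lambda: kern_in(ar1, ar2), number=10, repeat=3))
    print("len(ar1)=%d  len(ar2)=%d  in1d: %.4fs  kern_in: %.4fs"
          % (n1, n2, t_in1d, t_kern))

# the equal-length case covered by the original timings...
bench(5000, 5000)
# ...versus a large ar1 with a small ar2
bench(1000000, 5)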

Neil
