Hi all,

On my system, np.nanpercentile() is orders of magnitude (>100x) slower than np.percentile().
I'm using numpy 1.23.1.

I'm wondering if there is a way to speed it up.

I came across this workaround for 3D arrays:
https://krstn.eu/np.nanpercentile()-there-has-to-be-a-faster-way/

But I would need a generalized solution that works on N dimensions.
So I started adapting the above (rough sketch below) - but I'm wondering: am I reinventing the wheel here?

Is there already a Python package that implements a speedier nanpercentile for numpy (similar in spirit to the 'Bottleneck' package)?
Or other known workarounds to achieve the same result?
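
For reference, here is roughly where my adaptation stands. It is only a sketch of the single-sort trick from the post above, generalized to reduce along one axis of an N-D array (N >= 2); the function name and details are my own, and I have only checked it against np.nanpercentile on small random arrays:

import numpy as np

def nanpercentile_sorted(a, q, axis=-1):
    """Sketch: nanpercentile along one axis via a single sort.

    Relies on np.sort pushing NaNs to the end of each slice, then
    picks the linearly interpolated order statistic per slice based
    on its count of valid (non-NaN) values. Assumes a.ndim >= 2.
    """
    a = np.asarray(a, dtype=float)
    a = np.moveaxis(a, axis, -1)             # reduction axis last
    srt = np.sort(a, axis=-1)                # NaNs sort to the end
    n_valid = np.sum(~np.isnan(a), axis=-1)  # valid count per slice

    # Fractional index of the q-th percentile (linear interpolation,
    # matching np.percentile's default behaviour).
    idx = (q / 100.0) * (n_valid - 1)
    lo = np.floor(idx).astype(int)
    hi = np.ceil(idx).astype(int)
    frac = idx - lo

    # Clip so all-NaN slices (n_valid == 0) index legally; their
    # results are overwritten with NaN below.
    lo = np.clip(lo, 0, None)
    hi = np.clip(hi, 0, None)

    v_lo = np.take_along_axis(srt, lo[..., None], axis=-1)[..., 0]
    v_hi = np.take_along_axis(srt, hi[..., None], axis=-1)[..., 0]
    out = v_lo + frac * (v_hi - v_lo)
    out[n_valid == 0] = np.nan               # all-NaN slices stay NaN
    return out

A quick sanity check against the reference implementation:

a = np.random.rand(50, 200, 300)
a[a < 0.1] = np.nan
r1 = nanpercentile_sorted(a, 30, axis=0)
r2 = np.nanpercentile(a, 30, axis=0)
print(np.allclose(r1, r2, equal_nan=True))

The idea, as in the blog post, is to pay for one vectorized sort instead of the per-slice Python-level work that np.nanpercentile does; I have not benchmarked it carefully yet.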

Best regards,
Aron
