Compare (on Python 3 -- for Python 2, read "xrange" instead of "range"):

In [2]: %timeit np.array(range(1000000), np.int64)
10 loops, best of 3: 156 ms per loop

In [3]: %timeit np.arange(1000000, dtype=np.int64)
1000 loops, best of 3: 853 µs per loop
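(A possible middle ground, just a sketch and not part of the measurements
above: np.fromiter with an explicit count lets NumPy preallocate the output
instead of growing it while consuming the iterable, which typically lands
between the two timings.)

```python
import numpy as np

# Preallocate via count= so NumPy fills the buffer in one pass instead of
# repeatedly resizing; still slower than np.arange, but usually much
# faster than np.array(range(...)).
a = np.fromiter(range(1000000), dtype=np.int64, count=1000000)
```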


Note that while iterating over a range is not very fast, it is still much
better than the array creation:

In [4]: from collections import deque

In [5]: %timeit deque(range(1000000), 1)
10 loops, best of 3: 25.5 ms per loop
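(For readers unfamiliar with the idiom: deque(iterable, 1) -- i.e. with
maxlen=1 -- exhausts the iterable at C speed while keeping at most one
element, so it measures roughly the pure iteration cost. A tiny sketch:)

```python
from collections import deque

# A deque with maxlen=1 consumes the whole iterator but retains only the
# last element, making it a cheap way to benchmark iteration alone.
it = (i * i for i in range(10))
d = deque(it, maxlen=1)
# d now holds only the final value
```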


On one hand, special cases are awful. On the other hand, the range builtin
is probably important enough to deserve a special case to make this
construction faster. Or not? I initially opened this as
https://github.com/numpy/numpy/issues/7233 but it was suggested there that
this should be discussed on the ML first.

(The real issue that prompted this suggestion: I was building sparse
matrices using scipy.sparse.csc_matrix with some indices specified using
range, and that construction step turned out to take a significant portion
of the time because of the calls to np.array.)

Antony
_______________________________________________
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion
