Currently, the docstring description of the `dtype` argument for `np.array` says:
dtype : data-type, optional
> The desired data-type for the array. If not given, then the type will
> be determined as the minimum type required to hold the objects in the
> sequence. This argument can only be used to 'upcast' the array. For
> downcasting, use the .astype(t) method.
But I find this description somewhat misleading for integer types. Is it
generally true that "the type will be determined as the minimum type required
to hold the objects in the sequence"?
I always thought that for integer arrays `np.int_` is assumed first; if that
is not big enough, larger dtypes are used, finally falling back to the
`object` dtype. This question comes from this discussion:
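A quick check (a sketch; the exact default integer type is platform- and build-dependent, e.g. `int64` on a typical 64-bit Linux build) suggests the behavior matches that expectation rather than the docstring's "minimum type" wording:

```python
import numpy as np

# Small Python ints do NOT get the minimum type that could hold them
# (that would be int8 here); they get the platform default integer.
a = np.array([1, 2, 3])
print(a.dtype)  # e.g. int64 on a 64-bit Linux build, i.e. np.int_

# Only when values do not fit any fixed-width integer type does
# NumPy fall back to the object dtype.
b = np.array([2**100])
print(b.dtype)  # object
```

So "minimum type" is accurate for deciding between integer, float, and object kinds, but within integers the default is `np.int_`, not the smallest width that fits.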
Also somewhat (un)related (but I have always been curious): why was `float`
chosen as the default for `dtype`?
Why was `np.int_` not chosen, and would such a change be a good idea (in
general, without taking backward compatibility into account :-))? There is
some discussion here: https://github.com/numpy/numpy/issues/10405.
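For reference, the current `float` default is easy to observe, and code that wants the hypothetical integer default has to opt in explicitly (a minimal illustration, nothing more):

```python
import numpy as np

# Today the constructors default to float64:
print(np.zeros(3).dtype)  # float64

# Requesting the platform default integer must be done explicitly:
print(np.zeros(3, dtype=np.int_).dtype)  # the platform default int
```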
p.s.: For some constructors the signature looks as follows (in IPython):
zeros(shape, dtype=float, order='C')
Return a new array of given shape and type, filled with zeros.
empty(shape, dtype=float, order='C')
Return a new array of given shape and type, without initializing entries.
But for `ones` the default dtype is `None` instead of `float`, and the signature looks different:
Signature: np.ones(shape, dtype=None, order='C')
Return a new array of given shape and type, filled with ones.
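Despite the different spelling of the default in the signature, `ones` appears to behave just like `zeros` and `empty`: `dtype=None` falls back to `float64` (a quick check, assuming a standard NumPy build):

```python
import numpy as np

# dtype=None in np.ones resolves to the same float64 default
# that zeros/empty spell as dtype=float.
print(np.ones(3).dtype)              # float64
print(np.ones(3, dtype=None).dtype)  # float64
```

So the inconsistency seems to be only in how the signatures are written, not in the resulting arrays.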
With kind regards,
NumPy-Discussion mailing list