On Thu, Dec 11, 2014 at 10:53 AM, Stephan Hoyer <sho...@gmail.com> wrote:
>
> On Thu, Dec 11, 2014 at 8:17 AM, Sebastian Berg <
> sebast...@sipsolutions.net> wrote:
>
>> One option
>> would also be to have something like:
>>
>> np.common_shape(*arrays)
>> np.broadcast_to(array, shape)
>> # (though I would like many arrays too)
>>
>> and then broadcast_arrays could be implemented in terms of these two.
>>
>
> It looks like np.broadcast lets us write the common_shape function very
> easily:
>
> def common_shape(*args):
>     return np.broadcast(*args).shape
>
> And it's also very fast:
> 1000000 loops, best of 3: 1.04 µs per loop
>
> So that does seem like a feasible refactor/simplification for
> np.broadcast_arrays.
>
> Sebastian -- if you're up for writing np.broadcast_to in C, that's great!
> If you're not sure if you'll be able to get around to that in the near
> future, I'll submit my PR with a Python implementation (which will have
> tests that will be useful in any case).
>

np.broadcast is the Python-level object wrapping the old iterator. It may be
a better idea to write all of these functions using the new one, np.nditer:

def common_shape(*args):
    return np.nditer(args).shape[::-1]  # Yes, you do need to reverse it!

And in writing 'broadcast_to', rather than rewriting the broadcasting
logic, you could check the compatibility of the shape with something like:

# raises ValueError if the shapes are incompatible:
np.nditer((arr,), itershape=shape)

After that, all that would be left is prepending zero strides for the new
leading dimensions, and zeroing the strides of the length-1 dimensions,
before calling as_strided.
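
A minimal, untested sketch of that approach (the nditer flags are an
assumption about what itershape needs with a single operand, and the
as_strided result here is writable, which a real version would probably
want to forbid):

import numpy as np
from numpy.lib.stride_tricks import as_strided

def broadcast_to(arr, shape):
    arr = np.asarray(arr)
    shape = tuple(shape)
    # Let nditer do the compatibility check; it raises ValueError if
    # arr cannot be broadcast to the requested shape.  The flags here
    # are a guess at what itershape needs with a single operand.
    np.nditer((arr,), flags=['multi_index', 'zerosize_ok'], itershape=shape)
    # Prepend zero strides for the new leading dimensions, and zero the
    # stride of every existing length-1 dimension so it can repeat.
    leading = len(shape) - arr.ndim
    strides = [0] * leading + [
        0 if size == 1 else stride
        for size, stride in zip(arr.shape, arr.strides)]
    return as_strided(arr, shape=shape, strides=strides)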

Jaime

-- 
(\__/)
( O.o)
( > <) This is Bunny. Copy Bunny into your signature and help him with his
plans for world domination.