Zha0q1 opened a new issue #18953:
URL: https://github.com/apache/incubator-mxnet/issues/18953


   ```
   import mxnet
   from mxnet import np, npx
   
   A = np.ones((10))
   B = np.broadcast_to(A, (2, 10))
   print(B.shape)
   # errors out
   #C = npx.broadcast_like(A, B)
   # works
   C = npx.broadcast_like(A.reshape(1, 10), B)
   print(C.shape)
   ```
   
   np.broadcast_to can broadcast shape (x,) to (2, x), but npx.broadcast_like cannot.
   ```
   ubuntu@ip-172-31-10-124:~/incubator-mxnet$ python broadcast_comparision.py 
   [22:53:51] ../src/storage/storage.cc:198: Using Pooled (Naive) StorageManager for CPU
   (2, 10)
   Traceback (most recent call last):
     File "broadcast_comparision.py", line 8, in <module>
       C = npx.broadcast_like(A, B)
     File "<string>", line 61, in broadcast_like
      File "/home/ubuntu/incubator-mxnet/python/mxnet/_ctypes/ndarray.py", line 91, in _imperative_invoke
       ctypes.byref(out_stypes)))
      File "/home/ubuntu/incubator-mxnet/python/mxnet/base.py", line 246, in check_call
       raise get_last_ffi_error()
   mxnet.base.MXNetError: Traceback (most recent call last):
     File "../src/operator/tensor/./broadcast_reduce_op.h", line 446
    MXNetError: Check failed: lhs_shape.ndim() == rhs_shape.ndim() (1 vs. 2) : Operand of shape [10] cannot be broadcasted to [2,10]
   ```
   
   Is there a reason for the two operators to behave differently? If not, should we make them consistent?
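   For reference, a minimal sketch in plain NumPy (not MXNet) of the broadcasting rule that `np.broadcast_to` follows: when ranks differ, new axes of size 1 are implicitly prepended on the left, so shape `(10,)` is treated as `(1, 10)` when broadcast against `(2, 10)`. The expectation in this issue is that `npx.broadcast_like` would apply the same rule instead of requiring equal `ndim`.
   
   ```python
   import numpy as np
   
   # (10,) is implicitly treated as (1, 10), then the leading axis is stretched to 2.
   a = np.ones((10,))
   b = np.broadcast_to(a, (2, 10))
   print(b.shape)  # (2, 10)
   
   # Making the leading axis explicit, as in the reshape workaround above,
   # produces the same result.
   c = np.broadcast_to(a.reshape(1, 10), (2, 10))
   print(c.shape)  # (2, 10)
   ```
   
   So under NumPy semantics both calls succeed; only MXNet's `broadcast_like` insists on matching ranks.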


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]
