data.shape is (2, 3, 128, 128), where batch_size is data.shape[0].
The problem: loss.shape matches data.flatten() instead of the batch size, i.e. the loss has one entry per element of data (2 * 3 * 128 * 128 = 98304 here), which is why it shows up as [x > 65536, 1].
If I increase batch_size, loss.shape still tracks the total number of elements rather than the batch size.
How can I fix this?
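
For reference, here is a minimal sketch of how this symptom typically arises. It assumes the loss comes from a Gluon loss class such as gluon.loss.L2Loss (an assumption, since the issue does not show the actual model or loss code): with the default batch_axis=0 the loss reduces to one value per sample, but if the inputs are flattened before the loss is applied, every element is treated as its own sample.

```python
from mxnet import nd, gluon

# Hypothetical reproduction with the shapes from the issue: batch_size = 2.
data = nd.random.uniform(shape=(2, 3, 128, 128))
label = nd.random.uniform(shape=(2, 3, 128, 128))

# A Gluon loss with batch_axis=0 (the default) averages over all
# non-batch axes, so the result has shape (batch_size,) = (2,).
loss_fn = gluon.loss.L2Loss(batch_axis=0)
loss = loss_fn(data, label)
print(loss.shape)  # (2,)

# If the inputs are flattened to (N, 1) before the loss is applied,
# every element becomes its own "sample", and the loss ends up with
# 2 * 3 * 128 * 128 = 98304 entries -- the x > 65536 symptom above.
flat = data.reshape((-1, 1))
flat_label = label.reshape((-1, 1))
bad_loss = loss_fn(flat, flat_label)
print(bad_loss.shape)  # (98304,)
```

If that matches your setup, keeping data in its original (batch, channel, height, width) layout when calling the loss, or passing the batch_axis that actually holds the samples, should give a loss of shape (batch_size,).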

Full content available at: https://github.com/apache/incubator-mxnet/issues/12751