## Description
Currently the dot product for float16 is supported only on GPU (added in 
https://github.com/apache/incubator-mxnet/issues/10531) but not on CPU. It 
would be great to add CPU support as well, for fast inference.
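
For reference, a minimal sketch of the asymmetry (assuming a CUDA-enabled MXNet build and at least one available GPU): the same float16 dot product that fails on CPU runs fine once the operands live on a GPU context.

```
import mxnet as mx

# Assumption: requires a CUDA build of MXNet and a visible GPU.
a = mx.nd.array([1, 2, 3], dtype="float16", ctx=mx.gpu(0))
b = mx.nd.array([1, 2, 3], dtype="float16", ctx=mx.gpu(0))
print(mx.nd.dot(a, b))  # works: dot supports float16 on GPU
```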



Package used (Python/R/Scala/Julia):
Python, mxnet 1.3.0

## Error Message:
```
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/anaconda3/lib/python3.6/site-packages/mxnet/ndarray/ndarray.py", 
line 189, in __repr__
    return '\n%s\n<%s %s @%s>' % (str(self.asnumpy()),
  File "/home/anaconda3/lib/python3.6/site-packages/mxnet/ndarray/ndarray.py", 
line 1972, in asnumpy
    ctypes.c_size_t(data.size)))
  File "/home/anaconda3/lib/python3.6/site-packages/mxnet/base.py", line 252, 
in check_call
    raise MXNetError(py_str(_LIB.MXGetLastError()))
mxnet.base.MXNetError: [13:42:46] src/operator/tensor/./dot-inl.h:83: Check 
failed: outputs[0].type_flag_ == kFloat32 || outputs[0].type_flag_ == kFloat64 
|| (outputs[0].type_flag_ == kFloat16 && ctx.run_ctx.ctx.dev_mask() == 
mshadow::gpu::kDevMask) dot only supports float32/float64 for CPU, and 
float16/float32/float64 for GPU
```

## Minimum reproducible example
```
import mxnet as mx

a = mx.nd.array([1, 2, 3], dtype="float16")
b = mx.nd.array([1, 2, 3], dtype="float16")
mx.nd.dot(a, b.T)
```
## Steps to reproduce

1. Run the minimum reproducible example above and observe the exception.

## What have you tried to solve it?

1. Nothing can be done directly; the only workarounds are to run on GPU or to do the math in float32 (see the sketch below).
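
For the float32 route, a minimal sketch of the workaround (assuming the extra memory and the cast back to float16 are acceptable) is to cast the operands up, run the dot on CPU, and cast the result back down:

```
import mxnet as mx

a = mx.nd.array([1, 2, 3], dtype="float16")
b = mx.nd.array([1, 2, 3], dtype="float16")

# Cast to float32 for the CPU dot, then cast the result back to float16.
out = mx.nd.dot(a.astype("float32"), b.astype("float32")).astype("float16")
print(out)
```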
