KhurramPirov commented on issue #15780: FP16 gemm on cpu not implemented!
URL:
https://github.com/apache/incubator-mxnet/issues/15780#issuecomment-520361369
@TaoLv I would appreciate any new advice; my research is stuck because of
this.
KhurramPirov commented on issue #15780: FP16 gemm on cpu not implemented!
URL:
https://github.com/apache/incubator-mxnet/issues/15780#issuecomment-519540652
> @eric-haibin-lin @zhreshold Do you have any experience with this? Train the
model with FP16 and then run inference with FP32? I'm afraid
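The workaround quoted above — train in FP16, then run CPU inference in FP32 — amounts to casting the saved parameters up before binding the model. A minimal sketch, with NumPy arrays standing in for a dict loaded from an MXNet `.params` file (the parameter names here are made up):

```python
import numpy as np

# Hypothetical stand-in for a dict of FP16 parameters, as would be
# returned by something like mx.nd.load("model-0000.params").
fp16_params = {
    "fc1_weight": np.random.randn(4, 8).astype(np.float16),
    "fc1_bias": np.zeros(4, dtype=np.float16),
}

# Cast every array to FP32 so CPU inference never hits the
# unimplemented FP16 GEMM kernel.
fp32_params = {name: arr.astype(np.float32)
               for name, arr in fp16_params.items()}

for name, arr in fp32_params.items():
    print(name, arr.dtype)
```

After the cast, the params can be saved back out and loaded into an FP32 copy of the network; accuracy is unchanged since FP16 values are exactly representable in FP32.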
KhurramPirov commented on issue #15780: FP16 gemm on cpu not implemented!
URL:
https://github.com/apache/incubator-mxnet/issues/15780#issuecomment-519529991
> Is your model trained with FP16? Can you double check if there are any
cast operators in the saved model?
yeah, I have for
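The check suggested above — looking for cast operators in the saved model — can be done by scanning the exported `*-symbol.json` graph for nodes whose `op` is `Cast`. A sketch using a hand-built JSON string in the same shape as MXNet's symbol format (this tiny graph itself is invented for illustration):

```python
import json

# Stand-in for the contents of a real *-symbol.json file; MXNet's
# format stores the graph as a "nodes" list where each node has an
# "op" field ("null" marks variables).
symbol_json = json.dumps({
    "nodes": [
        {"op": "null", "name": "data", "inputs": []},
        {"op": "Cast", "name": "cast0",
         "attrs": {"dtype": "float16"}, "inputs": [[0, 0, 0]]},
        {"op": "FullyConnected", "name": "fc1", "inputs": [[1, 0, 0]]},
    ],
})

graph = json.loads(symbol_json)
# Any Cast nodes mean dtype conversions are baked into the model,
# so parts of it will run in FP16 even if the input data is FP32.
cast_nodes = [n["name"] for n in graph["nodes"] if n["op"] == "Cast"]
print(cast_nodes)
```

If the list is non-empty, the model will force FP16 compute on those branches, which is what triggers the "FP16 gemm on cpu not implemented!" error on CPU.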
KhurramPirov commented on issue #15780: FP16 gemm on cpu not implemented!
URL:
https://github.com/apache/incubator-mxnet/issues/15780#issuecomment-519497465
> Is it possible to use FP32 GEMM to emulate FP16 GEMM temporarily?
>
> Although the performance is slow, users can test FP16
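The emulation proposed in the quote is straightforward in principle: upcast the FP16 operands, run the GEMM in FP32, and cast the result back down. A minimal NumPy sketch of that idea (not MXNet's actual kernel):

```python
import numpy as np

def fp16_gemm_via_fp32(a, b):
    """Emulate an FP16 GEMM by computing in FP32 and casting back.

    Slower than a native half-precision kernel, but it lets FP16
    models run on CPUs whose BLAS has no FP16 GEMM support.
    """
    out = a.astype(np.float32) @ b.astype(np.float32)
    return out.astype(np.float16)

a = np.ones((2, 3), dtype=np.float16)
b = np.ones((3, 4), dtype=np.float16)
c = fp16_gemm_via_fp32(a, b)
print(c.dtype, c.shape)
```

The inputs and output keep the FP16 dtype the rest of the graph expects, so the emulation is a drop-in replacement for testing, just not a fast one.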
KhurramPirov commented on issue #15780: FP16 gemm on cpu not implemented!
URL:
https://github.com/apache/incubator-mxnet/issues/15780#issuecomment-519463953
@TaoLv My GPU supports FP16; as the message "FP16 gemm on cpu" says, this
error is about CPU support.
KhurramPirov commented on issue #15780: FP16 gemm on cpu not implemented!
URL:
https://github.com/apache/incubator-mxnet/issues/15780#issuecomment-519434862
> @KhurramPirov This is expected behavior on CPU, where FP16 GEMM isn't
supported.
> It seems your problem is the memory leak, so