nju-luke commented on issue #10368: asscalar is very slow
URL:
https://github.com/apache/incubator-mxnet/issues/10368#issuecomment-378137784
The OOM was fixed by changing the train_loss update to `train_loss +=
nd.mean(loss_).asscalar()`.
Much appreciated, @reminisce
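As context for why this fixes the OOM: `asscalar()` performs a blocking device-to-host copy, so `train_loss` accumulates plain Python floats instead of a growing chain of pending NDArrays that the engine must keep alive. Below is a minimal sketch of the pattern; the toy network, data, and hyperparameters are assumptions made for the sake of a runnable example, not the poster's actual script.
``` python
import mxnet as mx
from mxnet import autograd, gluon, nd

ctx = mx.cpu()  # the original report used a GPU context
net = gluon.nn.Dense(1)
net.initialize(ctx=ctx)
loss_fn = gluon.loss.L2Loss()
trainer = gluon.Trainer(net.collect_params(), 'sgd', {'learning_rate': 0.1})

X = nd.random.uniform(shape=(64, 10), ctx=ctx)
y = nd.random.uniform(shape=(64, 1), ctx=ctx)

train_loss = 0.
for epoch in range(5):
    with autograd.record():
        loss_ = loss_fn(net(X), y)
    loss_.backward()
    trainer.step(X.shape[0])
    # The fix: accumulate a Python float, not an NDArray, so no
    # computation graph is kept alive across iterations.
    train_loss += nd.mean(loss_).asscalar()
print(train_loss)
```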
nju-luke commented on issue #10368: asscalar is very slow
URL:
https://github.com/apache/incubator-mxnet/issues/10368#issuecomment-378094033
@reminisce Thanks for the explanation of `asscalar`.
I'm sorry I wasn't clear about the OOM. When I said iteration, I meant the
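A minimal sketch of the behavior that explanation covers, assuming only MXNet's documented asynchronous execution (illustrative code, not from the thread): `asscalar()` blocks until every pending operator feeding the array has finished, so the time measured around it includes earlier, still-running computation rather than the conversion itself.
``` python
import time

import mxnet as mx
from mxnet import nd

x = nd.random.uniform(shape=(2048, 2048), ctx=mx.cpu())

t0 = time.time()
for _ in range(20):
    x = nd.dot(x, x) / 2048.0   # enqueued asynchronously; returns at once
print('enqueue  : %.4f s' % (time.time() - t0))

t0 = time.time()
v = nd.mean(x).asscalar()       # blocks until all 20 matmuls complete
print('asscalar : %.4f s' % (time.time() - t0))
```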
nju-luke commented on issue #10368: asscalar is very slow
URL:
https://github.com/apache/incubator-mxnet/issues/10368#issuecomment-377979496
Thank you for the answer.
I have another problem; could you please help me fix it?
When I use MXNet with a GPU context, the first iteration
nju-luke commented on issue #10368: asscalar is very slow
URL:
https://github.com/apache/incubator-mxnet/issues/10368#issuecomment-377979496
This is the version with timing added; the third printed output is the time.
``` python
import datetime

print(datetime.datetime.now())
train_loss = 0.  # accumulator, updated as train_loss += nd.mean(loss_).asscalar()
```
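A hedged side note on the timing itself, not something stated in the thread: because MXNet runs operators asynchronously, a timestamp taken right after enqueueing work does not reflect when that work finishes. Synchronizing before reading the clock gives meaningful measurements.
``` python
import datetime

import mxnet as mx

mx.nd.waitall()                 # block until all queued operators finish
print(datetime.datetime.now())  # timestamp now reflects completed work
```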