[GitHub] nju-luke commented on issue #10368: asscalar is very slow

2018-04-03 Thread GitBox
nju-luke commented on issue #10368: asscalar is very slow URL: https://github.com/apache/incubator-mxnet/issues/10368#issuecomment-378137784 The OOM was fixed by changing the train_loss update to `train_loss += nd.mean(loss_).asscalar()`. Thanks, @reminisce
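The fix works because MXNet's engine executes NDArray operations asynchronously: accumulating the loss as an NDArray keeps extending a chain of deferred operations whose inputs stay alive, while `.asscalar()` blocks, copies the result out as a Python float, and lets those intermediates be freed. A minimal stdlib-only sketch of the two accumulation patterns (the `Lazy` class below is a hypothetical stand-in for a deferred NDArray computation, not MXNet API):

```python
# Sketch: accumulating deferred handles retains every operand in a
# growing chain (the OOM pattern), while converting each result to a
# plain Python float retains nothing (the asscalar pattern).

class Lazy:
    """Hypothetical stand-in for an asynchronously computed NDArray."""
    def __init__(self, compute, deps=()):
        self.compute = compute
        self.deps = deps  # retained inputs: this is what piles up

    def value(self):
        return self.compute()


def mean_loss(batch):
    # Deferred per-batch mean, analogous to nd.mean(loss_).
    return Lazy(lambda: sum(batch) / len(batch))


def add(a, b):
    # A deferred add keeps both operands alive until evaluated.
    return Lazy(lambda: a.value() + b.value(), deps=(a, b))


batches = [[1.0, 2.0], [3.0, 5.0]]

# Pattern 1: accumulate Lazy handles -> chain of retained deps grows
# with every batch (memory-exhaustion risk on a real GPU).
total = Lazy(lambda: 0.0)
for batch in batches:
    total = add(total, mean_loss(batch))

# Pattern 2: force each result to a Python float -> nothing retained.
train_loss = 0.0
for batch in batches:
    train_loss += mean_loss(batch).value()  # analogue of .asscalar()

print(total.value(), train_loss)  # both sums equal 1.5 + 4.0 = 5.5
```

Both loops compute the same number; the difference is only in what stays referenced between iterations, which is why the one-line change resolved the OOM.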

[GitHub] nju-luke commented on issue #10368: asscalar is very slow

2018-04-02 Thread GitBox
nju-luke commented on issue #10368: asscalar is very slow URL: https://github.com/apache/incubator-mxnet/issues/10368#issuecomment-378094033 @reminisce Thanks for the explanation about `asscalar`. I'm sorry I wasn't clear about the OOM. When I said iteration, I meant the

[GitHub] nju-luke commented on issue #10368: asscalar is very slow

2018-04-02 Thread GitBox
nju-luke commented on issue #10368: asscalar is very slow URL: https://github.com/apache/incubator-mxnet/issues/10368#issuecomment-377979496 Thank you for the answer. I have another problem; could you please help me fix it? When I use mxnet with GPU context, the first iteration

[GitHub] nju-luke commented on issue #10368: asscalar is very slow

2018-04-02 Thread GitBox
nju-luke commented on issue #10368: asscalar is very slow URL: https://github.com/apache/incubator-mxnet/issues/10368#issuecomment-377979496 This is the version with timing; the third output is the time.
```python
print(datetime.datetime.now())
train_loss =
```
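A note on timing code like the snippet above: because MXNet dispatches operations asynchronously, a timestamp taken right after issuing work only measures how long it took to enqueue it; the first call that forces a result (such as `.asscalar()`) then pays for all of the pending computation, which is why `asscalar` itself can appear slow. A stdlib sketch of the same effect, using a background thread as a stand-in for the async engine (the names here are illustrative, not MXNet API):

```python
# Sketch: timing async work without a synchronization point measures
# only the launch cost; the blocking call absorbs the real work.
import threading
import time

result = []

def slow_compute():
    time.sleep(0.2)      # stand-in for a GPU kernel doing real work
    result.append(42)

t0 = time.perf_counter()
worker = threading.Thread(target=slow_compute)
worker.start()           # "enqueue": returns immediately
enqueue_time = time.perf_counter() - t0

worker.join()            # analogue of .asscalar() / mx.nd.waitall()
full_time = time.perf_counter() - t0

# The launch looks nearly free; the blocking call carries the cost.
print(f"enqueue: {enqueue_time:.3f}s, full: {full_time:.3f}s")
```

So when profiling an MXNet training loop, the time should be measured after a synchronization point, or per-step timings will be attributed to whichever call happens to block first.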