altosaar commented on issue #10508: MXNet much slower than TensorFlow
URL: https://github.com/apache/incubator-mxnet/issues/10508#issuecomment-380738642
 
 
   Dang, I knew it was a silly bug on my end -- thanks for catching that 
@ThomasDelteil :) I just pushed the 
[fix](https://github.com/altosaar/variational-autoencoder/commit/ab76990a616afdc9aec25e0995e90db962a2ffc7).
 You're right, I should have caught it myself by realizing that millions of 
iterations/s is very unreasonable.
   
   Here are the new timings I get:
   
   TensorFlow 1.7.0 CPU:
   
   ```
   Iteration: 1000 ELBO: -131.288 s/iter: 5.380e-03
   Iteration: 2000 ELBO: -122.167 s/iter: 5.253e-03
   ```
   
   TensorFlow 1.7.0 GPU:
   
   ```
   Iteration: 1000 ELBO: -142.142 s/iter: 3.681e-03
   Iteration: 2000 ELBO: -114.007 s/iter: 3.725e-03
   ```
   
   These match the MXNet timings on GPU 👍 :) and it's awesome that MXNet is a lot 
faster on CPU!
   
   P.S. I agree examples/s is a good metric in some cases. For generative models, I 
find time/iteration more informative: the convergence of the objective should be 
measured in the number of parameter updates, not epochs, so that is what I focus 
on.
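
   The time/iteration measurement described above can be sketched with a plain
timing loop. This is a minimal illustration, not the script from the linked repo:
`train_step` is a hypothetical stand-in for a real parameter update (e.g. a
`sess.run(train_op)` in TensorFlow or a `trainer.step()` in MXNet Gluon), and the
iteration counts are arbitrary.

   ```python
   import time

   def train_step():
       # Hypothetical stand-in for one parameter update; replace with a
       # real optimizer step when timing an actual model.
       return sum(i * i for i in range(1000))

   def timed_loop(n_iters=2000, log_every=1000):
       """Report wall-clock seconds per iteration every `log_every` updates."""
       s_per_iter = []
       start = time.perf_counter()
       for it in range(1, n_iters + 1):
           train_step()
           if it % log_every == 0:
               elapsed = time.perf_counter() - start
               s_per_iter.append(elapsed / log_every)
               print('Iteration: %d s/iter: %.3e' % (it, s_per_iter[-1]))
               start = time.perf_counter()  # reset window for the next block
       return s_per_iter

   timings = timed_loop()
   ```

   Averaging over a window of updates (rather than timing a single step) smooths
out per-call jitter, which matters when comparing frameworks at the millisecond
scale as in the numbers above.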

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services
