haojin2 commented on issue #14830: Use env var to enforce safe accumulation in ReduceAxesCompute
URL: https://github.com/apache/incubator-mxnet/pull/14830#issuecomment-493005925
 
 
   @anirudh2290 Fundamentally, nothing can guarantee a benefit for all models. Users design their models and configure the system in whatever way they believe best serves their goals, and we, as a lower-level library, simply present the choices (with a correct implementation and thorough documentation) rather than guarantee any particular outcome.
   I think this piece of documentation serves its sole purpose: describing what happens to accumulations when the variable is turned on, i.e. the accumulations are performed in a higher-precision type. If we add such a "disclaimer" to this env var, do we then also add one to every other feature/operator/env var? For example, does adding "Using a BatchNorm layer in your model may not necessarily improve its accuracy" to BatchNorm's doc really give users any extra information?
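   To make concrete what "the accumulations will be done in a higher-precision type" means in practice, here is a minimal sketch of the idea. It uses NumPy's `sum` with its `dtype` accumulator argument purely as an illustration, not MXNet's actual `ReduceAxesCompute` implementation:
   ```python
   import numpy as np

   # 100,000 float16 ones: the true sum (100000) exceeds float16's max (~65504).
   x = np.ones(100_000, dtype=np.float16)

   # Default behavior: the accumulator has the array's dtype, so the partial
   # sums overflow float16 and the result is inf.
   unsafe = x.sum()

   # "Safe accumulation": keep the running sum in float32 instead, then the
   # true total is exactly representable.
   safe = x.sum(dtype=np.float32)

   print(unsafe)  # inf
   print(safe)    # 100000.0
   ```
   The same trade-off applies even without overflow: a low-precision accumulator silently loses small addends once the running sum grows large, which is exactly what the higher-precision accumulation avoids.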
   I do understand your concern about the influence this env var may have on a model's final accuracy. So, regarding issue #14722, which triggered this fix, I ran the same benchmark script on my end on a p2.8xlarge instance against this PR's branch. (There's also a bug in the script that prevents it from running under Python 3; my fix is here: https://github.com/awslabs/deeplearning-benchmark/pull/70):
   ```
   ubuntu@ip-162-32-28-44:~/deeplearning-benchmark$ export 
MXNET_SAFE_ACCUMULATION=1
   ubuntu@ip-162-32-28-44:~/deeplearning-benchmark$ python 
word_language_model/word_language_model.py --gpus 8 --nhid 650 --emsize 650 
--dropout 0.5 --epochs 40 --data word_language_model/data/ptb. --mode 
imperative --kvstore device
   INFO:root:[Epoch 0] time cost 23.01s, valid loss 6.56, valid ppl 703.38
   INFO:root:test loss 6.53, test ppl 683.61
   INFO:root:[Epoch 1] time cost 19.86s, valid loss 6.16, valid ppl 473.41
   INFO:root:test loss 6.13, test ppl 459.76
   INFO:root:[Epoch 2] time cost 20.12s, valid loss 5.75, valid ppl 313.67
   INFO:root:test loss 5.72, test ppl 304.20
   INFO:root:[Epoch 3] time cost 19.85s, valid loss 5.57, valid ppl 262.83
   INFO:root:test loss 5.54, test ppl 254.41
   INFO:root:[Epoch 4] time cost 19.90s, valid loss 5.46, valid ppl 234.12
   INFO:root:test loss 5.43, test ppl 228.92
   INFO:root:[Epoch 5] time cost 19.96s, valid loss 5.31, valid ppl 202.62
   INFO:root:test loss 5.28, test ppl 195.97
   INFO:root:[Epoch 6] time cost 19.80s, valid loss 5.24, valid ppl 187.75
   INFO:root:test loss 5.21, test ppl 182.98
   INFO:root:[Epoch 7] time cost 19.93s, valid loss 5.16, valid ppl 174.70
   INFO:root:test loss 5.13, test ppl 169.26
   INFO:root:[Epoch 8] time cost 19.92s, valid loss 5.11, valid ppl 166.22
   INFO:root:test loss 5.08, test ppl 161.02
   INFO:root:[Epoch 9] time cost 19.88s, valid loss 5.07, valid ppl 158.64
   INFO:root:test loss 5.03, test ppl 153.36
   INFO:root:[Epoch 10] time cost 19.93s, valid loss 5.03, valid ppl 153.05
   INFO:root:test loss 5.00, test ppl 147.75
   INFO:root:[Epoch 11] time cost 19.87s, valid loss 4.99, valid ppl 147.12
   INFO:root:test loss 4.95, test ppl 141.37
   INFO:root:[Epoch 12] time cost 19.84s, valid loss 4.98, valid ppl 146.02
   INFO:root:test loss 4.94, test ppl 140.14
   INFO:root:[Epoch 13] time cost 19.86s, valid loss 4.94, valid ppl 139.35
   INFO:root:test loss 4.89, test ppl 133.62
   INFO:root:[Epoch 14] time cost 19.87s, valid loss 4.91, valid ppl 136.01
   INFO:root:test loss 4.87, test ppl 130.37
   INFO:root:[Epoch 15] time cost 19.83s, valid loss 4.90, valid ppl 133.98
   INFO:root:test loss 4.86, test ppl 129.10
   INFO:root:[Epoch 16] time cost 19.92s, valid loss 4.90, valid ppl 133.75
   INFO:root:test loss 4.85, test ppl 127.97
   INFO:root:[Epoch 17] time cost 19.92s, valid loss 4.87, valid ppl 130.34
   INFO:root:test loss 4.83, test ppl 125.31
   INFO:root:[Epoch 18] time cost 19.93s, valid loss 4.86, valid ppl 129.32
   INFO:root:test loss 4.82, test ppl 123.99
   INFO:root:[Epoch 19] time cost 19.89s, valid loss 4.84, valid ppl 126.92
   INFO:root:test loss 4.81, test ppl 122.15
   INFO:root:[Epoch 20] time cost 19.93s, valid loss 4.83, valid ppl 125.82
   INFO:root:test loss 4.79, test ppl 120.79
   INFO:root:[Epoch 21] time cost 19.91s, valid loss 4.84, valid ppl 126.45
   INFO:root:[Epoch 22] time cost 20.03s, valid loss 4.81, valid ppl 122.33
   INFO:root:test loss 4.76, test ppl 117.16
   INFO:root:[Epoch 23] time cost 19.94s, valid loss 4.80, valid ppl 122.05
   INFO:root:test loss 4.76, test ppl 116.82
   INFO:root:[Epoch 24] time cost 19.94s, valid loss 4.80, valid ppl 121.71
   INFO:root:test loss 4.76, test ppl 116.42
   INFO:root:[Epoch 25] time cost 19.80s, valid loss 4.80, valid ppl 121.15
   INFO:root:test loss 4.75, test ppl 115.90
   INFO:root:[Epoch 26] time cost 19.74s, valid loss 4.80, valid ppl 121.27
   INFO:root:[Epoch 27] time cost 19.98s, valid loss 4.80, valid ppl 121.00
   INFO:root:test loss 4.75, test ppl 115.62
   INFO:root:[Epoch 28] time cost 19.84s, valid loss 4.79, valid ppl 120.81
   INFO:root:test loss 4.75, test ppl 115.50
   INFO:root:[Epoch 29] time cost 19.95s, valid loss 4.79, valid ppl 120.73
   INFO:root:test loss 4.75, test ppl 115.44
   INFO:root:[Epoch 30] time cost 19.87s, valid loss 4.79, valid ppl 120.61
   INFO:root:test loss 4.75, test ppl 115.37
   INFO:root:[Epoch 31] time cost 19.89s, valid loss 4.79, valid ppl 120.48
   INFO:root:test loss 4.75, test ppl 115.22
   INFO:root:[Epoch 32] time cost 19.90s, valid loss 4.79, valid ppl 120.31
   INFO:root:test loss 4.75, test ppl 115.10
   INFO:root:[Epoch 33] time cost 19.84s, valid loss 4.79, valid ppl 120.31
   INFO:root:test loss 4.75, test ppl 115.09
   INFO:root:[Epoch 34] time cost 19.87s, valid loss 4.79, valid ppl 120.30
   INFO:root:test loss 4.75, test ppl 115.05
   INFO:root:[Epoch 35] time cost 19.93s, valid loss 4.79, valid ppl 120.22
   INFO:root:test loss 4.74, test ppl 114.98
   INFO:root:[Epoch 36] time cost 19.86s, valid loss 4.79, valid ppl 120.15
   INFO:root:test loss 4.74, test ppl 114.92
   INFO:root:[Epoch 37] time cost 19.84s, valid loss 4.79, valid ppl 120.06
   INFO:root:test loss 4.74, test ppl 114.85
   INFO:root:[Epoch 38] time cost 19.89s, valid loss 4.79, valid ppl 120.01
   INFO:root:test loss 4.74, test ppl 114.81
   INFO:root:[Epoch 39] time cost 19.73s, valid loss 4.79, valid ppl 119.95
   INFO:root:test loss 4.74, test ppl 114.75
   INFO:root:Best test loss 4.74, test ppl 114.75
   ubuntu@ip-162-32-28-44:~/deeplearning-benchmark$ export 
MXNET_SAFE_ACCUMULATION=0
   ubuntu@ip-162-32-28-44:~/deeplearning-benchmark$ python 
word_language_model/word_language_model.py --gpus 8 --nhid 650 --emsize 650 
--dropout 0.5 --epochs 40 --data word_language_model/data/ptb. --mode 
imperative --kvstore device
   INFO:root:[Epoch 0] time cost 22.86s, valid loss 6.56, valid ppl 703.38
   INFO:root:test loss 6.53, test ppl 683.61
   INFO:root:[Epoch 1] time cost 19.97s, valid loss 6.16, valid ppl 473.41
   INFO:root:test loss 6.13, test ppl 459.76
   INFO:root:[Epoch 2] time cost 19.79s, valid loss 5.75, valid ppl 313.67
   INFO:root:test loss 5.72, test ppl 304.20
   INFO:root:[Epoch 3] time cost 19.73s, valid loss 5.57, valid ppl 262.83
   INFO:root:test loss 5.54, test ppl 254.41
   INFO:root:[Epoch 4] time cost 19.75s, valid loss 5.46, valid ppl 234.12
   INFO:root:test loss 5.43, test ppl 228.92
   INFO:root:[Epoch 5] time cost 19.75s, valid loss 5.31, valid ppl 202.62
   INFO:root:test loss 5.28, test ppl 195.97
   INFO:root:[Epoch 6] time cost 19.78s, valid loss 5.24, valid ppl 187.75
   INFO:root:test loss 5.21, test ppl 182.98
   INFO:root:[Epoch 7] time cost 19.77s, valid loss 5.16, valid ppl 174.70
   INFO:root:test loss 5.13, test ppl 169.26
   INFO:root:[Epoch 8] time cost 19.75s, valid loss 5.11, valid ppl 166.22
   INFO:root:test loss 5.08, test ppl 161.02
   INFO:root:[Epoch 9] time cost 19.68s, valid loss 5.07, valid ppl 158.64
   INFO:root:test loss 5.03, test ppl 153.36
   INFO:root:[Epoch 10] time cost 19.73s, valid loss 5.03, valid ppl 153.05
   INFO:root:test loss 5.00, test ppl 147.75
   INFO:root:[Epoch 11] time cost 19.78s, valid loss 4.99, valid ppl 147.12
   INFO:root:test loss 4.95, test ppl 141.37
   INFO:root:[Epoch 12] time cost 19.72s, valid loss 4.98, valid ppl 146.02
   INFO:root:test loss 4.94, test ppl 140.14
   INFO:root:[Epoch 13] time cost 19.76s, valid loss 4.94, valid ppl 139.35
   INFO:root:test loss 4.89, test ppl 133.62
   INFO:root:[Epoch 14] time cost 19.74s, valid loss 4.91, valid ppl 136.01
   INFO:root:test loss 4.87, test ppl 130.37
   INFO:root:[Epoch 15] time cost 19.76s, valid loss 4.90, valid ppl 133.98
   INFO:root:test loss 4.86, test ppl 129.10
   INFO:root:[Epoch 16] time cost 19.76s, valid loss 4.90, valid ppl 133.75
   INFO:root:test loss 4.85, test ppl 127.97
   INFO:root:[Epoch 17] time cost 19.73s, valid loss 4.87, valid ppl 130.34
   INFO:root:test loss 4.83, test ppl 125.31
   INFO:root:[Epoch 18] time cost 19.78s, valid loss 4.86, valid ppl 129.32
   INFO:root:test loss 4.82, test ppl 123.99
   INFO:root:[Epoch 19] time cost 19.76s, valid loss 4.84, valid ppl 126.92
   INFO:root:test loss 4.81, test ppl 122.15
   INFO:root:[Epoch 20] time cost 19.79s, valid loss 4.83, valid ppl 125.82
   INFO:root:test loss 4.79, test ppl 120.79
   INFO:root:[Epoch 21] time cost 19.76s, valid loss 4.84, valid ppl 126.45
   INFO:root:[Epoch 22] time cost 19.82s, valid loss 4.81, valid ppl 122.33
   INFO:root:test loss 4.76, test ppl 117.16
   INFO:root:[Epoch 23] time cost 19.78s, valid loss 4.80, valid ppl 122.05
   INFO:root:test loss 4.76, test ppl 116.82
   INFO:root:[Epoch 24] time cost 19.83s, valid loss 4.80, valid ppl 121.71
   INFO:root:test loss 4.76, test ppl 116.42
   INFO:root:[Epoch 25] time cost 19.87s, valid loss 4.80, valid ppl 121.15
   INFO:root:test loss 4.75, test ppl 115.90
   INFO:root:[Epoch 26] time cost 19.71s, valid loss 4.80, valid ppl 121.27
   INFO:root:[Epoch 27] time cost 19.91s, valid loss 4.80, valid ppl 121.00
   INFO:root:test loss 4.75, test ppl 115.62
   INFO:root:[Epoch 28] time cost 19.89s, valid loss 4.79, valid ppl 120.81
   INFO:root:test loss 4.75, test ppl 115.50
   INFO:root:[Epoch 29] time cost 19.80s, valid loss 4.79, valid ppl 120.73
   INFO:root:test loss 4.75, test ppl 115.44
   INFO:root:[Epoch 30] time cost 19.70s, valid loss 4.79, valid ppl 120.61
   INFO:root:test loss 4.75, test ppl 115.37
   INFO:root:[Epoch 31] time cost 19.78s, valid loss 4.79, valid ppl 120.48
   INFO:root:test loss 4.75, test ppl 115.22
   INFO:root:[Epoch 32] time cost 19.69s, valid loss 4.79, valid ppl 120.31
   INFO:root:test loss 4.75, test ppl 115.10
   INFO:root:[Epoch 33] time cost 19.78s, valid loss 4.79, valid ppl 120.31
   INFO:root:test loss 4.75, test ppl 115.09
   INFO:root:[Epoch 34] time cost 19.88s, valid loss 4.79, valid ppl 120.30
   INFO:root:test loss 4.75, test ppl 115.05
   INFO:root:[Epoch 35] time cost 19.70s, valid loss 4.79, valid ppl 120.22
   INFO:root:test loss 4.74, test ppl 114.98
   INFO:root:[Epoch 36] time cost 19.70s, valid loss 4.79, valid ppl 120.15
   INFO:root:test loss 4.74, test ppl 114.92
   INFO:root:[Epoch 37] time cost 19.86s, valid loss 4.79, valid ppl 120.06
   INFO:root:test loss 4.74, test ppl 114.85
   INFO:root:[Epoch 38] time cost 19.65s, valid loss 4.79, valid ppl 120.01
   INFO:root:test loss 4.74, test ppl 114.81
   INFO:root:[Epoch 39] time cost 19.75s, valid loss 4.79, valid ppl 119.95
   INFO:root:test loss 4.74, test ppl 114.75
   INFO:root:Best test loss 4.74, test ppl 114.75
   ubuntu@ip-162-32-28-44:~/deeplearning-benchmark$ python
   Python 3.6.7 (default, Oct 22 2018, 11:32:17) 
   [GCC 8.2.0] on linux
   Type "help", "copyright", "credits" or "license" for more information.
   >>> import mxnet as mx
   >>> mx
   <module 'mxnet' from '/home/ubuntu/3-mxnet/python/mxnet/__init__.py'>
   >>> quit()
   ubuntu@ip-162-32-28-44:~/deeplearning-benchmark$ cd ..
   ubuntu@ip-162-32-28-44:~$ cd 3-mxnet/
   ubuntu@ip-162-32-28-44:~/3-mxnet$ git branch
     master
   * safe_acc_envvar
   ```
   I'm not seeing any significant difference between the results with this env var on or off (which is the expected behavior, in my opinion), and the final results from both trials fall within a reasonable range. @nswamy Would you please also try this on your machine to see whether you observe the same?
   Back to what @anirudh2290 requested, I think such an extra disclaimer is not needed because:
   1) The doc for the env var already serves its purpose (describing the effect of the env var) well.
   2) The doc makes no claim that any value of this env var leads to a more accurate model.
   3) The reported accuracy loss is not reproducible on my end.
   
   Sorry for the long reply; there was a lot of information I wanted to include.
   Hao

----------------------------------------------------------------
This is an automated message from the Apache Git Service.