FoConrad opened a new issue #10563: Suboptimal performance implementing PPO with Adam Optimizer
URL: https://github.com/apache/incubator-mxnet/issues/10563
 
 
   ## Description
   We noticed our gluon/MXNet [Proximal Policy Optimization](https://arxiv.org/abs/1707.06347) (PPO) implementation is underperforming compared to the OpenAI Baselines version in TensorFlow. Upon inspection it appears that this may be due, in part, to the Adam optimizer in MXNet.
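
   For reference, the textbook Adam update (Kingma & Ba, 2014) that both frameworks nominally implement is sketched below in plain NumPy. This is illustrative only, not either framework's actual code; any systematic difference in, for example, the bias correction or the placement of epsilon would compound over many update steps.

   ```python
   import numpy as np

   def adam_step(w, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
       # Textbook Adam update (illustrative sketch, not MXNet's or TF's code).
       m = beta1 * m + (1 - beta1) * grad        # first-moment (mean) estimate
       v = beta2 * v + (1 - beta2) * grad ** 2   # second-moment estimate
       m_hat = m / (1 - beta1 ** t)              # bias correction; t starts at 1
       v_hat = v / (1 - beta2 ** t)
       w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
       return w, m, v
   ```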
   
   This is seen by initializing both networks with the same parameters and taking great care that all of the computation is equivalent (verified by switching to an optimizer other than Adam and noting that the weights of the two networks then progress with nearly the same values). With Adam, the weights start to diverge significantly. The divergence occurs only in the policy network (and occurs with or without the entropy term).
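
   For example, the per-step check we rely on looks roughly like the following (a minimal sketch; ``tf_weights`` is assumed to be a list of NumPy arrays exported from the TensorFlow network in the same order as the gluon parameters):

   ```python
   import numpy as np

   def max_weight_gap(gluon_net, tf_weights):
       # Largest elementwise gap between the gluon parameters and the
       # corresponding TensorFlow values (assumes matching order and shapes).
       gaps = []
       for param, tf_val in zip(gluon_net.collect_params().values(), tf_weights):
           gaps.append(float(np.max(np.abs(param.data().asnumpy() - tf_val))))
       return max(gaps)
   ```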
   
   ## Environment info (Required)
   Ubuntu 16.04
   CPU: Intel(R) Xeon(R) CPU E5-1620 v3 @ 3.50GHz
   MXNet version: 1.1.0
   TensorFlow version: 1.4.1
   Numpy version: 1.13.1
   
   ```
   Architecture:          x86_64
   CPU op-mode(s):        32-bit, 64-bit
   Byte Order:            Little Endian
   CPU(s):                8
   On-line CPU(s) list:   0-7
   Thread(s) per core:    2
   Core(s) per socket:    4
   Socket(s):             1
   NUMA node(s):          1
   Vendor ID:             GenuineIntel
   CPU family:            6
   Model:                 63
   Model name:            Intel(R) Xeon(R) CPU E5-1620 v3 @ 3.50GHz
   Stepping:              2
   CPU MHz:               3490.156
   CPU max MHz:           3600.0000
   CPU min MHz:           1200.0000
   BogoMIPS:              6984.53
   Virtualization:        VT-x
   L1d cache:             32K
   L1i cache:             32K
   L2 cache:              256K
   L3 cache:              10240K
   NUMA node0 CPU(s):     0-7
   Flags:                 fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm epb invpcid_single retpoline kaiser tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm xsaveopt cqm_llc cqm_occup_llc dtherm ida arat pln pts
   ----------Python Info----------
   Version      : 3.5.2
   Compiler     : GCC 5.4.0 20160609
   Build        : ('default', 'Nov 23 2017 16:37:01')
   Arch         : ('64bit', 'ELF')
   ------------Pip Info-----------
   Version      : 9.0.3
   Directory    : /home/con/workspace/.../tenv/lib/python3.5/site-packages/pip
   ----------MXNet Info-----------
   Version      : 1.1.0
   Directory    : /home/con/workspace/.../tenv/lib/python3.5/site-packages/mxnet
   Commit Hash   : e29bb6f76365e45dd44e23941692c9d969959315
   ----------System Info----------
   Platform     : Linux-4.4.0-116-generic-x86_64-with-Ubuntu-16.04-xenial
   system       : Linux
   node         : Conrad-Tower
   release      : 4.4.0-116-generic
   version      : #140-Ubuntu SMP Mon Feb 12 21:23:04 UTC 2018
   ----------Hardware Info----------
   machine      : x86_64
   processor    : x86_64
   ----------Network Test----------
   Setting timeout: 10
   Timing for FashionMNIST: https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/dataset/fashion-mnist/train-labels-idx1-ubyte.gz, DNS: 0.0013 sec, LOAD: 0.4041 sec.
   Timing for Conda: https://repo.continuum.io/pkgs/free/, DNS: 0.0009 sec, LOAD: 0.0277 sec.
   Timing for Gluon Tutorial(en): http://gluon.mxnet.io, DNS: 0.0011 sec, LOAD: 0.0108 sec.
   Timing for PYPI: https://pypi.python.org/pypi/pip, DNS: 0.0009 sec, LOAD: 0.0335 sec.
   Timing for Gluon Tutorial(cn): https://zh.gluon.ai, DNS: 0.0011 sec, LOAD: 0.3374 sec.
   Timing for MXNet: https://github.com/apache/incubator-mxnet, DNS: 0.0010 sec, LOAD: 0.5805 sec.
   
   ```
   
   I'm using Python 3.5.2.
   
   ## Minimum reproducible example
   Provided is the script mnist.py (linked below). It requires both TensorFlow and MXNet, and uses the MNIST dataset to reproduce the weight divergence (when the networks are initialized to the exact same weight values) between a simple gluon network and an equivalent TensorFlow one. The example was carefully crafted so that the TensorFlow code it contains is in essence equivalent to the OpenAI Baselines implementation of PPO (some non-contributing factors were whittled away for a simpler example).
   
   The code, understandably, may not actually train a meaningful MNIST classifier; rather, it is meant to mirror the PPO policy objective and track the weight progression.
   
   Find the code [here](https://gist.github.com/FoConrad/29a51cdfa58c51cdab4df8e902d10207).
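
   For context, the policy surrogate in question is the clipped objective from the PPO paper, ``L_CLIP = E[min(r_t * A_t, clip(r_t, 1 - eps, 1 + eps) * A_t)]``, where ``r_t`` is the probability ratio between the new and old policies and ``A_t`` is the advantage. A minimal NDArray sketch (the variable names here are ours, not the gist's):

   ```python
   import mxnet.ndarray as nd

   def ppo_surrogate_loss(new_logp, old_logp, advantages, clip_eps=0.2):
       # Negative PPO clipped surrogate (the quantity to be minimized).
       ratio = nd.exp(new_logp - old_logp)                # r_t(theta)
       clipped = nd.clip(ratio, 1.0 - clip_eps, 1.0 + clip_eps)
       surrogate = nd.minimum(ratio * advantages, clipped * advantages)
       return -nd.mean(surrogate)
   ```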
   
   ## Steps to reproduce
   
   1. ``python mnist.py # Shows weight divergence using Adam``
   2. ``python mnist.py --optimizer momentum # Shows weight divergence is insignificant with another optimizer``
   
   ## What have you tried to solve it?
   
   1. Used different optimizers: this eliminates the weight divergence, but the other optimizers perform worse than Adam.
   2. Used simpler loss functions: this slows the divergence, even with the Adam optimizer, possibly to the point where it becomes insignificant. However, using a simpler loss function is not an option when implementing PPO.
   3. Used the same initialization: during debugging we made sure both networks were initialized in the same fashion, to ensure the difference in performance was not due to initialization. The minimal example also ensures that both networks are initialized identically.
   4. Removed parts of the PPO network: we isolated the issue as coming from the policy network in PPO. The loss function, which contains an entropy term, also seems to be significant. When using just the entropy term, everything is fine; when using the policy surrogate loss (alone or together with the entropy term), we start to see divergence in the weights.
   5. Used a non-gluon implementation of softmax cross entropy loss: I calculated the softmax cross entropy loss in a few different ways, using more basic MXNet NDArray operations, to ensure the problem wasn't with gluon.loss.SoftmaxCrossEntropyLoss (this was a suspicion because using a different loss, such as a sigmoid for calculating the log probabilities, seems to mask the problem in some cases). One such variant is sketched after this list.
   6. Tried using the GPU instead of the CPU: we moved the MXNet network to the GPU to see whether the different implementation would resolve the problem (it did not).
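
   The sketch referenced in item 5: one way to compute the per-sample softmax cross entropy from basic NDArray operations, assuming 2-D logits and integer class labels (illustrative only; the reproduction script may differ in details):

   ```python
   import mxnet.ndarray as nd

   def manual_softmax_ce(logits, labels):
       # Per-sample softmax cross entropy from basic NDArray ops.
       log_probs = nd.log_softmax(logits, axis=-1)   # numerically stable log-softmax
       picked = nd.pick(log_probs, labels, axis=-1)  # log prob of the true class
       return -picked                                # negative log likelihood
   ```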
   
   
