eric-haibin-lin opened a new issue #9310: model using contrib.SparseEmbedding returns inconsistent results between runs
URL: https://github.com/apache/incubator-mxnet/issues/9310
 
 
## Description
Training the word-language-model example with `SparseEmbedding` and a fixed random seed doesn't produce a consistent loss across runs. @ZiyueHuang
   
   ## Environment info (Required)
   
   
Package used (Python/R/Scala/Julia): Python
   
   
   ## Build info (Required if built from source)
   
   Compiler (gcc/clang/mingw/visual studio):
   
   MXNet commit hash: 5c3acff3b7bdb177a4731094faa724e31387715d
   
   Build config:
   (Paste the content of config.mk, or the build command.)
   
## Error Message:
N/A. No error is thrown; the loss values are simply inconsistent across runs (see the steps below).
   
## Minimum reproducible example
See the steps below; they modify the existing example at https://github.com/apache/incubator-mxnet/tree/master/example/rnn/word_lm.
   
   ## Steps to reproduce
   
1. Replace https://github.com/apache/incubator-mxnet/blob/master/example/rnn/word_lm/model.py#L24-L26 with the following code:
```
    dense = True  # set to False to switch to contrib.SparseEmbedding
    stype = 'default' if dense else 'row_sparse'
    weight = mx.sym.var("encoder_weight", init=mx.init.Uniform(0.1),
                        stype=stype)
    EMB = mx.sym.Embedding if dense else mx.sym.contrib.SparseEmbedding
    embed = EMB(data=data, weight=weight, input_dim=vocab_size,
                output_dim=num_embed, name='embed')
```
and replace https://github.com/apache/incubator-mxnet/blob/master/example/rnn/word_lm/train.py#L114 with:
```
    module.update(max_norm=None)
```
2. Run `python train.py --emsize=200` with `dense = True`; the loss at batch 200 is always:
```
2018-01-04 18:50:50,622 Iter[0] Batch [200]     Loss:  690.8467010
```
   
3. Run `python train.py --emsize=200` with `dense = False`; the loss at batch 200 varies between runs (a standalone determinism check is sketched after these steps):

Run 1:
```
2018-01-04 18:49:28,784 Iter[0] Batch [200]     Loss:  600.4843252
```
Run 2:
```
2018-01-04 18:49:08,760 Iter[0] Batch [200]     Loss:  669.0238662
```
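
Since the dense run above is bit-for-bit repeatable while the sparse run is not, a standalone check of the operator itself may help separate operator-level nondeterminism from the optimizer update. The sketch below is mine, not code from the example: the helper name `weight_grad`, the shapes, and the indices are all made up, and probing the lone operator may or may not reproduce the issue.

```
import mxnet as mx
import numpy as np

def weight_grad(use_sparse, seed=2018):
    """One seeded forward/backward of a lone embedding lookup; returns
    the weight gradient as a dense numpy array."""
    mx.random.seed(seed)
    np.random.seed(seed)
    data = mx.sym.var('data')
    stype = 'row_sparse' if use_sparse else 'default'
    # weight is left uninitialized on purpose: the gradient of a summed
    # lookup does not depend on the weight values
    weight = mx.sym.var('weight', stype=stype)
    op = mx.sym.contrib.SparseEmbedding if use_sparse else mx.sym.Embedding
    embed = op(data=data, weight=weight, input_dim=50, output_dim=4)
    loss = mx.sym.make_loss(mx.sym.sum(embed))
    exe = loss.simple_bind(mx.cpu(), data=(8,))
    # duplicate indices on purpose: gradient accumulation over repeated
    # rows is a plausible source of nondeterminism
    exe.arg_dict['data'][:] = mx.nd.array([1, 2, 2, 3, 3, 3, 4, 1])
    exe.forward(is_train=True)
    exe.backward()
    grad = exe.grad_dict['weight']
    return (grad.tostype('default') if use_sparse else grad).asnumpy()

for use_sparse in (False, True):
    g1, g2 = weight_grad(use_sparse), weight_grad(use_sparse)
    name = 'SparseEmbedding' if use_sparse else 'Embedding'
    print('%s deterministic across runs: %s' % (name, np.array_equal(g1, g2)))
```

If the sparse gradient already differs between the two seeded calls, the problem sits in the operator's backward pass; if not, the sparse optimizer update in `module.update` is the next place to look.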
   
   
## What have you tried to solve it?

1. The inconsistency still occurs with dropout=0 (a further seed-pinning check is sketched below).
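
One further check that could be tried (a suggestion, not something done above) is to pin every RNG the example might touch at the very top of `train.py`, in case the fixed seed does not cover all of them. `SEED` is an arbitrary constant of my choosing:

```
import random
import numpy as np
import mxnet as mx

SEED = 2018  # arbitrary value, used only for illustration
random.seed(SEED)     # Python's stdlib RNG
np.random.seed(SEED)  # numpy's global RNG
mx.random.seed(SEED)  # MXNet's global RNG
```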
   
