jmerkow commented on issue #14421: Updating mxnet from 1.0.0, networks give different outputs
URL: https://github.com/apache/incubator-mxnet/issues/14421#issuecomment-494088153
 
 
   Doing some more analysis, it looks like there are some differences at the convolution layer described above, but those are relatively minor. However, the global pooling layer at the end of the network shows a VERY large difference. I'm using the following to calculate the per-layer error:
   
   ```python
   import numpy as np

   # mean absolute relative error per layer between the two versions
   for n in layers:
       x, y = output[n], mx140_outputs[n]
       err = np.mean([np.abs(((xi - yi) / (xi + 1e-10)).sum())
                      for xi, yi in zip(x, y)])
       print(n, err)
   ```
   
   The error values are all less than 0.1, except after the global pooling layer, where I get `global_avgpool_output 8.13365e+11`.
   
   Was there some change to global pooling that would cause this?
   
   ```
   {u'attr': {u'global_pool': u'True',
              u'kernel': u'(8, 8)',
              u'pad': u'(1, 1)',
              u'pool_type': u'avg'},
    u'inputs': [[1229, 0, 0]],
    u'name': u'global_avgpool',
    u'op': u'Pooling'}
   ```
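
   One hypothesis worth ruling out, since this layer declares both `global_pool=True` and `pad=(1, 1)`: if a newer version started applying the pad under global pooling and counting the padded zeros in the average, the divisor changes. A minimal NumPy sketch of the two behaviors (illustrative only; `feat` is a hypothetical 8x8 feature map, not taken from the network):

   ```python
   import numpy as np

   # hypothetical 8x8 single-channel feature map
   feat = np.arange(64, dtype=np.float64).reshape(8, 8)

   # global average pooling ignoring pad: divide the sum by 8*8
   avg_ignore_pad = feat.mean()

   # if pad=(1, 1) were applied and the zeros counted in the average,
   # the same sum would be divided by 10*10 instead of 8*8
   padded = np.pad(feat, 1, mode='constant')
   avg_count_pad = padded.mean()

   print(avg_ignore_pad, avg_count_pad)
   ```

   That would only scale the outputs by a constant factor (64/100 here), so it may not fully explain a relative error of 8e11 on its own; values of `xi` near zero in the error formula above could also inflate the ratio.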
