Note: Providing complete information in the most concise form is the best way 
to get help. This issue template serves as a checklist of the essential 
information for most technical issues and bug reports. For non-technical 
issues and feature requests, feel free to present the information in whatever 
form you believe is best.

For Q & A and discussion, please start a discussion thread at 
https://discuss.mxnet.io 

## Description
The shape of Dropout's output doesn't match that of its input when it follows a 
MaxPool2D: 
https://mxnet.incubator.apache.org/api/python/gluon/nn.html?highlight=dropout#mxnet.gluon.nn.Dropout
(see outputs below). I believe this is a bug, since the Dropout documentation 
says it should preserve the input shape, and the Keras model I am converting 
from keeps the same shape, but please let me know if I am missing something or 
if this has already been fixed.
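
For reference, a minimal imperative-Gluon sketch (not from the notebook; the 
shape is hypothetical, chosen to match the pool output below) of the 
shape-preserving behavior I expect from Dropout:

```python
import mxnet as mx
from mxnet import autograd, gluon, nd

x = nd.random.uniform(shape=(32, 16, 50, 50))  # hypothetical input, same shape as pool1's output
drop = gluon.nn.Dropout(0.25)
with autograd.record(train_mode=True):         # dropout is only applied in training mode
    y = drop(x)
print(x.shape, y.shape)                        # expected: both (32, 16, 50, 50)
```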

## Environment info (Required)

```
----------Python Info----------
Version      : 3.6.5
Compiler     : GCC 7.2.0
Build        : ('default', 'Mar 29 2018 18:21:58')
Arch         : ('64bit', '')
------------Pip Info-----------
Version      : 18.0
Directory    : /home/cory/anaconda3/lib/python3.6/site-packages/pip
----------MXNet Info-----------
Version      : 1.2.1
Directory    : /home/cory/anaconda3/lib/python3.6/site-packages/mxnet
Commit Hash   : 106391a1f0ee012b1ea38764d711e76774ce77e1
----------System Info----------
Platform     : Linux-4.15.0-34-generic-x86_64-with-debian-stretch-sid
system       : Linux
node         : sprucemoose
release      : 4.15.0-34-generic
version      : #37~16.04.1-Ubuntu SMP Tue Aug 28 10:44:06 UTC 2018
----------Hardware Info----------
machine      : x86_64
processor    : x86_64
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                8
On-line CPU(s) list:   0-7
Thread(s) per core:    2
Core(s) per socket:    4
Socket(s):             1
NUMA node(s):          1
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 94
Model name:            Intel(R) Core(TM) i7-6700K CPU @ 4.00GHz
Stepping:              3
CPU MHz:               4200.111
CPU max MHz:           4200.0000
CPU min MHz:           800.0000
BogoMIPS:              8016.00
Virtualization:        VT-x
L1d cache:             32K
L1i cache:             32K
L2 cache:              256K
L3 cache:              8192K
NUMA node0 CPU(s):     0-7
Flags:                 fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb invpcid_single pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx rdseed adx smap clflushopt intel_pt xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp flush_l1d
----------Network Test----------
Setting timeout: 10
Timing for MXNet: https://github.com/apache/incubator-mxnet, DNS: 0.0101 sec, LOAD: 0.5889 sec.
Timing for Gluon Tutorial(en): http://gluon.mxnet.io, DNS: 0.1829 sec, LOAD: 0.3296 sec.
Timing for Gluon Tutorial(cn): https://zh.gluon.ai, DNS: 0.1814 sec, LOAD: 0.2691 sec.
Timing for FashionMNIST: https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/dataset/fashion-mnist/train-labels-idx1-ubyte.gz, DNS: 0.0288 sec, LOAD: 0.2088 sec.
Timing for PYPI: https://pypi.python.org/pypi/pip, DNS: 0.0113 sec, LOAD: 0.5126 sec.
Timing for Conda: https://repo.continuum.io/pkgs/free/, DNS: 0.0156 sec, LOAD: 0.1057 sec.
```

Package used (Python/R/Scala/Julia):
Python: mxnet-cu91

## Error Message:
My output is

```
conv1 [(32, 16, 101, 101)]
conv1 [(32, 16, 101, 101)]
conv1 [(32, 16, 101, 101)]
pool1 [(32, 16, 50, 50)]
pool1 [(32, 16, 25, 25)]  <------- here is the problem; the output shape should probably be (32, 16, 50, 50)
```

which should be similar to 

```
conv1 Tensor("conv2d_47/BiasAdd:0", shape=(?, 101, 101, 16), dtype=float32)
conv1 Tensor("add_19/add:0", shape=(?, 101, 101, 16), dtype=float32)
conv1 Tensor("activation_51/Relu:0", shape=(?, 101, 101, 16), dtype=float32)
pool1 Tensor("max_pooling2d_5/MaxPool:0", shape=(?, 50, 50, 16), dtype=float32)
pool1 Tensor("dropout_9/cond/Merge:0", shape=(?, 50, 50, 16), dtype=float32)
```


Note that the MXNet model uses NCHW layout while the Keras model uses NHWC.
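
(For comparison between the two outputs: a transpose along the lines of this 
hypothetical snippet maps NHWC to NCHW, so (?, 50, 50, 16) corresponds to 
(?, 16, 50, 50).)

```python
import mxnet as mx

x_nhwc = mx.nd.zeros((32, 50, 50, 16))               # hypothetical NHWC tensor
x_nchw = mx.nd.transpose(x_nhwc, axes=(0, 3, 1, 2))  # NHWC -> NCHW, i.e. (32, 16, 50, 50)
print(x_nchw.shape)
```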

## Minimum reproducible example

```python
def build_model(input_layer, start_neurons, DropoutRatio=0.5):
    # 101 -> 50
    k_size = (3, 3)
    same_padding = (k_size[0]//2, k_size[1]//2)
    #input_layer = mx.sym.transpose(input_layer, [0, 3, 1, 2])
    conv1 = mx.gluon.nn.Conv2D(start_neurons * 1, kernel_size=k_size, padding=same_padding)(input_layer)
    print('conv1', conv1.infer_shape(data=(32, 1, 101, 101))[1])
    conv1 = residual_block(conv1, start_neurons * 1)
    print('conv1', conv1.infer_shape(data=(32, 16, 101, 101))[1])
    conv1 = residual_block(conv1, start_neurons * 1, True)
    print('conv1', conv1.infer_shape(data=(32, 16, 101, 101))[1])
    pool1 = mx.gluon.nn.MaxPool2D()(conv1)  # (2, 2)
    print('pool1', pool1.infer_shape(data=(32, 16, 101, 101))[1])
    pool1 = mx.gluon.nn.Dropout(DropoutRatio/2)(pool1)
    print('pool1', pool1.infer_shape(data=(32, 16, 50, 50))[1])
```

which is converted from 

```python
def build_model(input_layer, start_neurons, DropoutRatio=0.5):
    # 101 -> 50
    conv1 = Conv2D(start_neurons * 1, (3, 3), activation=None, padding="same")(input_layer)
    print('conv1', conv1)
    conv1 = residual_block(conv1, start_neurons * 1)
    print('conv1', conv1)
    conv1 = residual_block(conv1, start_neurons * 1, True)
    print('conv1', conv1)
    pool1 = MaxPooling2D((2, 2))(conv1)
    print('pool1', pool1)
    pool1 = Dropout(DropoutRatio/2)(pool1)
    print('pool1', pool1)
```

See the attached notebook to reproduce; a reduced standalone sketch is also included below.
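
A reduced sketch (hypothetical; the residual blocks from the notebook are 
omitted, and the same infer_shape pattern as above is followed) covering just 
the Conv2D -> MaxPool2D -> Dropout sequence:

```python
import mxnet as mx

data = mx.sym.Variable('data')
conv1 = mx.gluon.nn.Conv2D(16, kernel_size=(3, 3), padding=(1, 1))(data)
print('conv1', conv1.infer_shape(data=(32, 1, 101, 101))[1])
pool1 = mx.gluon.nn.MaxPool2D()(conv1)  # pool_size defaults to (2, 2)
print('pool1', pool1.infer_shape(data=(32, 16, 101, 101))[1])
drop1 = mx.gluon.nn.Dropout(0.25)(pool1)
print('drop1', drop1.infer_shape(data=(32, 16, 50, 50))[1])  # data shape fed as in the snippet above
```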

## Steps to reproduce

1. Download this notebook. 
[mxnet_kernel.ipynb.zip](https://github.com/apache/incubator-mxnet/files/2417812/mxnet_kernel.ipynb.zip)
2. Remove the data sections as you don't need the data to hit the problem.
3. Run the notebook.

## What have you tried to solve it?

1. Comparing the parameters of MXNet's MaxPool2D with Keras's MaxPooling2D, and 
likewise for the two Dropout layers. AFAIK, the parameters are the same (see 
the short comparison sketch after this list).
2. Commenting out the preceding MaxPool2D (Dropout still reduces the 
height/width dimensions).
3. I've started looking into the codebase and see the implementation in 
operator/nn/dropout-inl.h and operator/nn/dropout.cc. I've checked out 1.2.1 
(my MXNet version) and will likely continue debugging there.
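
A rough side-by-side of the constructors I compared (hypothetical, with the 
defaults I believe each framework uses written out):

```python
import mxnet as mx

# MXNet Gluon (NCHW layout by default)
mx_pool = mx.gluon.nn.MaxPool2D(pool_size=(2, 2), strides=None, padding=0)
mx_drop = mx.gluon.nn.Dropout(rate=0.25)

# Keras equivalents (NHWC layout by default), shown as comments:
# keras_pool = MaxPooling2D(pool_size=(2, 2), strides=None, padding='valid')
# keras_drop = Dropout(rate=0.25)
```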

