sravanbabuiitm opened a new issue #11829: Diff behavior of GlobalAvgPool1D 
compared to keras
URL: https://github.com/apache/incubator-mxnet/issues/11829
 
 
   I'm trying to understand the functionality of 
https://mxnet.incubator.apache.org/api/python/gluon/nn.html#mxnet.gluon.nn.GlobalAvgPool1D
   
   The documentation doesn't seem complete for this, and I have noted a 
discrepancy compared to the behavior of the same API in Keras.
   
   from keras.models import Sequential
   from keras.layers import Dense, Embedding, GlobalAveragePooling1D
   
   model = Sequential()
   model.add(Embedding(vocab_size,
                       embedding_dims,
                       weights=[embedding_matrix],
                       input_length=max_len_doc, trainable=False))
   model.add(GlobalAveragePooling1D())
   model.add(Dense(n_labels, activation='sigmoid'))
   model.compile(loss='categorical_crossentropy',
                 optimizer='adam',
                 metrics=['accuracy'])
   model.summary()
   
   The embedding layer is followed by a global average pooling layer that 
averages along the time-step axis, collapsing (None, 65, 300) to (None, 300):
   Build model...
   _________________________________________________________________
   Layer (type)                 Output Shape              Param #   
   =================================================================
   embedding_2 (Embedding)      (None, 65, 300)           4239000   
   _________________________________________________________________
   global_average_pooling1d_2 ( (None, 300)               0         
   _________________________________________________________________
   dense_2 (Dense)              (None, 3)                 903       
   =================================================================
   Total params: 4,239,903
   Trainable params: 903
   Non-trainable params: 4,239,000
   ____________________________________________
   
   
   In Gluon, I tried the same operation:
   
   import mxnet as mx
   from mxnet import gluon
   from mxnet.gluon import nn
   
   embedding = nn.Embedding(20, 5, sparse_grad=True,
                            weight_initializer=mx.init.Uniform())
   embedding.initialize()
   pooling = gluon.nn.GlobalAvgPool1D()
   print(embedding(mx.nd.array([[1, 3, 7]])))
   print(pooling(embedding(mx.nd.array([[1, 3, 7]]))))
   
   
   Output : 
   
   [[[ 0.05667423 -0.02676572  0.0301794   0.02835616 -0.0437789 ]
     [ 0.01422429  0.01972841 -0.05923161 -0.01276907 -0.06303393]
     [ 0.05422723  0.04757585  0.00387447 -0.01476508  0.06609883]]]
   <NDArray 1x3x5 @cpu(0)>
   
   [[[ 0.00893304]
     [-0.02021638]
     [ 0.03140226]]]
   <NDArray 1x3x1 @cpu(0)>
   
   I was expecting a 1x1x5 output.
   
   It doesn't even appear to be summing along the rows, since summing the 
first row gives:
   
   np.sum([0.05667423, -0.02676572, 0.0301794, 0.02835616, -0.0437789])
   output: 0.04466516999999998
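   For what it's worth, the numbers above are consistent with a different layout convention rather than a wrong computation: Gluon's pooling layers assume NCW layout (batch, channels, width) and average over the last axis, so the first output value 0.00893304 is just the row sum above divided by 5. Keras's GlobalAveragePooling1D assumes channels-last (batch, steps, features) and averages over the steps axis instead. A minimal NumPy sketch of both conventions, using the embedding output printed above (the transpose at the end is one way to reproduce the Keras result through a Gluon-style pool):

```python
import numpy as np

# The embedding output from above: shape (batch=1, steps=3, features=5).
x = np.array([[[ 0.05667423, -0.02676572,  0.0301794,   0.02835616, -0.0437789 ],
               [ 0.01422429,  0.01972841, -0.05923161, -0.01276907, -0.06303393],
               [ 0.05422723,  0.04757585,  0.00387447, -0.01476508,  0.06609883]]])

# Gluon-style: NCW layout, average over the last (width) axis.
# Feeding the (1, 3, 5) array straight in averages each row of 5 values,
# which yields the (1, 3, 1) result seen in the issue.
gluon_style = x.mean(axis=2, keepdims=True)
print(gluon_style.shape)      # (1, 3, 1)
print(gluon_style[0, 0, 0])   # ~0.00893303, i.e. 0.04466517 / 5

# Keras-style: channels-last, average over the steps axis,
# collapsing (1, 3, 5) to (1, 5).
keras_style = x.mean(axis=1)
print(keras_style.shape)      # (1, 5)

# To get the Keras behavior out of a width-axis pool, swap the steps and
# feature axes first: (1, 3, 5) -> (1, 5, 3), then pool -> (1, 5, 1).
keras_via_gluon = x.transpose(0, 2, 1).mean(axis=2, keepdims=True)
print(keras_via_gluon.shape)  # (1, 5, 1)
```

   So in the Gluon snippet above, transposing the embedding output (e.g. with mx.nd.transpose over axes (0, 2, 1)) before pooling should give the expected five averaged features.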
   
   Can you add more documentation to this API, and also explain how it differs 
from other libraries offering the same API?
   
   

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


With regards,
Apache Git Services
