[incubator-mxnet] branch master updated (c583e44 -> 0c5677e)

2019-11-04 Thread zhreshold
This is an automated email from the ASF dual-hosted git repository.

zhreshold pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from c583e44  fix requantize flaky test (#16709)
 add 0c5677e  Faster GPU NMS operator (#16542)

No new revisions were added by this update.

Summary of changes:
 src/operator/contrib/bounding_box.cc |   1 +
 src/operator/contrib/bounding_box.cu | 689 ++-
 src/operator/tensor/sort_op-inl.cuh  | 138 +--
 src/operator/tensor/sort_op.h|  50 ++-
 4 files changed, 840 insertions(+), 38 deletions(-)



[GitHub] [incubator-mxnet] zhreshold merged pull request #16542: Faster GPU NMS operator

2019-11-04 Thread GitBox
zhreshold merged pull request #16542: Faster GPU NMS operator
URL: https://github.com/apache/incubator-mxnet/pull/16542
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] haojin2 commented on issue #16725: Failed test: test_gluon_gpu.test_rnn_unroll_variant_length

2019-11-04 Thread GitBox
haojin2 commented on issue #16725: Failed test: 
test_gluon_gpu.test_rnn_unroll_variant_length
URL: 
https://github.com/apache/incubator-mxnet/issues/16725#issuecomment-549690366
 
 
   @ptrendx @DickJC123 Could you provide some insight into this issue? It seems related to the fused ops.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] artor1os opened a new issue #16725: Failed test

2019-11-04 Thread GitBox
artor1os opened a new issue #16725: Failed test
URL: https://github.com/apache/incubator-mxnet/issues/16725
 
 
   test name: test_gluon_gpu.test_rnn_unroll_variant_length
   
   log:
   
   ```
   test_gluon_gpu.test_rnn_unroll_variant_length ... 
   Segmentation fault: 11
   
   Stack trace:
 [bt] (0) /work/mxnet/python/mxnet/../../lib/libmxnet.so(+0x515d559) 
[0x7fd123274559]
 [bt] (1) /lib/x86_64-linux-gnu/libc.so.6(+0x354b0) [0x7fd1978a44b0]
 [bt] (2) 
/work/mxnet/python/mxnet/../../lib/libmxnet.so(nnvm::Symbol::ListInputs(nnvm::Symbol::ListInputOption)
 const+0x24d) [0x7fd125ed873d]
 [bt] (3) 
/work/mxnet/python/mxnet/../../lib/libmxnet.so(nnvm::Symbol::ListInputNames[abi:cxx11](nnvm::Symbol::ListInputOption)
 const+0x2a) [0x7fd125ed93ba]
 [bt] (4) 
/work/mxnet/python/mxnet/../../lib/libmxnet.so(mxnet::FusedOp::GenerateCode(int,
 std::vector > const&, 
std::vector > const&, std::vector > const&, std::vector > const&, 
std::vector > const&, std::vector > const&, std::vector > 
const&, int, std::__cxx11::basic_string, 
std::allocator > const&, std::vector >*)+0x38c1) [0x7fd125a9e3c1]
 [bt] (5) /work/mxnet/python/mxnet/../../lib/libmxnet.so(void 
mxnet::FusedOp::Forward(nnvm::NodeAttrs const&, mxnet::OpContext 
const&, std::vector > const&, 
std::vector > const&, 
std::vector > const&)+0x2b1) 
[0x7fd125aa3631]
 [bt] (6) 
/work/mxnet/python/mxnet/../../lib/libmxnet.so(mxnet::imperative::PushFCompute(std::function > const&, std::vector > const&, std::vector > const&)> const&, nnvm::Op const*, 
nnvm::NodeAttrs const&, mxnet::Context const&, std::vector > const&, std::vector > const&, std::vector > const&, std::vector > const&, std::vector > const&, std::vector > const&, std::vector > 
const&)::{lambda(mxnet::RunContext)#1}::operator()(mxnet::RunContext) 
const+0x1423) [0x7fd1229795e3]
 [bt] (7) 
/work/mxnet/python/mxnet/../../lib/libmxnet.so(std::_Function_handler > const&, std::vector > const&, std::vector > const&)> const&, nnvm::Op const*, 
nnvm::NodeAttrs const&, mxnet::Context const&, std::vector > const&, std::vector > const&, std::vector > const&, std::vector > const&, std::vector > const&, std::vector > const&, std::vector > 
const&)::{lambda(mxnet::RunContext)#1}>::_M_invoke(std::_Any_data const&, 
mxnet::RunContext&&)+0x17) [0x7fd122979ac7]
 [bt] (8) 
/work/mxnet/python/mxnet/../../lib/libmxnet.so(std::_Function_handler::_M_invoke(std::_Any_data const&, 
mxnet::RunContext&&, mxnet::engine::CallbackOnComplete&&)+0x1ec) 
[0x7fd1230b119c]
   terminate called without an active exception
   /work/runtime_functions.sh: line 1106: 6 Aborted (core 
dumped) nosetests-3.4 $NOSE_COVERAGE_ARGUMENTS $NOSE_TIMER_ARGUMENTS 
--with-xunit --xunit-file nosetests_gpu.xml --verbose tests/python/gpu
   ```
   
   build link:
   
http://jenkins.mxnet-ci.amazon-ml.com/blue/organizations/jenkins/mxnet-validation%2Funix-gpu/detail/PR-16720/5/pipeline


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] sxjscience edited a comment on issue #16705: Dropout inconsistency bug

2019-11-04 Thread GitBox
sxjscience edited a comment on issue #16705: Dropout inconsistency bug
URL: 
https://github.com/apache/incubator-mxnet/issues/16705#issuecomment-549683831
 
 
   With the help of @xidulu , we have located the root cause of the issue:
   
   The bug is triggered because we have multiple parallel GPU random resources: 
https://github.com/apache/incubator-mxnet/blob/c583e44816a5e383493f35e69daaa92a47e40e39/src/resource.cc#L93-L94
   
   When we create a new Dropout Node, we will attach a random resource to the 
node: 
https://github.com/apache/incubator-mxnet/blob/c583e44816a5e383493f35e69daaa92a47e40e39/src/operator/nn/dropout.cc#L148-L164
 
   
   Since there are multiple random resources, we select one in a round-robin fashion. Each resource has its own seed, which results in the inconsistent behavior. 
https://github.com/apache/incubator-mxnet/blob/c583e44816a5e383493f35e69daaa92a47e40e39/src/resource.cc#L344-L351
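
   As a rough illustration of that mechanism (a NumPy analogy only, not MXNet's actual ResourceManager; the generator pool and `dropout_mask` helper below are hypothetical), round-robin selection among generators that each carry their own seed hands a node a different random stream depending on which slot it happens to get:
   ```python
   # Analogy only (NumPy stand-ins, not MXNet's actual ResourceManager):
   # several parallel generators, each carrying its own seed, handed out round-robin.
   import numpy as np

   NUM_PARALLEL = 4                      # analogous to MXNET_GPU_PARALLEL_RAND_COPY > 1
   SEEDS = [123 + i for i in range(NUM_PARALLEL)]

   def make_pool():
       return [np.random.RandomState(s) for s in SEEDS]

   def dropout_mask(pool, request_index):
       # round-robin pick: which generator a node gets depends on how many
       # resource requests happened before it
       gen = pool[request_index % NUM_PARALLEL]
       return (gen.uniform(size=3) > 0.5).astype(np.float32)

   # The "same" dropout sees a different seed depending on its slot, so the mask
   # changes between runs even though every pool starts from the same seeds.
   print(dropout_mask(make_pool(), request_index=0))
   print(dropout_mask(make_pool(), request_index=1))   # generally a different mask
   ```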
   
   The simplest fix is to use a single GPU random generator. Thus, setting `os.environ['MXNET_GPU_PARALLEL_RAND_COPY'] = '1'` will fix this problem:
   
   ```python
   import os

   os.environ['MXNET_GPU_PARALLEL_RAND_COPY'] = '1'

   import mxnet as mx
   import numpy as np
   import random
   from numpy.testing import assert_allclose

   base_y_np = None

   for nrepeat in [1, 2, 3, 4]:
       seed = 123
       mx.random.seed(seed)
       np.random.seed(seed)
       random.seed(seed)

       x = mx.nd.ones((3, 3), ctx=mx.gpu())
       for _ in range(nrepeat):
           y = mx.nd.Dropout(x, cudnn_off=True)
       with mx.autograd.record():
           y = mx.nd.Dropout(x, cudnn_off=True)
       y_np = y.asnumpy()
       if base_y_np is None:
           base_y_np = y_np
       else:
           assert_allclose(base_y_np, y_np)
   ```


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[incubator-mxnet-site] branch asf-site updated: Bump the publish timestamp.

2019-11-04 Thread aaronmarkham
This is an automated email from the ASF dual-hosted git repository.

aaronmarkham pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new 576bcbe  Bump the publish timestamp.
576bcbe is described below

commit 576bcbe1fc698b3c072e223c618b470ef0107284
Author: mxnet-ci 
AuthorDate: Tue Nov 5 06:39:37 2019 +

Bump the publish timestamp.
---
 date.txt | 1 +
 1 file changed, 1 insertion(+)

diff --git a/date.txt b/date.txt
new file mode 100644
index 000..166f31f
--- /dev/null
+++ b/date.txt
@@ -0,0 +1 @@
+Tue Nov  5 06:39:37 UTC 2019



[GitHub] [incubator-mxnet] sxjscience commented on issue #16705: Dropout inconsistency bug

2019-11-04 Thread GitBox
sxjscience commented on issue #16705: Dropout inconsistency bug
URL: 
https://github.com/apache/incubator-mxnet/issues/16705#issuecomment-549683831
 
 
   With the help of @xidulu , we have located the root cause of the issue:
   
   The bug is triggered because we have multiple parallel GPU random resources: 
https://github.com/apache/incubator-mxnet/blob/c583e44816a5e383493f35e69daaa92a47e40e39/src/resource.cc#L93-L94
   
   When we create a new Dropout Node, we will attach a random resource to the 
node: 
https://github.com/apache/incubator-mxnet/blob/c583e44816a5e383493f35e69daaa92a47e40e39/src/operator/nn/dropout.cc#L148-L164
 
   
   Since there are multiple random resources, we select one in a round-robin fashion. Each resource has its own seed, which results in the inconsistent behavior. 
https://github.com/apache/incubator-mxnet/blob/c583e44816a5e383493f35e69daaa92a47e40e39/src/resource.cc#L344-L351
   
   The simplest fix is to use a single GPU random generator. Thus, setting `os.environ['MXNET_GPU_PARALLEL_RAND_COPY'] = '1'` will fix this problem:
   
   ```
   import os

   os.environ['MXNET_GPU_PARALLEL_RAND_COPY'] = '1'

   import mxnet as mx
   import numpy as np
   import random
   from numpy.testing import assert_allclose

   base_y_np = None

   for nrepeat in [1, 2, 3, 4]:
       seed = 123
       mx.random.seed(seed)
       np.random.seed(seed)
       random.seed(seed)

       x = mx.nd.ones((3, 3), ctx=mx.gpu())
       for _ in range(nrepeat):
           y = mx.nd.Dropout(x, cudnn_off=True)
       with mx.autograd.record():
           y = mx.nd.Dropout(x, cudnn_off=True)
       y_np = y.asnumpy()
       if base_y_np is None:
           base_y_np = y_np
       else:
           assert_allclose(base_y_np, y_np)
   ```


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[incubator-mxnet] branch master updated (b9f3b06 -> c583e44)

2019-11-04 Thread patriczhao
This is an automated email from the ASF dual-hosted git repository.

patriczhao pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from b9f3b06  Updated logos. (#16719)
 add c583e44  fix requantize flaky test (#16709)

No new revisions were added by this update.

Summary of changes:
 tests/python/quantization/test_quantization.py | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)



[GitHub] [incubator-mxnet] pengzhao-intel merged pull request #16709: Fix requantize flaky test

2019-11-04 Thread GitBox
pengzhao-intel merged pull request #16709: Fix requantize flaky test
URL: https://github.com/apache/incubator-mxnet/pull/16709
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] vasusingla619 commented on issue #16596: How to initialize a CPU tensor in custom cu file?

2019-11-04 Thread GitBox
vasusingla619 commented on issue #16596: How to initialize a CPU tensor in 
custom cu file?
URL: 
https://github.com/apache/incubator-mxnet/issues/16596#issuecomment-549662413
 
 
   Thanks, this was solved!


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] vasusingla619 closed issue #16596: How to initialize a CPU tensor in custom cu file?

2019-11-04 Thread GitBox
vasusingla619 closed issue #16596: How to initialize a CPU tensor in custom cu 
file?
URL: https://github.com/apache/incubator-mxnet/issues/16596
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] stereomatchingkiss opened a new issue #16724: Example link of the image classification show 404

2019-11-04 Thread GitBox
stereomatchingkiss opened a new issue #16724: Example link of the image 
classification show 404
URL: https://github.com/apache/incubator-mxnet/issues/16724
 
 
   I keep getting a 404 error when I try to read the image classification examples: https://mxnet.apache.org/tutorials/python/predict_image.html


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] reminisce commented on issue #16699: Mixed data type binary ops

2019-11-04 Thread GitBox
reminisce commented on issue #16699: Mixed data type binary ops
URL: https://github.com/apache/incubator-mxnet/pull/16699#issuecomment-549655791
 
 
   @marcoabreu Appreciate your review. I can assure you that Windows is not excluded from mixed-precision support; it is supported just as on Unix. @haojin2 went through thorough trial-and-error to make this work with the Windows compilation toolchain, something very few of us would be willing to get our hands dirty with, since compilation on Windows platforms is outside our domain knowledge. This is an extremely non-trivial task that took @haojin2 many days and nights to accomplish, so kudos to @haojin2.
   
   We are trying to merge this to meet a deadline. If you feel your concerns/questions have not been addressed by @haojin2's explanation, could you raise them so that we can help close the gap? Thanks.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] ptrendx commented on issue #16723: [Bug] fused_op does not support boolean type

2019-11-04 Thread GitBox
ptrendx commented on issue #16723: [Bug] fused_op does not support boolean type
URL: 
https://github.com/apache/incubator-mxnet/issues/16723#issuecomment-549639921
 
 
   I see, this is a newly added type. We will fix this.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] sxjscience opened a new issue #16723: Fuse_op does not support boolean type

2019-11-04 Thread GitBox
sxjscience opened a new issue #16723: Fuse_op does not support boolean type
URL: https://github.com/apache/incubator-mxnet/issues/16723
 
 
   @ptrendx I find that the FusedOp does not support the boolean type. The 
following script will trigger the error.
   
   ```python
   import mxnet as mx
   import numpy as np
   from mxnet.gluon import HybridBlock
   mx.npx.set_np()

   class Foo(HybridBlock):
       def __init__(self, prefix=None, params=None):
           super(Foo, self).__init__(prefix=prefix, params=params)

       def hybrid_forward(self, F, valid_length):
           mask = (F.np.ones((10,)) < valid_length).astype(np.float32)
           mask2 = (F.np.ones((10,)) < valid_length).astype(np.float32)
           mask = mask * F.np.expand_dims(mask2, axis=-1)
           return mask

   foo = Foo()
   foo.hybridize()
   out = foo(mx.np.ones((10,), ctx=mx.gpu()))
   print(out)
   ```
   
   Stack Trace:
   ```
   MXNetError: [02:32:00] src/operator/fusion/fused_op.cu:76: Unknown type enum 
7
   Stack trace:
 [bt] (0) 
/home/ubuntu/mxnet/python/mxnet/../../lib/libmxnet.so(dmlc::LogMessageFatal::~LogMessageFatal()+0x32)
 [0x7f310563bed2]
 [bt] (1) 
/home/ubuntu/mxnet/python/mxnet/../../lib/libmxnet.so(mxnet::FusedOp::CheckShapesAndTypes(std::vector > const&, std::vector > const&, std::vector >*, 
std::vector >*, std::vector 
>*, std::vector >*, int*)+0x17b3) [0x7f310b4743c3]
 [bt] (2) /home/ubuntu/mxnet/python/mxnet/../../lib/libmxnet.so(void 
mxnet::FusedOp::Forward(nnvm::NodeAttrs const&, mxnet::OpContext 
const&, std::vector > const&, 
std::vector > const&, 
std::vector > const&)+0x1a0) 
[0x7f310b47df50]
 [bt] (3) 
/home/ubuntu/mxnet/python/mxnet/../../lib/libmxnet.so(mxnet::imperative::PushFCompute(std::function > const&, std::vector > const&, std::vector > const&)> const&, nnvm::Op const*, 
nnvm::NodeAttrs const&, mxnet::Context const&, std::vector > const&, std::vector > const&, std::vector > const&, std::vector > const&, std::vector > const&, std::vector > const&, std::vector > 
const&)::{lambda(mxnet::RunContext)#1}::operator()(mxnet::RunContext) 
const+0x1423) [0x7f3108a01733]
 [bt] (4) 
/home/ubuntu/mxnet/python/mxnet/../../lib/libmxnet.so(std::_Function_handler > const&, std::vector > const&, std::vector > const&)> const&, nnvm::Op const*, 
nnvm::NodeAttrs const&, mxnet::Context const&, std::vector > const&, std::vector > const&, std::vector > const&, std::vector > const&, std::vector > const&, std::vector > const&, std::vector > 
const&)::{lambda(mxnet::RunContext)#1}>::_M_invoke(std::_Any_data const&, 
mxnet::RunContext&&)+0x17) [0x7f3108a01c17]
 [bt] (5) 
/home/ubuntu/mxnet/python/mxnet/../../lib/libmxnet.so(std::_Function_handler::_M_invoke(std::_Any_data const&, 
mxnet::RunContext&&, mxnet::engine::CallbackOnComplete&&)+0x1bf) 
[0x7f310916266f]
 [bt] (6) 
/home/ubuntu/mxnet/python/mxnet/../../lib/libmxnet.so(mxnet::engine::ThreadedEngine::ExecuteOprBlock(mxnet::RunContext,
 mxnet::engine::OprBlock*)+0x995) [0x7f3109166475]
 [bt] (7) /home/ubuntu/mxnet/python/mxnet/../../lib/libmxnet.so(void 
mxnet::engine::ThreadedEnginePerDevice::GPUWorker<(dmlc::ConcurrentQueueType)0>(mxnet::Context,
 bool, 
mxnet::engine::ThreadedEnginePerDevice::ThreadWorkerBlock<(dmlc::ConcurrentQueueType)0>*,
 std::shared_ptr const&)+0x11d) [0x7f310917ed7d]
 [bt] (8) 
/home/ubuntu/mxnet/python/mxnet/../../lib/libmxnet.so(std::_Function_handler), 
mxnet::engine::ThreadedEnginePerDevice::PushToExecute(mxnet::engine::OprBlock*, 
bool)::{lambda()#4}::operator()() 
const::{lambda(std::shared_ptr)#1}>::_M_invoke(std::_Any_data
 const&, std::shared_ptr&&)+0x4e) [0x7f310917f02e]
   ```
   
   We can also manually disable operator fusion (the FusedOp), which produces the correct answer.
   
   ```python
   import mxnet as mx
   import numpy as np
   import os
   from mxnet.gluon import HybridBlock
   mx.npx.set_np()

   os.environ['MXNET_USE_FUSION'] = '0'

   class Foo(HybridBlock):
       def __init__(self, prefix=None, params=None):
           super(Foo, self).__init__(prefix=prefix, params=params)

       def hybrid_forward(self, F, valid_length):
           mask = (F.np.ones((10,)) < valid_length).astype(np.float32)
           mask2 = (F.np.ones((10,)) < valid_length).astype(np.float32)
           mask = mask * F.np.expand_dims(mask2, axis=-1)
           return mask

   foo = Foo()
   foo.hybridize()
   out = foo(mx.np.ones((10,), ctx=mx.gpu()))
   print(out)
   ```


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] knjwhn commented on issue #16557: Where is the place that mxnet call cblas_gemm if I use openblas?

2019-11-04 Thread GitBox
knjwhn commented on issue #16557: Where is the place that mxnet call cblas_gemm 
if I use openblas?
URL: 
https://github.com/apache/incubator-mxnet/issues/16557#issuecomment-549632939
 
 
   > Hi @knjwhn,
   > 
   > 
https://github.com/apache/incubator-mxnet/blob/60d74bc948869588c2f143fd3d55231859dc979f/src/operator/linalg_impl.h#L149
   
   Thanks, I've done that. Instead of the float32 type, I wrote an int8 OpenBLAS GEMM function (A: u8/s8, B: u8/s8, C: int32). Can it be used in the convolution calculation after quantization? Hope for your help.
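
   For context, a rough sketch (NumPy only; the `int8_gemm` stand-in below is hypothetical and is not MXNet's quantized convolution path) of how a u8/s8 GEMM with int32 accumulation is typically plugged into a quantized convolution via im2col, with a float rescale at the end:
   ```python
   import numpy as np

   def int8_gemm(a_u8, b_s8):
       # stand-in for a u8 x s8 -> s32 GEMM kernel (e.g. a custom OpenBLAS routine)
       return a_u8.astype(np.int32) @ b_s8.astype(np.int32)

   def quantized_conv_1x1(x_u8, w_s8, scale_x, scale_w):
       # x_u8: (N, C, H, W) uint8 activations; w_s8: (K, C) int8 1x1 weights
       n, c, h, w = x_u8.shape
       cols = x_u8.transpose(0, 2, 3, 1).reshape(-1, c)   # im2col is trivial for a 1x1 kernel
       acc = int8_gemm(cols, w_s8.T)                       # int32 accumulators
       y = acc.astype(np.float32) * (scale_x * scale_w)    # dequantize with the combined scale
       return y.reshape(n, h, w, -1).transpose(0, 3, 1, 2)

   x = np.random.randint(0, 255, size=(1, 8, 4, 4), dtype=np.uint8)
   w = np.random.randint(-127, 127, size=(16, 8), dtype=np.int8)
   print(quantized_conv_1x1(x, w, scale_x=0.02, scale_w=0.01).shape)   # (1, 16, 4, 4)
   ```
   Whether this matches MXNet's actual quantized convolution (zero points, per-channel scales, the final requantize step) is a separate question, so treat it only as the general shape of the computation.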


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] xidulu commented on issue #16705: Dropout inconsistency bug

2019-11-04 Thread GitBox
xidulu commented on issue #16705: Dropout inconsistency bug
URL: 
https://github.com/apache/incubator-mxnet/issues/16705#issuecomment-549630402
 
 
   Clearly, dropout in inference mode affects the random state:
   ```
   >>> mx.random.seed(123)
   >>> mx.nd.Dropout(x, cudnn_off=True)
   
   [[1. 1. 1.]
[1. 1. 1.]
[1. 1. 1.]]
   
   >>> mx.random.uniform(shape=(2,2),ctx=mx.gpu(0))
   
   [[0.6512425  0.11220306]
[0.86499107 0.68052745]]
   
   >>> mx.random.seed(123)
   >>> mx.random.uniform(shape=(2,2),ctx=mx.gpu(0))
   
   [[0.9423294  0.68506277]
[0.19981462 0.60299706]]
   
   ```
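
   A script form of the same check (a sketch distilled from the session above; exact values are device and seed dependent). On the affected builds the final assertion fails because the inference-mode dropout call advances the GPU random state:
   ```python
   import mxnet as mx
   from numpy.testing import assert_allclose

   x = mx.nd.ones((3, 3), ctx=mx.gpu())

   mx.random.seed(123)
   a = mx.random.uniform(shape=(2, 2), ctx=mx.gpu(0)).asnumpy()

   mx.random.seed(123)
   mx.nd.Dropout(x, cudnn_off=True)   # inference-mode dropout; ideally a no-op for the RNG
   b = mx.random.uniform(shape=(2, 2), ctx=mx.gpu(0)).asnumpy()

   # Fails on affected builds because the dropout call above advanced the GPU random state.
   assert_allclose(a, b)
   ```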


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] anirudh2290 commented on issue #16612: Compilation fails in master Cuda 10.1.105 GCC 7.4 Ubuntu 18.04

2019-11-04 Thread GitBox
anirudh2290 commented on issue #16612: Compilation fails in master Cuda 
10.1.105 GCC 7.4 Ubuntu 18.04
URL: 
https://github.com/apache/incubator-mxnet/issues/16612#issuecomment-549627214
 
 
   I agree, it would be worth opening a PR to dmlc-core. Thanks @DickJC123 !


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] ChaiBapchya commented on issue #6493: Tutorials that need improvement

2019-11-04 Thread GitBox
ChaiBapchya commented on issue #6493: Tutorials that need improvement
URL: 
https://github.com/apache/incubator-mxnet/issues/6493#issuecomment-549626986
 
 
   I'm guessing the more tutorials we have (on varied topics), the better it is for our users. Personally, I'd be interested in knowing all of these.
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] ChaiBapchya opened a new pull request #16722: Remove unused files in Website doc

2019-11-04 Thread GitBox
ChaiBapchya opened a new pull request #16722: Remove unused files in Website doc
URL: https://github.com/apache/incubator-mxnet/pull/16722
 
 
   ## Description ##
   After the revamping of the MXNet website, we no longer need 
   ```
   python/mxnet/ndarray_doc.py
   python/mxnet/symbol_doc.py
   ```
   
   Realized this while closing my previous PR - #14243 
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [ ] Changes are complete (i.e. I finished coding on this PR)
   - [ ] All changes have test coverage:
   - [ ] Code is well-documented: 
   - [ ] To the best of my knowledge, examples are either not affected by this change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - deleted:python/mxnet/ndarray_doc.py
   - deleted:python/mxnet/symbol_doc.py
   
   @aaronmarkham @sojiadeshina Please confirm


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] ChaiBapchya closed pull request #14243: Fix commands to make doc consistent

2019-11-04 Thread GitBox
ChaiBapchya closed pull request #14243: Fix commands to make doc consistent
URL: https://github.com/apache/incubator-mxnet/pull/14243
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] wuxun-zhang commented on issue #16184: Add large tensor nightly tests for MKL-DNN operators

2019-11-04 Thread GitBox
wuxun-zhang commented on issue #16184: Add large tensor nightly tests for 
MKL-DNN operators
URL: https://github.com/apache/incubator-mxnet/pull/16184#issuecomment-549621011
 
 
   @ChaiBapchya @marcoabreu Please take a look again and see if your concerns 
are properly resolved. Thanks.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] larroy edited a comment on issue #16612: Compilation fails in master Cuda 10.1.105 GCC 7.4 Ubuntu 18.04

2019-11-04 Thread GitBox
larroy edited a comment on issue #16612: Compilation fails in master Cuda 
10.1.105 GCC 7.4 Ubuntu 18.04
URL: 
https://github.com/apache/incubator-mxnet/issues/16612#issuecomment-549614983
 
 
   I think it would be user-friendly to avoid obscure compilation errors if we can, so I think it would be best to open a PR.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] larroy commented on issue #16612: Compilation fails in master Cuda 10.1.105 GCC 7.4 Ubuntu 18.04

2019-11-04 Thread GitBox
larroy commented on issue #16612: Compilation fails in master Cuda 10.1.105 GCC 
7.4 Ubuntu 18.04
URL: 
https://github.com/apache/incubator-mxnet/issues/16612#issuecomment-549614983
 
 
   I think it would be user-friendly to avoid obscure compilation errors if we can. 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] DickJC123 commented on issue #16612: Compilation fails in master Cuda 10.1.105 GCC 7.4 Ubuntu 18.04

2019-11-04 Thread GitBox
DickJC123 commented on issue #16612: Compilation fails in master Cuda 10.1.105 
GCC 7.4 Ubuntu 18.04
URL: 
https://github.com/apache/incubator-mxnet/issues/16612#issuecomment-549614211
 
 
   And FYI, if you feel it's worth trying to correct this for MXNet users on the original cuda 10.1, the fix to the problematic line in dmlc-core is:
   ```
     // nvcc fails to compile 'Singleton()->' on the first cuda 10.1 release; fixed with update 1.
     (*Singleton()).RegisterDelete(ptr);
   ```
   Worth a PR?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] aaronmarkham commented on issue #16408: Add MXNet Ops for fast multihead attention

2019-11-04 Thread GitBox
aaronmarkham commented on issue #16408: Add MXNet Ops for fast multihead 
attention
URL: https://github.com/apache/incubator-mxnet/pull/16408#issuecomment-549613083
 
 
   > @aaronmarkham is the website preview functionality still working after the 
website upgrade? I cannot see the preview of this PR: 
https://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-16408/11/index.html
   
   No, a variety of features of the new site didn't work on S3. 
   To preview, you need to follow the directions on the wiki, or use the 
devmenu features I added in this PR: 
https://github.com/apache/incubator-mxnet/pull/16514


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[incubator-mxnet-site] branch asf-site updated: Bump the publish timestamp.

2019-11-04 Thread aaronmarkham
This is an automated email from the ASF dual-hosted git repository.

aaronmarkham pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new 80bc794  Bump the publish timestamp.
80bc794 is described below

commit 80bc7944826dcc1d69f5dd3ec6965d1300fef857
Author: mxnet-ci 
AuthorDate: Tue Nov 5 00:41:16 2019 +

Bump the publish timestamp.
---
 date.txt | 1 +
 1 file changed, 1 insertion(+)

diff --git a/date.txt b/date.txt
new file mode 100644
index 000..814ae06
--- /dev/null
+++ b/date.txt
@@ -0,0 +1 @@
+Tue Nov  5 00:41:16 UTC 2019



[GitHub] [incubator-mxnet] DickJC123 commented on issue #16685: Memory planner doesn't respect 'output independence'. More optimizations possible.

2019-11-04 Thread GitBox
DickJC123 commented on issue #16685: Memory planner doesn't respect 'output 
independence'.  More optimizations possible.
URL: 
https://github.com/apache/incubator-mxnet/issues/16685#issuecomment-549608267
 
 
   I have not begun to work on this, and my plate is fairly full, so someone 
else can jump in if they want. The issue can be fixed narrowly to match the 
failing case I posted, but I was hoping for someone to also understand/fix the 
'endemic' issue @samskalicky mentions in #16131.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] DickJC123 removed a comment on issue #16131: Fix for duplicate subgraph inputs/outputs

2019-11-04 Thread GitBox
DickJC123 removed a comment on issue #16131: Fix for duplicate subgraph 
inputs/outputs
URL: https://github.com/apache/incubator-mxnet/pull/16131#issuecomment-549511847
 
 
   I have not begun to work on this, and my plate is fairly full, so someone 
else can jump in if they want.  The issue can be fixed narrowly to match the 
failing case I posted, but I was hoping for someone to also understand/fix the 
'endemic' issue @samskalicky mentions in 
https://github.com/apache/incubator-mxnet/pull/16131.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #16699: Mixed data type binary ops

2019-11-04 Thread GitBox
haojin2 commented on a change in pull request #16699: Mixed data type binary ops
URL: https://github.com/apache/incubator-mxnet/pull/16699#discussion_r342324211
 
 

 ##
 File path: src/operator/mshadow_op.h
 ##
 @@ -194,6 +194,100 @@ MXNET_BINARY_MATH_OP_NC(right, b);
 
 MXNET_BINARY_MATH_OP_NC(mul, a * b);
 
+#ifndef _WIN32
 
 Review comment:
   It was due to the `C1002: out of heap space` error we've encountered many times.
   We're not generating more kernels (code) on Windows, to avoid hitting that error on Windows machines.
   If you still think Windows is excluded, I would only think you've not given the code changes a complete look:
   1. There are also parts where we have `#ifdef _WIN32`, such as: https://github.com/apache/incubator-mxnet/pull/16699/files#diff-c383124e9cb87f51ac456a96b799615aR73
   2. We also have parts with `#else` blocks, such as: https://github.com/apache/incubator-mxnet/pull/16699/files#diff-c383124e9cb87f51ac456a96b799615aR73
   
   This is indeed a workaround for an issue that we could not solve on our own. I've also tried upgrading the VS compiler locally, and it does not get this issue out of our way; that's why we have different implementations for the same feature. Otherwise we would have to drop this new feature for Windows users.
   It's good to be eager to learn, but IMHO blocking a PR without a complete look and a very solid reason is not a good (nor polite) way of demonstrating your eagerness.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #16699: Mixed data type binary ops

2019-11-04 Thread GitBox
haojin2 commented on a change in pull request #16699: Mixed data type binary ops
URL: https://github.com/apache/incubator-mxnet/pull/16699#discussion_r342324211
 
 

 ##
 File path: src/operator/mshadow_op.h
 ##
 @@ -194,6 +194,100 @@ MXNET_BINARY_MATH_OP_NC(right, b);
 
 MXNET_BINARY_MATH_OP_NC(mul, a * b);
 
+#ifndef _WIN32
 
 Review comment:
   It was due to the `C1002: out of heap space` error we've encountered many times.
   We're not generating more kernels (code) on Windows, to avoid hitting that error on Windows machines.
   If you still think Windows is excluded, I would only think you've not given the code changes a complete look:
   1. There are also parts where we have `#ifdef _WIN32`, such as: https://github.com/apache/incubator-mxnet/pull/16699/files#diff-c383124e9cb87f51ac456a96b799615aR73
   2. We also have parts with `#else` blocks, such as: https://github.com/apache/incubator-mxnet/pull/16699/files#diff-c383124e9cb87f51ac456a96b799615aR73
   This is indeed a workaround for an issue that we could not solve on our own. I've also tried upgrading the VS compiler locally, and it does not get this issue out of our way; that's why we have different implementations for the same feature. Otherwise we would have to drop this new feature for Windows users.
   It's good to be eager to learn, but IMHO blocking a PR without a complete look and a very solid reason is not a good (nor polite) way of demonstrating your eagerness.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] ChaiBapchya commented on issue #11535: installed mxnet-cu92 on ubuntu but can't run example code correctly

2019-11-04 Thread GitBox
ChaiBapchya commented on issue #11535: installed mxnet-cu92 on ubuntu but can't 
run example code correctly
URL: 
https://github.com/apache/incubator-mxnet/issues/11535#issuecomment-549593778
 
 
   @zhuotest you can confirm.
   But @rohun-tripathi, does this help: https://www.nvidia.com/drivers/beta ?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] marcoabreu commented on a change in pull request #16699: Mixed data type binary ops

2019-11-04 Thread GitBox
marcoabreu commented on a change in pull request #16699: Mixed data type binary 
ops
URL: https://github.com/apache/incubator-mxnet/pull/16699#discussion_r342319668
 
 

 ##
 File path: src/operator/mshadow_op.h
 ##
 @@ -194,6 +194,100 @@ MXNET_BINARY_MATH_OP_NC(right, b);
 
 MXNET_BINARY_MATH_OP_NC(mul, a * b);
 
+#ifndef _WIN32
 
 Review comment:
   I'm well aware of the unit tests passing on windows, thanks for the helpful 
hint.
   
   Still, can you elaborate on which part exactly is not supported by the Windows compiler? Basically the whole PR excludes Windows, and that seems off. Having an entirely different implementation for a different OS is not something I see regularly, so I'm eager to learn.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] marcoabreu commented on a change in pull request #16699: Mixed data type binary ops

2019-11-04 Thread GitBox
marcoabreu commented on a change in pull request #16699: Mixed data type binary 
ops
URL: https://github.com/apache/incubator-mxnet/pull/16699#discussion_r342319668
 
 

 ##
 File path: src/operator/mshadow_op.h
 ##
 @@ -194,6 +194,100 @@ MXNET_BINARY_MATH_OP_NC(right, b);
 
 MXNET_BINARY_MATH_OP_NC(mul, a * b);
 
+#ifndef _WIN32
 
 Review comment:
   I'm well aware of the unit tests passing on windows, thanks for the helpful 
hint.
   
   Still, can you elaborate on which part exactly is not supported by the Windows compiler? Basically the whole PR excludes Windows, and that seems off.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] larroy commented on issue #16612: Compilation fails in master Cuda 10.1 GCC 7.4 Ubuntu 18.04

2019-11-04 Thread GitBox
larroy commented on issue #16612: Compilation fails in master Cuda 10.1 GCC 7.4 
Ubuntu 18.04
URL: 
https://github.com/apache/incubator-mxnet/issues/16612#issuecomment-549589878
 
 
   I was able to upgrade and the problem went away with the updated CUDA. 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #16699: Mixed data type binary ops

2019-11-04 Thread GitBox
haojin2 commented on a change in pull request #16699: Mixed data type binary ops
URL: https://github.com/apache/incubator-mxnet/pull/16699#discussion_r342316290
 
 

 ##
 File path: src/operator/mshadow_op.h
 ##
 @@ -194,6 +194,100 @@ MXNET_BINARY_MATH_OP_NC(right, b);
 
 MXNET_BINARY_MATH_OP_NC(mul, a * b);
 
+#ifndef _WIN32
 
 Review comment:
   It's supported with a different implementation due to limitations of the Windows VS compiler.
   The fact that the corresponding unit tests do not discriminate against Windows machines, and that they passed both the Windows CPU and GPU checks, means this feature is also supported on Windows.
   Please do make sure you have some grasp of the big picture of a PR before you block one.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] sxjscience commented on issue #16721: GPU is not enabled for mxnet based word embeddings.

2019-11-04 Thread GitBox
sxjscience commented on issue #16721: GPU is not enabled for mxnet based word 
embeddings.
URL: 
https://github.com/apache/incubator-mxnet/issues/16721#issuecomment-549585414
 
 
   @csharma Would you ask questions at https://discuss.mxnet.io/ instead? The issue page is for bug reports.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] sxjscience closed issue #16721: GPU is not enabled for mxnet based word embeddings.

2019-11-04 Thread GitBox
sxjscience closed issue #16721: GPU is not enabled for mxnet based word 
embeddings.
URL: https://github.com/apache/incubator-mxnet/issues/16721
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[incubator-mxnet] branch master updated (5a2fce5 -> b9f3b06)

2019-11-04 Thread marcoabreu
This is an automated email from the ASF dual-hosted git repository.

marcoabreu pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from 5a2fce5  [WIP][New Op] Add deformable conv v2 (#16341)
 add b9f3b06  Updated logos. (#16719)

No new revisions were added by this update.

Summary of changes:
 docs/static_site/src/assets/img/logos.png | Bin 493103 -> 113062 bytes
 1 file changed, 0 insertions(+), 0 deletions(-)



[GitHub] [incubator-mxnet] marcoabreu commented on a change in pull request #16699: Mixed data type binary ops

2019-11-04 Thread GitBox
marcoabreu commented on a change in pull request #16699: Mixed data type binary 
ops
URL: https://github.com/apache/incubator-mxnet/pull/16699#discussion_r342309638
 
 

 ##
 File path: src/operator/mshadow_op.h
 ##
 @@ -194,6 +194,100 @@ MXNET_BINARY_MATH_OP_NC(right, b);
 
 MXNET_BINARY_MATH_OP_NC(mul, a * b);
 
+#ifndef _WIN32
 
 Review comment:
   Could you elaborate why Windows is not supported?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services



[GitHub] [incubator-mxnet] marcoabreu merged pull request #16719: Updated landing page logos (adding Dely)

2019-11-04 Thread GitBox
marcoabreu merged pull request #16719: Updated landing page logos (adding Dely)
URL: https://github.com/apache/incubator-mxnet/pull/16719
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[incubator-mxnet] branch master updated (bb6305d -> 5a2fce5)

2019-11-04 Thread sxjscience
This is an automated email from the ASF dual-hosted git repository.

sxjscience pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from bb6305d  [MKLDNN] support mkldnn gelu (#16710)
 add 5a2fce5  [WIP][New Op] Add deformable conv v2 (#16341)

No new revisions were added by this update.

Summary of changes:
 python/mxnet/gluon/contrib/cnn/conv_layers.py  | 180 ++-
 ...nl.h => modulated_deformable_convolution-inl.h} | 331 -
 ...tion.cc => modulated_deformable_convolution.cc} |  49 +-
 ...tion.cu => modulated_deformable_convolution.cu} |  19 +-
 .../contrib/nn/modulated_deformable_im2col.cuh | 541 +
 .../contrib/nn/modulated_deformable_im2col.h   | 291 +++
 tests/python/gpu/test_gluon_contrib_gpu.py |  27 +
 tests/python/unittest/test_contrib_operator.py |  38 +-
 tests/python/unittest/test_gluon_contrib.py|  30 ++
 9 files changed, 1338 insertions(+), 168 deletions(-)
 copy src/operator/contrib/{deformable_convolution-inl.h => 
modulated_deformable_convolution-inl.h} (54%)
 copy src/operator/contrib/{deformable_convolution.cc => 
modulated_deformable_convolution.cc} (61%)
 copy src/operator/contrib/{deformable_convolution.cu => 
modulated_deformable_convolution.cu} (68%)
 create mode 100644 src/operator/contrib/nn/modulated_deformable_im2col.cuh
 create mode 100644 src/operator/contrib/nn/modulated_deformable_im2col.h



[GitHub] [incubator-mxnet] sxjscience merged pull request #16341: [WIP][New Op] Add deformable conv v2

2019-11-04 Thread GitBox
sxjscience merged pull request #16341: [WIP][New Op] Add deformable conv v2
URL: https://github.com/apache/incubator-mxnet/pull/16341
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] sxjscience commented on a change in pull request #16341: [WIP][New Op] Add deformable conv v2

2019-11-04 Thread GitBox
sxjscience commented on a change in pull request #16341: [WIP][New Op] Add 
deformable conv v2
URL: https://github.com/apache/incubator-mxnet/pull/16341#discussion_r342303707
 
 

 ##
 File path: tests/python/unittest/test_contrib_operator.py
 ##
 @@ -409,6 +409,42 @@ def test_op_mrcnn_mask_target():
     assert_almost_equal(mask_targets.asnumpy(), gt_mask_targets.asnumpy())
     assert_almost_equal(mask_cls.asnumpy(), gt_mask_cls.asnumpy())
 
+@with_seed()
+def test_modulated_deformable_convolution():
+    for num_batch in [1, 2]:
+        for num_channel_data, num_deformable_group in itertools.product([4, 8], [1, 2]):
+            for input_height, input_width in itertools.product([5, 6], [5, 6]):
+                for dilate in [(1, 1), (2, 2)]:
+                    for grad_nodes in [['im_data'], ['offset_data'], ['weight']]:
+                        output_height = input_height
+                        output_width = input_width
+                        im_data = np.random.rand(num_batch, num_channel_data, input_height, input_width)
+                        offset_data = \
+                            np.random.rand(num_batch, num_deformable_group * 3 * 3 * 2, output_height, output_width)\
+                            * 0.8 + 0.1
+                        mask_data = np.random.rand(num_batch, num_deformable_group * 3 * 3, output_height, output_width)
+                        mask_data = 0.5 * (1 + np.tanh(0.5 * mask_data))  # sigmoid
+                        weight = np.random.normal(0, 0.001, (num_channel_data, num_channel_data, 3, 3))
+                        bias = np.zeros(num_channel_data)
+
+                        im_data_var = mx.symbol.Variable(name="im_data")
+                        offset_data_var = mx.symbol.Variable(name="offset_data")
+                        mask_data_var = mx.symbol.Variable(name="mask_data")
+                        weight_var = mx.symbol.Variable(name="weight")
+                        bias_var = mx.symbol.Variable(name="bias")
+                        op = mx.sym.contrib.ModulatedDeformableConvolution(name='test_op', data=im_data_var,
+                                                                           offset=offset_data_var, mask=mask_data_var,
+                                                                           weight=weight_var, bias=bias_var,
+                                                                           num_filter=num_channel_data, pad=dilate,
+                                                                           kernel=(3, 3), stride=(1, 1), dilate=dilate,
+                                                                           num_deformable_group=num_deformable_group)
+                        if grad_nodes[0] == 'offset_data':
+                            # wider tolerance needed for coordinate differential
+                            rtol, atol = 1.0, 1e-2
 
 Review comment:
   I think it's because the gradient for the offset is not very accurate. We 
should use some other ways to test the offset_data.
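
   One option (a generic NumPy sketch, not MXNet's test utilities; the `numeric_grad` helper and the toy function below are made up for illustration) is to compare the analytic gradient against a central-difference estimate and keep the looser tolerance confined to the offset input:
   ```python
   import numpy as np

   def numeric_grad(f, x, eps=1e-3):
       # central-difference estimate of df/dx for a scalar-valued f
       g = np.zeros_like(x)
       it = np.nditer(x, flags=['multi_index'])
       while not it.finished:
           idx = it.multi_index
           orig = x[idx]
           x[idx] = orig + eps
           fp = f(x)
           x[idx] = orig - eps
           fm = f(x)
           x[idx] = orig
           g[idx] = (fp - fm) / (2 * eps)
           it.iternext()
       return g

   # Toy check: d/dx sum(tanh(x)^2) = 2*tanh(x)*(1 - tanh(x)^2); a looser rtol/atol
   # pair would then be reserved for the offset input alone.
   f = lambda x: np.sum(np.tanh(x) ** 2)
   x = np.random.rand(4, 4)
   analytic = 2 * np.tanh(x) * (1 - np.tanh(x) ** 2)
   np.testing.assert_allclose(analytic, numeric_grad(f, x), rtol=1e-2, atol=1e-4)
   ```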


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] sxjscience commented on a change in pull request #16341: [WIP][New Op] Add deformable conv v2

2019-11-04 Thread GitBox
sxjscience commented on a change in pull request #16341: [WIP][New Op] Add 
deformable conv v2
URL: https://github.com/apache/incubator-mxnet/pull/16341#discussion_r342303092
 
 

 ##
 File path: tests/python/unittest/test_contrib_operator.py
 ##
 @@ -409,6 +409,42 @@ def test_op_mrcnn_mask_target():
     assert_almost_equal(mask_targets.asnumpy(), gt_mask_targets.asnumpy())
     assert_almost_equal(mask_cls.asnumpy(), gt_mask_cls.asnumpy())
 
+@with_seed()
+def test_modulated_deformable_convolution():
+    for num_batch in [1, 2]:
+        for num_channel_data, num_deformable_group in itertools.product([4, 8], [1, 2]):
+            for input_height, input_width in itertools.product([5, 6], [5, 6]):
+                for dilate in [(1, 1), (2, 2)]:
+                    for grad_nodes in [['im_data'], ['offset_data'], ['weight']]:
+                        output_height = input_height
+                        output_width = input_width
+                        im_data = np.random.rand(num_batch, num_channel_data, input_height, input_width)
+                        offset_data = \
+                            np.random.rand(num_batch, num_deformable_group * 3 * 3 * 2, output_height, output_width)\
+                            * 0.8 + 0.1
+                        mask_data = np.random.rand(num_batch, num_deformable_group * 3 * 3, output_height, output_width)
+                        mask_data = 0.5 * (1 + np.tanh(0.5 * mask_data))  # sigmoid
+                        weight = np.random.normal(0, 0.001, (num_channel_data, num_channel_data, 3, 3))
+                        bias = np.zeros(num_channel_data)
+
+                        im_data_var = mx.symbol.Variable(name="im_data")
+                        offset_data_var = mx.symbol.Variable(name="offset_data")
+                        mask_data_var = mx.symbol.Variable(name="mask_data")
+                        weight_var = mx.symbol.Variable(name="weight")
+                        bias_var = mx.symbol.Variable(name="bias")
+                        op = mx.sym.contrib.ModulatedDeformableConvolution(name='test_op', data=im_data_var,
+                                                                           offset=offset_data_var, mask=mask_data_var,
+                                                                           weight=weight_var, bias=bias_var,
+                                                                           num_filter=num_channel_data, pad=dilate,
+                                                                           kernel=(3, 3), stride=(1, 1), dilate=dilate,
+                                                                           num_deformable_group=num_deformable_group)
+                        if grad_nodes[0] == 'offset_data':
+                            # wider tolerance needed for coordinate differential
+                            rtol, atol = 1.0, 1e-2
 
 Review comment:
   `rtol = 1.0` looks too large...


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] sxjscience commented on a change in pull request #16341: [WIP][New Op] Add deformable conv v2

2019-11-04 Thread GitBox
sxjscience commented on a change in pull request #16341: [WIP][New Op] Add 
deformable conv v2
URL: https://github.com/apache/incubator-mxnet/pull/16341#discussion_r342302212
 
 

 ##
 File path: src/operator/contrib/nn/modulated_deformable_im2col.cuh
 ##
 @@ -0,0 +1,541 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ *** BEGIN Caffe Copyright Notice and Disclaimer 

+ *
+ * COPYRIGHT
+ *
+ * All contributions by the University of California:
+ * Copyright (c) 2014-2017 The Regents of the University of California 
(Regents)
+ * All rights reserved.
+ *
+ * All other contributions:
+ * Copyright (c) 2014-2017, the respective contributors
+ * All rights reserved.
+ *
+ * Caffe uses a shared copyright model: each contributor holds copyright over
+ * their contributions to Caffe. The project versioning records all such
+ * contribution and copyright details. If a contributor wants to further mark
+ * their specific copyright on a particular contribution, they should indicate
+ * their copyright solely in the commit message of the change when it is
+ * committed.
+ *
+ * LICENSE
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * 1. Redistributions of source code must retain the above copyright notice, 
this
+ * list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" 
AND
+ * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE 
IMPLIED
+ * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
+ * DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE 
FOR
+ * ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL 
DAMAGES
+ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+ * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
+ * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF 
THIS
+ * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ *
+ * CONTRIBUTION AGREEMENT
+ *
+ * By contributing to the BVLC/caffe repository through pull-request, comment,
+ * or otherwise, the contributor releases their content to the
+ * license and copyright terms herein.
+ *
+ * END Caffe Copyright Notice and Disclaimer 

+ *
+ * Copyright (c) 2018 Microsoft
+ * Licensed under The MIT License [see LICENSE for details]
+ * \file modulated_deformable_im2col.cuh
+ * \brief Function definitions of converting an image to
+ * column matrix based on kernel, padding, dilation, and offset.
+ * These functions are mainly used in modulated deformable convolution 
operators.
+ * \ref: https://arxiv.org/abs/1811.11168
+ * \author Yuwen Xiong, Haozhi Qi, Jifeng Dai, Xizhou Zhu, Han Hu
+ */
+
+#ifndef MXNET_OPERATOR_CONTRIB_NN_MODULATED_DEFORMABLE_IM2COL_CUH_
+#define MXNET_OPERATOR_CONTRIB_NN_MODULATED_DEFORMABLE_IM2COL_CUH_
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include "../../mxnet_op.h"
+#include "../../../common/cuda_utils.h"
+
+
+
+namespace mxnet {
+namespace op {
+
+template 
+__device__ DType dmcn_im2col_bilinear(const DType* bottom_data, const int 
data_width,
+  const int height, const int width, DType h, DType w) {
+
+  int h_low = floor(h);
+  int w_low = floor(w);
+  int h_high = h_low + 1;
+  int w_high = w_low + 1;
+
+  DType lh = h - h_low;
+  DType lw = w - w_low;
+  DType hh = 1 - lh, hw = 1 - lw;
+
+  DType v1 = 0;
+  if (h_low >= 0 && w_low >= 0)
+v1 = bottom_data[h_low * data_width + w_low];
+  DType v2 = 0;
+  if (h_low >=0 && w_high <= width - 1)
+v2 = bottom_data[h_low * data_width + w_high];
+  DType v3 = 0;
+  if (h_high <= height - 1 && w_low >= 0)
+v3 = bottom_data[h_high * data_width + 

[GitHub] [incubator-mxnet] sxjscience commented on a change in pull request #16341: [WIP][New Op] Add deformable conv v2

2019-11-04 Thread GitBox
sxjscience commented on a change in pull request #16341: [WIP][New Op] Add 
deformable conv v2
URL: https://github.com/apache/incubator-mxnet/pull/16341#discussion_r342301961
 
 

 ##
 File path: src/operator/contrib/nn/modulated_deformable_im2col.cuh
 ##
 @@ -0,0 +1,541 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ *** BEGIN Caffe Copyright Notice and Disclaimer 

+ *
+ * COPYRIGHT
+ *
+ * All contributions by the University of California:
+ * Copyright (c) 2014-2017 The Regents of the University of California 
(Regents)
+ * All rights reserved.
+ *
+ * All other contributions:
+ * Copyright (c) 2014-2017, the respective contributors
+ * All rights reserved.
+ *
+ * Caffe uses a shared copyright model: each contributor holds copyright over
+ * their contributions to Caffe. The project versioning records all such
+ * contribution and copyright details. If a contributor wants to further mark
+ * their specific copyright on a particular contribution, they should indicate
+ * their copyright solely in the commit message of the change when it is
+ * committed.
+ *
+ * LICENSE
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * 1. Redistributions of source code must retain the above copyright notice, 
this
+ * list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" 
AND
+ * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE 
IMPLIED
+ * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
+ * DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE 
FOR
+ * ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL 
DAMAGES
+ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+ * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
+ * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF 
THIS
+ * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ *
+ * CONTRIBUTION AGREEMENT
+ *
+ * By contributing to the BVLC/caffe repository through pull-request, comment,
+ * or otherwise, the contributor releases their content to the
+ * license and copyright terms herein.
+ *
+ * END Caffe Copyright Notice and Disclaimer 

+ *
+ * Copyright (c) 2018 Microsoft
+ * Licensed under The MIT License [see LICENSE for details]
+ * \file modulated_deformable_im2col.cuh
+ * \brief Function definitions of converting an image to
+ * column matrix based on kernel, padding, dilation, and offset.
+ * These functions are mainly used in modulated deformable convolution 
operators.
+ * \ref: https://arxiv.org/abs/1811.11168
+ * \author Yuwen Xiong, Haozhi Qi, Jifeng Dai, Xizhou Zhu, Han Hu
+ */
+
+#ifndef MXNET_OPERATOR_CONTRIB_NN_MODULATED_DEFORMABLE_IM2COL_CUH_
+#define MXNET_OPERATOR_CONTRIB_NN_MODULATED_DEFORMABLE_IM2COL_CUH_
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include "../../mxnet_op.h"
+#include "../../../common/cuda_utils.h"
+
+
+
+namespace mxnet {
+namespace op {
+
+template 
+__device__ DType dmcn_im2col_bilinear(const DType* bottom_data, const int 
data_width,
+  const int height, const int width, DType h, DType w) {
+
+  int h_low = floor(h);
+  int w_low = floor(w);
+  int h_high = h_low + 1;
+  int w_high = w_low + 1;
+
+  DType lh = h - h_low;
+  DType lw = w - w_low;
+  DType hh = 1 - lh, hw = 1 - lw;
+
+  DType v1 = 0;
+  if (h_low >= 0 && w_low >= 0)
+v1 = bottom_data[h_low * data_width + w_low];
+  DType v2 = 0;
+  if (h_low >=0 && w_high <= width - 1)
+v2 = bottom_data[h_low * data_width + w_high];
+  DType v3 = 0;
+  if (h_high <= height - 1 && w_low >= 0)
+v3 = bottom_data[h_high * data_width + 

[GitHub] [incubator-mxnet] zhreshold commented on issue #16341: [WIP][New Op] Add deformable conv v2

2019-11-04 Thread GitBox
zhreshold commented on issue #16341: [WIP][New Op] Add deformable conv v2
URL: https://github.com/apache/incubator-mxnet/pull/16341#issuecomment-549568041
 
 
   CI passed and training convergence passed. Can you guys help merge it, since future GluonCV models depend on this PR? @eric-haibin-lin @sxjscience 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] DickJC123 commented on issue #16612: Compilation fails in master Cuda 10.1 GCC 7.4 Ubuntu 18.04

2019-11-04 Thread GitBox
DickJC123 commented on issue #16612: Compilation fails in master Cuda 10.1 GCC 
7.4 Ubuntu 18.04
URL: 
https://github.com/apache/incubator-mxnet/issues/16612#issuecomment-549554260
 
 
   Yes, I believe this is a problem present in the original cuda 10.1 release 
(10.1.105), fixed by 10.1 Update 1 (10.1.168).  Are you able to upgrade at 
least to this version, or are we looking for a work-around for 10.1.105?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] csharma commented on issue #16721: GPU is not enabled for mxnet based word embeddings.

2019-11-04 Thread GitBox
csharma commented on issue #16721: GPU is not enabled for mxnet based word 
embeddings.
URL: 
https://github.com/apache/incubator-mxnet/issues/16721#issuecomment-549549560
 
 
   Add the following line to make it run
   from bert_embedding import BertEmbedding
   
   Best,
   Cartik


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] Jerryzcn commented on issue #16708: Training an FPN model using grad_req="add" causes rapid divergence, while manually implemented gradient accumulation works fine

2019-11-04 Thread GitBox
Jerryzcn commented on issue #16708: Training an FPN model using grad_req="add"  
causes rapid divergence, while manually implemented gradient accumulation works 
fine
URL: 
https://github.com/apache/incubator-mxnet/issues/16708#issuecomment-549547579
 
 
   There are also some bugs in grad accumulation.
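   For reference, a generic sketch (not the FPN setup from the issue) of what `grad_req='add'` accumulation looks like in Gluon; the issue reports that this path diverges while manually summed gradients do not:
   ```python
   import mxnet as mx
   from mxnet import autograd, gluon

   net = gluon.nn.Dense(1)
   net.initialize()
   trainer = gluon.Trainer(net.collect_params(), 'sgd', {'learning_rate': 0.1})

   for p in net.collect_params().values():
       p.grad_req = 'add'              # gradients sum across backward() calls

   accum_steps = 4
   for _ in range(accum_steps):
       x = mx.nd.random.uniform(shape=(8, 4))
       with autograd.record():
           loss = net(x).sum()
       loss.backward()

   trainer.step(8 * accum_steps)       # one update for the accumulated gradients
   for p in net.collect_params().values():
       p.zero_grad()                   # reset before the next accumulation window
   ```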


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] sxjscience commented on issue #16701: Hybridize, conditional operator, and loop gradient/trainer bug

2019-11-04 Thread GitBox
sxjscience commented on issue #16701: Hybridize, conditional operator, and loop 
gradient/trainer bug
URL: 
https://github.com/apache/incubator-mxnet/issues/16701#issuecomment-549542897
 
 
   @junrushao1994 @szha @zheng-da 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] zheng-da commented on issue #16603: Significant slowdown in some DGL models

2019-11-04 Thread GitBox
zheng-da commented on issue #16603: Significant slowdown in some DGL models
URL: 
https://github.com/apache/incubator-mxnet/issues/16603#issuecomment-549542817
 
 
   I just tried the experiment again and there is no problem. The command to 
run the experiment:
   ```
   python3 train.py --model DistMult --dataset FB15k --batch_size 1024 --neg_sample_size 256 --hidden_dim 2000 --gamma 500.0 --lr 0.1 --max_step 2000 --gpu 0
   ```
   
   You can use the following commands to install MXNet. The problem is very 
easy to reproduce. You can install the MKLDNN version if you want. It makes no 
difference.
   ```
   pip3 install mxnet-cu100
   ```
   
   ```
   pip3 install --pre mxnet-cu100
   ```


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] sxjscience commented on issue #16721: GPU is not enabled for mxnet based word embeddings.

2019-11-04 Thread GitBox
sxjscience commented on issue #16721: GPU is not enabled for mxnet based word 
embeddings.
URL: 
https://github.com/apache/incubator-mxnet/issues/16721#issuecomment-549541781
 
 
   @csharma There is no `bert_embedding` here in MXNet.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] csharma commented on issue #16721: GPU is not enabled for mxnet based word embeddings.

2019-11-04 Thread GitBox
csharma commented on issue #16721: GPU is not enabled for mxnet based word 
embeddings.
URL: 
https://github.com/apache/incubator-mxnet/issues/16721#issuecomment-549541198
 
 
   Add this line,
   
   from bert_embedding import BertEmbedding
   
   Yes, the laptop I am using has GPU support with 2 cuda cores.
   
   Best,
   Cartik


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] sxjscience commented on issue #16721: GPU is not enabled for mxnet based word embeddings.

2019-11-04 Thread GitBox
sxjscience commented on issue #16721: GPU is not enabled for mxnet based word 
embeddings.
URL: 
https://github.com/apache/incubator-mxnet/issues/16721#issuecomment-549538072
 
 
   @csharma Does the machine you are using have GPU support? Also, the Python code you provided is not runnable.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] sxjscience commented on issue #16705: Dropout inconsistency bug

2019-11-04 Thread GitBox
sxjscience commented on issue #16705: Dropout inconsistency bug
URL: 
https://github.com/apache/incubator-mxnet/issues/16705#issuecomment-549534705
 
 
   @DickJC123 You may see that I've manually set `cudnn_off=True`. Also, I 
think https://github.com/apache/incubator-mxnet/pull/16532 will solve this 
problem.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] sxjscience commented on issue #16705: Dropout inconsistency bug

2019-11-04 Thread GitBox
sxjscience commented on issue #16705: Dropout inconsistency bug
URL: 
https://github.com/apache/incubator-mxnet/issues/16705#issuecomment-549533626
 
 
   @DickJC123 The answer should be different because these two dropouts should 
share the same internal random number generator and the random state will be 
updated accordingly.
   
   For the inconsistency bug mentioned in this issue, it's not exactly related 
to the seeding problem.
   
   For example, consider the following script:
   ```python
   import mxnet as mx
   mx.random.seed(123)
   x = mx.nd.ones((10, 10))
   
   y = mx.nd.Dropout(x, cudnn_off=True)
   with mx.autograd.record():
  y = mx.nd.Dropout(x, cudnn_off=True)
   ```
   The first `y = mx.nd.Dropout(x, cudnn_off=True)` is not surrounded by `autograd`, and should not update the random state. However, in the current implementation (https://github.com/apache/incubator-mxnet/blob/bb6305d11d4383af2022e53ad94d6a1d5d93cb00/src/operator/nn/dropout-inl.h#L495), the `rand()` function will still be called when the node is constructed. Thus, running `y = mx.nd.Dropout(x, cudnn_off=True)` outside the `train` loop will still interfere with the random state.
   
   This means, the following two code snippets will obtain different results:
   - Case 1
   ```python
   import mxnet as mx
   mx.random.seed(123)
   x = mx.nd.ones((3, 3), ctx=mx.gpu())
   
   y = mx.nd.Dropout(x, cudnn_off=True)
   with mx.autograd.record():
  y = mx.nd.Dropout(x, cudnn_off=True)
   print(y)
   ```
   ```
   [[0. 2. 0.]
[0. 0. 2.]
[0. 2. 0.]]
   
   ```
   
   - Case 2
   ```python
   import mxnet as mx
   mx.random.seed(123)
   x = mx.nd.ones((3, 3), ctx=mx.gpu())
   
   with mx.autograd.record():
  y = mx.nd.Dropout(x, cudnn_off=True)
   print(y)
   ```
   ```
   [[0. 0. 2.]
[0. 0. 2.]
[0. 2. 0.]]
   
   ```
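   A minimal sketch of one way to observe this interference directly (the exact values do not matter, only whether the two draws agree):
   ```python
   import mxnet as mx

   mx.random.seed(123)
   ref = mx.nd.random.uniform(shape=(3,)).asnumpy()

   mx.random.seed(123)
   _ = mx.nd.Dropout(mx.nd.ones((3, 3)), cudnn_off=True)   # no autograd.record()
   out = mx.nd.random.uniform(shape=(3,)).asnumpy()

   # if the Dropout call outside autograd consumed the random state,
   # the two uniform draws will differ
   print((ref == out).all())
   ```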


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] csharma opened a new issue #16721: GPU is not enabled for mxnet based word embeddings.

2019-11-04 Thread GitBox
csharma opened a new issue #16721: GPU is not enabled for mxnet based word 
embeddings.
URL: https://github.com/apache/incubator-mxnet/issues/16721
 
 
   Hi,
   
   bert_embedding = BertEmbedding(mx.gpu(0))
   
   causes the following error.
   Exception has occurred: MXNetError
   [15:02:22] C:\Jenkins\workspace\mxnet\mxnet\src\ndarray\ndarray.cc:1295: GPU 
is not enabled
   
   Please help
   best regards,
   Cartik

   ## Environment
   pip install mxnet-cu90
   Run in python code 
   >import mxnet as mx
   bert_embedding = BertEmbedding(mx.gpu(0))
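   As a first check (not from the original report), you can ask MXNet how many GPUs it can see; a "GPU is not enabled" error typically means the installed binary was built without CUDA support:
   ```python
   import mxnet as mx

   num_gpus = mx.context.num_gpus()
   print("GPUs visible to MXNet:", num_gpus)
   ctx = mx.gpu(0) if num_gpus > 0 else mx.cpu()   # fall back to CPU if none
   ```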
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] DickJC123 closed issue #16670: cuDNN RNN dtype_with_fallback_ calc needs update

2019-11-04 Thread GitBox
DickJC123 closed issue #16670: cuDNN RNN dtype_with_fallback_ calc needs update
URL: https://github.com/apache/incubator-mxnet/issues/16670
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] DickJC123 commented on issue #16705: Dropout inconsistency bug

2019-11-04 Thread GitBox
DickJC123 commented on issue #16705: Dropout inconsistency bug
URL: 
https://github.com/apache/incubator-mxnet/issues/16705#issuecomment-549520723
 
 
   What behavior do we expect from a model that has two Dropouts, where no 
seeds have been set explicitly in advance?  Are the dropout patterns identical 
or different?
   
   If the answer is 'different', then I would think that by setting the seeds 
in advance, the two-Dropout model would then have repeatable behavior, but the 
Dropouts would continue to be different.
   
   Also, feel free @sxjscience to chime in on the discussion of PR 
https://github.com/apache/incubator-mxnet/pull/16532.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] stu1130 commented on issue #16670: cuDNN RNN dtype_with_fallback_ calc needs update

2019-11-04 Thread GitBox
stu1130 commented on issue #16670: cuDNN RNN dtype_with_fallback_ calc needs 
update
URL: 
https://github.com/apache/incubator-mxnet/issues/16670#issuecomment-549518502
 
 
   @DickJC123 do you think we can close the issue?
   My thought is that there are two things in the issue: the first one was addressed, but the second one, enabling Tensor Cores by default, is not tackled yet, so I kept it open. But feel free to close it and maybe open a separate issue for enabling Tensor Cores if you want.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] nickguletskii commented on issue #16718: Cleaner API for utilizing all GPUs if available

2019-11-04 Thread GitBox
nickguletskii commented on issue #16718: Cleaner API for utilizing all GPUs if 
available
URL: 
https://github.com/apache/incubator-mxnet/issues/16718#issuecomment-549517741
 
 
   I think it would be better to introduce a separate function called 
`mxnet.all_gpus(): List[mxnet.Context]`, instead of adding a parameter to 
`mxnet.gpu`. This way, the return type of `mxnet.gpu` will remain 
`mxnet.Context`, instead of becoming `Union[mxnet.Context, 
List[mxnet.Context]]`.
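   A hypothetical sketch of the proposed helper (the name and behavior are the suggestion above, not an existing MXNet API):
   ```python
   from typing import List
   import mxnet as mx

   def all_gpus() -> List[mx.Context]:
       """Return one Context per GPU visible to MXNet."""
       return [mx.gpu(i) for i in range(mx.context.num_gpus())]

   # usage: fall back to CPU when no GPU is present
   ctxs = all_gpus() or [mx.cpu()]
   ```
   This keeps `mx.gpu(i)` returning a single `Context` while giving multi-GPU callers an explicit list to pass on.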


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] ddavydenko commented on issue #16666: Disable python logging verbose from C++ implementation

2019-11-04 Thread GitBox
ddavydenko commented on issue #16666: Disable python logging verbose from C++ implementation
URL: https://github.com/apache/incubator-mxnet/issues/16666#issuecomment-549515989
 
 
   @mxnet-label-bot add [Feature request]


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] ddavydenko commented on issue #16666: Disable python logging verbose from C++ implementation

2019-11-04 Thread GitBox
ddavydenko commented on issue #16666: Disable python logging verbose from C++ implementation
URL: https://github.com/apache/incubator-mxnet/issues/16666#issuecomment-549515804
 
 
   @deHsien, this would be a feature request, as currently this is not supported.
   @mxnet-label-bot add ["Feature Request"]


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] ddavydenko commented on issue #16677: What mode does PRELU support?

2019-11-04 Thread GitBox
ddavydenko commented on issue #16677: What mode does PRELU support?
URL: 
https://github.com/apache/incubator-mxnet/issues/16677#issuecomment-549515021
 
 
   @mxnet-label-bot add [Question]


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] DickJC123 commented on issue #16131: Fix for duplicate subgraph inputs/outputs

2019-11-04 Thread GitBox
DickJC123 commented on issue #16131: Fix for duplicate subgraph inputs/outputs
URL: https://github.com/apache/incubator-mxnet/pull/16131#issuecomment-549511847
 
 
   I have not begun to work on this, and my plate is fairly full, so someone 
else can jump in if they want.  The issue can be fixed narrowly to match the 
failing case I posted, but I was hoping for someone to also understand/fix the 
'endemic' issue @samskalicky mentions in 
https://github.com/apache/incubator-mxnet/pull/16131.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[incubator-mxnet-site] branch asf-site updated: Bump the publish timestamp.

2019-11-04 Thread aaronmarkham
This is an automated email from the ASF dual-hosted git repository.

aaronmarkham pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new 264081a  Bump the publish timestamp.
264081a is described below

commit 264081a94a73a0abfe2663d8b3052b9fbccc3abe
Author: mxnet-ci 
AuthorDate: Mon Nov 4 18:38:17 2019 +

Bump the publish timestamp.
---
 date.txt | 1 +
 1 file changed, 1 insertion(+)

diff --git a/date.txt b/date.txt
new file mode 100644
index 000..936c107
--- /dev/null
+++ b/date.txt
@@ -0,0 +1 @@
+Mon Nov  4 18:38:17 UTC 2019



[GitHub] [incubator-mxnet] aaronmarkham commented on issue #16719: Updated landing page logos (adding Dely)

2019-11-04 Thread GitBox
aaronmarkham commented on issue #16719: Updated landing page logos (adding Dely)
URL: https://github.com/apache/incubator-mxnet/pull/16719#issuecomment-549480769
 
 
   Flaky test failure... reported and restarted the test.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] aaronmarkham commented on issue #16238: [Flaky] test_convolution_multiple_streams

2019-11-04 Thread GitBox
aaronmarkham commented on issue #16238: [Flaky] 
test_convolution_multiple_streams
URL: 
https://github.com/apache/incubator-mxnet/issues/16238#issuecomment-549480414
 
 
   Failed here: 
http://jenkins.mxnet-ci.amazon-ml.com/blue/organizations/jenkins/mxnet-validation%2Funix-gpu/detail/PR-16719/1/pipeline
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] reminisce commented on a change in pull request #16716: [Numpy][WIP] Fix collect_params().zero_grad() in gluon numpy interface

2019-11-04 Thread GitBox
reminisce commented on a change in pull request #16716: [Numpy][WIP] Fix 
collect_params().zero_grad() in gluon numpy interface
URL: https://github.com/apache/incubator-mxnet/pull/16716#discussion_r342187118
 
 

 ##
 File path: python/mxnet/gluon/parameter.py
 ##
 @@ -904,7 +904,11 @@ def zero_grad(self):
 return
 
 for arr in arrays.values():
-mx.nd.reset_arrays(*arr, num_arrays=len(arr))
+if is_np_array():
+for ele in arr:
+ele[:] = 0
+else:
+mx.nd.reset_arrays(*arr, num_arrays=len(arr))
 
 Review comment:
   Sounds good to me.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] sxjscience commented on a change in pull request #16716: [Numpy][WIP] Fix collect_params().zero_grad() in gluon numpy interface

2019-11-04 Thread GitBox
sxjscience commented on a change in pull request #16716: [Numpy][WIP] Fix 
collect_params().zero_grad() in gluon numpy interface
URL: https://github.com/apache/incubator-mxnet/pull/16716#discussion_r342182321
 
 

 ##
 File path: python/mxnet/gluon/parameter.py
 ##
 @@ -904,7 +904,11 @@ def zero_grad(self):
 return
 
 for arr in arrays.values():
-mx.nd.reset_arrays(*arr, num_arrays=len(arr))
+if is_np_array():
+for ele in arr:
+ele[:] = 0
 
 Review comment:
   Nice catch! I was not aware of that.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] marcoabreu commented on a change in pull request #16477: added more tests to verify support for large vector

2019-11-04 Thread GitBox
marcoabreu commented on a change in pull request #16477: added more tests to 
verify support for large vector
URL: https://github.com/apache/incubator-mxnet/pull/16477#discussion_r342182341
 
 

 ##
 File path: tests/nightly/test_large_vector.py
 ##
 @@ -708,6 +708,174 @@ def test_full():
 assert a[-1] == 3
 
 
+def test_astype():
+x = create_vector(size=LARGE_X//4)
+x = nd.tile(x, 4)
+y = x.astype('int32')
+assert y.dtype == np.int32
+assert y[-1] == LARGE_X//4-1
+
+
+def test_cast():
+x = create_vector(size=LARGE_X//4)
+x = nd.tile(x, 4)
+y = nd.cast(x, np.int32)
+assert y.dtype == np.int32
+assert y[-1] == LARGE_X//4-1
+
+
+def test_repeat():
+x = create_vector(size=LARGE_X//2)
+y = nd.repeat(x, repeats=2, axis = 0)
+assert y.shape[0] == LARGE_X
+assert y[1] == 0
+assert y[LARGE_X-1] == LARGE_X//2-1
+
+
+def create_input_for_rounding_ops():
+# Creates an vector with values (-LARGE/2  -2, -1, 0, 1, 2,  , 
LARGE/2-1)
+# then divides each element by 2 i.e (-LARGE/4  -1, -0.5, 0, 0.5, 1, 
 , LARGE/4-1)
+inp = nd.arange(-LARGE_X//2, LARGE_X//2, dtype=np.float64)
+inp = inp/2
+return inp
+
+
+def assert_correctness_of_rounding_ops(output, mid, expected_vals):
+# checks verifies 5 values at the middle positions of the input vector
+# i.e mid-2, mid-1, mid, mid+1, mid+2
+output_idx_to_inspect = [mid-2, mid-1, mid, mid+1, mid+2]
+for i in range(len(output_idx_to_inspect)):
+assert output[output_idx_to_inspect[i]] == expected_vals[i]
+
+
+def test_rounding_ops():
+x = create_input_for_rounding_ops()
+
+def test_ceil():
+y = nd.ceil(x)
+# expected ouput for middle 5 values after applying ceil()
+expected_output = [-1, 0, 0, 1, 1]
+assert_correctness_of_rounding_ops(y, LARGE_X//2, expected_output)
+
+def test_fix():
+y = nd.fix(x)
+# expected ouput for middle 5 values after applying fix()
+expected_output = [-1, 0, 0, 0, 1]
+assert_correctness_of_rounding_ops(y, LARGE_X//2, expected_output)
+
+def test_floor():
+y = nd.floor(x)
+# expected ouput for middle 5 values after applying floor()
+expected_output = [-1, -1, 0, 0, 1]
+assert_correctness_of_rounding_ops(y, LARGE_X//2, expected_output)
+
+def test_rint():
 
 Review comment:
   nosetests generally looks for functions starting with "test_", thus this 
function could be mistaken for being a standalone test
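   A minimal, self-contained sketch (hypothetical data, not the code under review) of a naming pattern that avoids the ambiguity:
   ```python
   import numpy as np
   import mxnet.ndarray as nd

   def test_rounding_ops():
       x = nd.array([-1.2, -0.4, 0.0, 0.7, 1.6])

       def check_rint():                 # helper named check_*, not test_*
           y = nd.rint(x)
           assert np.array_equal(y.asnumpy(), np.array([-1., -0., 0., 1., 2.]))

       check_rint()
   ```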


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] sxjscience commented on a change in pull request #16716: [Numpy][WIP] Fix collect_params().zero_grad() in gluon numpy interface

2019-11-04 Thread GitBox
sxjscience commented on a change in pull request #16716: [Numpy][WIP] Fix 
collect_params().zero_grad() in gluon numpy interface
URL: https://github.com/apache/incubator-mxnet/pull/16716#discussion_r342181564
 
 

 ##
 File path: python/mxnet/gluon/parameter.py
 ##
 @@ -904,7 +904,11 @@ def zero_grad(self):
 return
 
 for arr in arrays.values():
-mx.nd.reset_arrays(*arr, num_arrays=len(arr))
+if is_np_array():
+for ele in arr:
+ele[:] = 0
+else:
+mx.nd.reset_arrays(*arr, num_arrays=len(arr))
 
 Review comment:
   I've checked the source code. The new approach should be fine as long as we 
use `cudaMemsetAsync` for implementing `ele[()] = 0`. In fact, 
`reset_arrays.cc` lies in the `contrib` folder and there is no need to add it 
to numpy.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] reminisce commented on a change in pull request #16716: [Numpy][WIP] Fix collect_params().zero_grad() in gluon numpy interface

2019-11-04 Thread GitBox
reminisce commented on a change in pull request #16716: [Numpy][WIP] Fix 
collect_params().zero_grad() in gluon numpy interface
URL: https://github.com/apache/incubator-mxnet/pull/16716#discussion_r342179463
 
 

 ##
 File path: python/mxnet/gluon/parameter.py
 ##
 @@ -904,7 +904,11 @@ def zero_grad(self):
 return
 
 for arr in arrays.values():
-mx.nd.reset_arrays(*arr, num_arrays=len(arr))
+if is_np_array():
+for ele in arr:
+ele[:] = 0
+else:
+mx.nd.reset_arrays(*arr, num_arrays=len(arr))
 
 Review comment:
   Can you add an alias `_npi_reset_arrays` in `reset_arrays.cc`?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] reminisce commented on a change in pull request #16716: [Numpy][WIP] Fix collect_params().zero_grad() in gluon numpy interface

2019-11-04 Thread GitBox
reminisce commented on a change in pull request #16716: [Numpy][WIP] Fix 
collect_params().zero_grad() in gluon numpy interface
URL: https://github.com/apache/incubator-mxnet/pull/16716#discussion_r342178728
 
 

 ##
 File path: python/mxnet/gluon/parameter.py
 ##
 @@ -904,7 +904,11 @@ def zero_grad(self):
 return
 
 for arr in arrays.values():
-mx.nd.reset_arrays(*arr, num_arrays=len(arr))
+if is_np_array():
+for ele in arr:
+ele[:] = 0
 
 Review comment:
   Need to use `ele[()] = 0` here for supporting zero-dim ndarrays as well. 
`slice(None)` is not allowed as an index for those ndarrays.
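   A minimal sketch of the difference, assuming the zero-dim indexing behavior described above:
   ```python
   from mxnet import np as mxnp, npx
   npx.set_np()

   scalar = mxnp.array(3.0)     # zero-dim ndarray, shape ()
   scalar[()] = 0               # empty-tuple index works for zero-dim and regular arrays
   try:
       scalar[:] = 0            # slice(None) indexing is rejected for zero-dim arrays
   except Exception as err:
       print(type(err).__name__, err)
   ```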


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] reminisce commented on a change in pull request #16638: [WIP] [Numpy] Add sampling method for bernoulli

2019-11-04 Thread GitBox
reminisce commented on a change in pull request #16638: [WIP] [Numpy] Add 
sampling method for bernoulli
URL: https://github.com/apache/incubator-mxnet/pull/16638#discussion_r342175246
 
 

 ##
 File path: python/mxnet/symbol/numpy_extension/random.py
 ##
 @@ -0,0 +1,57 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+"""Namespace for operators used in Gluon dispatched by F=symbol."""
+
+from __future__ import absolute_import
+from ...context import current_context
+from .. import _internal as _npi
 
 Review comment:
   change this to
   ```python
   from ..numpy import _internal as _npi
   ```
   Same for `ndarray/numpy_extentions`.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] anirudh2290 commented on issue #16612: Compilation fails in master Cuda 10.1 GCC 7.4 Ubuntu 18.04

2019-11-04 Thread GitBox
anirudh2290 commented on issue #16612: Compilation fails in master Cuda 10.1 
GCC 7.4 Ubuntu 18.04
URL: 
https://github.com/apache/incubator-mxnet/issues/16612#issuecomment-549426440
 
 
   @hubutui Looks like your issue is unrelated. I don't see an issue related to ThreadLocalStore in your log.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[incubator-mxnet-site] branch asf-site updated: Bump the publish timestamp.

2019-11-04 Thread aaronmarkham
This is an automated email from the ASF dual-hosted git repository.

aaronmarkham pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new 1019614  Bump the publish timestamp.
1019614 is described below

commit 10196146512077387071ffc8a7698169e47d7ca1
Author: mxnet-ci 
AuthorDate: Mon Nov 4 12:38:41 2019 +

Bump the publish timestamp.
---
 date.txt | 1 +
 1 file changed, 1 insertion(+)

diff --git a/date.txt b/date.txt
new file mode 100644
index 000..88e710d
--- /dev/null
+++ b/date.txt
@@ -0,0 +1 @@
+Mon Nov  4 12:38:41 UTC 2019



[GitHub] [incubator-mxnet] hubutui commented on issue #16612: Compilation fails in master Cuda 10.1 GCC 7.4 Ubuntu 18.04

2019-11-04 Thread GitBox
hubutui commented on issue #16612: Compilation fails in master Cuda 10.1 GCC 
7.4 Ubuntu 18.04
URL: 
https://github.com/apache/incubator-mxnet/issues/16612#issuecomment-549326491
 
 
   I got a similar issue with ArchLinux, cuda 10.1.243, gcc 8.3.0, opencv 
4.1.2. Here is my build log.
   
   
[mxnet-buildlog.txt](https://github.com/apache/incubator-mxnet/files/3803891/mxnet-buildlog.txt)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] artor1os commented on a change in pull request #16720: [Numpy] Implement numpy operator 'average'

2019-11-04 Thread GitBox
artor1os commented on a change in pull request #16720: [Numpy] Implement numpy 
operator 'average'
URL: https://github.com/apache/incubator-mxnet/pull/16720#discussion_r341986855
 
 

 ##
 File path: python/mxnet/ndarray/numpy/_op.py
 ##
 @@ -3386,6 +3386,95 @@ def argmin(a, axis=None, out=None):
 """
 return _npi.argmin(a, axis=axis, keepdims=False, out=out)
 
 
 Review comment:
   ```suggestion
   
   
   ```


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] artor1os commented on a change in pull request #16720: [Numpy] Implement numpy operator 'average'

2019-11-04 Thread GitBox
artor1os commented on a change in pull request #16720: [Numpy] Implement numpy 
operator 'average'
URL: https://github.com/apache/incubator-mxnet/pull/16720#discussion_r341986579
 
 

 ##
 File path: python/mxnet/numpy/multiarray.py
 ##
 @@ -5320,6 +5320,92 @@ def argmin(a, axis=None, out=None):
 """
 return _mx_nd_np.argmin(a, axis, out)
 
+@set_module('mxnet.numpy')
+def average(a, axis=None, weights=None, returned=False, out=None):
+"""
+Compute the weighted average along the specified axis.
+
+Parameters
+
+a : ndarray
+Array containing data to be averaged.
+axis : None or int or tuple of ints, optional
+Axis or axes along which to average a.
+The default, axis=None, will average over
+all of the elements of the input array.
+If axis is negative it counts from the last to the first axis.
+New in version 1.7.0.
+If axis is a tuple of ints, averaging is
+performed on all of the axes specified in the tuple
+instead of a single axis or all the axes as before.
+weights : ndarray, optional
+An array of weights associated with the values in a, must be the same 
dtype with a.
+Each value in a contributes to the average according to its associated 
weight.
+The weights array can either be 1-D (in which case its length must be
+the size of a along the given axis) or of the same shape as a.
+If weights=None, then all data in a are assumed to have a weight equal 
to one.
+The 1-D calculation is: avg = sum(a * weights) / sum(weights)
+The only constraint on weights is that sum(weights) must not be 0.
+returned : bool, optional
+Default is False.
+If True, the tuple (average, sum_of_weights) is returned,
+otherwise only the average is returned.
+If weights=None, sum_of_weights is equivalent to
+the number of elements over which the average is taken.
+out : ndarray, optional
+If provided, the calculation is done into this array.
+
+Returns
+
+retval, [sum_of_weights] : ndarray
+Return the average along the specified axis.
+When returned is True, return a tuple with the average as the first 
element
+and the sum of the weights as the second element. sum_of_weights is of 
the same type as retval.
+If a is integral, the result dtype will be float32, otherwise it will 
be the same as dtype of a.
+
+Raises
+
+MXNetError
+- When all weights along axis sum to zero.
+- When the length of 1D weights is not the same as the shape of a 
along axis.
+- When given 1D weights, the axis is not specified or is not int.
+- When the shape of weights and a differ, but weights are not 1D.
+
+See also
+
+mean
+
+Notes
+
+This function differs from the original `numpy.average`
+`_ in
+the following way(s):
+
+- Does not guarantee the same behavior with numpy when given float16 dtype 
and overflow happens
+- Does not support complex dtype
+- The dtypes of a and weights must be the same
+- Integral a results in float32 returned dtype, not float64
+
+Examples
+
+>>> data = np.arange(1, 5)
+>>> data
+array([1., 2., 3., 4.])
+>>> np.average(data)
+array(2.5)
+>>> np.average(np.arange(1, 11), weights=np.arange(10, 0, -1))
+array(4.)
+>>> data = np.arange(6).reshape((3,2))
+>>> data
+array([[0., 1.],
+   [2., 3.],
+   [4., 5.]])
+>>> weights = np.array([0.25, 0.75])
+array([0.25, 0.75])
+>>> np.average(data, axis=1, weights=weights)
+array([0.75, 2.75, 4.75])
+"""
+return _mx_nd_np.average(a, axis=axis, weights=weights, returned=returned, 
out=out)
 
 
 Review comment:
   ```suggestion
   
   
   ```


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] artor1os commented on a change in pull request #16720: [Numpy] Implement numpy operator 'average'

2019-11-04 Thread GitBox
artor1os commented on a change in pull request #16720: [Numpy] Implement numpy 
operator 'average'
URL: https://github.com/apache/incubator-mxnet/pull/16720#discussion_r341986777
 
 

 ##
 File path: python/mxnet/ndarray/numpy/_op.py
 ##
 @@ -3386,6 +3386,95 @@ def argmin(a, axis=None, out=None):
 """
 return _npi.argmin(a, axis=axis, keepdims=False, out=out)
 
+@set_module('mxnet.ndarray.numpy')
+def average(a, axis=None, weights=None, returned=False, out=None):
+"""
+Compute the weighted average along the specified axis.
+
+Parameters
+
+a : ndarray
+Array containing data to be averaged.
+axis : None or int or tuple of ints, optional
+Axis or axes along which to average a.
+The default, axis=None, will average over
+all of the elements of the input array.
+If axis is negative it counts from the last to the first axis.
+New in version 1.7.0.
+If axis is a tuple of ints, averaging is
+performed on all of the axes specified in the tuple
+instead of a single axis or all the axes as before.
+weights : ndarray, optional
+An array of weights associated with the values in a, must be the same 
dtype with a.
+Each value in a contributes to the average according to its associated 
weight.
+The weights array can either be 1-D (in which case its length must be
+the size of a along the given axis) or of the same shape as a.
+If weights=None, then all data in a are assumed to have a weight equal 
to one.
+The 1-D calculation is: avg = sum(a * weights) / sum(weights)
+The only constraint on weights is that sum(weights) must not be 0.
+returned : bool, optional
+Default is False.
+If True, the tuple (average, sum_of_weights) is returned,
+otherwise only the average is returned.
+If weights=None, sum_of_weights is equivalent to
+the number of elements over which the average is taken.
+out : ndarray, optional
+If provided, the calculation is done into this array.
+
+Returns
+
+retval, [sum_of_weights] : ndarray
+Return the average along the specified axis.
+When returned is True, return a tuple with the average as the first 
element
+and the sum of the weights as the second element. sum_of_weights is of 
the same type as retval.
+If a is integral, the result dtype will be float32, otherwise it will 
be the same as dtype of a.
+
+Raises
+
+MXNetError
+- When all weights along axis sum to zero.
+- When the length of 1D weights is not the same as the shape of a 
along axis.
+- When given 1D weights, the axis is not specified or is not int.
+- When the shape of weights and a differ, but weights are not 1D.
+
+See also
+
+mean
+
+Notes
+
+This function differs from the original `numpy.average`
+`_ in
+the following way(s):
+
+- Does not guarantee the same behavior with numpy when given float16 dtype 
and overflow happens
+- Does not support complex dtype
+- The dtypes of a and weights must be the same
+- Integral a results in float32 returned dtype, not float64
+
+Examples
+
+>>> data = np.arange(1, 5)
+>>> data
+array([1., 2., 3., 4.])
+>>> np.average(data)
+array(2.5)
+>>> np.average(np.arange(1, 11), weights=np.arange(10, 0, -1))
+array(4.)
+>>> data = np.arange(6).reshape((3,2))
+>>> data
+array([[0., 1.],
+   [2., 3.],
+   [4., 5.]])
+>>> weights = np.array([0.25, 0.75])
+array([0.25, 0.75])
+>>> np.average(data, axis=1, weights=weights)
+array([0.75, 2.75, 4.75])
+"""
+if weights is None:
+return _npi.average(a, axis=axis, weights=None, returned=returned, 
weighted=False, out=out)
+else:
+return _npi.average(a, axis=axis, weights=weights, returned=returned, 
out=out)
 
 
 Review comment:
   ```suggestion
   
   
   ```


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] artor1os commented on a change in pull request #16720: [Numpy] Implement numpy operator 'average'

2019-11-04 Thread GitBox
artor1os commented on a change in pull request #16720: [Numpy] Implement numpy 
operator 'average'
URL: https://github.com/apache/incubator-mxnet/pull/16720#discussion_r341986704
 
 

 ##
 File path: python/mxnet/numpy/multiarray.py
 ##
 @@ -5320,6 +5320,92 @@ def argmin(a, axis=None, out=None):
 """
 return _mx_nd_np.argmin(a, axis, out)
 
 
 Review comment:
   ```suggestion
   
   
   ```


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] artor1os commented on a change in pull request #16720: [Numpy] Implement numpy operator 'average'

2019-11-04 Thread GitBox
artor1os commented on a change in pull request #16720: [Numpy] Implement numpy 
operator 'average'
URL: https://github.com/apache/incubator-mxnet/pull/16720#discussion_r341986397
 
 

 ##
 File path: python/mxnet/symbol/numpy/_symbol.py
 ##
 @@ -3355,6 +3355,94 @@ def argmin(a, axis=None, out=None):
 """
 return _npi.argmin(a, axis=axis, keepdims=False, out=out)
 
+def average(a, axis=None, weights=None, returned=False, out=None):
+"""
+Compute the weighted average along the specified axis.
+
+Parameters
+
+a : _Symbol
+Array containing data to be averaged.
+axis : None or int or tuple of ints, optional
+Axis or axes along which to average a.
+The default, axis=None, will average over
+all of the elements of the input array.
+If axis is negative it counts from the last to the first axis.
+New in version 1.7.0.
+If axis is a tuple of ints, averaging is
+performed on all of the axes specified in the tuple
+instead of a single axis or all the axes as before.
+weights : _Symbol, optional
+An array of weights associated with the values in a, must be the same 
dtype with a.
+Each value in a contributes to the average according to its associated 
weight.
+The weights array can either be 1-D (in which case its length must be
+the size of a along the given axis) or of the same shape as a.
+If weights=None, then all data in a are assumed to have a weight equal 
to one.
+The 1-D calculation is: avg = sum(a * weights) / sum(weights)
+The only constraint on weights is that sum(weights) must not be 0.
+returned : bool, optional
+Default is False.
+If True, the tuple (average, sum_of_weights) is returned,
+otherwise only the average is returned.
+If weights=None, sum_of_weights is equivalent to
+the number of elements over which the average is taken.
+out : _Symbol, optional
+If provided, the calculation is done into this array.
+
+Returns
+
+retval, [sum_of_weights] : _Symbol
+Return the average along the specified axis.
+When returned is True, return a tuple with the average as the first 
element
+and the sum of the weights as the second element. sum_of_weights is of 
the same type as retval.
+If a is integral, the result dtype will be float32, otherwise it will 
be the same as dtype of a.
+
+Raises
+
+MXNetError
+- When all weights along axis sum to zero.
+- When the length of 1D weights is not the same as the shape of a 
along axis.
+- When given 1D weights, the axis is not specified or is not int.
+- When the shape of weights and a differ, but weights are not 1D.
+
+See also
+
+mean
+
+Notes
+
+This function differs from the original `numpy.average`
+`_ in
+the following way(s):
+
+- Does not guarantee the same behavior with numpy when given float16 dtype 
and overflow happens
+- Does not support complex dtype
+- The dtypes of a and weights must be the same
+- Integral a results in float32 returned dtype, not float64
+
+Examples
+
+>>> data = np.arange(1, 5)
+>>> data
+array([1., 2., 3., 4.])
+>>> np.average(data)
+array(2.5)
+>>> np.average(np.arange(1, 11), weights=np.arange(10, 0, -1))
+array(4.)
+>>> data = np.arange(6).reshape((3,2))
+>>> data
+array([[0., 1.],
+   [2., 3.],
+   [4., 5.]])
+>>> weights = np.array([0.25, 0.75])
+array([0.25, 0.75])
+>>> np.average(data, axis=1, weights=weights)
+array([0.75, 2.75, 4.75])
+"""
+if weights is None:
+return _npi.average(a, axis=axis, weights=None, returned=returned, 
weighted=False,out=out)
+else:
+return _npi.average(a, axis=axis, weights=weights, returned=returned, 
out=out)
 
 
 Review comment:
   ```suggestion
   
   
   ```


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] artor1os commented on a change in pull request #16720: [Numpy] Implement numpy operator 'average'

2019-11-04 Thread GitBox
artor1os commented on a change in pull request #16720: [Numpy] Implement numpy 
operator 'average'
URL: https://github.com/apache/incubator-mxnet/pull/16720#discussion_r341986171
 
 

 ##
 File path: python/mxnet/symbol/numpy/_symbol.py
 ##
 @@ -3355,6 +3355,94 @@ def argmin(a, axis=None, out=None):
 """
 return _npi.argmin(a, axis=axis, keepdims=False, out=out)
 
 
 Review comment:
   ```suggestion
   
   
   ```


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[incubator-mxnet] branch master updated (94aab39 -> bb6305d)

2019-11-04 Thread patriczhao
This is an automated email from the ASF dual-hosted git repository.

patriczhao pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from 94aab39  [Quantization] Enhance gluon quantization API (#16695)
 add bb6305d  [MKLDNN] support mkldnn gelu (#16710)

No new revisions were added by this update.

Summary of changes:
 src/operator/nn/mkldnn/mkldnn_act.cc|  5 -
 src/operator/subgraph/mkldnn/mkldnn_conv_property.h |  3 ++-
 tests/python/mkl/test_subgraph.py   | 17 ++---
 3 files changed, 20 insertions(+), 5 deletions(-)



[GitHub] [incubator-mxnet] pengzhao-intel merged pull request #16710: [MKLDNN] support mkldnn gelu

2019-11-04 Thread GitBox
pengzhao-intel merged pull request #16710: [MKLDNN] support mkldnn gelu
URL: https://github.com/apache/incubator-mxnet/pull/16710
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] fumingxing2015 closed issue #16562: Same model but different time-consuming

2019-11-04 Thread GitBox
fumingxing2015 closed issue #16562: Same model  but different time-consuming
URL: https://github.com/apache/incubator-mxnet/issues/16562
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #16720: [Numpy] Implement numpy operator 'average'

2019-11-04 Thread GitBox
haojin2 commented on a change in pull request #16720: [Numpy] Implement numpy 
operator 'average'
URL: https://github.com/apache/incubator-mxnet/pull/16720#discussion_r341962676
 
 

 ##
 File path: src/operator/numpy/np_broadcast_reduce_op.h
 ##
 @@ -398,6 +399,353 @@ void ReduceAxesComputeWithWorkspaceImpl(const OpContext& 
ctx,
   });
 }
 
+struct NumpyWeightedAverageParam : public 
dmlc::Parameter {
+  dmlc::optional> axis;
+  bool returned;
+  bool weighted;
+
+  DMLC_DECLARE_PARAMETER(NumpyWeightedAverageParam) {
+DMLC_DECLARE_FIELD(axis)
+  .set_default(dmlc::optional<mxnet::Tuple<int>>())
+  .describe("Axis or axes along which an average is performed. The default, 
axis=None, will average "
+"all of the elements of the input array. If axis is negative 
it counts from the "
+"last to the first axis.");
+DMLC_DECLARE_FIELD(returned)
+  .set_default(false)
+  .describe("If True, the tuple (average, sum_of_weights) is returned,"
+"otherwise only the average is returned."
+"If weights=None, sum_of_weights is equivalent to"
+"the number of elements over which the average is taken.");
+DMLC_DECLARE_FIELD(weighted)
+  .set_default(true)
+  .describe("Auxiliary flag to deal with none weights.");
+  }
+};
+
+inline bool NumpyWeightedAverageShape(const nnvm::NodeAttrs& attrs,
+  std::vector<TShape> *in_attrs,
+  std::vector<TShape> *out_attrs) {
+  const NumpyWeightedAverageParam& param = 
nnvm::get<NumpyWeightedAverageParam>(attrs.parsed);
+  CHECK_EQ(in_attrs->size(), (param.weighted ? 2U : 1U));
+  CHECK_EQ(out_attrs->size(), 2U);
+  if (!shape_is_known(in_attrs->at(0))) {
+return false;
+  }
+
+  const TShape& a_shape = (*in_attrs)[0];
+  SHAPE_ASSIGN_CHECK(*out_attrs, 0,
+ NumpyReduceAxesShapeImpl(a_shape, param.axis, false));
+
+  if (param.weighted) {
+const TShape& w_shape = (*in_attrs)[1];
+if (w_shape.ndim() != a_shape.ndim()) {
+  CHECK_EQ(w_shape.ndim(), 1U) << "1D weights expected when shapes of a 
and weights differ.";
+
+  CHECK_EQ(param.axis.has_value(), true) << "Axis must be specified when 
shapes of a and weights differ.";
+
+  mxnet::Tuple<int> axes(param.axis.value());
+
+  CHECK_EQ(axes.ndim(), 1U) << "Axis must be int when shapes of a and 
weights differ.";
+
+  int red_axis = axes[0] < 0 ? axes[0] + a_shape.ndim() : axes[0];
+
+  CHECK_EQ(a_shape[red_axis], w_shape[0]) << "Length of weights not 
compatible with specified "
+ "axis.";
+
+  SHAPE_ASSIGN_CHECK(*out_attrs, 1,
+ NumpyReduceAxesShapeImpl(w_shape, 
dmlc::optional<mxnet::Tuple<int>>(), false));
+} else {
+  for (int i = 0; i < w_shape.ndim(); i++) {
+CHECK_EQ(w_shape[i], a_shape[i]);
+  }
+  SHAPE_ASSIGN_CHECK(*out_attrs, 1,
+ NumpyReduceAxesShapeImpl(w_shape, param.axis, false));
+}
+  } else {
+SHAPE_ASSIGN_CHECK(*out_attrs, 1, TShape(0, -1));
+  }
+
+  return shape_is_known(out_attrs->at(0)) && shape_is_known(out_attrs->at(1));
+}
+
+template
+struct avg_grad_a_kernel {
+  template
+  MSHADOW_XINLINE static void Map(int i,
+  DType* out,
+  const DType* w,
+  const DType* scl,
+  const DType* ograd,
+  const mshadow::Shape<6>& small,
+  const mshadow::Shape<6>& big) {
+// partial a = w / sum(w)
+size_t big_idx = i;
+size_t small_idx = i;
+size_t big_stride = 1;
+size_t small_stride = 1;
+size_t red_axis_idx = 0;
+for (int axis = 5; axis >= 0; --axis) {
+  size_t axis_idx = big_idx % big[axis];
+  small_idx -= axis_idx * big_stride;
+  if (small[axis] != 1) {
+small_idx += axis_idx * small_stride;
+  } else if (onedim && small[axis] != big[axis]) {
+red_axis_idx = axis_idx;
+  }
+  big_idx /= big[axis];
+  big_stride *= big[axis];
+  small_stride *= small[axis];
+}
+if (onedim) {
+  KERNEL_ASSIGN(out[i], req, (ograd[small_idx] * (w[red_axis_idx] / 
*scl)));
+} else {
+  KERNEL_ASSIGN(out[i], req, (ograd[small_idx] * (w[i] / scl[small_idx])));
+}
+  }
+};
+
+template
+struct avg_grad_w_kernel {
+  template
+  MSHADOW_XINLINE static void Map(int i,
+  DType* out,
+  const DType* a,
+  const DType* scl,
+  const DType* sum_of_wa,
+  const DType* ograd,
+  const mshadow::Shape<6>& small,
+  const mshadow::Shape<6>& big) {
+// partial w = (a * sum(w) - sum(a*w)) / (sum(w) * sum(w))
+size_t big_idx = i;
+size_t 
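
For reference, the `partial a` / `partial w` comments in the kernels quoted above are the quotient-rule derivatives of the weighted mean. A short derivation in notation added here (not taken from the PR), written for the elements along the reduced axis:

```latex
\mathrm{avg} = \frac{\sum_j a_j w_j}{\sum_j w_j}, \qquad
\frac{\partial\,\mathrm{avg}}{\partial a_i} = \frac{w_i}{\sum_j w_j}, \qquad
\frac{\partial\,\mathrm{avg}}{\partial w_i}
  = \frac{a_i \sum_j w_j - \sum_j a_j w_j}{\bigl(\sum_j w_j\bigr)^{2}}
```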

[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #16720: [Numpy] Implement numpy operator 'average'

2019-11-04 Thread GitBox
haojin2 commented on a change in pull request #16720: [Numpy] Implement numpy 
operator 'average'
URL: https://github.com/apache/incubator-mxnet/pull/16720#discussion_r341962909
 
 

 ##
 File path: src/operator/numpy/np_broadcast_reduce_op.h
 ##

[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #16720: [Numpy] Implement numpy operator 'average'

2019-11-04 Thread GitBox
haojin2 commented on a change in pull request #16720: [Numpy] Implement numpy 
operator 'average'
URL: https://github.com/apache/incubator-mxnet/pull/16720#discussion_r341961812
 
 

 ##
 File path: src/operator/numpy/np_broadcast_reduce_op.h
 ##

[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #16720: [Numpy] Implement numpy operator 'average'

2019-11-04 Thread GitBox
haojin2 commented on a change in pull request #16720: [Numpy] Implement numpy 
operator 'average'
URL: https://github.com/apache/incubator-mxnet/pull/16720#discussion_r341961427
 
 

 ##
 File path: tests/python/unittest/test_numpy_op.py
 ##
 @@ -597,6 +597,98 @@ def _test_np_exception(func, shape, dim):
 else:
 _test_np_exception(func, shape, dim)
 
+@with_seed()
+@use_np
+def test_np_average():
+class TestAverage(HybridBlock):
+def __init__(self, axis=None, returned=False):
+super(TestAverage, self).__init__()
+# necessary initializations
+self._axis = axis
+self._returned = returned
+ 
+def hybrid_forward(self, F, a, weights):
+return F.np.average(a, weights=weights, axis=self._axis, 
returned=self._returned)
+
+def avg_backward(a, w, avg, axes):
+# avg = sum(a * w) / sum(w)
+if axes is not None and not isinstance(axes, tuple) and axes < 0:
+axes += a.ndim
+if w is None:
+return [_np.ones(shape=a.shape, dtype=a.dtype)/(a.size/avg.size), 
None]
+onedim = a.ndim != w.ndim
+if onedim:
+new_shape = [a.shape[i] if i == axes else 1 for i in range(a.ndim)]
+w = w.reshape(new_shape)
+w = _np.broadcast_to(w, a.shape)
+   
+# partial a = w / sum(w)
+# partial w = (a*sum(w) - sum(a*w)) / (sum(w) * sum(w))
+scl = _np.sum(w, axis=axes, keepdims=True)
+a_grad = _np.divide(w, scl)
+w_grad = _np.divide(a*scl-_np.sum(a*w, axis=axes, keepdims=True), 
scl*scl)
+
+if onedim:
+axis = []
+for i in range(a.ndim):
+if i != axes:
+axis.append(i)
+w_grad = _np.sum(w_grad, axis=tuple(axis))
+return [a_grad, w_grad]
+
+tensor_shapes = [
+((3, 5), (3, 5), None),  # (a_shape, w_shape, axes)
+((4, 5, 6), (4, 5, 6), (0, 2)),
+((3,), (3,), 0),
+((2, 3), (3,), 1),
+((2, 3, 4), (2,), 0),
+((2, 3, 4), (3,), 1),
+((2, 3, 4), (4,), -1),
+((2, 3, 4, 5), (5,), 3)
+]
+
+for hybridize in [True, False]:
+for returned in [True, False]:
+for a_shape, w_shape, axes in tensor_shapes:
+for dtype in ['float32', 'float64']:
+for is_weighted in [True, False]:
+test_average = TestAverage(axes, returned)
+if hybridize:
+test_average.hybridize()
+a = np.random.uniform(-1.0, 1.0, size=a_shape, 
dtype=dtype)
+a.attach_grad()
+w = None
+np_w = None
+if is_weighted:
+w = np.random.uniform(-1.0, 1.0, size=w_shape, 
dtype=dtype)
+w.attach_grad()
+np_w = w.asnumpy()
+np_out = _np.average(a.asnumpy(), axis=axes, 
weights=np_w, returned=returned)
+with mx.autograd.record():
+mx_out = test_average(a, w)
+rtol = 1e-3
+atol = 1e-4
+if returned:
+np_out, np_sum_of_weights = np_out
+mx_out, mx_sum_of_weights = mx_out
+assert_almost_equal(mx_sum_of_weights.asnumpy(), 
np_sum_of_weights, rtol=rtol, atol=atol)
+assert mx_out.shape == np_out.shape
+assert_almost_equal(mx_out.asnumpy(), 
np_out.astype(dtype), rtol=rtol, atol=atol)
+mx_out.backward()
+# Code to get reference backward value
+a_grad, w_grad = avg_backward(a.asnumpy(), np_w, 
np_out, axes)
+assert_almost_equal(a.grad.asnumpy(), a_grad, 
rtol=rtol, atol=atol)
+if is_weighted:
+assert_almost_equal(w.grad.asnumpy(), w_grad, 
rtol=rtol*10, atol=atol*10)
+
+# Test imperative once again
+np_out = _np.average(a.asnumpy(), weights=np_w, 
axis=axes, returned=returned)
+mx_out = np.average(a, weights=w, axis=axes, 
returned=returned)
+if returned:
+np_out, np_sum_of_weights = np_out
+mx_out, mx_sum_of_weights = mx_out
+assert_almost_equal(mx_sum_of_weights.asnumpy(), 
np_sum_of_weights, rtol=rtol, atol=atol)
+assert_almost_equal(mx_out.asnumpy(), 
np_out.astype(dtype), rtol=rtol, atol=atol)
 
 Review comment:
   Same here.
   One more blank line below this line.
   2-line gaps between Python functions are recommended.
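
For reference, the `((2, 3, 4), (3,), 1)`-style cases in `tensor_shapes` above mirror NumPy's rule that 1-D weights are only accepted together with an axis of matching length. A small NumPy sketch (shapes chosen purely for illustration, not taken from the test):

```python
import numpy as _np

a = _np.arange(24, dtype=_np.float64).reshape(2, 3, 4)
w = _np.array([1.0, 2.0, 3.0])                  # 1-D weights of length 3

# Accepted: axis=1 has length 3, matching the weights.
print(_np.average(a, axis=1, weights=w).shape)  # (2, 4)

# Rejected: shapes differ and no axis is given.
try:
    _np.average(a, weights=w)
except TypeError as err:
    print(err)  # "Axis must be specified when shapes of a and weights differ."
```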

[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #16720: [Numpy] Implement numpy operator 'average'

2019-11-04 Thread GitBox
haojin2 commented on a change in pull request #16720: [Numpy] Implement numpy 
operator 'average'
URL: https://github.com/apache/incubator-mxnet/pull/16720#discussion_r341961190
 
 

 ##
 File path: tests/python/unittest/test_numpy_op.py
 ##
 @@ -597,6 +597,98 @@ def _test_np_exception(func, shape, dim):
 else:
 _test_np_exception(func, shape, dim)
 
+@with_seed()
 
 Review comment:
   One more blank line above.
   2-line gaps between Python functions are recommended.
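
For context, this is the PEP 8 convention being referenced; a minimal sketch (hypothetical helper names) showing the two-blank-line spacing between top-level functions:

```python
def average_helper(values):
    """Top-level definitions are separated by two blank lines (PEP 8)."""
    return sum(values) / len(values)


def weighted_average_helper(values, weights):
    total = sum(w * v for w, v in zip(weights, values))
    return total / sum(weights)


print(average_helper([1.0, 2.0, 3.0]))                   # 2.0
print(weighted_average_helper([1.0, 2.0], [3.0, 1.0]))   # 1.25
```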


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #16720: [Numpy] Implement numpy operator 'average'

2019-11-04 Thread GitBox
haojin2 commented on a change in pull request #16720: [Numpy] Implement numpy 
operator 'average'
URL: https://github.com/apache/incubator-mxnet/pull/16720#discussion_r341960828
 
 

 ##
 File path: src/operator/numpy/np_broadcast_reduce_op_value.cc
 ##
 @@ -249,6 +250,76 @@ inline bool IsIntType(const int dtype) {
   dtype == mshadow::kInt64);
 }
 
+inline bool NumpyWeightedAverageType(const nnvm::NodeAttrs& attrs,
+ std::vector<int> *in_attrs,
+ std::vector<int> *out_attrs) {
+  const NumpyWeightedAverageParam &param = 
nnvm::get<NumpyWeightedAverageParam>(attrs.parsed);
+  CHECK_EQ(in_attrs->size(), (param.weighted ? 2U : 1U));
+  CHECK_EQ(out_attrs->size(), 2U);
+
+  TYPE_ASSIGN_CHECK(*in_attrs, 0, out_attrs->at(0));
+  TYPE_ASSIGN_CHECK(*out_attrs, 0, in_attrs->at(0));
+  if (param.weighted) {
+TYPE_ASSIGN_CHECK(*in_attrs, 1, in_attrs->at(0));
+  }
+  TYPE_ASSIGN_CHECK(*out_attrs, 1, in_attrs->at(0));
+
+  return in_attrs->at(0) != -1 && out_attrs->at(0) != -1 &&
+  (!param.weighted || (in_attrs->at(1) != -1)) &&
 
 Review comment:
   Alignment:
   ```c++
  return in_attrs->at(0) != -1 && out_attrs->at(0) != -1 &&
         (!param.weighted || (in_attrs->at(1) != -1)) &&
         out_attrs->at(1) != -1;
   ```


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #16720: [Numpy] Implement numpy operator 'average'

2019-11-04 Thread GitBox
haojin2 commented on a change in pull request #16720: [Numpy] Implement numpy 
operator 'average'
URL: https://github.com/apache/incubator-mxnet/pull/16720#discussion_r341960103
 
 

 ##
 File path: src/operator/numpy/np_broadcast_reduce_op.h
 ##

[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #16720: [Numpy] Implement numpy operator 'average'

2019-11-04 Thread GitBox
haojin2 commented on a change in pull request #16720: [Numpy] Implement numpy 
operator 'average'
URL: https://github.com/apache/incubator-mxnet/pull/16720#discussion_r341959031
 
 

 ##
 File path: python/mxnet/ndarray/numpy/_op.py
 ##
 @@ -38,7 +38,8 @@
'argmin', 'std', 'var', 'indices', 'copysign', 'ravel', 'hanning', 
'hamming', 'blackman', 'flip',
'around', 'hypot', 'rad2deg', 'deg2rad', 'unique', 'lcm', 'tril', 
'identity', 'take',
'ldexp', 'vdot', 'inner', 'outer', 'equal', 'not_equal', 'greater', 
'less', 'greater_equal', 'less_equal',
-   'hsplit', 'rot90', 'einsum', 'true_divide', 'nonzero', 
'shares_memory', 'may_share_memory', 'diff', 'resize']
+   'hsplit', 'rot90', 'einsum', 'true_divide', 'nonzero', 
'shares_memory', 'may_share_memory', 'diff', 'resize',
+   'average']
 
 Review comment:
   Move both the `average` in this list and the definition of the function to 
the position after `mean`.
   Same for all other files.
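
A sketch of the requested ordering, with the surrounding entries abridged and illustrative (not the file's full `__all__`):

```python
# Abridged sketch only: 'average' listed right after 'mean', and the body of
# average() defined right after mean() in each frontend file.
__all__ = ['sum', 'max', 'min', 'argmax', 'argmin',
           'mean', 'average',   # 'average' placed immediately after 'mean'
           'std', 'var', 'indices']
```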


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

