[incubator-mxnet] branch master updated (90091b1 -> 5c136c9)

2019-09-15 Thread patriczhao
This is an automated email from the ASF dual-hosted git repository.

patriczhao pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from 90091b1  [Numpy] Numpy copysign (#15851)
 add 5c136c9  Update dmlc-core (#16149)

No new revisions were added by this update.

Summary of changes:
 3rdparty/dmlc-core                                                    | 2 +-
 amalgamation/amalgamation.py                                          | 3 ++-
 .../macros/src/main/scala/org/apache/mxnet/utils/CToScalaUtils.scala  | 4 ++--
 3 files changed, 5 insertions(+), 4 deletions(-)



[GitHub] [incubator-mxnet] pengzhao-intel merged pull request #16149: update dmlc-core

2019-09-15 Thread GitBox
pengzhao-intel merged pull request #16149: update dmlc-core
URL: https://github.com/apache/incubator-mxnet/pull/16149
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] ZhennanQin commented on issue #16149: update dmlc-core

2019-09-15 Thread GitBox
ZhennanQin commented on issue #16149: update dmlc-core
URL: https://github.com/apache/incubator-mxnet/pull/16149#issuecomment-531649192
 
 
   @szha Finally, this passed CI. Please review and merge. Thanks a lot.




[GitHub] [incubator-mxnet] nacorti commented on issue #15320: Weird C++ Error / Bug when calling asnumpy() or exporting the weight of darknet53 while training

2019-09-15 Thread GitBox
nacorti commented on issue #15320: Weird C++ Error / Bug when calling asnumpy() 
or exporting the weight of darknet53 while training 
URL: 
https://github.com/apache/incubator-mxnet/issues/15320#issuecomment-531645128
 
 
   Same problem here, also in Python 3.
   
   Using a custom dataset on a pretrained yolo3_darknet53_voc model from the gluon 
model_zoo.
   
   The value `b` in the error seems directly proportional to the batch size and 
inversely related to the number of workers, but I don't know what to change to 
make the 3549 value larger.




[GitHub] [incubator-mxnet] tingying2020 edited a comment on issue #16016: [numpy] operator ravel, derive from reshape

2019-09-15 Thread GitBox
tingying2020 edited a comment on issue #16016: [numpy] operator ravel, derive 
from reshape
URL: https://github.com/apache/incubator-mxnet/pull/16016#issuecomment-531645036
 
 
   > @tingying2020 Would you rebase the code? I will merge after that.
   
   Rebased.




[GitHub] [incubator-mxnet] tingying2020 commented on issue #16016: [numpy] operator ravel, derive from reshape

2019-09-15 Thread GitBox
tingying2020 commented on issue #16016: [numpy] operator ravel, derive from 
reshape
URL: https://github.com/apache/incubator-mxnet/pull/16016#issuecomment-531645036
 
 
   > @tingying2020 Would you rebase the code? I will merge after that.
   
   




[GitHub] [incubator-mxnet] tingying2020 closed pull request #16016: [numpy] operator ravel, derive from reshape

2019-09-15 Thread GitBox
tingying2020 closed pull request #16016: [numpy] operator ravel, derive from 
reshape
URL: https://github.com/apache/incubator-mxnet/pull/16016
 
 
   




[GitHub] [incubator-mxnet] tingying2020 opened a new pull request #16016: [numpy] operator ravel, derive from reshape

2019-09-15 Thread GitBox
tingying2020 opened a new pull request #16016: [numpy] operator ravel, derive 
from reshape
URL: https://github.com/apache/incubator-mxnet/pull/16016
 
 
   Numpy operator `ravel`, which is the same as `reshape(x, -1)`.
   
   @haojin2 
   




[GitHub] [incubator-mxnet] tingying2020 commented on a change in pull request #16124: [numpy] [tvm] operator true_divide

2019-09-15 Thread GitBox
tingying2020 commented on a change in pull request #16124: [numpy] [tvm] 
operator true_divide
URL: https://github.com/apache/incubator-mxnet/pull/16124#discussion_r324511648
 
 

 ##
 File path: contrib/tvmop/core/umath.py
 ##
 @@ -0,0 +1,112 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+import tvm
+from .. import defop, AllTypes, AllTypesButHalf
+
+def compute_true_divide(dtype, ndim):
+    A = tvm.placeholder([tvm.var() for _ in range(ndim)], name='A', dtype=dtype)
+    B = tvm.placeholder([tvm.var() for _ in range(ndim)], name='B', dtype=dtype)
+    if dtype in ["float16", "float32", "float64"]:
+        C = tvm.compute([tvm.var() for _ in range(ndim)],
+                        lambda *index: A[index] / B[index], name='C')
+    else:
+        C = tvm.compute([tvm.var() for _ in range(ndim)],
+                        lambda *index: A[index].astype("float64") /
 Review comment:
   Changed to `float32` now.
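   
   For context, a minimal NumPy-only sketch (an editor's illustration, not part of 
the patch) of the promotion behavior this operator mirrors: NumPy's true division 
promotes integer inputs to a float dtype, which is why the integer branch above 
casts before dividing; the review settles on `float32` instead of NumPy's default 
`float64`.
   ```python
   import numpy as np

   a = np.arange(6, dtype=np.int32)
   b = np.full(6, 4, dtype=np.int32)
   # Integer inputs are promoted to float for true division.
   print((a / b).dtype)         # float64 (NumPy's default promotion)
   print(np.true_divide(a, b))  # [0.   0.25 0.5  0.75 1.   1.25]
   ```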




[GitHub] [incubator-mxnet] tingying2020 commented on a change in pull request #16124: [numpy] [tvm] operator true_divide

2019-09-15 Thread GitBox
tingying2020 commented on a change in pull request #16124: [numpy] [tvm] 
operator true_divide
URL: https://github.com/apache/incubator-mxnet/pull/16124#discussion_r324511593
 
 

 ##
 File path: src/operator/numpy/np_true_divide.cc
 ##
 @@ -41,19 +46,97 @@ bool TrueDivideType(const nnvm::NodeAttrs& attrs,
     const int lhs_dtype = in_attrs->at(0);
     const int rhs_dtype = in_attrs->at(1);
     CHECK_EQ(lhs_dtype, rhs_dtype)
-        << "_true_divide currently only supports same dtype for dividend and divisor";
+      << "_true_divide currently only supports same dtype for dividend and divisor";
   }
-  auto is_float = [](const int dtype) {
-    return dtype == mshadow::kFloat32 || dtype == mshadow::kFloat64 || dtype == mshadow::kFloat16;
-  };
-
-  for (const int dtype : *in_attrs) {
-    CHECK(is_float(dtype)) << "_true_divide currently only supports float dtype";
+  if (IsIntType(in_attrs->at(0))) {
+    TYPE_ASSIGN_CHECK(*out_attrs, 0, mshadow::kFloat64);
 
 Review comment:
   Changed to `float32`.




[GitHub] [incubator-mxnet] eric-haibin-lin commented on a change in pull request #16175: [Dataset] add shard API

2019-09-15 Thread GitBox
eric-haibin-lin commented on a change in pull request #16175: [Dataset] add 
shard API
URL: https://github.com/apache/incubator-mxnet/pull/16175#discussion_r324509765
 
 

 ##
 File path: python/mxnet/gluon/data/dataset.py
 ##
 @@ -84,14 +84,13 @@ def shard(self, num_shards, index):
         """
         assert index < num_shards, 'Shard index of out bound: %d out of %d'%(index, num_shards)
         assert num_shards > 0, 'Number of shards must be greater than 0'
+        assert index >= 0, 'Index must be non-negative'
         length = len(self)
-        shard_len = length // num_shards
+        shard_len = (length + num_shards - 1) // num_shards
         # Compute the start index for this partition
         start = shard_len * index
         # Compute the end index for this partition
-        end = start + shard_len
-        if index == num_shards - 1:
-            end = length
+        end = start + shard_len if index < num_shards - 1 else length
 
 Review comment:
   Got it. Thanks a lot for pointing this out! I'll fix it.




[GitHub] [incubator-mxnet] wkcn commented on a change in pull request #16175: [Dataset] add shard API

2019-09-15 Thread GitBox
wkcn commented on a change in pull request #16175: [Dataset] add shard API
URL: https://github.com/apache/incubator-mxnet/pull/16175#discussion_r324503936
 
 

 ##
 File path: python/mxnet/gluon/data/dataset.py
 ##
 @@ -84,14 +84,13 @@ def shard(self, num_shards, index):
         """
         assert index < num_shards, 'Shard index of out bound: %d out of %d'%(index, num_shards)
         assert num_shards > 0, 'Number of shards must be greater than 0'
+        assert index >= 0, 'Index must be non-negative'
         length = len(self)
-        shard_len = length // num_shards
+        shard_len = (length + num_shards - 1) // num_shards
         # Compute the start index for this partition
         start = shard_len * index
         # Compute the end index for this partition
-        end = start + shard_len
-        if index == num_shards - 1:
-            end = length
+        end = start + shard_len if index < num_shards - 1 else length
 
 Review comment:
   When `length=6, num_shards=4`,
   `shard_len = (6 + 4 - 1) // 4 = 2`.
   
   The intervals of the four partitions are `[0, 2), [2, 4), [4, 6), [6, 6)`, so the last partition is empty.
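   
   A quick stand-alone check (editor's sketch in plain Python) of the partition 
bounds produced by the ceil-division scheme quoted above:
   ```python
   length, num_shards = 6, 4
   shard_len = (length + num_shards - 1) // num_shards  # ceil division
   bounds = [(shard_len * i, shard_len * (i + 1) if i < num_shards - 1 else length)
             for i in range(num_shards)]
   print(bounds)  # [(0, 2), (2, 4), (4, 6), (6, 6)] -- the last shard is empty
   ```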




[incubator-mxnet-site] branch asf-site updated: Bump the publish timestamp.

2019-09-15 Thread anirudh2290
This is an automated email from the ASF dual-hosted git repository.

anirudh2290 pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new 8af2e7a  Bump the publish timestamp.
8af2e7a is described below

commit 8af2e7af8523bd1c228405b59e4f91455fb66ef1
Author: mxnet-ci 
AuthorDate: Mon Sep 16 01:37:56 2019 +

Bump the publish timestamp.
---
 date.txt | 1 +
 1 file changed, 1 insertion(+)

diff --git a/date.txt b/date.txt
new file mode 100644
index 000..2f14945
--- /dev/null
+++ b/date.txt
@@ -0,0 +1 @@
+Mon Sep 16 01:37:56 UTC 2019



[GitHub] [incubator-mxnet] ZhennanQin commented on a change in pull request #16131: Fix for duplicate subgraph inputs/outputs

2019-09-15 Thread GitBox
ZhennanQin commented on a change in pull request #16131: Fix for duplicate 
subgraph inputs/outputs
URL: https://github.com/apache/incubator-mxnet/pull/16131#discussion_r324489273
 
 

 ##
 File path: src/operator/subgraph/subgraph_property.h
 ##
 @@ -296,8 +296,20 @@ class SubgraphProperty {
*/
   virtual void ConnectSubgraphOutputs(const nnvm::NodePtr subgraph_node,
                                       std::vector<nnvm::NodeEntry*>* output_entries) const {
+    // Collapse output_entries pointing to same NodeEntry
 
 Review comment:
   You're right. But changing inside ConnectSubgraphOutputs() won't help for 
the backends that override this function, and because of duplicated output 
entries, most backends have already overridden it, so your proposal doesn't 
work for them. Even newly added backends will most likely continue overriding 
ConnectSubgraphOutputs(), because the outputs may be connected in a different 
sequence.
   
   To make things clearer, as you requested, I suggest introducing a new API, 
maybe called `ConnectSubgraphUniqueOutputs`; then the code would look like:
   ```
   virtual void ConnectSubgraphUniqueOutputs(const nnvm::NodePtr subgraph_node,
                                             std::vector<nnvm::NodeEntry*>* output_entries) const {
     size_t idx = 0;
     for (size_t i = 0; i < output_entries->size(); ++i) {
       *output_entries->at(i) = nnvm::NodeEntry{subgraph_node, idx++, 0};
     }
   }
   
   virtual void ConnectSubgraphOutputs(const nnvm::NodePtr subgraph_node,
                                       std::vector<nnvm::NodeEntry*>* output_entries) const {
     std::vector<nnvm::NodeEntry*> unique_output_entries;  // Collect the unique entries; keep the first of each
     std::unordered_map<nnvm::NodeEntry*, std::vector<nnvm::NodeEntry*>> output_entries_map;  // Collect all duplicates
     // Collect the above information
     for (auto entry_ptr : *output_entries) {
       if (*entry_ptr is unique) {  // replace this with nnvm::NodeEntryEqual logic
         unique_output_entries.push_back(entry_ptr);
       } else {
         output_entries_map[first_entry_ptr_of_it].push_back(entry_ptr);
       }
     }
     // Pass unique_output_entries to ConnectSubgraphUniqueOutputs and collect the change
     ConnectSubgraphUniqueOutputs(subgraph_node, &unique_output_entries);
     // Now we know how to apply the change to all duplicates
     for (auto entry_ptr : unique_output_entries) {
       auto duplicated_entries = output_entries_map[entry_ptr];
       for (auto& i : duplicated_entries) {
         *i = *entry_ptr;
       }
     }
   }
   ```
   Then backend developers only need to override ConnectSubgraphUniqueOutputs() to 
avoid handling duplicated outputs.
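   
   A minimal, runnable Python sketch (an editor's addition, not MXNet code) of the 
dedup-and-propagate idea above: rewrite only the first occurrence of each distinct 
output entry, then copy the result to its duplicates. `connect_unique` stands in 
for `ConnectSubgraphUniqueOutputs`.
   ```python
   def connect_outputs(entries, connect_unique):
       first_index = {}  # entry value -> index of its first occurrence
       duplicates = {}   # first-occurrence index -> indices of later duplicates
       for i, e in enumerate(entries):
           if e not in first_index:
               first_index[e] = i
               duplicates[i] = []
           else:
               duplicates[first_index[e]].append(i)
       uniques = sorted(duplicates)  # first-occurrence indices, in order
       new_vals = connect_unique([entries[i] for i in uniques])
       for new_val, i in zip(new_vals, uniques):
           entries[i] = new_val          # rewrite the unique entry
           for d in duplicates[i]:
               entries[d] = new_val      # propagate to its duplicates
       return entries

   # Duplicated outputs collapse onto the same new entry:
   print(connect_outputs(['a', 'b', 'a', 'c'],
                         lambda uniq: ['out%d' % k for k in range(len(uniq))]))
   # ['out0', 'out1', 'out0', 'out2']
   ```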
   




[GitHub] [incubator-mxnet] wkcn commented on issue #16114: improve dataloader signals and messages

2019-09-15 Thread GitBox
wkcn commented on issue #16114: improve dataloader signals and messages
URL: https://github.com/apache/incubator-mxnet/pull/16114#issuecomment-531610885
 
 
   I have a suggestion: the DataLoader should not terminate the program but print a 
warning on timeout, letting users decide whether to terminate it.
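   
   A hedged sketch (editor's illustration, not the actual DataLoader internals) of 
the proposed behavior: keep waiting after a timeout and emit a warning instead of 
raising, so the user decides when to kill the job. `fetch_batch` and the queue 
object are hypothetical stand-ins.
   ```python
   import queue
   import warnings

   def fetch_batch(data_queue, timeout=120):
       """Block on the worker queue; warn instead of terminating on timeout."""
       while True:
           try:
               return data_queue.get(timeout=timeout)
           except queue.Empty:
               warnings.warn('No batch received for %d s; workers may be dead or '
                             'just slow. Interrupt manually to stop.' % timeout)
   ```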




[GitHub] [incubator-mxnet] wkcn commented on a change in pull request #16175: [Dataset] add shard API

2019-09-15 Thread GitBox
wkcn commented on a change in pull request #16175: [Dataset] add shard API
URL: https://github.com/apache/incubator-mxnet/pull/16175#discussion_r324485736
 
 

 ##
 File path: python/mxnet/gluon/data/dataset.py
 ##
 @@ -64,6 +64,37 @@ def filter(self, fn):
         from . import FilterSampler
         return _SampledDataset(self, FilterSampler(fn, self))
 
+    def shard(self, num_shards, index):
+        """Returns a new dataset that includes only 1/num_shards of this dataset.
+
+        For distributed training, be sure to shard before you randomize the dataset
+        (such as shuffle), if you want each worker to reach a unique subset.
+
+        Parameters
+        ----------
+        num_shards : int
+            An integer representing the number of data shards.
+        index : int
+            An integer representing the index of the current shard.
+
+        Returns
+        -------
+        Dataset
+            The result dataset.
+        """
 
 Review comment:
   We also need the assertion `index >= 0`.




[GitHub] [incubator-mxnet] wkcn commented on a change in pull request #16175: [Dataset] add shard API

2019-09-15 Thread GitBox
wkcn commented on a change in pull request #16175: [Dataset] add shard API
URL: https://github.com/apache/incubator-mxnet/pull/16175#discussion_r324486007
 
 

 ##
 File path: python/mxnet/gluon/data/dataset.py
 ##
 @@ -64,6 +64,37 @@ def filter(self, fn):
         from . import FilterSampler
         return _SampledDataset(self, FilterSampler(fn, self))
 
+    def shard(self, num_shards, index):
+        """Returns a new dataset that includes only 1/num_shards of this dataset.
+
+        For distributed training, be sure to shard before you randomize the dataset
+        (such as shuffle), if you want each worker to reach a unique subset.
+
+        Parameters
+        ----------
+        num_shards : int
+            An integer representing the number of data shards.
+        index : int
+            An integer representing the index of the current shard.
+
+        Returns
+        -------
+        Dataset
+            The result dataset.
+        """
+        assert index < num_shards, 'Shard index of out bound: %d out of %d'%(index, num_shards)
+        assert num_shards > 0, 'Number of shards must be greater than 0'
+        length = len(self)
+        shard_len = length // num_shards
+        # Compute the start index for this partition
+        start = shard_len * index
+        # Compute the end index for this partition
+        end = start + shard_len
+        if index == num_shards - 1:
+            end = length
 
 Review comment:
   The sharding is not uniform. If there are 199 samples and 100 partitions, 
there will be `1+99` samples in the last partition, but `1` sample in each of the 
other partitions.
   
   The following implementation will be more uniform.
   ```python
   shard_len = length // num_shards
   rest = length % num_shards
   start = shard_len * index + min(index, rest)
   end = start + shard_len + (index < rest)
   ```
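   
   Putting the two schemes side by side (editor's sketch in plain Python): 
`uniform_bounds` follows the floor-plus-remainder arithmetic suggested above and 
never produces an empty shard, unlike the ceil-division variant.
   ```python
   def uniform_bounds(length, num_shards):
       shard_len, rest = divmod(length, num_shards)
       bounds = []
       for index in range(num_shards):
           # The first `rest` shards each absorb one leftover sample.
           start = shard_len * index + min(index, rest)
           end = start + shard_len + (index < rest)
           bounds.append((start, end))
       return bounds

   print(uniform_bounds(6, 4))    # [(0, 2), (2, 4), (4, 5), (5, 6)]
   sizes = [e - s for s, e in uniform_bounds(199, 100)]
   print(min(sizes), max(sizes))  # 1 2 -- every shard gets 1 or 2 samples
   ```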




[incubator-mxnet-site] branch asf-site updated: Bump the publish timestamp.

2019-09-15 Thread anirudh2290
This is an automated email from the ASF dual-hosted git repository.

anirudh2290 pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new 81b2a9c  Bump the publish timestamp.
81b2a9c is described below

commit 81b2a9c5756bdbb5a7c4e9616d4568c38ebb0bad
Author: mxnet-ci 
AuthorDate: Sun Sep 15 21:14:04 2019 +

Bump the publish timestamp.
---
 date.txt | 1 +
 1 file changed, 1 insertion(+)

diff --git a/date.txt b/date.txt
new file mode 100644
index 000..0911cea
--- /dev/null
+++ b/date.txt
@@ -0,0 +1 @@
+Sun Sep 15 21:14:04 UTC 2019



[incubator-mxnet-site] branch asf-site updated: Bump the publish timestamp.

2019-09-15 Thread anirudh2290
This is an automated email from the ASF dual-hosted git repository.

anirudh2290 pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new 4ee43b4  Bump the publish timestamp.
4ee43b4 is described below

commit 4ee43b4ccc5c653529689b845edc65478bcf6b98
Author: mxnet-ci 
AuthorDate: Sun Sep 15 19:42:33 2019 +

Bump the publish timestamp.
---
 date.txt | 1 +
 1 file changed, 1 insertion(+)

diff --git a/date.txt b/date.txt
new file mode 100644
index 000..0d31704
--- /dev/null
+++ b/date.txt
@@ -0,0 +1 @@
+Sun Sep 15 19:42:33 UTC 2019



[GitHub] [incubator-mxnet] sxjscience edited a comment on issue #16016: [numpy] operator ravel, derive from reshape

2019-09-15 Thread GitBox
sxjscience edited a comment on issue #16016: [numpy] operator ravel, derive 
from reshape
URL: https://github.com/apache/incubator-mxnet/pull/16016#issuecomment-531588040
 
 
   @tingying2020 Would you rebase the code? I will merge after that.




[incubator-mxnet] branch master updated: [Numpy] Numpy copysign (#15851)

2019-09-15 Thread sxjscience
This is an automated email from the ASF dual-hosted git repository.

sxjscience pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new 90091b1  [Numpy] Numpy copysign (#15851)
90091b1 is described below

commit 90091b155d6f53c070e3c406f9edc69f38d02e96
Author: Haozheng Fan 
AuthorDate: Mon Sep 16 02:57:51 2019 +0800

[Numpy] Numpy copysign (#15851)

* add numpy compatible copysign

* fix scalar op registration error

* add test
---
 python/mxnet/ndarray/numpy/_op.py  |  53 -
 python/mxnet/numpy/multiarray.py   |  53 -
 python/mxnet/symbol/numpy/_symbol.py   |  36 -
 src/operator/mshadow_op.h  |  10 +++
 src/operator/numpy/np_elemwise_broadcast_op.cc |  36 +
 src/operator/numpy/np_elemwise_broadcast_op.cu |  21 +
 src/operator/operator_tune.cc  |   5 ++
 tests/python/unittest/test_numpy_op.py | 105 +
 8 files changed, 316 insertions(+), 3 deletions(-)

diff --git a/python/mxnet/ndarray/numpy/_op.py b/python/mxnet/ndarray/numpy/_op.py
index 671345c..b8e4f3f 100644
--- a/python/mxnet/ndarray/numpy/_op.py
+++ b/python/mxnet/ndarray/numpy/_op.py
@@ -33,7 +33,7 @@ __all__ = ['zeros', 'ones', 'full', 'add', 'subtract', 'multiply', 'divide', 'mo
            'rint', 'radians', 'reciprocal', 'square', 'negative', 'fix', 'ceil', 'floor',
            'trunc', 'logical_not', 'arcsinh', 'arccosh', 'arctanh', 'tensordot',
            'linspace', 'expand_dims', 'tile', 'arange', 'split', 'concatenate', 'stack', 'mean',
-           'maximum', 'minimum', 'swapaxes', 'clip', 'argmax', 'std', 'var', 'indices']
+           'maximum', 'minimum', 'swapaxes', 'clip', 'argmax', 'std', 'var', 'indices', 'copysign']
 
 
 @set_module('mxnet.ndarray.numpy')
@@ -2432,3 +2432,54 @@ def indices(dimensions, dtype=_np.int32, ctx=None):
     else:
         raise ValueError("The dimensions must be sequence of ints")
 # pylint: enable=redefined-outer-name
+
+
+@set_module('mxnet.ndarray.numpy')
+def copysign(x1, x2, out=None):
+    r"""copysign(x1, x2, out=None)
+
+    Change the sign of x1 to that of x2, element-wise.
+
+    If `x2` is a scalar, its sign will be copied to all elements of `x1`.
+
+    Parameters
+    ----------
+    x1 : ndarray or scalar
+        Values to change the sign of.
+    x2 : ndarray or scalar
+        The sign of `x2` is copied to `x1`.
+    out : ndarray or None, optional
+        A location into which the result is stored. It must be of the
+        right shape and right type to hold the output. If not provided
+        or `None`, a freshly-allocated array is returned.
+
+    Returns
+    -------
+    out : ndarray or scalar
+        The values of `x1` with the sign of `x2`.
+        This is a scalar if both `x1` and `x2` are scalars.
+
+    Notes
+    -----
+    This function differs from the original `numpy.copysign
+    <https://docs.scipy.org/doc/numpy/reference/generated/numpy.copysign.html>`_ in
+    the following aspects:
+
+    - ``where`` param is not supported.
+
+    Examples
+    --------
+    >>> np.copysign(1.3, -1)
+    -1.3
+    >>> 1/np.copysign(0, 1)
+    inf
+    >>> 1/np.copysign(0, -1)
+    -inf
+
+    >>> a = np.array([-1, 0, 1])
+    >>> np.copysign(a, -1.1)
+    array([-1., -0., -1.])
+    >>> np.copysign(a, np.arange(3)-1)
+    array([-1.,  0.,  1.])
+    """
+    return _ufunc_helper(x1, x2, _npi.copysign, _np.copysign, _npi.copysign_scalar, _npi.rcopysign_scalar, out)
diff --git a/python/mxnet/numpy/multiarray.py b/python/mxnet/numpy/multiarray.py
index 1f8aa92..632cfad 100644
--- a/python/mxnet/numpy/multiarray.py
+++ b/python/mxnet/numpy/multiarray.py
@@ -52,7 +52,7 @@ __all__ = ['ndarray', 'empty', 'array', 'zeros', 'ones', 'full', 'add', 'subtrac
            'degrees', 'log2', 'log1p', 'rint', 'radians', 'reciprocal', 'square', 'negative',
            'fix', 'ceil', 'floor', 'trunc', 'logical_not', 'arcsinh', 'arccosh', 'arctanh',
            'tensordot', 'linspace', 'expand_dims', 'tile', 'arange', 'split', 'concatenate',
-           'stack', 'mean', 'maximum', 'minimum', 'swapaxes', 'clip', 'argmax', 'std', 'var', 'indices']
+           'stack', 'mean', 'maximum', 'minimum', 'swapaxes', 'clip', 'argmax', 'std', 'var', 'indices', 'copysign']
 
 # Return code for dispatching indexing function call
 _NDARRAY_UNSUPPORTED_INDEXING = -1
@@ -3935,3 +3935,54 @@ def indices(dimensions, dtype=_np.int32, ctx=None):
     """
     return _mx_nd_np.indices(dimensions=dimensions, dtype=dtype, ctx=ctx)
 # pylint: enable=redefined-outer-name
+
+
+@set_module('mxnet.numpy')
+def copysign(x1, x2, out=None):
+    r"""copysign(x1, x2, out=None)
+
+    Change the sign of x1 to that of x2, element-wise.
+
+    If `x2` is a scalar, its sign will be copied to all elements of `x1`.
+

[GitHub] [incubator-mxnet] sxjscience merged pull request #15851: [Numpy] Numpy copysign

2019-09-15 Thread GitBox
sxjscience merged pull request #15851: [Numpy] Numpy copysign
URL: https://github.com/apache/incubator-mxnet/pull/15851
 
 
   




[GitHub] [incubator-mxnet] sxjscience commented on a change in pull request #16009: [Numpy] Numpy compatible bitwise_and operator

2019-09-15 Thread GitBox
sxjscience commented on a change in pull request #16009: [Numpy] Numpy 
compatible bitwise_and operator
URL: https://github.com/apache/incubator-mxnet/pull/16009#discussion_r324475447
 
 

 ##
 File path: tests/python/unittest/test_numpy_op.py
 ##
 @@ -1080,6 +1080,53 @@ def hybrid_forward(self, F, a, *args):
         assert same(mx_out.asnumpy(), np_out)
 
 
+@with_seed()
+@use_np
+def test_np_bitwise_and():
+    class TestBitwiseAnd(HybridBlock):
+        def __init__(self):
+            super(TestBitwiseAnd, self).__init__()
+
+        def hybrid_forward(self, F, x1, x2):
+            return F.np.bitwise_and(x1, x2)
+
+    shapes = [
+        ((3, 1), (3, 1)),
+        ((3, 1, 2), (3, 1, 2)),
+        ((1, ), (1, )),
+        ((3, 0), (3, 0)),  # zero-size shape
+        ((0, 1), (0, 1)),  # zero-size shape
+        ((2, 0, 2), (2, 0, 2)),  # zero-size shape
+        ((1, ), (3, )),  # broadcast
+        ((2, 3), (2, 1)),  # broadcast
+        ((1, 3), (2, 3)),  # broadcast
+        ((1, 3), (2, 0, 3)),  # broadcast to zero-size shape
+        ((1, 0, 1), (3, 0, 1)),  # broadcast of zero-size shape
+        ((), ()),  # zero-dim shape
+    ]
+
+    for hybridize in [True, False]:
+        for shape in shapes:
+            x1_shape, x2_shape = shape
+
+            test_bitwise_and = TestBitwiseAnd()
+            if hybridize:
+                test_bitwise_and.hybridize()
+
+            x1 = rand_ndarray(x1_shape, dtype=_np.dtype(int)).as_np_ndarray()
+            x2 = rand_ndarray(x2_shape, dtype=_np.dtype(int)).as_np_ndarray()
 
 Review comment:
   Should we test all the supported dtypes? I think we support uint8, int8, 
int32, and int64.
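   
   A pure-NumPy sketch (an editor's addition) of what covering all four integer 
dtypes could look like; `bitwise_and` keeps the input dtype, so the check is 
straightforward:
   ```python
   import numpy as np

   for dtype in [np.uint8, np.int8, np.int32, np.int64]:
       x1 = np.random.randint(0, 100, size=(2, 3)).astype(dtype)
       x2 = np.random.randint(0, 100, size=(2, 3)).astype(dtype)
       out = np.bitwise_and(x1, x2)
       assert out.dtype == dtype  # the result stays in the input dtype
   ```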




[GitHub] [incubator-mxnet] hzfan commented on a change in pull request #15851: [Numpy] Numpy copysign

2019-09-15 Thread GitBox
hzfan commented on a change in pull request #15851: [Numpy] Numpy copysign
URL: https://github.com/apache/incubator-mxnet/pull/15851#discussion_r324475346
 
 

 ##
 File path: tests/python/unittest/test_numpy_op.py
 ##
 @@ -1853,6 +1853,111 @@ def hybrid_forward(self, F, x):
             assert_almost_equal(mx_ret.asnumpy(), np_ret, atol=1e-5, rtol=1e-4)
 
 
+@with_seed()
+@use_np
+def test_np_copysign():
+    class TestCopysign(HybridBlock):
+        def __init__(self):
+            super(TestCopysign, self).__init__()
+
+        def hybrid_forward(self, F, a1, a2):
+            return F.np.copysign(a1, a2)
+
+    def get_grad(a1, a2):
+        sign = _np.logical_or(_np.logical_and(a1 < 0, a2 < 0),
+                              _np.logical_and(a1 >= 0, a2 >= 0))
+        sign = 2 * sign.astype(int) - 1
+        sign = sign.reshape(-1, *a1.shape)
+        sign = _np.sum(sign, axis=0)
+        return sign, _np.zeros_like(a2)
+
+    def get_grad_left(a1, a2):
+        sign = _np.logical_or(_np.logical_and(a1 < 0, a2 < 0),
+                              _np.logical_and(a1 >= 0, a2 >= 0))
+        sign = 2 * sign.astype(int) - 1
+        sign = sign.reshape(a1.shape)
+        return sign
+
+    def get_grad_right(a1, a2):
+        return _np.zeros_like(a2)
+
+    shapes = [
+        (),
+        (1),
+        (2, 1),
+        (3, 2, 1),
+        (4, 3, 2, 1),
+        (2, 4, 3, 2, 1)
+    ]
+    types = ['float16', 'float32', 'float64', 'int8', 'int32', 'int64']
+    for a1shape in shapes:
+        for a2shape in shapes:
+            for hybridize in [True, False]:
+                for dtype in types:
+                    test_copysign = TestCopysign()
+                    if hybridize:
+                        test_copysign.hybridize()
+                    rtol = 1e-3
+                    atol = 1e-5
+                    a1_np = _np.array(_np.random.uniform(-1.0, 1.0, a1shape), dtype=dtype)
+                    a2_np = _np.array(_np.random.uniform(-1.0, 1.0, a2shape), dtype=dtype)
+                    a1 = np.array(a1_np, dtype=dtype)
+                    a2 = np.array(a2_np, dtype=dtype)
+                    a1.attach_grad()
+                    a2.attach_grad()
+                    expected_np = _np.copysign(a1_np, a2_np)
+                    with mx.autograd.record():
+                        mx_out = test_copysign(a1, a2)
+                    assert mx_out.shape == expected_np.shape
 
 Review comment:
   I agree that we should do it in the future, once more dtypes and casting are 
supported.




[GitHub] [incubator-mxnet] sxjscience commented on issue #16016: [numpy] operator ravel, derive from reshape

2019-09-15 Thread GitBox
sxjscience commented on issue #16016: [numpy] operator ravel, derive from 
reshape
URL: https://github.com/apache/incubator-mxnet/pull/16016#issuecomment-531588040
 
 
   @tingying2020 Would you make a small change in the code to trigger the CI 
again?




[GitHub] [incubator-mxnet] sxjscience commented on a change in pull request #15851: [Numpy] Numpy copysign

2019-09-15 Thread GitBox
sxjscience commented on a change in pull request #15851: [Numpy] Numpy copysign
URL: https://github.com/apache/incubator-mxnet/pull/15851#discussion_r324474947
 
 

 ##
 File path: tests/python/unittest/test_numpy_op.py
 ##
 @@ -1853,6 +1853,111 @@ def hybrid_forward(self, F, x):
             assert_almost_equal(mx_ret.asnumpy(), np_ret, atol=1e-5, rtol=1e-4)
 
 
+@with_seed()
+@use_np
+def test_np_copysign():
+    class TestCopysign(HybridBlock):
+        def __init__(self):
+            super(TestCopysign, self).__init__()
+
+        def hybrid_forward(self, F, a1, a2):
+            return F.np.copysign(a1, a2)
+
+    def get_grad(a1, a2):
+        sign = _np.logical_or(_np.logical_and(a1 < 0, a2 < 0),
+                              _np.logical_and(a1 >= 0, a2 >= 0))
+        sign = 2 * sign.astype(int) - 1
+        sign = sign.reshape(-1, *a1.shape)
+        sign = _np.sum(sign, axis=0)
+        return sign, _np.zeros_like(a2)
+
+    def get_grad_left(a1, a2):
+        sign = _np.logical_or(_np.logical_and(a1 < 0, a2 < 0),
+                              _np.logical_and(a1 >= 0, a2 >= 0))
+        sign = 2 * sign.astype(int) - 1
+        sign = sign.reshape(a1.shape)
+        return sign
+
+    def get_grad_right(a1, a2):
+        return _np.zeros_like(a2)
+
+    shapes = [
+        (),
+        (1),
+        (2, 1),
+        (3, 2, 1),
+        (4, 3, 2, 1),
+        (2, 4, 3, 2, 1)
+    ]
+    types = ['float16', 'float32', 'float64', 'int8', 'int32', 'int64']
+    for a1shape in shapes:
+        for a2shape in shapes:
+            for hybridize in [True, False]:
+                for dtype in types:
+                    test_copysign = TestCopysign()
+                    if hybridize:
+                        test_copysign.hybridize()
+                    rtol = 1e-3
+                    atol = 1e-5
+                    a1_np = _np.array(_np.random.uniform(-1.0, 1.0, a1shape), dtype=dtype)
+                    a2_np = _np.array(_np.random.uniform(-1.0, 1.0, a2shape), dtype=dtype)
+                    a1 = np.array(a1_np, dtype=dtype)
+                    a2 = np.array(a2_np, dtype=dtype)
+                    a1.attach_grad()
+                    a2.attach_grad()
+                    expected_np = _np.copysign(a1_np, a2_np)
+                    with mx.autograd.record():
+                        mx_out = test_copysign(a1, a2)
+                    assert mx_out.shape == expected_np.shape
 
 Review comment:
   Ignore my previous comments. I think we'd better check the return types 
after we support arbitrary dtype combinations in deepnumpy. So it's okay to not 
check the return types now.




[GitHub] [incubator-mxnet] sxjscience commented on a change in pull request #15851: [Numpy] Numpy copysign

2019-09-15 Thread GitBox
sxjscience commented on a change in pull request #15851: [Numpy] Numpy copysign
URL: https://github.com/apache/incubator-mxnet/pull/15851#discussion_r324474508
 
 

 ##
 File path: tests/python/unittest/test_numpy_op.py
 ##
 @@ -1853,6 +1853,111 @@ def hybrid_forward(self, F, x):
             assert_almost_equal(mx_ret.asnumpy(), np_ret, atol=1e-5, rtol=1e-4)
 
 
+@with_seed()
+@use_np
+def test_np_copysign():
+    class TestCopysign(HybridBlock):
+        def __init__(self):
+            super(TestCopysign, self).__init__()
+
+        def hybrid_forward(self, F, a1, a2):
+            return F.np.copysign(a1, a2)
+
+    def get_grad(a1, a2):
+        sign = _np.logical_or(_np.logical_and(a1 < 0, a2 < 0),
+                              _np.logical_and(a1 >= 0, a2 >= 0))
+        sign = 2 * sign.astype(int) - 1
+        sign = sign.reshape(-1, *a1.shape)
+        sign = _np.sum(sign, axis=0)
+        return sign, _np.zeros_like(a2)
+
+    def get_grad_left(a1, a2):
+        sign = _np.logical_or(_np.logical_and(a1 < 0, a2 < 0),
+                              _np.logical_and(a1 >= 0, a2 >= 0))
+        sign = 2 * sign.astype(int) - 1
+        sign = sign.reshape(a1.shape)
+        return sign
+
+    def get_grad_right(a1, a2):
+        return _np.zeros_like(a2)
+
+    shapes = [
+        (),
+        (1),
+        (2, 1),
+        (3, 2, 1),
+        (4, 3, 2, 1),
+        (2, 4, 3, 2, 1)
+    ]
+    types = ['float16', 'float32', 'float64', 'int8', 'int32', 'int64']
+    for a1shape in shapes:
+        for a2shape in shapes:
+            for hybridize in [True, False]:
+                for dtype in types:
+                    test_copysign = TestCopysign()
+                    if hybridize:
+                        test_copysign.hybridize()
+                    rtol = 1e-3
+                    atol = 1e-5
+                    a1_np = _np.array(_np.random.uniform(-1.0, 1.0, a1shape), dtype=dtype)
+                    a2_np = _np.array(_np.random.uniform(-1.0, 1.0, a2shape), dtype=dtype)
+                    a1 = np.array(a1_np, dtype=dtype)
+                    a2 = np.array(a2_np, dtype=dtype)
+                    a1.attach_grad()
+                    a2.attach_grad()
+                    expected_np = _np.copysign(a1_np, a2_np)
+                    with mx.autograd.record():
+                        mx_out = test_copysign(a1, a2)
+                    assert mx_out.shape == expected_np.shape
 
 Review comment:
   Test if mx_out.dtype matches expected_np.dtype. I think we can later add a 
utility to match the shape + dtype.
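   
   A minimal sketch of such a utility (editor's illustration; the name 
`assert_shape_dtype` is hypothetical):
   ```python
   import numpy as np

   def assert_shape_dtype(mx_out, np_out):
       """Check an MXNet output against a NumPy reference for shape and dtype."""
       assert mx_out.shape == np_out.shape, \
           'shape mismatch: %s vs %s' % (mx_out.shape, np_out.shape)
       assert np.dtype(mx_out.dtype) == np_out.dtype, \
           'dtype mismatch: %s vs %s' % (mx_out.dtype, np_out.dtype)
   ```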




[GitHub] [incubator-mxnet] eric-haibin-lin commented on issue #16175: [Dataset] add shard API

2019-09-15 Thread GitBox
eric-haibin-lin commented on issue #16175: [Dataset] add shard API
URL: https://github.com/apache/incubator-mxnet/pull/16175#issuecomment-531585161
 
 
   @davisliang




[GitHub] [incubator-mxnet] eric-haibin-lin opened a new pull request #16175: [Dataset] add shard API

2019-09-15 Thread GitBox
eric-haibin-lin opened a new pull request #16175: [Dataset] add shard API
URL: https://github.com/apache/incubator-mxnet/pull/16175
 
 
   ## Description ##
   Add an API to shard the dataset. 
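   
   A short usage sketch (editor's illustration, assuming the `shard(num_shards, 
index)` signature added in this PR):
   ```python
   import numpy as np
   import mxnet as mx

   dataset = mx.gluon.data.ArrayDataset(np.arange(100))
   # Shard before shuffling so each of the 4 workers sees a disjoint subset.
   for rank in range(4):
       part = dataset.shard(4, rank)
       print(rank, len(part))  # each shard holds 25 samples
   ```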
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [ ] The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to 
the relevant [JIRA issue](https://issues.apache.org/jira/projects/MXNET/issues) 
created (except PRs with tiny changes)
   - [ ] Changes are complete (i.e. I finished coding on this PR)
   - [ ] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [ ] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, 
the source of the dataset, expected performance on the test set, and a reference to 
the original paper if applicable
   - Check the API doc at 
http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [ ] To the best of my knowledge, examples are either not affected by this 
change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [ ] Feature1, tests, (and when applicable, API doc)
   - [ ] Feature2, tests, (and when applicable, API doc)
   
   ## Comments ##
   - If this change is a backward incompatible change, why must this change be 
made.
   - Interesting edge cases to note here
   




[GitHub] [incubator-mxnet] ciyongch commented on a change in pull request #16141: [mkldnn-v1.0] Add MKL-DNN Convolution

2019-09-15 Thread GitBox
ciyongch commented on a change in pull request #16141: [mkldnn-v1.0] Add 
MKL-DNN Convolution
URL: https://github.com/apache/incubator-mxnet/pull/16141#discussion_r324465796
 
 

 ##
 File path: src/operator/nn/mkldnn/mkldnn_convolution.cc
 ##
 @@ -126,13 +127,12 @@ mkldnn::convolution_forward::primitive_desc GetConvFwdImpl(const MKLDNNConvFullP
 
   if (param.conv_param.dilate.ndim() == 0 && bias_md_ptr == nullptr) {
     mkldnn::convolution_forward::desc desc(prop, mkldnn::algorithm::convolution_direct, data_md,
-                                           weight_md, out_md, strides, padding, padding,
-                                           mkldnn::padding_kind::zero);
 
 Review comment:
   Is `padding_kind` removed, or does `zero` become the default option in MKL-DNN 
v1.0?




[GitHub] [incubator-mxnet] ciyongch commented on a change in pull request #16141: [mkldnn-v1.0] Add MKL-DNN Convolution

2019-09-15 Thread GitBox
ciyongch commented on a change in pull request #16141: [mkldnn-v1.0] Add 
MKL-DNN Convolution
URL: https://github.com/apache/incubator-mxnet/pull/16141#discussion_r324465554
 
 

 ##
 File path: src/operator/nn/mkldnn/mkldnn_convolution-inl.h
 ##
 @@ -79,54 +79,63 @@ struct MKLDNNConvFullParam {
   MKLDNNPostEltwiseParam postsum_act_param;
 };
 
-mkldnn::convolution_forward::primitive_desc GetConvFwdImpl(const MKLDNNConvFullParam &param,
-                                                           const bool is_train,
-                                                           const NDArray &data,
-                                                           const NDArray &weights,
-                                                           const NDArray *bias,
-                                                           const NDArray &output);
+std::shared_ptr<mkldnn::convolution_forward::primitive_desc> GetConvFwdImpl(
+    const ConvolutionParam &param, const bool is_train, const NDArray &data, const NDArray &weights,
+    const NDArray *bias, const NDArray &output);
 
 class MKLDNNConvForward {
  public:
-  mkldnn::convolution_forward::primitive_desc fwd_pd;
-
   MKLDNNConvForward(const MKLDNNConvFullParam &param, const bool is_train, const NDArray &data,
-                    const NDArray &weights, const NDArray *bias, const NDArray &output);
+                    const NDArray &weights, const NDArray *bias, const NDArray &output);
 
-  void SetNewMem(const mkldnn::memory &data, const mkldnn::memory &weight,
-                 const mkldnn::memory *bias, const mkldnn::memory &output);
+  const mkldnn::convolution_forward &GetFwd() const { return *fwd_; }
 
-  void SetNewMem(const mkldnn::memory &data, const mkldnn::memory &output) {
-    this->data_->set_data_handle(data.get_data_handle());
-    this->out_->set_data_handle(output.get_data_handle());
-  }
-
-  const mkldnn::convolution_forward &GetFwd() const {
-    return *fwd_;
-  }
+  const mkldnn::convolution_forward::primitive_desc &GetPd() const { return *pd_; }
 
  private:
   std::shared_ptr<mkldnn::convolution_forward> fwd_;
-  std::shared_ptr<mkldnn::memory> data_;
-  std::shared_ptr<mkldnn::memory> weight_;
-  std::shared_ptr<mkldnn::memory> bias_;
-  std::shared_ptr<mkldnn::memory> out_;
+  std::shared_ptr<mkldnn::convolution_forward::primitive_desc> pd_;
 };
 
 typedef ParamOpSign<ConvolutionParam> MKLDNNConvSignature;
 
-MKLDNNConvForward &GetConvFwd(const ConvolutionParam &param,
-                              const bool is_train, const NDArray &data,
-                              const NDArray &weights, const NDArray *bias,
-                              const NDArray &output);
-
 void MKLDNNConvolutionForwardFullFeature(const MKLDNNConvFullParam &param,
                                          const OpContext &ctx,
                                          MKLDNNConvForward *fwd,
                                          const std::vector<NDArray> &in_data,
                                          const std::vector<OpReqType> &req,
                                          const std::vector<NDArray> &out_data);
 
+void MKLDNNConvolutionForward(const nnvm::NodeAttrs &attrs,
+                              const OpContext &ctx,
+                              const std::vector<NDArray> &in_data,
+                              const std::vector<OpReqType> &req,
+                              const std::vector<NDArray> &out_data);
+
+class MKLDNNConvBackward {
+ public:
+  MKLDNNConvBackward(const MKLDNNConvFullParam &param, const NDArray &data, const NDArray &weights,
+                     const NDArray *bias, const NDArray &output);
+
+  const mkldnn::convolution_backward_data &GetBwdData() const { return *bwd_data_; }
+
+  const mkldnn::convolution_backward_weights &GetBwdWeights() const { return *bwd_weight_; }
+
+  const mkldnn::convolution_backward_data::primitive_desc &GetDataPd() const {
+    return *bwd_data_pd_;
+  }
+
+  const mkldnn::convolution_backward_weights::primitive_desc &GetWeightsPd() const {
+    return *bwd_weights_pd_;
+  }
+
+ private:
+  std::shared_ptr<mkldnn::convolution_backward_data::primitive_desc> bwd_data_pd_;
+  std::shared_ptr<mkldnn::convolution_backward_weights::primitive_desc> bwd_weights_pd_;
+  std::shared_ptr<mkldnn::convolution_backward_data> bwd_data_;
+  std::shared_ptr<mkldnn::convolution_backward_weights> bwd_weight_;
+};
+
 
 Review comment:
   Please use `weight` or `weights`  consistently.




[GitHub] [incubator-mxnet] ciyongch commented on a change in pull request #16141: [mkldnn-v1.0] Add MKL-DNN Convolution

2019-09-15 Thread GitBox
ciyongch commented on a change in pull request #16141: [mkldnn-v1.0] Add 
MKL-DNN Convolution
URL: https://github.com/apache/incubator-mxnet/pull/16141#discussion_r324465357
 
 

 ##
 File path: src/operator/nn/mkldnn/mkldnn_base-inl.h
 ##
 @@ -277,6 +277,19 @@ inline static mkldnn::memory::desc GetWeightDesc(const NDArray &arr,
   }
 }
 
+inline static const std::vector<NDArray> GetMKLDNNInputArray(const std::vector<NDArray> &inputs) {
+  std::vector<NDArray> ret;
+  ret.reserve(inputs.size());
+  for (const auto &in : inputs) {
+    if (in.IsView() && in.IsMKLDNNData()) {
+      ret.push_back(in.Reorder2Default());
+    } else {
+      ret.push_back(in);
 
 Review comment:
   Return `inputs` here to avoid overhead?




[GitHub] [incubator-mxnet] ciyongch commented on a change in pull request #16141: [mkldnn-v1.0] Add MKL-DNN Convolution

2019-09-15 Thread GitBox
ciyongch commented on a change in pull request #16141: [mkldnn-v1.0] Add 
MKL-DNN Convolution
URL: https://github.com/apache/incubator-mxnet/pull/16141#discussion_r324465659
 
 

 ##
 File path: src/operator/nn/mkldnn/mkldnn_convolution.cc
 ##
 @@ -57,7 +58,7 @@ mkldnn::convolution_forward::primitive_desc GetConvFwdImpl(const MKLDNNConvFullP
   auto bias_md =
       bias ? (param.mkldnn_param.quantized ? GetMemDesc(*bias, mshadow::kInt32) : GetMemDesc(*bias))
            : mkldnn::memory::desc{
-                 {}, mkldnn::memory::data_type::data_undef, mkldnn::memory::format::any};
+                 {}, mkldnn::memory::data_type::undef, mkldnn::memory::format_tag::any};
 
 Review comment:
   indent




[incubator-mxnet-site] branch asf-site updated: Bump the publish timestamp.

2019-09-15 Thread anirudh2290
This is an automated email from the ASF dual-hosted git repository.

anirudh2290 pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new ddee92e  Bump the publish timestamp.
ddee92e is described below

commit ddee92e267a0f017a1d7088e9173640554ddbc1d
Author: mxnet-ci 
AuthorDate: Sun Sep 15 13:35:32 2019 +

Bump the publish timestamp.
---
 date.txt | 1 +
 1 file changed, 1 insertion(+)

diff --git a/date.txt b/date.txt
new file mode 100644
index 000..9e01d92
--- /dev/null
+++ b/date.txt
@@ -0,0 +1 @@
+Sun Sep 15 13:35:32 UTC 2019



[GitHub] [incubator-mxnet] QueensGambit commented on issue #16173: Saving and loading cudNN autotune and graph optimization

2019-09-15 Thread GitBox
QueensGambit commented on issue #16173: Saving and loading cudNN autotune and 
graph optimization
URL: 
https://github.com/apache/incubator-mxnet/issues/16173#issuecomment-531559033
 
 
   Thank you for the reply @pengzhao-intel. I updated the description on MKLDNN.
   
   I see the point about portability and backward-compatibility issues.
   Maybe it is better to define `optimize` as a string argument which must be 
in `{'on_bind', 'save_reload', 'disabled'}`:
   
   ```python
   def bind(self, ctx, args, args_grad=None, grad_req='write',
            aux_states=None, group2ctx=None, shared_exec=None, optimize='on_bind'):
       """
       # ...
       optimize : str, optional, default 'on_bind'
           must be in {'on_bind', 'save_reload', 'disabled'}
           'on_bind': Graph optimization / cuDNN autotune is executed during model binding
           'save_reload': MXNet attempts to recover previous optimization information.
                          Otherwise MXNet will perform optimization and save it to disk.
           'disabled': No graph optimization / cuDNN autotune is performed
       """
   ```
   
   In the default case `optimize='on_bind'`, it will behave the same way as 
currently, and all previous code will behave the same.
   
   As a different aspect, it might be preferable to treat graph optimization 
(MKLDNN graph optimization / TensorRT graph fusion) as a separate entity 
from cuDNN autotune, because cuDNN autotune might also be performed on 
fused graphs in future versions.
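   
   A hypothetical call site for the proposed argument (editor's sketch; `optimize` 
is not an existing `bind` parameter, and `sym`/`arg_arrays` are placeholders):
   ```python
   # Reuse cached autotune / graph-optimization results when available,
   # otherwise optimize once and persist the result to disk.
   exe = sym.bind(ctx=mx.gpu(0), args=arg_arrays, optimize='save_reload')
   ```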




[GitHub] [incubator-mxnet] pengzhao-intel closed pull request #15741: MKL-DNN LBR-GRU Inference Integration (FP32 LBR-GRU)

2019-09-15 Thread GitBox
pengzhao-intel closed pull request #15741: MKL-DNN LBR-GRU Inference 
Integration (FP32 LBR-GRU)
URL: https://github.com/apache/incubator-mxnet/pull/15741
 
 
   




[GitHub] [incubator-mxnet] pengzhao-intel commented on issue #15741: MKL-DNN LBR-GRU Inference Integration (FP32 LBR-GRU)

2019-09-15 Thread GitBox
pengzhao-intel commented on issue #15741: MKL-DNN LBR-GRU Inference Integration 
(FP32 LBR-GRU)
URL: https://github.com/apache/incubator-mxnet/pull/15741#issuecomment-531556087
 
 
   Closing this PR since we will migrate it to MKL-DNN 1.0.




[GitHub] [incubator-mxnet] pengzhao-intel commented on issue #16141: [mkldnn-v1.0] Add MKL-DNN Convolution

2019-09-15 Thread GitBox
pengzhao-intel commented on issue #16141: [mkldnn-v1.0] Add MKL-DNN Convolution
URL: https://github.com/apache/incubator-mxnet/pull/16141#issuecomment-531555822
 
 
   @TaoLv @ciyongch @ZhennanQin please help to review :)




[GitHub] [incubator-mxnet] hzfan commented on a change in pull request #15851: [Numpy] Numpy copysign

2019-09-15 Thread GitBox
hzfan commented on a change in pull request #15851: [Numpy] Numpy copysign
URL: https://github.com/apache/incubator-mxnet/pull/15851#discussion_r324453236
 
 

 ##
 File path: python/mxnet/ndarray/numpy/_op.py
 ##
 @@ -293,3 +293,46 @@ def power(x1, x2, out=None):
 This is a scalar if both x1 and x2 are scalars.
 """
 return _ufunc_helper(x1, x2, _npi.power, _np.power, _npi.power_scalar, 
_npi.rpower_scalar, out)
+
+
+@set_module('mxnet.ndarray.numpy')
+def copysign(x1, x2, out=None):
 
 Review comment:
   Yes. Notes have been added in docs. See `_op.py`, `multiarray.py` and 
`_symbol.py`. Thank you!




[GitHub] [incubator-mxnet] hzfan commented on a change in pull request #15851: [Numpy] Numpy copysign

2019-09-15 Thread GitBox
hzfan commented on a change in pull request #15851: [Numpy] Numpy copysign
URL: https://github.com/apache/incubator-mxnet/pull/15851#discussion_r324453173
 
 

 ##
 File path: src/operator/numpy/np_elemwise_broadcast_op.cc
 ##
 @@ -182,5 +202,25 @@ MXNET_OPERATOR_REGISTER_NP_BINARY_SCALAR(_npi_rpower_scalar)
 .set_attr<FCompute>("FCompute", BinaryScalarOp::Compute<cpu, mshadow_op::rpower>)
 .set_attr<nnvm::FGradient>("FGradient", ElemwiseGradUseOut{"_backward_rpower_scalar"});
 
+MXNET_OPERATOR_REGISTER_NP_BINARY_SCALAR(_npi_copysign_scalar)
+.set_attr<FCompute>("FCompute", BinaryScalarOp::Compute<cpu, mshadow_op::copysign>)
+.set_attr<nnvm::FGradient>("FGradient", ElemwiseGradUseIn{"_backward_npi_copysign_scalar"});
+
+MXNET_OPERATOR_REGISTER_NP_BINARY_SCALAR(_npi_rcopysign_scalar)
+.set_attr<FCompute>("FCompute", BinaryScalarOp::Compute<cpu, mshadow_op::rcopysign>)
+.set_attr<nnvm::FGradient>("FGradient", ElemwiseGradUseIn{"_backward_npi_rcopysign_scalar"});
+
+MXNET_OPERATOR_REGISTER_BINARY(_backward_npi_copysign_scalar)
+.add_argument("scalar", "float", "scalar value")
 
 Review comment:
   Yes, I have fixed it, and I have added additional tests for the scalar cases. 
Thank you!




[GitHub] [incubator-mxnet] hzfan commented on a change in pull request #15851: [Numpy] Numpy copysign

2019-09-15 Thread GitBox
hzfan commented on a change in pull request #15851: [Numpy] Numpy copysign
URL: https://github.com/apache/incubator-mxnet/pull/15851#discussion_r324453143
 
 

 ##
 File path: tests/python/unittest/test_numpy_op.py
 ##
 @@ -149,6 +149,66 @@ def hybrid_forward(self, F, a):
     assert same(a.grad.asnumpy(), expected_grad)
 
 
+@with_seed()
+@use_np
+def test_np_copysign():
+    class TestCopysign(HybridBlock):
+        def __init__(self):
+            super(TestCopysign, self).__init__()
+
+        def hybrid_forward(self, F, a1, a2):
+            return F.np.copysign(a1, a2)
+
+    def get_grad(a1, a2):
+        sign = _np.logical_or(_np.logical_and(a1 < 0, a2 < 0),
+                              _np.logical_and(a1 >= 0, a2 >= 0))
+        sign = 2 * sign.astype(int) - 1
+        sign = sign.reshape(-1, *a1.shape)
+        sign = _np.sum(sign, axis=0)
+        return sign, _np.zeros_like(a2)
+
+    shapes = [
+        (),
+        (1),
+        (2, 1),
+        (3, 2, 1),
+        (4, 3, 2, 1),
+        (2, 4, 3, 2, 1)
+    ]
+    types = ['float16', 'float32', 'float64', 'int8', 'int32', 'int64']
+    for a1shape in shapes:
+        for a2shape in shapes:
+            for hybridize in [True, False]:
+                for dtype in types:
+                    test_copysign = TestCopysign()
+                    if hybridize:
+                        test_copysign.hybridize()
+                    rtol = 1e-3
+                    atol = 1e-5
+                    a1_np = _np.array(_np.random.uniform(-1.0, 1.0, a1shape), dtype=dtype)
+                    a2_np = _np.array(_np.random.uniform(-1.0, 1.0, a2shape), dtype=dtype)
+                    a1 = mx.nd.array(a1_np).as_np_ndarray()
 
 Review comment:
   Yes, I have fixed it. Thank you!




[GitHub] [incubator-mxnet] hzfan commented on a change in pull request #15851: [Numpy] Numpy copysign

2019-09-15 Thread GitBox
hzfan commented on a change in pull request #15851: [Numpy] Numpy copysign
URL: https://github.com/apache/incubator-mxnet/pull/15851#discussion_r324453119
 
 

 ##
 File path: src/operator/numpy/np_elemwise_broadcast_op.cu
 ##
 @@ -42,6 +42,12 @@ NNVM_REGISTER_OP(_npi_mod)
 NNVM_REGISTER_OP(_npi_power)
 .set_attr<FCompute>("FCompute", BinaryBroadcastCompute<gpu, mshadow_op::power>);
 
+NNVM_REGISTER_OP(_npi_copysign)
+.set_attr<FCompute>("FCompute", BinaryBroadcastCompute<gpu, mshadow_op::copysign>);
+
+NNVM_REGISTER_OP(_backward_npi_copysign)
+.set_attr<FCompute>("FCompute", BinaryBroadcastBackwardUseIn<gpu, mshadow_op::copysign_grad,
+                                                             mshadow_op::copysign_rgrad>);
 
 Review comment:
   fixed.




[GitHub] [incubator-mxnet] hzfan commented on a change in pull request #15851: [Numpy] Numpy copysign

2019-09-15 Thread GitBox
hzfan commented on a change in pull request #15851: [Numpy] Numpy copysign
URL: https://github.com/apache/incubator-mxnet/pull/15851#discussion_r324453107
 
 

 ##
 File path: src/operator/numpy/np_elemwise_broadcast_op.cc
 ##
 @@ -182,5 +202,25 @@ MXNET_OPERATOR_REGISTER_NP_BINARY_SCALAR(_npi_rpower_scalar)
 .set_attr<FCompute>("FCompute", BinaryScalarOp::Compute<cpu, mshadow_op::rpower>)
 .set_attr<nnvm::FGradient>("FGradient", ElemwiseGradUseOut{"_backward_rpower_scalar"});
 
+MXNET_OPERATOR_REGISTER_NP_BINARY_SCALAR(_npi_copysign_scalar)
+.set_attr<FCompute>("FCompute", BinaryScalarOp::Compute<cpu, mshadow_op::copysign>)
+.set_attr<nnvm::FGradient>("FGradient", ElemwiseGradUseIn{"_backward_npi_copysign_scalar"});
+
+MXNET_OPERATOR_REGISTER_NP_BINARY_SCALAR(_npi_rcopysign_scalar)
+.set_attr<FCompute>("FCompute", BinaryScalarOp::Compute<cpu, mshadow_op::rcopysign>)
+.set_attr<nnvm::FGradient>("FGradient", ElemwiseGradUseIn{"_backward_npi_rcopysign_scalar"});
+
+MXNET_OPERATOR_REGISTER_BINARY(_backward_npi_copysign_scalar)
+.add_argument("scalar", "float", "scalar value")
+.set_attr_parser([](NodeAttrs *attrs) { attrs->parsed = std::stod(attrs->dict["scalar"]); })
+.set_attr<FCompute>("FCompute",
+                    BinaryScalarOp::Backward<cpu, mshadow_op::copysign_grad>);
+
+MXNET_OPERATOR_REGISTER_BINARY(_backward_npi_rcopysign_scalar)
+.add_argument("scalar", "float", "scalar value")
+.set_attr_parser([](NodeAttrs *attrs) { attrs->parsed = std::stod(attrs->dict["scalar"]); })
+.set_attr<FCompute>("FCompute", BinaryScalarOp::Backward<
+                    cpu, mshadow_op::rcopysign_grad>);
 
 Review comment:
   fixed.
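   For context, a hedged NumPy sketch of why both a forward and a reversed scalar operator are registered above: the scalar operand can sit on either side of copysign.

   import numpy as np

   x = np.array([-1.5, 0.5, 2.0])

   # Scalar as the sign donor: the _npi_copysign_scalar path.
   print(np.copysign(x, -1.0))   # [-1.5 -0.5 -2. ]

   # Scalar as the magnitude, array as the sign donor: the "reversed"
   # _npi_rcopysign_scalar path.
   print(np.copysign(-1.0, x))   # [-1.  1.  1.]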


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[incubator-mxnet-site] branch asf-site updated: Bump the publish timestamp.

2019-09-15 Thread anirudh2290
This is an automated email from the ASF dual-hosted git repository.

anirudh2290 pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new 8ab3b60  Bump the publish timestamp.
8ab3b60 is described below

commit 8ab3b605387b406ebacb81752a68f7f66d7e0350
Author: mxnet-ci 
AuthorDate: Sun Sep 15 07:36:49 2019 +

Bump the publish timestamp.
---
 date.txt | 1 +
 1 file changed, 1 insertion(+)

diff --git a/date.txt b/date.txt
new file mode 100644
index 000..a225561
--- /dev/null
+++ b/date.txt
@@ -0,0 +1 @@
+Sun Sep 15 07:36:49 UTC 2019



[GitHub] [incubator-mxnet] pengzhao-intel commented on issue #16167: [RFC] Apache MXNet 2.0 Roadmap

2019-09-15 Thread GitBox
pengzhao-intel commented on issue #16167: [RFC] Apache MXNet 2.0 Roadmap
URL: 
https://github.com/apache/incubator-mxnet/issues/16167#issuecomment-531542441
 
 
   @szha Really great proposal, and we may want to add some items to 2.0 too.
   Is there a timeline for 2.0?
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] pengzhao-intel commented on issue #16173: Saving and loading cudNN autotune and graph optimization

2019-09-15 Thread GitBox
pengzhao-intel commented on issue #16173: Saving and loading cudNN autotune and 
graph optimization
URL: 
https://github.com/apache/incubator-mxnet/issues/16173#issuecomment-531542295
 
 
   FYI, MKLDNN graph fusion is already enabled by default :)
   One more thing: saving the fused graph may cause portability issues and break backward compatibility, so we need a fallback solution as well.
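   For readers looking for that fallback, a hedged sketch (exact behavior may differ across MXNet versions): the MXNET_SUBGRAPH_BACKEND environment variable controls subgraph fusion and must be set before the library initializes.

   import os

   # Hedged sketch: disable subgraph (MKLDNN) fusion to fall back to the
   # unfused graph; must be set before importing mxnet.
   os.environ['MXNET_SUBGRAPH_BACKEND'] = 'NONE'

   import mxnet as mx
   print(mx.__version__)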
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] shdyn opened a new issue #16174: How to build new operators?

2019-09-15 Thread GitBox
shdyn opened a new issue #16174: How to build new operators?
URL: https://github.com/apache/incubator-mxnet/issues/16174
 
 
   Hi, I need to compile the operators in https://github.com/deepinsight/insightface/tree/master/3rdparty/operator. I tried copying the files in that folder to incubator-mxnet/src/operator and compiling the sources with incubator-mxnet/docs/install/install_mxnet_ubuntu_python.sh, and then installed the Python package with incubator-mxnet/python/setup.py. However, I still cannot run these operators. Any suggestions? Thanks in advance!
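   Not an answer from the project, but a hedged sketch of one way to check that a rebuilt operator is visible from the installed Python package (the operator name below is hypothetical):

   import mxnet as mx

   # Confirm the interpreter picked up the rebuilt package, not an older install.
   print(mx.__file__)

   # Hypothetical name; replace with the operator registered in the copied sources.
   op_name = 'my_custom_op'
   print(op_name, 'registered:', hasattr(mx.nd, op_name) or hasattr(mx.nd.contrib, op_name))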


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] reminisce commented on a change in pull request #16124: [numpy] [tvm] operator true_divide

2019-09-15 Thread GitBox
reminisce commented on a change in pull request #16124: [numpy] [tvm] operator 
true_divide
URL: https://github.com/apache/incubator-mxnet/pull/16124#discussion_r324447425
 
 

 ##
 File path: src/operator/numpy/np_true_divide.cc
 ##
 @@ -41,19 +46,97 @@ bool TrueDivideType(const nnvm::NodeAttrs& attrs,
     const int lhs_dtype = in_attrs->at(0);
     const int rhs_dtype = in_attrs->at(1);
     CHECK_EQ(lhs_dtype, rhs_dtype)
-        << "_true_divide currently only supports same dtype for dividend and divisor";
+      << "_true_divide currently only supports same dtype for dividend and divisor";
   }
-  auto is_float = [](const int dtype) {
-    return dtype == mshadow::kFloat32 || dtype == mshadow::kFloat64 || dtype == mshadow::kFloat16;
-  };
-
-  for (const int dtype : *in_attrs) {
-    CHECK(is_float(dtype)) << "_true_divide currently only supports float dtype";
+  if (IsIntType(in_attrs->at(0))) {
+    TYPE_ASSIGN_CHECK(*out_attrs, 0, mshadow::kFloat64);
 
 Review comment:
   We use `float32` as the default dtype.
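   For comparison, a short illustration of the promotion under discussion: official NumPy promotes integer true division to float64, whereas the default dtype referred to above is float32.

   import numpy as np

   a = np.array([1, 2, 3], dtype=np.int64)
   b = np.array([2, 2, 2], dtype=np.int64)

   out = np.true_divide(a, b)   # integer inputs are promoted to a float dtype
   print(out, out.dtype)        # [0.5 1.  1.5] float64 in official NumPy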


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] reminisce commented on a change in pull request #16124: [numpy] [tvm] operator true_divide

2019-09-15 Thread GitBox
reminisce commented on a change in pull request #16124: [numpy] [tvm] operator 
true_divide
URL: https://github.com/apache/incubator-mxnet/pull/16124#discussion_r324447266
 
 

 ##
 File path: contrib/tvmop/core/umath.py
 ##
 @@ -0,0 +1,112 @@
+ # Licensed to the Apache Software Foundation (ASF) under one
+ # or more contributor license agreements.  See the NOTICE file
+ # distributed with this work for additional information
+ # regarding copyright ownership.  The ASF licenses this file
+ # to you under the Apache License, Version 2.0 (the
+ # "License"); you may not use this file except in compliance
+ # with the License.  You may obtain a copy of the License at
+ #
+ #   http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing,
+ # software distributed under the License is distributed on an
+ # "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ # KIND, either express or implied.  See the License for the
+ # specific language governing permissions and limitations
+ # under the License.
+import tvm
+from .. import defop, AllTypes, AllTypesButHalf
+
+def compute_true_divide(dtype, ndim):
+    A = tvm.placeholder([tvm.var() for _ in range(ndim)], name='A', dtype=dtype)
+    B = tvm.placeholder([tvm.var() for _ in range(ndim)], name='B', dtype=dtype)
+    if dtype in ["float16", "float32", "float64"]:
+        C = tvm.compute([tvm.var() for _ in range(ndim)],
+                        lambda *index: A[index] / B[index], name='C')
+    else:
+        C = tvm.compute([tvm.var() for _ in range(ndim)],
+                        lambda *index: A[index].astype("float64") /
 
 Review comment:
   This always produces an fp64 output. We should allow users to set the desired float dtype. Also note that in numpy (divide/true_divide), setting an integer dtype for the output is not allowed.
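   A hedged NumPy sketch of the constraint mentioned above (a float output dtype can be requested, an integer one cannot):

   import numpy as np

   a = np.arange(1, 5)
   b = np.full(4, 2, dtype=np.int64)

   print(np.true_divide(a, b, dtype=np.float32).dtype)   # float32: allowed

   try:
       np.true_divide(a, b, dtype=np.int32)              # integer output dtype
   except TypeError as e:
       print('rejected:', e)                             # NumPy raises TypeError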


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] reminisce closed pull request #16116: np.ndarray.repeat support & doc for np.repeat

2019-09-15 Thread GitBox
reminisce closed pull request #16116: np.ndarray.repeat support & doc for 
np.repeat
URL: https://github.com/apache/incubator-mxnet/pull/16116
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] reminisce commented on issue #16116: np.ndarray.repeat support & doc for np.repeat

2019-09-15 Thread GitBox
reminisce commented on issue #16116: np.ndarray.repeat support & doc for 
np.repeat
URL: https://github.com/apache/incubator-mxnet/pull/16116#issuecomment-531537737
 
 
   Fixed in https://github.com/apache/incubator-mxnet/pull/16157


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services