[GitHub] MinZhaoLin commented on issue #9920: Asynchronous Issue on CustomOP and mxnet.image.ImageDetIter

2018-08-23 Thread GitBox
MinZhaoLin commented on issue #9920: Asynchronous Issue on CustomOP and 
mxnet.image.ImageDetIter
URL: 
https://github.com/apache/incubator-mxnet/issues/9920#issuecomment-415650918
 
 
   @wkcn 
   I'm using mxnet-cu80. This is the softmax layer I defined: the key point is that when the label is all zeros, it backpropagates a zero gradient. I want to do multi-task training with MXNet's Module, but with each task trained separately, because each sample only carries the label for one task.
   ```
   import mxnet as mx
   import numpy as np

   class Softmax(mx.operator.CustomOp):
       def forward(self, is_train, req, in_data, out_data, aux):
           x = in_data[0]
           # numerically stable softmax over axis 1
           y = mx.nd.exp(x - x.max(axis=1).reshape((x.shape[0], 1)))
           y[:] = mx.nd.divide(y, y.sum(axis=1).reshape((x.shape[0], 1)))
           # note: mx.nd.array(y) copies y to the default (CPU) context
           self.assign(out_data[0], req[0], mx.nd.array(y))

       def backward(self, req, out_grad, in_data, out_data, in_grad, aux):
           l = in_data[1].astype('int32')
           y = out_data[0]
           if l.sum().asnumpy() == 0:
               # label is all zeros: this sample has no label for this task,
               # so pass back a zero gradient
               self.assign(in_grad[0], req[0], mx.nd.zeros_like(y))
           else:
               y[np.arange(l.shape[0]), l] -= 1.0
               y = y / 160  # hard-coded batch-size normalization
               self.assign(in_grad[0], req[0], mx.nd.array(y))

   @mx.operator.register("softmax")
   class SoftmaxProp(mx.operator.CustomOpProp):
       def __init__(self):
           super(SoftmaxProp, self).__init__(need_top_grad=False)

       def list_arguments(self):
           return ['data', 'label']

       def list_outputs(self):
           return ['output']

       def infer_shape(self, in_shape):
           data_shape = in_shape[0]
           label_shape = (in_shape[0][0],)
           output_shape = in_shape[0]
           return [data_shape, label_shape], [output_shape], []

       def infer_type(self, in_type):
           return in_type, [in_type[0]], []

       def create_operator(self, ctx, shapes, dtypes):
           return Softmax()
   ```
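   For context, an op registered through `mx.operator.register` like this is normally pulled into a graph with `mx.sym.Custom`; here is a minimal sketch, assuming the `Softmax`/`SoftmaxProp` classes above are already defined (the variable names are illustrative, not from the original post):
   ```
   import mxnet as mx

   # hypothetical wiring of the op registered above into a symbol graph
   data = mx.sym.Variable('data')
   label = mx.sym.Variable('label')
   net = mx.sym.Custom(data=data, label=label, op_type='softmax')
   ```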
   




[GitHub] MinZhaoLin edited a comment on issue #9920: Asynchronous Issue on CustomOP and mxnet.image.ImageDetIter

2018-08-23 Thread GitBox
MinZhaoLin edited a comment on issue #9920: Asynchronous Issue on CustomOP and 
mxnet.image.ImageDetIter
URL: 
https://github.com/apache/incubator-mxnet/issues/9920#issuecomment-415649816
 
 
   @wkcn 
   At first, after defining my own softmax layer, I hit the error reported in this issue: src/io/image_io.cc:186: Check failed: inputs[0].ctx().dev_mask() == Context::kCPU (2 vs. 1) Only supports cpu input.

   Then, after changing the code according to this issue (that is, after modifying image.py), a new error appeared inside AccMetric: src/ndarray/ndarray_function.cu:43: Check failed: to->type_flag_ == from.type_flag_ (0 vs. 3) Source and target must have the same data type when copying across devices.

   The error occurs specifically at `pred_label = pred_label.asnumpy().astype('int32').flatten()`. When I try to print pred_label just before that line (it works in some places and raises the error in others), the run sometimes finishes normally, but the error is raised every time pred_label reaches asnumpy().
   ```
   class GenderAccMetric(mx.metric.EvalMetric):
       def __init__(self):
           self.axis = 1
           super(GenderAccMetric, self).__init__(
               'acc', axis=self.axis,
               output_names=None, label_names=None)
           self.losses = []
           self.count = 0

       def update(self, _labels, preds):
           self.count += 1
           # print("in gender AccMetric\n")
           # print("label is {}\n".format(_labels[2]))
           # print("preds is {}\n".format(preds[3]))  # when printing label and preds, it sometimes runs normally
           labels = [_labels[2]]
           _preds = [preds[3]]  # use softmax output
           for label, pred_label in zip(labels, _preds):
               # print("pred_label before if is {}\n".format(pred_label))  # printing here also raises the error
               pred_label = pred_label.as_in_context(label.context)  # adding this line still does not help
               if pred_label.shape != label.shape:
                   # pred_label = mx.ndarray.array(_pred_label, ctx=mx.current_context())
                   # converting here fixed one of the AccMetrics, but not the other two
                   pred_label = mx.ndarray.argmax(pred_label, axis=self.axis)
               pred_label = pred_label.asnumpy().astype('int32').flatten()
               label = label.asnumpy()
               if label.ndim == 2:
                   label = label[:, 0]
               label = label.astype('int32').flatten()
               assert label.shape == pred_label.shape
               self.sum_metric += (pred_label.flat == label.flat).sum()
               self.num_inst += len(pred_label.flat)
   ```

   So, all in all, I find this very strange and frustrating: I changed the code the way you suggested and it still fails. My impression is that the error is raised whenever pred calls asnumpy().
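   For what it's worth, the `(0 vs. 3)` in that check corresponds to a float32-vs-uint8 dtype mismatch during a cross-device copy, so one thing worth trying (an assumption on my part, not a confirmed fix) is making the two arrays agree on both device and dtype before any copy:
   ```
   import mxnet as mx

   # sketch: align device and dtype with a reference array before asnumpy()
   def align(a, ref):
       return a.as_in_context(ref.context).astype(ref.dtype)

   pred_label = mx.nd.array([1, 0])            # imagine this came back on a GPU
   label = mx.nd.array([1, 1], dtype='int32')  # the label's dtype/context
   pred_label = align(pred_label, label)
   print(pred_label.context, pred_label.dtype)
   ```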
   


[GitHub] luobao-intel commented on a change in pull request #11664: Fall back when sparse arrays are passed to MKLDNN-enabled operators

2018-08-23 Thread GitBox
luobao-intel commented on a change in pull request #11664: Fall back when 
sparse arrays are passed to MKLDNN-enabled operators
URL: https://github.com/apache/incubator-mxnet/pull/11664#discussion_r212516959
 
 

 ##
 File path: src/operator/nn/mkldnn/mkldnn_base.cc
 ##
 @@ -496,11 +494,6 @@ void OpCheck::Run(mxnet::FCompute fn, const nnvm::NodeAttrs &attrs,
   const std::vector<mxnet::NDArray> &inputs_,
   const std::vector<mxnet::OpReqType> &req,
   const std::vector<mxnet::NDArray> &outputs_) {
-  static auto& is_excluded = Op::GetAttr<bool>("TExcludeMKLDNNDebug");
-  if (is_excluded.get(attrs.op, false)) {
-    LOG(WARNING) << attrs.op->name << " not checked. TExcludeMKLDNNDebug flag present";
-    return;
-  }
 
 Review comment:
   Sorry, I didn't mean to remove the code; it was caused by a bad merge. I'll handle it.




[GitHub] zheng-da commented on a change in pull request #11664: Fall back when sparse arrays are passed to MKLDNN-enabled operators

2018-08-23 Thread GitBox
zheng-da commented on a change in pull request #11664: Fall back when sparse 
arrays are passed to MKLDNN-enabled operators
URL: https://github.com/apache/incubator-mxnet/pull/11664#discussion_r212514628
 
 

 ##
 File path: src/operator/nn/mkldnn/mkldnn_base.cc
 ##
 @@ -473,11 +473,9 @@ void OpCheck::Init(const std::vector<mxnet::NDArray> &inputs_,
   auto ctx = inputs_[0].ctx();
   CHECK(!MKLDNNStream::Get()->HasOps());
   for (size_t i = 0; i < inputs_.size(); i++) {
-    NDArray data = inputs_[i];
-    inputs.emplace_back(data.shape(), ctx, false, data.dtype());
-    if (data.IsMKLDNNData() && data.IsView())
-      data = data.Reorder2Default();
 
 Review comment:
   Why did you remove the code here?
   I think the original code is correct.




[GitHub] zheng-da commented on issue #11664: Fall back when sparse arrays are passed to MKLDNN-enabled operators

2018-08-23 Thread GitBox
zheng-da commented on issue #11664: Fall back when sparse arrays are passed to 
MKLDNN-enabled operators
URL: https://github.com/apache/incubator-mxnet/pull/11664#issuecomment-415642640
 
 
   @azai91 could you please review the code?




[GitHub] zheng-da commented on a change in pull request #11664: Fall back when sparse arrays are passed to MKLDNN-enabled operators

2018-08-23 Thread GitBox
zheng-da commented on a change in pull request #11664: Fall back when sparse 
arrays are passed to MKLDNN-enabled operators
URL: https://github.com/apache/incubator-mxnet/pull/11664#discussion_r212514733
 
 

 ##
 File path: src/operator/nn/mkldnn/mkldnn_base.cc
 ##
 @@ -496,11 +494,6 @@ void OpCheck::Run(mxnet::FCompute fn, const nnvm::NodeAttrs &attrs,
   const std::vector<mxnet::NDArray> &inputs_,
   const std::vector<mxnet::OpReqType> &req,
   const std::vector<mxnet::NDArray> &outputs_) {
-  static auto& is_excluded = Op::GetAttr<bool>("TExcludeMKLDNNDebug");
-  if (is_excluded.get(attrs.op, false)) {
-    LOG(WARNING) << attrs.op->name << " not checked. TExcludeMKLDNNDebug flag present";
-    return;
-  }
 
 Review comment:
   Why is this removed?




[GitHub] nicklhy commented on issue #11649: CPP-PACKAGE Examples throwing an instance of 'dmlc::Error'

2018-08-23 Thread GitBox
nicklhy commented on issue #11649: CPP-PACKAGE Examples throwing an instance of 
'dmlc::Error'
URL: 
https://github.com/apache/incubator-mxnet/issues/11649#issuecomment-415642542
 
 
   @golo314 gcc 5.4, ubuntu16.04.




[GitHub] pengzhao-intel commented on issue #12239: Scale to many CPU cores

2018-08-23 Thread GitBox
pengzhao-intel commented on issue #12239: Scale to many CPU cores
URL: 
https://github.com/apache/incubator-mxnet/issues/12239#issuecomment-415642434
 
 
   Thanks for the feedback. This needs framework-level support, or you can write your own code for your targets.




[GitHub] zheng-da commented on issue #12325: Fix a bug in where op with 1-D input

2018-08-23 Thread GitBox
zheng-da commented on issue #12325: Fix a bug in where op with 1-D input
URL: https://github.com/apache/incubator-mxnet/pull/12325#issuecomment-415641404
 
 
   Please add a test case for 1-D arrays.
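   For reference, a minimal check along those lines might look like this (the values are illustrative):
   ```
   import mxnet as mx

   cond = mx.nd.array([1, 0, 1])
   x = mx.nd.array([1, 2, 3])
   y = mx.nd.array([4, 5, 6])
   # where with 1-D condition/x/y; expected output: [1. 5. 3.]
   print(mx.nd.where(cond, x, y))
   ```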




[GitHub] cclauss commented on issue #12278: Tighten up PyLint directives

2018-08-23 Thread GitBox
cclauss commented on issue #12278: Tighten up PyLint directives
URL: https://github.com/apache/incubator-mxnet/pull/12278#issuecomment-415640918
 
 
   Closing in favor of #12322




[GitHub] qw42 commented on issue #12239: Scale to many CPU cores

2018-08-23 Thread GitBox
qw42 commented on issue #12239: Scale to many CPU cores
URL: 
https://github.com/apache/incubator-mxnet/issues/12239#issuecomment-415639662
 
 
   Hi pengzhao.
   It looks to me that your code above creates multiple processes, which is not the same as scaling with threads.
   In C++ one can create multiple executors and run each executor in its own thread. This approach doesn't work, because there is a lock somewhere in the code that prevents scaling.

   P.S. I am using MKL-DNN, but it is not related to scaling.
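   The process-per-worker alternative being contrasted here can be sketched in Python (everything below is illustrative, not from this thread):
   ```
   import multiprocessing as mp

   def worker(seed):
       # import inside the worker so each process gets its own MXNet engine
       import mxnet as mx
       mx.random.seed(seed)
       x = mx.nd.random.uniform(shape=(1, 3, 224, 224))
       return float((x * 2).sum().asscalar())  # stand-in for running an executor

   if __name__ == '__main__':
       # one engine (and whatever internal lock it holds) per process
       with mp.Pool(4) as pool:
           print(pool.map(worker, range(4)))
   ```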




[GitHub] vishaalkapoor opened a new pull request #12326: [MXAPPS-581] Disable a long test in the SD nightly.

2018-08-23 Thread GitBox
vishaalkapoor opened a new pull request #12326: [MXAPPS-581] Disable a long 
test in the SD nightly.
URL: https://github.com/apache/incubator-mxnet/pull/12326
 
 
   ## Description ##
   Disable a test that's taking longer than 10 minutes with the Python 2 
interpreter in the Straight Dope Nightly.
   
   This was missed previously as the hardware I'm running my tests on was a 
little faster than the hardware the test runner is using.
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [x] The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to the relevant [JIRA issue](https://issues.apache.org/jira/projects/MXNET/issues) created (except PRs with tiny changes)
   - [x] Changes are complete (i.e. I finished coding on this PR)
   - [x] Code is well-documented
   - [x] To my best knowledge, examples are either not affected by this change, or have been fixed to be compatible with this change
   




[incubator-mxnet-site] branch asf-site updated: Bump the publish timestamp.

2018-08-23 Thread zhasheng
This is an automated email from the ASF dual-hosted git repository.

zhasheng pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new a4ac37a  Bump the publish timestamp.
a4ac37a is described below

commit a4ac37aa5d7d7657ffa1337f9062ec69f6a02e30
Author: mxnet-ci 
AuthorDate: Fri Aug 24 00:56:03 2018 +

Bump the publish timestamp.
---
 date.txt | 1 +
 1 file changed, 1 insertion(+)

diff --git a/date.txt b/date.txt
new file mode 100644
index 000..554b515
--- /dev/null
+++ b/date.txt
@@ -0,0 +1 @@
+Fri Aug 24 00:56:03 UTC 2018



[GitHub] eric-haibin-lin commented on a change in pull request #12290: [MXNET-798][WIP] Fix the dtype cast from non float32 in Gradient computation

2018-08-23 Thread GitBox
eric-haibin-lin commented on a change in pull request #12290: [MXNET-798][WIP] 
Fix the dtype cast from non float32 in Gradient computation
URL: https://github.com/apache/incubator-mxnet/pull/12290#discussion_r212494268
 
 

 ##
 File path: src/executor/infer_graph_attr_pass.cc
 ##
 @@ -254,7 +254,8 @@ nnvm::Graph InferAttr(nnvm::Graph &&ret,
       dispatch_mode = &dispatch_modes[nid];
       if (dispatch_modes[nid] == DispatchMode::kUndefined) forward_known = false;
     }
-  auto finfer = finfer_shape.get(inode.source->op(), fdefault);
+  auto finfer = (inode.source->op() == Op::Get("_zeros")) ? fdefault :
+    finfer_shape.get(inode.source->op(), fdefault);
 
 Review comment:
   Are you sure about this? This affects all _zeros ops, not just the case you mentioned.




[GitHub] apeforest commented on issue #12325: Fix a bug in where op with 1-D input

2018-08-23 Thread GitBox
apeforest commented on issue #12325: Fix a bug in where op with 1-D input
URL: https://github.com/apache/incubator-mxnet/pull/12325#issuecomment-415610728
 
 
   @zheng-da Please review




[GitHub] apeforest opened a new pull request #12325: Fix a bug in where op with 1-D input

2018-08-23 Thread GitBox
apeforest opened a new pull request #12325: Fix a bug in where op with 1-D input
URL: https://github.com/apache/incubator-mxnet/pull/12325
 
 
   ## Description ##
   Fix a bug in shape infer in where operator with 1-D input.
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [X] Changes are complete (i.e. I finished coding on this PR)
   - [X] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [X] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, the source of the dataset, expected performance on the test set, and a reference to the original paper if applicable
   - Check the API doc at 
http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [ ] To my best knowledge, examples are either not affected by this change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [X] bug fix




[GitHub] hetong007 commented on a change in pull request #12283: [MXNET-825] Fix CGAN R Example with MNIST dataset

2018-08-23 Thread GitBox
hetong007 commented on a change in pull request #12283: [MXNET-825] Fix CGAN R 
Example with MNIST dataset
URL: https://github.com/apache/incubator-mxnet/pull/12283#discussion_r212488351
 
 

 ##
 File path: example/gan/CGAN_mnist_R/CGAN_train.R
 ##
 @@ -15,9 +15,116 @@
 # specific language governing permissions and limitations
 # under the License.
 
-#
+require("imager")
+require("dplyr")
+require("readr")
+require("mxnet")
+
+source("iterators.R")
+
+### Data import and preparation 
+# First download MNIST train data at Kaggle:
+# https://www.kaggle.com/c/digit-recognizer/data
+
+train <- read_csv("data/train.csv")
+train <- data.matrix(train)
+
+train_data <- train[, -1]
+train_data <- t(train_data/255 * 2 - 1)
+train_label <- as.integer(train[, 1])
+
+dim(train_data) <- c(28, 28, 1, ncol(train_data))
+
+### Model parameters
+random_dim <- 96
+gen_features <- 96
+dis_features <- 32
+image_depth = 1
 
 Review comment:
   `formatR` has an option to check the assignment operator.




[GitHub] sandeep-krishnamurthy closed issue #7835: C++ batchnorm CPU version training well, but validation wrong

2018-08-23 Thread GitBox
sandeep-krishnamurthy closed issue #7835: C++ batchnorm CPU version training 
well, but validation wrong
URL: https://github.com/apache/incubator-mxnet/issues/7835
 
 
   




[GitHub] samskalicky commented on issue #7835: C++ batchnorm CPU version training well, but validation wrong

2018-08-23 Thread GitBox
samskalicky commented on issue #7835: C++ batchnorm CPU version training well, 
but validation wrong
URL: 
https://github.com/apache/incubator-mxnet/issues/7835#issuecomment-415601747
 
 
   @sandeep-krishnamurthy please close this thread. Thanks!




[GitHub] wkcn edited a comment on issue #9920: Asynchronous Issue on CustomOP and mxnet.image.ImageDetIter

2018-08-23 Thread GitBox
wkcn edited a comment on issue #9920: Asynchronous Issue on CustomOP and 
mxnet.image.ImageDetIter
URL: 
https://github.com/apache/incubator-mxnet/issues/9920#issuecomment-415599500
 
 
   @MinZhaoLin The problem you are hitting does not seem to be the same as this issue.
   The pred_label passed into the metric is GPU data, while label is CPU data.
   You can try pred_label.as_in_context(label.context) so that pred and label are in the same context.
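   A minimal sketch of that suggestion (the array values are illustrative):
   ```
   import mxnet as mx

   label = mx.nd.array([0, 1])                           # metric labels live on CPU
   pred_label = mx.nd.array([1, 1])                      # imagine this came back from a GPU executor
   pred_label = pred_label.as_in_context(label.context)  # move pred onto label's context
   print(pred_label.context == label.context)            # True
   ```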




[GitHub] samskalicky commented on issue #11156: mx.nd.topk does not work with ndarray of type float16

2018-08-23 Thread GitBox
samskalicky commented on issue #11156: mx.nd.topk does not work with ndarray of 
type float16
URL: 
https://github.com/apache/incubator-mxnet/issues/11156#issuecomment-415599574
 
 
   Rerunning this now results in the following message:
   
   ```
   >>> import mxnet as mx
   >>> a = mx.nd.array([1,2,3])
   >>> a.astype('float16').max()

   [3.]
   <NDArray 1 @cpu(0)>

   >>> a.astype('float16').topk()
   Traceback (most recent call last):
     File "<stdin>", line 1, in <module>
     File "/home/ubuntu/topk_fp16/python/mxnet/ndarray/ndarray.py", line 189, in __repr__
       return '\n%s\n<%s %s @%s>' % (str(self.asnumpy()),
     File "/home/ubuntu/topk_fp16/python/mxnet/ndarray/ndarray.py", line 1972, in asnumpy
       ctypes.c_size_t(data.size)))
     File "/home/ubuntu/topk_fp16/python/mxnet/base.py", line 252, in check_call
       raise MXNetError(py_str(_LIB.MXGetLastError()))
   mxnet.base.MXNetError: [22:58:33] /home/ubuntu/topk_fp16/src/operator/tensor/./ordering_op-inl.h:535: This operation does not support float16
   ```
   
   This is due to change #12250, which improves the error message to state that float16 is not supported.

   We should change the tags on this issue to [Operator, Feature Request] and remove [Bug], now that it's been handled as not currently supported.
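   Until the operator gains float16 support, one possible workaround (my suggestion, not something from this thread) is to cast up before calling topk:
   ```
   import mxnet as mx

   a = mx.nd.array([1, 2, 3])
   # cast the float16 array up to float32, where topk is supported
   print(a.astype('float16').astype('float32').topk())
   ```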




[GitHub] sandeep-krishnamurthy closed issue #10824: Segmentation Fault when using as_in_context

2018-08-23 Thread GitBox
sandeep-krishnamurthy closed issue #10824: Segmentation Fault when using 
as_in_context
URL: https://github.com/apache/incubator-mxnet/issues/10824
 
 
   






[GitHub] sandeep-krishnamurthy commented on issue #10824: Segmentation Fault when using as_in_context

2018-08-23 Thread GitBox
sandeep-krishnamurthy commented on issue #10824: Segmentation Fault when using 
as_in_context
URL: 
https://github.com/apache/incubator-mxnet/issues/10824#issuecomment-415599375
 
 
   Verified with the latest MXNet builds: on a (CPU) C5.18X machine with the mxnet-mkldnn build and on GPU with mxnet-cu90, the issue is not reproducible.

   Resolving the issue. Please reopen if it still persists.




[GitHub] ankkhedia commented on issue #12321: Sphinx docs build errors

2018-08-23 Thread GitBox
ankkhedia commented on issue #12321: Sphinx docs build errors
URL: 
https://github.com/apache/incubator-mxnet/issues/12321#issuecomment-415598637
 
 
   @sandeep-krishnamurthy Could you please label above as [Python, Build, Bug] 
and remove [Doc]




[GitHub] ankkhedia commented on issue #12318: Sphinx is unable to access some MXNet ONNX module functions

2018-08-23 Thread GitBox
ankkhedia commented on issue #12318: Sphinx is unable to access some MXNet ONNX 
module functions
URL: 
https://github.com/apache/incubator-mxnet/issues/12318#issuecomment-415598762
 
 
   @sandeep-krishnamurthy Could you please label above as [Python, Build, Bug] 
and remove [Doc, Website]




[GitHub] anirudh2290 commented on a change in pull request #12157: Subgraph API for integrating accelerators with MXNet

2018-08-23 Thread GitBox
anirudh2290 commented on a change in pull request #12157: Subgraph API for 
integrating accelerators with MXNet
URL: https://github.com/apache/incubator-mxnet/pull/12157#discussion_r212479384
 
 

 ##
 File path: src/engine/naive_engine.cc
 ##
 @@ -165,14 +178,22 @@ class NaiveEngine final : public Engine {
     }
     CHECK(this->req_completed_)
         << "NaiveEngine only support synchronize Push so far";
+    // increment var version
+    for (auto var : mutable_vars) {
 
 Review comment:
   In the case of DeleteVariable, will this increment the var version after the variable is deleted?




[GitHub] anirudh2290 commented on a change in pull request #12157: Subgraph API for integrating accelerators with MXNet

2018-08-23 Thread GitBox
anirudh2290 commented on a change in pull request #12157: Subgraph API for 
integrating accelerators with MXNet
URL: https://github.com/apache/incubator-mxnet/pull/12157#discussion_r212481311
 
 

 ##
 File path: src/operator/subgraph/default_subgraph_property.cc
 ##
 @@ -0,0 +1,76 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+#include 
+#include 
+#include "./common.h"
+#include "./subgraph_property.h"
+
+namespace mxnet {
+namespace op {
+
+/*
+ * This selects nodes for a subgraph that only contains operators
+ * in a given set and it visits nodes via both input and output links.
+ */
+class ContainOpSelector: public SubgraphSelector {
+ public:
+  explicit ContainOpSelector(const std::unordered_set<std::string>& op_names)
+    : op_names_(op_names) {}
+
+  virtual bool Select(const nnvm::Node &n) {
+    return !n.is_variable() && op_names_.count(n.op()->name);
+  }
+
+  virtual bool SelectInput(const nnvm::Node &n, const nnvm::Node &new_node) {
+    return !new_node.is_variable() && op_names_.count(new_node.op()->name);
+  }
+
+  virtual bool SelectOutput(const nnvm::Node &n, const nnvm::Node &new_node) {
+    return !new_node.is_variable() && op_names_.count(new_node.op()->name);
+  }
+ private:
+  const std::unordered_set<std::string>& op_names_;
+};
+
+/*
+ * This subgraph property finds a subgraph whose nodes have only operators
+ * within a set. The operators in the subgraph will be executed by _default_subgraph_op.
+ */
+class DefaultSubgraphProperty: public SubgraphProperty {
+ public:
+  static SubgraphPropertyPtr Create() { return std::make_shared<DefaultSubgraphProperty>(); }
+  virtual nnvm::NodePtr CreateSubgraphNode(const nnvm::Symbol &sym,
+                                           const int subgraph_id = 0) const {
+    nnvm::NodePtr n = nnvm::Node::Create();
+    n->attrs.op = Op::Get("_default_subgraph_op");
+    n->attrs.name = "_default_subgraph_op" + std::to_string(subgraph_id);
+    n->attrs.subgraphs.push_back(std::make_shared<nnvm::Symbol>(sym));
+    return n;
+  }
+  virtual SubgraphSelectorPtr CreateSubgraphSelector() const {
+    return std::make_shared<ContainOpSelector>(
+        this->GetAttr<std::unordered_set<std::string>>("op_names"));
 
 Review comment:
   Can we override GetAttr for DefaultSubgraphProperty so that it informs the user that the op-name set has to be provided via the C API when testing the default subgraph property? This would also be a good example for developers writing a custom subgraph property.




[GitHub] anirudh2290 commented on a change in pull request #12157: Subgraph API for integrating accelerators with MXNet

2018-08-23 Thread GitBox
anirudh2290 commented on a change in pull request #12157: Subgraph API for 
integrating accelerators with MXNet
URL: https://github.com/apache/incubator-mxnet/pull/12157#discussion_r212480073
 
 

 ##
 File path: src/executor/graph_executor.cc
 ##
 @@ -1428,6 +1430,146 @@ GraphExecutor::CachedSegOpr GraphExecutor::CreateCachedSegOpr(size_t topo_start,
                     iter->c_str());
   return ret;
 }
+
+// Infer shapes, dtypes, stypes, contexts for the forward graph
+static nnvm::Graph InferForwardAttrs(nnvm::Graph g,
+                                     nnvm::ShapeVector arg_shapes,
+                                     nnvm::DTypeVector arg_dtypes,
+                                     StorageTypeVector arg_stypes,
+                                     const Context& default_ctx,
+                                     const std::map<std::string, Context>& ctx_map,
+                                     const std::vector<Context>& in_arg_ctxes,
+                                     const std::vector<Context>& aux_state_ctxes) {
+  const auto& indexed_graph = g.indexed_graph();
+  const auto num_forward_inputs = indexed_graph.input_nodes().size();
+  g = AssignContext(g, default_ctx, ctx_map, in_arg_ctxes, {},
+                    aux_state_ctxes, {}, num_forward_inputs, g.outputs.size());
+  g = InferShape(std::move(g), std::move(arg_shapes), "__shape__");
+  if (g.GetAttr<size_t>("shape_num_unknown_nodes") != 0U) {
+    HandleInferShapeError(num_forward_inputs, indexed_graph,
+                          g.GetAttr<nnvm::ShapeVector>("shape"));
+  }
+  g = InferType(std::move(g), std::move(arg_dtypes), "__dtype__");
+  if (g.GetAttr<size_t>("dtype_num_unknown_nodes") != 0U) {
+    HandleInferTypeError(num_forward_inputs, indexed_graph,
+                         g.GetAttr<nnvm::DTypeVector>("dtype"));
+  }
+  g = InferStorageType(std::move(g), std::move(arg_stypes), "__storage_type__");
+  if (g.GetAttr<size_t>("storage_type_num_unknown_nodes") != 0U) {
+    HandleInferStorageTypeError(num_forward_inputs, indexed_graph,
+                                g.GetAttr<StorageTypeVector>("storage_type"));
+  }
+  return g;
+}
+
+// Given input attr arrays, partition the graph using the backend name equal to prop_name.
+// This is a common function for bind and simple_bind flows.
+static nnvm::Symbol PartitionGraph(const nnvm::Symbol& src,
+                                   const std::string& prop_name,
+                                   const nnvm::ShapeVector& arg_shapes,
+                                   const nnvm::DTypeVector& arg_dtypes,
+                                   const StorageTypeVector& arg_stypes,
+                                   const Context& default_ctx,
+                                   const std::map<std::string, Context>& ctx_map,
+                                   const std::vector<Context>& in_arg_ctxes,
+                                   const std::vector<Context>& aux_state_ctxes) {
+  auto subgraph_prop = op::SubgraphPropertyRegistry::Get()->CreateSubgraphProperty(prop_name);
+  nnvm::Symbol ret = src.Copy();
+  nnvm::Graph g;
+  g.outputs = ret.outputs;
+  g = InferForwardAttrs(g, arg_shapes, arg_dtypes, arg_stypes, default_ctx,
+                        ctx_map, in_arg_ctxes, aux_state_ctxes);
+  subgraph_prop->SetAttr("graph", g);
+  auto it = op::SubgraphPropertyOpNameSet::Get()->find(prop_name);
+  // assign a op name set to the subgraph property if it has been provided by users
+  if (it != op::SubgraphPropertyOpNameSet::Get()->end()) {
+    LOG(INFO) << "SubgraphPropertyOpNameSet for subgraph property " << prop_name
+              << " has been assigned a value. Please make sure it is initialized"
+                 " only for the testing purpose.";
+    subgraph_prop->SetAttr("op_names", it->second);
+  }
+  g.attrs["subgraph_property"] = std::make_shared<nnvm::any>(std::move(subgraph_prop));
+  g = ApplyPass(std::move(g), "PartitionGraph");
+  ret.outputs = g.outputs;
+  return ret;
+}
+
+// Given input attr dicts, partition the graph using the backend name equal to prop_name.
+// This is for simple_bind flow.
+static nnvm::Symbol PartitionGraph(const nnvm::Symbol& src,
+                                   const std::string& prop_name,
+                                   const std::unordered_map<std::string, TShape>& arg_shape_map,
+                                   const std::unordered_map<std::string, int>& arg_dtype_map,
+                                   const std::unordered_map<std::string, int>& arg_stype_map,
+                                   const Context& default_ctx,
+                                   const std::map<std::string, Context>& ctx_map,
+                                   const std::vector<Context>& in_arg_ctxes,
+                                   const std::vector<Context>& aux_state_ctxes) {
+  const std::vector<std::string> input_names = src.ListInputNames(Symbol::kAll);
+  nnvm::ShapeVector arg_shapes(input_names.size(), TShape());
+  nnvm::DTypeVector arg_dtypes(input_names.size(), -1);
+  StorageTypeVector arg_stypes(input_names.size(), kUndefinedStorage);
+  for (size_t i = 0; i < input_names.size(); ++i) {
+    auto it1 = arg_shape_map.find(input_names[i]);
+    if (arg_shape_map.end() != 

[GitHub] Roshrini opened a new pull request #12324: Readme and News updated for 1.3 release

2018-08-23 Thread GitBox
Roshrini opened a new pull request #12324: Readme and News updated for 1.3 
release
URL: https://github.com/apache/incubator-mxnet/pull/12324
 
 
   ## Description ##
   Readme file and News file updated with 1.3 release notes.
   
   
   ## Checklist ##
   ### Essentials ###
   - [ ] Changes are complete (i.e. I finished coding on this PR)
   - [ ] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, the source of the dataset, expected performance on the test set, and a reference to the original paper if applicable
   - Check the API doc at 
http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [ ] To my best knowledge, examples are either not affected by this change, or have been fixed to be compatible with this change
   




[GitHub] anirudhacharya commented on a change in pull request #12230: Update Operator Implementation Tutorial

2018-08-23 Thread GitBox
anirudhacharya commented on a change in pull request #12230: Update Operator 
Implementation Tutorial
URL: https://github.com/apache/incubator-mxnet/pull/12230#discussion_r212480608
 
 

 ##
 File path: tests/python/unittest/test_operator.py
 ##
 @@ -6513,19 +6513,25 @@ def f(x, a, b, c):
 a = np.random.random_sample()
 b = np.random.random_sample()
 c = np.random.random_sample()
-# check forward
-for ndim in range(1, 6):
-shape = rand_shape_nd(ndim, 5)
-data = rand_ndarray(shape=shape, stype='default')
-data_np = data.asnumpy()
-expected = f(data_np, a, b, c)
-output = mx.nd.contrib.quadratic(data, a=a, b=b, c=c)
-assert_almost_equal(output.asnumpy(), expected, rtol=0.001, 
atol=0.0001)
-
-# check backward using finite difference
-data = mx.sym.Variable('data')
-quad_sym = mx.sym.contrib.quadratic(data=data, a=a, b=b, c=c)
-check_numeric_gradient(quad_sym, [data_np], atol=0.001)
+data = mx.symbol.Variable('data')
 
 Review comment:
   I have added it.




[GitHub] anirudhacharya commented on a change in pull request #12230: Update Operator Implementation Tutorial

2018-08-23 Thread GitBox
anirudhacharya commented on a change in pull request #12230: Update Operator 
Implementation Tutorial
URL: https://github.com/apache/incubator-mxnet/pull/12230#discussion_r212480586
 
 

 ##
 File path: docs/faq/add_op_in_backend.md
 ##
 @@ -567,29 +568,66 @@ def test_quadratic_function():
 a = np.random.random_sample()
 b = np.random.random_sample()
 c = np.random.random_sample()
-for ndim in range(1, 6):
 
 Review comment:
   I have added it.




[GitHub] aaronmarkham commented on issue #12318: Sphinx is unable to access some MXNet ONNX module functions

2018-08-23 Thread GitBox
aaronmarkham commented on issue #12318: Sphinx is unable to access some MXNet 
ONNX module functions
URL: 
https://github.com/apache/incubator-mxnet/issues/12318#issuecomment-415595609
 
 
   Hi @marcoabreu - this is more of a Python issue on the engineering side than 
something that can be fixed from within the website or docs systems. I might 
even label it a bug.




[GitHub] aaronmarkham opened a new pull request #12323: Sphinx error reduction

2018-08-23 Thread GitBox
aaronmarkham opened a new pull request #12323: Sphinx error reduction
URL: https://github.com/apache/incubator-mxnet/pull/12323
 
 
   ## Description ##
   This is a resubmission of #11916.
   I had to drop the config-file updates that were already made in the website build pipeline, to keep this one focused on the `toctree` and other content issues.
   
   




[GitHub] ankkhedia commented on issue #12320: Clojure's approx= test function has a weakness

2018-08-23 Thread GitBox
ankkhedia commented on issue #12320: Clojure's approx= test function has a 
weakness
URL: 
https://github.com/apache/incubator-mxnet/issues/12320#issuecomment-415594290
 
 
   @mxnet-label-bot [Clojure]




[GitHub] ankkhedia commented on issue #12321: Sphinx docs build errors

2018-08-23 Thread GitBox
ankkhedia commented on issue #12321: Sphinx docs build errors
URL: 
https://github.com/apache/incubator-mxnet/issues/12321#issuecomment-415594087
 
 
   @mxnet-label-bot [Doc]




[GitHub] cclauss opened a new pull request #12322: Tighten up PyLint directives again

2018-08-23 Thread GitBox
cclauss opened a new pull request #12322: Tighten up PyLint directives again
URL: https://github.com/apache/incubator-mxnet/pull/12322
 
 
   ## Description ##
   #12278 became disconnected from my fork, so its branch is now __unknown repository__. This prevents me from resolving the conflict or making other requested changes. This PR attempts to be an exact copy of #12278. @vandanavk and @marcoabreu, please re-review this PR.
   
   Remove PyLint disable directives that the codebase is not violating. This tightens up the linting of future proposed changes. I also looked through PyLint issues that were flagged only once in the codebase to see whether small code changes could make the code compliant. __Fixed__ via code changes:
   ```
   * Module setup
   python/setup.py:32:0: C0413: Import "from setuptools import find_packages" 
should be placed at the top of the module (wrong-import-position)
   * Module mxnet.executor_manager
   python/mxnet/executor_manager.py:130:24: C0121: Comparison to False should 
be 'expr' or 'expr is not False' (singleton-comparison)
   * Module mxnet.model
   python/mxnet/model.py:136:16: W1508: os.getenv default type is builtins.int. 
Expected str or None. (invalid-envvar-default)
   python/mxnet/model.py:192:0: R1711: Useless return at end of function or 
method (useless-return)
   * Module mxnet.gluon.rnn.rnn_cell
   python/mxnet/gluon/rnn/rnn_cell.py:253:4: R0911: Too many return statements 
(7/6) (too-many-return-statements)
   * Module mxnet.image.detection
   python/mxnet/image/detection.py:311:16: R0916: Too many boolean expressions 
in if statement (6/5) (too-many-boolean-expressions)
   ```
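   For illustration, here are sketches of the kinds of changes the two fixable messages above call for (not the actual diffs; `MXNET_TIMEOUT` is a made-up variable name):
   ```
   import os

   padded = False
   # singleton-comparison: test truthiness instead of comparing to the False singleton
   if not padded:  # instead of: if padded == False:
       pass

   # invalid-envvar-default: os.getenv defaults should be str or None
   timeout = int(os.getenv('MXNET_TIMEOUT', '10'))  # instead of a default of 10 (an int)
   ```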
   
   @vandanavk 
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [ ] The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to 
the relevant [JIRA issue](https://issues.apache.org/jira/projects/MXNET/issues) 
created (except PRs with tiny changes)
   - [ ] Changes are complete (i.e. I finished coding on this PR)
   - [ ] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [ ] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, the source of the dataset, expected performance on the test set, and a reference to the original paper if applicable
   - Check the API doc at 
http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [ ] To my best knowledge, examples are either not affected by this change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [ ] Feature1, tests, (and when applicable, API doc)
   - [ ] Feature2, tests, (and when applicable, API doc)
   
   ## Comments ##
   - If this change is a backward incompatible change, why must this change be 
made.
   - Interesting edge cases to note here
   




[GitHub] ankkhedia commented on issue #12318: Sphinx is unable to access some MXNet ONNX module functions

2018-08-23 Thread GitBox
ankkhedia commented on issue #12318: Sphinx is unable to access some MXNet ONNX 
module functions
URL: 
https://github.com/apache/incubator-mxnet/issues/12318#issuecomment-415594184
 
 
   @mxnet-label-bot [Doc, Website]




[GitHub] taliesinb commented on issue #12320: Clojure's approx= test function has a weakness

2018-08-23 Thread GitBox
taliesinb commented on issue #12320: Clojure's approx= test function has a 
weakness
URL: 
https://github.com/apache/incubator-mxnet/issues/12320#issuecomment-415593173
 
 
   I have fixed this in https://github.com/apache/incubator-mxnet/pull/12064.






[GitHub] taliesinb commented on a change in pull request #12064: Allow stop of arange to be inferred from dims.

2018-08-23 Thread GitBox
taliesinb commented on a change in pull request #12064: Allow stop of arange to 
be inferred from dims.
URL: https://github.com/apache/incubator-mxnet/pull/12064#discussion_r212477584
 
 

 ##
 File path: contrib/clojure-package/src/org/apache/clojure_mxnet/symbol.clj
 ##
 @@ -135,7 +135,17 @@
   ([start stop  {:keys [step repeat dtype]
  :or {step (float 1) repeat (int 1) dtype base/MX_REAL_TYPE}
  :as opts}]
-   (Symbol/arange (float start) ($/option (float stop)) step repeat nil dtype))
+   (Symbol/arange (float start) ($/option (float stop)) step repeat false nil 
dtype))
+  ([start stop]
+   (arange start stop {})))
+
+(defn arange-with-inference
+  "Behaves like arange operator, but infers the stop value from the output 
shape, 
 
 Review comment:
   @gigasquid ok we're good to go. I removed the pointless imperative function 
version of `arange-with-inference`, so I only had to add a test to 
`operator_test.clj`. 
   
   However, in doing this, I think I've picked up a problem with `approx=`: it incorrectly returns true if one of the comparands (is that a word??) is shorter than the other and differs in the trailing elements that the other does not have.
   
   For example, try changing the test starting on line 200 to the following:
   
   ```
   (deftest ones
     (let [ones (sym/ones [2 2])
           exec (sym/simple-bind ones (context/default-context))]
       (is (approx= 1e-4
                    [1 1 1 1 9 9 9 9 9 9]
                    (-> exec (executor/forward) (executor/outputs) (first))))))
   ```
   
   (I've introduced the 9 9 9 9 9 9 here.) This test still passes.
   
   I've reported the issue here: 
https://github.com/apache/incubator-mxnet/issues/12320, and fixed it in this 
PR. It doesn't produce any regressions, luckily!
   
   If my new test looks good to you, we should be ready to merge!
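   For anyone following along, the weakness is the classic zip-truncation bug; a Python analogue (not the Clojure implementation itself) makes it concrete:
   ```
   # zip stops at the shorter sequence, so trailing extras are never compared
   def approx_eq(tol, xs, ys):
       return all(abs(a - b) <= tol for a, b in zip(xs, ys))

   print(approx_eq(1e-4, [1, 1, 1, 1], [1, 1, 1, 1, 9, 9]))  # True, though it should be False
   print(len([1, 1, 1, 1]) == len([1, 1, 1, 1, 9, 9]))       # the missing length guard
   ```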
   




[GitHub] aaronmarkham opened a new issue #12321: Sphinx docs build errors

2018-08-23 Thread GitBox
aaronmarkham opened a new issue #12321: Sphinx docs build errors
URL: https://github.com/apache/incubator-mxnet/issues/12321
 
 
   At the top of the build log below are the doc build errors mentioned in 
#12317.
   
   But there are a lot more across the Python files in the project. If linting is turned on, it seems to be missing issues that Sphinx picks up.
   
   /home/ubuntu/incubator-mxnet/python/mxnet/autograd.py:docstring of 
mxnet.autograd.grad:6: WARNING: Explicit markup ends without a blank line; 
unexpected unindent.
   
/home/ubuntu/incubator-mxnet/python/mxnet/contrib/onnx/onnx2mx/import_model.py:docstring
 of mxnet.contrib.onnx.onnx2mx.import_model.get_model_metadata:9: ERROR: 
Unexpected indentation.
   
/home/ubuntu/incubator-mxnet/python/mxnet/contrib/onnx/onnx2mx/import_model.py:docstring
 of mxnet.contrib.onnx.onnx2mx.import_model.get_model_metadata:11: ERROR: 
Unexpected indentation.
   
/home/ubuntu/incubator-mxnet/python/mxnet/contrib/onnx/onnx2mx/import_model.py:docstring
 of mxnet.contrib.onnx.onnx2mx.import_model.get_model_metadata:12: WARNING: 
Block quote ends without a blank line; unexpected unindent.
   
/home/ubuntu/incubator-mxnet/python/mxnet/contrib/text/embedding.py:docstring 
of mxnet.contrib.text.embedding.GloVe:40: SEVERE: Unexpected section title.
   
   Properties
   --
   
/home/ubuntu/incubator-mxnet/python/mxnet/contrib/text/embedding.py:docstring 
of mxnet.contrib.text.embedding.FastText:54: SEVERE: Unexpected section title.
   
   Properties
   --
   
/home/ubuntu/incubator-mxnet/python/mxnet/contrib/text/embedding.py:docstring 
of mxnet.contrib.text.embedding.CustomEmbedding:31: SEVERE: Unexpected section 
title.
   
   Properties
   --
   
/home/ubuntu/incubator-mxnet/python/mxnet/contrib/text/embedding.py:docstring 
of mxnet.contrib.text.embedding.CompositeEmbedding:18: SEVERE: Unexpected 
section title.
   
   Properties
   --
   /home/ubuntu/incubator-mxnet/python/mxnet/contrib/text/vocab.py:docstring of 
mxnet.contrib.text.vocab.Vocabulary:34: SEVERE: Unexpected section title.
   
   Properties
   --
   /home/ubuntu/incubator-mxnet/docs/api/python/gluon/contrib.md:1: WARNING: 
Inline interpreted text or phrase reference start-string without end-string.
   /home/ubuntu/incubator-mxnet/docs/api/python/gluon/contrib.md:1: WARNING: 
Inline interpreted text or phrase reference start-string without end-string.
   
/home/ubuntu/incubator-mxnet/python/mxnet/gluon/contrib/nn/__init__.py:docstring
 of mxnet.gluon.contrib.nn.Concurrent:1: WARNING: Inline interpreted text or 
phrase reference start-string without end-string.
   
/home/ubuntu/incubator-mxnet/python/mxnet/gluon/contrib/nn/__init__.py:docstring
 of mxnet.gluon.contrib.nn.HybridConcurrent:1: WARNING: Inline interpreted text 
or phrase reference start-string without end-string.
   
/home/ubuntu/incubator-mxnet/python/mxnet/gluon/contrib/rnn/__init__.py:docstring
 of mxnet.gluon.contrib.rnn.VariationalDropoutCell:3: ERROR: Unexpected 
indentation.
   
/home/ubuntu/incubator-mxnet/python/mxnet/gluon/contrib/rnn/__init__.py:docstring
 of mxnet.gluon.contrib.rnn.LSTMPCell:5: ERROR: Unexpected indentation.
   
/home/ubuntu/incubator-mxnet/python/mxnet/gluon/contrib/rnn/__init__.py:docstring
 of mxnet.gluon.contrib.rnn.LSTMPCell:14: WARNING: Literal block ends without a 
blank line; unexpected unindent.
   
/home/ubuntu/incubator-mxnet/python/mxnet/gluon/contrib/rnn/__init__.py:docstring
 of mxnet.gluon.contrib.rnn.LSTMPCell:24: ERROR: Unexpected indentation.
   
/home/ubuntu/incubator-mxnet/python/mxnet/gluon/contrib/rnn/__init__.py:docstring
 of mxnet.gluon.contrib.rnn.LSTMPCell:25: WARNING: Block quote ends without a 
blank line; unexpected unindent.
   
/home/ubuntu/incubator-mxnet/python/mxnet/gluon/contrib/rnn/__init__.py:docstring
 of mxnet.gluon.contrib.rnn.Conv1DRNNCell:: ERROR: Unknown target name: 
"conv_rnn".
   
/home/ubuntu/incubator-mxnet/python/mxnet/gluon/contrib/rnn/__init__.py:docstring
 of mxnet.gluon.contrib.rnn.Conv2DRNNCell:: ERROR: Unknown target name: 
"conv_rnn".
   
/home/ubuntu/incubator-mxnet/python/mxnet/gluon/contrib/rnn/__init__.py:docstring
 of mxnet.gluon.contrib.rnn.Conv3DRNNCell:: ERROR: Unknown target name: 
"conv_rnn".
   
/home/ubuntu/incubator-mxnet/python/mxnet/gluon/contrib/rnn/__init__.py:docstring
 of mxnet.gluon.contrib.rnn.Conv1DLSTMCell:: ERROR: Unknown target name: 
"conv_lstm".
   
/home/ubuntu/incubator-mxnet/python/mxnet/gluon/contrib/rnn/__init__.py:docstring
 of mxnet.gluon.contrib.rnn.Conv2DLSTMCell:: ERROR: Unknown target name: 
"conv_lstm".
   
/home/ubuntu/incubator-mxnet/python/mxnet/gluon/contrib/rnn/__init__.py:docstring
 of mxnet.gluon.contrib.rnn.Conv3DLSTMCell:: ERROR: Unknown target name: 
"conv_lstm".
   
/home/ubuntu/incubator-mxnet/python/mxnet/gluon/contrib/rnn/__init__.py:docstring
 of mxnet.gluon.contrib.rnn.Conv1DGRUCell:: ERROR: Unknown target name: 
"conv_gru".
   

[GitHub] taliesinb opened a new issue #12320: Clojure's approx= test function has a weakness

2018-08-23 Thread GitBox
taliesinb opened a new issue #12320: Clojure's approx= test function has a 
weakness
URL: https://github.com/apache/incubator-mxnet/issues/12320
 
 
   ## Description
   The clojure framework's `approx=` test utility function (defined in 
`contrib/clojure-package/test/org/apache/clojure_mxnet/util_test.clj`) is 
unable to detect differences between arrays of different lengths when the 
difference occurs in the excess elements.
   
   For example, this returns true, but should return false:
   
   ```
   (approx= 1e-9 [1 1 1] [1 1 1 9 9 9])
   ```
   
   This means that tests that use `approx=` in the clojure test suite can 
incorrectly succeed.
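   
   A fix belongs in the Clojure helper itself, but the missing check is 
language-neutral. A minimal sketch in Python (names are illustrative, not 
part of the test suite) of an approximate-equality helper that rejects 
length mismatches instead of silently comparing only the overlapping prefix:
   
   ```
   import numpy as np
   
   def approx_equal(tol, xs, ys):
       xs, ys = np.asarray(xs, dtype=float), np.asarray(ys, dtype=float)
       # Different shapes can never be approximately equal.
       if xs.shape != ys.shape:
           return False
       return bool(np.all(np.abs(xs - ys) <= tol))
   
   assert approx_equal(1e-9, [1, 1, 1], [1, 1, 1])
   assert not approx_equal(1e-9, [1, 1, 1], [1, 1, 1, 9, 9, 9])
   ```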


[incubator-mxnet] branch v1.3.x updated: MXNet to ONNX export tutorial (#12297) (#12316)

2018-08-23 Thread zhasheng
This is an automated email from the ASF dual-hosted git repository.

zhasheng pushed a commit to branch v1.3.x
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/v1.3.x by this push:
 new 483c445  MXNet to ONNX export tutorial  (#12297) (#12316)
483c445 is described below

commit 483c4454bc905a721830030d6b661774f71814ee
Author: Roshani Nagmote 
AuthorDate: Thu Aug 23 15:29:29 2018 -0700

MXNet to ONNX export tutorial  (#12297) (#12316)

* mxnet to onnx export tutorial added

* test added

* addressing review comment

* comments addressed

* few more fixes

* addressing comments

* addressing comments

* retrigger build
---
 docs/api/python/contrib/onnx.md |   1 +
 docs/tutorials/onnx/export_mxnet_to_onnx.md | 134 
 tests/tutorials/test_tutorials.py   |   3 +
 3 files changed, 138 insertions(+)

diff --git a/docs/api/python/contrib/onnx.md b/docs/api/python/contrib/onnx.md
index d7c34ec..4499414 100644
--- a/docs/api/python/contrib/onnx.md
+++ b/docs/api/python/contrib/onnx.md
@@ -35,6 +35,7 @@ This document describes all the ONNX-MXNet APIs.
:maxdepth: 1

/tutorials/onnx/super_resolution.md
+   /tutorials/onnx/export_mxnet_to_onnx.md
/tutorials/onnx/inference_on_onnx_model.md
/tutorials/onnx/fine_tuning_gluon.md
 ```
diff --git a/docs/tutorials/onnx/export_mxnet_to_onnx.md 
b/docs/tutorials/onnx/export_mxnet_to_onnx.md
new file mode 100644
index 000..a9c03be
--- /dev/null
+++ b/docs/tutorials/onnx/export_mxnet_to_onnx.md
@@ -0,0 +1,134 @@
+
+# Exporting MXNet model to ONNX format
+
+[Open Neural Network Exchange (ONNX)](https://github.com/onnx/onnx) provides 
an open source format for AI models. It defines an extensible computation graph 
model, as well as definitions of built-in operators and standard data types.
+
+In this tutorial, we will show how you can save MXNet models to the ONNX 
format.
+
+MXNet-ONNX operators coverage and features are updated regularly. Visit the 
[ONNX operator 
coverage](https://cwiki.apache.org/confluence/display/MXNET/ONNX+Operator+Coverage)
 page for the latest information.
+
+In this tutorial, we will learn how to use the MXNet-to-ONNX exporter on 
pre-trained models.
+
+## Prerequisites
+
+To run the tutorial, you will need to have installed the following Python 
modules:
+- [MXNet >= 1.3.0](http://mxnet.incubator.apache.org/install/index.html)
+- [onnx]( https://github.com/onnx/onnx#installation) v1.2.1 (follow the 
install guide)
+
+*Note:* The MXNet-ONNX importer and exporter follow version 7 of the ONNX 
operator set, which comes with ONNX v1.2.1.
+
+
+```python
+import mxnet as mx
+import numpy as np
+from mxnet.contrib import onnx as onnx_mxnet
+import logging
+logging.basicConfig(level=logging.INFO)
+```
+
+## Downloading a model from the MXNet model zoo
+
+We download the pre-trained ResNet-18 [ImageNet](http://www.image-net.org/) 
model from the [MXNet Model Zoo](http://data.mxnet.io/models/imagenet/).
+We will also download the synset file to match the labels.
+
+```python
+# Download pre-trained resnet model (json and params) by running the 
following code.
+path='http://data.mxnet.io/models/imagenet/'
+[mx.test_utils.download(path+'resnet/18-layers/resnet-18-.params'),
+ mx.test_utils.download(path+'resnet/18-layers/resnet-18-symbol.json'),
+ mx.test_utils.download(path+'synset.txt')]
+```
+
+Now we have downloaded the ResNet-18 symbol, params, and synset files to disk.
+
+## MXNet to ONNX exporter API
+
+Let us describe MXNet's `export_model` API. 
+
+```python
+help(onnx_mxnet.export_model)
+```
+
+```python
+Help on function export_model in module 
mxnet.contrib.onnx.mx2onnx.export_model:
+
+export_model(sym, params, input_shape, input_type=<type 'numpy.float32'>, 
onnx_file_path=u'model.onnx', verbose=False)
+Exports the MXNet model file, passed as a parameter, into an ONNX model.
+Accepts both symbol, parameter objects as well as json and params filepaths 
as input.
+Operator support and coverage - 
https://cwiki.apache.org/confluence/display/MXNET/ONNX
+
+Parameters
+----------
+sym : str or symbol object
+Path to the json file or Symbol object
+params : str or symbol object
+Path to the params file or params dictionary. (Including both 
arg_params and aux_params)
+input_shape : List of tuple
+Input shape of the model e.g [(1,3,224,224)]
+input_type : data type
+Input data type e.g. np.float32
+onnx_file_path : str
+Path where to save the generated onnx file
+verbose : Boolean
+If true will print logs of the model conversion
+
+Returns
+-------
+onnx_file_path : str
+Onnx file path
+```
+
+`export_model` API can accept the MXNet model in one of the following two ways.
+
+1. MXNet sym, params objects:
+* This is useful if we are training a model. At the end of 

[GitHub] szha closed pull request #12316: cherry-pick MXNet to ONNX export tutorial (#12297)

2018-08-23 Thread GitBox
szha closed pull request #12316: cherry-pick MXNet to ONNX export tutorial  
(#12297)
URL: https://github.com/apache/incubator-mxnet/pull/12316
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/docs/api/python/contrib/onnx.md b/docs/api/python/contrib/onnx.md
index d7c34ec1e01..44994145916 100644
--- a/docs/api/python/contrib/onnx.md
+++ b/docs/api/python/contrib/onnx.md
@@ -35,6 +35,7 @@ This document describes all the ONNX-MXNet APIs.
:maxdepth: 1

/tutorials/onnx/super_resolution.md
+   /tutorials/onnx/export_mxnet_to_onnx.md
/tutorials/onnx/inference_on_onnx_model.md
/tutorials/onnx/fine_tuning_gluon.md
 ```
diff --git a/docs/tutorials/onnx/export_mxnet_to_onnx.md 
b/docs/tutorials/onnx/export_mxnet_to_onnx.md
new file mode 100644
index 000..a9c03bed8b1
--- /dev/null
+++ b/docs/tutorials/onnx/export_mxnet_to_onnx.md
@@ -0,0 +1,134 @@
+
+# Exporting MXNet model to ONNX format
+
+[Open Neural Network Exchange (ONNX)](https://github.com/onnx/onnx) provides 
an open source format for AI models. It defines an extensible computation graph 
model, as well as definitions of built-in operators and standard data types.
+
+In this tutorial, we will show how you can save MXNet models to the ONNX 
format.
+
+MXNet-ONNX operators coverage and features are updated regularly. Visit the 
[ONNX operator 
coverage](https://cwiki.apache.org/confluence/display/MXNET/ONNX+Operator+Coverage)
 page for the latest information.
+
+In this tutorial, we will learn how to use the MXNet-to-ONNX exporter on 
pre-trained models.
+
+## Prerequisites
+
+To run the tutorial, you will need to have installed the following Python 
modules:
+- [MXNet >= 1.3.0](http://mxnet.incubator.apache.org/install/index.html)
+- [onnx]( https://github.com/onnx/onnx#installation) v1.2.1 (follow the 
install guide)
+
+*Note:* The MXNet-ONNX importer and exporter follow version 7 of the ONNX 
operator set, which comes with ONNX v1.2.1.
+
+
+```python
+import mxnet as mx
+import numpy as np
+from mxnet.contrib import onnx as onnx_mxnet
+import logging
+logging.basicConfig(level=logging.INFO)
+```
+
+## Downloading a model from the MXNet model zoo
+
+We download the pre-trained ResNet-18 [ImageNet](http://www.image-net.org/) 
model from the [MXNet Model Zoo](http://data.mxnet.io/models/imagenet/).
+We will also download the synset file to match the labels.
+
+```python
+# Download pre-trained resnet model (json and params) by running the 
following code.
+path='http://data.mxnet.io/models/imagenet/'
+[mx.test_utils.download(path+'resnet/18-layers/resnet-18-.params'),
+ mx.test_utils.download(path+'resnet/18-layers/resnet-18-symbol.json'),
+ mx.test_utils.download(path+'synset.txt')]
+```
+
+Now we have downloaded the ResNet-18 symbol, params, and synset files to disk.
+
+## MXNet to ONNX exporter API
+
+Let us describe MXNet's `export_model` API. 
+
+```python
+help(onnx_mxnet.export_model)
+```
+
+```python
+Help on function export_model in module 
mxnet.contrib.onnx.mx2onnx.export_model:
+
+export_model(sym, params, input_shape, input_type=<type 'numpy.float32'>, 
onnx_file_path=u'model.onnx', verbose=False)
+Exports the MXNet model file, passed as a parameter, into an ONNX model.
+Accepts both symbol, parameter objects as well as json and params filepaths 
as input.
+Operator support and coverage - 
https://cwiki.apache.org/confluence/display/MXNET/ONNX
+
+Parameters
+----------
+sym : str or symbol object
+Path to the json file or Symbol object
+params : str or symbol object
+Path to the params file or params dictionary. (Including both 
arg_params and aux_params)
+input_shape : List of tuple
+Input shape of the model e.g [(1,3,224,224)]
+input_type : data type
+Input data type e.g. np.float32
+onnx_file_path : str
+Path where to save the generated onnx file
+verbose : Boolean
+If true will print logs of the model conversion
+
+Returns
+-------
+onnx_file_path : str
+Onnx file path
+```
+
+`export_model` API can accept the MXNet model in one of the following two ways.
+
+1. MXNet sym, params objects:
+* This is useful if we are training a model. At the end of training, we 
just need to invoke the `export_model` function and provide sym and params 
objects as inputs with other attributes to save the model in ONNX format.
+2. MXNet's exported json and params files:
+* This is useful if we have pre-trained models and we want to convert them 
to ONNX format.
+
+Since we have downloaded pre-trained model files, we will use the 
`export_model` API by passing the path for symbol and params files.
+
+## How to use MXNet to ONNX exporter API
+
+We will use the downloaded pre-trained model files (sym, params) and 

[GitHub] hetong007 opened a new issue #12319: Feature Request: support several statistical operators in NDArray

2018-08-23 Thread GitBox
hetong007 opened a new issue #12319: Feature Request: support several 
statistical operators in NDArray
URL: https://github.com/apache/incubator-mxnet/issues/12319
 
 
   ## Description
   
   Median, variance, and standard deviation are basic, useful descriptive 
statistics of an array of numbers.
   
   I propose to add these basic operators for NDArray.
   
   @haojin2 do you have any comments on this?
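   
   Until dedicated operators exist, these statistics can be composed from 
operators NDArray already provides. A hedged sketch (population variance, 
odd-length median only):
   
   ```
   import mxnet as mx
   
   x = mx.nd.array([1.0, 5.0, 2.0, 4.0, 3.0])
   
   mean = mx.nd.mean(x)
   var = mx.nd.mean((x - mean) ** 2)         # population variance
   std = mx.nd.sqrt(var)                     # standard deviation
   median = mx.nd.sort(x)[int(x.size) // 2]  # median via sort (odd length)
   ```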
   


[GitHub] aaronmarkham opened a new issue #12318: Sphinx is unable to access some MXNet ONNX module functions

2018-08-23 Thread GitBox
aaronmarkham opened a new issue #12318: Sphinx is unable to access some MXNet 
ONNX module functions
URL: https://github.com/apache/incubator-mxnet/issues/12318
 
 
   This causes Sphinx to fail to process the API docs when shorthand references 
are used. I used a workaround in #12317 to reference the functions the long 
way. 
   
   This means the reference is:
   mxnet.contrib.onnx.onnx2mx.import_model.import_model
   When it could be:
   mxnet.contrib.onnx.import_model
   
   
![2018-08-23_15-08-55](https://user-images.githubusercontent.com/5974205/44555096-657b4100-a6e8-11e8-8ea5-49cf3db31063.png)
   
   But this doesn't work.
   
   Example:
   ```
   ubuntu@ip-172-31-66-78:~/incubator-mxnet/docs$ python -c "import 
mxnet.contrib.onnx.onnx2mx"
   ubuntu@ip-172-31-66-78:~/incubator-mxnet/docs$ python -c "import 
mxnet.contrib.onnx.onnx2mx.import_model"
   ubuntu@ip-172-31-66-78:~/incubator-mxnet/docs$ python -c "import 
mxnet.contrib.onnx.import_model"
   Traceback (most recent call last):
 File "", line 1, in 
   ImportError: No module named import_model
   ```
   @Roshrini 
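   
   A likely cause is that `python/mxnet/contrib/onnx/__init__.py` does not 
re-export these functions at the package level. A hypothetical one-line 
sketch of such a re-export (illustrative; not necessarily the fix the 
maintainers will choose):
   
   ```
   # In python/mxnet/contrib/onnx/__init__.py (hypothetical):
   from .onnx2mx.import_model import import_model, get_model_metadata
   ```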


[GitHub] nswamy commented on issue #2268: MXNet on Spark Roadmap

2018-08-23 Thread GitBox
nswamy commented on issue #2268: MXNet on Spark Roadmap
URL: 
https://github.com/apache/incubator-mxnet/issues/2268#issuecomment-415590062
 
 
   @idibidiart I am personally very interested in (and will probably work on) 
getting MXNet on Spark for training. In that effort, there is work being done 
by the Spark community to introduce barrier mode scheduling, which will help 
run deep learning frameworks: https://jira.apache.org/jira/browse/SPARK-24374. 
Reach out to me on [ASF Slack](the-asf.slack.com) (#mxnet channel) if you are 
interested in collaborating on this.


[GitHub] zheng-da commented on a change in pull request #11658: Optimize cached op static memory allocation

2018-08-23 Thread GitBox
zheng-da commented on a change in pull request #11658: Optimize cached op 
static memory allocation
URL: https://github.com/apache/incubator-mxnet/pull/11658#discussion_r212438446
 
 

 ##
 File path: src/imperative/imperative_utils.h
 ##
 @@ -789,22 +789,36 @@ inline MemoryPlanVector PlanMemory(
 }
 
 
-inline std::multimap<size_t, NDArray> AllocateMemory(
+inline NDArray AllocateMemory(
 const nnvm::Graph& g,
 const nnvm::IndexedGraph& idx,
 const Context& default_ctx,
 const uint32_t entry_start, const uint32_t entry_end,
 const MemoryPlanVector& mem_plan,
 const std::vector<NDArray*>& arrays,
 std::vector<OpReqType> *array_reqs,
-std::multimap<size_t, NDArray>&& pool = std::multimap<size_t, NDArray>()) {
+const bool use_pool = false,
+NDArray pool = NDArray()) {
   using namespace nnvm;
   const auto& dtypes = g.GetAttr<DTypeVector>("dtype");
   const auto& shapes = g.GetAttr<ShapeVector>("shape");
   const auto& stypes = g.GetAttr<StorageTypeVector>("storage_type");
+  const size_t page_size = dmlc::GetEnv("MXNET_GPU_MEM_POOL_PAGE_SIZE", 4096);
 
-  std::multimap<size_t, NDArray> new_pool;
+  size_t total_size = 0;
+  for (uint32_t i = entry_start; i < entry_end; ++i) {
+if (mem_plan[i].storage_id == exec::kExternalStorageID ||
+mem_plan[i].storage_id == exec::kDynamicStorageID ||
+mem_plan[i].root != i) continue;
+total_size += (mem_plan[i].size + page_size - 1) / page_size * page_size;
 
 Review comment:
   Could you also move the code that calculates `total_size` into the 
condition below?
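   
   For reference, the expression under discussion is the usual 
round-up-to-multiple idiom; a quick Python restatement:
   
   ```
   def round_up(size, page_size=4096):
       # Smallest multiple of page_size that is >= size.
       return (size + page_size - 1) // page_size * page_size
   
   assert round_up(1) == 4096
   assert round_up(4096) == 4096
   assert round_up(4097) == 8192
   ```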


[GitHub] zheng-da commented on a change in pull request #11658: Optimize cached op static memory allocation

2018-08-23 Thread GitBox
zheng-da commented on a change in pull request #11658: Optimize cached op 
static memory allocation
URL: https://github.com/apache/incubator-mxnet/pull/11658#discussion_r212437803
 
 

 ##
 File path: src/imperative/imperative_utils.h
 ##
 @@ -789,22 +789,36 @@ inline MemoryPlanVector PlanMemory(
 }
 
 
-inline std::multimap<size_t, NDArray> AllocateMemory(
+inline NDArray AllocateMemory(
 const nnvm::Graph& g,
 const nnvm::IndexedGraph& idx,
 const Context& default_ctx,
 const uint32_t entry_start, const uint32_t entry_end,
 const MemoryPlanVector& mem_plan,
 const std::vector<NDArray*>& arrays,
 std::vector<OpReqType> *array_reqs,
-std::multimap<size_t, NDArray>&& pool = std::multimap<size_t, NDArray>()) {
+const bool use_pool = false,
+NDArray pool = NDArray()) {
   using namespace nnvm;
   const auto& dtypes = g.GetAttr<DTypeVector>("dtype");
   const auto& shapes = g.GetAttr<ShapeVector>("shape");
   const auto& stypes = g.GetAttr<StorageTypeVector>("storage_type");
+  const size_t page_size = dmlc::GetEnv("MXNET_GPU_MEM_POOL_PAGE_SIZE", 4096);
 
-  std::multimap<size_t, NDArray> new_pool;
+  size_t total_size = 0;
+  for (uint32_t i = entry_start; i < entry_end; ++i) {
+if (mem_plan[i].storage_id == exec::kExternalStorageID ||
+mem_plan[i].storage_id == exec::kDynamicStorageID ||
+mem_plan[i].root != i) continue;
+total_size += (mem_plan[i].size + page_size - 1) / page_size * page_size;
 
 Review comment:
   Is every NDArray aligned to the page size? I thought you wanted to use the 
alignment for the default GPU memory allocation.


[GitHub] aaronmarkham opened a new pull request #12317: Update ONNX API docs references

2018-08-23 Thread GitBox
aaronmarkham opened a new pull request #12317: Update ONNX API docs references
URL: https://github.com/apache/incubator-mxnet/pull/12317
 
 
   ## Description ##
   The ONNX page is currently broken due to some name changes. The API 
Reference is blank:
   https://mxnet.incubator.apache.org/api/python/contrib/onnx.html#api-reference
   
   I fixed the overview table so that it links correctly, and the API reference 
now appears. I also updated the description text.
   
   ## Comments ##
   
   Sphinx won't render any shortcut references to the functions, so I'm calling 
them out the long way. When the Python config for these ONNX modules is 
updated, we can try out the shorthand references and see whether Sphinx 
accepts them.
   
![2018-08-23_15-08-55](https://user-images.githubusercontent.com/5974205/44554604-775be480-a6e6-11e8-8747-194f6b4f0a69.png)
   
   There are lint issues showing up in the docs build logs from these ONNX 
files as well as many other files. 
   
   I'll create separate issues for these.


[GitHub] zheng-da commented on a change in pull request #12174: [MXNET-806] Report error when shape mismatch in "where" operator

2018-08-23 Thread GitBox
zheng-da commented on a change in pull request #12174: [MXNET-806] Report error 
when shape mismatch in "where" operator
URL: https://github.com/apache/incubator-mxnet/pull/12174#discussion_r212472034
 
 

 ##
 File path: src/operator/tensor/control_flow_op.h
 ##
 @@ -188,7 +188,7 @@ inline bool WhereOpShape(const nnvm::NodeAttrs& attrs,
 SHAPE_ASSIGN_CHECK(*in_attrs, 0, tshape);
 return true;
   } else if ((*in_attrs)[0].ndim() == 1) {
-return (*in_attrs)[0].Size() == static_cast<size_t>(tshape[0]);
+CHECK_EQ((*in_attrs)[0].Size(), static_cast<size_t>(tshape[0]));
 
 Review comment:
   This is wrong. What is the problem with the original code? If the first 
dimension doesn't match, shape inference fails, so it should return false.
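   
   To restate the contract in a toy Python sketch (illustrative only, not 
MXNet's actual infer-shape API): returning false means "shapes are not 
consistent (yet)", letting the caller retry, whereas a hard CHECK aborts.
   
   ```
   def where_infer_shape(cond_shape, data_shape):
       # 1-D condition: its length must equal the first axis of the data.
       if len(cond_shape) == 1:
           return cond_shape[0] == data_shape[0]
       return cond_shape == data_shape
   
   assert where_infer_shape((4,), (4, 3))
   assert not where_infer_shape((5,), (4, 3))  # reported, not crashed
   ```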


[GitHub] idibidiart commented on issue #2268: MXNet on Spark Roadmap

2018-08-23 Thread GitBox
idibidiart commented on issue #2268: MXNet on Spark Roadmap
URL: 
https://github.com/apache/incubator-mxnet/issues/2268#issuecomment-415585242
 
 
   At this time, is there a possibility of MXNet on Spark similar to 
TensorFlowOnSpark from Yahoo?


[GitHub] hetong007 commented on a change in pull request #12283: [MXNET-825] Fix CGAN R Example with MNIST dataset

2018-08-23 Thread GitBox
hetong007 commented on a change in pull request #12283: [MXNET-825] Fix CGAN R 
Example with MNIST dataset
URL: https://github.com/apache/incubator-mxnet/pull/12283#discussion_r212467491
 
 

 ##
 File path: example/gan/CGAN_mnist_R/CGAN_train.R
 ##
 @@ -64,32 +80,33 @@ input_names_D <- mxnet:::mx.model.check.arguments(D_sym)
 
 ###
 #initialize optimizers
-optimizer_G<-mx.opt.create(name = "adadelta",
+optimizer_G <- mx.opt.create(name = "adadelta",
rho=0.92, 
 
 Review comment:
   I still see some inconsistent formatting here and there. Please consider 
using a tool like [formatR](https://yihui.name/formatr/) to clean up the style.


[GitHub] Roshrini opened a new pull request #12316: cherry-pick MXNet to ONNX export tutorial (#12297)

2018-08-23 Thread GitBox
Roshrini opened a new pull request #12316: cherry-pick MXNet to ONNX export 
tutorial  (#12297)
URL: https://github.com/apache/incubator-mxnet/pull/12316
 
 
   Cherry-picks commit cc30fabe2f36278f2e251d49f72edee107eb5496 from the master 
branch.


[GitHub] samskalicky commented on issue #12314: flaky test: test_operator.test_dropout

2018-08-23 Thread GitBox
samskalicky commented on issue #12314: flaky test: test_operator.test_dropout
URL: 
https://github.com/apache/incubator-mxnet/issues/12314#issuecomment-415574865
 
 
   @haojin2 this looks like an issue with MKL; building with MKL=OFF works fine.


[GitHub] larroy commented on a change in pull request #12296: Separate refactoring from #12276 in a prior PR

2018-08-23 Thread GitBox
larroy commented on a change in pull request #12296: Separate refactoring from 
#12276 in a prior PR
URL: https://github.com/apache/incubator-mxnet/pull/12296#discussion_r212455772
 
 

 ##
 File path: ci/build.py
 ##
 @@ -90,41 +93,29 @@ def build_docker(platform: str, docker_binary: str, 
registry: str, num_retries:
 # cache-from is needed so we use the cached images tagged from the remote 
via
 # docker pull see: docker_cache.load_docker_cache
 #
+# This also prevents using local layers for caching: 
https://github.com/moby/moby/issues/33002
+# So to use local caching, we should omit the cache-from by using 
--no-dockerhub-cache argument to this
+# script.
+#
 # This doesn't work with multi head docker files.
-# 
-
-for i in range(num_retries):
-logging.info('%d out of %d tries to build the docker image.', i + 1, 
num_retries)
-
-cmd = [docker_binary, "build",
-   "-f", get_dockerfile(platform),
-   "--build-arg", "USER_ID={}".format(os.getuid()),
-   "--build-arg", "GROUP_ID={}".format(os.getgid()),
-   "--cache-from", tag,
-   "-t", tag,
-   "docker"]
+#
+cmd = [docker_binary, "build",
+   "-f", get_dockerfile(platform),
+   "--build-arg", "USER_ID={}".format(os.getuid()),
+   "--build-arg", "GROUP_ID={}".format(os.getgid())]
+if use_cache:
 
 Review comment:
   I'm open. Any suggestions?


[GitHub] sandeep-krishnamurthy opened a new pull request #12315: Enable gluon multi worker data loader test

2018-08-23 Thread GitBox
sandeep-krishnamurthy opened a new pull request #12315: Enable gluon multi 
worker data loader test
URL: https://github.com/apache/incubator-mxnet/pull/12315
 
 
   ## Description ##
   Re-enables the skipped test for the Gluon DataLoader, as described in this 
issue: https://github.com/apache/incubator-mxnet/issues/11455
   
   The test works fine on the latest master.
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [X] Changes are complete (i.e. I finished coding on this PR)
   - [X] All changes have test coverage:
   
   ### Changes ###
   - Enable - `tests/python/unittest/test_gluon_data.py:test_multi_worker`
   
   ## Comments ##
   
   Ran for 10,000 runs on CPU/GPU with/without MKLDNN.
   ```
   ~/incubator-mxnet$ MXNET_TEST_COUNT=1 nosetests -s --verbose 
tests/python/unittest/test_gluon_data.py:test_multi_worker
   /usr/local/lib/python3.5/dist-packages/nose/util.py:453: DeprecationWarning: 
inspect.getargspec() is deprecated, use inspect.signature() instead
 inspect.getargspec(func)
   [INFO] Setting module np/mx/python random seeds, use 
MXNET_MODULE_SEED=1359342575 to reproduce.
   test_gluon_data.test_multi_worker ... ok
   
   --
   Ran 1 test in 509.370s
   
   OK
   ```
   
   @marcoabreu 


[GitHub] marcoabreu commented on a change in pull request #12308: [MKLDNN] Enable convolution fusion.

2018-08-23 Thread GitBox
marcoabreu commented on a change in pull request #12308: [MKLDNN] Enable 
convolution fusion.
URL: https://github.com/apache/incubator-mxnet/pull/12308#discussion_r212230019
 
 

 ##
 File path: src/operator/nn/mkldnn/mkldnn_convolution-inl.h
 ##
 @@ -35,19 +35,48 @@
 namespace mxnet {
 namespace op {
 
+struct ConvFusionParam : public dmlc::Parameter<ConvFusionParam> {
+  // When adding more members into this class, please double check GetHash()
+  // won't overflow.
+  bool with_bn;
+  bool with_relu;
+  bool with_sum;
+  bool with_postsum_relu;
+  DMLC_DECLARE_PARAMETER(ConvFusionParam) {
+DMLC_DECLARE_FIELD(with_bn).set_default(false)
+.describe("Add post batchnorm.");
+DMLC_DECLARE_FIELD(with_relu).set_default(false)
+.describe("Add post relu");
+DMLC_DECLARE_FIELD(with_sum).set_default(false)
+.describe("Add post sum");
+DMLC_DECLARE_FIELD(with_postsum_relu).set_default(false)
+.describe("Add post relu after sum");
+  }
+  const int GetHash() const {
+int hash = 0;
+hash = hash * 2 + this->with_bn ? 1 : 0;
 
 Review comment:
   Possible hash collision: with_bn=0 and with_relu=1 equals with_bn=1 and 
with_relu=0. Consider using bitflags.
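   
   A sketch of the bitflag idea in Python (illustrative; the PR's actual code 
is C++): give each flag its own bit so every combination maps to a distinct 
value.
   
   ```
   def conv_fusion_hash(with_bn, with_relu, with_sum, with_postsum_relu):
       return (int(with_bn)
               | int(with_relu) << 1
               | int(with_sum) << 2
               | int(with_postsum_relu) << 3)
   
   # Distinct flag combinations can no longer collide:
   assert conv_fusion_hash(0, 1, 0, 0) != conv_fusion_hash(1, 0, 0, 0)
   ```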


[GitHub] sandeep-krishnamurthy commented on issue #12296: Separate refactoring from #12276 in a prior PR

2018-08-23 Thread GitBox
sandeep-krishnamurthy commented on issue #12296: Separate refactoring from 
#12276 in a prior PR
URL: https://github.com/apache/incubator-mxnet/pull/12296#issuecomment-415560926
 
 
   @lebeg - This PR is ready to go, if your concerns are addressed.


[incubator-mxnet] branch master updated (4664a30 -> cc30fab)

2018-08-23 Thread skm
This is an automated email from the ASF dual-hosted git repository.

skm pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from 4664a30  [MXNET-696] Define cmp() in Python 3 again (#12295)
 add cc30fab  MXNet to ONNX export tutorial  (#12297)

No new revisions were added by this update.

Summary of changes:
 docs/api/python/contrib/onnx.md |   1 +
 docs/tutorials/onnx/export_mxnet_to_onnx.md | 134 
 tests/tutorials/test_tutorials.py   |   3 +
 3 files changed, 138 insertions(+)
 create mode 100644 docs/tutorials/onnx/export_mxnet_to_onnx.md



[GitHub] sandeep-krishnamurthy closed pull request #12297: MXNet to ONNX export tutorial

2018-08-23 Thread GitBox
sandeep-krishnamurthy closed pull request #12297: MXNet to ONNX export tutorial 
URL: https://github.com/apache/incubator-mxnet/pull/12297
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/docs/api/python/contrib/onnx.md b/docs/api/python/contrib/onnx.md
index d7c34ec1e01..44994145916 100644
--- a/docs/api/python/contrib/onnx.md
+++ b/docs/api/python/contrib/onnx.md
@@ -35,6 +35,7 @@ This document describes all the ONNX-MXNet APIs.
:maxdepth: 1

/tutorials/onnx/super_resolution.md
+   /tutorials/onnx/export_mxnet_to_onnx.md
/tutorials/onnx/inference_on_onnx_model.md
/tutorials/onnx/fine_tuning_gluon.md
 ```
diff --git a/docs/tutorials/onnx/export_mxnet_to_onnx.md 
b/docs/tutorials/onnx/export_mxnet_to_onnx.md
new file mode 100644
index 000..a9c03bed8b1
--- /dev/null
+++ b/docs/tutorials/onnx/export_mxnet_to_onnx.md
@@ -0,0 +1,134 @@
+
+# Exporting MXNet model to ONNX format
+
+[Open Neural Network Exchange (ONNX)](https://github.com/onnx/onnx) provides 
an open source format for AI models. It defines an extensible computation graph 
model, as well as definitions of built-in operators and standard data types.
+
+In this tutorial, we will show how you can save MXNet models to the ONNX 
format.
+
+MXNet-ONNX operators coverage and features are updated regularly. Visit the 
[ONNX operator 
coverage](https://cwiki.apache.org/confluence/display/MXNET/ONNX+Operator+Coverage)
 page for the latest information.
+
+In this tutorial, we will learn how to use the MXNet-to-ONNX exporter on 
pre-trained models.
+
+## Prerequisites
+
+To run the tutorial, you will need to have installed the following Python 
modules:
+- [MXNet >= 1.3.0](http://mxnet.incubator.apache.org/install/index.html)
+- [onnx]( https://github.com/onnx/onnx#installation) v1.2.1 (follow the 
install guide)
+
+*Note:* The MXNet-ONNX importer and exporter follow version 7 of the ONNX 
operator set, which comes with ONNX v1.2.1.
+
+
+```python
+import mxnet as mx
+import numpy as np
+from mxnet.contrib import onnx as onnx_mxnet
+import logging
+logging.basicConfig(level=logging.INFO)
+```
+
+## Downloading a model from the MXNet model zoo
+
+We download the pre-trained ResNet-18 [ImageNet](http://www.image-net.org/) 
model from the [MXNet Model Zoo](http://data.mxnet.io/models/imagenet/).
+We will also download the synset file to match the labels.
+
+```python
+# Download pre-trained resnet model (json and params) by running the 
following code.
+path='http://data.mxnet.io/models/imagenet/'
+[mx.test_utils.download(path+'resnet/18-layers/resnet-18-.params'),
+ mx.test_utils.download(path+'resnet/18-layers/resnet-18-symbol.json'),
+ mx.test_utils.download(path+'synset.txt')]
+```
+
+Now we have downloaded the ResNet-18 symbol, params, and synset files to disk.
+
+## MXNet to ONNX exporter API
+
+Let us describe MXNet's `export_model` API. 
+
+```python
+help(onnx_mxnet.export_model)
+```
+
+```python
+Help on function export_model in module 
mxnet.contrib.onnx.mx2onnx.export_model:
+
+export_model(sym, params, input_shape, input_type=<type 'numpy.float32'>, 
onnx_file_path=u'model.onnx', verbose=False)
+Exports the MXNet model file, passed as a parameter, into an ONNX model.
+Accepts both symbol, parameter objects as well as json and params filepaths 
as input.
+Operator support and coverage - 
https://cwiki.apache.org/confluence/display/MXNET/ONNX
+
+Parameters
+----------
+sym : str or symbol object
+Path to the json file or Symbol object
+params : str or symbol object
+Path to the params file or params dictionary. (Including both 
arg_params and aux_params)
+input_shape : List of tuple
+Input shape of the model e.g [(1,3,224,224)]
+input_type : data type
+Input data type e.g. np.float32
+onnx_file_path : str
+Path where to save the generated onnx file
+verbose : Boolean
+If true will print logs of the model conversion
+
+Returns
+-------
+onnx_file_path : str
+Onnx file path
+```
+
+`export_model` API can accept the MXNet model in one of the following two ways.
+
+1. MXNet sym, params objects:
+* This is useful if we are training a model. At the end of training, we 
just need to invoke the `export_model` function and provide sym and params 
objects as inputs with other attributes to save the model in ONNX format.
+2. MXNet's exported json and params files:
+* This is useful if we have pre-trained models and we want to convert them 
to ONNX format.
+
+Since we have downloaded pre-trained model files, we will use the 
`export_model` API by passing the path for symbol and params files.
+
+## How to use MXNet to ONNX exporter API
+
+We will use the downloaded pre-trained model files (sym, params) and define 
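 
The quoted diff truncates here. Based on the `help()` output shown above, a 
hedged sketch of the exporter call the tutorial goes on to describe 
(filenames assumed from the download step; the epoch suffix in the params 
filename may differ):
 
```python
sym_file = './resnet-18-symbol.json'
params_file = './resnet-18-0000.params'   # assumed epoch-0000 filename

onnx_file = onnx_mxnet.export_model(sym_file, params_file,
                                    [(1, 3, 224, 224)], np.float32,
                                    'exported_resnet18.onnx')
```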

[GitHub] vandanavk commented on issue #12182: Remove Epoch training metric log

2018-08-23 Thread GitBox
vandanavk commented on issue #12182: Remove Epoch training metric log
URL: https://github.com/apache/incubator-mxnet/pull/12182#issuecomment-415559715
 
 
   @sandeep-krishnamurthy Please change it back to pr-awaiting-review


[GitHub] zhreshold commented on issue #12284: [MXNET-853] Fix for smooth_l1 operator scalar default value

2018-08-23 Thread GitBox
zhreshold commented on issue #12284: [MXNET-853] Fix for smooth_l1 operator 
scalar default value
URL: https://github.com/apache/incubator-mxnet/pull/12284#issuecomment-415557483
 
 
   @samskalicky I didn't mean that this PR shouldn't be sent to fix the 
problem, but the exception-catch problem should be addressed anyway. Sorry 
about the confusion. #12286 is a perfect attempt. Thanks!
   
   This LGTM now.


[GitHub] vandanavk commented on issue #12182: Remove Epoch training metric log

2018-08-23 Thread GitBox
vandanavk commented on issue #12182: Remove Epoch training metric log
URL: https://github.com/apache/incubator-mxnet/pull/12182#issuecomment-415556272
 
 
   This PR is ready for review again. Fixed parse_log to accommodate this 
change.


[GitHub] haojin2 opened a new issue #12314: flaky test: test_operator.test_dropout

2018-08-23 Thread GitBox
haojin2 opened a new issue #12314: flaky test: test_operator.test_dropout
URL: https://github.com/apache/incubator-mxnet/issues/12314
 
 
   
http://jenkins.mxnet-ci.amazon-ml.com/blue/organizations/jenkins/incubator-mxnet/detail/PR-12311/1/pipeline
   ```
   ==
   
   FAIL: test_operator.test_dropout
   
   --
   
   Traceback (most recent call last):
   
 File "/usr/local/lib/python3.5/dist-packages/nose/case.py", line 198, in 
runTest
   
   self.test(*self.arg)
   
 File "/work/mxnet/tests/python/unittest/common.py", line 172, in test_new
   
   orig_test(*args, **kwargs)
   
 File "/work/mxnet/tests/python/unittest/test_operator.py", line 5844, in 
test_dropout
   
   check_dropout_ratio(0.0, shape)
   
 File "/work/mxnet/tests/python/unittest/test_operator.py", line 5788, in 
check_dropout_ratio
   
   assert exe.outputs[0].asnumpy().min() == min_value
   
   AssertionError: 
   
    >> begin captured logging << 
   
   common: INFO: Setting test np/mx/python random seeds, use 
MXNET_TEST_SEED=119185068 to reproduce.
   
   - >> end captured logging << -
   ```


[GitHub] haojin2 commented on issue #12312: flaky test: test_ndarray.test_ndarray_elementwise

2018-08-23 Thread GitBox
haojin2 commented on issue #12312: flaky test: 
test_ndarray.test_ndarray_elementwise
URL: 
https://github.com/apache/incubator-mxnet/issues/12312#issuecomment-415540988
 
 
   @mxnet-label-bot [Flaky, Test]


[GitHub] haojin2 commented on issue #12314: flaky test: test_operator.test_dropout

2018-08-23 Thread GitBox
haojin2 commented on issue #12314: flaky test: test_operator.test_dropout
URL: 
https://github.com/apache/incubator-mxnet/issues/12314#issuecomment-415540911
 
 
   @mxnet-label-bot [Flaky, Test]


[GitHub] haojin2 commented on issue #12314: flaky test: test_operator.test_dropout

2018-08-23 Thread GitBox
haojin2 commented on issue #12314: flaky test: test_operator.test_dropout
URL: 
https://github.com/apache/incubator-mxnet/issues/12314#issuecomment-415540830
 
 
   @samskalicky Can you take a look at this?


[GitHub] haojin2 commented on issue #12312: flaky test: test_ndarray.test_ndarray_elementwise

2018-08-23 Thread GitBox
haojin2 commented on issue #12312: flaky test: 
test_ndarray.test_ndarray_elementwise
URL: 
https://github.com/apache/incubator-mxnet/issues/12312#issuecomment-415539605
 
 
   Fix in #12313 


[GitHub] haojin2 opened a new pull request #12313: Set proper atol for check_with_uniform helper function

2018-08-23 Thread GitBox
haojin2 opened a new pull request #12313: Set proper atol for 
check_with_uniform helper function
URL: https://github.com/apache/incubator-mxnet/pull/12313
 
 
   ## Description ##
   Fix for #12312.
   
   ## Checklist ##
   ### Essentials ###
   - [x] Changes are complete (i.e. I finished coding on this PR)
   - [x] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [x] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, 
the source of the dataset, expected performance on the test set, and a 
reference to the original paper if applicable
   - Check the API doc at 
http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [x] To the best of my knowledge, examples are either not affected by this 
change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [x] Set proper atol
   
   ## Comments ##
   


[GitHub] haojin2 opened a new issue #12312: flaky test: test_ndarray.test_ndarray_elementwise

2018-08-23 Thread GitBox
haojin2 opened a new issue #12312: flaky test: 
test_ndarray.test_ndarray_elementwise
URL: https://github.com/apache/incubator-mxnet/issues/12312
 
 
   ```
   ==
   
   FAIL: test_ndarray.test_ndarray_elementwise
   
   --
   
   Traceback (most recent call last):
   
 File "C:\Anaconda3\envs\py3\lib\site-packages\nose\case.py", line 197, in 
runTest
   
   self.test(*self.arg)
   
 File 
"C:\jenkins_slave\workspace\ut-python-cpu@2\tests\python\unittest\common.py", 
line 172, in test_new
   
   orig_test(*args, **kwargs)
   
 File 
"C:\jenkins_slave\workspace\ut-python-cpu@2\tests\python\unittest\test_ndarray.py",
 line 133, in test_ndarray_elementwise
   
   check_with_uniform(lambda x, y: x / y, 2, dim, type_list=real_type)
   
 File 
"C:\jenkins_slave\workspace\ut-python-cpu@2\tests\python\unittest\test_ndarray.py",
 line 59, in check_with_uniform
   
   assert_almost_equal(out1, out2, rtol=2e-3)
   
 File 
"C:\jenkins_slave\workspace\ut-python-cpu@2\windows_package\python\mxnet\test_utils.py",
 line 491, in assert_almost_equal
   
   raise AssertionError(msg)
   
   AssertionError: 
   
   Items are not equal:
   
   Error inf exceeds tolerance rtol=0.002000, atol=0.00.  Location of 
maximum error:(58,), a=-0.09, b=-0.09
   
a: array([-1.75488281,  0.16320801, -1.10449219, ...,  2.,
   
  -0.9921875 ,  0.08856201], dtype=float16)
   
b: array([-1.75488281,  0.16333008, -1.10449219, ...,  2.,
   
  -0.9921875 ,  0.08862305], dtype=float16)
   
    >> begin captured logging << 
   
   common: INFO: Setting test np/mx/python random seeds, use 
MXNET_TEST_SEED=1627488533 to reproduce.
   
   - >> end captured logging << -
   ```
   From the error log, it seems there is actually no big difference between the 
reference and the actual result, so this should be a tolerance problem (atol 
is 1e-20 from the log). Submitting a hot fix.
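   
   Assuming the usual combined check `|a - b| <= atol + rtol * |b|`, a short 
sketch of why a near-zero atol is fragile for results near zero:
   
   ```
   import numpy as np
   
   def within_tolerance(a, b, rtol=2e-3, atol=1e-20):
       return bool(np.all(np.abs(a - b) <= atol + rtol * np.abs(b)))
   
   a, b = np.array([1e-4]), np.array([1.01e-4])
   print(within_tolerance(a, b))             # False: bound is only ~2e-7
   print(within_tolerance(a, b, atol=1e-5))  # True: absolute term absorbs it
   ```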


[incubator-mxnet-site] branch asf-site updated: Bump the publish timestamp.

2018-08-23 Thread zhasheng
This is an automated email from the ASF dual-hosted git repository.

zhasheng pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new 49499ef  Bump the publish timestamp.
49499ef is described below

commit 49499ef929ca126bffc441f8eebe64265c6489e5
Author: mxnet-ci 
AuthorDate: Thu Aug 23 19:01:35 2018 +

Bump the publish timestamp.
---
 date.txt | 1 +
 1 file changed, 1 insertion(+)

diff --git a/date.txt b/date.txt
new file mode 100644
index 000..154180d
--- /dev/null
+++ b/date.txt
@@ -0,0 +1 @@
+Thu Aug 23 19:01:35 UTC 2018



[GitHub] AustinDoolittle closed issue #12228: Device Kernel Image is Invalid (v1.2.1)

2018-08-23 Thread GitBox
AustinDoolittle closed issue #12228: Device Kernel Image is Invalid (v1.2.1)
URL: https://github.com/apache/incubator-mxnet/issues/12228
 
 
   


[GitHub] AustinDoolittle commented on issue #12228: Device Kernel Image is Invalid (v1.2.1)

2018-08-23 Thread GitBox
AustinDoolittle commented on issue #12228: Device Kernel Image is Invalid 
(v1.2.1)
URL: 
https://github.com/apache/incubator-mxnet/issues/12228#issuecomment-415530268
 
 
   Ok, I was able to resolve this issue. The steps to correct it were:
   
   1. Uninstall everything remotely related to nvidia (drivers, cuda, cudnn, 
documentation, etc.)
   2. Uninstall MxNet
   3. Reboot
   4. Install nvidia drivers
   5. Reboot
   6. Install Cuda and Cudnn
   7. Reboot
   8. Install MxNet
   
   I think I definitely went a little overkill with the rebooting, but 
everything appears to be working now. Thanks for the assistance!


[incubator-mxnet] branch master updated: [MXNET-696] Define cmp() in Python 3 again (#12295)

2018-08-23 Thread skm
This is an automated email from the ASF dual-hosted git repository.

skm pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new 4664a30  [MXNET-696] Define cmp() in Python 3 again (#12295)
4664a30 is described below

commit 4664a3005db259d9220fd843540d515a7d3d6036
Author: cclauss 
AuthorDate: Thu Aug 23 20:43:05 2018 +0200

[MXNET-696] Define cmp() in Python 3 again (#12295)
---
 tests/nightly/model_backwards_compatibility_check/common.py | 7 +++
 1 file changed, 7 insertions(+)

diff --git a/tests/nightly/model_backwards_compatibility_check/common.py 
b/tests/nightly/model_backwards_compatibility_check/common.py
index 8950a92..d8ffca2 100644
--- a/tests/nightly/model_backwards_compatibility_check/common.py
+++ b/tests/nightly/model_backwards_compatibility_check/common.py
@@ -29,6 +29,13 @@ from mxnet.gluon import nn
 import re
 from mxnet.test_utils import assert_almost_equal
 
+try:
+cmp # Python 2
+except NameError:
+# See: https://docs.python.org/3.0/whatsnew/3.0.html#ordering-comparisons
+def cmp(x, y):  # Python 3
+return (x > y) - (x < y)
+
 # Set fixed random seeds.
 mx.random.seed(7)
 np.random.seed(7)
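
For reference, the `(x > y) - (x < y)` trick reproduces Python 2's three-way 
comparison exactly; a quick self-contained check:

```python
def cmp(x, y):
    return (x > y) - (x < y)

assert cmp(3, 5) == -1 and cmp(5, 5) == 0 and cmp(7, 5) == 1
```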



[GitHub] sandeep-krishnamurthy closed pull request #12295: [MXNET-696] Define cmp() in Python 3 again

2018-08-23 Thread GitBox
sandeep-krishnamurthy closed pull request #12295: [MXNET-696] Define cmp() in 
Python 3 again
URL: https://github.com/apache/incubator-mxnet/pull/12295
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/tests/nightly/model_backwards_compatibility_check/common.py 
b/tests/nightly/model_backwards_compatibility_check/common.py
index 8950a927083..d8ffca25a3f 100644
--- a/tests/nightly/model_backwards_compatibility_check/common.py
+++ b/tests/nightly/model_backwards_compatibility_check/common.py
@@ -29,6 +29,13 @@
 import re
 from mxnet.test_utils import assert_almost_equal
 
+try:
+    cmp # Python 2
+except NameError:
+    # See: https://docs.python.org/3.0/whatsnew/3.0.html#ordering-comparisons
+    def cmp(x, y):  # Python 3
+        return (x > y) - (x < y)
+
 # Set fixed random seeds.
 mx.random.seed(7)
 np.random.seed(7)


 




[GitHub] sandeep-krishnamurthy closed issue #8270: 5 undefined names in Python code

2018-08-23 Thread GitBox
sandeep-krishnamurthy closed issue #8270: 5 undefined names in Python code
URL: https://github.com/apache/incubator-mxnet/issues/8270
 
 
   




[GitHub] nswamy closed pull request #11157: [MXNET-522] Add Media query support for Multi-size logos

2018-08-23 Thread GitBox
nswamy closed pull request #11157: [MXNET-522] Add Media query support for 
Multi-size logos
URL: https://github.com/apache/incubator-mxnet/pull/11157
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/docs/_static/mxnet-theme/layout.html b/docs/_static/mxnet-theme/layout.html
index 653f5d79161..4880224a6b9 100644
--- a/docs/_static/mxnet-theme/layout.html
+++ b/docs/_static/mxnet-theme/layout.html
@@ -63,7 +63,6 @@
 
 https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.1/MathJax.js?config=TeX-AMS-MML_HTMLorMML";>
 jQuery(function() { Search.loadIndex("{{ pathto('/searchindex.js', 1) }}"); Search.init();});
-
 

[incubator-mxnet] branch master updated: Generalized broadcast_like operator (#11984)

2018-08-23 Thread zhasheng
This is an automated email from the ASF dual-hosted git repository.

zhasheng pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new 43581a7  Generalized broadcast_like operator (#11984)
43581a7 is described below

commit 43581a7cb3393b1e6660c36c5ef4d59a09b212dc
Author: Istvan Fehervari 
AuthorDate: Thu Aug 23 11:15:40 2018 -0700

Generalized broadcast_like operator (#11984)

* Added input_axes and other_axes to broadcast_like

See https://github.com/apache/incubator-mxnet/issues/11871

* Added a simple sanity test

* Fixed linting

* Fixed linting issues

* Renamed parameters, added negative indexing, more testcases

* Fixed linting

* Replaced params with optionals

Unspecified axes result in using the whole shape; empty tuples raise an
exception.
Added tests

* Re-added the default param values

* Fixed indentation
---
 src/operator/tensor/broadcast_reduce_op.h        | 71
 src/operator/tensor/broadcast_reduce_op_value.cc |  5 ++
 tests/python/unittest/test_ndarray.py            | 19 +++
 tests/python/unittest/test_symbol.py             |  3 +-
 4 files changed, 87 insertions(+), 11 deletions(-)

diff --git a/src/operator/tensor/broadcast_reduce_op.h b/src/operator/tensor/broadcast_reduce_op.h
index 351315a..0944d25 100644
--- a/src/operator/tensor/broadcast_reduce_op.h
+++ b/src/operator/tensor/broadcast_reduce_op.h
@@ -147,6 +147,17 @@ struct BroadcastToParam : public dmlc::Parameter<BroadcastToParam> {
   }
 };
 
+struct BroadcastLikeParam : public dmlc::Parameter<BroadcastLikeParam> {
+  dmlc::optional<TShape> lhs_axes;
+  dmlc::optional<TShape> rhs_axes;
+  DMLC_DECLARE_PARAMETER(BroadcastLikeParam) {
+    DMLC_DECLARE_FIELD(lhs_axes).set_default(dmlc::optional<TShape>())
+      .describe("Axes to perform broadcast on in the first input array");
+    DMLC_DECLARE_FIELD(rhs_axes).set_default(dmlc::optional<TShape>())
+      .describe("Axes to copy from the second input array");
+  }
+};
+
 inline int CheckAxis(int axis, int ndim) {
   CHECK(axis < ndim && axis >= -ndim)
 << "axis " << axis << " exceeds the input dimension of " << ndim;
@@ -350,20 +361,60 @@ inline bool BroadcastLikeShape(const nnvm::NodeAttrs& attrs,
   CHECK_EQ(out_attrs->size(), 1U);
   TShape& lhs_shape = (*in_attrs)[0];
   TShape& rhs_shape = (*in_attrs)[1];
-  TShape oshape = TShape(rhs_shape);
-  if (lhs_shape.ndim() == 0 || lhs_shape.ndim() == 0) return false;
 
-  CHECK_EQ(lhs_shape.ndim(), rhs_shape.ndim())
-    << "Operand of shape " << lhs_shape << " cannot be broadcasted to " << rhs_shape;
+  if ((lhs_shape.ndim() == 0) || (lhs_shape.ndim() == 0)) {
+    return false;
+  }
 
-  for (index_t i = 0; i < lhs_shape.ndim(); ++i) {
-    if (rhs_shape[i] != 0) {
-      CHECK(lhs_shape[i] == rhs_shape[i] || lhs_shape[i] == 1)
-        << "Array cannot be broadcasted from " << lhs_shape << " to " << rhs_shape;
-    } else {
-      oshape[i] = lhs_shape[i];
+  const BroadcastLikeParam& param = nnvm::get<BroadcastLikeParam>(attrs.parsed);
+  TShape oshape;
+
+  // lhs or rhs or both params were not specified
+  if (!param.lhs_axes.has_value() || !param.rhs_axes.has_value()) {
+    CHECK_EQ(lhs_shape.ndim(), rhs_shape.ndim())
+      << "Operand of shape " << lhs_shape << " cannot be broadcasted to " << rhs_shape;
+
+    oshape = TShape(rhs_shape);
+    for (index_t i = 0; i < lhs_shape.ndim(); ++i) {
+      if (rhs_shape[i] != 0) {
+        CHECK(lhs_shape[i] == rhs_shape[i] || lhs_shape[i] == 1)
+          << "Array cannot be broadcasted from " << lhs_shape << " to " << rhs_shape;
+      } else {
+        oshape[i] = lhs_shape[i];
+      }
+    }
+  } else {
+    auto lhs_axes = param.lhs_axes.value();
+    auto rhs_axes = param.rhs_axes.value();
+
+    CHECK(rhs_axes.ndim() == lhs_axes.ndim())
+      << "Input_axis and other_axis size does not match";
+
+    CHECK(lhs_axes.ndim() > 0)
+      << "Empty axes tuple is not allowed";
+
+    oshape = TShape(lhs_shape);
+    for (index_t i = 0; i < lhs_axes.ndim(); ++i) {
+      auto copyfrom = lhs_axes[i];
+      if (copyfrom < 0) {
+        copyfrom =  lhs_shape.ndim() + copyfrom;
+      }
+      CHECK(copyfrom >= 0 && copyfrom < oshape.ndim())
+        << "Invalid dimension specified in lhs_axes: " << lhs_axes[i];
+
+      auto copyto = rhs_axes[i];
+      if (copyto < 0) {
+        copyto =  rhs_shape.ndim() + copyto;
+      }
+      CHECK(copyto >= 0 && copyto < rhs_shape.ndim())
+        << "Invalid dimension specified in rhs_axes: " << rhs_axes[i];
+
+      CHECK(lhs_shape[copyfrom] == 1) << "Input axis " << lhs_axes[i]
+        << " at dimension " << i << " cannot be broadcasted to " << rhs_shape[copyto];
+      oshape[copyfrom] = rhs_shape[copyto];
     }
   }
+
   SHAPE_ASSIGN_CHECK(*out_attrs, 0, oshape);
   return true;
 }
diff --git a/src/operator/tensor/broadcast_reduce_op_value.cc 

[GitHub] szha closed pull request #11984: Generalized broadcast_like operator

2018-08-23 Thread GitBox
szha closed pull request #11984: Generalized broadcast_like operator
URL: https://github.com/apache/incubator-mxnet/pull/11984
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/src/operator/tensor/broadcast_reduce_op.h b/src/operator/tensor/broadcast_reduce_op.h
index 351315ab0c8..0944d255a45 100644
--- a/src/operator/tensor/broadcast_reduce_op.h
+++ b/src/operator/tensor/broadcast_reduce_op.h
@@ -147,6 +147,17 @@ struct BroadcastToParam : public dmlc::Parameter<BroadcastToParam> {
   }
 };
 
+struct BroadcastLikeParam : public dmlc::Parameter<BroadcastLikeParam> {
+  dmlc::optional<TShape> lhs_axes;
+  dmlc::optional<TShape> rhs_axes;
+  DMLC_DECLARE_PARAMETER(BroadcastLikeParam) {
+    DMLC_DECLARE_FIELD(lhs_axes).set_default(dmlc::optional<TShape>())
+      .describe("Axes to perform broadcast on in the first input array");
+    DMLC_DECLARE_FIELD(rhs_axes).set_default(dmlc::optional<TShape>())
+      .describe("Axes to copy from the second input array");
+  }
+};
+
 inline int CheckAxis(int axis, int ndim) {
   CHECK(axis < ndim && axis >= -ndim)
 << "axis " << axis << " exceeds the input dimension of " << ndim;
@@ -350,20 +361,60 @@ inline bool BroadcastLikeShape(const nnvm::NodeAttrs& attrs,
   CHECK_EQ(out_attrs->size(), 1U);
   TShape& lhs_shape = (*in_attrs)[0];
   TShape& rhs_shape = (*in_attrs)[1];
-  TShape oshape = TShape(rhs_shape);
-  if (lhs_shape.ndim() == 0 || lhs_shape.ndim() == 0) return false;
 
-  CHECK_EQ(lhs_shape.ndim(), rhs_shape.ndim())
-    << "Operand of shape " << lhs_shape << " cannot be broadcasted to " << rhs_shape;
+  if ((lhs_shape.ndim() == 0) || (lhs_shape.ndim() == 0)) {
+    return false;
+  }
 
-  for (index_t i = 0; i < lhs_shape.ndim(); ++i) {
-    if (rhs_shape[i] != 0) {
-      CHECK(lhs_shape[i] == rhs_shape[i] || lhs_shape[i] == 1)
-        << "Array cannot be broadcasted from " << lhs_shape << " to " << rhs_shape;
-    } else {
-      oshape[i] = lhs_shape[i];
+  const BroadcastLikeParam& param = nnvm::get<BroadcastLikeParam>(attrs.parsed);
+  TShape oshape;
+
+  // lhs or rhs or both params were not specified
+  if (!param.lhs_axes.has_value() || !param.rhs_axes.has_value()) {
+    CHECK_EQ(lhs_shape.ndim(), rhs_shape.ndim())
+      << "Operand of shape " << lhs_shape << " cannot be broadcasted to " << rhs_shape;
+
+    oshape = TShape(rhs_shape);
+    for (index_t i = 0; i < lhs_shape.ndim(); ++i) {
+      if (rhs_shape[i] != 0) {
+        CHECK(lhs_shape[i] == rhs_shape[i] || lhs_shape[i] == 1)
+          << "Array cannot be broadcasted from " << lhs_shape << " to " << rhs_shape;
+      } else {
+        oshape[i] = lhs_shape[i];
+      }
+    }
+  } else {
+    auto lhs_axes = param.lhs_axes.value();
+    auto rhs_axes = param.rhs_axes.value();
+
+    CHECK(rhs_axes.ndim() == lhs_axes.ndim())
+      << "Input_axis and other_axis size does not match";
+
+    CHECK(lhs_axes.ndim() > 0)
+      << "Empty axes tuple is not allowed";
+
+    oshape = TShape(lhs_shape);
+    for (index_t i = 0; i < lhs_axes.ndim(); ++i) {
+      auto copyfrom = lhs_axes[i];
+      if (copyfrom < 0) {
+        copyfrom =  lhs_shape.ndim() + copyfrom;
+      }
+      CHECK(copyfrom >= 0 && copyfrom < oshape.ndim())
+        << "Invalid dimension specified in lhs_axes: " << lhs_axes[i];
+
+      auto copyto = rhs_axes[i];
+      if (copyto < 0) {
+        copyto =  rhs_shape.ndim() + copyto;
+      }
+      CHECK(copyto >= 0 && copyto < rhs_shape.ndim())
+        << "Invalid dimension specified in rhs_axes: " << rhs_axes[i];
+
+      CHECK(lhs_shape[copyfrom] == 1) << "Input axis " << lhs_axes[i]
+        << " at dimension " << i << " cannot be broadcasted to " << rhs_shape[copyto];
+      oshape[copyfrom] = rhs_shape[copyto];
     }
   }
+
   SHAPE_ASSIGN_CHECK(*out_attrs, 0, oshape);
   return true;
 }
diff --git a/src/operator/tensor/broadcast_reduce_op_value.cc b/src/operator/tensor/broadcast_reduce_op_value.cc
index 929c3dfcf0a..c3bc9cfd3f0 100644
--- a/src/operator/tensor/broadcast_reduce_op_value.cc
+++ b/src/operator/tensor/broadcast_reduce_op_value.cc
@@ -31,6 +31,7 @@ DMLC_REGISTER_PARAMETER(NormParam);
 DMLC_REGISTER_PARAMETER(ReduceAxisParam);
 DMLC_REGISTER_PARAMETER(BroadcastAxesParam);
 DMLC_REGISTER_PARAMETER(BroadcastToParam);
+DMLC_REGISTER_PARAMETER(BroadcastLikeParam);
 
 inline std::string get_reduce_axes_description(const std::string& op_name, int line) {
   std::string doc = R"code(Computes the __op__ of array elements over given axes.
@@ -309,7 +310,11 @@ For example::
    broadcast_like([[1,2,3]], [[5,6,7],[7,8,9]]) = [[ 1.,  2.,  3.],
                                                    [ 1.,  2.,  3.]])
 
+   broadcast_like([9], [1,2,3,4,5], lhs_axes=(0,), rhs_axes=(-1,)) = [9,9,9,9,9]
+
 )code" ADD_FILELINE)
+.set_attr_parser(ParamParser<BroadcastLikeParam>)
+.add_arguments(BroadcastLikeParam::__FIELDS__())
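
To make the new parameters concrete, a small usage sketch (mirroring the docstring example above; assumes a build that includes this commit):
```
import mxnet as mx

# Broadcast axis 0 of a one-element array across the last axis of y.
x = mx.nd.array([9])              # shape (1,)
y = mx.nd.array([1, 2, 3, 4, 5])  # shape (5,)
z = mx.nd.broadcast_like(x, y, lhs_axes=(0,), rhs_axes=(-1,))
print(z.asnumpy())                # -> [9. 9. 9. 9. 9.]
```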
 

[GitHub] haojin2 commented on issue #12121: Broken link in test_gluon_model_zoo.test_models

2018-08-23 Thread GitBox
haojin2 commented on issue #12121: Broken link in 
test_gluon_model_zoo.test_models
URL: 
https://github.com/apache/incubator-mxnet/issues/12121#issuecomment-415515885
 
 
   Closing this issue. I agree that the current setting for the number of retries should be sufficient; we only need to keep an active eye on the links to the files and keep them up to date and accessible on CI.




[GitHub] haojin2 closed issue #12121: Broken link in test_gluon_model_zoo.test_models

2018-08-23 Thread GitBox
haojin2 closed issue #12121: Broken link in test_gluon_model_zoo.test_models
URL: https://github.com/apache/incubator-mxnet/issues/12121
 
 
   




[GitHub] hetong007 edited a comment on issue #12121: Broken link in test_gluon_model_zoo.test_models

2018-08-23 Thread GitBox
hetong007 edited a comment on issue #12121: Broken link in 
test_gluon_model_zoo.test_models
URL: 
https://github.com/apache/incubator-mxnet/issues/12121#issuecomment-415514751
 
 
   I agree that having 5 retries is sufficient, if no other improvement to the connection can be made.




[GitHub] hetong007 commented on issue #12121: Broken link in test_gluon_model_zoo.test_models

2018-08-23 Thread GitBox
hetong007 commented on issue #12121: Broken link in 
test_gluon_model_zoo.test_models
URL: 
https://github.com/apache/incubator-mxnet/issues/12121#issuecomment-415514751
 
 
   I agree that 5 retries is sufficient, if no other improvement to the connection can be made.




[GitHub] aaronmarkham commented on issue #12102: site-wide social include

2018-08-23 Thread GitBox
aaronmarkham commented on issue #12102: site-wide social include
URL: https://github.com/apache/incubator-mxnet/pull/12102#issuecomment-415509287
 
 
   @nswamy This one uses the logos from the platforms rather than trying to conform them to the MXNet blue background.
   http://34.201.8.176/versions/social_media_update_v2/
   




[GitHub] anirudhacharya commented on a change in pull request #12306: SoftMin Operator

2018-08-23 Thread GitBox
anirudhacharya commented on a change in pull request #12306: SoftMin Operator
URL: https://github.com/apache/incubator-mxnet/pull/12306#discussion_r212395901
 
 

 ##
 File path: src/operator/nn/softmax.cc
 ##
 @@ -116,6 +116,45 @@ MXNET_OPERATOR_REGISTER_BINARY(_backward_softmax)
 .set_attr("FCompute", SoftmaxGradCompute);
 
+MXNET_OPERATOR_REGISTER_UNARY(softmin)
+.describe(R"code(Applies the softmin function.
+
+The resulting array contains elements in the range (0,1) and the elements along the given axis sum
+up to 1. Equivalent to `softmax(-z)` where `softmax(z)` is defined as:
+
+.. math::
+   softmax(\mathbf{z/t})_j = \frac{e^{z_j/t}}{\sum_{k=1}^K e^{z_k/t}}
+
+for :math:`j = 1, ..., K`
+
+t is the temperature parameter in softmax function. By default, t equals 1.0
+
+Example::
+
+  x = [[ 1.  2.  3.]
+       [ 3.  2.  1.]]
+
+  softmax(x,axis=0) = [[ 0.88079703,  0.5,  0.11920292],
 
 Review comment:
   Resolved offline to change it to softmin.
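   
   For the record, a quick NumPy check (mine, not part of the PR) confirms the printed values are softmin, i.e. softmax(-x):
   ```
   import numpy as np
   
   def softmin(x, axis):
       # softmin(x) = softmax(-x); shift by the per-axis min for stability
       e = np.exp(-(x - x.min(axis=axis, keepdims=True)))
       return e / e.sum(axis=axis, keepdims=True)
   
   x = np.array([[1., 2., 3.], [3., 2., 1.]])
   print(softmin(x, axis=0))  # row 0: [0.8808, 0.5, 0.1192], as in the docstring
   ```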




[GitHub] haojin2 commented on a change in pull request #12306: SoftMin Operator

2018-08-23 Thread GitBox
haojin2 commented on a change in pull request #12306: SoftMin Operator
URL: https://github.com/apache/incubator-mxnet/pull/12306#discussion_r212394462
 
 

 ##
 File path: tests/python/unittest/test_operator.py
 ##
 @@ -4519,6 +4519,23 @@ def test_invalid_shape():
     test_where_numeric_gradient((5, 7, 9), False)
     test_invalid_shape()
 
+
+@with_seed()
+def test_softmin():
+    for ndim in range(1, 5):
+        shape = np.random.randint(1, 5, size=ndim)
 
 Review comment:
   I just realized that checks for different dtypes are lacking in all of the softmax-related tests. I'll add those checks for softmin only in this PR and submit another PR to add them for the other tests.
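   
   A sketch of the kind of dtype coverage being discussed (the exact test in the PR may differ):
   ```
   import numpy as np
   import mxnet as mx
   
   # Hypothetical sweep: run softmin over several dtypes and check that the
   # output dtype is preserved and each row sums to 1.
   for dtype in [np.float16, np.float32, np.float64]:
       data = mx.nd.random.uniform(shape=(3, 4)).astype(dtype)
       out = mx.nd.softmin(data, axis=-1)
       assert out.dtype == data.dtype
       assert np.allclose(out.asnumpy().sum(axis=-1), 1.0, atol=1e-2)
   ```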




[GitHub] sbodenstein commented on issue #11984: Generalized broadcast_like operator

2018-08-23 Thread GitBox
sbodenstein commented on issue #11984: Generalized broadcast_like operator
URL: https://github.com/apache/incubator-mxnet/pull/11984#issuecomment-415504088
 
 
   @szha: is it ready to be merged? (I think those indentation and other issues 
were resolved)




[GitHub] haojin2 commented on a change in pull request #12306: SoftMin Operator

2018-08-23 Thread GitBox
haojin2 commented on a change in pull request #12306: SoftMin Operator
URL: https://github.com/apache/incubator-mxnet/pull/12306#discussion_r212392396
 
 

 ##
 File path: tests/python/unittest/test_operator.py
 ##
 @@ -4519,6 +4519,23 @@ def test_invalid_shape():
     test_where_numeric_gradient((5, 7, 9), False)
     test_invalid_shape()
 
+
+@with_seed()
+def test_softmin():
+    for ndim in range(1, 5):
+        shape = np.random.randint(1, 5, size=ndim)
 
 Review comment:
   I can definitely add that




[GitHub] haojin2 commented on a change in pull request #12306: SoftMin Operator

2018-08-23 Thread GitBox
haojin2 commented on a change in pull request #12306: SoftMin Operator
URL: https://github.com/apache/incubator-mxnet/pull/12306#discussion_r212392233
 
 

 ##
 File path: src/operator/nn/softmax.cc
 ##
 @@ -116,6 +116,45 @@ MXNET_OPERATOR_REGISTER_BINARY(_backward_softmax)
 .set_attr("FCompute", SoftmaxGradCompute);
 
+MXNET_OPERATOR_REGISTER_UNARY(softmin)
+.describe(R"code(Applies the softmin function.
+
+The resulting array contains elements in the range (0,1) and the elements along the given axis sum
+up to 1. Equivalent to `softmax(-z)` where `softmax(z)` is defined as:
+
+.. math::
+   softmax(\mathbf{z/t})_j = \frac{e^{z_j/t}}{\sum_{k=1}^K e^{z_k/t}}
+
+for :math:`j = 1, ..., K`
+
+t is the temperature parameter in softmax function. By default, t equals 1.0
+
+Example::
+
+  x = [[ 1.  2.  3.]
+       [ 3.  2.  1.]]
+
+  softmax(x,axis=0) = [[ 0.88079703,  0.5,  0.11920292],
 
 Review comment:
   Please read the doc carefully before you make a comment.




[GitHub] anirudhacharya commented on a change in pull request #12306: SoftMin Operator

2018-08-23 Thread GitBox
anirudhacharya commented on a change in pull request #12306: SoftMin Operator
URL: https://github.com/apache/incubator-mxnet/pull/12306#discussion_r212374502
 
 

 ##
 File path: tests/python/unittest/test_operator.py
 ##
 @@ -4519,6 +4519,23 @@ def test_invalid_shape():
     test_where_numeric_gradient((5, 7, 9), False)
     test_invalid_shape()
 
+
+@with_seed()
+def test_softmin():
+    for ndim in range(1, 5):
+        shape = np.random.randint(1, 5, size=ndim)
 
 Review comment:
   Should there be a test for different dtypes?




[GitHub] anirudhacharya commented on a change in pull request #12306: SoftMin Operator

2018-08-23 Thread GitBox
anirudhacharya commented on a change in pull request #12306: SoftMin Operator
URL: https://github.com/apache/incubator-mxnet/pull/12306#discussion_r212373939
 
 

 ##
 File path: src/operator/nn/softmax.cc
 ##
 @@ -116,6 +116,45 @@ MXNET_OPERATOR_REGISTER_BINARY(_backward_softmax)
 .set_attr("FCompute", SoftmaxGradCompute);
 
+MXNET_OPERATOR_REGISTER_UNARY(softmin)
+.describe(R"code(Applies the softmin function.
+
+The resulting array contains elements in the range (0,1) and the elements along the given axis sum
+up to 1. Equivalent to `softmax(-z)` where `softmax(z)` is defined as:
+
+.. math::
+   softmax(\mathbf{z/t})_j = \frac{e^{z_j/t}}{\sum_{k=1}^K e^{z_k/t}}
+
+for :math:`j = 1, ..., K`
+
+t is the temperature parameter in softmax function. By default, t equals 1.0
+
+Example::
+
+  x = [[ 1.  2.  3.]
+       [ 3.  2.  1.]]
+
+  softmax(x,axis=0) = [[ 0.88079703,  0.5,  0.11920292],
 
 Review comment:
   Typo: this should be softmin




[GitHub] sandeep-krishnamurthy commented on issue #12121: Broken link in test_gluon_model_zoo.test_models

2018-08-23 Thread GitBox
sandeep-krishnamurthy commented on issue #12121: Broken link in 
test_gluon_model_zoo.test_models
URL: 
https://github.com/apache/incubator-mxnet/issues/12121#issuecomment-415499786
 
 
   @haojin2 / @marcoabreu @hetong007 - ping.
   




[GitHub] KellenSunderland commented on a change in pull request #12296: Separate refactoring from #12276 in a prior PR

2018-08-23 Thread GitBox
KellenSunderland commented on a change in pull request #12296: Separate 
refactoring from #12276 in a prior PR
URL: https://github.com/apache/incubator-mxnet/pull/12296#discussion_r212389624
 
 

 ##
 File path: ci/build.py
 ##
 @@ -284,8 +288,10 @@ def script_name() -> str:
                         default=1,
                         type=int)
 
-    parser.add_argument("-c", "--cache", action="store_true",
-                        help="Enable docker registry cache")
+    parser.add_argument("-c", "--no-dockerhub-cache", action="store_true",
 
 Review comment:
   Ok, that's the information I was missing here.  In this case it makes sense 
to me.
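   
   For readers following the rename: a minimal argparse sketch (flag name taken from the quoted diff) showing how the inverted store_true flag reads at the call site:
   ```
   import argparse
   
   # "--no-dockerhub-cache" defaults to False, so the dockerhub cache stays
   # enabled unless the flag is passed; argparse maps dashes to underscores.
   parser = argparse.ArgumentParser()
   parser.add_argument("-c", "--no-dockerhub-cache", action="store_true",
                       help="Disables use of the dockerhub registry cache")
   args = parser.parse_args(["-c"])
   print(args.no_dockerhub_cache)  # True -> cache disabled
   ```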




[GitHub] haojin2 commented on a change in pull request #12306: SoftMin Operator

2018-08-23 Thread GitBox
haojin2 commented on a change in pull request #12306: SoftMin Operator
URL: https://github.com/apache/incubator-mxnet/pull/12306#discussion_r21235
 
 

 ##
 File path: src/operator/nn/softmax.cc
 ##
 @@ -116,6 +116,45 @@ MXNET_OPERATOR_REGISTER_BINARY(_backward_softmax)
 .set_attr("FCompute", SoftmaxGradCompute);
 
+MXNET_OPERATOR_REGISTER_UNARY(softmin)
+.describe(R"code(Applies the softmin function.
+
+The resulting array contains elements in the range (0,1) and the elements along the given axis sum
+up to 1. Equivalent to `softmax(-z)` where `softmax(z)` is defined as:
+
+.. math::
+   softmax(\mathbf{z/t})_j = \frac{e^{z_j/t}}{\sum_{k=1}^K e^{z_k/t}}
 
 Review comment:
   ok



