[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #15493: [numpy] numpy einsum

2019-07-10 Thread GitBox
haojin2 commented on a change in pull request #15493: [numpy] numpy einsum
URL: https://github.com/apache/incubator-mxnet/pull/15493#discussion_r302276084
 
 

 ##
 File path: src/operator/numpy/np_einsum_op-inl.h
 ##
 @@ -0,0 +1,772 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file np_einsum_op-inl.h
+ * \brief Function definition of numpy-compatible einsum operator
+ */
+
+#ifndef MXNET_OPERATOR_NUMPY_NP_EINSUM_OP_INL_H_
+#define MXNET_OPERATOR_NUMPY_NP_EINSUM_OP_INL_H_
+
+#include 
+#include 
+#include 
+#include "../../common/static_array.h"
+#include "../mxnet_op.h"
+#include "../operator_common.h"
+#include "../mshadow_op.h"
+#include "../elemwise_op_common.h"
+
+
+namespace mxnet {
+namespace op {
+
+#define NPY_MAXDIMS 32
+#define NPY_MAXARGS 32
+
+
+inline TShape get_stride(const TShape& shape) {
+  int ndim = shape.ndim(), prod = 1;
+  TShape stride = TShape(ndim, -1);
+  for (int i = ndim - 1; i >= 0; i--) {
+    stride[i] = shape[i] > 1 ? prod : 0;
+    prod = prod * shape[i];
+  }
+  return stride;
+}
+
+
+inline TShape pad(const TShape& shape, int odim) {
+  int ndim = shape.ndim();
+  CHECK_GE(odim, ndim);
+  TShape ret(odim, 1);
+  for (int idim = 0; idim < ndim; ++idim) {
+    ret[idim] = shape[idim];
+  }
+  return ret;
+}
+
+
 
 Review comment:
   one less blank line here


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] roywei commented on issue #15431: [MXNet 1.5.0.rc2] Issues with asnumpy() method

2019-07-10 Thread GitBox
roywei commented on issue #15431: [MXNet 1.5.0.rc2] Issues with asnumpy() method
URL: 
https://github.com/apache/incubator-mxnet/issues/15431#issuecomment-510236197
 
 
   @Wallart your code is running fine on my machine. I'm testing on the [LJ-Speech Dataset](https://keithito.com/LJ-Speech-Dataset/).
   
   I added a few lines in `main` to format the text files; everything is plotted and `asnumpy()` works fine. Could you try without Docker?
   
   ```
   if __name__ == '__main__':
       logging.basicConfig()
       logging.getLogger().setLevel(logging.INFO)
   
       params = {
           'max_wav_value': 32768.0,  # for 16 bits files
           'sampling_rate': 22050,
           'filter_length': 1024,
           'hop_length': 256,
           'win_length': 1024,
           'n_mel_channels': 80,
           'mel_fmin': 0.0,
           'mel_fmax': 8000.0
       }
       with open('~/Downloads/LJSpeech-1.1/metadata.csv', encoding='utf-8') as f:
           for line in f:
               record = line.split('|')
               file_name = record[0] + '.txt'
               content = record[2]
               with open('~/Downloads/LJSpeech-1.1/wavs/' + file_name, 'w', encoding='utf-8') as text_output:
                   text_output.write(content)
   
       french = WavDataset('~/Downloads/LJSpeech-1.1/wavs', text_to_sequence, **params)
       assert type(french[0]) == tuple
   ```
   
   




[GitHub] [incubator-mxnet] roywei edited a comment on issue #15431: [MXNet 1.5.0.rc2] Issues with asnumpy() method

2019-07-10 Thread GitBox
roywei edited a comment on issue #15431: [MXNet 1.5.0.rc2] Issues with 
asnumpy() method
URL: 
https://github.com/apache/incubator-mxnet/issues/15431#issuecomment-510236197
 
 
   @Wallart your code is running fine on my machine. I'm testing on the [LJ-Speech Dataset](https://keithito.com/LJ-Speech-Dataset/).
   
   I added a few lines in `main` to format the text files; everything is plotted and `asnumpy()` works fine. Could you try without Docker?
   
   ```
   if __name__ == '__main__':
       logging.basicConfig()
       logging.getLogger().setLevel(logging.INFO)
   
       params = {
           'max_wav_value': 32768.0,  # for 16 bits files
           'sampling_rate': 22050,
           'filter_length': 1024,
           'hop_length': 256,
           'win_length': 1024,
           'n_mel_channels': 80,
           'mel_fmin': 0.0,
           'mel_fmax': 8000.0
       }
       with open('~/Downloads/LJSpeech-1.1/metadata.csv', encoding='utf-8') as f:
           for line in f:
               record = line.split('|')
               file_name = record[0] + '.txt'
               content = record[2]
               with open('~/Downloads/LJSpeech-1.1/wavs/' + file_name, 'w', encoding='utf-8') as text_output:
                   text_output.write(content)
   
       french = WavDataset('~/Downloads/LJSpeech-1.1/wavs', text_to_sequence, **params)
       assert type(french[0]) == tuple
   ```
   
   ![Screen Shot 2019-07-10 at 2 16 55 
PM](https://user-images.githubusercontent.com/8022184/61006418-24e7de80-a31f-11e9-84aa-a433ab722972.png)
   




[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #15493: [numpy] numpy einsum

2019-07-10 Thread GitBox
haojin2 commented on a change in pull request #15493: [numpy] numpy einsum
URL: https://github.com/apache/incubator-mxnet/pull/15493#discussion_r302284030
 
 

 ##
 File path: src/operator/numpy/np_einsum_op.cc
 ##
 @@ -0,0 +1,192 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file np_einsum_op.cc
+ * \brief CPU Implementation of numpy-compatible einsum
+ */
+
+#include "./np_einsum_op-inl.h"
+#include 
+#include 
+
+namespace mxnet {
+namespace op {
+
+inline bool NumpyEinsumShape(const nnvm::NodeAttrs& attrs,
+ mxnet::ShapeVector *in_attrs,
+ mxnet::ShapeVector *out_attrs) {
+  const NumpyEinsumParam &param = nnvm::get<NumpyEinsumParam>(attrs.parsed);
+  const char* subscripts = param.subscripts.c_str();
+  int num_args = param.num_args;
+  CHECK_EQ(in_attrs->size(), num_args);
+  CHECK_EQ(out_attrs->size(), 1U);
+  for (int i = 0; i < num_args; i++) {
+    if (!shape_is_known(in_attrs->at(i))) {
+      return false;
+    }
+  }
+
+  int iop, label, min_label = 127, max_label = 0;
+  int nop = num_args;
+  char label_counts[128];
+  int label_size[128], max_broadcast = -1;
+  char op_labels[NPY_MAXARGS][NPY_MAXDIMS];
+  char output_labels[NPY_MAXDIMS];
+  int idim, ndim_output, ndim_broadcast;
+
+  /* Parse the subscripts string into label_counts and op_labels */
+  memset(label_counts, 0, sizeof(label_counts));
+  for (iop = 0; iop < nop; ++iop) {
+    int length = static_cast<int>(strcspn(subscripts, ",-"));
+    CHECK(!(iop == nop-1 && subscripts[length] == ','))
+      << "more operands provided to einstein sum function "
+         "than specified in the subscripts string";
+    CHECK(!(iop < nop-1 && subscripts[length] != ','))
+      << "fewer operands provided to einstein sum function "
+         "than specified in the subscripts string";
+    CHECK_GE(parse_operand_subscripts(subscripts, length,
+                                      in_attrs->at(iop).ndim(),
+                                      iop, op_labels[iop], label_counts,
+                                      &min_label, &max_label), 0);
+
+    /* Move subscripts to the start of the labels for the next op */
+    subscripts += length;
+    if (iop < nop - 1) {
+      subscripts++;
+    }
+  }
+
+  /*
+   * Find the number of broadcast dimensions, which is the maximum
+   * number of labels == 0 in an op_labels array.
+   */
+  ndim_broadcast = 0;
+  for (iop = 0; iop < nop; ++iop) {
+    int count_zeros = 0;
+    int ndim;
+    char *labels = op_labels[iop];
+    ndim = in_attrs->at(iop).ndim();
+    for (idim = 0; idim < ndim; ++idim) {
+      if (labels[idim] == 0) {
+        ++count_zeros;
+      } else if (labels[idim] > 0) {
+        label_size[static_cast<int>(labels[idim])] = in_attrs->at(iop)[idim];
+      }
+    }
+    if (count_zeros > ndim_broadcast) {
+      ndim_broadcast = count_zeros;
+      max_broadcast = iop;
+    }
+  }
+
+  /*
+   * If there is no output signature, fill output_labels and ndim_output
+   * using each label that appeared once, in alphabetical order.
+   */
+  if (subscripts[0] == '\0') {
+    /* If no output was specified, always broadcast left, as usual. */
+    for (ndim_output = 0; ndim_output < ndim_broadcast; ++ndim_output) {
+      output_labels[ndim_output] = 0;
+    }
+    for (label = min_label; label <= max_label; ++label) {
+      if (label_counts[label] == 1) {
+        CHECK(ndim_output < NPY_MAXDIMS)
+          << "einstein sum subscript string has too many "
+          << "distinct labels";
+        output_labels[ndim_output++] = label;
+      }
+    }
+  } else {
+    CHECK(subscripts[0] == '-' && subscripts[1] == '>')
+      << "einstein sum subscript string does not "
+      << "contain proper '->' output specified";
+    subscripts += 2;
+
+    /* Parse the output subscript string. */
+    ndim_output = parse_output_subscripts(subscripts, strlen(subscripts),
+                                          ndim_broadcast, label_counts,
+                                          output_labels);
+    CHECK_GE(ndim_output, 0);
+  }
+
+  // std::cout << "output_labels" << std::endl;
+  // for (int i = 0; i < ndim_output; ++i) {
+  //   std::cout << output_labels[i] << " ";
+

[GitHub] [incubator-mxnet] lanking520 merged pull request #15500: fix the bug on Scala Sparse

2019-07-10 Thread GitBox
lanking520 merged pull request #15500: fix the bug on Scala Sparse
URL: https://github.com/apache/incubator-mxnet/pull/15500
 
 
   




[GitHub] [incubator-mxnet] IvyBazan commented on issue #15454: Julia docs

2019-07-10 Thread GitBox
IvyBazan commented on issue #15454: Julia docs
URL: https://github.com/apache/incubator-mxnet/pull/15454#issuecomment-510260831
 
 
   LGTM! Verified Julia installation and doc building process on an Ubuntu EC2 
instance.




[GitHub] [incubator-mxnet] Zha0q1 commented on issue #15490: [WIP] Utility to help developers debug operators: Tensor Inspector

2019-07-10 Thread GitBox
Zha0q1 commented on issue #15490: [WIP] Utility to help developers debug 
operators: Tensor Inspector
URL: https://github.com/apache/incubator-mxnet/pull/15490#issuecomment-510275321
 
 
   > I could probably rewrite build_checker() 
(https://github.com/apache/incubator-mxnet/pull/15490/files#diff-9e7d5c2420ecc900c4a85a3c35d91bffR446)
 with a macro to suppress warnings when DType is not floating point. But it's not causing any issues, because in such cases the control flow never reaches the undefined operation. Will change if I have time.
   
   After experimenting for several hours today, I think there is no good way to branch at compile time to suppress the warnings. Macros apparently did not work. I also tried simulating a C++17-style `constexpr if` with templated structs, an idea proposed by 
https://baptiste-wicht.com/posts/2015/07/simulate-static_if-with-c11c14.html, 
but that method was compiler-specific and did not work either.
   
   Having no warnings when compiling is nice, but given that this is a developer tool and that the code has no behavioral issue, I think I will leave the warnings.




[GitHub] [incubator-mxnet] ChaiBapchya opened a new issue #15506: Improving error message

2019-07-10 Thread GitBox
ChaiBapchya opened a new issue #15506: Improving error message
URL: https://github.com/apache/incubator-mxnet/issues/15506
 
 
   Source build on Ubuntu (EC2)
   
   I ran into this error:
   
   I thought this error message could be improved:
   
   ```
   src/operator/quantization/./.././../common/../operator/mxnet_op.h(772): 
catastrophic error: error while writing generated C file: No space left on 
device
   
   1 catastrophic error detected in the compilation of 
"/tmp/tmpxft_32f8_-16_quantize.compute_35.cpp1.ii".
   Compilation terminated.
   : fatal error: when writing output to : No space left on device
   compilation terminated.
   
   Makefile:535: recipe for target 'build/src/operator/nn/layer_norm_gpu.o' 
failed
   make: *** [build/src/operator/nn/layer_norm_gpu.o] Error 1
   Makefile:535: recipe for target 
'build/src/operator/quantization/quantize_gpu.o' failed
   make: *** [build/src/operator/quantization/quantize_gpu.o] Error 1
   : fatal error: when writing output to : No such file or directory
   compilation terminated.
   ```




[GitHub] [incubator-mxnet] mxnet-label-bot commented on issue #15506: Improving error message

2019-07-10 Thread GitBox
mxnet-label-bot commented on issue #15506: Improving error message
URL: 
https://github.com/apache/incubator-mxnet/issues/15506#issuecomment-510279254
 
 
   Hey, this is the MXNet Label Bot. 
Thank you for submitting the issue! I will try and suggest some labels so 
that the appropriate MXNet community members can help resolve it. 





[GitHub] [incubator-mxnet] ChaiBapchya commented on issue #15506: Improving error message

2019-07-10 Thread GitBox
ChaiBapchya commented on issue #15506: Improving error message
URL: 
https://github.com/apache/incubator-mxnet/issues/15506#issuecomment-510279558
 
 
   @mxnet-label-bot add [Installation, Unclear Error/Doc, Build]




[GitHub] [incubator-mxnet] zhly0 commented on issue #15131: doc consistence

2019-07-10 Thread GitBox
zhly0 commented on issue #15131: doc consistence
URL: 
https://github.com/apache/incubator-mxnet/issues/15131#issuecomment-510288738
 
 
   Thanks!




[GitHub] [incubator-mxnet] zhly0 closed issue #15131: doc consistence

2019-07-10 Thread GitBox
zhly0 closed issue #15131: doc consistence
URL: https://github.com/apache/incubator-mxnet/issues/15131
 
 
   




[GitHub] [incubator-mxnet] zixuanweeei commented on a change in pull request #15497: Independent gradients requests check with respect to weights and bias of convolution

2019-07-10 Thread GitBox
zixuanweeei commented on a change in pull request #15497: Independent gradients 
requests check with respect to weights and bias of convolution
URL: https://github.com/apache/incubator-mxnet/pull/15497#discussion_r302333454
 
 

 ##
 File path: src/operator/nn/mkldnn/mkldnn_convolution.cc
 ##
 @@ -662,21 +662,21 @@ void MKLDNNConvolutionBackward(const nnvm::NodeAttrs& 
attrs, const OpContext &ct
 in_grad[conv::kWeight],
 convBwdWeight.bwdWeights_pd.diff_weights_primitive_desc(),
 req[conv::kWeight]);
-mkldnn_output_t in_grad_bias;
-if (param.no_bias) {
-  convBwdWeight.SetWeightNewMem(*data_mem, *out_grad_mem,
-  *in_grad_weight.second);
-  MKLDNNStream::Get()->RegisterPrim(convBwdWeight.GetBwdWeights());
-} else {
-  in_grad_bias = CreateMKLDNNMem(
+
+if (!param.no_bias && req[conv::kBias]) {
+  auto in_grad_bias = CreateMKLDNNMem(
   in_grad[conv::kBias],
   convBwdWeight.bwdWeights_pd.diff_bias_primitive_desc(), req[conv::kBias]);
   convBwdWeight.SetWeightNewMem(*data_mem, *out_grad_mem,
-  *in_grad_weight.second, *in_grad_bias.second);
+  *in_grad_weight.second, *in_grad_bias.second);
   MKLDNNStream::Get()->RegisterPrim(convBwdWeight.GetBwdWeights());
   CommitOutput(in_grad[conv::kBias], in_grad_bias);
+} else {
 
 Review comment:
   Sure, I see. Without this check, an unnecessary primitive registration would occur. Thanks.




[GitHub] [incubator-mxnet] Zha0q1 commented on issue #15490: Utility to help developers debug operators: Tensor Inspector

2019-07-10 Thread GitBox
Zha0q1 commented on issue #15490: Utility to help developers debug operators: 
Tensor Inspector
URL: https://github.com/apache/incubator-mxnet/pull/15490#issuecomment-510294519
 
 
   @apeforest @anirudh2290 @access2rohit @larroy @sandeep-krishnamurthy ,
   Hi guys, I completed this PR and it's ready for review. The only thing I do not have yet is support for CSR NDArrays, because I wanted to ask your opinion first. I think I probably do not need to generate the whole matrix; would having something like this in my interactive print be good enough? (This is the existing test::print.)
   ![Screen Shot 2019-07-10 at 6 49 08 
PM](https://user-images.githubusercontent.com/16669457/61016172-93d72e80-a343-11e9-9ddb-439c39a6fc82.png)




[GitHub] [incubator-mxnet] Zha0q1 edited a comment on issue #15490: Utility to help developers debug operators: Tensor Inspector

2019-07-10 Thread GitBox
Zha0q1 edited a comment on issue #15490: Utility to help developers debug 
operators: Tensor Inspector
URL: https://github.com/apache/incubator-mxnet/pull/15490#issuecomment-510294519
 
 
   @apeforest @anirudh2290 @access2rohit @larroy @sandeep-krishnamurthy ,
   Hi guys, I completed this PR and it's ready for review. The only thing I do not have yet is support for CSR NDArrays, because I wanted to ask your opinion first. I think I probably do not need to generate the whole matrix; would having something like this in my interactive print be good enough? (This is in the existing test::print.)
   ![Screen Shot 2019-07-10 at 6 49 08 
PM](https://user-images.githubusercontent.com/16669457/61016172-93d72e80-a343-11e9-9ddb-439c39a6fc82.png)




[GitHub] [incubator-mxnet] KellenSunderland commented on issue #15399: Add unit tests for TensorRT integration and fix some bugs

2019-07-10 Thread GitBox
KellenSunderland commented on issue #15399: Add unit tests for TensorRT 
integration and fix some bugs
URL: https://github.com/apache/incubator-mxnet/pull/15399#issuecomment-510294845
 
 
   Looks like CI caught a few issues.  For example 
http://jenkins.mxnet-ci.amazon-ml.com/blue/organizations/jenkins/mxnet-validation%2Funix-gpu/detail/PR-15399/1/pipeline
 seems like it should be relevant to this PR.  I'll have a look to see if 
there's anything else that jumps out at me.




[GitHub] [incubator-mxnet] KellenSunderland commented on a change in pull request #15399: Add unit tests for TensorRT integration and fix some bugs

2019-07-10 Thread GitBox
KellenSunderland commented on a change in pull request #15399: Add unit tests 
for TensorRT integration and fix some bugs
URL: https://github.com/apache/incubator-mxnet/pull/15399#discussion_r302336971
 
 

 ##
 File path: src/operator/subgraph/tensorrt/nnvm_to_onnx.cc
 ##
 @@ -157,6 +157,12 @@ std::string ConvertNnvmGraphToOnnx(
   return serialized_onnx_graph;
 }
 
+void ConvertIdentity(NodeProto* node_proto, const NodeAttrs& attrs,
 
 Review comment:
   Any idea if TRT actually optimizes this out?  I've seen this in a few prod 
services :-/




[GitHub] [incubator-mxnet] KellenSunderland commented on a change in pull request #15399: Add unit tests for TensorRT integration and fix some bugs

2019-07-10 Thread GitBox
KellenSunderland commented on a change in pull request #15399: Add unit tests 
for TensorRT integration and fix some bugs
URL: https://github.com/apache/incubator-mxnet/pull/15399#discussion_r302337630
 
 

 ##
 File path: src/operator/subgraph/tensorrt/tensorrt-inl.h
 ##
 @@ -109,13 +111,70 @@ class TensorrtSelector : public SubgraphSelector {
 
   bool isTRTCompatible(const nnvm::Node &n) {
     const std::string op_name = n.op()->name;
+    if (op_name == "FullyConnected") {
+      const auto& param = nnvm::get<FullyConnectedParam>(n.attrs.parsed);
+      return !param.no_bias;
+    }
+
     if (op_name == "Pooling") {
-      return (n.attrs.dict.at("pool_type") == "avg" ||
-              n.attrs.dict.at("pool_type") == "max");
+      const auto& param = nnvm::get<PoolingParam>(n.attrs.parsed);
+      if (param.layout.has_value()) {
+        if (param.layout.value() == mshadow::kNHWC) {
+          LOG(INFO) << "Warning: NHWC layout (node: " << n.attrs.name
+                    << ") is not supported by TensorRT";
+          return false;
+        } else if (param.layout.value() == mshadow::kNDHWC) {
+          LOG(INFO) << "Warning: NDHWC layout (node: " << n.attrs.name
+                    << ") is not supported by TensorRT";
+          return false;
+        }
+      }
+      if (param.pooling_convention != pool_enum::kValid && !param.global_pool)
+        return false;
+      if (param.pool_type == pool_enum::kAvgPooling) {
+        if ((!param.global_pool) &&
+            (!param.count_include_pad.has_value() || param.count_include_pad.value()))
+          return false;
+        return true;
+      } else if (param.pool_type == pool_enum::kMaxPooling) {
+        return true;
+      } else {
+        return false;
+      }
     }
 
-    if (unconditionalTRTops.count(op_name)) {
-      return true;
+    if (op_name == "Convolution") {
+      const auto& param = nnvm::get<ConvolutionParam>(n.attrs.parsed);
+      if (!param.layout.has_value())
+        return true;
+      switch (param.layout.value()) {
+        case mshadow::kNCHW:
+        case mshadow::kNCW:
+        case mshadow::kNCDHW:
+          return true;
+        case mshadow::kNHWC:
+          LOG(INFO) << "Warning: NHWC layout (node: " << n.attrs.name
+                    << ") is not supported by TensorRT";
+          return false;
+        case mshadow::kNDHWC:
+          LOG(INFO) << "Warning: NDHWC layout (node: " << n.attrs.name
+                    << ") is not supported by TensorRT";
+          return false;
+        default:
+          LOG(INFO) << "Warning: Layout (node: " << n.attrs.name
+                    << ") is unknown (so unsupported by TensorRT)";
+          return false;
+      }
+    }
+
+    if (op_name == "Concat") {
+      const auto& param = nnvm::get<ConcatParam>(n.attrs.parsed);
+      return (param.dim != 0);
+    }
+
+    if (op_name == "Dropout") {
 
 Review comment:
   Again, will TensorRT optimize this out?  We don't want it at inference time 
right?




[GitHub] [incubator-mxnet] KellenSunderland commented on a change in pull request #15399: Add unit tests for TensorRT integration and fix some bugs

2019-07-10 Thread GitBox
KellenSunderland commented on a change in pull request #15399: Add unit tests 
for TensorRT integration and fix some bugs
URL: https://github.com/apache/incubator-mxnet/pull/15399#discussion_r302338157
 
 

 ##
 File path: src/operator/subgraph/tensorrt/tensorrt-inl.h
 ##
 @@ -180,6 +253,17 @@ class TensorrtProperty : public SubgraphProperty {
     n->attrs.name = "TensorRT" + std::to_string(subgraph_id);
     n->attrs.op = Op::Get("_TensorRT");
     CHECK(n->attrs.op);
+    // prevent using Gamma value if using fix_gamma on BatchNorm
+    DFSVisit(new_sym.outputs, [&n](const nnvm::NodePtr& node) {
+      if (node->op() == Op::Get("BatchNorm")) {
 Review comment:
   Why not just check for FixGamma = true during nnvm -> Onnx conversion and 
set gamma to 0 if it's true?




[GitHub] [incubator-mxnet] KellenSunderland commented on a change in pull request #15399: Add unit tests for TensorRT integration and fix some bugs

2019-07-10 Thread GitBox
KellenSunderland commented on a change in pull request #15399: Add unit tests 
for TensorRT integration and fix some bugs
URL: https://github.com/apache/incubator-mxnet/pull/15399#discussion_r302338438
 
 

 ##
 File path: tests/python/gpu/test_tensorrt.py
 ##
 @@ -0,0 +1,437 @@
+# Licensed to the Apache Software Foundation (ASF) under one
 
 Review comment:
   Super helpful tests, thanks Clement.  It's ok to merge in a subset of these 
at a time, but I'd try to avoid pushing commented out code / too many todos 
upstream.




[GitHub] [incubator-mxnet] KellenSunderland commented on a change in pull request #15449: cuda/cuDNN lib version checking. Force cuDNN v7 usage.

2019-07-10 Thread GitBox
KellenSunderland commented on a change in pull request #15449: cuda/cuDNN lib 
version checking.  Force cuDNN v7 usage.
URL: https://github.com/apache/incubator-mxnet/pull/15449#discussion_r302339983
 
 

 ##
 File path: src/common/cuda_utils.cc
 ##
 @@ -0,0 +1,108 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * Copyright (c) 2019 by Contributors
+ * \file cuda_utils.cc
+ * \brief CUDA debugging utilities.
+ */
+
+#include 
+#include "cuda_utils.h"
+
+#if MXNET_USE_CUDA == 1
+
+namespace mxnet {
+namespace common {
+namespace cuda {
+
+// The oldest version of cuda used in upstream MXNet CI testing, both for unix and windows.
+// Users that have rebuilt MXNet against older versions will be advised with a warning to upgrade
+// their systems to match the CI level.  Minimally, users should rerun the CI locally.
+#if defined(_MSC_VER)
+#define MXNET_CI_OLDEST_CUDA_VERSION  9020
+#else
+#define MXNET_CI_OLDEST_CUDA_VERSION 1
+#endif
+
+// Dynamic init here will emit a warning if runtime and compile-time cuda lib versions mismatch.
+// Also if the user has recompiled their source to a version no longer tested by upstream CI.
+bool cuda_version_check_performed = []() {
+  // Don't bother with checks if there are no GPUs visible (e.g. with CUDA_VISIBLE_DEVICES="")
+  if (dmlc::GetEnv("MXNET_CUDA_VERSION_CHECKING", true) && Context::GetGPUCount() > 0) {
+    int linkedAgainstCudaVersion = 0;
+    CUDA_CALL(cudaRuntimeGetVersion(&linkedAgainstCudaVersion));
+    if (linkedAgainstCudaVersion != CUDA_VERSION)
+      LOG(WARNING) << "cuda library mismatch: linked-against version " << linkedAgainstCudaVersion
 
 Review comment:
   Just want to make sure I'm understanding this one.  If a user runs with CUDA 
10.2, but the library was linked against 10.1 would this issue a warning?  I 
tend to do that fairly often, is it against best practices?




[GitHub] [incubator-mxnet] tingying2020 opened a new pull request #15507: np_around

2019-07-10 Thread GitBox
tingying2020 opened a new pull request #15507: np_around
URL: https://github.com/apache/incubator-mxnet/pull/15507
 
 
   ## Description ##
   (Brief description on what this PR is about)
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [x] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a new build option with NCCL)
   - [x] To my best knowledge, examples are either not affected by this change, or have been fixed to be compatible with this change
   
   ## Comments ##
   np_around does not support `float16`
   
   @haojin2 
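For reference, the NumPy behavior the new operator is presumably meant to match: `numpy.around` rounds halfway cases to the nearest even value ("banker's rounding") rather than always rounding half up. A small sketch of that semantics (variable names are illustrative):

```python
import numpy as np

# np.around rounds ties to the nearest even neighbor ("banker's rounding"),
# which is the behavior an np_around operator would be expected to reproduce.
halves = np.around([0.5, 1.5, 2.5, 3.5])         # ties go to the even neighbor
two_dp = np.around([1.234, 5.678], decimals=2)   # rounding to 2 decimal places
```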
   




[GitHub] [incubator-mxnet] xidulu commented on a change in pull request #15495: [Numpy] Added operator logaddexp; added support for zero-size tensor in BinaryBroadcastBackwardUseIn

2019-07-10 Thread GitBox
xidulu commented on a change in pull request #15495: [Numpy] Added operator 
logaddexp; added support for zero-size tensor in BinaryBroadcastBackwardUseIn
URL: https://github.com/apache/incubator-mxnet/pull/15495#discussion_r302343715
 
 

 ##
 File path: src/operator/numpy/np_elemwise_broadcast_op.cc
 ##
 @@ -186,6 +212,21 @@ MXNET_OPERATOR_REGISTER_NP_BINARY_SCALAR(_npi_power_scalar)
 .set_attr("FCompute", BinaryScalarOp::Compute)
 .set_attr("FGradient", ElemwiseGradUseIn{"_backward_power_scalar"});
 
+MXNET_OPERATOR_REGISTER_NP_BINARY_SCALAR(_npi_logaddexp_scalar)
+.set_attr("FCompute", BinaryScalarOp::Compute)
+.set_attr("Fgradient", ElemwiseGradUseIn{"_backward_logaddexp_scalar"});
+
+
+MXNET_OPERATOR_REGISTER_BINARY(_backward_logaddexp_scalar)
+.add_argument("scalar", "float", "scalar value")
+.set_attr_parser([](NodeAttrs *attrs) { attrs->parsed = std::stod(attrs->dict["scalar"]); })
+.set_attr("FCompute", BinaryScalarOp::Backward<
+  cpu, mshadow_op::logadd_left>);
+
+
+
+
+
 
 Review comment:
   blank lines removed, thanks for reviewing




[GitHub] [incubator-mxnet] xidulu commented on a change in pull request #15495: [Numpy] Added operator logaddexp; added support for zero-size tensor in BinaryBroadcastBackwardUseIn

2019-07-10 Thread GitBox
xidulu commented on a change in pull request #15495: [Numpy] Added operator 
logaddexp; added support for zero-size tensor in BinaryBroadcastBackwardUseIn
URL: https://github.com/apache/incubator-mxnet/pull/15495#discussion_r302343855
 
 

 ##
 File path: src/operator/numpy/np_elemwise_broadcast_op.cc
 ##
 @@ -186,6 +212,21 @@ MXNET_OPERATOR_REGISTER_NP_BINARY_SCALAR(_npi_power_scalar)
 .set_attr("FCompute", BinaryScalarOp::Compute)
 .set_attr("FGradient", ElemwiseGradUseIn{"_backward_power_scalar"});
 
+MXNET_OPERATOR_REGISTER_NP_BINARY_SCALAR(_npi_logaddexp_scalar)
+.set_attr("FCompute", BinaryScalarOp::Compute)
+.set_attr("Fgradient", ElemwiseGradUseIn{"_backward_logaddexp_scalar"});
+
+
+MXNET_OPERATOR_REGISTER_BINARY(_backward_logaddexp_scalar)
+.add_argument("scalar", "float", "scalar value")
+.set_attr_parser([](NodeAttrs *attrs) { attrs->parsed = std::stod(attrs->dict["scalar"]); })
+.set_attr("FCompute", BinaryScalarOp::Backward<
+  cpu, mshadow_op::logadd_left>);
+
+
+
+
+
 
 Review comment:
   > too many blank lines here
   
   blank lines removed, thanks for reviewing




[GitHub] [incubator-mxnet] iblis17 commented on a change in pull request #15454: Julia docs

2019-07-10 Thread GitBox
iblis17 commented on a change in pull request #15454: Julia docs
URL: https://github.com/apache/incubator-mxnet/pull/15454#discussion_r302345170
 
 

 ##
 File path: docs/install/ubuntu_setup.md
 ##
 @@ -310,25 +310,93 @@ Refer to the [Clojure setup guide](https://github.com/apache/incubator-mxnet/tre
 
 ### Install the MXNet Package for Julia
 
-The MXNet package for Julia is hosted in a separate repository, MXNet.jl, which is available on [GitHub](https://github.com/dmlc/MXNet.jl). To use Julia binding it with an existing libmxnet installation, set the ```MXNET_HOME``` environment variable by running the following command:
+ Install Julia
+The package available through `apt-get` is old and not compatible with the latest version of MXNet.
+Fetch the latest version (1.0.3 at the time of this writing).
 
 ```bash
-export MXNET_HOME=//libmxnet
+wget -qO julia-10.tar.gz https://julialang-s3.julialang.org/bin/linux/x64/1.0/julia-1.0.3-linux-x86_64.tar.gz
 ```
 
-The path to the existing libmxnet installation should be the root directory of libmxnet. In other words, you should be able to find the ```libmxnet.so``` file at ```$MXNET_HOME/lib```. For example, if the root directory of libmxnet is ```~```, you would run the following command:
+Place the extracted files somewhere like a julia folder in your home dir.
 
 ```bash
-export MXNET_HOME=/~/libmxnet
+mkdir ~/julia
+mv julia-10.tar.gz ~/julia
+cd ~/julia
+tar xvf julia-10.tar.gz
 ```
 
-You might want to add this command to your ```~/.bashrc``` file. If you do, you can install the Julia package in the Julia console using the following command:
+Test Julia.
+```bash
+cd julia-1.0.3/bin
+julia -e 'using InteractiveUtils; versioninfo()'
+```
+
+If you're still getting the old version, remove it.
+```bash
+sudo apt remove julia
+```
+
+Update your PATH to have Julia's new location. Add this to your `.zshrc`, `.bashrc`, `.profile` or `.bash_profile`.
+```bash
+export PATH=~/julia/julia-1.0.3/bin:$PATH
+```
+
+Validate your PATH.
+```bash
+echo $PATH
+```
+
+Validate Julia works and is the expected version.
+```bash
+julia -e 'using InteractiveUtils; versioninfo()'
+```
+
+ Setup Your MXNet-Julia Environment
+
+**For each of the following environment variables, add the commands to your `.zshrc`, `.bashrc`, `.profile` or `.bash_profile` to make them persist.**
+
+Create a `julia-depot` folder and environment variable.
+```bash
+mkdir julia-depot
+export JULIA_DEPOT_PATH=$HOME/julia/julia-depot
+```
+
+To use the Julia binding with an existing `libmxnet` installation, set the `MXNET_HOME` environment variable to the MXNet source root. For example:
+```bash
+export MXNET_HOME=$HOME/incubator-mxnet
+```
 
-```julia
-Pkg.add("MXNet")
+Now set the `LD_LIBRARY_PATH` environment variable to where `libmxnet.so` is found. If you can't find it, you might have skipped the building MXNet step. Go back and [build MXNet](#build-the-shared-library) first. For example:
+```bash
+export LD_LIBRARY_PATH=$HOME/incubator-mxnet/lib:$LD_LIBRARY_PATH
+```
+
+Verify the location of `libjemalloc.so` and set the `LD_PRELOAD` environment variable.
+```bash
+export LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libjemalloc.so:$LD_PRELOAD
+```
+
+With all of these updates, here's an example of what you might want to have in your `.zshrc`, `.bashrc`, `.profile` or `.bash_profile`.
+
+```
+export PATH=$HOME/bin:$HOME/.local/bin:$HOME/julia/julia-1.0.3/bin:$PATH
+export JULIA_DEPOT_PATH=$HOME/julia/julia-depot
+export MXNET_HOME=$HOME/incubator-mxnet
+export LD_LIBRARY_PATH=$HOME/incubator-mxnet/lib:$LD_LIBRARY_PATH
+export LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libjemalloc.so:$LD_PRELOAD
+```
+
+Install MXNet with Julia:
+
+```bash
+julia --color=yes --project=./ -e \
+ 'using Pkg; \
+  Pkg.develop(PackageSpec(name="MXNet", path = joinpath(ENV["MXNET_HOME"], "julia")))'
 
 Review comment:
   I understand `Pkg.add` does not work at this moment, since v1.5.0 hasn't been released.
   




[GitHub] [incubator-mxnet] francis0407 commented on a change in pull request #15495: [Numpy] Added operator logaddexp; added support for zero-size tensor in BinaryBroadcastBackwardUseIn

2019-07-10 Thread GitBox
francis0407 commented on a change in pull request #15495: [Numpy] Added 
operator logaddexp; added support for zero-size tensor in 
BinaryBroadcastBackwardUseIn
URL: https://github.com/apache/incubator-mxnet/pull/15495#discussion_r302345771
 
 

 ##
 File path: python/mxnet/symbol/numpy/_symbol.py
 ##
 @@ -1070,6 +1070,61 @@ def mod(x1, x2, out=None):
 def power(x1, x2, out=None):
 return _ufunc_helper(x1, x2, _npi.power, _np.power, _npi.power_scalar, 
_npi.rpower_scalar, out)
 
+@set_module('mxnet.symbol.numpy')
+def logaddexp(x1, x2, out=None):
+"""Logarithm of the sum of exponentiations of the inputs.
+logaddexp(x1, x2, out=None)
+
+Calculates ``log(exp(x1) + exp(x2))``. This function is useful in
+statistics where the calculated probabilities of events may be so small
+as to exceed the range of normal floating point numbers.  In such cases
+the logarithm of the calculated probability is stored. This function
+allows adding probabilities stored in such a fashion.
+
+Parameters
+----------
+x1, x2 : _Symbol or scalar
+Input values.
+out : ndarray, None, or tuple of ndarray and None, optional
+A location into which the result is stored. If provided, it must have
+a shape and dtype as the expected output. If not provided or `None`,
+a freshly-allocated array is returned.
+
+Returns
+-------
+result : _Symbol
+Logarithm of ``exp(x1) + exp(x2)``.
+This is a scalar if both `x1` and `x2` are scalars.
+
+See Also
+--------
+logaddexp2: Logarithm of the sum of exponentiations of inputs in base 2.
+
+Notes
+-----
+This function differs from the original `numpy.logaddexp2` in
+the following aspects:
+
+- Input type does not support Python native iterables (list, tuple, ...). Only ndarray is supported.
+- ``out`` param: cannot perform auto broadcasting. ``out`` ndarray's shape must be the same as the expected output.
+- ``out`` param: cannot perform auto type cast. ``out`` ndarray's dtype must be the same as the expected output.
+- ``out`` param does not support scalar input case.
+
+Examples
+--------
+>>> prob1 = np.log(1e-50)
+>>> prob2 = np.log(2.5e-50)
+>>> prob12 = np.logaddexp(prob1, prob2)
+>>> prob12
+-113.87649168120691
+>>> np.exp(prob12)
+3.5057e-50
+"""
+return _ufunc_helper(x1, x2, _npi.logaddexp, _np.logaddexp, _npi.logaddexp_scalar,
+ _npi.logaddexp_scalar, out)
+
 
 Review comment:
   Only two blank lines are needed here.
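The identity the docstring above describes can be sketched in a few lines of pure Python. This is a minimal illustration of the math, not MXNet's implementation; the stable rewrite `max + log1p(exp(-|x1 - x2|))` is the standard trick (and what NumPy effectively does), chosen so that probabilities far below float range stay representable in log space.

```python
import math

# Minimal sketch of logaddexp(x1, x2) = log(exp(x1) + exp(x2)),
# computed stably by factoring out the larger argument.
def logaddexp(x1, x2):
    a, b = max(x1, x2), min(x1, x2)
    return a + math.log1p(math.exp(b - a))  # exp(b - a) <= 1, so no overflow

# The docstring's example: probabilities around 1e-50 stay representable.
prob1 = math.log(1e-50)
prob2 = math.log(2.5e-50)
prob12 = logaddexp(prob1, prob2)  # == log(3.5e-50) ~ -113.8765
```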




[GitHub] [incubator-mxnet] francis0407 commented on a change in pull request #15495: [Numpy] Added operator logaddexp; added support for zero-size tensor in BinaryBroadcastBackwardUseIn

2019-07-10 Thread GitBox
francis0407 commented on a change in pull request #15495: [Numpy] Added 
operator logaddexp; added support for zero-size tensor in 
BinaryBroadcastBackwardUseIn
URL: https://github.com/apache/incubator-mxnet/pull/15495#discussion_r302344674
 
 

 ##
 File path: src/operator/numpy/np_elemwise_broadcast_op.cc
 ##
 @@ -150,6 +151,29 @@ Example::
 .set_attr("FCompute", BinaryBroadcastCompute)
 .set_attr("FGradient", ElemwiseGradUseIn{"_backward_broadcast_power"});
 
+MXNET_OPERATOR_REGISTER_BINARY_BROADCAST(_npi_logaddexp)
+.describe(
 
 Review comment:
   `.describe("logaddexp" ADD_FILELINE)` would be better




[GitHub] [incubator-mxnet] francis0407 commented on a change in pull request #15495: [Numpy] Added operator logaddexp; added support for zero-size tensor in BinaryBroadcastBackwardUseIn

2019-07-10 Thread GitBox
francis0407 commented on a change in pull request #15495: [Numpy] Added 
operator logaddexp; added support for zero-size tensor in 
BinaryBroadcastBackwardUseIn
URL: https://github.com/apache/incubator-mxnet/pull/15495#discussion_r302345396
 
 

 ##
 File path: tests/python/unittest/test_numpy_op.py
 ##
 @@ -1284,6 +1284,88 @@ def g(data, axis1, axis2, offset):
 continue
 assert False
 
+@with_seed()
+@npx.use_np_shape
+def test_np_logaddexp():
+@npx.use_np_shape
+class TestLogaddexp(HybridBlock):
+def __init__(self):
+super(TestLogaddexp, self).__init__()
+
+def hybrid_forward(self, F, x1, x2):
+return F.np.logaddexp(x1, x2)
+
+shapes = [
+((3, 1), (3, 1)),
+((3, 1, 2), (3, 1, 2)),
+((1, ),(1, )),
+((3, 0), (3, 0)),  # zero-size shape
+((0, 1), (0, 1)),  # zero-size shape
+((2, 0, 2), (2, 0, 2)),  # zero-size shape
+((1, ), (3, )),  # broadcast
+((2, 3), (2, 1)),  # broadcast
+((1, 3), (2, 3)),  # broadcast
+((1,3), (2, 0, 3)),  # broadcast to zero-dim shape
+((1, 0, 1), (3, 0, 1)), # broadcast of zero-dim shape
+((), ()),  # zero-dim shape
+]
+eps = 1e-3
+# Legal shape test.
+for shape_a, shape_b in shapes:
+for hybridize in [True, False]:
+test_logaddexp = TestLogaddexp()
+if hybridize:
+test_logaddexp.hybridize()
+lhs = rand_ndarray(shape_a).as_np_ndarray()
+rhs = rand_ndarray(shape_b).as_np_ndarray()
+lhs.attach_grad()
+rhs.attach_grad()
+np_out = _np.logaddexp(lhs.asnumpy(), rhs.asnumpy())
+np_backward_lhs = _np.exp(lhs.asnumpy()) / (_np.exp(lhs.asnumpy()) + _np.exp(rhs.asnumpy()))
+np_backward_rhs = _np.exp(rhs.asnumpy()) / (_np.exp(lhs.asnumpy()) + _np.exp(rhs.asnumpy()))
+with mx.autograd.record():
+mx_out = test_logaddexp(lhs, rhs)
+assert mx_out.shape == np_out.shape
+assert_almost_equal(mx_out.asnumpy(), np_out, rtol=1e-3, atol=1e-5)
+mx_out.backward()
+# For broadcast backward case,
+# reduce sum is applied on numpy result.
+for n_dim in range(len(shape_a)):
+if (shape_a[n_dim] != shape_b[n_dim]):
+if (shape_a[n_dim] > shape_b[n_dim]):
+np_backward_rhs = np_backward_rhs.sum(axis=n_dim, keepdims=True)
+else:
+np_backward_lhs = np_backward_lhs.sum(axis=n_dim, keepdims=True)
+assert_almost_equal(lhs.grad.asnumpy(), np_backward_lhs, rtol=1e-3, atol=1e-5)
+assert_almost_equal(rhs.grad.asnumpy(), np_backward_rhs, rtol=1e-3, atol=1e-5)
+# Test imperative once again
+mx_out = np.logaddexp(lhs, rhs)
+np_out = _np.logaddexp(lhs.asnumpy(), rhs.asnumpy())
+assert_almost_equal(mx_out.asnumpy(), np_out, rtol=1e-3, atol=1e-5)
+
+# Range case.
+x = [100, -100, 1000200, -1000200]
+y = [1000200, -1000200, 100, -100]
+z = [1000200, -100, 1000200, -100]
+for dt in ['float64']:
+logxf = np.array(x, dtype=dt)
+logyf = np.array(y, dtype=dt)
+logzf = np.array(z, dtype=dt)
+assert_almost_equal(np.logaddexp(logxf, logyf).asnumpy(), logzf.asnumpy())
+
+
 
 Review comment:
   One blank line is enough here.
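The analytic gradients hard-coded in the test above, `exp(x1) / (exp(x1) + exp(x2))` for each input, can be sanity-checked independently of MXNet with a central finite difference. A small sketch under stated assumptions (helper names `f` and `grad_x1` are illustrative, not part of the PR):

```python
import math

# Verify d/dx1 logaddexp(x1, x2) = exp(x1) / (exp(x1) + exp(x2))
# against a central finite difference of the forward function.
def f(x1, x2):
    a, b = max(x1, x2), min(x1, x2)
    return a + math.log1p(math.exp(b - a))  # stable logaddexp

def grad_x1(x1, x2):
    return math.exp(x1) / (math.exp(x1) + math.exp(x2))

x1, x2, eps = 0.3, -1.2, 1e-6
numeric = (f(x1 + eps, x2) - f(x1 - eps, x2)) / (2 * eps)
assert abs(numeric - grad_x1(x1, x2)) < 1e-6
```

Note the two partials sum to 1 by construction, which is a handy extra invariant to assert in tests.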




[GitHub] [incubator-mxnet] iblis17 commented on a change in pull request #15454: Julia docs

2019-07-10 Thread GitBox
iblis17 commented on a change in pull request #15454: Julia docs
URL: https://github.com/apache/incubator-mxnet/pull/15454#discussion_r302346283
 
 

 ##
 File path: docs/install/ubuntu_setup.md
 ##
 @@ -310,25 +310,93 @@ Refer to the [Clojure setup guide](https://github.com/apache/incubator-mxnet/tre
 
 ### Install the MXNet Package for Julia
 
-The MXNet package for Julia is hosted in a separate repository, MXNet.jl, which is available on [GitHub](https://github.com/dmlc/MXNet.jl). To use Julia binding it with an existing libmxnet installation, set the ```MXNET_HOME``` environment variable by running the following command:
+ Install Julia
+The package available through `apt-get` is old and not compatible with the latest version of MXNet.
+Fetch the latest version (1.0.3 at the time of this writing).
 
 ```bash
-export MXNET_HOME=//libmxnet
+wget -qO julia-10.tar.gz https://julialang-s3.julialang.org/bin/linux/x64/1.0/julia-1.0.3-linux-x86_64.tar.gz
 ```
 
-The path to the existing libmxnet installation should be the root directory of libmxnet. In other words, you should be able to find the ```libmxnet.so``` file at ```$MXNET_HOME/lib```. For example, if the root directory of libmxnet is ```~```, you would run the following command:
+Place the extracted files somewhere like a julia folder in your home dir.
 
 ```bash
-export MXNET_HOME=/~/libmxnet
+mkdir ~/julia
+mv julia-10.tar.gz ~/julia
+cd ~/julia
+tar xvf julia-10.tar.gz
 ```
 
-You might want to add this command to your ```~/.bashrc``` file. If you do, you can install the Julia package in the Julia console using the following command:
+Test Julia.
+```bash
+cd julia-1.0.3/bin
+julia -e 'using InteractiveUtils; versioninfo()'
+```
+
+If you're still getting the old version, remove it.
+```bash
+sudo apt remove julia
+```
+
+Update your PATH to have Julia's new location. Add this to your `.zshrc`, `.bashrc`, `.profile` or `.bash_profile`.
+```bash
+export PATH=~/julia/julia-1.0.3/bin:$PATH
+```
+
+Validate your PATH.
+```bash
+echo $PATH
+```
+
+Validate Julia works and is the expected version.
+```bash
+julia -e 'using InteractiveUtils; versioninfo()'
+```
+
+ Setup Your MXNet-Julia Environment
+
+**For each of the following environment variables, add the commands to your `.zshrc`, `.bashrc`, `.profile` or `.bash_profile` to make them persist.**
+
+Create a `julia-depot` folder and environment variable.
+```bash
+mkdir julia-depot
+export JULIA_DEPOT_PATH=$HOME/julia/julia-depot
+```
+
+To use the Julia binding with an existing `libmxnet` installation, set the `MXNET_HOME` environment variable to the MXNet source root. For example:
+```bash
+export MXNET_HOME=$HOME/incubator-mxnet
+```
 
-```julia
-Pkg.add("MXNet")
+Now set the `LD_LIBRARY_PATH` environment variable to where `libmxnet.so` is found. If you can't find it, you might have skipped the building MXNet step. Go back and [build MXNet](#build-the-shared-library) first. For example:
+```bash
+export LD_LIBRARY_PATH=$HOME/incubator-mxnet/lib:$LD_LIBRARY_PATH
+```
+
+Verify the location of `libjemalloc.so` and set the `LD_PRELOAD` environment variable.
+```bash
+export LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libjemalloc.so:$LD_PRELOAD
+```
+
+With all of these updates, here's an example of what you might want to have in your `.zshrc`, `.bashrc`, `.profile` or `.bash_profile`.
+
+```
+export PATH=$HOME/bin:$HOME/.local/bin:$HOME/julia/julia-1.0.3/bin:$PATH
+export JULIA_DEPOT_PATH=$HOME/julia/julia-depot
+export MXNET_HOME=$HOME/incubator-mxnet
+export LD_LIBRARY_PATH=$HOME/incubator-mxnet/lib:$LD_LIBRARY_PATH
+export LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libjemalloc.so:$LD_PRELOAD
+```
+
+Install MXNet with Julia:
+
+```bash
+julia --color=yes --project=./ -e \
+ 'using Pkg; \
+  Pkg.develop(PackageSpec(name="MXNet", path = joinpath(ENV["MXNET_HOME"], "julia")))'
 
 Review comment:
   Can we still change the web page via a PR after a release has rolled out, or does the web page only update once per release?
   If yes, I'm okay with waiting for the package registry setup.




[GitHub] [incubator-mxnet] pengzhao-intel commented on a change in pull request #15497: Independent gradients requests check with respect to weights and bias of convolution

2019-07-10 Thread GitBox
pengzhao-intel commented on a change in pull request #15497: Independent 
gradients requests check with respect to weights and bias of convolution
URL: https://github.com/apache/incubator-mxnet/pull/15497#discussion_r302346944
 
 

 ##
 File path: src/operator/nn/mkldnn/mkldnn_convolution.cc
 ##
 @@ -662,21 +662,21 @@ void MKLDNNConvolutionBackward(const nnvm::NodeAttrs& attrs, const OpContext &ct
 in_grad[conv::kWeight],
 convBwdWeight.bwdWeights_pd.diff_weights_primitive_desc(),
 req[conv::kWeight]);
-mkldnn_output_t in_grad_bias;
-if (param.no_bias) {
-  convBwdWeight.SetWeightNewMem(*data_mem, *out_grad_mem,
-  *in_grad_weight.second);
-  MKLDNNStream::Get()->RegisterPrim(convBwdWeight.GetBwdWeights());
-} else {
-  in_grad_bias = CreateMKLDNNMem(
+
+if (!param.no_bias && req[conv::kBias]) {
+  auto in_grad_bias = CreateMKLDNNMem(
   in_grad[conv::kBias],
   convBwdWeight.bwdWeights_pd.diff_bias_primitive_desc(), req[conv::kBias]);
   convBwdWeight.SetWeightNewMem(*data_mem, *out_grad_mem,
-  *in_grad_weight.second, *in_grad_bias.second);
+  *in_grad_weight.second, *in_grad_bias.second);
   MKLDNNStream::Get()->RegisterPrim(convBwdWeight.GetBwdWeights());
   CommitOutput(in_grad[conv::kBias], in_grad_bias);
+} else if (req[conv::kWeight]) {
+  convBwdWeight.SetWeightNewMem(*data_mem, *out_grad_mem,
+  *in_grad_weight.second);
+  MKLDNNStream::Get()->RegisterPrim(convBwdWeight.GetBwdWeights());
 }
-CommitOutput(in_grad[conv::kWeight], in_grad_weight);
+if (req[conv::kWeight]) CommitOutput(in_grad[conv::kWeight], in_grad_weight);
 
 Review comment:
   What's the behavior of `req[conv::kBias]`?
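The dispatch in the diff above can be sketched in a few lines. This is a hypothetical Python model of the control flow, not MXNet's code: `req[]` holds per-input gradient requests (`'write'`/`'add'`/`'null'` in MXNet terms, with the null request being falsy in the C++ enum), and the rewrite picks the backward-weights variant from both `req[kWeight]` and `req[kBias]`.

```python
# Hypothetical model of the rewritten control flow (names are illustrative):
# a 'null' request for the bias gradient now skips the bias path entirely,
# instead of always computing it whenever the layer has a bias.
def backward_weights_variant(no_bias, req_weight, req_bias):
    if not no_bias and req_bias != 'null':
        return 'weights_and_bias'   # bias gradient requested: compute both
    elif req_weight != 'null':
        return 'weights_only'       # bias absent, or its gradient not requested
    return 'skip'                   # neither gradient requested

# req[kBias] == 'null' falls through to the weights-only primitive:
assert backward_weights_variant(False, 'write', 'null') == 'weights_only'
```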




[GitHub] [incubator-mxnet] pengzhao-intel commented on issue #15448: [MKLDNN]Enhance Quantization APIs and Tutorial

2019-07-10 Thread GitBox
pengzhao-intel commented on issue #15448: [MKLDNN]Enhance Quantization APIs and 
Tutorial
URL: https://github.com/apache/incubator-mxnet/pull/15448#issuecomment-510310011
 
 
   @ThomasDelteil Would you mind taking another look?
   




[GitHub] [incubator-mxnet] pengzhao-intel commented on issue #15130: Add NaiveEngine tests in CI

2019-07-10 Thread GitBox
pengzhao-intel commented on issue #15130: Add NaiveEngine tests in CI
URL: https://github.com/apache/incubator-mxnet/pull/15130#issuecomment-510311947
 
 
   Closing this PR. We're in no hurry to enable the naive engine test.
   Let's wait and see if someone requests this in the future.




[GitHub] [incubator-mxnet] ckt624 commented on a change in pull request #15349: Numpy Tensordot Operator

2019-07-10 Thread GitBox
ckt624 commented on a change in pull request #15349: Numpy Tensordot Operator 
URL: https://github.com/apache/incubator-mxnet/pull/15349#discussion_r302348771
 
 

 ##
 File path: python/mxnet/ndarray/numpy/_op.py
 ##
 @@ -33,7 +33,88 @@
'clip', 'split', 'swapaxes', 'expand_dims', 'tile', 'linspace',
'sin', 'cos', 'sinh', 'cosh', 'log10', 'sqrt', 'abs', 'exp', 
'arctan', 'sign', 'log',
'degrees', 'log2', 'rint', 'radians', 'mean', 'reciprocal', 
'square', 'arcsin',
-   'argsort']
+   'argsort', 'tensordot']
+
 
 Review comment:
   Fixed. Thx.




[GitHub] [incubator-mxnet] pengzhao-intel closed pull request #15130: Add NaiveEngine tests in CI

2019-07-10 Thread GitBox
pengzhao-intel closed pull request #15130: Add NaiveEngine tests in CI
URL: https://github.com/apache/incubator-mxnet/pull/15130
 
 
   




[GitHub] [incubator-mxnet] ckt624 commented on a change in pull request #15349: Numpy Tensordot Operator

2019-07-10 Thread GitBox
ckt624 commented on a change in pull request #15349: Numpy Tensordot Operator 
URL: https://github.com/apache/incubator-mxnet/pull/15349#discussion_r302348845
 
 

 ##
 File path: python/mxnet/numpy/multiarray.py
 ##
 @@ -45,10 +45,90 @@
 
 __all__ = ['ndarray', 'empty', 'array', 'zeros', 'ones', 'maximum', 'minimum', 
'stack', 'arange',
'argmax', 'add', 'subtract', 'multiply', 'divide', 'mod', 'power', 
'concatenate',
-   'clip', 'split', 'swapaxes', 'expand_dims', 'tile', 'linspace', 'sin', 'cos',
'sin', 'cos', 'sinh', 'cosh', 'log10', 'sqrt', 'abs', 'exp', 
'arctan', 'sign', 'log',
'degrees', 'log2', 'rint', 'radians', 'mean', 'reciprocal', 
'square', 'arcsin',
-   'argsort']
+   'argsort', 'tensordot']
+
 
 Review comment:
   Fixed. Thx.




[GitHub] [incubator-mxnet] anirudh2290 merged pull request #15455: Improve docs for AMP

2019-07-10 Thread GitBox
anirudh2290 merged pull request #15455: Improve docs for AMP
URL: https://github.com/apache/incubator-mxnet/pull/15455
 
 
   




[GitHub] [incubator-mxnet] reminisce commented on issue #15349: Numpy Tensordot Operator

2019-07-10 Thread GitBox
reminisce commented on issue #15349: Numpy Tensordot Operator 
URL: https://github.com/apache/incubator-mxnet/pull/15349#issuecomment-510312746
 
 
   Please fix the CI failures.




[GitHub] [incubator-mxnet] ckt624 commented on a change in pull request #15349: Numpy Tensordot Operator

2019-07-10 Thread GitBox
ckt624 commented on a change in pull request #15349: Numpy Tensordot Operator 
URL: https://github.com/apache/incubator-mxnet/pull/15349#discussion_r302349667
 
 

 ##
 File path: tests/python/unittest/test_numpy_op.py
 ##
 @@ -26,7 +26,151 @@
 from mxnet.test_utils import check_numeric_gradient
 from common import assertRaises, with_seed
 import random
+import collections
 
+@with_seed()
+@npx.use_np_shape
+def test_np_tensordot():
+class TestTensordot(HybridBlock):
+def __init__(self, axes):
+super(TestTensordot, self).__init__()
+self._axes = axes
+
+def hybrid_forward(self, F, a, b):
+return F.np.tensordot(a, b, self._axes)
+
+def tensordot_backward(a, b, axes = 2):
+if (a.ndim < 1) or (b.ndim < 1):
+raise ValueError('An input is zero-dim')
+
+if isinstance(axes, collections.abc.Sequence):
+if len(axes) != 2:
+raise ValueError('Axes must consist of two arrays.')
+a_axes_summed, b_axes_summed = axes
+if _np.isscalar(a_axes_summed):
+a_axes_summed = a_axes_summed,
+if _np.isscalar(b_axes_summed):
+b_axes_summed = b_axes_summed,
+else:
+a_axes_summed = [i + a.ndim - axes for i in range(axes)]
+b_axes_summed = [i for i in range(axes)]
+
+if len(a_axes_summed) != len(b_axes_summed):
+raise ValueError('Axes length mismatch') 
+
+a_axes_remained = []
+for i in range(a.ndim):
+if not (i in a_axes_summed):
+a_axes_remained.append(i)
+a_axes = a_axes_remained[:] + a_axes_summed[:]
+
+b_axes_remained = []
+for i in range(b.ndim):
+if not (i in b_axes_summed):
+b_axes_remained.append(i)
+b_axes = b_axes_summed[:] + b_axes_remained[:]
+
+ad1 = _np.prod([a.shape[i] for i in a_axes_remained]) if len(a_axes_remained) > 0 else 1
+ad2 = _np.prod([a.shape[i] for i in a_axes_summed]) if len(a_axes_summed) > 0 else 1
+bd1 = _np.prod([b.shape[i] for i in b_axes_summed]) if len(b_axes_summed) > 0 else 1
+bd2 = _np.prod([b.shape[i] for i in b_axes_remained]) if len(b_axes_remained) > 0 else 1
+
+out_grad = _np.ones((ad1, bd2))
+
+new_a = _np.transpose(a, a_axes)
+new_a_shape = new_a.shape[:]
+new_a = new_a.reshape((ad1, ad2)) 
+new_b = _np.transpose(b, b_axes) 
+new_b_shape = new_b.shape[:]
+new_b = new_b.reshape((bd1, bd2))
+
+reverse_a_axes = [0 for i in a_axes]
+for i in range(len(a_axes)):
+reverse_a_axes[a_axes[i]] = i
+
+reverse_b_axes = [0 for i in b_axes]
+for i in range(len(b_axes)):
+reverse_b_axes[b_axes[i]] = i
+
+grad_b = _np.dot(new_a.T, out_grad).reshape(new_b_shape)
+grad_b = _np.transpose(grad_b, reverse_b_axes)
+grad_a = _np.dot(out_grad, new_b.T).reshape(new_a_shape)
+grad_a = _np.transpose(grad_a, reverse_a_axes)
+
+return [grad_a, grad_b]
+
+# test non zero size input
+tensor_shapes = [ 
+((3, 5), (5, 4), 1),  # (a_shape, b_shape, axes)
+((3,), (3,), 1),   
+((3, 4, 5, 6, 7), (5, 6, 7, 1, 2), 3),
+((3, 5, 4, 6, 7), (7, 6, 5, 1, 2), [[1, 3, 4], [2, 1, 0]]),
+((2, 2), (2, 2), 2),
+((3, 5, 4), (5, ), [[1], [0]]),  
+((2,), (2, 3), 1),
+((3,), (3,), 0),
+((2,), (2, 3), 0),
+((3, 5, 4), (5, ), 0)
+]
+
+for hybridize in [True, False]:
+for a_shape, b_shape, axes in tensor_shapes:
+for dtype in [_np.float32, _np.float64]:
+test_tensordot = TestTensordot(axes)
+if hybridize:
+test_tensordot.hybridize()
+a = rand_ndarray(shape = a_shape, dtype = 
dtype).as_np_ndarray() 
+b = rand_ndarray(shape = b_shape, dtype = 
dtype).as_np_ndarray() 
+a.attach_grad()
+b.attach_grad()
+
+np_out = _np.tensordot(a.asnumpy(), b.asnumpy(), axes)
+with mx.autograd.record():
+mx_out = test_tensordot(a, b)   
+assert mx_out.shape == np_out.shape
+assert_almost_equal(mx_out.asnumpy(), np_out, rtol = 1e-3, 
atol = 1e-5)
+mx_out.backward()
+np_backward = tensordot_backward(a.asnumpy(), b.asnumpy(), 
axes)
+assert_almost_equal(a.grad.asnumpy(), np_backward[0], rtol = 
1e-3, atol=1e-5)
+assert_almost_equal(b.grad.asnumpy(), np_backward[1], rtol = 
1e-3, atol=1e-5)
+
+# Test imperative once again
+mx_out = np.tensordot(a, b, axes)
+np_out = _np.tensordot(a.asnumpy(), b.asnumpy(), axes)
+assert_almo
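For readers skimming the test above, the two `axes` conventions it exercises can be checked directly against NumPy; this standalone sketch (illustrative shapes only, independent of MXNet) shows that the integer form is shorthand for an explicit axis pairing:

```python
import numpy as np

a = np.random.rand(3, 4, 5)
b = np.random.rand(4, 5, 6)

# Integer axes: sum over the last 2 axes of a and the first 2 axes of b.
out_int = np.tensordot(a, b, axes=2)
assert out_int.shape == (3, 6)

# Sequence axes: pair a's axes [1, 2] with b's axes [0, 1] explicitly.
out_seq = np.tensordot(a, b, axes=[[1, 2], [0, 1]])
assert np.allclose(out_int, out_seq)
```

This equivalence is why `tensordot_backward` above normalizes the integer form into explicit summed-axis lists before computing gradients.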

[GitHub] [incubator-mxnet] ckt624 commented on a change in pull request #15349: Numpy Tensordot Operator

2019-07-10 Thread GitBox
ckt624 commented on a change in pull request #15349: Numpy Tensordot Operator 
URL: https://github.com/apache/incubator-mxnet/pull/15349#discussion_r302349487
 
 

 ##
 File path: src/operator/numpy/np_tensordot_op-inl.h
 ##
 @@ -0,0 +1,399 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file np_tensordot_op-inl.h
+ * \brief CPU Implementation of numpy-compatible tensordot
+ */
+#ifndef MXNET_OPERATOR_NUMPY_NP_TENSORDOT_OP_INL_H_
+#define MXNET_OPERATOR_NUMPY_NP_TENSORDOT_OP_INL_H_
+
+#include 
+#include "np_matrix_op-inl.h"
+
+namespace mxnet {
+namespace op {
+
+using namespace mxnet;
+using namespace mshadow;
+
+struct TensordotParam : public dmlc::Parameter<TensordotParam> {
+  mxnet::Tuple<int> a_axes_summed, b_axes_summed;
+  DMLC_DECLARE_PARAMETER(TensordotParam) {
+    DMLC_DECLARE_FIELD(a_axes_summed);
+    DMLC_DECLARE_FIELD(b_axes_summed);
+  }
+};
+
+/**
+ * Gets matrix dimensions of a and b after transpose and reshape.
+ */
+inline void GetMatrixDimensions(
+    int* ad1,
+    int* ad2,
+    int* bd1,
+    int* bd2,
 
 Review comment:
  We need to change these values in the function. Passing them by non-const 
reference can't pass the sanity (lint) check, so they are passed by pointer.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] ckt624 commented on a change in pull request #15349: Numpy Tensordot Operator

2019-07-10 Thread GitBox
ckt624 commented on a change in pull request #15349: Numpy Tensordot Operator 
URL: https://github.com/apache/incubator-mxnet/pull/15349#discussion_r302350329
 
 

 ##
 File path: src/operator/numpy/np_tensordot_int_axes_op-inl.h
 ##
 @@ -0,0 +1,213 @@
+/*
 
 Review comment:
   Fixed. Thx.




[GitHub] [incubator-mxnet] pengzhao-intel commented on issue #15491: [Doc] Add MKL install method apt/yum into tutorial

2019-07-10 Thread GitBox
pengzhao-intel commented on issue #15491: [Doc] Add MKL install method apt/yum 
into tutorial
URL: https://github.com/apache/incubator-mxnet/pull/15491#issuecomment-510314614
 
 
   Thanks, merging now :)




[GitHub] [incubator-mxnet] pengzhao-intel merged pull request #15491: [Doc] Add MKL install method apt/yum into tutorial

2019-07-10 Thread GitBox
pengzhao-intel merged pull request #15491: [Doc] Add MKL install method apt/yum 
into tutorial
URL: https://github.com/apache/incubator-mxnet/pull/15491
 
 
   




[GitHub] [incubator-mxnet] ckt624 commented on a change in pull request #15349: Numpy Tensordot Operator

2019-07-10 Thread GitBox
ckt624 commented on a change in pull request #15349: Numpy Tensordot Operator 
URL: https://github.com/apache/incubator-mxnet/pull/15349#discussion_r302351664
 
 

 ##
 File path: src/operator/numpy/np_tensordot_int_axes_op-inl.h
 ##
 @@ -0,0 +1,213 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file np_tensordot_int_axes_op-inl.h
+ * \brief Implementation of numpy-compatible tensordot_int_axes
+ */
+#ifndef MXNET_OPERATOR_NUMPY_NP_TENSORDOT_INT_AXES_OP_INL_H_
+#define MXNET_OPERATOR_NUMPY_NP_TENSORDOT_INT_AXES_OP_INL_H_
+
+#include 
+#include "np_tensordot_op-inl.h"
+
+namespace mxnet {
+namespace op {
+
+using namespace mxnet;
+using namespace mshadow;
+
+struct TensordotIntAxesParam : public dmlc::Parameter<TensordotIntAxesParam> {
+  int axes;
+  DMLC_DECLARE_PARAMETER(TensordotIntAxesParam) {
+    DMLC_DECLARE_FIELD(axes);
+  }
+};
+
+/**
+ * gets summed axes of a and b from parameter axes.
+ */
+inline void GetSummedAxes(
+    mxnet::Tuple<int>* a_axes_summed_ptr,
+    mxnet::Tuple<int>* b_axes_summed_ptr,
+    const int axes,
+    const mxnet::TShape& a_shape) {
+  std::vector<int> a_axes_summed_vector;
+  for (int i = 0; i < axes; i++) {
+    a_axes_summed_vector.push_back(a_shape.ndim() - axes + i);
+  }
+  *a_axes_summed_ptr = mxnet::Tuple<int>(a_axes_summed_vector);
+
+  std::vector<int> b_axes_summed_vector;
+  for (int i = 0; i < axes; i++) {
+    b_axes_summed_vector.push_back(i);
+  }
+  *b_axes_summed_ptr = mxnet::Tuple<int>(b_axes_summed_vector);
+}
+
+/**
+ * Calculates tensordot.
+ */
+template<typename xpu>
+void TensordotIntAxesImpl(
 
 Review comment:
   Fixed. Thx.
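For context, the axis expansion that `GetSummedAxes` performs for an integer `axes` argument can be sketched in Python (the helper name here is illustrative, not part of the PR):

```python
def get_summed_axes(a_ndim, axes):
    # Integer axes: a contributes its last `axes` dimensions,
    # b contributes its first `axes` dimensions.
    a_axes_summed = [a_ndim - axes + i for i in range(axes)]
    b_axes_summed = list(range(axes))
    return a_axes_summed, b_axes_summed

assert get_summed_axes(5, 3) == ([2, 3, 4], [0, 1, 2])
```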




[GitHub] [incubator-mxnet] ckt624 commented on a change in pull request #15349: Numpy Tensordot Operator

2019-07-10 Thread GitBox
ckt624 commented on a change in pull request #15349: Numpy Tensordot Operator 
URL: https://github.com/apache/incubator-mxnet/pull/15349#discussion_r302351820
 
 

 ##
 File path: src/operator/numpy/np_tensordot_int_axes_op-inl.h
 ##
 @@ -0,0 +1,213 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file np_tensordot_int_axes_op-inl.h
+ * \brief Implementation of numpy-compatible tensordot_int_axes
+ */
+#ifndef MXNET_OPERATOR_NUMPY_NP_TENSORDOT_INT_AXES_OP_INL_H_
+#define MXNET_OPERATOR_NUMPY_NP_TENSORDOT_INT_AXES_OP_INL_H_
+
+#include 
+#include "np_tensordot_op-inl.h"
+
+namespace mxnet {
+namespace op {
+
+using namespace mxnet;
+using namespace mshadow;
+
+struct TensordotIntAxesParam : public dmlc::Parameter<TensordotIntAxesParam> {
+  int axes;
+  DMLC_DECLARE_PARAMETER(TensordotIntAxesParam) {
+    DMLC_DECLARE_FIELD(axes);
+  }
+};
+
+/**
+ * gets summed axes of a and b from parameter axes.
+ */
+inline void GetSummedAxes(
+    mxnet::Tuple<int>* a_axes_summed_ptr,
+    mxnet::Tuple<int>* b_axes_summed_ptr,
+    const int axes,
+    const mxnet::TShape& a_shape) {
+  std::vector<int> a_axes_summed_vector;
+  for (int i = 0; i < axes; i++) {
+    a_axes_summed_vector.push_back(a_shape.ndim() - axes + i);
+  }
+  *a_axes_summed_ptr = mxnet::Tuple<int>(a_axes_summed_vector);
+
+  std::vector<int> b_axes_summed_vector;
+  for (int i = 0; i < axes; i++) {
+    b_axes_summed_vector.push_back(i);
+  }
+  *b_axes_summed_ptr = mxnet::Tuple<int>(b_axes_summed_vector);
+}
+
+/**
+ * Calculates tensordot.
+ */
+template<typename xpu>
+void TensordotIntAxesImpl(
+    const int axes,
+    const OpContext& ctx,
+    const TBlob& a,
+    const TBlob& b,
+    const TBlob& out,
+    const std::vector<OpReqType>& req) {
+  if (req[0] == kNullOp) {
+    return;
+  }
+
+  if (out.shape_.Size() == 0U) {
+    return;  // zero-size output, no need to launch kernel
+  }
+
+  const mxnet::TShape& a_shape = a.shape_;
+  const mxnet::TShape& b_shape = b.shape_;
+
+  mshadow::Stream<xpu> *s = ctx.get_stream<xpu>();
+  CHECK_EQ(out.type_flag_, a.type_flag_)
+  << "Binary function only support input/output with the same type";
+  CHECK_EQ(out.type_flag_, b.type_flag_)
+  << "Binary function only support input/output with the same type";
+  CHECK(out.type_flag_ == kFloat32 || out.type_flag_ == kFloat64 ||
 
 Review comment:
   Fixed. Thx.




[GitHub] [incubator-mxnet] iblis17 commented on a change in pull request #15454: Julia docs

2019-07-10 Thread GitBox
iblis17 commented on a change in pull request #15454: Julia docs
URL: https://github.com/apache/incubator-mxnet/pull/15454#discussion_r302346283
 
 

 ##
 File path: docs/install/ubuntu_setup.md
 ##
 @@ -310,25 +310,93 @@ Refer to the [Clojure setup 
guide](https://github.com/apache/incubator-mxnet/tre
 
 ### Install the MXNet Package for Julia
 
-The MXNet package for Julia is hosted in a separate repository, MXNet.jl, 
which is available on [GitHub](https://github.com/dmlc/MXNet.jl). To use Julia 
binding it with an existing libmxnet installation, set the ```MXNET_HOME``` 
environment variable by running the following command:
+#### Install Julia
+The package available through `apt-get` is old and not compatible with the 
latest version of MXNet.
+Fetch the latest version (1.0.3 at the time of this writing).
 
 ```bash
-export MXNET_HOME=//libmxnet
+wget -qO julia-10.tar.gz https://julialang-s3.julialang.org/bin/linux/x64/1.0/julia-1.0.3-linux-x86_64.tar.gz
 ```
 
-The path to the existing libmxnet installation should be the root directory of 
libmxnet. In other words, you should be able to find the ```libmxnet.so``` file 
at ```$MXNET_HOME/lib```. For example, if the root directory of libmxnet is 
```~```, you would run the following command:
+Place the extracted files somewhere like a julia folder in your home dir.
 
 ```bash
-export MXNET_HOME=/~/libmxnet
+mkdir ~/julia
+mv julia-10.tar.gz ~/julia
+cd ~/julia
+tar xvf julia-10.tar.gz
 ```
 
-You might want to add this command to your ```~/.bashrc``` file. If you do, 
you can install the Julia package in the Julia console using the following 
command:
+Test Julia.
+```bash
+cd julia-1.0.3/bin
+julia -e 'using InteractiveUtils; versioninfo()'
+```
+
+If you're still getting the old version, remove it.
+```bash
+sudo apt remove julia
+```
+
+Update your PATH to have Julia's new location. Add this to your `.zshrc`, 
`.bashrc`, `.profile` or `.bash_profile`.
+```bash
+export PATH=~/julia/julia-1.0.3/bin:$PATH
+```
+
+Validate your PATH.
+```bash
+echo $PATH
+```
+
+Validate Julia works and is the expected version.
+```bash
+julia -e 'using InteractiveUtils; versioninfo()'
+```
+
+#### Setup Your MXNet-Julia Environment
+
+**For each of the following environment variables, add the commands to your 
`.zshrc`, `.bashrc`, `.profile` or `.bash_profile` to make them persist.**
+
+Create a `julia-depot` folder and environment variable.
+```bash
+mkdir julia-depot
+export JULIA_DEPOT_PATH=$HOME/julia/julia-depot
+```
+
+To use the Julia binding with an existing `libmxnet` installation, set the 
`MXNET_HOME` environment variable to the MXNet source root. For example:
+```bash
+export MXNET_HOME=$HOME/incubator-mxnet
+```
 
-```julia
-Pkg.add("MXNet")
+Now set the `LD_LIBRARY_PATH` environment variable to where `libmxnet.so` is 
found. If you can't find it, you might have skipped the building MXNet step. Go 
back and [build MXNet](#build-the-shared-library) first. For example:
+```bash
+export LD_LIBRARY_PATH=$HOME/incubator-mxnet/lib:$LD_LIBRARY_PATH
+```
+
+Verify the location of `libjemalloc.so` and set the `LD_PRELOAD` environment 
variable.
+```bash
+export LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libjemalloc.so:$LD_PRELOAD
+```
+
+With all of these updates, here's an example of what you might want to have in 
your `.zshrc`, `.bashrc`, `.profile` or `.bash_profile`.
+
+```
+export PATH=$HOME/bin:$HOME/.local/bin:$HOME/julia/julia-1.0.3/bin:$PATH
+export JULIA_DEPOT_PATH=$HOME/julia/julia-depot
+export MXNET_HOME=$HOME/incubator-mxnet
+export LD_LIBRARY_PATH=$HOME/incubator-mxnet/lib:$LD_LIBRARY_PATH
+export LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libjemalloc.so:$LD_PRELOAD
+```
+
+Install MXNet with Julia:
+
+```bash
+julia --color=yes --project=./ -e \
+ 'using Pkg; \
+  Pkg.develop(PackageSpec(name="MXNet", path = joinpath(ENV["MXNET_HOME"], "julia")))'
 
 Review comment:
  Can we still change the web page via PR after a release has rolled out, or is 
the web page only updated once per release?
  If yes, I'm okay with waiting for the package registry setup, then adding 
some descriptions for `Pkg.add`.




[GitHub] [incubator-mxnet] ckt624 commented on a change in pull request #15349: Numpy Tensordot Operator

2019-07-10 Thread GitBox
ckt624 commented on a change in pull request #15349: Numpy Tensordot Operator 
URL: https://github.com/apache/incubator-mxnet/pull/15349#discussion_r302353380
 
 

 ##
 File path: src/operator/numpy/np_tensordot_int_axes_op-inl.h
 ##
 @@ -0,0 +1,213 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file np_tensordot_int_axes_op-inl.h
+ * \brief Implementation of numpy-compatible tensordot_int_axes
+ */
+#ifndef MXNET_OPERATOR_NUMPY_NP_TENSORDOT_INT_AXES_OP_INL_H_
+#define MXNET_OPERATOR_NUMPY_NP_TENSORDOT_INT_AXES_OP_INL_H_
+
+#include 
+#include "np_tensordot_op-inl.h"
+
+namespace mxnet {
+namespace op {
+
+using namespace mxnet;
+using namespace mshadow;
+
+struct TensordotIntAxesParam : public dmlc::Parameter<TensordotIntAxesParam> {
+  int axes;
+  DMLC_DECLARE_PARAMETER(TensordotIntAxesParam) {
+    DMLC_DECLARE_FIELD(axes);
+  }
+};
+
+/**
+ * gets summed axes of a and b from parameter axes.
+ */
+inline void GetSummedAxes(
+    mxnet::Tuple<int>* a_axes_summed_ptr,
+    mxnet::Tuple<int>* b_axes_summed_ptr,
+    const int axes,
+    const mxnet::TShape& a_shape) {
+  std::vector<int> a_axes_summed_vector;
+  for (int i = 0; i < axes; i++) {
+    a_axes_summed_vector.push_back(a_shape.ndim() - axes + i);
+  }
+  *a_axes_summed_ptr = mxnet::Tuple<int>(a_axes_summed_vector);
+
+  std::vector<int> b_axes_summed_vector;
+  for (int i = 0; i < axes; i++) {
+    b_axes_summed_vector.push_back(i);
+  }
+  *b_axes_summed_ptr = mxnet::Tuple<int>(b_axes_summed_vector);
+}
+
+/**
+ * Calculates tensordot.
+ */
+template<typename xpu>
+void TensordotIntAxesImpl(
+    const int axes,
+    const OpContext& ctx,
+    const TBlob& a,
+    const TBlob& b,
+    const TBlob& out,
+    const std::vector<OpReqType>& req) {
+  if (req[0] == kNullOp) {
+    return;
+  }
+
+  if (out.shape_.Size() == 0U) {
+    return;  // zero-size output, no need to launch kernel
+  }
+
+  const mxnet::TShape& a_shape = a.shape_;
+  const mxnet::TShape& b_shape = b.shape_;
+
+  mshadow::Stream<xpu> *s = ctx.get_stream<xpu>();
+  CHECK_EQ(out.type_flag_, a.type_flag_)
+  << "Binary function only support input/output with the same type";
+  CHECK_EQ(out.type_flag_, b.type_flag_)
+  << "Binary function only support input/output with the same type";
+  CHECK(out.type_flag_ == kFloat32 || out.type_flag_ == kFloat64 ||
+        (out.type_flag_ == kFloat16 && ctx.run_ctx.ctx.dev_mask() == mshadow::gpu::kDevMask))
+      << "Tensordot only supports float32/float64 for CPU, and float16/float32/float64 for GPU";
+
+  Tuple<int> a_axes_summed;
+  Tuple<int> b_axes_summed;
+  GetSummedAxes(&a_axes_summed, &b_axes_summed, axes, a_shape);
+
+  Tuple<int> a_axes_remained;
+  Tuple<int> b_axes_remained;
+  Tuple<int> a_axes;
+  Tuple<int> b_axes;
+  GetReorderedAxes(a_axes_summed, &a_axes_remained, &a_axes, b_axes_summed, &b_axes_remained,
+                   &b_axes, a_shape, b_shape);
 
 Review comment:
   Fixed. Thx.




[GitHub] [incubator-mxnet] ckt624 commented on a change in pull request #15349: Numpy Tensordot Operator

2019-07-10 Thread GitBox
ckt624 commented on a change in pull request #15349: Numpy Tensordot Operator 
URL: https://github.com/apache/incubator-mxnet/pull/15349#discussion_r302353567
 
 

 ##
 File path: src/operator/numpy/np_tensordot_op.cc
 ##
 @@ -0,0 +1,186 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file np_tensordot_op.cc
+ * \brief CPU Implementation of numpy-compatible tensordot
+ */
+
+#include 
+#include "np_tensordot_op-inl.h"
+
+namespace mxnet {
+namespace op {
+
+bool TensordotOpShape(const nnvm::NodeAttrs& attrs,
+                      mxnet::ShapeVector *in_attrs,
 
 Review comment:
   Fixed. Thx.




[GitHub] [incubator-mxnet] endvroy commented on a change in pull request #15293: Numpy bitwise OR operator

2019-07-10 Thread GitBox
endvroy commented on a change in pull request #15293: Numpy bitwise OR operator
URL: https://github.com/apache/incubator-mxnet/pull/15293#discussion_r302354149
 
 

 ##
 File path: src/operator/numpy/np_elemwise_binary_op.cc
 ##
 @@ -0,0 +1,51 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file np_elemwise_binary_op.cc
+ * \brief CPU Implementation of numpy-compatible element-wise binary operations
+ */
+
+#include 
+#include "../mshadow_op.h"
+#include "../operator_common.h"
+#include "../tensor/elemwise_binary_op.h"
+#include "../tensor/elemwise_binary_broadcast_op.h"
+
+namespace mxnet {
+namespace op {
+
+NNVM_REGISTER_OP(_np_bitwise_or)
+.set_num_inputs(2)
+.set_num_outputs(1)
+.set_attr<mxnet::FInferShape>("FInferShape", BinaryBroadcastShape)
+.set_attr<nnvm::FInferType>("FInferType", ElemwiseType<2, 1>)
 
 Review comment:
   fixed in the most recent commit
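The attributes being registered here give `_np_bitwise_or` the standard broadcasting shape rule for elementwise binary ops; in NumPy terms the targeted behavior is:

```python
import numpy as np

x = np.array([[1], [2]], dtype=np.int32)  # shape (2, 1)
y = np.array([4, 5, 6], dtype=np.int32)   # shape (3,)

# bitwise_or broadcasts like any elementwise binary op: (2, 1) | (3,) -> (2, 3)
out = np.bitwise_or(x, y)
assert out.shape == (2, 3)
assert int(out[0, 0]) == (1 | 4)  # 5
```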




[GitHub] [incubator-mxnet] iblis17 commented on a change in pull request #15454: Julia docs

2019-07-10 Thread GitBox
iblis17 commented on a change in pull request #15454: Julia docs
URL: https://github.com/apache/incubator-mxnet/pull/15454#discussion_r302354265
 
 

 ##
 File path: docs/install/ubuntu_setup.md
 ##
 @@ -310,25 +310,93 @@ Refer to the [Clojure setup 
guide](https://github.com/apache/incubator-mxnet/tre
 
 ### Install the MXNet Package for Julia
 
-The MXNet package for Julia is hosted in a separate repository, MXNet.jl, 
which is available on [GitHub](https://github.com/dmlc/MXNet.jl). To use Julia 
binding it with an existing libmxnet installation, set the ```MXNET_HOME``` 
environment variable by running the following command:
+#### Install Julia
+The package available through `apt-get` is old and not compatible with the 
latest version of MXNet.
+Fetch the latest version (1.0.3 at the time of this writing).
 
 ```bash
-export MXNET_HOME=//libmxnet
+wget -qO julia-10.tar.gz https://julialang-s3.julialang.org/bin/linux/x64/1.0/julia-1.0.3-linux-x86_64.tar.gz
 ```
 
-The path to the existing libmxnet installation should be the root directory of 
libmxnet. In other words, you should be able to find the ```libmxnet.so``` file 
at ```$MXNET_HOME/lib```. For example, if the root directory of libmxnet is 
```~```, you would run the following command:
+Place the extracted files somewhere like a julia folder in your home dir.
 
 ```bash
-export MXNET_HOME=/~/libmxnet
+mkdir ~/julia
+mv julia-10.tar.gz ~/julia
+cd ~/julia
+tar xvf julia-10.tar.gz
 ```
 
-You might want to add this command to your ```~/.bashrc``` file. If you do, 
you can install the Julia package in the Julia console using the following 
command:
+Test Julia.
+```bash
+cd julia-1.0.3/bin
+julia -e 'using InteractiveUtils; versioninfo()'
+```
+
+If you're still getting the old version, remove it.
+```bash
+sudo apt remove julia
+```
+
+Update your PATH to have Julia's new location. Add this to your `.zshrc`, 
`.bashrc`, `.profile` or `.bash_profile`.
+```bash
+export PATH=~/julia/julia-1.0.3/bin:$PATH
+```
+
+Validate your PATH.
+```bash
+echo $PATH
+```
+
+Validate Julia works and is the expected version.
+```bash
+julia -e 'using InteractiveUtils; versioninfo()'
+```
+
+#### Setup Your MXNet-Julia Environment
+
+**For each of the following environment variables, add the commands to your 
`.zshrc`, `.bashrc`, `.profile` or `.bash_profile` to make them persist.**
+
+Create a `julia-depot` folder and environment variable.
+```bash
+mkdir julia-depot
+export JULIA_DEPOT_PATH=$HOME/julia/julia-depot
+```
+
+To use the Julia binding with an existing `libmxnet` installation, set the 
`MXNET_HOME` environment variable to the MXNet source root. For example:
+```bash
+export MXNET_HOME=$HOME/incubator-mxnet
+```
 
-```julia
-Pkg.add("MXNet")
+Now set the `LD_LIBRARY_PATH` environment variable to where `libmxnet.so` is 
found. If you can't find it, you might have skipped the building MXNet step. Go 
back and [build MXNet](#build-the-shared-library) first. For example:
+```bash
+export LD_LIBRARY_PATH=$HOME/incubator-mxnet/lib:$LD_LIBRARY_PATH
+```
+
+Verify the location of `libjemalloc.so` and set the `LD_PRELOAD` environment 
variable.
+```bash
+export LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libjemalloc.so:$LD_PRELOAD
+```
+
+With all of these updates, here's an example of what you might want to have in 
your `.zshrc`, `.bashrc`, `.profile` or `.bash_profile`.
+
+```
+export PATH=$HOME/bin:$HOME/.local/bin:$HOME/julia/julia-1.0.3/bin:$PATH
+export JULIA_DEPOT_PATH=$HOME/julia/julia-depot
 
 Review comment:
   > This combines with Pkg.develop?
   
  This is not necessary for `Pkg.develop`. Actually, there is a default value 
for `JULIA_DEPOT_PATH`, and the default is fine for most users.




[GitHub] [incubator-mxnet] ckt624 commented on a change in pull request #15349: Numpy Tensordot Operator

2019-07-10 Thread GitBox
ckt624 commented on a change in pull request #15349: Numpy Tensordot Operator 
URL: https://github.com/apache/incubator-mxnet/pull/15349#discussion_r302354817
 
 

 ##
 File path: src/operator/numpy/np_tensordot_op.cc
 ##
 @@ -0,0 +1,186 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file np_tensordot_op.cc
+ * \brief CPU Implementation of numpy-compatible tensordot
+ */
+
+#include 
+#include "np_tensordot_op-inl.h"
+
+namespace mxnet {
+namespace op {
+
+bool TensordotOpShape(const nnvm::NodeAttrs& attrs,
+                      mxnet::ShapeVector *in_attrs,
+                      mxnet::ShapeVector *out_attrs) {
+  CHECK_EQ(in_attrs->size(), 2U);
+  CHECK_EQ(out_attrs->size(), 1U);
+
+  const mxnet::TShape& a_shape = in_attrs->at(0);
+  const mxnet::TShape& b_shape = in_attrs->at(1);
+
+  if (!ndim_is_known(a_shape) || !ndim_is_known(b_shape)) {
+    return false;
+  }
+
+  if ((a_shape.ndim() < 1) || (b_shape.ndim() < 1)) {
+    return false;
+  }
+
+  const TensordotParam& param = nnvm::get<TensordotParam>(attrs.parsed);
+  const Tuple<int>& a_axes_summed = param.a_axes_summed;
+  const Tuple<int>& b_axes_summed = param.b_axes_summed;
+
+  Tuple<int> a_axes_remained;
+  Tuple<int> b_axes_remained;
+  Tuple<int> a_axes;
+  Tuple<int> b_axes;
+  GetReorderedAxes(a_axes_summed, &a_axes_remained, &a_axes, b_axes_summed, &b_axes_remained,
+                   &b_axes, a_shape, b_shape);
+
+  CHECK_EQ(a_axes_summed.ndim(), b_axes_summed.ndim());
+
+  mxnet::TShape out_shape(a_axes_remained.ndim() + b_axes_remained.ndim(), -1);
+  for (int i = 0; i < a_axes_remained.ndim(); i++) {
+    out_shape[i] = a_shape[a_axes_remained[i]];
+  }
+  for (int i = 0; i < b_axes_remained.ndim(); i++) {
+    out_shape[a_axes_remained.ndim() + i] = b_shape[b_axes_remained[i]];
+  }
+  SHAPE_ASSIGN_CHECK(*out_attrs, 0, out_shape);
+
+  mxnet::TShape tem_shape1(a_axes.ndim(), -1);
+  for (int i = 0; i < a_axes_remained.ndim(); i++) {
+    tem_shape1[a_axes_remained[i]] = out_shape[i];
+  }
+  for (int i = 0; i < a_axes_summed.ndim(); i++) {
+    tem_shape1[a_axes_summed[i]] = b_shape[b_axes_summed[i]];
+  }
+  SHAPE_ASSIGN_CHECK(*in_attrs, 0, tem_shape1);
+
+  mxnet::TShape tem_shape2(b_axes.ndim(), -1);
+  for (int i = 0; i < b_axes_remained.ndim(); i++) {
+    tem_shape2[b_axes_remained[i]] = out_shape[a_axes_remained.ndim() + i];
+  }
+  for (int i = 0; i < b_axes_summed.ndim(); i++) {
+    tem_shape2[b_axes_summed[i]] = a_shape[a_axes_summed[i]];
+  }
+  SHAPE_ASSIGN_CHECK(*in_attrs, 1, tem_shape2);
+
+  return shape_is_known(*in_attrs) && shape_is_known(*out_attrs);
+}
+
+DMLC_REGISTER_PARAMETER(TensordotParam);
+
+NNVM_REGISTER_OP(tensordot)
 
 Review comment:
   Fixed. Thx.
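The shape rule that `TensordotOpShape` enforces — output dims are a's non-summed dims followed by b's non-summed dims — can be sketched independently (the helper name is illustrative):

```python
import numpy as np

def tensordot_out_shape(a_shape, b_shape, a_summed, b_summed):
    # Output shape: a's non-summed dims, then b's non-summed dims.
    a_rem = [d for i, d in enumerate(a_shape) if i not in a_summed]
    b_rem = [d for i, d in enumerate(b_shape) if i not in b_summed]
    return tuple(a_rem + b_rem)

expected = tensordot_out_shape((3, 4, 5), (4, 5, 6), [1, 2], [0, 1])
assert expected == (3, 6)
assert np.tensordot(np.ones((3, 4, 5)), np.ones((4, 5, 6)),
                    axes=[[1, 2], [0, 1]]).shape == expected
```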




[GitHub] [incubator-mxnet] reminisce commented on a change in pull request #15494: [Numpy] Added operator logaddexp2; added support for zero-size tensor in BinaryBroadcastBackwardUseIn

2019-07-10 Thread GitBox
reminisce commented on a change in pull request #15494: [Numpy] Added operator 
logaddexp2; added support for zero-size tensor in BinaryBroadcastBackwardUseIn
URL: https://github.com/apache/incubator-mxnet/pull/15494#discussion_r302350415
 
 

 ##
 File path: src/operator/numpy/np_elemwise_broadcast_op.cu
 ##
 @@ -48,6 +48,14 @@ NNVM_REGISTER_OP(_npi_maximum)
 NNVM_REGISTER_OP(_npi_minimum)
 .set_attr("FCompute", BinaryBroadcastCompute);
 
+NNVM_REGISTER_OP(_npi_logaddexp2)
+.set_attr("FCompute", BinaryBroadcastCompute);
+
+NNVM_REGISTER_OP(_backward_logaddexp2)
+.set_attr("FCompute", BinaryBroadcastBackwardUseIn("FCompute",
   BinaryBroadcastBackwardUseIn);
   ```
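As a reference for what the registered kernels must compute (assuming the op matches NumPy's logaddexp2 semantics), a quick numerical check of both the forward value and the gradient the backward op must implement:

```python
import numpy as np

# logaddexp2(x, y) = log2(2**x + 2**y); its gradient w.r.t. x is
# 2**x / (2**x + 2**y), which is what the backward op must implement.
x, y = 3.0, 4.0
val = np.logaddexp2(x, y)
assert np.isclose(val, np.log2(2**x + 2**y))

# finite-difference check of the gradient
h = 1e-6
num_grad = (np.logaddexp2(x + h, y) - np.logaddexp2(x - h, y)) / (2 * h)
assert np.isclose(num_grad, 2**x / (2**x + 2**y), atol=1e-5)
```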




[GitHub] [incubator-mxnet] reminisce commented on a change in pull request #15494: [Numpy] Added operator logaddexp2; added support for zero-size tensor in BinaryBroadcastBackwardUseIn

2019-07-10 Thread GitBox
reminisce commented on a change in pull request #15494: [Numpy] Added operator 
logaddexp2; added support for zero-size tensor in BinaryBroadcastBackwardUseIn
URL: https://github.com/apache/incubator-mxnet/pull/15494#discussion_r302350456
 
 

 ##
 File path: src/operator/numpy/np_elemwise_broadcast_op.cu
 ##
 @@ -78,5 +86,12 @@ NNVM_REGISTER_OP(_npi_maximum_scalar)
 NNVM_REGISTER_OP(_npi_minimum_scalar)
 .set_attr("FCompute", BinaryScalarOp::Compute);
 
+NNVM_REGISTER_OP(_npi_logaddexp2_scalar)
+.set_attr("FCompute", BinaryScalarOp::Compute);
+
+NNVM_REGISTER_OP(_backward_logaddexp2_scalar)
+.set_attr("FCompute", BinaryScalarOp::Backward

[GitHub] [incubator-mxnet] reminisce commented on a change in pull request #15494: [Numpy] Added operator logaddexp2; added support for zero-size tensor in BinaryBroadcastBackwardUseIn

2019-07-10 Thread GitBox
reminisce commented on a change in pull request #15494: [Numpy] Added operator 
logaddexp2; added support for zero-size tensor in BinaryBroadcastBackwardUseIn
URL: https://github.com/apache/incubator-mxnet/pull/15494#discussion_r302350738
 
 

 ##
 File path: src/operator/tensor/elemwise_binary_broadcast_op.h
 ##
 @@ -580,17 +580,19 @@ inline void BinaryBroadcastBackwardUseInImpl(const 
OpContext& ctx,
   const TBlob ograd = inputs[0].reshape(new_oshape);
   const TBlob lhs = inputs[1].reshape(new_lshape);
   const TBlob rhs = inputs[2].reshape(new_rshape);
-  size_t workspace_size_l = ReduceWorkspaceSize<NDim, DType>(
-      s, lgrad.shape_, req[0], ograd.shape_, lhs.shape_, rhs.shape_);
-  size_t workspace_size_r = ReduceWorkspaceSize<NDim, DType>(
-      s, rgrad.shape_, req[1], ograd.shape_, lhs.shape_, rhs.shape_);
-  size_t workspace_size = std::max(workspace_size_l, workspace_size_r);
-  Tensor<xpu, 1, char> workspace =
-      ctx.requested[0].get_space_typed<xpu, 1, char>(Shape1(workspace_size), s);
-  Reduce<red::sum, NDim, DType, op::mshadow_op::mul, LOP>(s, lgrad, req[0], workspace,
-                                                          ograd, lhs, rhs);
-  Reduce<red::sum, NDim, DType, op::mshadow_op::mul, ROP>(s, rgrad, req[1], workspace,
-                                                          ograd, lhs, rhs);
+  if (inputs[0].shape_.Size() != 0) {
 
 Review comment:
   Instead of adding this condition check, it's better to return earlier if the 
output gradient is a zero-size tensor.




[GitHub] [incubator-mxnet] reminisce commented on issue #15494: [Numpy] Added operator logaddexp2; added support for zero-size tensor in BinaryBroadcastBackwardUseIn

2019-07-10 Thread GitBox
reminisce commented on issue #15494: [Numpy] Added operator logaddexp2; added 
support for zero-size tensor in BinaryBroadcastBackwardUseIn
URL: https://github.com/apache/incubator-mxnet/pull/15494#issuecomment-510321190
 
 
   Please rebase with the upstream numpy branch.




[GitHub] [incubator-mxnet] zixuanweeei commented on a change in pull request #15497: Independent gradients requests check with respect to weights and bias of convolution

2019-07-10 Thread GitBox
zixuanweeei commented on a change in pull request #15497: Independent gradients 
requests check with respect to weights and bias of convolution
URL: https://github.com/apache/incubator-mxnet/pull/15497#discussion_r302361443
 
 

 ##
 File path: src/operator/nn/mkldnn/mkldnn_convolution.cc
 ##
 @@ -662,21 +662,21 @@ void MKLDNNConvolutionBackward(const nnvm::NodeAttrs& 
attrs, const OpContext &ct
 in_grad[conv::kWeight],
 convBwdWeight.bwdWeights_pd.diff_weights_primitive_desc(),
 req[conv::kWeight]);
-mkldnn_output_t in_grad_bias;
-if (param.no_bias) {
-  convBwdWeight.SetWeightNewMem(*data_mem, *out_grad_mem,
-  *in_grad_weight.second);
-  MKLDNNStream::Get()->RegisterPrim(convBwdWeight.GetBwdWeights());
-} else {
-  in_grad_bias = CreateMKLDNNMem(
+
+if (!param.no_bias && req[conv::kBias]) {
+  auto in_grad_bias = CreateMKLDNNMem(
   in_grad[conv::kBias],
   convBwdWeight.bwdWeights_pd.diff_bias_primitive_desc(), 
req[conv::kBias]);
   convBwdWeight.SetWeightNewMem(*data_mem, *out_grad_mem,
-  *in_grad_weight.second, *in_grad_bias.second);
+  *in_grad_weight.second, *in_grad_bias.second);
   MKLDNNStream::Get()->RegisterPrim(convBwdWeight.GetBwdWeights());
   CommitOutput(in_grad[conv::kBias], in_grad_bias);
+} else if (req[conv::kWeight]) {
+  convBwdWeight.SetWeightNewMem(*data_mem, *out_grad_mem,
+  *in_grad_weight.second);
+  MKLDNNStream::Get()->RegisterPrim(convBwdWeight.GetBwdWeights());
 }
-CommitOutput(in_grad[conv::kWeight], in_grad_weight);
+if (req[conv::kWeight]) CommitOutput(in_grad[conv::kWeight], 
in_grad_weight);
 
 Review comment:
   It has the same behavior as `req[kWeight]`. Both of them return the operation 
request type (`OpReqType`) to Forward and Backward. We can use it to control how 
the memory of the result is handled, e.g. adding/copying the result back to the 
source memory, or doing nothing with it at all.




[GitHub] [incubator-mxnet] kshitij12345 commented on issue #15331: [fix] missing input log higher order.

2019-07-10 Thread GitBox
kshitij12345 commented on issue #15331: [fix] missing input log higher order.
URL: https://github.com/apache/incubator-mxnet/pull/15331#issuecomment-510326851
 
 
   
https://github.com/apache/incubator-mxnet/blob/5171e1d92cfc5eefa2c20dfe8ac3fac5351ad19a/src/operator/tensor/elemwise_unary_op_basic.cc#L1120
   `dL/dygrad` for this one right?
   
   @larroy @apeforest, I was also wondering whether we can check the number of 
inputs passed at compile time. I have observed that `MakeNode` gets the Op from 
the dynamic registry based on its name; however, we actually have the number of 
inputs and outputs for a given Op at compile time. I tried but couldn't figure 
it out. What are your thoughts? How easy or hard would it be to check for a 
valid number of inputs in `MakeNode`? This would help catch these sorts of 
errors at compile time.






[GitHub] [incubator-mxnet] kshitij12345 commented on a change in pull request #15253: [MXNET-978] Add higher order gradient support `tan`, `tanh`

2019-07-10 Thread GitBox
kshitij12345 commented on a change in pull request #15253: [MXNET-978] Add 
higher order gradient support `tan`, `tanh`
URL: https://github.com/apache/incubator-mxnet/pull/15253#discussion_r302362521
 
 

 ##
 File path: src/operator/tensor/elemwise_unary_op_trig.cc
 ##
 @@ -139,7 +139,31 @@ The storage type of ``tan`` output depends upon the input 
storage type:
 )code" ADD_FILELINE)
 .set_attr("FGradient", ElemwiseGradUseOut{ "_backward_tan" });
 
-MXNET_OPERATOR_REGISTER_BINARY_WITH_SPARSE_CPU_DR(_backward_tan, unary_bwd<mshadow_op::tan_grad>);
+MXNET_OPERATOR_REGISTER_BINARY_WITH_SPARSE_CPU_DR(_backward_tan, unary_bwd<mshadow_op::tan_grad>)
+.set_attr<nnvm::FGradient>("FGradient",
+  [](const nnvm::NodePtr& n, const std::vector<nnvm::NodeEntry>& ograds) {
+  // NodeEntry{n} : y_grad * f'(x)
+  // n->inputs[0] : y_grad
+  // n->inputs[1] : f(x) = tan(x)
+  // ograds[0] : head_grads
+  // f'(x) = sec^2(x)
+  // f''(x) = 2 * f'(x) * f(x)
+  const std::unordered_map<std::string, std::string> args = {{"scalar", "2.0"}};
+  auto two_y = MakeNode("_mul_scalar", n->attrs.name + "_mul_two", {n->inputs[1]}, &args, &n);
+  auto grad_grad_mid = MakeNode("elemwise_mul", n->attrs.name + "_grad_mul",
 
 Review comment:
   Makes sense. Thanks. Will get to it. 
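The identity relied on in the comments above can be sanity-checked numerically: for f(x) = tan(x), f'(x) = sec^2(x) and f''(x) = 2 f'(x) f(x).

```python
import math

# Numeric check of f''(x) = 2 * f'(x) * f(x) for f(x) = tan(x).
x = 0.3
f = math.tan(x)
fp = 1.0 / math.cos(x) ** 2          # f'(x) = sec^2(x)

# central second difference approximates f''(x)
h = 1e-4
fpp_numeric = (math.tan(x + h) - 2 * math.tan(x) + math.tan(x - h)) / h**2
assert abs(fpp_numeric - 2 * fp * f) < 1e-5
```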




[GitHub] [incubator-mxnet] gyshi commented on a change in pull request #15312: Numpy bitwise_xor operator

2019-07-10 Thread GitBox
gyshi commented on a change in pull request #15312: Numpy bitwise_xor operator
URL: https://github.com/apache/incubator-mxnet/pull/15312#discussion_r302366555
 
 

 ##
 File path: src/operator/numpy/np_elemwise_binary_op.cc
 ##
 @@ -0,0 +1,55 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file np_elemwise_binary_op.cc
+ * \brief CPU Implementation of numpy-compatible bitwise_xor operators
+ */
+
+#include 
+#include "../mshadow_op.h"  // mshadow operations
+#include "../operator_common.h"  // MakeZeroGradNodes
+#include "../tensor/elemwise_binary_op.h"  // ElemwiseShape, ElemwiseType
+#include "../tensor/elemwise_binary_broadcast_op.h"  // BinaryBroadcastCompute
+
+namespace mxnet {
+namespace op {
+
+NNVM_REGISTER_OP(_np_bitwise_xor)
+.set_num_inputs(2)
+.set_num_outputs(1)
+.set_attr("FInferShape", BinaryBroadcastShape)
+.set_attr("FInferType", ElemwiseType<2, 1>)
 
 Review comment:
   done
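For reference, the NumPy behavior the operator mirrors, including broadcasting across the two integer inputs:

```python
import numpy as np

# np.bitwise_xor applies XOR element-wise and broadcasts its inputs.
a = np.array([[1], [2], [3]])        # shape (3, 1)
b = np.array([3, 2, 1])              # shape (3,)
out = np.bitwise_xor(a, b)           # broadcast to (3, 3)
assert out.shape == (3, 3)
assert np.array_equal(out[0], [1 ^ 3, 1 ^ 2, 1 ^ 1])
```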




[GitHub] [incubator-mxnet] hgt312 opened a new pull request #15508: Numpy Tril (Lower triangle) operator

2019-07-10 Thread GitBox
hgt312 opened a new pull request #15508: Numpy Tril (Lower triangle) operator
URL: https://github.com/apache/incubator-mxnet/pull/15508
 
 
   ## Description ##
   Implementation of numpy tril operator in mxnet.
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [ ] The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to 
the relevant [JIRA issue](https://issues.apache.org/jira/projects/MXNET/issues) 
created (except PRs with tiny changes)
   - [ ] Changes are complete (i.e. I finished coding on this PR)
   - [ ] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [ ] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, the 
source of the dataset, expected performance on the test set, and a reference to 
the original paper if applicable
   - Check the API doc at 
http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [ ] To the best of my knowledge, examples are either not affected by this 
change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - add `tril` op
   
   ## Comments ##
   Welcome @reminisce and others for reviewing.
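For reviewers, the reference semantics the new op should match (NumPy's tril keeps the lower triangle at and below the k-th diagonal and zeroes the rest):

```python
import numpy as np

# np.tril keeps entries on and below the k-th diagonal.
m = np.arange(1, 10).reshape(3, 3)
assert np.array_equal(np.tril(m),
                      [[1, 0, 0],
                       [4, 5, 0],
                       [7, 8, 9]])
assert np.array_equal(np.tril(m, k=-1)[0], [0, 0, 0])  # k shifts the diagonal
```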
   




[GitHub] [incubator-mxnet] ThomasDelteil commented on issue #15448: [MKLDNN]Enhance Quantization APIs and Tutorial

2019-07-10 Thread GitBox
ThomasDelteil commented on issue #15448: [MKLDNN]Enhance Quantization APIs and 
Tutorial
URL: https://github.com/apache/incubator-mxnet/pull/15448#issuecomment-510336674
 
 
   Will do, at a conference this week, limited bandwidth but next week I'll 
have some availability to look into quantization again and get back to you on 
the different email threads as well, apologies for the delay! 




[GitHub] [incubator-mxnet] yidawang commented on issue #15465: [RFC] Integrate TVM into Apache MXNet

2019-07-10 Thread GitBox
yidawang commented on issue #15465: [RFC] Integrate TVM into Apache MXNet
URL: 
https://github.com/apache/incubator-mxnet/issues/15465#issuecomment-510345628
 
 
   > @yidawang If we all agree that we should avoid using more than one thread 
pool implementation in the runtime, then I guess OMP is the only choice, is 
that true?
   
   Sorry for the late reply; I have been traveling these days. In this case, I 
think we should also run benchmarks comparing the performance of the different 
scenarios before deciding. Theoretically, if a model runs both TVM ops and 
original MXNet ops on CPUs, I agree that using OpenMP may be a short-term 
solution.




[GitHub] [incubator-mxnet] Maicus closed issue #15484: Binding Model fails with simple_bind error

2019-07-11 Thread GitBox
Maicus closed issue #15484: Binding Model fails with simple_bind error
URL: https://github.com/apache/incubator-mxnet/issues/15484
 
 
   




[GitHub] [incubator-mxnet] CynthiaProtector opened a new issue #15509: Train models under the directory '/mxnet/example/gluon/'

2019-07-11 Thread GitBox
CynthiaProtector opened a new issue #15509: Train models under the directory 
'/mxnet/example/gluon/'
URL: https://github.com/apache/incubator-mxnet/issues/15509
 
 
   I ran distributed training on almost all of the models provided under the 
directory '/mxnet/example/gluon'; however, when running the models with the 
asynchronous mechanism (--kvstore dist_async), none of the models converges.
   For instance, when I train resnet18_v1 on the cifar10 dataset, the training 
accuracy stays around 0.09 under the asynchronous mechanism, whereas training 
with the synchronous mechanism achieves the target accuracy.
   I do not know why asynchronous SGD is not applicable to these models. What 
should I do to fix this?




[GitHub] [incubator-mxnet] zoeygxy commented on a change in pull request #15494: [Numpy] Added operator logaddexp2; added support for zero-size tensor in BinaryBroadcastBackwardUseIn

2019-07-11 Thread GitBox
zoeygxy commented on a change in pull request #15494: [Numpy] Added operator 
logaddexp2; added support for zero-size tensor in BinaryBroadcastBackwardUseIn
URL: https://github.com/apache/incubator-mxnet/pull/15494#discussion_r302417072
 
 

 ##
 File path: src/operator/numpy/np_elemwise_broadcast_op.cu
 ##
 @@ -78,5 +86,12 @@ NNVM_REGISTER_OP(_npi_maximum_scalar)
 NNVM_REGISTER_OP(_npi_minimum_scalar)
 .set_attr("FCompute", BinaryScalarOp::Compute);
 
+NNVM_REGISTER_OP(_npi_logaddexp2_scalar)
+.set_attr("FCompute", BinaryScalarOp::Compute);
+
+NNVM_REGISTER_OP(_backward_logaddexp2_scalar)
+.set_attr("FCompute", BinaryScalarOp::Backward

[GitHub] [incubator-mxnet] zoeygxy commented on a change in pull request #15494: [Numpy] Added operator logaddexp2; added support for zero-size tensor in BinaryBroadcastBackwardUseIn

2019-07-11 Thread GitBox
zoeygxy commented on a change in pull request #15494: [Numpy] Added operator 
logaddexp2; added support for zero-size tensor in BinaryBroadcastBackwardUseIn
URL: https://github.com/apache/incubator-mxnet/pull/15494#discussion_r302417006
 
 

 ##
 File path: src/operator/numpy/np_elemwise_broadcast_op.cu
 ##
 @@ -48,6 +48,14 @@ NNVM_REGISTER_OP(_npi_maximum)
 NNVM_REGISTER_OP(_npi_minimum)
 .set_attr("FCompute", BinaryBroadcastCompute);
 
+NNVM_REGISTER_OP(_npi_logaddexp2)
+.set_attr("FCompute", BinaryBroadcastCompute);
+
+NNVM_REGISTER_OP(_backward_logaddexp2)
+.set_attr("FCompute", BinaryBroadcastBackwardUseIn

[GitHub] [incubator-mxnet] zoeygxy commented on a change in pull request #15494: [Numpy] Added operator logaddexp2; added support for zero-size tensor in BinaryBroadcastBackwardUseIn

2019-07-11 Thread GitBox
zoeygxy commented on a change in pull request #15494: [Numpy] Added operator 
logaddexp2; added support for zero-size tensor in BinaryBroadcastBackwardUseIn
URL: https://github.com/apache/incubator-mxnet/pull/15494#discussion_r302417138
 
 

 ##
 File path: src/operator/tensor/elemwise_binary_broadcast_op.h
 ##
 @@ -580,17 +580,19 @@ inline void BinaryBroadcastBackwardUseInImpl(const 
OpContext& ctx,
   const TBlob ograd = inputs[0].reshape(new_oshape);
   const TBlob lhs = inputs[1].reshape(new_lshape);
   const TBlob rhs = inputs[2].reshape(new_rshape);
-  size_t workspace_size_l = ReduceWorkspaceSize<NDim, DType>(
-      s, lgrad.shape_, req[0], ograd.shape_, lhs.shape_, rhs.shape_);
-  size_t workspace_size_r = ReduceWorkspaceSize<NDim, DType>(
-      s, rgrad.shape_, req[1], ograd.shape_, lhs.shape_, rhs.shape_);
-  size_t workspace_size = std::max(workspace_size_l, workspace_size_r);
-  Tensor<xpu, 1, char> workspace =
-      ctx.requested[0].get_space_typed<xpu, 1, char>(Shape1(workspace_size), s);
-  Reduce<red::sum, NDim, DType, op::mshadow_op::mul, LOP>(s, lgrad, req[0], workspace,
-                                                          ograd, lhs, rhs);
-  Reduce<red::sum, NDim, DType, op::mshadow_op::mul, ROP>(s, rgrad, req[1], workspace,
-                                                          ograd, lhs, rhs);
+  if (inputs[0].shape_.Size() != 0) {
 
 Review comment:
   Will do, thanks!




[GitHub] [incubator-mxnet] gyshi opened a new pull request #15510: [Numpy] Operator moveaxis

2019-07-11 Thread GitBox
gyshi opened a new pull request #15510: [Numpy] Operator moveaxis
URL: https://github.com/apache/incubator-mxnet/pull/15510
 
 
   ## Description ##
   (Brief description on what this PR is about)
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [ ] The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to 
the relevant [JIRA issue](https://issues.apache.org/jira/projects/MXNET/issues) 
created (except PRs with tiny changes)
   - [ ] Changes are complete (i.e. I finished coding on this PR)
   - [ ] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [ ] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, the 
source of the dataset, expected performance on the test set, and a reference to 
the original paper if applicable
   - Check the API doc at 
http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [ ] To the best of my knowledge, examples are either not affected by this 
change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [ ] Feature1, tests, (and when applicable, API doc)
   - [ ] Feature2, tests, (and when applicable, API doc)
   
   ## Comments ##
   - If this change is a backward incompatible change, why must this change be 
made.
   - Interesting edge cases to note here
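For reference, the NumPy behavior a moveaxis implementation should match: axes are relocated without copying data (NumPy returns a view with permuted strides).

```python
import numpy as np

# np.moveaxis relocates the named axes; the remaining axes keep their order.
x = np.zeros((3, 4, 5))
assert np.moveaxis(x, 0, -1).shape == (4, 5, 3)
assert np.moveaxis(x, [0, 1], [-1, -2]).shape == (5, 4, 3)
```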
   




[GitHub] [incubator-mxnet] larroy commented on a change in pull request #15449: cuda/cuDNN lib version checking. Force cuDNN v7 usage.

2019-07-11 Thread GitBox
larroy commented on a change in pull request #15449: cuda/cuDNN lib version 
checking.  Force cuDNN v7 usage.
URL: https://github.com/apache/incubator-mxnet/pull/15449#discussion_r302428179
 
 

 ##
 File path: src/common/cuda_utils.cc
 ##
 @@ -0,0 +1,116 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * Copyright (c) 2019 by Contributors
+ * \file cuda_utils.cc
+ * \brief CUDA debugging utilities.
+ */
+
+#include <mxnet/base.h>
+#include "cuda_utils.h"
+
+#if MXNET_USE_CUDA == 1
+
+namespace mxnet {
+namespace common {
+namespace cuda {
+
+// The oldest version of cuda used in upstream MXNet CI testing, both for unix and windows.
+// Users that have rebuilt MXNet against older versions will be advised with a warning to
+// upgrade their systems to match the CI level.  Minimally, users should rerun the CI locally.
+#if defined(_MSC_VER)
+#define MXNET_CI_OLDEST_CUDA_VERSION  9020
+#else
+#define MXNET_CI_OLDEST_CUDA_VERSION 1
+#endif
+
+
+// Start-up check that the version of cuda compiled-against matches the 
linked-against version.
+bool CudaVersionChecks() {
+  // Don't bother with checks if there are no GPUs visible (e.g. with 
CUDA_VISIBLE_DEVICES="")
+  if (dmlc::GetEnv("MXNET_CUDA_VERSION_CHECKING", true) && 
Context::GetGPUCount() > 0) {
+int linkedAgainstCudaVersion = 0;
+CUDA_CALL(cudaRuntimeGetVersion(&linkedAgainstCudaVersion));
+if (linkedAgainstCudaVersion != CUDA_VERSION)
+  LOG(WARNING) << "cuda library mismatch: linked-against version " << 
linkedAgainstCudaVersion
+   << " != compiled-against version " << CUDA_VERSION << "."
+   << "Set MXNET_CUDA_VERSION_CHECKING=0 to quiet this 
warning.";
+if (CUDA_VERSION < MXNET_CI_OLDEST_CUDA_VERSION)
+  LOG(WARNING) << "Upgrade advisory: this mxnet has been built against 
cuda library version "
+   << CUDA_VERSION << ", which is older than the oldest 
version tested by CI ("
+   << MXNET_CI_OLDEST_CUDA_VERSION << ").  "
+   << "Set MXNET_CUDA_VERSION_CHECKING=0 to quiet this 
warning.";
+  }
+  return true;
+}
+
+// Dynamic initialization here will emit a warning if runtime and compile-time 
versions mismatch.
+// Also if the user has recompiled their source to a version no longer tested 
by upstream CI.
+bool cuda_version_ok = CudaVersionChecks();
+
+}  // namespace cuda
+}  // namespace common
+}  // namespace mxnet
+
+#endif  // MXNET_USE_CUDA
+
+#if MXNET_USE_CUDNN == 1
+
+namespace mxnet {
+namespace common {
+namespace cudnn {
+
+// The oldest version of CUDNN used in upstream MXNet CI testing, both for unix and windows.
+// Users that have rebuilt MXNet against older versions will be advised with a warning to
+// upgrade their systems to match the CI level.  Minimally, users should rerun the CI locally.
+#if defined(_MSC_VER)
+#define MXNET_CI_OLDEST_CUDNN_VERSION 7600
+#else
+#define MXNET_CI_OLDEST_CUDNN_VERSION 7600
+#endif
+
+// Start-up check that the version of cudnn compiled-against matches the 
linked-against version.
+// Also if the user has recompiled their source to a version no longer tested 
by upstream CI.
+bool CuDNNVersionChecks() {
+  // Don't bother with checks if there are no GPUs visible (e.g. with 
CUDA_VISIBLE_DEVICES="")
+  if (dmlc::GetEnv("MXNET_CUDNN_VERSION_CHECKING", true) && 
Context::GetGPUCount() > 0) {
+size_t linkedAgainstCudnnVersion = cudnnGetVersion();
+if (linkedAgainstCudnnVersion != CUDNN_VERSION)
+  LOG(WARNING) << "cuDNN library mismatch: linked-against version " << 
linkedAgainstCudnnVersion
+   << " != compiled-against version " << CUDNN_VERSION << ".  "
+   << "Set MXNET_CUDNN_VERSION_CHECKING=0 to quiet this 
warning.";
+if (CUDNN_VERSION < MXNET_CI_OLDEST_CUDNN_VERSION)
+  LOG(WARNING) << "Upgrade advisory: this mxnet has been built against 
cuDNN library version "
+   <<  CUDNN_VERSION << ", which is older than the oldest 
version tested by CI ("
+   << MXNET_CI_OLDEST_CUDNN_VERSION << ").  "
+   << "Set MXNET_CUDNN_VERSION_CHECKING=0 to quiet this 
warning.";
+  }
+  return true;
+}
+
+// Dynamic initializa

[GitHub] [incubator-mxnet] hgt312 opened a new pull request #15511: Numpy Identity operator

2019-07-11 Thread GitBox
hgt312 opened a new pull request #15511: Numpy Identity operator
URL: https://github.com/apache/incubator-mxnet/pull/15511
 
 
   ## Description ##
   Implementation of numpy `identity` operator in mxnet.
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [ ] The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to 
the relevant [JIRA issue](https://issues.apache.org/jira/projects/MXNET/issues) 
created (except PRs with tiny changes)
   - [ ] Changes are complete (i.e. I finished coding on this PR)
   - [ ] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [ ] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, the 
source of the dataset, expected performance on the test set, and a reference to 
the original paper if applicable
   - Check the API doc at 
http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [ ] To the best of my knowledge, examples are either not affected by this 
change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - add op `identity`
   - fix some typos/mistakes in doc of `zeros` and `ones`
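   For context, the NumPy behavior the new operator mirrors can be sketched in plain NumPy (this is the reference semantics, not the mxnet implementation):

```python
import numpy as np

# np.identity(n) builds the n x n identity matrix (float64 by default);
# this is the behavior the new mxnet `identity` operator matches.
eye = np.identity(3)
assert eye.shape == (3, 3)
assert eye[0, 0] == 1.0 and eye[0, 1] == 0.0
assert (eye == np.eye(3)).all()
```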
   
   ## Comments ##
   Welcome @reminisce and others for reviewing.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] wshuail opened a new issue #15512: possible bug in nd.gather_nd

2019-07-11 Thread GitBox
wshuail opened a new issue #15512: possible bug in nd.gather_nd
URL: https://github.com/apache/incubator-mxnet/issues/15512
 
 
   When I use nd.gather_nd with GPU, an unexpected error happens sometimes. Generally it raises an error like `RuntimeError: cuda runtime error (77): an illegal memory access was encountered`.
   
   An error "terminate called without an active exception" can also happen with the code below.
   
   ```python
   import os
   import sys
   import mxnet as mx
   from mxnet import nd
   sys.path.insert(0, os.path.expanduser('~/gluon_detector'))

   ctx = mx.gpu(0)

   for i in range(1):
       if i % 100 == 0:
           print(i)

       batch_size = 2
       num_classes = 2
       width, height = 5, 5
       k = 3

       hm_origin = nd.random.uniform(0, 10, (batch_size, num_classes, width, height), ctx=ctx)

       hm = nd.reshape(hm_origin, (0, 0, -1))

       topk_scores, topk_idx = nd.topk(hm, k=k, ret_typ='both')

       topk_x_idx = nd.floor(topk_idx / width)
       topk_y_idx = (topk_idx % height)

       batch_idx = nd.repeat(nd.arange(batch_size), repeats=num_classes * k).reshape((1, -1))
       batch_idx = batch_idx.as_in_context(ctx)
       class_idx = nd.repeat(nd.arange(num_classes), repeats=batch_size * k).reshape((1, -1))
       class_idx = class_idx.as_in_context(ctx)

       topk_x_idx = nd.reshape(topk_x_idx, (1, -1))
       topk_y_idx = nd.reshape(topk_y_idx, (1, -1))

       indices = nd.concat(batch_idx, class_idx, topk_x_idx, topk_y_idx, dim=0)

       results = nd.gather_nd(hm_origin, indices)
       results = nd.reshape(results, (batch_size, num_classes, k))
   ```
   
   When I add nd.waitall() at the end, it works well.
   
   Any suggestions?
   
   BTW, does mxnet have a function like torch.gather? gather_nd comes close, but is not as convenient.
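   For anyone puzzling over the `indices` construction above: `nd.gather_nd(data, indices)` reads `indices` as one coordinate row per axis of `data`. A minimal NumPy sketch of those semantics (the helper name `gather_nd` here is illustrative, not MXNet code):

```python
import numpy as np

def gather_nd(data, indices):
    """Sketch of mx.nd.gather_nd semantics in plain NumPy: `indices` has
    shape (M, N); output[n] = data[indices[0, n], ..., indices[M-1, n]]."""
    return data[tuple(indices.astype(np.int64))]

data = np.arange(24).reshape(2, 3, 4)
indices = np.array([[0, 1],   # coordinates along axis 0
                    [2, 0],   # coordinates along axis 1
                    [3, 1]])  # coordinates along axis 2
out = gather_nd(data, indices)
print(out)  # [11 13]
```

   The snippet in the report stacks the batch, class, x and y coordinate rows along axis 0 to produce exactly this layout.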




[GitHub] [incubator-mxnet] mxnet-label-bot commented on issue #15512: possible bug in nd.gather_nd

2019-07-11 Thread GitBox
mxnet-label-bot commented on issue #15512: possible bug in nd.gather_nd
URL: 
https://github.com/apache/incubator-mxnet/issues/15512#issuecomment-510399079
 
 
   Hey, this is the MXNet Label Bot. 
Thank you for submitting the issue! I will try and suggest some labels so 
that the appropriate MXNet community members can help resolve it. 
Here are my recommended labels: Bug




[GitHub] [incubator-mxnet] wkcn opened a new pull request #15513: [WIP] add contrib op in cpp-pacakge

2019-07-11 Thread GitBox
wkcn opened a new pull request #15513: [WIP] add contrib op in cpp-pacakge
URL: https://github.com/apache/incubator-mxnet/pull/15513
 
 
   ## Description ##
   Hi there.
   I added the contrib operators into cpp-package.
   Currently, `dim_t (int64_t)` cannot be parsed. I'm waiting for the merge of [this PR dmlc-core#540](https://github.com/dmlc/dmlc-core/pull/540).
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [ ] The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to 
the relevant [JIRA issue](https://issues.apache.org/jira/projects/MXNET/issues) 
created (except PRs with tiny changes)
   - [ ] Changes are complete (i.e. I finished coding on this PR)
   - [ ] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [ ] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, the source of the dataset, expected performance on test set and reference to the original paper if applicable
   - Check the API doc at 
http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [ ] To the best of my knowledge, examples are either not affected by this change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [ ] Feature1, tests, (and when applicable, API doc)
   - [ ] Feature2, tests, (and when applicable, API doc)
   
   ## Comments ##
   - If this change is a backward incompatible change, why must this change be 
made.
   - Interesting edge cases to note here
   




[GitHub] [incubator-mxnet] lebeg commented on issue #15452: fix nightly CI failure

2019-07-11 Thread GitBox
lebeg commented on issue #15452: fix nightly CI failure
URL: https://github.com/apache/incubator-mxnet/pull/15452#issuecomment-510455260
 
 
   @roywei could you apply the changes that @szha requested? The nightly tests are still broken.




[GitHub] [incubator-mxnet] qqaatw commented on issue #15484: Binding Model fails with simple_bind error

2019-07-11 Thread GitBox
qqaatw commented on issue #15484: Binding Model fails with simple_bind error
URL: 
https://github.com/apache/incubator-mxnet/issues/15484#issuecomment-510480630
 
 
   I have the same problem as you. Could you give more detail about the Target and the TargetCode sections? Thanks.




[GitHub] [incubator-mxnet] gyshi commented on a change in pull request #15312: Numpy bitwise_xor operator

2019-07-11 Thread GitBox
gyshi commented on a change in pull request #15312: Numpy bitwise_xor operator
URL: https://github.com/apache/incubator-mxnet/pull/15312#discussion_r302538899
 
 

 ##
 File path: src/operator/numpy/np_elemwise_binary_op.cc
 ##
 @@ -0,0 +1,55 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file np_elemwise_binary_op.cc
+ * \brief CPU Implementation of numpy-compatible bitwise_xor operators
+ */
+
+#include 
+#include "../mshadow_op.h"  // mshadow operations
+#include "../operator_common.h"  // MakeZeroGradNodes
+#include "../tensor/elemwise_binary_op.h"  // ElemwiseShape, ElemwiseType
+#include "../tensor/elemwise_binary_broadcast_op.h"  // BinaryBroadcastCompute
+
+namespace mxnet {
+namespace op {
+
+NNVM_REGISTER_OP(_np_bitwise_xor)
+.set_num_inputs(2)
+.set_num_outputs(1)
+.set_attr<mxnet::FInferShape>("FInferShape", BinaryBroadcastShape)
+.set_attr<nnvm::FInferType>("FInferType", ElemwiseType<2, 1>)
 
 Review comment:
    Resolved. It's my first op; I didn't use npx.set_np(). I tried again: in mxnet, bool is converted to 1.0 or 0.0, so it can support bool. 
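For reference, NumPy's own `bitwise_xor` accepts bool inputs directly and returns bool — a quick check of the reference behavior discussed above, independent of this PR:

```python
import numpy as np

# NumPy's bitwise_xor handles bool dtype natively and returns bool;
# integer inputs XOR bit-by-bit.
a = np.array([True, False, True])
b = np.array([True, True, False])
out = np.bitwise_xor(a, b)
assert out.dtype == np.bool_
assert list(out) == [False, True, True]

# Integer behavior for comparison: 0b101 ^ 0b011 == 0b110
assert np.bitwise_xor(5, 3) == 6
```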




[GitHub] [incubator-mxnet] marcoabreu commented on issue #15506: Improving error message

2019-07-11 Thread GitBox
marcoabreu commented on issue #15506: Improving error message
URL: 
https://github.com/apache/incubator-mxnet/issues/15506#issuecomment-510483367
 
 
   ```No space left on device```
   
   Sounds pretty clear to me. Is this not talking about disk space? What was 
the root cause?




[GitHub] [incubator-mxnet] smissan opened a new issue #15514: dmlc.lib is not generated

2019-07-11 Thread GitBox
smissan opened a new issue #15514: dmlc.lib is not generated
URL: https://github.com/apache/incubator-mxnet/issues/15514
 
 
   When trying to build the latest head, dmlc.dll is generated, but not dmlc.lib (using MS Visual Studio 2015). Therefore, building the mxnet dll fails with:
   
   1>-- Build started: Project: mxnet, Configuration: Release x64 --
   1>LINK : fatal error LNK1181: cannot open input file '3rdparty\dmlc-core\Release\dmlc.lib'




[GitHub] [incubator-mxnet] mxnet-label-bot commented on issue #15514: dmlc.lib is not generated

2019-07-11 Thread GitBox
mxnet-label-bot commented on issue #15514: dmlc.lib is not generated
URL: 
https://github.com/apache/incubator-mxnet/issues/15514#issuecomment-510484008
 
 
   Hey, this is the MXNet Label Bot. 
Thank you for submitting the issue! I will try and suggest some labels so 
that the appropriate MXNet community members can help resolve it. 
Here are my recommended labels: Build




[GitHub] [incubator-mxnet] kshitij12345 opened a new pull request #15515: [MXNET-978] Higher Order Gradient Support `arcsin`, `arccos`.

2019-07-11 Thread GitBox
kshitij12345 opened a new pull request #15515: [MXNET-978] Higher Order 
Gradient Support `arcsin`, `arccos`.
URL: https://github.com/apache/incubator-mxnet/pull/15515
 
 
   ## Description ##
   This PR adds support for higher-order gradients for `arcsin` and `arccos`.
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [x] The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to 
the relevant [JIRA-978 issue](https://issues.apache.org/jira/browse/MXNET-978) 
created (except PRs with tiny changes)
   - [x] Changes are complete (i.e. I finished coding on this PR)
   - [x] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [x] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, the source of the dataset, expected performance on test set and reference to the original paper if applicable
   - Check the API doc at 
http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [x] To the best of my knowledge, examples are either not affected by this change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [x] higher order gradient for `arcsin`, `arccos`.
   - [x] unit tests for the same.
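   The identities such a higher-order gradient relies on are standard calculus; a quick numerical sanity check (illustrative only, not code from the PR):

```python
import numpy as np

# d/dx   arcsin(x) = (1 - x^2)^(-1/2)
# d2/dx2 arcsin(x) = x * (1 - x^2)^(-3/2)
# arccos has the negative of both.
x, h = 0.3, 1e-4
num = (np.arcsin(x + h) - 2 * np.arcsin(x) + np.arcsin(x - h)) / h**2
ana = x / (1 - x**2) ** 1.5
assert abs(num - ana) < 1e-4

# arccos second derivative is the exact negative
num_c = (np.arccos(x + h) - 2 * np.arccos(x) + np.arccos(x - h)) / h**2
assert abs(num_c + ana) < 1e-4
```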
   




[GitHub] [incubator-mxnet] qqaatw edited a comment on issue #15484: Binding Model fails with simple_bind error

2019-07-11 Thread GitBox
qqaatw edited a comment on issue #15484: Binding Model fails with simple_bind 
error
URL: 
https://github.com/apache/incubator-mxnet/issues/15484#issuecomment-510480630
 
 
   @Maicus I have the same problem as you. Could you give more detail about the Target and the TargetCode sections? Thanks.




[GitHub] [incubator-mxnet] aaronmarkham commented on a change in pull request #15454: Julia docs

2019-07-11 Thread GitBox
aaronmarkham commented on a change in pull request #15454: Julia docs
URL: https://github.com/apache/incubator-mxnet/pull/15454#discussion_r302618951
 
 

 ##
 File path: docs/install/ubuntu_setup.md
 ##
 @@ -310,25 +310,93 @@ Refer to the [Clojure setup 
guide](https://github.com/apache/incubator-mxnet/tre
 
 ### Install the MXNet Package for Julia
 
-The MXNet package for Julia is hosted in a separate repository, MXNet.jl, 
which is available on [GitHub](https://github.com/dmlc/MXNet.jl). To use Julia 
binding it with an existing libmxnet installation, set the ```MXNET_HOME``` 
environment variable by running the following command:
+ Install Julia
+The package available through `apt-get` is old and not compatible with the 
latest version of MXNet.
+Fetch the latest version (1.0.3 at the time of this writing).
 
 ```bash
-export MXNET_HOME=//libmxnet
+wget -qO julia-10.tar.gz 
https://julialang-s3.julialang.org/bin/linux/x64/1.0/julia-1.0.3-linux-x86_64.tar.gz
 ```
 
-The path to the existing libmxnet installation should be the root directory of 
libmxnet. In other words, you should be able to find the ```libmxnet.so``` file 
at ```$MXNET_HOME/lib```. For example, if the root directory of libmxnet is 
```~```, you would run the following command:
+Place the extracted files somewhere like a julia folder in your home dir.
 
 ```bash
-export MXNET_HOME=/~/libmxnet
+mkdir ~/julia
+mv julia-10.tar.gz ~/julia
+cd ~/julia
+tar xvf julia-10.tar.gz
 ```
 
-You might want to add this command to your ```~/.bashrc``` file. If you do, 
you can install the Julia package in the Julia console using the following 
command:
+Test Julia.
+```bash
+cd julia-1.0.3/bin
+julia -e 'using InteractiveUtils; versioninfo()'
+```
+
+If you're still getting the old version, remove it.
+```bash
+sudo apt remove julia
+```
+
+Update your PATH to have Julia's new location. Add this to your `.zshrc`, 
`.bashrc`, `.profile` or `.bash_profile`.
+```bash
+export PATH=~/julia/julia-1.0.3/bin:$PATH
+```
+
+Validate your PATH.
+```bash
+echo $PATH
+```
+
+Validate Julia works and is the expected version.
+```bash
+julia -e 'using InteractiveUtils; versioninfo()'
+```
+
+ Setup Your MXNet-Julia Environment
+
+**For each of the following environment variables, add the commands to your 
`.zshrc`, `.bashrc`, `.profile` or `.bash_profile` to make them persist.**
+
+Create a `julia-depot` folder and environment variable.
+```bash
+mkdir julia-depot
+export JULIA_DEPOT_PATH=$HOME/julia/julia-depot
+```
+
+To use the Julia binding with an existing `libmxnet` installation, set the 
`MXNET_HOME` environment variable to the MXNet source root. For example:
+```bash
+export MXNET_HOME=$HOME/incubator-mxnet
+```
 
-```julia
-Pkg.add("MXNet")
+Now set the `LD_LIBRARY_PATH` environment variable to where `libmxnet.so` is 
found. If you can't find it, you might have skipped the building MXNet step. Go 
back and [build MXNet](#build-the-shared-library) first. For example:
+```bash
+export LD_LIBRARY_PATH=$HOME/incubator-mxnet/lib:$LD_LIBRARY_PATH
+```
+
+Verify the location of `libjemalloc.so` and set the `LD_PRELOAD` environment 
variable.
+```bash
+export LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libjemalloc.so:$LD_PRELOAD
+```
+
+With all of these updates, here's an example of what you might want to have in 
your `.zshrc`, `.bashrc`, `.profile` or `.bash_profile`.
+
+```
+export PATH=$HOME/bin:$HOME/.local/bin:$HOME/julia/julia-1.0.3/bin:$PATH
+export JULIA_DEPOT_PATH=$HOME/julia/julia-depot
+export MXNET_HOME=$HOME/incubator-mxnet
+export LD_LIBRARY_PATH=$HOME/incubator-mxnet/lib:$LD_LIBRARY_PATH
+export LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libjemalloc.so:$LD_PRELOAD
+```
+
+Install MXNet with Julia:
+
+```bash
+julia --color=yes --project=./ -e \
+ 'using Pkg; \
+  Pkg.develop(PackageSpec(name="MXNet", path = 
joinpath(ENV["MXNET_HOME"], "julia")))'
 
 Review comment:
   Yes, let's update things in another PR when 1.5 comes out.




[GitHub] [incubator-mxnet] aaronmarkham commented on a change in pull request #15454: Julia docs

2019-07-11 Thread GitBox
aaronmarkham commented on a change in pull request #15454: Julia docs
URL: https://github.com/apache/incubator-mxnet/pull/15454#discussion_r302619949
 
 

 ##
 File path: docs/install/ubuntu_setup.md
 ##
 @@ -310,25 +310,93 @@ Refer to the [Clojure setup 
guide](https://github.com/apache/incubator-mxnet/tre
 
+With all of these updates, here's an example of what you might want to have in 
your `.zshrc`, `.bashrc`, `.profile` or `.bash_profile`.
+
+```
+export PATH=$HOME/bin:$HOME/.local/bin:$HOME/julia/julia-1.0.3/bin:$PATH
+export JULIA_DEPOT_PATH=$HOME/julia/julia-depot
 
 Review comment:
   Ok, since it doesn't break anything but is unnecessary let's remove it in 
the next PR. That way I don't have to shepherd CI again.




[GitHub] [incubator-mxnet] iblis17 commented on a change in pull request #15454: Julia docs

2019-07-11 Thread GitBox
iblis17 commented on a change in pull request #15454: Julia docs
URL: https://github.com/apache/incubator-mxnet/pull/15454#discussion_r302620627
 
 

 ##
 File path: docs/install/ubuntu_setup.md
 ##
 @@ -310,25 +310,93 @@ Refer to the [Clojure setup 
guide](https://github.com/apache/incubator-mxnet/tre
 
+Install MXNet with Julia:
+
+```bash
+julia --color=yes --project=./ -e \
+ 'using Pkg; \
+  Pkg.develop(PackageSpec(name="MXNet", path = 
joinpath(ENV["MXNET_HOME"], "julia")))'
 
 Review comment:
   ah, :ok_hand: 




[GitHub] [incubator-mxnet] iblis17 commented on issue #15454: Julia docs

2019-07-11 Thread GitBox
iblis17 commented on issue #15454: Julia docs
URL: https://github.com/apache/incubator-mxnet/pull/15454#issuecomment-510545691
 
 
   Nice work! @aaronmarkham 




[GitHub] [incubator-mxnet] access2rohit commented on issue #15490: Utility to help developers debug operators: Tensor Inspector

2019-07-11 Thread GitBox
access2rohit commented on issue #15490: Utility to help developers debug 
operators: Tensor Inspector
URL: https://github.com/apache/incubator-mxnet/pull/15490#issuecomment-510561397
 
 
   Can you split the PR into 2 separate PRs, one for CPU and one for GPU?




[GitHub] [incubator-mxnet] matteosal commented on issue #15497: [MKLDNN] Independent gradients requests check with respect to weights and bias of convolution

2019-07-11 Thread GitBox
matteosal commented on issue #15497: [MKLDNN] Independent gradients requests 
check with respect to weights and bias of convolution
URL: https://github.com/apache/incubator-mxnet/pull/15497#issuecomment-510563332
 
 
   The example from https://github.com/apache/incubator-mxnet/issues/15464 is fixed here, but I see a failure with this one, where the weights gradient is requested in isolation (the opposite of https://github.com/apache/incubator-mxnet/issues/15464):
   ```python
   import mxnet as mx

   sym = mx.sym.Convolution(
       mx.sym.Variable('in'),
       mx.sym.Variable('w'),
       mx.sym.Variable('b'),
       kernel=(1, 1),
       num_filter=1
   )
   args = {
       'in': mx.nd.ones([1, 1, 3, 3]),
       'w': mx.nd.ones([1, 1, 1, 1]),
       'b': mx.nd.ones([1]),
   }
   grad = {
       'in': mx.nd.zeros([1, 1, 3, 3]),
       'w': mx.nd.zeros([1, 1, 1, 1]),
       'b': mx.nd.zeros([1]),
   }
   req = {'in': 'null', 'w': 'write', 'b': 'null'}
   outgrad = mx.nd.ones([1, 1, 3, 3])

   ex = sym.bind(mx.cpu(), args, args_grad=grad, grad_req=req)

   ex.forward(True)
   ex.backward(out_grads=outgrad)
   mx.ndarray.waitall()
   ```
   This is what gets printed to command line:
   ```
   Traceback (most recent call last):
 File "script2.py", line 27, in 
   mx.ndarray.waitall()
 File "/home/matteo/Git/mxnet/python/mxnet/ndarray/ndarray.py", line 166, 
in waitall
   check_call(_LIB.MXNDArrayWaitAll())
 File "/home/matteo/Git/mxnet/python/mxnet/base.py", line 253, in check_call
   raise MXNetError(py_str(_LIB.MXGetLastError()))
   mxnet.base.MXNetError: std::exception
   ```
   
   It doesn't fail on master.




[GitHub] [incubator-mxnet] kshitij12345 commented on a change in pull request #15253: [MXNET-978] Add higher order gradient support `tan`, `tanh`

2019-07-11 Thread GitBox
kshitij12345 commented on a change in pull request #15253: [MXNET-978] Add 
higher order gradient support `tan`, `tanh`
URL: https://github.com/apache/incubator-mxnet/pull/15253#discussion_r30264
 
 

 ##
 File path: src/operator/tensor/elemwise_unary_op_trig.cc
 ##
 @@ -139,7 +139,31 @@ The storage type of ``tan`` output depends upon the input 
storage type:
 )code" ADD_FILELINE)
 .set_attr<nnvm::FGradient>("FGradient", ElemwiseGradUseOut{ "_backward_tan" });
 
-MXNET_OPERATOR_REGISTER_BINARY_WITH_SPARSE_CPU_DR(_backward_tan, unary_bwd<mshadow_op::tan_grad>);
+MXNET_OPERATOR_REGISTER_BINARY_WITH_SPARSE_CPU_DR(_backward_tan, unary_bwd<mshadow_op::tan_grad>)
+.set_attr<nnvm::FGradient>("FGradient",
+  [](const nnvm::NodePtr& n, const std::vector<nnvm::NodeEntry>& ograds) {
+  // NodeEntry{n} : y_grad * f'(x)
+  // n->inputs[0] : y_grad
+  // n->inputs[1] : f(x) = tan(x)
+  // ograds[0] : head_grads
+  // f'(x) = sec^2(x)
+  // f''(x) = 2 * f'(x) * f(x)
+  const std::unordered_map<std::string, std::string> args = {{"scalar", "2.0"}};
+  auto two_y = MakeNode("_mul_scalar", n->attrs.name + "_mul_two", {n->inputs[1]}, &args, &n);
+  auto grad_grad_mid = MakeNode("elemwise_mul", n->attrs.name + "_grad_mul",
 
 Review comment:
   I have updated the comment. Please see if it is okay, or whether the 
phrasing can be improved.
   Thanks




[GitHub] [incubator-mxnet] frankfliu commented on issue #15509: Train models under the director '/mxnet/example/gluon/'

2019-07-11 Thread GitBox
frankfliu commented on issue #15509: Train models under the director 
'/mxnet/example/gluon/'
URL: 
https://github.com/apache/incubator-mxnet/issues/15509#issuecomment-510566851
 
 
   @mxnet-label-bot add [Question, distributed, training]




[GitHub] [incubator-mxnet] frankfliu commented on issue #15514: dmlc.lib is not generated

2019-07-11 Thread GitBox
frankfliu commented on issue #15514: dmlc.lib is not generated
URL: 
https://github.com/apache/incubator-mxnet/issues/15514#issuecomment-510568547
 
 
   @mxnet-label-bot add [build]




[GitHub] [incubator-mxnet] frankfliu commented on issue #15512: possible bug in nd.gather_nd

2019-07-11 Thread GitBox
frankfliu commented on issue #15512: possible bug in nd.gather_nd
URL: 
https://github.com/apache/incubator-mxnet/issues/15512#issuecomment-510568295
 
 
   @mxnet-label-bot add [backend, cuda, question] 




[GitHub] [incubator-mxnet] ChaiBapchya commented on issue #15506: Improving error message

2019-07-11 Thread GitBox
ChaiBapchya commented on issue #15506: Improving error message
URL: 
https://github.com/apache/incubator-mxnet/issues/15506#issuecomment-510574285
 
 
   I had an issue with this part of the error:
   ` when writing output to : `
   
   I am running this on a p2.8xlarge GPU-optimized AWS EC2 instance with 
100 GB of EBS, yet it still runs into the space issue.
   I'm surely missing something.




[GitHub] [incubator-mxnet] DickJC123 commented on a change in pull request #15449: cuda/cuDNN lib version checking. Force cuDNN v7 usage.

2019-07-11 Thread GitBox
DickJC123 commented on a change in pull request #15449: cuda/cuDNN lib version 
checking.  Force cuDNN v7 usage.
URL: https://github.com/apache/incubator-mxnet/pull/15449#discussion_r302657800
 
 

 ##
 File path: src/common/cuda_utils.cc
 ##
 @@ -0,0 +1,108 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * Copyright (c) 2019 by Contributors
+ * \file cuda_utils.cc
+ * \brief CUDA debugging utilities.
+ */
+
+#include 
+#include "cuda_utils.h"
+
+#if MXNET_USE_CUDA == 1
+
+namespace mxnet {
+namespace common {
+namespace cuda {
+
+// The oldest version of cuda used in upstream MXNet CI testing, both for unix 
and windows.
+// Users that have rebuilt MXNet against older versions will be advised with a warning to upgrade
+// their systems to match the CI level.  Minimally, users should rerun the CI locally.
+#if defined(_MSC_VER)
+#define MXNET_CI_OLDEST_CUDA_VERSION  9020
+#else
+#define MXNET_CI_OLDEST_CUDA_VERSION 1
+#endif
+
+// Dynamic init here will emit a warning if runtime and compile-time cuda lib 
versions mismatch.
+// Also if the user has recompiled their source to a version no longer tested 
by upstream CI.
+bool cuda_version_check_performed = []() {
+  // Don't bother with checks if there are no GPUs visible (e.g. with 
CUDA_VISIBLE_DEVICES="")
+  if (dmlc::GetEnv("MXNET_CUDA_VERSION_CHECKING", true) && 
Context::GetGPUCount() > 0) {
+int linkedAgainstCudaVersion = 0;
+CUDA_CALL(cudaRuntimeGetVersion(&linkedAgainstCudaVersion));
+if (linkedAgainstCudaVersion != CUDA_VERSION)
+  LOG(WARNING) << "cuda library mismatch: linked-against version " << 
linkedAgainstCudaVersion
 
 Review comment:
   So 'yes' there would be a warning if the user built against 10.1, but ran 
with 10.2.  These warnings can be turned off with an environment variable 
setting MXNET_CUDA_VERSION_CHECKING=0.  The idea behind the 'advisory' is that 
the user may want to rebuild to get the new functionality present in 10.2, or 
perhaps to avoid work-arounds for any issues of 10.1.  It's probably more 
useful with the CUDNN version checks, where we have far more compile guards 
based on version minor numbers.  Do you feel these warnings would be unwelcome 
to users?




[GitHub] [incubator-mxnet] ChaiBapchya commented on issue #15506: Improving error message

2019-07-11 Thread GitBox
ChaiBapchya commented on issue #15506: Improving error message
URL: 
https://github.com/apache/incubator-mxnet/issues/15506#issuecomment-510581504
 
 
   Issue resolved by
   1. Switching to Ubuntu AMI (instead of DL AMI that comes with loads of other 
packages and libraries I don't need)
   2. Increasing the  root storage capacity (default 8gig to 20)
   
   @marcoabreu 
   Coming to this specific issue: "when writing output to " seems abrupt. We 
can either do away with that or mention the location the process was writing 
output to. What do you reckon?




[GitHub] [incubator-mxnet] ChaiBapchya edited a comment on issue #15506: Improving error message

2019-07-11 Thread GitBox
ChaiBapchya edited a comment on issue #15506: Improving error message
URL: 
https://github.com/apache/incubator-mxnet/issues/15506#issuecomment-510581504
 
 
   So you're right, the root cause was related to disk space. (Though it is 
weird that it still ran into this even after mounting a 100 GB EBS volume on 
the /data directory.)
   Issue resolved by
   1. Switching to Ubuntu AMI (instead of DL AMI that comes with loads of other 
packages and libraries I don't need)
   2. Increasing the  root storage capacity (default 8gig to 20)
   
   @marcoabreu 
   Coming to this specific issue: "when writing output to " seems abrupt. We 
can either do away with that or mention the location the process was writing 
output to. What do you reckon?




[GitHub] [incubator-mxnet] kevinzh92 opened a new pull request #15516: Fix memory leak reported by ASAN in NNVM to ONNX conversion

2019-07-11 Thread GitBox
kevinzh92 opened a new pull request #15516: Fix memory leak reported by ASAN in 
NNVM to ONNX conversion
URL: https://github.com/apache/incubator-mxnet/pull/15516
 
 
   ## Description ##
   Fix memory leak reported by ASAN in NNVM to ONNX conversion of a constant.
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [ ] The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to 
the relevant [JIRA issue](https://issues.apache.org/jira/projects/MXNET/issues) 
created (except PRs with tiny changes)
   - [x] Changes are complete (i.e. I finished coding on this PR)
   - [x] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [ ] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, 
the source of the dataset, expected performance on the test set, and a 
reference to the original paper if applicable
   - Check the API doc at 
http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [x] To my best knowledge, examples are either not affected by this 
change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [x] Fixed a memory leak in NNVM to ONNX constants conversion code. The 
`shared_ptr`, as written, does not call `delete[]` on a dynamically allocated 
array.
   
   ## Comments ##




[GitHub] [incubator-mxnet] larroy commented on a change in pull request #15449: cuda/cuDNN lib version checking. Force cuDNN v7 usage.

2019-07-11 Thread GitBox
larroy commented on a change in pull request #15449: cuda/cuDNN lib version 
checking.  Force cuDNN v7 usage.
URL: https://github.com/apache/incubator-mxnet/pull/15449#discussion_r302695533
 
 

 ##
 File path: src/common/cuda_utils.cc
 ##
 @@ -0,0 +1,108 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * Copyright (c) 2019 by Contributors
+ * \file cuda_utils.cc
+ * \brief CUDA debugging utilities.
+ */
+
+#include 
+#include "cuda_utils.h"
+
+#if MXNET_USE_CUDA == 1
+
+namespace mxnet {
+namespace common {
+namespace cuda {
+
+// The oldest version of cuda used in upstream MXNet CI testing, both for unix 
and windows.
+// Users that have rebuilt MXNet against older versions will be advised with a warning to upgrade
+// their systems to match the CI level.  Minimally, users should rerun the CI locally.
+#if defined(_MSC_VER)
+#define MXNET_CI_OLDEST_CUDA_VERSION  9020
+#else
+#define MXNET_CI_OLDEST_CUDA_VERSION 1
+#endif
+
+// Dynamic init here will emit a warning if runtime and compile-time cuda lib 
versions mismatch.
+// Also if the user has recompiled their source to a version no longer tested 
by upstream CI.
+bool cuda_version_check_performed = []() {
+  // Don't bother with checks if there are no GPUs visible (e.g. with 
CUDA_VISIBLE_DEVICES="")
+  if (dmlc::GetEnv("MXNET_CUDA_VERSION_CHECKING", true) && 
Context::GetGPUCount() > 0) {
+int linkedAgainstCudaVersion = 0;
+CUDA_CALL(cudaRuntimeGetVersion(&linkedAgainstCudaVersion));
+if (linkedAgainstCudaVersion != CUDA_VERSION)
+  LOG(WARNING) << "cuda library mismatch: linked-against version " << 
linkedAgainstCudaVersion
 
 Review comment:
   I think the question is: what issues can arise when linking against an 
older CUDA, beyond leaving performance gains on the table? I think you guys 
are the experts; I was getting some info from here: 
https://docs.nvidia.com/deploy/cuda-compatibility/#binary-compatibility 
   Does this warning indicate a real problem, or will it confuse users when 
there's nothing wrong with running a newer CUDA?




[GitHub] [incubator-mxnet] larroy edited a comment on issue #15285: Graph dumper

2019-07-11 Thread GitBox
larroy edited a comment on issue #15285: Graph dumper
URL: https://github.com/apache/incubator-mxnet/pull/15285#issuecomment-509858209
 
 
   @ptrendx Thanks
   I will try to use your suggestion in the next iteration.




[GitHub] [incubator-mxnet] larroy commented on a change in pull request #15253: [MXNET-978] Add higher order gradient support `tan`, `tanh`

2019-07-11 Thread GitBox
larroy commented on a change in pull request #15253: [MXNET-978] Add higher 
order gradient support `tan`, `tanh`
URL: https://github.com/apache/incubator-mxnet/pull/15253#discussion_r302708484
 
 

 ##
 File path: src/operator/tensor/elemwise_unary_op_trig.cc
 ##
 @@ -139,7 +139,31 @@ The storage type of ``tan`` output depends upon the input 
storage type:
 )code" ADD_FILELINE)
 .set_attr<nnvm::FGradient>("FGradient", ElemwiseGradUseOut{ "_backward_tan" });
 
-MXNET_OPERATOR_REGISTER_BINARY_WITH_SPARSE_CPU_DR(_backward_tan,
-                                                  unary_bwd<mshadow_op::tan_grad>);
+MXNET_OPERATOR_REGISTER_BINARY_WITH_SPARSE_CPU_DR(_backward_tan,
+                                                  unary_bwd<mshadow_op::tan_grad>)
+.set_attr<nnvm::FGradient>("FGradient",
+  [](const nnvm::NodePtr& n, const std::vector<nnvm::NodeEntry>& ograds) {
+  // NodeEntry{n} : y_grad * f'(x)
+  // n->inputs[0] : y_grad
+  // n->inputs[1] : f(x) = tan(x)
+  // ograds[0] : head_grads
+  // f'(x) = sec^2(x)
+  // f''(x) = 2 * f'(x) * f(x)
+  const std::unordered_map<std::string, std::string> args = {{"scalar", "2.0"}};
+  auto two_y = MakeNode("_mul_scalar", n->attrs.name + "_mul_two", {n->inputs[1]}, &args, &n);
+  auto grad_grad_mid = MakeNode("elemwise_mul", n->attrs.name + "_grad_mul",
 
 Review comment:
   Thanks. About the outputs, I think we should write some documentation 
explaining what we are doing, as I find it non-trivial. Can you help me 
understand the y_grad_grad (first output)?
   
   If you want, we can move the conversation to the dev list or Slack, as the 
PR LGTM.
   
   
![IMG_20190711_122948__01](https://user-images.githubusercontent.com/928489/61079539-f1b55600-a3d7-11e9-9bfe-36b45706e47e.jpg)
   



