[GitHub] [incubator-mxnet] anirudh2290 opened a new pull request #16107: Skip coverage files find for nightly tests

2019-09-05 Thread GitBox
anirudh2290 opened a new pull request #16107: Skip coverage files find for 
nightly tests
URL: https://github.com/apache/incubator-mxnet/pull/16107
 
 
   ## Description ##
   Skip the coverage-file find step for nightly tests. Certain stages were assuming 
ENABLE_TESTCOVERAGE was on, but it has been turned off here: 
https://github.com/apache/incubator-mxnet/pull/15981/files#diff-1335fbaf3930b1438d9be18edb07a1a6L784
   
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [ ] The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to 
the relevant [JIRA issue](https://issues.apache.org/jira/projects/MXNET/issues) 
created (except PRs with tiny changes)
   - [ ] Changes are complete (i.e. I finished coding on this PR)
   - [ ] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [ ] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, 
the source of the dataset, expected performance on the test set, and a reference to 
the original paper if applicable
   - Check the API doc at 
http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [ ] To the best of my knowledge, examples are either not affected by this 
change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [ ] Feature1, tests, (and when applicable, API doc)
   - [ ] Feature2, tests, (and when applicable, API doc)
   
   ## Comments ##
   - If this change is a backward incompatible change, why must this change be 
made.
   - Interesting edge cases to note here
   




[GitHub] [incubator-mxnet] ZhennanQin commented on issue #16100: Infra for tvm op runtime dispatch

2019-09-05 Thread GitBox
ZhennanQin commented on issue #16100: Infra for tvm op runtime dispatch
URL: https://github.com/apache/incubator-mxnet/pull/16100#issuecomment-528712768
 
 
   Just curious: to my knowledge, the tvm op kernel is pre-compiled and 
then linked together with MXNet. How can it be configured according to the 
runtime input shapes? 




[GitHub] [incubator-mxnet] ZhennanQin commented on issue #16106: src/executor/graph_executor.cc:1847: Check failed: arg_names.size() == in_args_map.size() (2 vs. 1)

2019-09-05 Thread GitBox
ZhennanQin commented on issue #16106: src/executor/graph_executor.cc:1847: 
Check failed: arg_names.size() == in_args_map.size() (2 vs. 1)
URL: 
https://github.com/apache/incubator-mxnet/issues/16106#issuecomment-528710380
 
 
   Hi @adc17, this is probably caused by enabling the `MKLDNN` subgraph backend by default 
on master. Subgraph assumes that the input variables have unique names; if they don't, 
the error message you provided will be shown. I don't have the environment to 
run clojure-package, but I guess it's caused by improper test cases that 
use the same name for multiple variables. May I know the names of the cases that failed? 
Then I can have a look at the tests to verify my suspicion.
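
   A hedged Python sketch of that pattern (hypothetical; not a confirmed 
reproduction of the Clojure failure, which goes through `Executor.Bind`):

```python
# Hypothetical sketch: two input variables sharing one name, as described above.
import mxnet as mx

a = mx.sym.Variable('data')
b = mx.sym.Variable('data')   # second variable reuses the name 'data'
out = a + b

print(out.list_arguments())   # ['data', 'data'] -> 2 args, 1 unique name
# A name-keyed in_args_map can hold only one 'data' entry, which is how
# arg_names.size() == 2 ends up compared against in_args_map.size() == 1.
```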




[GitHub] [incubator-mxnet] anirudh2290 commented on issue #16102: Usability degradation

2019-09-05 Thread GitBox
anirudh2290 commented on issue #16102: Usability degradation
URL: 
https://github.com/apache/incubator-mxnet/issues/16102#issuecomment-528701725
 
 
   There are already tests for exception handling: 
https://github.com/apache/incubator-mxnet/blob/master/tests/python/unittest/test_exc_handling.py. 
The problem is that we don't test on macOS, do we? Having said that, this was 
working at some point in time, so as @reminisce suggests, this may be an issue only 
with specific builds. 
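
   For reference, a condensed, hypothetical sketch in the spirit of those tests 
(not the actual test code):

```python
# Backend errors must surface in Python as MXNetError, either at the call
# itself or at the next synchronization point; that is what the tests check.
import mxnet as mx

try:
    a = mx.nd.ones((2, 2))
    b = mx.nd.dot(a, mx.nd.ones((3, 3)))  # invalid shapes for dot
    b.wait_to_read()                      # async errors surface on sync
except mx.base.MXNetError:
    print('caught MXNetError as expected')
```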




[GitHub] [incubator-mxnet] anirudh2290 commented on issue #15148: Very Large CPU RAM Memory Consumption (>1GB)

2019-09-05 Thread GitBox
anirudh2290 commented on issue #15148: Very Large CPU RAM Memory Consumption 
(>1GB)
URL: 
https://github.com/apache/incubator-mxnet/issues/15148#issuecomment-528700879
 
 
   @rvardimon thanks for raising the issue. Have you looked at the profiler 
output from MXNet? When you load the model params, they are initially loaded on CPU, so 
you may see a spike, but the memory should be released after set_params is called. 
If not, the Module API is adding some overhead that should be looked at: @karan6181 
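
   A hedged sketch of the flow being described (the 'model' checkpoint prefix and 
the data shape below are placeholders):

```python
import mxnet as mx

# Params are first materialized on CPU by load_checkpoint.
sym, arg_params, aux_params = mx.model.load_checkpoint('model', 0)
mod = mx.mod.Module(symbol=sym, context=mx.cpu())
mod.bind(data_shapes=[('data', (1, 3, 224, 224))], for_training=False)
mod.set_params(arg_params, aux_params)
# The spike from the initial load should be released once set_params has
# copied the weights into the module; the MXNet profiler can confirm this.
```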




[GitHub] [incubator-mxnet] xinyu-intel commented on a change in pull request #15910: [Quantization]support exclude operators while quantization

2019-09-05 Thread GitBox
xinyu-intel commented on a change in pull request #15910: [Quantization]support 
exclude operators while quantization
URL: https://github.com/apache/incubator-mxnet/pull/15910#discussion_r321569610
 
 

 ##
 File path: src/operator/quantization/quantize_graph_pass.cc
 ##
 @@ -102,28 +102,57 @@ std::vector<NodePtr> OfflineParams(std::vector<NodePtr>&& outputs,
   return outputs;
 }
 
-inline NodePtr NeedQuantize(NodePtr node, const std::unordered_set<std::string>& excluded_nodes) {
+// To check if a node is registered with a computation function on a target device.
+bool isRegistered(NodePtr node, const int& dev_type) {
+  const auto& op = node->op();
+  Context ctx = Context::Create(static_cast<Context::DeviceType>(dev_type), 0);
+  FCompute fcompute = common::GetFCompute<FCompute>(op, "FCompute", ctx);
+  FComputeEx fcomp_ex = common::GetFCompute<FComputeEx>(op, "FComputeEx", ctx);
+  FStatefulCompute fcomputestateful =
+      common::GetFCompute<FStatefulCompute>(op, "FStatefulCompute", ctx);
+  FStatefulComputeEx fcomputestateful_ex =
+      common::GetFCompute<FStatefulComputeEx>(op, "FStatefulComputeEx", ctx);
+  return (fcompute != nullptr || fcomp_ex != nullptr ||
+          fcomputestateful != nullptr || fcomputestateful_ex != nullptr);
+}
+
+inline NodePtr NeedQuantize(
+    NodePtr node, const std::unordered_set<std::string>& excluded_nodes,
+    const std::unordered_set<std::string>& excluded_ops,
+    const int& dev_type) {
   std::unordered_map<NodePtr, NodePtr> quantized_node;
   static auto& quantized_op_map = Op::GetAttr<mxnet::FQuantizedOp>("FQuantizedOp");
   static auto& fexec_type = nnvm::Op::GetAttr<FExecType>("FExecType");
   const auto& op = node->op();
 
   if (op && quantized_op_map.count(op)) {
     bool need = true;
-    if (excluded_nodes.count(node->attrs.name)) {
+    // If the quantized node is not registered with a computation function,
+    // the node will be excluded automatically.
+    auto q_ptr = quantized_op_map[node->op()];
+    auto qnode = q_ptr(node->attrs);
+    if (!isRegistered(qnode, dev_type)) {
+      LOG(INFO) << "Neither FCompute nor FComputeEx registered, " << node->op()->name
+                << " excluded automatically.";
       need = false;
-    } else if (!node->attrs.subgraphs.empty()) {
-      ExecType exec_type = fexec_type.count(op) ? fexec_type[op](node->attrs) : ExecType::kSync;
-      if (exec_type != ExecType::kSubgraphExec) {
-        // This is a fused subgraph node, try to match inner node.
-        CHECK_EQ(node->attrs.subgraphs.size(), 1);
-        auto subgraph_sym = node->attrs.subgraphs[0];
-        DFSVisit(subgraph_sym->outputs, [&](const nnvm::NodePtr& n) {
-          if (n->is_variable()) return;
-          if (excluded_nodes.count(n->attrs.name)) {
-            need = false;
-          }
-        });
+    } else {
+      if (excluded_nodes.count(node->attrs.name) ||
+          excluded_ops.count(node->op()->name)) {
+        need = false;
+      } else if (!node->attrs.subgraphs.empty()) {
+        ExecType exec_type = fexec_type.count(op) ? fexec_type[op](node->attrs) : ExecType::kSync;
+        if (exec_type != ExecType::kSubgraphExec) {
+          // This is a fused subgraph node, try to match inner node.
+          CHECK_EQ(node->attrs.subgraphs.size(), 1);
+          auto subgraph_sym = node->attrs.subgraphs[0];
+          DFSVisit(subgraph_sym->outputs, [&](const nnvm::NodePtr& n) {
+            if (n->is_variable()) return;
+            if (excluded_nodes.count(n->attrs.name) ||
+                excluded_ops.count(node->op()->name)) {
 
 Review comment:
   I found we cannot exclude fused conv layers when 
setting `excluded_op_names=['Convolution']`. Is it necessary to check the inner 
node here?
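
   For context, a hedged sketch of the user-facing call this thread concerns 
(`excluded_op_names` is the argument this PR adds; the checkpoint prefix is a 
placeholder and the exact signature may differ):

```python
import mxnet as mx
from mxnet.contrib.quantization import quantize_model

sym, arg_params, aux_params = mx.model.load_checkpoint('model', 0)
qsym, qarg_params, qaux_params = quantize_model(
    sym=sym, arg_params=arg_params, aux_params=aux_params,
    excluded_op_names=['Convolution'],  # should also cover conv fused into subgraphs
    ctx=mx.cpu(), calib_mode='none')
```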




[GitHub] [incubator-mxnet] wuxun-zhang commented on issue #16104: Faster Transpose 2D

2019-09-05 Thread GitBox
wuxun-zhang commented on issue #16104: Faster Transpose 2D
URL: https://github.com/apache/incubator-mxnet/pull/16104#issuecomment-528699244
 
 
   @ChaiBapchya From your performance table, I also noticed that int32 has 
performance very close to int64 (if I am right, int64 here means using the 
large tensor build) for the non-MKL 2D transpose op in the master repo. Is there any 
performance regression with 3-D or 4-D input shapes? 




[GitHub] [incubator-mxnet] xidulu commented on a change in pull request #16089: [Numpy] Random.choice implemented

2019-09-05 Thread GitBox
xidulu commented on a change in pull request #16089: [Numpy] Random.choice 
implemented
URL: https://github.com/apache/incubator-mxnet/pull/16089#discussion_r321561161
 
 

 ##
 File path: src/operator/numpy/random/np_choice_op.h
 ##
 @@ -0,0 +1,245 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * Copyright (c) 2019 by Contributors
+ * \file np_choice_op.h
+ * \brief Operator for random subset sampling
+ */
+
+#ifndef MXNET_OPERATOR_NUMPY_RANDOM_NP_CHOICE_OP_H_
+#define MXNET_OPERATOR_NUMPY_RANDOM_NP_CHOICE_OP_H_
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include "../../elemwise_op_common.h"
+#include "../../mshadow_op.h"
+#include "../../mxnet_op.h"
+#include "../../operator_common.h"
+#include "../../tensor/elemwise_binary_broadcast_op.h"
+
+namespace mxnet {
+namespace op {
+
+struct NumpyChoiceParam : public dmlc::Parameter<NumpyChoiceParam> {
+  dmlc::optional<int64_t> a;
+  std::string ctx;
+  dmlc::optional<mxnet::Tuple<int>> size;
+  bool replace;
+  bool weighted;
+  DMLC_DECLARE_PARAMETER(NumpyChoiceParam) {
+    DMLC_DECLARE_FIELD(a);
+    DMLC_DECLARE_FIELD(size);
+    DMLC_DECLARE_FIELD(ctx).set_default("cpu");
+    DMLC_DECLARE_FIELD(replace).set_default(true);
+    DMLC_DECLARE_FIELD(weighted).set_default(false);
+  }
+};
+
+inline bool NumpyChoiceOpType(const nnvm::NodeAttrs &attrs,
+                              std::vector<int> *in_attrs,
+                              std::vector<int> *out_attrs) {
+  (*out_attrs)[0] = mshadow::kInt64;
+  return true;
+}
+
+inline bool NumpyChoiceOpShape(const nnvm::NodeAttrs &attrs,
+                               std::vector<TShape> *in_attrs,
+                               std::vector<TShape> *out_attrs) {
+  const NumpyChoiceParam &param = nnvm::get<NumpyChoiceParam>(attrs.parsed);
+  if (param.size.has_value()) {
+    // Size declared.
+    std::vector<dim_t> oshape_vec;
+    const mxnet::Tuple<int> &size = param.size.value();
+    for (int i = 0; i < size.ndim(); ++i) {
+      oshape_vec.emplace_back(size[i]);
+    }
+    SHAPE_ASSIGN_CHECK(*out_attrs, 0, TShape(oshape_vec));
+  } else {
+    SHAPE_ASSIGN_CHECK(*out_attrs, 0, TShape(0, -1));
+  }
+  return true;
+}
+
+template <typename xpu>
+void _sort(float *key, int64_t *data, index_t length);
+
+namespace mxnet_op {
+
+// Uniform sample without replacement.
+struct generate_samples {
+  MSHADOW_XINLINE static void Map(index_t i, int64_t k, unsigned *rands) {
+    rands[i] = rands[i] % (i + k + 1);
+  }
+};
+
+template <typename xpu>
+struct generate_reservoir {
+  MSHADOW_XINLINE static void Map(index_t dummy_index, int64_t *indices,
+                                  unsigned *samples, int64_t nb_iterations,
+                                  int64_t k) {
+    for (int64_t i = 0; i < nb_iterations; i++) {
+      int64_t z = samples[i];
+      if (z < k) {
+        int64_t t = indices[z];
+        indices[z] = indices[i + k];
+        indices[i + k] = t;
+      }
+    }
+  }
+};
+
+// Uniform sample with replacement.
+struct random_indices {
+  MSHADOW_XINLINE static void Map(index_t i, unsigned *samples, int64_t *outs,
+                                  int64_t k) {
+    outs[i] = samples[i] % k;
+  }
+};
+
+// Weighted sample without replacement.
+// Use perturbed Gumbel variates as keys.
+struct generate_keys {
+  MSHADOW_XINLINE static void Map(index_t i, float *uniforms, float *weights) {
+    uniforms[i] = -logf(-logf(uniforms[i])) + logf(weights[i]);
+  }
+};
+
+// Weighted sample with replacement.
+struct categorical_sampling {
+  MSHADOW_XINLINE static void Map(index_t i, float *weights, size_t length,
+                                  float *uniforms, int64_t *outs) {
+    outs[i] = 0;
+    float acc = 0.0;
+    float threshold = uniforms[i];
+    for (size_t k = 0; k < length; k++) {
+      acc += weights[k];
+      if (acc < threshold) {
+        outs[i] += 1;
+      }
+    }
+  }
+};
+
+}  // namespace mxnet_op
+
+template <typename xpu>
+void NumpyChoiceForward(const nnvm::NodeAttrs &attrs, const OpContext &ctx,
+                        const std::vector<TBlob> &inputs,
+                        const std::vector<OpReqType> &req,
+                        const std::vector<TBlob> &outputs) {
+  using namespace mshadow;
+  using namespace mxnet_op;
+  const NumpyChoiceParam &param = nnvm::get<NumpyChoiceParam>(attrs.parsed);
+  Stream<xpu> *s = ctx.get_stream<xpu>();
+  bool replace = param.replace;
+  bool weighted = 
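
For intuition, a hedged NumPy sketch (not MXNet code) of the Gumbel-key trick that 
`generate_keys` implements: perturb the log-weights with Gumbel noise, and the 
indices of the k largest keys form a weighted sample without replacement.

```python
import numpy as np

rng = np.random.default_rng(0)
weights = np.array([0.1, 0.2, 0.3, 0.4])
u = rng.random(weights.shape)
keys = -np.log(-np.log(u)) + np.log(weights)  # same formula as the kernel
k = 2
sample = np.argsort(keys)[-k:]                # indices of the top-k keys
print(sample)
```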

[GitHub] [incubator-mxnet] access2rohit commented on issue #16104: Faster Transpose 2D

2019-09-05 Thread GitBox
access2rohit commented on issue #16104: Faster Transpose 2D
URL: https://github.com/apache/incubator-mxnet/pull/16104#issuecomment-528686527
 
 
   
   
   
   > Will the GPU side be accelerated?
   
   @sxjscience not yet! Work on the GPU side will follow once the CPU path has been 
accelerated. 




[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #16089: [Numpy] Random.choice implemented

2019-09-05 Thread GitBox
haojin2 commented on a change in pull request #16089: [Numpy] Random.choice 
implemented
URL: https://github.com/apache/incubator-mxnet/pull/16089#discussion_r321558598
 
 


[GitHub] [incubator-mxnet] xinyu-intel commented on a change in pull request #16075: Integrate MKL-DNN leakyrelu

2019-09-05 Thread GitBox
xinyu-intel commented on a change in pull request #16075: Integrate MKL-DNN 
leakyrelu
URL: https://github.com/apache/incubator-mxnet/pull/16075#discussion_r321557922
 
 

 ##
 File path: src/operator/leaky_relu.cc
 ##
 @@ -25,27 +25,123 @@
 */
 
 #include "./leaky_relu-inl.h"
+#if MXNET_USE_MKLDNN == 1
+#include "./nn/mkldnn/mkldnn_base-inl.h"
+#include "./nn/mkldnn/mkldnn_ops-inl.h"
+#endif  // MXNET_USE_MKLDNN == 1
 
 #include 
 namespace mxnet {
 namespace op {
-template<>
-Operator *CreateOp<cpu>(LeakyReLUParam param, int dtype) {
-  Operator* op = nullptr;
-  MSHADOW_REAL_TYPE_SWITCH(dtype, DType, {
-    op = new LeakyReLUOp<cpu, DType>(param);
-  });
-  return op;
+
+DMLC_REGISTER_PARAMETER(LeakyReLUParam);
+
+static bool LeakyReLUType(const nnvm::NodeAttrs& attrs,
+                          std::vector<int> *in_type,
+                          std::vector<int> *out_type) {
+  int dtype = -1;
+  for (const int& type : *in_type) {
+    type_assign(&dtype, type);
+  }
+  for (const int& type : *out_type) {
+    type_assign(&dtype, type);
+  }
+  for (size_t i = 0; i < in_type->size(); ++i) {
+    TYPE_ASSIGN_CHECK(*in_type, i, dtype);
+  }
+  for (size_t i = 0; i < out_type->size(); ++i) {
+    TYPE_ASSIGN_CHECK(*out_type, i, dtype);
+  }
+  return dtype != -1;
 }
 
-Operator *LeakyReLUProp::CreateOperatorEx(Context ctx, mxnet::ShapeVector *in_shape,
-                                          std::vector<int> *in_type) const {
-  DO_BIND_DISPATCH(CreateOp, param_, in_type->at(0));
+static bool LeakyReLUShape(const nnvm::NodeAttrs& attrs,
+                           mxnet::ShapeVector *in_shape,
+                           mxnet::ShapeVector *out_shape) {
+  using namespace mshadow;
+  const LeakyReLUParam &param_ = nnvm::get<LeakyReLUParam>(attrs.parsed);
+  if (param_.act_type == leakyrelu::kPReLU) {
+    CHECK_EQ(in_shape->size(), 2U) << "Input:[data, gamma]";
+  } else {
+    CHECK_EQ(in_shape->size(), 1U) << "Input:[data]";
+  }
+  const mxnet::TShape &dshape = in_shape->at(leakyrelu::kData);
+  if (!mxnet::ndim_is_known(dshape)) return false;
+  if (param_.act_type == leakyrelu::kPReLU) {
+    const mxnet::TShape &gshape = in_shape->at(leakyrelu::kGamma);
+    if (!mxnet::ndim_is_known(gshape)) {
+      in_shape->at(leakyrelu::kGamma) = mxnet::TShape(Shape1(dshape[1]));
+    }
+    if (dshape == gshape) {
+      SHAPE_ASSIGN_CHECK(*out_shape, 0, dshape);
+    }
+  }
+  out_shape->clear();
+  out_shape->push_back(dshape);
+  if (param_.act_type == leakyrelu::kRReLU) {
+    out_shape->push_back(dshape);
+  }
+  return true;
 }
 
-DMLC_REGISTER_PARAMETER(LeakyReLUParam);
+#if MXNET_USE_MKLDNN == 1
+static void LeakyReLUComputeExCPU(const nnvm::NodeAttrs& attrs,
+                                  const OpContext& ctx,
+                                  const std::vector<NDArray>& inputs,
+                                  const std::vector<OpReqType>& req,
+                                  const std::vector<NDArray>& outputs) {
+  const LeakyReLUParam& param = nnvm::get<LeakyReLUParam>(attrs.parsed);
+  size_t expected = param.act_type == leakyrelu::kPReLU ? 2 : 1;
+  CHECK_EQ(inputs.size(), expected);
+  if (SupportMKLDNNLeakyRelu(param, inputs[0])) {
+    MKLDNN_OPCHECK_INIT(false, outputs.size(), inputs, outputs);
+    MKLDNNLeakyReluForward(attrs, ctx, inputs[0], req[0], outputs[0]);
+    MKLDNN_OPCHECK_RUN(LeakyReLUCompute<cpu>, attrs, ctx, inputs, req, outputs);
+    return;
+  }
+  FallBackCompute(LeakyReLUCompute<cpu>, attrs, ctx, inputs, req, outputs);
+}
+
+void LeakyReLUGradComputeExCPU(const nnvm::NodeAttrs& attrs,
+                               const OpContext& ctx,
+                               const std::vector<NDArray>& inputs,
+                               const std::vector<OpReqType>& req,
+                               const std::vector<NDArray>& outputs) {
+  const LeakyReLUParam& param = nnvm::get<LeakyReLUParam>(attrs.parsed);
+  if (SupportMKLDNNLeakyRelu(param, inputs[0])) {
+    MKLDNN_OPCHECK_INIT(true, outputs.size(), inputs, outputs);
+    MKLDNNLeakyReluBackward(attrs, ctx, inputs.at(0), inputs.at(1), req[0],
 
 Review comment:
   Use the inputs vector directly; we need the first two of the three NDArrays, so add a 
`CHECK_GE(inputs.size(), 2U)`. Does that make sense?
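
   For context, a hedged sketch of the operator whose MKL-DNN path is being added 
(Python frontend; the values are illustrative):

```python
import mxnet as mx

x = mx.nd.array([[-1.0, 2.0], [3.0, -4.0]])
y = mx.nd.LeakyReLU(data=x, act_type='leaky', slope=0.25)
print(y)  # negative entries scaled by 0.25
```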




[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #16089: [Numpy] Random.choice implemented

2019-09-05 Thread GitBox
haojin2 commented on a change in pull request #16089: [Numpy] Random.choice 
implemented
URL: https://github.com/apache/incubator-mxnet/pull/16089#discussion_r321557692
 
 

 ##
 File path: src/operator/numpy/random/np_choice_op.h
 ##
 @@ -0,0 +1,245 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * Copyright (c) 2019 by Contributors
+ * \file np_choice_op.h
+ * \brief Operator for random subset sampling
+ */
+
+#ifndef MXNET_OPERATOR_NUMPY_RANDOM_NP_CHOICE_OP_H_
+#define MXNET_OPERATOR_NUMPY_RANDOM_NP_CHOICE_OP_H_
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include "../../elemwise_op_common.h"
+#include "../../mshadow_op.h"
+#include "../../mxnet_op.h"
+#include "../../operator_common.h"
+#include "../../tensor/elemwise_binary_broadcast_op.h"
+
+namespace mxnet {
+namespace op {
+
+struct NumpyChoiceParam : public dmlc::Parameter {
+  dmlc::optional a;
+  std::string ctx;
+  dmlc::optional> size;
+  bool replace;
+  bool weighted;
+  DMLC_DECLARE_PARAMETER(NumpyChoiceParam) {
+DMLC_DECLARE_FIELD(a);
+DMLC_DECLARE_FIELD(size);
+DMLC_DECLARE_FIELD(ctx).set_default("cpu");
+DMLC_DECLARE_FIELD(replace).set_default(true);
+DMLC_DECLARE_FIELD(weighted).set_default(false);
+  }
+};
+
+inline bool NumpyChoiceOpType(const nnvm::NodeAttrs ,
+  std::vector *in_attrs,
+  std::vector *out_attrs) {
+  (*out_attrs)[0] = mshadow::kInt64;
+  return true;
+}
+
+inline bool NumpyChoiceOpShape(const nnvm::NodeAttrs ,
+   std::vector *in_attrs,
+   std::vector *out_attrs) {
+  const NumpyChoiceParam  = nnvm::get(attrs.parsed);
+  if (param.size.has_value()) {
+// Size declared.
+std::vector oshape_vec;
+const mxnet::Tuple  = param.size.value();
+for (int i = 0; i < size.ndim(); ++i) {
+  oshape_vec.emplace_back(size[i]);
+}
+SHAPE_ASSIGN_CHECK(*out_attrs, 0, TShape(oshape_vec));
+  } else {
+SHAPE_ASSIGN_CHECK(*out_attrs, 0, TShape(0, -1))
+  }
+  return true;
+}
+
+template 
+void _sort(float *key, int64_t *data, index_t length);
+
+namespace mxnet_op {
+
+// Uniform sample without replacement.
+struct generate_samples {
+  MSHADOW_XINLINE static void Map(index_t i, int64_t k, unsigned *rands) {
+rands[i] = rands[i] % (i + k + 1);
+  }
+};
+
+template 
+struct generate_reservoir {
+  MSHADOW_XINLINE static void Map(index_t dummy_index, int64_t *indices,
+  unsigned *samples, int64_t nb_iterations,
+  int64_t k) {
+for (int64_t i = 0; i < nb_iterations; i++) {
+  int64_t z = samples[i];
+  if (z < k) {
+int64_t t = indices[z];
+indices[z] = indices[i + k];
+indices[i + k] = t;
+  }
+}
+  }
+};
+
+// Uniform sample with replacement.
+struct random_indices {
+  MSHADOW_XINLINE static void Map(index_t i, unsigned *samples, int64_t *outs,
+  int64_t k) {
+outs[i] = samples[i] % k;
+  }
+};
+
+// Weighted sample without replacement.
+// Use perturbed Gumbel variates as keys.
+struct generate_keys {
+  MSHADOW_XINLINE static void Map(index_t i, float *uniforms, float *weights) {
+uniforms[i] = -logf(-logf(uniforms[i])) + logf(weights[i]);
+  }
+};
+
+// Weighted sample with replacement.
+struct categorical_sampling {
+  MSHADOW_XINLINE static void Map(index_t i, float *weights, size_t length,
+  float *uniforms, int64_t *outs) {
+outs[i] = 0;
+float acc = 0.0;
+float threshold = uniforms[i];
+for (size_t k = 0; k < length; k++) {
+  acc += weights[k];
+  if (acc < threshold) {
+outs[i] += 1;
+  }
+}
+  }
+};
+
+}  // namespace mxnet_op
+
+template 
+void NumpyChoiceForward(const nnvm::NodeAttrs , const OpContext ,
+const std::vector ,
+const std::vector ,
+const std::vector ) {
+  using namespace mshadow;
+  using namespace mxnet_op;
+  const NumpyChoiceParam  = nnvm::get(attrs.parsed);
+  Stream *s = ctx.get_stream();
+  bool replace = param.replace;
+  bool weighted = 

[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #16089: [Numpy] Random.choice implemented

2019-09-05 Thread GitBox
haojin2 commented on a change in pull request #16089: [Numpy] Random.choice 
implemented
URL: https://github.com/apache/incubator-mxnet/pull/16089#discussion_r321555272
 
 

 ##
 File path: python/mxnet/numpy/random.py
 ##
 @@ -180,3 +180,60 @@ def multinomial(n, pvals, size=None, **kwargs):
     array([32, 68])
     """
     return _mx_nd_np.random.multinomial(n, pvals, size, **kwargs)
+
+
+def choice(a, size=None, replace=True, p=None, ctx=None, out=None):
+    """Generates a random sample from a given 1-D array
+
+    Parameters
+    ----------
+    a : 1-D array-like or int
+        If an ndarray, a random sample is generated from its elements.
+        If an int, the random sample is generated as if a were np.arange(a)
+    size : int or tuple of ints, optional
+        Output shape.  If the given shape is, e.g., ``(m, n, k)``, then
+        ``m * n * k`` samples are drawn.  Default is None, in which case a
+        single value is returned.
+    replace : boolean, optional
+        Whether the sample is with or without replacement
+    p : 1-D array-like, optional
+        The probabilities associated with each entry in a.
+        If not given the sample assumes a uniform distribution over all
+        entries in a.
+    ctx : Context, optional
+        Device context of output. Default is current context.
+    out : ``ndarray``, optional
 
 Review comment:
   What's the usage of `out` here? I think you can get rid of it here.
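
   For context, a hedged usage sketch of the API documented above (assuming the 
numpy-compatible frontend once this PR lands; outputs are random):

```python
from mxnet import np, npx
npx.set_np()

print(np.random.choice(5, size=(2, 3)))            # uniform over np.arange(5)
print(np.random.choice(5, size=3, replace=False))  # without replacement
weights = np.array([0.1, 0.1, 0.1, 0.1, 0.6])
print(np.random.choice(5, size=4, p=weights))      # weighted, with replacement
```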




[GitHub] [incubator-mxnet] mxnet-label-bot commented on issue #16106: src/executor/graph_executor.cc:1847: Check failed: arg_names.size() == in_args_map.size() (2 vs. 1)

2019-09-05 Thread GitBox
mxnet-label-bot commented on issue #16106: src/executor/graph_executor.cc:1847: 
Check failed: arg_names.size() == in_args_map.size() (2 vs. 1)
URL: 
https://github.com/apache/incubator-mxnet/issues/16106#issuecomment-528678751
 
 
   Hey, this is the MXNet Label Bot. 
Thank you for submitting the issue! I will try and suggest some labels so 
that the appropriate MXNet community members can help resolve it. 
Here are my recommended label(s): Bug




[GitHub] [incubator-mxnet] adc17 opened a new issue #16106: src/executor/graph_executor.cc:1847: Check failed: arg_names.size() == in_args_map.size() (2 vs. 1)

2019-09-05 Thread GitBox
adc17 opened a new issue #16106: src/executor/graph_executor.cc:1847: Check 
failed: arg_names.size() == in_args_map.size() (2 vs. 1)
URL: https://github.com/apache/incubator-mxnet/issues/16106
 
 
   ## Description
   Running `lein test` in the clojure package directory results in two 
occurrences of the following error, causing the test suite to fail: 
`org.apache.mxnet.MXNetError: [01:57:41] src/executor/graph_executor.cc:1847: 
Check failed: arg_names.size() == in_args_map.size() (2 vs. 1)`
   
   ## Environment info (Required)
   
   ```
   --Python Info--
   Version  : 3.6.8
   Compiler : GCC 8.0.1 20180414 (experimental) [trunk revision 259383]
   Build: ('default', 'Jan 14 2019 11:02:34')
   Arch : ('64bit', 'ELF')
   Pip Info---
   No corresponding pip install for current python.
   --MXNet Info---
   No MXNet installed.
   --System Info--
   Platform : Linux-4.15.0-58-generic-x86_64-with-Ubuntu-18.04-bionic
   system   : Linux
   node : ubuntu-c-4-8gib-tor1-01
   release  : 4.15.0-58-generic
   version  : #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019
   --Hardware Info--
   machine  : x86_64
   processor: x86_64
   Architecture:x86_64
   CPU op-mode(s):  32-bit, 64-bit
   Byte Order:  Little Endian
   CPU(s):  4
   On-line CPU(s) list: 0-3
   Thread(s) per core:  1
   Core(s) per socket:  1
   Socket(s):   4
   NUMA node(s):1
   Vendor ID:   GenuineIntel
   CPU family:  6
   Model:   79
   Model name:  Intel(R) Xeon(R) CPU E5-2697A v4 @ 2.60GHz
   Stepping:1
   CPU MHz: 2599.996
   BogoMIPS:5199.99
   Virtualization:  VT-x
   Hypervisor vendor:   KVM
   Virtualization type: full
   L1d cache:   32K
   L1i cache:   32K
   L2 cache:256K
   L3 cache:40960K
   NUMA node0 CPU(s):   0-3
   Flags:   fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge 
mca cmov pat pse36 clflush mmx fxsr sse sse2 ss syscall nx pdpe1gb rdtscp lm 
constant_tsc arch_perfmon rep_good nopl cpuid tsc_known_freq pni pclmulqdq vmx 
ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes 
xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault 
invpcid_single pti ssbd ibrs ibpb tpr_shadow vnmi flexpriority ept vpid 
fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm rdseed adx smap 
xsaveopt md_clear
   --Network Test--
   Setting timeout: 10
   Timing for MXNet: https://github.com/apache/incubator-mxnet, DNS: 0.0030 
sec, LOAD: 0.3819 sec.
   Timing for Gluon Tutorial(en): http://gluon.mxnet.io, DNS: 0.0009 sec, LOAD: 
0.1919 sec.
   Timing for Gluon Tutorial(cn): https://zh.gluon.ai, DNS: 0.0009 sec, LOAD: 
0.2619 sec.
   Timing for FashionMNIST: 
https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/dataset/fashion-mnist/train-labels-idx1-ubyte.gz,
 DNS: 0.0006 sec, LOAD: 0.3695 sec.
   Timing for PYPI: https://pypi.python.org/pypi/pip, DNS: 0.0008 sec, LOAD: 
0.0332 sec.
   Timing for Conda: https://repo.continuum.io/pkgs/free/, DNS: 0.0003 sec, 
LOAD: 0.0335 sec.
   --Environment--
   ```
   
   Package used (Python/R/Scala/Julia): Clojure
   
   For Scala user, please provide:
   1. Java version: (`java -version`)
   ```
   openjdk version "1.8.0_222"
   OpenJDK Runtime Environment (build 1.8.0_222-8u222-b10-1ubuntu1~18.04.1-b10)
   OpenJDK 64-Bit Server VM (build 25.222-b10, mixed mode)
   ```
   2. Maven version: (`mvn -version`)
   ```
   Apache Maven 3.6.0
   Maven home: /usr/share/maven
   Java version: 1.8.0_222, vendor: Private Build, runtime: 
/usr/lib/jvm/java-8-openjdk-amd64/jre
   Default locale: en, platform encoding: UTF-8
   OS name: "linux", version: "4.15.0-58-generic", arch: "amd64", family: "unix"
   ```
   3. Scala runtime if applicable: (`scala -version`)
   ```
   Scala code runner version 2.11.12 -- Copyright 2002-2017, LAMP/EPFL
   ```
   
   ## Build info (Required if built from source)
   
   Compiler (gcc/clang/mingw/visual studio): n/a (using nightly snapshot)
   
   MXNet commit hash: `7f57e8e10c504fe7a463ba695321b11c6dd4912d`
   
   Build config: n/a (using nightly snapshot)
   
   ## Error Message:
   ```
actual: org.apache.mxnet.MXNetError: [01:57:41] 
src/executor/graph_executor.cc:1847: Check failed: arg_names.size() == 
in_args_map.size() (2 vs. 1) :
   Stack trace:
 [bt] (0) /tmp/mxnet4687853178926540653/libmxnet.so(+0x2ac5eb) 
[0x7fb8f7f495eb]
 [bt] (1) /tmp/mxnet4687853178926540653/libmxnet.so(+0x2be36e8) 
[0x7fb8fa8806e8]
 [bt] (2) 
/tmp/mxnet4687853178926540653/libmxnet.so(mxnet::Executor::Bind(nnvm::Symbol, 
mxnet::Context const&, std::map, std::allocator > > const&, std::vector > const&, std::vector > const&, std::vector > const&, std::vector > const&, 

[GitHub] [incubator-mxnet] sxjscience commented on issue #16104: Faster Transpose 2D

2019-09-05 Thread GitBox
sxjscience commented on issue #16104: Faster Transpose 2D
URL: https://github.com/apache/incubator-mxnet/pull/16104#issuecomment-528676506
 
 
   Will the GPU side be accelerated?




[GitHub] [incubator-mxnet] zachgk opened a new pull request #16105: Update python dependencies

2019-09-05 Thread GitBox
zachgk opened a new pull request #16105: Update python dependencies
URL: https://github.com/apache/incubator-mxnet/pull/16105
 
 
   ## Description ##
   Update dependencies to match pip release
   
   @lanking520 @access2rohit 
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [x] Changes are complete (i.e. I finished coding on this PR)
   - [x] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [x] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, 
the source of the dataset, expected performance on the test set, and a reference to 
the original paper if applicable
   - Check the API doc at 
http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [x] To the best of my knowledge, examples are either not affected by this 
change, or have been fixed to be compatible with this change




[incubator-mxnet] branch master updated (d60be31 -> 7f57e8e)

2019-09-05 Thread aaronmarkham
This is an automated email from the ASF dual-hosted git repository.

aaronmarkham pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from d60be31  Fix gradient tensor mutate in 
`{adam/ftrl/rmprop/rmspropalex}_update`. (#15768)
 add 7f57e8e  [WIP] New Website: New Docs [1/3] (#15884)

No new revisions were added by this update.

Summary of changes:
 docs/conf.py   |2 +-
 docs/cpp_docs/Doxyfile | 2370 +++
 docs/cpp_docs/Makefile |   26 +
 docs/python_docs/README.md |   24 +
 docs/python_docs/_static/apache_incubator_logo.png |  Bin 0 -> 16552 bytes
 docs/python_docs/_static/google_analytics.js   |   26 +
 docs/python_docs/_static/minima-social-icons.svg   |   33 +
 docs/python_docs/_static/mxnet-icon.png|  Bin 0 -> 2741 bytes
 docs/python_docs/_static/mxnet.css |  199 ++
 docs/python_docs/_static/mxnet_logo.png|  Bin 0 -> 22390 bytes
 docs/python_docs/environment.yml   |   38 +
 docs/python_docs/python/.gitignore |   20 +
 docs/python_docs/python/Makefile   |   57 +
 docs/python_docs/python/Makefile_sphinx|  216 ++
 docs/python_docs/python/README.md  |  130 ++
 docs/python_docs/python/api/advanced/index.rst |   74 +
 .../python/api/advanced/mxnet.engine.rst   |   34 +
 .../python/api/advanced/mxnet.executor.rst |   34 +
 .../python/api/advanced/mxnet.executor_manager.rst |   38 +
 .../python/api/advanced/mxnet.kvstore_server.rst   |   36 +
 docs/python_docs/python/api/advanced/mxnet.rtc.rst |   36 +
 .../python/api/advanced/mxnet.test_utils.rst   |   91 +
 .../python_docs/python/api/advanced/mxnet.util.rst |   31 +
 .../python_docs/python/api/gluon-related/index.rst |  111 +
 .../python/api/gluon-related/mxnet.autograd.rst|   38 +
 .../python/api/gluon-related/mxnet.context.rst |   33 +
 .../python/api/gluon-related/mxnet.image.rst   |   99 +
 .../python/api/gluon-related/mxnet.initializer.rst |   58 +
 .../python/api/gluon-related/mxnet.io.rst  |   52 +
 .../api/gluon-related/mxnet.kvstore.KVStore.rst|   61 +
 .../api/gluon-related/mxnet.kvstore.create.rst |   23 +
 .../python/api/gluon-related/mxnet.kvstore.rst |   27 +
 .../api/gluon-related/mxnet.lr_scheduler.rst   |   31 +
 .../python/api/gluon-related/mxnet.metric.rst  |   57 +
 .../python/api/gluon-related/mxnet.optimizer.rst   |   55 +
 .../python/api/gluon-related/mxnet.profiler.rst|   54 +
 .../python/api/gluon-related/mxnet.random.rst  |   26 +
 .../python/api/gluon-related/mxnet.recordio.rst|   43 +
 docs/python_docs/python/api/gluon/index.rst|  156 ++
 .../python/api/gluon/mxnet.gluon.Constant.rst  |   23 +
 .../python/api/gluon/mxnet.gluon.HybridBlock.rst   |   40 +
 .../python/api/gluon/mxnet.gluon.ParameterDict.rst |   79 +
 .../python/api/gluon/mxnet.gluon.SymbolBlock.rst   |   28 +
 .../python/api/gluon/mxnet.gluon.Trainer.rst   |   51 +
 .../python/api/gluon/mxnet.gluon.contrib.rst   |  173 ++
 .../python/api/gluon/mxnet.gluon.data.rst  |   50 +
 .../python/api/gluon/mxnet.gluon.data.vision.rst   |   58 +
 .../python/api/gluon/mxnet.gluon.loss.rst  |   40 +
 .../python/api/gluon/mxnet.gluon.model_zoo.rst |  167 ++
 .../python/api/gluon/mxnet.gluon.nn.Block.rst  |   86 +
 .../api/gluon/mxnet.gluon.nn.HybridBlock.rst   |   66 +
 .../api/gluon/mxnet.gluon.nn.SymbolBlock.rst   |   67 +
 .../python/api/gluon/mxnet.gluon.parameter.rst |   68 +
 .../python/api/gluon/mxnet.gluon.utils.rst |   31 +
 docs/python_docs/python/api/gluon/nn.rst   |  156 ++
 docs/python_docs/python/api/gluon/rnn.rst  |   68 +
 docs/python_docs/python/api/index.rst  |   77 +
 docs/python_docs/python/api/ndarray/index.rst  |  124 +
 .../python/api/ndarray/mxnet.ndarray.NDArray.rst   |  310 +++
 .../ndarray/mxnet.ndarray.sparse.CSRNDArray.rst|  203 ++
 .../mxnet.ndarray.sparse.RowSparseNDArray.rst  |  183 ++
 docs/python_docs/python/api/ndarray/routines.rst   |  461 
 .../python/api/ndarray/sparse_routines.rst |  200 ++
 .../python/api/symbol-related/index.rst|   53 +
 .../python/api/symbol-related/mxnet.callback.rst   |   45 +
 .../python/api/symbol-related/mxnet.model.rst  |   45 +
 .../python/api/symbol-related/mxnet.module.rst |   35 +
 .../python/api/symbol-related/mxnet.monitor.rst|   35 +
 .../api/symbol-related/mxnet.visualization.rst |   35 +
 docs/python_docs/python/api/symbol/index.rst   |   65 +
 .../python/api/symbol/mxnet.symbol.Symbol.rst  |  335 +++
 .../python/api/symbol/mxnet.symbol.linalg.rst  |   49 +
 docs/python_docs/python/index.rst  |   52 +
 docs/{ => python_docs/python/scripts}/conf.py  

[GitHub] [incubator-mxnet] aaronmarkham merged pull request #15884: [WIP] New Website: New Docs [1/3]

2019-09-05 Thread GitBox
aaronmarkham merged pull request #15884: [WIP] New Website: New Docs [1/3]
URL: https://github.com/apache/incubator-mxnet/pull/15884
 
 
   




[GitHub] [incubator-mxnet] vandanavk commented on issue #14942: ONNX export: Slice op - Handle None value for ends

2019-09-05 Thread GitBox
vandanavk commented on issue #14942: ONNX export: Slice op - Handle None value 
for ends
URL: https://github.com/apache/incubator-mxnet/pull/14942#issuecomment-528660874
 
 
   @zhreshold for review/merge




[GitHub] [incubator-mxnet] ChaiBapchya commented on issue #16104: Faster Transpose 2D

2019-09-05 Thread GitBox
ChaiBapchya commented on issue #16104: Faster Transpose 2D
URL: https://github.com/apache/incubator-mxnet/pull/16104#issuecomment-528657079
 
 
   @apeforest 
   
   Here numpy transpose calls the same TransposeImpl function that I have 
already handled.
   
   Function Call
   
https://github.com/apache/incubator-mxnet/blob/d60be31df6b05385cd8adc4b8b26b34a33e7693c/src/operator/numpy/np_matrix_op-inl.h#L59
   
   Function Definition
   
https://github.com/apache/incubator-mxnet/blob/d60be31df6b05385cd8adc4b8b26b34a33e7693c/src/operator/tensor/matrix_op-inl.h#L261




[GitHub] [incubator-mxnet] vishaalkapoor removed a comment on issue #15884: [WIP] New Website: New Docs [1/3]

2019-09-05 Thread GitBox
vishaalkapoor removed a comment on issue #15884: [WIP] New Website: New Docs 
[1/3]
URL: https://github.com/apache/incubator-mxnet/pull/15884#issuecomment-528610290
 
 
   Approved!




[GitHub] [incubator-mxnet] vishaalkapoor commented on issue #15884: [WIP] New Website: New Docs [1/3]

2019-09-05 Thread GitBox
vishaalkapoor commented on issue #15884: [WIP] New Website: New Docs [1/3]
URL: https://github.com/apache/incubator-mxnet/pull/15884#issuecomment-528610290
 
 
   Approved!




[GitHub] [incubator-mxnet] access2rohit commented on a change in pull request #16104: Faster Transpose 2D

2019-09-05 Thread GitBox
access2rohit commented on a change in pull request #16104: Faster Transpose 2D
URL: https://github.com/apache/incubator-mxnet/pull/16104#discussion_r321501352
 
 

 ##
 File path: src/operator/tensor/matrix_op-inl.h
 ##
 @@ -257,6 +257,29 @@ struct TransposeParam : public dmlc::Parameter<TransposeParam> {
   }
 };
 
+
+template<typename DType>
+MSHADOW_XINLINE void Transpose2D(DType *in, DType *out, index_t shape_0, index_t shape_1) {
+// ensure cache line hits and prevent cache miss for any configuration
+index_t blocksize = 32;
+index_t n = shape_0;
+index_t p = shape_1;
+
+for (index_t i = 0; i < n; i += blocksize) {
+  #pragma omp parallel for
+for (index_t j = 0; j < p; j += blocksize) {
+// transpose the block
+#pragma unroll 4
 
 Review comment:
   Add a comment explaining why we unroll by 4?




[GitHub] [incubator-mxnet] access2rohit commented on issue #16104: Faster Transpose 2D

2019-09-05 Thread GitBox
access2rohit commented on issue #16104: Faster Transpose 2D
URL: https://github.com/apache/incubator-mxnet/pull/16104#issuecomment-528606194
 
 
   Can you add more description of how you are achieving the faster transpose?
   You can either paste a link that describes your approach or write it briefly 
yourself.




[GitHub] [incubator-mxnet] access2rohit commented on a change in pull request #16104: Faster Transpose 2D

2019-09-05 Thread GitBox
access2rohit commented on a change in pull request #16104: Faster Transpose 2D
URL: https://github.com/apache/incubator-mxnet/pull/16104#discussion_r321501256
 
 

 ##
 File path: src/operator/tensor/matrix_op-inl.h
 ##
 @@ -257,6 +257,29 @@ struct TransposeParam : public dmlc::Parameter<TransposeParam> {
   }
 };
 
+
+template<typename DType>
+MSHADOW_XINLINE void Transpose2D(DType *in, DType *out, index_t shape_0, index_t shape_1) {
+// ensure cache line hits and prevent cache miss for any configuration
+index_t blocksize = 32;
 
 Review comment:
   Also add a comment explaining why you chose a blocksize of 32.




[GitHub] [incubator-mxnet] access2rohit commented on a change in pull request #16104: Faster Transpose 2D

2019-09-05 Thread GitBox
access2rohit commented on a change in pull request #16104: Faster Transpose 2D
URL: https://github.com/apache/incubator-mxnet/pull/16104#discussion_r321500521
 
 

 ##
 File path: src/operator/tensor/matrix_op-inl.h
 ##
 @@ -257,6 +257,29 @@ struct TransposeParam : public dmlc::Parameter<TransposeParam> {
   }
 };
 
+
+template<typename DType>
+MSHADOW_XINLINE void Transpose2D(DType *in, DType *out, index_t shape_0, index_t shape_1) {
+// ensure cache line hits and prevent cache miss for any configuration
+index_t blocksize = 32;
+index_t n = shape_0;
+index_t p = shape_1;
+
+for (index_t i = 0; i < n; i += blocksize) {
+  #pragma omp parallel for
 
 Review comment:
   nit: align #pragma with underlying for loop 




[GitHub] [incubator-mxnet] access2rohit commented on a change in pull request #16104: Faster Transpose 2D

2019-09-05 Thread GitBox
access2rohit commented on a change in pull request #16104: Faster Transpose 2D
URL: https://github.com/apache/incubator-mxnet/pull/16104#discussion_r321500319
 
 

 ##
 File path: src/operator/tensor/matrix_op-inl.h
 ##
 @@ -257,6 +257,29 @@ struct TransposeParam : public dmlc::Parameter<TransposeParam> {
   }
 };
 
+
+template<typename DType>
+MSHADOW_XINLINE void Transpose2D(DType *in, DType *out, index_t shape_0, index_t shape_1) {
+// ensure cache line hits and prevent cache miss for any configuration
+index_t blocksize = 32;
 
 Review comment:
   nit: indent the body by 2 spaces?




[GitHub] [incubator-mxnet] access2rohit commented on a change in pull request #16104: Faster Transpose 2D

2019-09-05 Thread GitBox
access2rohit commented on a change in pull request #16104: Faster Transpose 2D
URL: https://github.com/apache/incubator-mxnet/pull/16104#discussion_r321500142
 
 

 ##
 File path: src/operator/tensor/matrix_op-inl.h
 ##
 @@ -257,6 +257,29 @@ struct TransposeParam : public dmlc::Parameter<TransposeParam> {
   }
 };
 
+
+template<typename DType>
+MSHADOW_XINLINE void Transpose2D(DType *in, DType *out, index_t shape_0, index_t shape_1) {
+// ensure cache line hits and prevent cache miss for any configuration
+index_t blocksize = 32;
+index_t n = shape_0;
+index_t p = shape_1;
+
+for (index_t i = 0; i < n; i += blocksize) {
+  #pragma omp parallel for
+for (index_t j = 0; j < p; j += blocksize) {
+// transpose the block
+#pragma unroll 4
+for (index_t a = 0; a < blocksize && j + a < n; ++a) {
+  for (index_t b = 0; b < blocksize && i + b < p; ++b) {
+  out[(j + a) * n + i + b] = in[(i + b) * p + (j + a)];
+  }
+}
+}
+}
 
 Review comment:
   nit: indent
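
   For intuition, a hedged NumPy model (not the C++ above) of the cache-blocked 
transpose: copy fixed-size tiles so reads and writes stay cache-resident.

```python
import numpy as np

def blocked_transpose(a, blocksize=32):
    n, p = a.shape
    out = np.empty((p, n), dtype=a.dtype)
    for i in range(0, n, blocksize):
        for j in range(0, p, blocksize):
            # NumPy slicing clamps at the edges, handling ragged tiles.
            out[j:j+blocksize, i:i+blocksize] = a[i:i+blocksize, j:j+blocksize].T
    return out

a = np.arange(6000).reshape(60, 100)
assert (blocked_transpose(a) == a.T).all()
```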




[GitHub] [incubator-mxnet] larroy commented on issue #16040: Revert accidental change to CMakelists

2019-09-05 Thread GitBox
larroy commented on issue #16040: Revert accidental change to CMakelists
URL: https://github.com/apache/incubator-mxnet/pull/16040#issuecomment-528597448
 
 
   @mxnet-label-bot add [pr-awaiting-merge]




[incubator-mxnet-site] branch asf-site updated: Bump the publish timestamp.

2019-09-05 Thread marcoabreu
This is an automated email from the ASF dual-hosted git repository.

marcoabreu pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new 4021360  Bump the publish timestamp.
4021360 is described below

commit 402136027d99bdc7354dffdc692bb32efdc122bd
Author: mxnet-ci 
AuthorDate: Thu Sep 5 21:02:41 2019 +

Bump the publish timestamp.
---
 date.txt | 1 +
 1 file changed, 1 insertion(+)

diff --git a/date.txt b/date.txt
new file mode 100644
index 000..091ba21
--- /dev/null
+++ b/date.txt
@@ -0,0 +1 @@
+Thu Sep  5 21:02:41 UTC 2019



[GitHub] [incubator-mxnet] apeforest commented on issue #16104: Faster Transpose 2D

2019-09-05 Thread GitBox
apeforest commented on issue #16104: Faster Transpose 2D
URL: https://github.com/apache/incubator-mxnet/pull/16104#issuecomment-528574060
 
 
   Please fix the test_numpy_op.test_np_transpose




[GitHub] [incubator-mxnet] apeforest opened a new pull request #14251: [WIP] Fix unary operator ceil/floor/trunc when data type is integer

2019-09-05 Thread GitBox
apeforest opened a new pull request #14251: [WIP] Fix unary operator 
ceil/floor/trunc when data type is integer
URL: https://github.com/apache/incubator-mxnet/pull/14251
 
 
   ## Description ##
   Many operators implicitly cast data to float and return inaccurate results. 
This PR will fix the issue reported in 
https://github.com/apache/incubator-mxnet/issues/13220. 
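
   A hedged illustration of the precision problem (plain NumPy; the exact ops 
affected are listed in the linked issue):

```python
# Routing integers through float collapses large values, so ceil/floor/trunc
# on integer input should be identity ops that never leave the integer domain.
import numpy as np

x = np.int64(2**24 + 1)
print(np.float32(x))                       # 16777216.0, the +1 is lost
print(np.float32(x) == np.float32(2**24))  # True
```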
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [ ] The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to the 
relevant [JIRA issue](https://issues.apache.org/jira/projects/MXNET/issues) 
created (except PRs with tiny changes)
   - [X] Changes are complete (i.e. I finished coding on this PR)
   - [X] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [ ] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, 
the source of the dataset, expected performance on test set and reference to 
the original paper if applicable
   - Check the API doc at 
http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [ ] To the best of my knowledge, examples are either not affected by this 
change, or have been fixed to be compatible with this change
   
   ### Changes ###
   
   
   ## Comments ##
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[incubator-mxnet-site] branch asf-site updated: Bump the publish timestamp.

2019-09-05 Thread marcoabreu
This is an automated email from the ASF dual-hosted git repository.

marcoabreu pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new c5c67a1  Bump the publish timestamp.
c5c67a1 is described below

commit c5c67a15366bd25c1e9e78d9e35abe17b8f4da26
Author: mxnet-ci 
AuthorDate: Thu Sep 5 19:33:31 2019 +

Bump the publish timestamp.
---
 date.txt | 1 +
 1 file changed, 1 insertion(+)

diff --git a/date.txt b/date.txt
new file mode 100644
index 000..57675e8
--- /dev/null
+++ b/date.txt
@@ -0,0 +1 @@
+Thu Sep  5 19:33:31 UTC 2019



[GitHub] [incubator-mxnet] ChaiBapchya opened a new pull request #16104: Faster Transpose 2D

2019-09-05 Thread GitBox
ChaiBapchya opened a new pull request #16104: Faster Transpose 2D
URL: https://github.com/apache/incubator-mxnet/pull/16104
 
 
   ## Description ##
   Faster 2D transpose
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [x] Changes are complete (i.e. I finished coding on this PR)
   - [x] All changes have test coverage:
   - [x] Code is well-documented: 
   - [x] To the best of my knowledge, examples are either not affected by this 
change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [ ] Bypass expression template way of execution
   - [ ] Implement cache-optimal transpose
   
   ## Performance ##
   | Shape        | Branch        | MKL | dtype | P50/Median | P90 Time   | P99 Time   | Avg Time   |
   |--------------|---------------|-----|-------|------------|------------|------------|------------|
   | (1024, 1024) | new transpose | off | int64 | 2.97046    | 3.02903    | 5.23159    | 3.12827    |
   |              | master        | off | int64 | 7.87614    | 7.92953    | 10.18385   | 8.04834    |
   | (1, 1)       | new transpose | off | int64 | 804.39574  | 836.96015  | 858.47099  | 812.50145  |
   |              | master        | off | int64 | 2973.75703 | 3045.19813 | 3070.58677 | 2980.31202 |
   | (1024, 1024) | new transpose | off | int32 | 2.97993    | 3.04809    | 5.3323     | 3.14604    |
   |              | master        | off | int32 | 7.417      | 7.47346    | 9.63628    | 7.58372    |
   | (1, 1)       | new transpose | off | int32 | 806.47784  | 845.70507  | 868.55227  | 815.48621  |
   |              | master        | off | int32 | 2941.61317 | 3007.3768  | 3053.5961  | 2966.09583 |
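
   For context on the cache-optimal approach: the PR's kernel is C++, but the blocking idea can be sketched in NumPy-flavored Python (the function name and `blocksize` here are illustrative placeholders, not the PR's actual choices):
   ```python
   import numpy as np

   def blocked_transpose(inp, blocksize=32):
       """Reference sketch of a cache-blocked 2D transpose.

       Walking the matrix in blocksize x blocksize tiles keeps reads and
       writes within cache, which is where the speedup over the naive
       element-by-element loop comes from.
       """
       p, n = inp.shape
       out = np.empty((n, p), dtype=inp.dtype)
       for j in range(0, n, blocksize):
           for i in range(0, p, blocksize):
               # transpose one tile; slicing handles the ragged edges
               out[j:j + blocksize, i:i + blocksize] = \
                   inp[i:i + blocksize, j:j + blocksize].T
       return out

   assert (blocked_transpose(np.arange(12).reshape(3, 4)) ==
           np.arange(12).reshape(3, 4).T).all()
   ```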


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] zhreshold commented on issue #15629: DataLoader Error using Multi processing

2019-09-05 Thread GitBox
zhreshold commented on issue #15629: DataLoader Error using Multi processing
URL: 
https://github.com/apache/incubator-mxnet/issues/15629#issuecomment-528537790
 
 
   I can confirm that this can be reproduced with anaconda environment + pip 
installed mxnet. 
   However, 
   - locally built mxnet + python3.7 have no issue.
   - mac + python3.7 + pip installed mxnet have no issue.
   
So I suggest transferring this issue to the pypi package building pipeline to see 
whether the statically linked libs are causing the error. 
   @szha 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[incubator-mxnet] branch master updated (d0fa8c0 -> d60be31)

2019-09-05 Thread apeforest
This is an automated email from the ASF dual-hosted git repository.

apeforest pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from d0fa8c0  Test large vector mean operator and fix a few bugs (#16079)
 add d60be31  Fix gradient tensor mutate in 
`{adam/ftrl/rmprop/rmspropalex}_update`. (#15768)

No new revisions were added by this update.

Summary of changes:
 src/operator/optimizer_op-inl.h   | 279 ++
 tests/python/unittest/test_ndarray.py |  67 +++-
 2 files changed, 216 insertions(+), 130 deletions(-)



[GitHub] [incubator-mxnet] sxjscience commented on issue #15759: [Optimizer][Bug] Gradient is mutated in the Adam optimizer

2019-09-05 Thread GitBox
sxjscience commented on issue #15759: [Optimizer][Bug] Gradient is mutated in 
the Adam optimizer
URL: 
https://github.com/apache/incubator-mxnet/issues/15759#issuecomment-528517102
 
 
   Thanks @kshitij12345 !


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] apeforest closed issue #15759: [Optimizer][Bug] Gradient is mutated in the Adam optimizer

2019-09-05 Thread GitBox
apeforest closed issue #15759: [Optimizer][Bug] Gradient is mutated in the Adam 
optimizer
URL: https://github.com/apache/incubator-mxnet/issues/15759
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] apeforest merged pull request #15768: Fix gradient tensor mutate in `{adam/ftrl/rmprop/rmspropalex}_update`.

2019-09-05 Thread GitBox
apeforest merged pull request #15768: Fix gradient tensor mutate in 
`{adam/ftrl/rmprop/rmspropalex}_update`.
URL: https://github.com/apache/incubator-mxnet/pull/15768
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] mxnet-label-bot commented on issue #16103: Different default values for NDArray and Symbol in random.multinomial

2019-09-05 Thread GitBox
mxnet-label-bot commented on issue #16103: Different default values for NDArray 
and Symbol in random.multinomial
URL: 
https://github.com/apache/incubator-mxnet/issues/16103#issuecomment-528515545
 
 
   Hey, this is the MXNet Label Bot. 
Thank you for submitting the issue! I will try and suggest some labels so 
that the appropriate MXNet community members can help resolve it. 
Here are my recommended label(s): Bug


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] igolan opened a new issue #16103: Different default values for NDArray and Symbol in random.multinomial

2019-09-05 Thread GitBox
igolan opened a new issue #16103: Different default values for NDArray and 
Symbol in random.multinomial
URL: https://github.com/apache/incubator-mxnet/issues/16103
 
 
   ## Description
   ```mxnet.symbol.random.multinomial```
and 
   ```mxnet.ndarray.random.multinomial```
have different default values for the argument get_prob.
   That's confusing, especially when using hybrid blocks and calling
   ```F.random.multinomial```
without specifying a value for get_prob.
   
   ## Environment info (Required)
   N/A
   
   Package used (Python/R/Scala/Julia):
   MXNET1.5 python API
   
   ## Build info (Required if built from source)
   N/A
   
   ## Error Message:
   N/A
   
   ## Minimum reproducible example
   N/A
   
   ## Steps to reproduce
   
   
https://mxnet.incubator.apache.org/versions/master/api/python/ndarray/random.html#mxnet.ndarray.random.multinomial
   and
   
https://mxnet.incubator.apache.org/versions/master/api/python/symbol/symbol.html#mxnet.symbol.random.multinomial
   
   ## What have you tried to solve it?
   N/A
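
   A minimal sketch of a workaround for the mismatch described above (the block and names are illustrative): passing `get_prob` explicitly makes the call behave identically whether `F` resolves to `mx.ndarray` or `mx.symbol`.
   ```python
   import mxnet as mx
   from mxnet.gluon import HybridBlock

   class Sampler(HybridBlock):
       def hybrid_forward(self, F, data):
           # explicit get_prob sidesteps the differing defaults
           return F.random.multinomial(data, get_prob=False)

   net = Sampler()
   net.hybridize()
   print(net(mx.nd.array([[0.1, 0.9]])))
   ```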


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] reminisce commented on issue #16101: [Bug, Feature Request] mx.nd.where()

2019-09-05 Thread GitBox
reminisce commented on issue #16101: [Bug,  Feature Request]  mx.nd.where()
URL: 
https://github.com/apache/incubator-mxnet/issues/16101#issuecomment-528511224
 
 
   Scalar tensors like the one below are not supported in the `mx.nd` module. We need to 
implement a numpy-compatible `where` op for this purpose. I will add this op to 
the list and prioritize it.
   ```python
   y = mx.nd.array(4)  # y.shape: ()
   ```
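
   Until the numpy-compatible op lands, one possible workaround (a sketch, not the planned API) is to broadcast the scalar by hand so both branches match the condition's shape:
   ```python
   import mxnet as mx

   cond = mx.nd.array([1, 0, 1])
   x = mx.nd.array([10, 20, 30])
   y = mx.nd.full(x.shape, 4)      # stand-in for the unsupported scalar tensor
   print(mx.nd.where(cond, x, y))  # [10.  4. 30.]
   ```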


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] reminisce commented on issue #16102: Usability degradation

2019-09-05 Thread GitBox
reminisce commented on issue #16102: Usability degradation
URL: 
https://github.com/apache/incubator-mxnet/issues/16102#issuecomment-528510590
 
 
   I suspect this may be related to some variation in the nightly build 
platforms. I built the latest master from source on macOS; the error message 
and stack trace are printed correctly. However, the latest nightly build 
gives the seg fault, and the stack trace is unrelated to the code.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] sxjscience commented on issue #16102: Usability degradation

2019-09-05 Thread GitBox
sxjscience commented on issue #16102: Usability degradation
URL: 
https://github.com/apache/incubator-mxnet/issues/16102#issuecomment-528500857
 
 
   I think we should also test for exceptions. We need to test:
   ```python
   import mxnet.numpy as np
   from mxnet.base import MXNetError
   try:
   a = np.ones((10, 10))
   b = a.reshape((1,))
   except MXNetError:
   pass
   except:
   raise
   ```
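
   If the suite runs under a pytest-style runner, the same check can be written more compactly (a sketch under that assumption):
   ```python
   import pytest
   import mxnet.numpy as np
   from mxnet.base import MXNetError

   def test_reshape_raises():
       a = np.ones((10, 10))
       with pytest.raises(MXNetError):
           a.reshape((1,))
   ```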


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] apeforest commented on issue #15674: Straggler in latest mxnet when training with distributed parameter server

2019-09-05 Thread GitBox
apeforest commented on issue #15674: Straggler in latest mxnet when training 
with distributed parameter server
URL: 
https://github.com/apache/incubator-mxnet/issues/15674#issuecomment-528495094
 
 
   @YouhuiBai Thanks for the explanation. When you say the "current newest mxnet" 
has a straggler, are you implying there was no such behavior before? If so, do 
you remember the last working version? I can trace back to see which PR 
introduced this. 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] ngupta23 commented on issue #5979: Same prediction values for linear regression while using mxnet in R

2019-09-05 Thread GitBox
ngupta23 commented on issue #5979: Same prediction values for linear regression 
while using mxnet in R
URL: 
https://github.com/apache/incubator-mxnet/issues/5979#issuecomment-528492660
 
 
   I had a similar problem with all outputs predicting the same value. For my 
case, there were a couple of things that I had to change to fix this. 
   
   Change the architecture of the neural network (number of neurons in the 
layers). There was no fixed rule that worked for me. In some cases, when I 
increased the number of neurons, the predictions were the same for all 
observations. In other cases, when I increased the neurons further, the 
predictions were better.
   
   What also helped was increasing the number of epochs (num.round). I was 
initially using the default 10, after increasing it to 100 and above, it gave 
better results. Maybe 10 epochs was not enough to update the weights enough 
from the random initialization.
   
   Another thing that impacted the results was the learning rate. Decreasing it 
too much (1e-5 for my dataset) caused me to get the same predictions for all 
observations. I had to keep it at around 1e-3 to make it work.
   
   All the above changes were made orthogonally (making a change to a single 
hyperparameter and observing the change in the predictions). It is possible that 
changing these hyperparameters simultaneously might lead to other conclusions. 
But the bottom line is that changing the architecture and hyperparameter values 
will solve the issue; it just might take a while to figure out the 
right range for the hyperparameters.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] mxnet-label-bot commented on issue #16102: Usability degradation

2019-09-05 Thread GitBox
mxnet-label-bot commented on issue #16102: Usability degradation
URL: 
https://github.com/apache/incubator-mxnet/issues/16102#issuecomment-528490759
 
 
   Hey, this is the MXNet Label Bot. 
Thank you for submitting the issue! I will try and suggest some labels so 
that the appropriate MXNet community members can help resolve it. 



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] eric-haibin-lin opened a new issue #16102: Usability degradation

2019-09-05 Thread GitBox
eric-haibin-lin opened a new issue #16102: Usability degradation
URL: https://github.com/apache/incubator-mxnet/issues/16102
 
 
   ```
   >>> pip install mxnet==1.6.0b20190821
   ➜  gluon-nlp git:(allow) ✗ python -c 'import mxnet as mx; a = 
mx.np.ones((10,)); print(a.reshape((1,)))'
   Traceback (most recent call last):
 File "", line 1, in 
 File 
"/Users/haibilin/miniconda3/lib/python3.7/site-packages/mxnet/numpy/multiarray.py",
 line 637, in reshape
   return _mx_np_op.reshape(self, newshape=args[0], order=order)
 File "", line 39, in reshape
 File 
"/Users/haibilin/miniconda3/lib/python3.7/site-packages/mxnet/_ctypes/ndarray.py",
 line 100, in _imperative_invoke
   ctypes.byref(out_stypes)))
 File 
"/Users/haibilin/miniconda3/lib/python3.7/site-packages/mxnet/base.py", line 
254, in check_call
   raise MXNetError(py_str(_LIB.MXGetLastError()))
   mxnet.base.MXNetError: [10:26:08] src/operator/numpy/np_matrix_op.cc:111: 
Check failed: src.Size() == dst->Size() (10 vs. 1) : Cannot reshape array of 
size 10 into shape [1]
   Stack trace:
 [bt] (0) 1   libmxnet.so 0x00010e98a649 
mxnet::op::NDArrayOpProp::~NDArrayOpProp() + 4473
 [bt] (1) 2   libmxnet.so 0x00010f397f61 
mxnet::op::FullyConnectedComputeExCPU(nnvm::NodeAttrs const&, mxnet::OpContext 
const&, std::__1::vector > 
const&, std::__1::vector > const&, 
std::__1::vector > const&) 
+ 6826753
 [bt] (2) 3   libmxnet.so 0x00010f39854f 
mxnet::op::FullyConnectedComputeExCPU(nnvm::NodeAttrs const&, mxnet::OpContext 
const&, std::__1::vector > 
const&, std::__1::vector > const&, 
std::__1::vector > const&) 
+ 6828271
 [bt] (3) 4   libmxnet.so 0x0001103aa79f 
mxnet::imperative::SetShapeType(mxnet::Context const&, nnvm::NodeAttrs const&, 
std::__1::vector > 
const&, std::__1::vector 
> const&, mxnet::DispatchMode*) + 1583
 [bt] (4) 5   libmxnet.so 0x0001103a929c 
mxnet::Imperative::Invoke(mxnet::Context const&, nnvm::NodeAttrs const&, 
std::__1::vector > 
const&, std::__1::vector 
> const&) + 716
 [bt] (5) 6   libmxnet.so 0x0001102eadbe 
SetNDInputsOutputs(nnvm::Op const*, std::__1::vector >*, std::__1::vector >*, int, void* const*, int*, int, int, 
void***) + 1582
 [bt] (6) 7   libmxnet.so 0x0001102ebb00 
MXImperativeInvokeEx + 176
 [bt] (7) 8   libffi.6.dylib  0x0001077f8884 
ffi_call_unix64 + 76
   
   ```
   But a recent build hides the error message:
   ```
   ➜  gluon-nlp git:(allow) ✗ pip install mxnet==1.6.0b20190822
   
   ➜  gluon-nlp git:(allow) ✗ python -c 'import mxnet as mx; a = 
mx.np.ones((10,)); print(a.reshape((1,)))'
   
   Segmentation fault: 11
   
   Stack trace:
 [bt] (0) 1   libmxnet.so 0x00011255cdb0 
mxnet::Storage::Get() + 7968
 [bt] (1) 2   libsystem_platform.dylib0x7fffb27a7b3a 
_sigtramp + 26
 [bt] (2) 3   ??? 0x3419c9fe3dc10094 0x0 + 
3754253858184954004
 [bt] (3) 4   libmxnet.so 0x00011270c1d6 
mxnet::Storage::Get() + 1774406
 [bt] (4) 5   libmxnet.so 0x000110109dee 
std::__1::__tree, std::__1::allocator >, 
mxnet::NDArrayFunctionReg*>, 
std::__1::__map_value_compare, std::__1::allocator >, 
std::__1::__value_type, std::__1::allocator >, 
mxnet::NDArrayFunctionReg*>, std::__1::less, std::__1::allocator > >, true>, 
std::__1::allocator, std::__1::allocator >, 
mxnet::NDArrayFunctionReg*> > 
>::destroy(std::__1::__tree_node, std::__1::allocator >, 
mxnet::NDArrayFunctionReg*>, void*>*) + 1822
 [bt] (5) 6   libmxnet.so 0x000110e91c61 
mxnet::op::FullyConnectedComputeExCPU(nnvm::NodeAttrs const&, mxnet::OpContext 
const&, std::__1::vector > 
const&, std::__1::vector > const&, 
std::__1::vector > const&) 
+ 10335393
 [bt] (6) 7   libmxnet.so 0x000110e9224f 
mxnet::op::FullyConnectedComputeExCPU(nnvm::NodeAttrs const&, mxnet::OpContext 
const&, std::__1::vector > 
const&, std::__1::vector > const&, 
std::__1::vector > const&) 
+ 10336911
 [bt] (7) 8   libmxnet.so 0x000111ea50ef 
mxnet::imperative::SetShapeType(mxnet::Context const&, nnvm::NodeAttrs const&, 
std::__1::vector > 
const&, std::__1::vector 
> const&, mxnet::DispatchMode*) + 1583
 [bt] (8) 9   libmxnet.so 0x000111ea3bec 
mxnet::Imperative::Invoke(mxnet::Context const&, nnvm::NodeAttrs const&, 
std::__1::vector > 
const&, std::__1::vector 
> const&) + 716
   ```


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, 

[GitHub] [incubator-mxnet] szha commented on a change in pull request #16097: [numpy] __array_function__ protocol

2019-09-05 Thread GitBox
szha commented on a change in pull request #16097: [numpy] __array_function__ 
protocol
URL: https://github.com/apache/incubator-mxnet/pull/16097#discussion_r321388950
 
 

 ##
 File path: python/mxnet/array_function_protocol.py
 ##
 @@ -0,0 +1,98 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+"""Utils for registering NumPy array function protocol for mxnet.numpy ops."""
+
+import numpy as onp
+from . import numpy as mx_np  # pylint: disable=pylint=reimported
+from .numpy.multiarray import _NUMPY_ARRAY_FUNCTION_DICT
+
+
+def _implements(numpy_function):
+"""Register an __array_function__ implementation for MyArray objects."""
+def decorator(func):
+_NUMPY_ARRAY_FUNCTION_DICT[numpy_function] = func
+return func
+return decorator
+
+
+_NUMPY_ARRAY_FUNCTION_LIST = [
 
 Review comment:
   Awesome. We can use this list to track full compatibility with numpy 
operators once we start measuring it.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] zhanghang1989 commented on a change in pull request #16053: Fixes #15543

2019-09-05 Thread GitBox
zhanghang1989 commented on a change in pull request #16053: Fixes #15543
URL: https://github.com/apache/incubator-mxnet/pull/16053#discussion_r320903141
 
 

 ##
 File path: tests/python/unittest/test_optimizer.py
 ##
 @@ -399,11 +395,7 @@ def update(self, index, weight, grad, state):
 if self.momentum == 0.0:
 weight32[:] += -lr * (grad32 + wd * weight32)
 else:
-mom[:] *= self.momentum
-weight32[:] -= self.momentum * mom[:]
-grad32 += wd * weight32
-grad32 *= lr
-weight32[:] -= (self.momentum + 1) * grad32
+weight32[:] += (self.momentum**2 * mom) - lr*(self.momentum+1)*(grad32 + wd*weight32)
 
 Review comment:
   Should multiply the first term by `lr` as well: `weight32[:] += lr*(self.momentum**2 * mom) - lr*(self.momentum+1)*(grad32 + wd*weight32)`


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] zhanghang1989 commented on a change in pull request #16053: Fixes #15543

2019-09-05 Thread GitBox
zhanghang1989 commented on a change in pull request #16053: Fixes #15543
URL: https://github.com/apache/incubator-mxnet/pull/16053#discussion_r321386949
 
 

 ##
 File path: tests/python/unittest/test_optimizer.py
 ##
 @@ -396,6 +397,7 @@ def update(self, index, weight, grad, state):
 weight32[:] += -lr * (grad32 + wd * weight32)
 else:
weight32[:] += (self.momentum**2 * mom) - lr*(self.momentum+1)*(grad32 + wd*weight32)
+mom = (self.momentum*mom) - lr*(grad32 + wd*weight32)
 
 Review comment:
   Thanks! You are right. The MXNet implementation is putting lr inside. The 
update should be:
   
   
   
![image](https://user-images.githubusercontent.com/8041160/64364156-2c4a0200-cfc7-11e9-92d6-331d8b656d73.png)
   
![image](https://user-images.githubusercontent.com/8041160/64364165-2f44f280-cfc7-11e9-9416-b092fd294294.png)
   
   I will check your implementation again. Thanks!
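
   A small numpy sketch of why the closed form in the test matches the update with `lr` folded into the momentum state (names mirror the test; values are illustrative):
   ```python
   import numpy as np

   momentum, lr, wd = 0.9, 0.01, 1e-4
   weight = np.random.randn(4).astype(np.float32)
   grad = np.random.randn(4).astype(np.float32)
   mom = np.random.randn(4).astype(np.float32)

   g = grad + wd * weight              # regularized gradient
   mom_new = momentum * mom - lr * g   # lr folded into the state
   # Nesterov step written against the updated state...
   w1 = weight + momentum * mom_new - lr * g
   # ...equals the closed form used in the test after substitution:
   w2 = weight + momentum**2 * mom - lr * (momentum + 1) * g
   assert np.allclose(w1, w2)
   ```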


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] reminisce commented on a change in pull request #16097: [numpy] __array_function__ protocol

2019-09-05 Thread GitBox
reminisce commented on a change in pull request #16097: [numpy] 
__array_function__ protocol
URL: https://github.com/apache/incubator-mxnet/pull/16097#discussion_r321382725
 
 

 ##
 File path: python/mxnet/array_function_protocol.py
 ##
 @@ -0,0 +1,98 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+"""Utils for registering NumPy array function protocol for mxnet.numpy ops."""
+
+import numpy as onp
+from . import numpy as mx_np  # pylint: disable=pylint=reimported
+from .numpy.multiarray import _NUMPY_ARRAY_FUNCTION_DICT
+
+
+def _implements(numpy_function):
+"""Register an __array_function__ implementation for MyArray objects."""
+def decorator(func):
+_NUMPY_ARRAY_FUNCTION_DICT[numpy_function] = func
+return func
+return decorator
+
+
+_NUMPY_ARRAY_FUNCTION_LIST = [
 
 Review comment:
   Yes, it is the operators in our codebase that are dispatchable through the 
array function protocol. Others will be dispatched through either fluent 
methods or the array ufunc protocol, which will be implemented later.
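
   For anyone new to NEP 18: a toy sketch of the dispatch path (illustrative class and names, not mxnet's actual code; requires numpy >= 1.17, or the experimental flag on 1.16). numpy sees an argument type defining `__array_function__` and hands the call over, so the dict maps official numpy functions to our implementations:
   ```python
   import numpy as onp

   class MyArray:
       _HANDLED = {}

       def __array_function__(self, func, types, args, kwargs):
           if func not in self._HANDLED:
               return NotImplemented
           return self._HANDLED[func](*args, **kwargs)

   def implements(np_func):
       def decorator(f):
           MyArray._HANDLED[np_func] = f
           return f
       return decorator

   @implements(onp.concatenate)
   def _concatenate(arrays, axis=0, out=None):
       return "dispatched to the custom implementation"

   print(onp.concatenate([MyArray(), MyArray()]))
   ```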


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] aaronmarkham commented on issue #15884: [WIP] New Website: New Docs [1/3]

2019-09-05 Thread GitBox
aaronmarkham commented on issue #15884: [WIP] New Website: New Docs [1/3]
URL: https://github.com/apache/incubator-mxnet/pull/15884#issuecomment-528461300
 
 
   > CI fails the sanity test due to RAT license check.
   > I've excluded the offending folder here:
   > 
[aa74a30#diff-ec188843973709ab04de4ff565735aecR10](https://github.com/apache/incubator-mxnet/commit/aa74a30ae2597b61ed9d0571baf4a0d8eec3b655#diff-ec188843973709ab04de4ff565735aecR10)
   > But I get an error here:
   > 
http://jenkins.mxnet-ci.amazon-ml.com/job/mxnet-validation/job/sanity/job/PR-15884/9/display/redirect
   
   Leaving this here for reference... come to find out, you cannot exclude 
subfolders. The rat checker only seems to accept one folder, like `_build/*` 
but not `_build/xml/*`. Really quite irritating and overly broad. I tried a variety 
of regexes to no avail. Whatever the case, the existing rat-exclude file is 
really pretty broad and probably excludes way too many things. I added some 
comments to it and tried to get it as specific as possible for these new items.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] szha commented on a change in pull request #16097: [numpy] __array_function__ protocol

2019-09-05 Thread GitBox
szha commented on a change in pull request #16097: [numpy] __array_function__ 
protocol
URL: https://github.com/apache/incubator-mxnet/pull/16097#discussion_r321372173
 
 

 ##
 File path: python/mxnet/array_function_protocol.py
 ##
 @@ -0,0 +1,98 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+"""Utils for registering NumPy array function protocol for mxnet.numpy ops."""
+
+import numpy as onp
+from . import numpy as mx_np  # pylint: disable=pylint=reimported
+from .numpy.multiarray import _NUMPY_ARRAY_FUNCTION_DICT
+
+
+def _implements(numpy_function):
+"""Register an __array_function__ implementation for MyArray objects."""
+def decorator(func):
+_NUMPY_ARRAY_FUNCTION_DICT[numpy_function] = func
+return func
+return decorator
+
+
+_NUMPY_ARRAY_FUNCTION_LIST = [
 
 Review comment:
   is this the list of operators where our operator definition complies with 
the official numpy behavior?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] chongruo removed a comment on issue #16101: [Bug, Feature Request] mx.nd.where()

2019-09-05 Thread GitBox
chongruo removed a comment on issue #16101: [Bug,  Feature Request]  
mx.nd.where()
URL: 
https://github.com/apache/incubator-mxnet/issues/16101#issuecomment-528421168
 
 
   @mxnet-label-bot  add [Feature request]


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] chongruo edited a comment on issue #16101: [Bug, Feature Request] mx.nd.where()

2019-09-05 Thread GitBox
chongruo edited a comment on issue #16101: [Bug,  Feature Request]  
mx.nd.where()
URL: 
https://github.com/apache/incubator-mxnet/issues/16101#issuecomment-528420416
 
 
   @mxnet-label-bot  add [Bug, Feature request]


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] chongruo commented on issue #16101: [Bug, Feature Request] mx.nd.where()

2019-09-05 Thread GitBox
chongruo commented on issue #16101: [Bug,  Feature Request]  mx.nd.where()
URL: 
https://github.com/apache/incubator-mxnet/issues/16101#issuecomment-528421168
 
 
   @mxnet-label-bot  add [Feature request]


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] chongruo edited a comment on issue #16101: [Bug, Feature Request] mx.nd.where()

2019-09-05 Thread GitBox
chongruo edited a comment on issue #16101: [Bug,  Feature Request]  
mx.nd.where()
URL: 
https://github.com/apache/incubator-mxnet/issues/16101#issuecomment-528420416
 
 
   @mxnet-label-bot  add [Bug]


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] chongruo edited a comment on issue #16101: [Bug, Feature Request] mx.nd.where()

2019-09-05 Thread GitBox
chongruo edited a comment on issue #16101: [Bug,  Feature Request]  
mx.nd.where()
URL: 
https://github.com/apache/incubator-mxnet/issues/16101#issuecomment-528420416
 
 
   @mxnet-label-bot  add [Bug] [Feature request]


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] chongruo commented on issue #16101: [Bug, Feature Request] mx.nd.where()

2019-09-05 Thread GitBox
chongruo commented on issue #16101: [Bug,  Feature Request]  mx.nd.where()
URL: 
https://github.com/apache/incubator-mxnet/issues/16101#issuecomment-528420416
 
 
   @mxnet-label-bot  add [Bug]


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] TaoLv commented on a change in pull request #16075: Integrate MKL-DNN leakyrelu

2019-09-05 Thread GitBox
TaoLv commented on a change in pull request #16075: Integrate MKL-DNN leakyrelu
URL: https://github.com/apache/incubator-mxnet/pull/16075#discussion_r321307114
 
 

 ##
 File path: src/operator/leaky_relu.cc
 ##
 @@ -25,27 +25,123 @@
 */
 
 #include "./leaky_relu-inl.h"
+#if MXNET_USE_MKLDNN == 1
+#include "./nn/mkldnn/mkldnn_base-inl.h"
+#include "./nn/mkldnn/mkldnn_ops-inl.h"
+#endif  // MXNET_USE_MKLDNN == 1
 
 #include 
 namespace mxnet {
 namespace op {
-template<>
-Operator *CreateOp<cpu>(LeakyReLUParam param, int dtype) {
-  Operator* op = nullptr;
-  MSHADOW_REAL_TYPE_SWITCH(dtype, DType, {
-    op = new LeakyReLUOp<cpu, DType>(param);
-  });
-  return op;
+
+DMLC_REGISTER_PARAMETER(LeakyReLUParam);
+
+static bool LeakyReLUType(const nnvm::NodeAttrs& attrs,
+                          std::vector<int> *in_type,
+                          std::vector<int> *out_type) {
+  int dtype = -1;
+  for (const int& type : *in_type) {
+    type_assign(&dtype, type);
+  }
+  for (const int& type : *out_type) {
+    type_assign(&dtype, type);
+  }
+  for (size_t i = 0; i < in_type->size(); ++i) {
+    TYPE_ASSIGN_CHECK(*in_type, i, dtype);
+  }
+  for (size_t i = 0; i < out_type->size(); ++i) {
+    TYPE_ASSIGN_CHECK(*out_type, i, dtype);
+  }
+  return dtype != -1;
 }
 
-Operator *LeakyReLUProp::CreateOperatorEx(Context ctx, mxnet::ShapeVector *in_shape,
-                                          std::vector<int> *in_type) const {
-  DO_BIND_DISPATCH(CreateOp, param_, in_type->at(0));
+static bool LeakyReLUShape(const nnvm::NodeAttrs& attrs,
+                           std::vector<mxnet::TShape> *in_shape,
+                           std::vector<mxnet::TShape> *out_shape) {
+  using namespace mshadow;
+  const LeakyReLUParam &param_ = nnvm::get<LeakyReLUParam>(attrs.parsed);
+  if (param_.act_type == leakyrelu::kPReLU) {
+    CHECK_EQ(in_shape->size(), 2U) << "Input:[data, gamma]";
+  } else {
+    CHECK_EQ(in_shape->size(), 1U) << "Input:[data]";
+  }
+  const mxnet::TShape &dshape = in_shape->at(leakyrelu::kData);
+  if (!mxnet::ndim_is_known(dshape)) return false;
+  if (param_.act_type == leakyrelu::kPReLU) {
+    const mxnet::TShape &gshape = in_shape->at(leakyrelu::kGamma);
+    if (!mxnet::ndim_is_known(gshape)) {
+      in_shape->at(leakyrelu::kGamma) = mxnet::TShape(Shape1(dshape[1]));
+    }
+    if (dshape == gshape) {
+      SHAPE_ASSIGN_CHECK(*out_shape, 0, dshape);
+    }
+  }
+  out_shape->clear();
+  out_shape->push_back(dshape);
+  if (param_.act_type == leakyrelu::kRReLU) {
+    out_shape->push_back(dshape);
+  }
+  return true;
 }
 
-DMLC_REGISTER_PARAMETER(LeakyReLUParam);
+#if MXNET_USE_MKLDNN == 1
+static void LeakyReLUComputeExCPU(const nnvm::NodeAttrs& attrs,
+                                  const OpContext& ctx,
+                                  const std::vector<NDArray>& inputs,
+                                  const std::vector<OpReqType>& req,
+                                  const std::vector<NDArray>& outputs) {
+  const LeakyReLUParam& param = nnvm::get<LeakyReLUParam>(attrs.parsed);
+  size_t expected = param.act_type == leakyrelu::kPReLU ? 2 : 1;
+  CHECK_EQ(inputs.size(), expected);
+  if (SupportMKLDNNLeakyRelu(param, inputs[0])) {
+    MKLDNN_OPCHECK_INIT(false, outputs.size(), inputs, outputs);
+    MKLDNNLeakyReluForward(attrs, ctx, inputs[0], req[0], outputs[0]);
+    MKLDNN_OPCHECK_RUN(LeakyReLUCompute<cpu>, attrs, ctx, inputs, req, outputs);
+    return;
+  }
+  FallBackCompute(LeakyReLUCompute<cpu>, attrs, ctx, inputs, req, outputs);
+}
+
+void LeakyReLUGradComputeExCPU(const nnvm::NodeAttrs& attrs,
+                               const OpContext& ctx,
+                               const std::vector<NDArray>& inputs,
+                               const std::vector<OpReqType>& req,
+                               const std::vector<NDArray>& outputs) {
+  const LeakyReLUParam& param = nnvm::get<LeakyReLUParam>(attrs.parsed);
+  if (SupportMKLDNNLeakyRelu(param, inputs[0])) {
+    MKLDNN_OPCHECK_INIT(true, outputs.size(), inputs, outputs);
+    MKLDNNLeakyReluBackward(attrs, ctx, inputs.at(0), inputs.at(1), req[0],
+                            outputs[0]);
+    MKLDNN_OPCHECK_RUN(LeakyReLUGradCompute<cpu>, attrs, ctx, inputs, req, outputs);
+    return;
+  }
+  FallBackCompute(LeakyReLUGradCompute<cpu>, attrs, ctx, inputs, req, outputs);
+}
+
+inline static bool LeakyReLUStorageType(const nnvm::NodeAttrs& attrs,
+                                        const int dev_mask,
+                                        DispatchMode* dispatch_mode,
+                                        std::vector<int> *in_attrs,
+                                        std::vector<int> *out_attrs) {
+  const LeakyReLUParam& param = nnvm::get<LeakyReLUParam>(attrs.parsed);
+  size_t expected = param.act_type == leakyrelu::kPReLU ? 2 : 1;
+  CHECK_EQ(in_attrs->size(), expected);
+  return MKLDNNStorageType(attrs, dev_mask, SupportMKLDNNLeakyRelu(param),
+                           dispatch_mode, in_attrs, out_attrs);
+}
 
-MXNET_REGISTER_OP_PROPERTY(LeakyReLU, LeakyReLUProp)
+inline static bool BackwardLeakyReLUStorageType(const nnvm::NodeAttrs& attrs,
+ 

[GitHub] [incubator-mxnet] TaoLv commented on a change in pull request #16075: Integrate MKL-DNN leakyrelu

2019-09-05 Thread GitBox
TaoLv commented on a change in pull request #16075: Integrate MKL-DNN leakyrelu
URL: https://github.com/apache/incubator-mxnet/pull/16075#discussion_r321306668
 
 

 ##
 File path: src/operator/leaky_relu.cc
 ##
 @@ -25,27 +25,123 @@
 */
 
 #include "./leaky_relu-inl.h"
+#if MXNET_USE_MKLDNN == 1
+#include "./nn/mkldnn/mkldnn_base-inl.h"
+#include "./nn/mkldnn/mkldnn_ops-inl.h"
+#endif  // MXNET_USE_MKLDNN == 1
 
 #include 
 namespace mxnet {
 namespace op {
-template<>
-Operator *CreateOp<cpu>(LeakyReLUParam param, int dtype) {
-  Operator* op = nullptr;
-  MSHADOW_REAL_TYPE_SWITCH(dtype, DType, {
-    op = new LeakyReLUOp<cpu, DType>(param);
-  });
-  return op;
+
+DMLC_REGISTER_PARAMETER(LeakyReLUParam);
+
+static bool LeakyReLUType(const nnvm::NodeAttrs& attrs,
+                          std::vector<int> *in_type,
+                          std::vector<int> *out_type) {
+  int dtype = -1;
+  for (const int& type : *in_type) {
+    type_assign(&dtype, type);
+  }
+  for (const int& type : *out_type) {
+    type_assign(&dtype, type);
+  }
+  for (size_t i = 0; i < in_type->size(); ++i) {
+TYPE_ASSIGN_CHECK(*in_type, i, dtype);
+  }
+  for (size_t i = 0; i < out_type->size(); ++i) {
+TYPE_ASSIGN_CHECK(*out_type, i, dtype);
+  }
+  return dtype != -1;
 }
 
-Operator *LeakyReLUProp::CreateOperatorEx(Context ctx, mxnet::ShapeVector *in_shape,
-                                          std::vector<int> *in_type) const {
-  DO_BIND_DISPATCH(CreateOp, param_, in_type->at(0));
+static bool LeakyReLUShape(const nnvm::NodeAttrs& attrs,
+   std::vector *in_shape,
+   std::vector *out_shape) {
+  using namespace mshadow;
+  const LeakyReLUParam &param_ = nnvm::get<LeakyReLUParam>(attrs.parsed);
+  if (param_.act_type == leakyrelu::kPReLU) {
+CHECK_EQ(in_shape->size(), 2U) << "Input:[data, gamma]";
+  } else {
+CHECK_EQ(in_shape->size(), 1U) << "Input:[data]";
+  }
+  const mxnet::TShape &dshape = in_shape->at(leakyrelu::kData);
+  if (!mxnet::ndim_is_known(dshape)) return false;
+  if (param_.act_type == leakyrelu::kPReLU) {
+    const mxnet::TShape &gshape = in_shape->at(leakyrelu::kGamma);
+if (!mxnet::ndim_is_known(gshape)) {
+  in_shape->at(leakyrelu::kGamma) = mxnet::TShape(Shape1(dshape[1]));
+}
+if (dshape == gshape) {
+  SHAPE_ASSIGN_CHECK(*out_shape, 0, dshape);
+}
+  }
+  out_shape->clear();
+  out_shape->push_back(dshape);
+  if (param_.act_type == leakyrelu::kRReLU) {
+out_shape->push_back(dshape);
+  }
+  return true;
 }
 
-DMLC_REGISTER_PARAMETER(LeakyReLUParam);
+#if MXNET_USE_MKLDNN == 1
+static void LeakyReLUComputeExCPU(const nnvm::NodeAttrs& attrs,
+   const OpContext& ctx,
+   const std::vector& inputs,
+   const std::vector& req,
+   const std::vector& outputs) {
+  const LeakyReLUParam& param = nnvm::get<LeakyReLUParam>(attrs.parsed);
+  size_t expected = param.act_type == leakyrelu::kPReLU ? 2 : 1;
+  CHECK_EQ(inputs.size(), expected);
+  if (SupportMKLDNNLeakyRelu(param, inputs[0])) {
+MKLDNN_OPCHECK_INIT(false, outputs.size(), inputs, outputs);
+MKLDNNLeakyReluForward(attrs, ctx, inputs[0], req[0], outputs[0]);
+MKLDNN_OPCHECK_RUN(LeakyReLUCompute, attrs, ctx, inputs, req, 
outputs);
+return;
+  }
+  FallBackCompute(LeakyReLUCompute, attrs, ctx, inputs, req, outputs);
+}
+
+void LeakyReLUGradComputeExCPU(const nnvm::NodeAttrs& attrs,
+const OpContext& ctx,
+const std::vector& inputs,
+const std::vector& req,
+const std::vector& outputs) {
+  const LeakyReLUParam& param = nnvm::get<LeakyReLUParam>(attrs.parsed);
+  if (SupportMKLDNNLeakyRelu(param, inputs[0])) {
+MKLDNN_OPCHECK_INIT(true, outputs.size(), inputs, outputs);
+MKLDNNLeakyReluBackward(attrs, ctx, inputs.at(0), inputs.at(1), req[0],
 
 Review comment:
   Only two inputs are needed? Is it possible to use a vector so we can have a 
more unified interface?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] TaoLv commented on a change in pull request #16075: Integrate MKL-DNN leakyrelu

2019-09-05 Thread GitBox
TaoLv commented on a change in pull request #16075: Integrate MKL-DNN leakyrelu
URL: https://github.com/apache/incubator-mxnet/pull/16075#discussion_r321311980
 
 

 ##
 File path: src/operator/nn/mkldnn/mkldnn_act.cc
 ##
 @@ -79,18 +93,30 @@ mkldnn::algorithm GetMKLDNNActAlgo(const ActivationParam& param) {
   }
 }
 
+mkldnn::algorithm GetMKLDNNActAlgo(const LeakyReLUParam& param) {
+  switch (param.act_type) {
+    case leakyrelu::kLeakyReLU:
+      return mkldnn::algorithm::eltwise_relu;
+    case leakyrelu::kELU:
+      return mkldnn::algorithm::eltwise_elu;
+    default:
+      LOG(FATAL) << "unknown activation type";
 
 Review comment:
   More descriptive error message"
   ```suggestion
 LOG(FATAL) << "unknown activation type for LeakyReLU: " << 
param.act_type;
   ```


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] TaoLv commented on a change in pull request #16075: Integrate MKL-DNN leakyrelu

2019-09-05 Thread GitBox
TaoLv commented on a change in pull request #16075: Integrate MKL-DNN leakyrelu
URL: https://github.com/apache/incubator-mxnet/pull/16075#discussion_r321313388
 
 

 ##
 File path: src/operator/nn/mkldnn/mkldnn_ops-inl.h
 ##
 @@ -109,6 +109,12 @@ void MKLDNNActivationForward(const nnvm::NodeAttrs& 
attrs, const OpContext ,
 void MKLDNNActivationBackward(const nnvm::NodeAttrs& attrs, const OpContext 
,
   const NDArray _grad, const NDArray _data,
   const OpReqType , const NDArray _grad);
+void MKLDNNLeakyReluForward(const nnvm::NodeAttrs& attrs, const OpContext ,
+ const NDArray _data, const OpReqType ,
 
 Review comment:
   Indentation.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] TaoLv commented on a change in pull request #16075: Integrate MKL-DNN leakyrelu

2019-09-05 Thread GitBox
TaoLv commented on a change in pull request #16075: Integrate MKL-DNN leakyrelu
URL: https://github.com/apache/incubator-mxnet/pull/16075#discussion_r321303071
 
 

 ##
 File path: src/operator/leaky_relu.cc
 ##
 @@ -25,27 +25,123 @@
 */
 
 #include "./leaky_relu-inl.h"
+#if MXNET_USE_MKLDNN == 1
+#include "./nn/mkldnn/mkldnn_base-inl.h"
+#include "./nn/mkldnn/mkldnn_ops-inl.h"
+#endif  // MXNET_USE_MKLDNN == 1
 
 #include 
 namespace mxnet {
 namespace op {
-template<>
-Operator *CreateOp<cpu>(LeakyReLUParam param, int dtype) {
-  Operator* op = nullptr;
-  MSHADOW_REAL_TYPE_SWITCH(dtype, DType, {
-    op = new LeakyReLUOp<cpu, DType>(param);
-  });
-  return op;
+
+DMLC_REGISTER_PARAMETER(LeakyReLUParam);
+
+static bool LeakyReLUType(const nnvm::NodeAttrs& attrs,
+                          std::vector<int> *in_type,
+                          std::vector<int> *out_type) {
+  int dtype = -1;
+  for (const int& type : *in_type) {
+    type_assign(&dtype, type);
+  }
+  for (const int& type : *out_type) {
+    type_assign(&dtype, type);
+  }
+  for (size_t i = 0; i < in_type->size(); ++i) {
+TYPE_ASSIGN_CHECK(*in_type, i, dtype);
+  }
+  for (size_t i = 0; i < out_type->size(); ++i) {
+TYPE_ASSIGN_CHECK(*out_type, i, dtype);
+  }
+  return dtype != -1;
 }
 
-Operator *LeakyReLUProp::CreateOperatorEx(Context ctx, mxnet::ShapeVector *in_shape,
-                                          std::vector<int> *in_type) const {
-  DO_BIND_DISPATCH(CreateOp, param_, in_type->at(0));
+static bool LeakyReLUShape(const nnvm::NodeAttrs& attrs,
+   std::vector *in_shape,
+   std::vector *out_shape) {
+  using namespace mshadow;
+  const LeakyReLUParam &param_ = nnvm::get<LeakyReLUParam>(attrs.parsed);
+  if (param_.act_type == leakyrelu::kPReLU) {
+CHECK_EQ(in_shape->size(), 2U) << "Input:[data, gamma]";
+  } else {
+CHECK_EQ(in_shape->size(), 1U) << "Input:[data]";
+  }
+  const mxnet::TShape &dshape = in_shape->at(leakyrelu::kData);
+  if (!mxnet::ndim_is_known(dshape)) return false;
+  if (param_.act_type == leakyrelu::kPReLU) {
+    const mxnet::TShape &gshape = in_shape->at(leakyrelu::kGamma);
+if (!mxnet::ndim_is_known(gshape)) {
+  in_shape->at(leakyrelu::kGamma) = mxnet::TShape(Shape1(dshape[1]));
+}
+if (dshape == gshape) {
+  SHAPE_ASSIGN_CHECK(*out_shape, 0, dshape);
+}
+  }
+  out_shape->clear();
+  out_shape->push_back(dshape);
+  if (param_.act_type == leakyrelu::kRReLU) {
+out_shape->push_back(dshape);
+  }
+  return true;
 }
 
-DMLC_REGISTER_PARAMETER(LeakyReLUParam);
+#if MXNET_USE_MKLDNN == 1
+static void LeakyReLUComputeExCPU(const nnvm::NodeAttrs& attrs,
+   const OpContext& ctx,
 
 Review comment:
   Please fix the indentation.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] TaoLv commented on a change in pull request #16075: Integrate MKL-DNN leakyrelu

2019-09-05 Thread GitBox
TaoLv commented on a change in pull request #16075: Integrate MKL-DNN leakyrelu
URL: https://github.com/apache/incubator-mxnet/pull/16075#discussion_r321303976
 
 

 ##
 File path: src/operator/leaky_relu.cc
 ##
 @@ -25,27 +25,123 @@
 */
 
 #include "./leaky_relu-inl.h"
+#if MXNET_USE_MKLDNN == 1
+#include "./nn/mkldnn/mkldnn_base-inl.h"
+#include "./nn/mkldnn/mkldnn_ops-inl.h"
+#endif  // MXNET_USE_MKLDNN == 1
 
 #include 
 namespace mxnet {
 namespace op {
-template<>
-Operator *CreateOp<cpu>(LeakyReLUParam param, int dtype) {
-  Operator* op = nullptr;
-  MSHADOW_REAL_TYPE_SWITCH(dtype, DType, {
-    op = new LeakyReLUOp<cpu, DType>(param);
-  });
-  return op;
+
+DMLC_REGISTER_PARAMETER(LeakyReLUParam);
+
+static bool LeakyReLUType(const nnvm::NodeAttrs& attrs,
+                          std::vector<int> *in_type,
+                          std::vector<int> *out_type) {
+  int dtype = -1;
+  for (const int& type : *in_type) {
+    type_assign(&dtype, type);
+  }
+  for (const int& type : *out_type) {
+    type_assign(&dtype, type);
+  }
+  for (size_t i = 0; i < in_type->size(); ++i) {
+TYPE_ASSIGN_CHECK(*in_type, i, dtype);
+  }
+  for (size_t i = 0; i < out_type->size(); ++i) {
+TYPE_ASSIGN_CHECK(*out_type, i, dtype);
+  }
+  return dtype != -1;
 }
 
-Operator *LeakyReLUProp::CreateOperatorEx(Context ctx, mxnet::ShapeVector *in_shape,
-                                          std::vector<int> *in_type) const {
-  DO_BIND_DISPATCH(CreateOp, param_, in_type->at(0));
+static bool LeakyReLUShape(const nnvm::NodeAttrs& attrs,
+   std::vector *in_shape,
+   std::vector *out_shape) {
+  using namespace mshadow;
+  const LeakyReLUParam &param_ = nnvm::get<LeakyReLUParam>(attrs.parsed);
+  if (param_.act_type == leakyrelu::kPReLU) {
+CHECK_EQ(in_shape->size(), 2U) << "Input:[data, gamma]";
+  } else {
+CHECK_EQ(in_shape->size(), 1U) << "Input:[data]";
+  }
+  const mxnet::TShape &dshape = in_shape->at(leakyrelu::kData);
+  if (!mxnet::ndim_is_known(dshape)) return false;
+  if (param_.act_type == leakyrelu::kPReLU) {
+    const mxnet::TShape &gshape = in_shape->at(leakyrelu::kGamma);
+if (!mxnet::ndim_is_known(gshape)) {
+  in_shape->at(leakyrelu::kGamma) = mxnet::TShape(Shape1(dshape[1]));
+}
+if (dshape == gshape) {
+  SHAPE_ASSIGN_CHECK(*out_shape, 0, dshape);
+}
+  }
+  out_shape->clear();
+  out_shape->push_back(dshape);
+  if (param_.act_type == leakyrelu::kRReLU) {
+out_shape->push_back(dshape);
+  }
+  return true;
 }
 
-DMLC_REGISTER_PARAMETER(LeakyReLUParam);
+#if MXNET_USE_MKLDNN == 1
+static void LeakyReLUComputeExCPU(const nnvm::NodeAttrs& attrs,
+   const OpContext& ctx,
+   const std::vector& inputs,
+   const std::vector& req,
+   const std::vector& outputs) {
+  const LeakyReLUParam& param = nnvm::get<LeakyReLUParam>(attrs.parsed);
+  size_t expected = param.act_type == leakyrelu::kPReLU ? 2 : 1;
+  CHECK_EQ(inputs.size(), expected);
+  if (SupportMKLDNNLeakyRelu(param, inputs[0])) {
+MKLDNN_OPCHECK_INIT(false, outputs.size(), inputs, outputs);
+MKLDNNLeakyReluForward(attrs, ctx, inputs[0], req[0], outputs[0]);
+MKLDNN_OPCHECK_RUN(LeakyReLUCompute, attrs, ctx, inputs, req, 
outputs);
+return;
+  }
+  FallBackCompute(LeakyReLUCompute, attrs, ctx, inputs, req, outputs);
+}
+
+void LeakyReLUGradComputeExCPU(const nnvm::NodeAttrs& attrs,
+const OpContext& ctx,
 
 Review comment:
   Please fix the indentation.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] TaoLv commented on a change in pull request #16075: Integrate MKL-DNN leakyrelu

2019-09-05 Thread GitBox
TaoLv commented on a change in pull request #16075: Integrate MKL-DNN leakyrelu
URL: https://github.com/apache/incubator-mxnet/pull/16075#discussion_r321314534
 
 

 ##
 File path: tests/python/unittest/test_gluon.py
 ##
 @@ -1222,7 +1222,9 @@ def elu(x):
 return [elu(x_i) for x_i in x]
 
 for test_point, ref_point in zip(elu_test(point_to_validate), elu(point_to_validate)):
-assert test_point == ref_point
+print(test_point)
+print(ref_point)
 
 Review comment:
   Why print?
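
   If exact equality was the problem, a tolerance-based assertion would keep the test meaningful (a sketch using mxnet's test helper, assuming the points are NDArrays):
   ```python
   from mxnet.test_utils import assert_almost_equal

   for test_point, ref_point in zip(elu_test(point_to_validate),
                                    elu(point_to_validate)):
       # compare with a tolerance instead of printing
       assert_almost_equal(test_point.asnumpy(), ref_point.asnumpy())
   ```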


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] TaoLv commented on a change in pull request #16075: Integrate MKL-DNN leakyrelu

2019-09-05 Thread GitBox
TaoLv commented on a change in pull request #16075: Integrate MKL-DNN leakyrelu
URL: https://github.com/apache/incubator-mxnet/pull/16075#discussion_r321306876
 
 

 ##
 File path: src/operator/leaky_relu.cc
 ##
 @@ -25,27 +25,123 @@
 */
 
 #include "./leaky_relu-inl.h"
+#if MXNET_USE_MKLDNN == 1
+#include "./nn/mkldnn/mkldnn_base-inl.h"
+#include "./nn/mkldnn/mkldnn_ops-inl.h"
+#endif  // MXNET_USE_MKLDNN == 1
 
 #include 
 namespace mxnet {
 namespace op {
-template<>
-Operator *CreateOp<cpu>(LeakyReLUParam param, int dtype) {
-  Operator* op = nullptr;
-  MSHADOW_REAL_TYPE_SWITCH(dtype, DType, {
-    op = new LeakyReLUOp<cpu, DType>(param);
-  });
-  return op;
+
+DMLC_REGISTER_PARAMETER(LeakyReLUParam);
+
+static bool LeakyReLUType(const nnvm::NodeAttrs& attrs,
+                          std::vector<int> *in_type,
+                          std::vector<int> *out_type) {
+  int dtype = -1;
+  for (const int& type : *in_type) {
+    type_assign(&dtype, type);
+  }
+  for (const int& type : *out_type) {
+    type_assign(&dtype, type);
+  }
+  for (size_t i = 0; i < in_type->size(); ++i) {
+TYPE_ASSIGN_CHECK(*in_type, i, dtype);
+  }
+  for (size_t i = 0; i < out_type->size(); ++i) {
+TYPE_ASSIGN_CHECK(*out_type, i, dtype);
+  }
+  return dtype != -1;
 }
 
-Operator *LeakyReLUProp::CreateOperatorEx(Context ctx, mxnet::ShapeVector *in_shape,
-                                          std::vector<int> *in_type) const {
-  DO_BIND_DISPATCH(CreateOp, param_, in_type->at(0));
+static bool LeakyReLUShape(const nnvm::NodeAttrs& attrs,
+   std::vector *in_shape,
+   std::vector *out_shape) {
+  using namespace mshadow;
+  const LeakyReLUParam &param_ = nnvm::get<LeakyReLUParam>(attrs.parsed);
+  if (param_.act_type == leakyrelu::kPReLU) {
+CHECK_EQ(in_shape->size(), 2U) << "Input:[data, gamma]";
+  } else {
+CHECK_EQ(in_shape->size(), 1U) << "Input:[data]";
+  }
+  const mxnet::TShape &dshape = in_shape->at(leakyrelu::kData);
+  if (!mxnet::ndim_is_known(dshape)) return false;
+  if (param_.act_type == leakyrelu::kPReLU) {
+    const mxnet::TShape &gshape = in_shape->at(leakyrelu::kGamma);
+if (!mxnet::ndim_is_known(gshape)) {
+  in_shape->at(leakyrelu::kGamma) = mxnet::TShape(Shape1(dshape[1]));
+}
+if (dshape == gshape) {
+  SHAPE_ASSIGN_CHECK(*out_shape, 0, dshape);
+}
+  }
+  out_shape->clear();
+  out_shape->push_back(dshape);
+  if (param_.act_type == leakyrelu::kRReLU) {
+out_shape->push_back(dshape);
+  }
+  return true;
 }
 
-DMLC_REGISTER_PARAMETER(LeakyReLUParam);
+#if MXNET_USE_MKLDNN == 1
+static void LeakyReLUComputeExCPU(const nnvm::NodeAttrs& attrs,
+   const OpContext& ctx,
+   const std::vector& inputs,
+   const std::vector& req,
+   const std::vector& outputs) {
+  const LeakyReLUParam& param = nnvm::get<LeakyReLUParam>(attrs.parsed);
+  size_t expected = param.act_type == leakyrelu::kPReLU ? 2 : 1;
+  CHECK_EQ(inputs.size(), expected);
+  if (SupportMKLDNNLeakyRelu(param, inputs[0])) {
+MKLDNN_OPCHECK_INIT(false, outputs.size(), inputs, outputs);
+MKLDNNLeakyReluForward(attrs, ctx, inputs[0], req[0], outputs[0]);
+MKLDNN_OPCHECK_RUN(LeakyReLUCompute, attrs, ctx, inputs, req, 
outputs);
+return;
+  }
+  FallBackCompute(LeakyReLUCompute, attrs, ctx, inputs, req, outputs);
+}
+
+void LeakyReLUGradComputeExCPU(const nnvm::NodeAttrs& attrs,
+const OpContext& ctx,
+const std::vector& inputs,
+const std::vector& req,
+const std::vector& outputs) {
+  const LeakyReLUParam& param = nnvm::get<LeakyReLUParam>(attrs.parsed);
+  if (SupportMKLDNNLeakyRelu(param, inputs[0])) {
+MKLDNN_OPCHECK_INIT(true, outputs.size(), inputs, outputs);
+MKLDNNLeakyReluBackward(attrs, ctx, inputs.at(0), inputs.at(1), req[0],
+ outputs[0]);
+MKLDNN_OPCHECK_RUN(LeakyReLUGradCompute, attrs, ctx, inputs, req, 
outputs);
+return;
+  }
+  FallBackCompute(LeakyReLUGradCompute, attrs, ctx, inputs, req, outputs);
+}
+
+inline static bool LeakyReLUStorageType(const nnvm::NodeAttrs& attrs,
+ const int dev_mask,
 
 Review comment:
   Indentation.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] TaoLv commented on a change in pull request #16075: Integrate MKL-DNN leakyrelu

2019-09-05 Thread GitBox
TaoLv commented on a change in pull request #16075: Integrate MKL-DNN leakyrelu
URL: https://github.com/apache/incubator-mxnet/pull/16075#discussion_r321299622
 
 

 ##
 File path: src/operator/leaky_relu-inl.h
 ##
 @@ -332,166 +332,51 @@ class LeakyReLUOp : public Operator {
 };  // class LeakyReLUOp
 
 template<typename xpu>
-Operator* CreateOp(LeakyReLUParam type, int dtype);
+void LeakyReLUCompute(const nnvm::NodeAttrs& attrs,
+                      const OpContext& ctx, const std::vector<TBlob>& inputs,
+                      const std::vector<OpReqType>& req,
+                      const std::vector<TBlob>& outputs) {
+  const LeakyReLUParam &param = nnvm::get<LeakyReLUParam>(attrs.parsed);
+  const std::vector<TBlob> no_use_but_adapt_origin_api;
+  size_t expected = param.act_type == leakyrelu::kPReLU ? 2 : 1;
+  CHECK_EQ(inputs.size(), expected);
 
-#if DMLC_USE_CXX11
-class LeakyReLUProp : public OperatorProperty {
- public:
-  void Init(const std::vector<std::pair<std::string, std::string> >& kwargs) override {
-    param_.Init(kwargs);
-  }
-
-  std::map<std::string, std::string> GetParams() const override {
-    return param_.__DICT__();
-  }
-
-  bool InferShape(mxnet::ShapeVector *in_shape,
-                  mxnet::ShapeVector *out_shape,
-                  mxnet::ShapeVector *aux_shape) const override {
-    using namespace mshadow;
-    if (param_.act_type == leakyrelu::kPReLU) {
-      CHECK_EQ(in_shape->size(), 2U) << "Input:[data, gamma]";
-    } else {
-      CHECK_EQ(in_shape->size(), 1U) << "Input:[data]";
-    }
-    const mxnet::TShape &dshape = in_shape->at(leakyrelu::kData);
-    if (!mxnet::ndim_is_known(dshape)) return false;
-    if (param_.act_type == leakyrelu::kPReLU) {
-      const mxnet::TShape &gshape = in_shape->at(leakyrelu::kGamma);
-      if (!mxnet::ndim_is_known(gshape)) {
-        in_shape->at(leakyrelu::kGamma) = mxnet::TShape(Shape1(dshape[1]));
-      }
-      if (dshape == gshape) {
-        SHAPE_ASSIGN_CHECK(*out_shape, 0, dshape);
-      }
-    }
-    out_shape->clear();
-    out_shape->push_back(dshape);
-    if (param_.act_type == leakyrelu::kRReLU) {
-      out_shape->push_back(dshape);
-    }
-    return true;
-  }
-
-  bool InferType(std::vector<int> *in_type,
-                 std::vector<int> *out_type,
-                 std::vector<int> *aux_type) const override {
-    int dtype = -1;
-    for (const int& type : *in_type) {
-      type_assign(&dtype, type);
-    }
-    for (const int& type : *out_type) {
-      type_assign(&dtype, type);
-    }
-
-    for (size_t i = 0; i < in_type->size(); ++i) {
-      TYPE_ASSIGN_CHECK(*in_type, i, dtype);
-    }
-    for (size_t i = 0; i < out_type->size(); ++i) {
-      TYPE_ASSIGN_CHECK(*out_type, i, dtype);
-    }
-    return dtype != -1;
-  }
-
-  OperatorProperty* Copy() const override {
-    auto ptr = new LeakyReLUProp();
-    ptr->param_ = param_;
-    return ptr;
-  }
-
-  std::string TypeString() const override {
-    return "LeakyReLU";
-  }
-
-  // decalre dependency and inplace optimization options
-  std::vector<int> DeclareBackwardDependency(
-    const std::vector<int> &out_grad,
-    const std::vector<int> &in_data,
-    const std::vector<int> &out_data) const override {
-    if (param_.act_type == leakyrelu::kPReLU) {
-      return {out_grad[leakyrelu::kOut],
-              out_data[leakyrelu::kOut],
-              in_data[leakyrelu::kData],
-              in_data[leakyrelu::kGamma]};
-    } else if (param_.act_type == leakyrelu::kRReLU) {
-      return {out_grad[leakyrelu::kOut], out_data[leakyrelu::kMask], out_data[leakyrelu::kOut]};
-    } else {
-      return {out_grad[leakyrelu::kOut], out_data[leakyrelu::kData]};
-    }
-  }
+  MSHADOW_REAL_TYPE_SWITCH(inputs[leakyrelu::kData].type_flag_, DType, {
+    LeakyReLUOp<xpu, DType> op(param);
+    op.Forward(ctx, inputs, req, outputs, no_use_but_adapt_origin_api);
+  });
+}
 
-  std::vector<std::pair<int, void*> > BackwardInplaceOption(
-    const std::vector<int> &out_grad,
-    const std::vector<int> &in_data,
-    const std::vector<int> &out_data,
-    const std::vector<void*> &in_grad) const override {
-    return {{out_grad[leakyrelu::kOut], in_grad[leakyrelu::kData]}};
-  }
-
-  std::vector<std::pair<int, void*> > ForwardInplaceOption(
-    const std::vector<int> &in_data,
-    const std::vector<void*> &out_data) const override {
-    if (param_.act_type == leakyrelu::kPReLU) {
-      return {};
-    } else {
-      return {{in_data[leakyrelu::kData], out_data[leakyrelu::kOut]}};
-    }
-  }
-
-  std::vector<std::string> ListArguments() const override {
-    if (param_.act_type == leakyrelu::kPReLU) {
-      return {"data", "gamma"};
-    } else {
-      return {"data"};
-    }
-  }
-
-  std::vector<std::string> ListOutputs() const override {
-    if (param_.act_type == leakyrelu::kRReLU) {
-      return {"output", "mask"};
-    } else {
-      return {"output"};
-    }
-  }
-
-  int NumOutputs() const override {
-    if (param_.act_type == leakyrelu::kRReLU) {
-      return 2;
-    } else {
-      return 1;
-    }
-  }
-
-  int NumVisibleOutputs() const override {
-    return 1;
-  }
-
-  std::vector<ResourceRequest> ForwardResource(
-      const mxnet::ShapeVector &in_shape) const override {
-    if (param_.act_type == leakyrelu::kRReLU) {
-      return 

[GitHub] [incubator-mxnet] TaoLv commented on a change in pull request #16075: Integrate MKL-DNN leakyrelu

2019-09-05 Thread GitBox
TaoLv commented on a change in pull request #16075: Integrate MKL-DNN leakyrelu
URL: https://github.com/apache/incubator-mxnet/pull/16075#discussion_r321298810
 
 

 ##
 File path: src/operator/leaky_relu-inl.h
 ##
 @@ -332,166 +332,51 @@ class LeakyReLUOp : public Operator {
 };  // class LeakyReLUOp
 
 template<typename xpu>
-Operator* CreateOp(LeakyReLUParam type, int dtype);
+void LeakyReLUCompute(const nnvm::NodeAttrs& attrs,
+                      const OpContext& ctx, const std::vector<TBlob>& inputs,
+                      const std::vector<OpReqType>& req,
+                      const std::vector<TBlob>& outputs) {
+  const LeakyReLUParam &param = nnvm::get<LeakyReLUParam>(attrs.parsed);
+  const std::vector<TBlob> no_use_but_adapt_origin_api;
+  size_t expected = param.act_type == leakyrelu::kPReLU ? 2 : 1;
+  CHECK_EQ(inputs.size(), expected);
 
-#if DMLC_USE_CXX11
-class LeakyReLUProp : public OperatorProperty {
- public:
-  void Init(const std::vector<std::pair<std::string, std::string> >& kwargs) override {
-    param_.Init(kwargs);
-  }
-
-  std::map<std::string, std::string> GetParams() const override {
-    return param_.__DICT__();
-  }
-
-  bool InferShape(mxnet::ShapeVector *in_shape,
-                  mxnet::ShapeVector *out_shape,
-                  mxnet::ShapeVector *aux_shape) const override {
-    using namespace mshadow;
-    if (param_.act_type == leakyrelu::kPReLU) {
-      CHECK_EQ(in_shape->size(), 2U) << "Input:[data, gamma]";
-    } else {
-      CHECK_EQ(in_shape->size(), 1U) << "Input:[data]";
-    }
-    const mxnet::TShape &dshape = in_shape->at(leakyrelu::kData);
-    if (!mxnet::ndim_is_known(dshape)) return false;
-    if (param_.act_type == leakyrelu::kPReLU) {
-      const mxnet::TShape &gshape = in_shape->at(leakyrelu::kGamma);
-      if (!mxnet::ndim_is_known(gshape)) {
-        in_shape->at(leakyrelu::kGamma) = mxnet::TShape(Shape1(dshape[1]));
-      }
-      if (dshape == gshape) {
-        SHAPE_ASSIGN_CHECK(*out_shape, 0, dshape);
-      }
-    }
-    out_shape->clear();
-    out_shape->push_back(dshape);
-    if (param_.act_type == leakyrelu::kRReLU) {
-      out_shape->push_back(dshape);
-    }
-    return true;
-  }
-
-  bool InferType(std::vector<int> *in_type,
-                 std::vector<int> *out_type,
-                 std::vector<int> *aux_type) const override {
-    int dtype = -1;
-    for (const int& type : *in_type) {
-      type_assign(&dtype, type);
-    }
-    for (const int& type : *out_type) {
-      type_assign(&dtype, type);
-    }
-
-    for (size_t i = 0; i < in_type->size(); ++i) {
-      TYPE_ASSIGN_CHECK(*in_type, i, dtype);
-    }
-    for (size_t i = 0; i < out_type->size(); ++i) {
-      TYPE_ASSIGN_CHECK(*out_type, i, dtype);
-    }
-    return dtype != -1;
-  }
-
-  OperatorProperty* Copy() const override {
-    auto ptr = new LeakyReLUProp();
-    ptr->param_ = param_;
-    return ptr;
-  }
-
-  std::string TypeString() const override {
-    return "LeakyReLU";
-  }
-
-  // decalre dependency and inplace optimization options
-  std::vector<int> DeclareBackwardDependency(
-    const std::vector<int> &out_grad,
-    const std::vector<int> &in_data,
-    const std::vector<int> &out_data) const override {
-    if (param_.act_type == leakyrelu::kPReLU) {
-      return {out_grad[leakyrelu::kOut],
-              out_data[leakyrelu::kOut],
-              in_data[leakyrelu::kData],
-              in_data[leakyrelu::kGamma]};
-    } else if (param_.act_type == leakyrelu::kRReLU) {
-      return {out_grad[leakyrelu::kOut], out_data[leakyrelu::kMask], out_data[leakyrelu::kOut]};
-    } else {
-      return {out_grad[leakyrelu::kOut], out_data[leakyrelu::kData]};
-    }
-  }
+  MSHADOW_REAL_TYPE_SWITCH(inputs[leakyrelu::kData].type_flag_, DType, {
+    LeakyReLUOp<xpu, DType> op(param);
+    op.Forward(ctx, inputs, req, outputs, no_use_but_adapt_origin_api);
+  });
+}
 
-  std::vector<std::pair<int, void*> > BackwardInplaceOption(
-    const std::vector<int> &out_grad,
-    const std::vector<int> &in_data,
-    const std::vector<int> &out_data,
-    const std::vector<void*> &in_grad) const override {
-    return {{out_grad[leakyrelu::kOut], in_grad[leakyrelu::kData]}};
-  }
-
-  std::vector<std::pair<int, void*> > ForwardInplaceOption(
-    const std::vector<int> &in_data,
-    const std::vector<void*> &out_data) const override {
-    if (param_.act_type == leakyrelu::kPReLU) {
-      return {};
-    } else {
-      return {{in_data[leakyrelu::kData], out_data[leakyrelu::kOut]}};
-    }
-  }
-
-  std::vector<std::string> ListArguments() const override {
-    if (param_.act_type == leakyrelu::kPReLU) {
-      return {"data", "gamma"};
-    } else {
-      return {"data"};
-    }
-  }
-
-  std::vector<std::string> ListOutputs() const override {
-    if (param_.act_type == leakyrelu::kRReLU) {
-      return {"output", "mask"};
-    } else {
-      return {"output"};
-    }
-  }
-
-  int NumOutputs() const override {
-    if (param_.act_type == leakyrelu::kRReLU) {
-      return 2;
-    } else {
-      return 1;
-    }
-  }
-
-  int NumVisibleOutputs() const override {
-    return 1;
-  }
-
-  std::vector<ResourceRequest> ForwardResource(
-      const mxnet::ShapeVector &in_shape) const override {
-    if (param_.act_type == leakyrelu::kRReLU) {
-      return 

[incubator-mxnet] tag 1.5.1.rc0 created (now c981848)

2019-09-05 Thread taolv
This is an automated email from the ASF dual-hosted git repository.

taolv pushed a change to tag 1.5.1.rc0
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


  at c981848  (commit)
No new revisions were added by this update.



[GitHub] [incubator-mxnet] chongruo opened a new issue #16101: [Bug, Feature Request] mx.nd.where()

2019-09-05 Thread GitBox
chongruo opened a new issue #16101: [Bug,  Feature Request]  mx.nd.where()
URL: https://github.com/apache/incubator-mxnet/issues/16101
 
 
   
   ## Description
   ### Bug 
   
[mx.nd.where()](https://beta.mxnet.io/api/ndarray/_autogen/mxnet.ndarray.where.html?highlight=where#mxnet.ndarray.where)
 shows incorrect behavior when one of its inputs is an NDArray with zero 
size. 
   
   Here is a reproducible example
   ```python
   cond = mx.nd.array([0])     # cond.shape: (1,)
   x = mx.nd.array([[10,10]])  # x.shape: (1, 2)
   y = mx.nd.array(4)          # y.shape: ()
   
   print( mx.nd.where(cond, x, y) )
   # output: [[4.e+00 3.0773e-41]]
   
   ```
   The output is garbage, and it appears that the zero-size NDArray is never 
checked. According to the [docs of 
mx.nd.where()](https://beta.mxnet.io/api/ndarray/_autogen/mxnet.ndarray.where.html?highlight=where#mxnet.ndarray.where),
 an error should be raised stating that the shapes of x and y must be the 
same. Broadcasting is not supported in the latest version, yet where() still 
produces an output.
   
   This is also somewhat dangerous: when users forget to type [] in 
``mx.nd.array([4])``, they silently get incorrect answers rather than an 
error message.
   
   
   
   
   ### Feature Request
    1. Broadcast
   
   Currently, there are two limitations in mx.nd.where():
- x and y must have the same shape
- If condition does not have the same shape as x, it must be a 1D array 
whose size equals the size of x's first dimension
   
   Similar to 
[np.where()](https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.where.html),
 it would be great if mx.nd.where() supported broadcasting, so that (cond, x, 
y) are broadcast to a common shape even when they are passed in with 
different shapes (see the sketch after this section).
   
   
   
    2.  Scalar inputs (cond, x and y)
   In some situations, we want to give a constant value for True/False.
   
   It would be user-friendly if programmers only needed to type 
   ```mx.nd.where(cond, x, 0)``` 
   instead of 
   ```mx.nd.where(cond, x,  mx.nd.array([0]))```
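   
   For reference, NumPy already broadcasts all three arguments and accepts 
Python scalars. A minimal sketch of the requested semantics, shown with pure 
NumPy since this is exactly the behavior the request would add to 
mx.nd.where():
   ```python
   import numpy as np
   
   cond = np.array([[0, 1]])  # shape (1, 2)
   x = np.array([[10, 10]])   # shape (1, 2)
   y = 4                      # Python scalar, broadcast to (1, 2)
   
   # np.where broadcasts (cond, x, y) to a common shape before selecting.
   print(np.where(cond, x, y))  # [[ 4 10]]
   ```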
   
   
   
   
   



   
   
   
   ---
   
   ## Environment info (Required)
   
   ```
   --Python Info--
   Version  : 3.6.9
   Compiler : GCC 7.3.0
   Build: ('default', 'Jul 30 2019 19:07:31')
   Arch : ('64bit', '')
   Pip Info---
   Version  : 19.2.2
   Directory: 
/home/ubuntu/anaconda3/envs/new/lib/python3.6/site-packages/pip
   --MXNet Info---
   Version  : 1.6.0
   Directory: /home/ubuntu/new/my-mxnet/python/mxnet
   Commit hash file "/home/ubuntu/new/my-mxnet/python/mxnet/COMMIT_HASH" not 
found. Not installed from pre-built package or built from source.
   Library  : 
['/home/ubuntu/new/my-mxnet/python/mxnet/../../build/libmxnet.so']
   Build features:
   No runtime build feature info available
   --System Info--
   Platform : Linux-4.4.0-1092-aws-x86_64-with-debian-stretch-sid
   system   : Linux
   node : ip-172-31-14-150
   release  : 4.4.0-1092-aws
   version  : #103-Ubuntu SMP Tue Aug 27 10:21:48 UTC 2019
   --Hardware Info--
   machine  : x86_64
   processor: x86_64
   Architecture:  x86_64
   CPU op-mode(s):32-bit, 64-bit
   Byte Order:Little Endian
   CPU(s):96
   On-line CPU(s) list:   0-95
   Thread(s) per core:2
   Core(s) per socket:24
   Socket(s): 2
   NUMA node(s):  2
   Vendor ID: GenuineIntel
   CPU family:6
   Model: 85
   Model name:Intel(R) Xeon(R) Platinum 8175M CPU @ 2.50GHz
   Stepping:  4
   CPU MHz:   2499.998
   BogoMIPS:  4999.99
   Hypervisor vendor: KVM
   Virtualization type:   full
   L1d cache: 32K
   L1i cache: 32K
   L2 cache:  1024K
   L3 cache:  33792K
   NUMA node0 CPU(s): 0-23,48-71
   NUMA node1 CPU(s): 24-47,72-95
   Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge 
mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm 
constant_tsc arch_perfmon rep_good nopl xtopology nonstop_tsc aperfmperf 
tsc_known_freq pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic 
movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm 
abm 3dnowprefetch invpcid_single pti fsgsbase tsc_adjust bmi1 hle avx2 smep 
bmi2 erms invpcid rtm mpx avx512f rdseed adx smap clflushopt clwb avx512cd 
xsaveopt xsavec xgetbv1 ida arat pku
   --Network Test--
   Setting timeout: 10
   Timing for MXNet: https://github.com/apache/incubator-mxnet, DNS: 0.0014 
sec, LOAD: 0.4787 sec.
   Timing for Gluon Tutorial(en): http://gluon.mxnet.io, DNS: 0.1707 sec, LOAD: 
0.2402 sec.
   Timing for Gluon Tutorial(cn): https://zh.gluon.ai, DNS: 0.0228 sec, LOAD: 
0.3108 sec.
   

[GitHub] [incubator-mxnet] mxnet-label-bot commented on issue #16101: [Bug, Feature Request] mx.nd.where()

2019-09-05 Thread GitBox
mxnet-label-bot commented on issue #16101: [Bug,  Feature Request]  
mx.nd.where()
URL: 
https://github.com/apache/incubator-mxnet/issues/16101#issuecomment-528375500
 
 
   Hey, this is the MXNet Label Bot. 
Thank you for submitting the issue! I will try and suggest some labels so 
that the appropriate MXNet community members can help resolve it. 
Here are my recommended label(s): Feature


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[incubator-mxnet-site] branch asf-site updated: Bump the publish timestamp.

2019-09-05 Thread marcoabreu
This is an automated email from the ASF dual-hosted git repository.

marcoabreu pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new bf789c1  Bump the publish timestamp.
bf789c1 is described below

commit bf789c15c58710ab18f4502123c8c924fd544aba
Author: mxnet-ci 
AuthorDate: Thu Sep 5 13:28:08 2019 +

Bump the publish timestamp.
---
 date.txt | 1 +
 1 file changed, 1 insertion(+)

diff --git a/date.txt b/date.txt
new file mode 100644
index 000..1ccc606
--- /dev/null
+++ b/date.txt
@@ -0,0 +1 @@
+Thu Sep  5 13:28:08 UTC 2019



[GitHub] [incubator-mxnet] anirudhacharya commented on a change in pull request #16053: Fixes #15543

2019-09-05 Thread GitBox
anirudhacharya commented on a change in pull request #16053: Fixes #15543
URL: https://github.com/apache/incubator-mxnet/pull/16053#discussion_r321256928
 
 

 ##
 File path: tests/python/unittest/test_optimizer.py
 ##
 @@ -396,6 +397,7 @@ def update(self, index, weight, grad, state):
                 weight32[:] += -lr * (grad32 + wd * weight32)
             else:
                 weight32[:] += (self.momentum**2 * mom) - lr*(self.momentum+1)*(grad32 + wd*weight32)
+                mom = (self.momentum*mom) - lr*(grad32 + wd*weight32)
 
 Review comment:
   The sgd unit test also has `lr` as part of the `mom` update: 
https://github.com/apache/incubator-mxnet/blob/master/tests/python/unittest/test_optimizer.py#L142.
 
   Let me know if you still think this needs to change.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] pengzhao-intel commented on issue #15767: FullyConnected op with float64 and MKL-DNN fails if gradient are not set in a specific way

2019-09-05 Thread GitBox
pengzhao-intel commented on issue #15767: FullyConnected op with float64 and 
MKL-DNN fails if gradient are not set in a specific way
URL: 
https://github.com/apache/incubator-mxnet/issues/15767#issuecomment-528323795
 
 
   @matteosal sorry for the delay. The PR was blocked by a 3rd-party package 
issue, but that is now resolved and it will be merged soon.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] pengzhao-intel commented on issue #15853: Float64 fallback for mkldnn subgraph and rnn op

2019-09-05 Thread GitBox
pengzhao-intel commented on issue #15853: Float64 fallback for mkldnn subgraph 
and rnn op
URL: https://github.com/apache/incubator-mxnet/pull/15853#issuecomment-528322312
 
 
   @zhennanqin could you resolve the conflict so that I can merge this one?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] Pagey commented on issue #16070: yajiedesign/mxnet "This branch is 137 commits behind apache:master."

2019-09-05 Thread GitBox
Pagey commented on issue #16070: yajiedesign/mxnet "This branch is 137 commits 
behind apache:master."
URL: 
https://github.com/apache/incubator-mxnet/issues/16070#issuecomment-528296624
 
 
   @haojin2 e.g. here: 
https://mxnet.incubator.apache.org/versions/master/install/windows_setup.html
   As I said, the prebuilds are built every day or two there, but for some 
reason the code hasn't been updated in a month. Others and I use these 
prebuilds for Windows installations; I could compile from source myself, but 
the prebuilds were more convenient.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] hzfan opened a new pull request #16100: Infra for tvm op runtime dispatch

2019-09-05 Thread GitBox
hzfan opened a new pull request #16100: Infra for tvm op runtime dispatch
URL: https://github.com/apache/incubator-mxnet/pull/16100
 
 
   ## Description ##
   This PR implements infrastructure that lets users dispatch the execution 
of a TVM operator to different schedules according to the runtime input 
shapes. This helps with acceleration. 
   
   A gemm example can be found in:
   - Kernel definition: contrib/tvmop/core/multiarray.py
   - Operator registry and dispatch: src/operator/contrib/tvmop/dot.cc
   - Benchmark: benchmark/python/tvmop/benchmark_tvmop.py
   
   Note that the benchmark results cannot be reproduced until 
[this](https://github.com/dmlc/tvm/pull/3842) gets merged. The following are 
some experimental results for matrix multiplication between two n * n 
matrices.
   
   | n  | Before Dispatch (ms) | After Dispatch (ms) |
   | --- | --- | --- |
   | 1024  | 177 | 482 |
   | 1056  | 190 | 366 |
   | 1088 | 200 | 424 |
   
   The schedule is still rough; it is equivalent to the 
[Blocking](https://docs.tvm.ai/tutorials/optimize/opt_gemm.html) optimization 
(the first step of that tutorial). Further optimizations (vectorization, loop 
permutation, array packing, write cache for blocks, parallelization) can be 
added for more acceleration. A sketch of the dispatch idea is shown below.
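   
   The core idea, sketched in Python (the bucketing scheme and function names 
here are illustrative only, not this PR's actual API): several kernel 
variants are pre-compiled, and the variant to run is chosen at call time from 
the runtime input shapes.
   ```python
   import numpy as np
   
   def kernel_small(a, b):  # stands in for a schedule tuned for small n
       return a @ b
   
   def kernel_large(a, b):  # stands in for a schedule tuned for large n
       return a @ b
   
   def dispatch_dot(a, b, threshold=1024):
       # Select the pre-compiled variant from the runtime shape,
       # instead of fixing one schedule at compile time.
       kernel = kernel_small if a.shape[0] < threshold else kernel_large
       return kernel(a, b)
   
   a = np.ones((1056, 1056))
   b = np.ones((1056, 1056))
   out = dispatch_dot(a, b)  # routed to kernel_large
   ```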
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [ ] The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to 
the relevant [JIRA issue](https://issues.apache.org/jira/projects/MXNET/issues) 
created (except PRs with tiny changes)
   - [x] Changes are complete (i.e. I finished coding on this PR)
   - [ ] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [ ] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, 
the source of the dataset, expected performance on the test set, and a 
reference to the original paper if applicable
   - Check the API doc at 
http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [x] To the best of my knowledge, examples are either not affected by this 
change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - Add dispatch infra
   - Add an example dot operator
   
   ## Comments ##
   - Thank @yzhliu and @reminisce for guidance and review.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] haojin2 commented on issue #16070: yajiedesign/mxnet "This branch is 137 commits behind apache:master."

2019-09-05 Thread GitBox
haojin2 commented on issue #16070: yajiedesign/mxnet "This branch is 137 
commits behind apache:master."
URL: 
https://github.com/apache/incubator-mxnet/issues/16070#issuecomment-528289176
 
 
   @Pagey Would you please point me to the doc that you're using now, so that 
I can double-check and help you?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[incubator-mxnet] branch master updated (b7071c4 -> d0fa8c0)

2019-09-05 Thread wkcn
This is an automated email from the ASF dual-hosted git repository.

wkcn pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from b7071c4  Enable tvm_op for ci (#15889)
 add d0fa8c0  Test large vector mean operator and fix a few bugs (#16079)

No new revisions were added by this update.

Summary of changes:
 src/common/tensor_inspector.h  |  1 -
 src/operator/mshadow_op.h  |  2 +-
 src/operator/tensor/broadcast_reduce-inl.h |  4 +-
 tests/nightly/test_large_vector.py | 95 --
 4 files changed, 55 insertions(+), 47 deletions(-)



[GitHub] [incubator-mxnet] wkcn merged pull request #16079: Test large vector mean operator and fix a few bugs

2019-09-05 Thread GitBox
wkcn merged pull request #16079: Test large vector mean operator and fix a few 
bugs
URL: https://github.com/apache/incubator-mxnet/pull/16079
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] hgt312 commented on issue #13482: [Feature Request] Release gpu memory by API.

2019-09-05 Thread GitBox
hgt312 commented on issue #13482: [Feature Request] Release gpu memory by API.
URL: 
https://github.com/apache/incubator-mxnet/issues/13482#issuecomment-528245309
 
 
   +1


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[incubator-mxnet-site] branch asf-site updated: Bump the publish timestamp.

2019-09-05 Thread marcoabreu
This is an automated email from the ASF dual-hosted git repository.

marcoabreu pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new a0567e3  Bump the publish timestamp.
a0567e3 is described below

commit a0567e3b2eefbcb6a625d16532717270139dca82
Author: mxnet-ci 
AuthorDate: Thu Sep 5 07:36:09 2019 +

Bump the publish timestamp.
---
 date.txt | 1 +
 1 file changed, 1 insertion(+)

diff --git a/date.txt b/date.txt
new file mode 100644
index 000..b871ae6
--- /dev/null
+++ b/date.txt
@@ -0,0 +1 @@
+Thu Sep  5 07:36:09 UTC 2019



[GitHub] [incubator-mxnet] damvantai commented on issue #1432: How can I clear the memory usage?

2019-09-05 Thread GitBox
damvantai commented on issue #1432: How can I clear the memory usage?
URL: 
https://github.com/apache/incubator-mxnet/issues/1432#issuecomment-528231456
 
 
   > This worked for me:
   > 
   > ```python
   > del mod
   > gc.collect()
   > # memory should be freed
   > ```
   
   but my GPU VRAM usage still does not decrease.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[incubator-mxnet] branch master updated (c6a92d9 -> b7071c4)

2019-09-05 Thread reminisce
This is an automated email from the ASF dual-hosted git repository.

reminisce pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from c6a92d9  remove 'foo' and other print msg from test (#16088)
 add b7071c4  Enable tvm_op for ci (#15889)

No new revisions were added by this update.

Summary of changes:
 CMakeLists.txt |  2 +-
 Makefile   | 37 +-
 ci/docker/Dockerfile.build.ubuntu_gpu_cu101|  1 +
 ci/docker/install/ubuntu_python.sh |  2 +-
 ci/docker/runtime_functions.sh | 15 +
 ci/jenkins/Jenkins_steps.groovy| 20 ++--
 .../assembly/src/main/assembly/assembly.xml|  1 +
 .../apache/mxnet/util/NativeLibraryLoader.scala|  1 +
 src/operator/contrib/tvmop/ufunc.cc|  2 +-
 tests/python/unittest/test_tvm_op.py   |  1 -
 10 files changed, 53 insertions(+), 29 deletions(-)



[GitHub] [incubator-mxnet] reminisce merged pull request #15889: Enable tvm_op for ci

2019-09-05 Thread GitBox
reminisce merged pull request #15889: Enable tvm_op for ci
URL: https://github.com/apache/incubator-mxnet/pull/15889
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] VoVAllen edited a comment on issue #16099: [Bug] NDArray asscalar error after set npx.set_np()

2019-09-05 Thread GitBox
VoVAllen edited a comment on issue #16099: [Bug] NDArray asscalar error after 
set npx.set_np()
URL: 
https://github.com/apache/incubator-mxnet/issues/16099#issuecomment-528216667
 
 
   It's from `assert a[0] == 1`, which calls `bool(a[0]==1)` and triggers the 
error.
   I'm from the DGL team, and we currently use NDArray since only the 
zero-copy interface from dlpack to ndarray is provided. And we need 
empty-shape support somehow. 
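   
   For context, the zero-copy path referred to here is roughly the following 
(a minimal sketch, assuming the `to_dlpack_for_read`/`from_dlpack` functions 
available in recent MXNet versions):
   ```python
   import mxnet as mx
   
   a = mx.nd.ones((3,))
   pack = mx.nd.to_dlpack_for_read(a)  # export to a DLPack capsule, no copy
   b = mx.nd.from_dlpack(pack)         # import; b shares memory with a
   ```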


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] VoVAllen edited a comment on issue #16099: [Bug] NDArray asscalar error after set npx.set_np()

2019-09-05 Thread GitBox
VoVAllen edited a comment on issue #16099: [Bug] NDArray asscalar error after 
set npx.set_np()
URL: 
https://github.com/apache/incubator-mxnet/issues/16099#issuecomment-528216667
 
 
   It's from `assert a[0] == 1`, which calls `bool(a[0]==1)` and triggers the 
error.
   I'm from the DGL team, and we currently use NDArray since only the 
interface from dlpack to ndarray is provided.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] VoVAllen commented on issue #16099: [Bug] NDArray asscalar error after set npx.set_np()

2019-09-05 Thread GitBox
VoVAllen commented on issue #16099: [Bug] NDArray asscalar error after set 
npx.set_np()
URL: 
https://github.com/apache/incubator-mxnet/issues/16099#issuecomment-528216667
 
 
   It's from `assert a[0] == 1`, which calls `bool(a[0]==1)` and triggers the 
error.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] reminisce commented on issue #16099: [Bug] NDArray asscalar error after set npx.set_np()

2019-09-05 Thread GitBox
reminisce commented on issue #16099: [Bug] NDArray asscalar error after set 
npx.set_np()
URL: 
https://github.com/apache/incubator-mxnet/issues/16099#issuecomment-528216411
 
 
   The problem is that the scalar sanity check for legacy NDArray is not 
numpy-compatible. I can send in a fix, but could you explain in what 
situation you would need to do this with numpy semantics on?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] reminisce edited a comment on issue #16098: [Bug] Unrecognize parameter shape after npx.set_up()

2019-09-05 Thread GitBox
reminisce edited a comment on issue #16098: [Bug] Unrecognize parameter shape 
after npx.set_up()
URL: 
https://github.com/apache/incubator-mxnet/issues/16098#issuecomment-528214571
 
 
   It looks like you saved a net which had not been initialized. It should 
work like the following. However, recent changes to ndarray indexing have 
prevented assigning a legacy `NDArray` to an `mxnet.numpy.ndarray`, so the 
`load` function cannot work right now; that is a different issue from this 
one. I will submit a PR to fix it.
   ```python
   from mxnet import npx, np
   npx.set_np()
   from mxnet.gluon import nn
   print(nn.Dense(32).collect_params())  # weight shape=(32,-1)
   
   net = nn.Dense(32)
   net.initialize()
   net(np.ones((4, 11)))
   print(net.collect_params())  # weight shape=(32,-1)
   
   net.save_parameters('test.params')
   net.load_parameters('test.params')
   print(net.collect_params())
   ```


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] reminisce commented on issue #16098: [Bug] Unrecognize parameter shape after npx.set_up()

2019-09-05 Thread GitBox
reminisce commented on issue #16098: [Bug] Unrecognize parameter shape after 
npx.set_up()
URL: 
https://github.com/apache/incubator-mxnet/issues/16098#issuecomment-528214571
 
 
   It looks like you saved a net which had not been initialized. It should 
work like the following. However, recent changes to ndarray indexing have 
prevented assigning a legacy `NDArray` to an `mxnet.numpy.ndarray`, so the 
`load` function cannot work right now. I will submit a PR to fix this.
   ```python
   from mxnet import npx, np
   npx.set_np()
   from mxnet.gluon import nn
   print(nn.Dense(32).collect_params())  # weight shape=(32,-1)
   
   net = nn.Dense(32)
   net.initialize()
   net(np.ones((4, 11)))
   print(net.collect_params())  # weight shape=(32,-1)
   
   net.save_parameters('test.params')
   net.load_parameters('test.params')
   print(net.collect_params())
   ```


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services