[GitHub] hellonico opened a new pull request #13624: WIP: Nightly Tests For Clojure

2018-12-11 Thread GitBox
hellonico opened a new pull request #13624: WIP: Nightly Tests For Clojure
URL: https://github.com/apache/incubator-mxnet/pull/13624
 
 
   ## Description ##
   
   This is about running the integration tests during the nightly CI.
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [ ] The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to 
the relevant [JIRA issue](https://issues.apache.org/jira/projects/MXNET/issues) 
created (except PRs with tiny changes)
   - [ ] Changes are complete (i.e. I finished coding on this PR)
   - [ ] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [ ] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, 
the source of the dataset, expected performance on the test set, and a reference 
to the original paper if applicable
   - Check the API doc at 
http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [ ] To the best of my knowledge, examples are either not affected by this 
change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [ ] Feature1, tests, (and when applicable, API doc)
   - [ ] Feature2, tests, (and when applicable, API doc)
   
   ## Comments ##
   - If this change is a backward incompatible change, why must this change be 
made.
   - Interesting edge cases to note here
   




[GitHub] MyYaYa commented on issue #13001: Feature request: numpy.cumsum

2018-12-11 Thread GitBox
MyYaYa commented on issue #13001: Feature request: numpy.cumsum
URL: 
https://github.com/apache/incubator-mxnet/issues/13001#issuecomment-446489801
 
 
   Yeah, if ndarray supported the cumsum operation, some custom metrics (e.g., mAP) 
would benefit from it.
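   A short sketch of why cumsum helps for mAP: average precision is built from 
running true-positive counts, which is exactly a cumulative sum (numpy shown here 
as a stand-in for the requested ndarray op):

```python
import numpy as np

# detections sorted by descending confidence; 1 = true positive, 0 = false positive
tp = np.array([1, 1, 0, 1, 0, 0, 1])
tp_cum = np.cumsum(tp)                        # running true-positive count
precision = tp_cum / np.arange(1, len(tp) + 1)
recall = tp_cum / tp.sum()                    # tp.sum() stands in for the ground-truth positive count
print(precision, recall)
```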




[incubator-mxnet-site] branch asf-site updated: Bump the publish timestamp.

2018-12-11 Thread zhasheng
This is an automated email from the ASF dual-hosted git repository.

zhasheng pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new 8f01f4b  Bump the publish timestamp.
8f01f4b is described below

commit 8f01f4be39e40f18e32ba8d35869675276061412
Author: mxnet-ci 
AuthorDate: Wed Dec 12 06:59:42 2018 +0000

Bump the publish timestamp.
---
 date.txt | 1 +
 1 file changed, 1 insertion(+)

diff --git a/date.txt b/date.txt
new file mode 100644
index 000..8e80d29
--- /dev/null
+++ b/date.txt
@@ -0,0 +1 @@
+Wed Dec 12 06:59:42 UTC 2018



[GitHub] chinakook commented on issue #8335: Performance of MXNet on Windows is lower than that on Linux by 15%-20%

2018-12-11 Thread GitBox
chinakook commented on issue #8335: Performance of MXNet on Windows  is lower 
than that on Linux by 15%-20%
URL: 
https://github.com/apache/incubator-mxnet/issues/8335#issuecomment-446478064
 
 
   Thx, I will try later.




[GitHub] zheng-da commented on a change in pull request #13419: [MXNET-1233] Enable dynamic shape in CachedOp

2018-12-11 Thread GitBox
zheng-da commented on a change in pull request #13419: [MXNET-1233] Enable 
dynamic shape in CachedOp
URL: https://github.com/apache/incubator-mxnet/pull/13419#discussion_r240883314
 
 

 ##
 File path: src/imperative/cached_op.cc
 ##
 @@ -262,6 +262,29 @@ std::vector<nnvm::NodeEntry> CachedOp::Gradient(
   return ret;
 }
 
+bool CachedOp::CheckDynamicShapeExists(const Context& default_ctx,
+                                       const std::vector<NDArray*>& inputs) {
 
 Review comment:
   I wonder if it's better to check for operators with dynamic shape directly. 
Right now, it assumes that if a computation graph can't infer shapes, it contains 
dynamic-shape operators. It would be better to write one check that works for both 
CachedOp and the symbol executor. Whether a graph contains dynamic shapes is a 
property of the computation graph, and we can easily check it by traversing all 
operators in the graph.
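   A minimal Python-level sketch of that traversal idea, assuming a known registry 
of dynamic-shape op names (the registry and helper below are illustrative, not an 
existing API):

```python
import json
import mxnet as mx

# Hypothetical registry: ops whose output shapes are only known at runtime.
DYNAMIC_SHAPE_OPS = {'_contrib_boolean_mask'}

def graph_has_dynamic_shape(sym):
    """Traverse every node of the symbol's graph and report whether any
    operator is known to produce runtime-dependent output shapes."""
    nodes = json.loads(sym.tojson())['nodes']
    return any(node['op'] in DYNAMIC_SHAPE_OPS for node in nodes)

data = mx.sym.Variable('data')
net = mx.sym.FullyConnected(data=data, num_hidden=10)
print(graph_has_dynamic_shape(net))  # False: only static-shape ops in this graph
```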




[GitHub] zheng-da commented on a change in pull request #13419: [MXNET-1233] Enable dynamic shape in CachedOp

2018-12-11 Thread GitBox
zheng-da commented on a change in pull request #13419: [MXNET-1233] Enable 
dynamic shape in CachedOp
URL: https://github.com/apache/incubator-mxnet/pull/13419#discussion_r240890932
 
 

 ##
 File path: src/imperative/imperative_utils.cc
 ##
 @@ -22,6 +22,114 @@
 
 namespace mxnet {
 namespace imperative {
+
+void NaiveRunGraph(
+    const bool retain_graph,
+    const Context& default_ctx,
+    const nnvm::IndexedGraph& idx,
+    const std::vector<NDArray*> arrays,
+    size_t node_start, size_t node_end,
+    std::vector<OpReqType>&& array_reqs,
+    std::vector<uint32_t>&& ref_count,
+    std::vector<OpStatePtr> *p_states,
+    const DispatchModeVector &dispatch_modes,
+    bool recording) {
+  using namespace nnvm;
+  using namespace imperative;
+  static auto& createop = nnvm::Op::GetAttr<FCreateOpState>("FCreateOpState");
+  static auto& is_layer_backward = Op::GetAttr<bool>("TIsLayerOpBackward");
+  static const auto bwd_cached_op = Op::Get("_backward_CachedOp");
+
+  const auto imp = Imperative::Get();
+
+  std::vector<OpStatePtr>& states = *p_states;
+
+  for (size_t i = node_start; i < node_end; ++i) {
+    const nnvm::IndexedGraph::Node& node = idx[i];
+    if (node.source->op() == nullptr) {
+      continue;
+    }
+    size_t num_outputs = node.source->num_outputs();
+    // construct `ndinputs`
+    std::vector<NDArray*> ndinputs;
+    ndinputs.reserve(node.inputs.size());
+    for (const auto& j : node.inputs) {
+      ndinputs.emplace_back(arrays[idx.entry_id(j)]);
+      CHECK(!ndinputs.back()->is_none()) << idx[j.node_id].source->attrs.name << " " << j.index;
+    }
+    // construct `ndoutputs` and `req`
+    std::vector<NDArray*> ndoutputs;
+    ndoutputs.reserve(num_outputs);
+    for (size_t j = 0; j < num_outputs; ++j) {
+      size_t eid = idx.entry_id(i, j);
+      ndoutputs.emplace_back(arrays[eid]);
+    }
+    // other auxiliary data
+    Context ctx = GetContext(node.source->attrs, ndinputs, ndoutputs, default_ctx);
+    auto invoke = [&](const OpStatePtr &state) {
+      DispatchMode dispatch_mode = DispatchMode::kUndefined;
+      SetShapeType(ctx, node.source->attrs, ndinputs, ndoutputs, &dispatch_mode);
+      std::vector<OpReqType> req;
+      SetWriteInplaceReq(ndinputs, ndoutputs, &req);
+      imp->InvokeOp(ctx, node.source->attrs, ndinputs, ndoutputs, req, dispatch_mode, state);
+      for (size_t i = 0; i < ndoutputs.size(); i++) {
+        if (ndoutputs[i]->shape().ndim() == 0) {
+          ndoutputs[i]->WaitToRead();
+          ndoutputs[i]->SetShapeFromChunk();
+        }
+      }
+      if (recording) {
+        imp->RecordOp(NodeAttrs(node.source->attrs), ndinputs, ndoutputs, state);
+      }
+    };
+    if (node.source->op() == bwd_cached_op) {
+      // case 1: backward cached op
+      std::vector<OpReqType> req;
+      req.reserve(num_outputs);
+      for (size_t j = 0; j < num_outputs; ++j) {
+        size_t eid = idx.entry_id(i, j);
+        req.push_back(array_reqs[eid]);
+        CHECK(array_reqs[eid] == kNullOp || !ndoutputs.back()->is_none());
+      }
+      const auto& cached_op = dmlc::get<CachedOpPtr>(node.source->attrs.parsed);
+      nnvm::Node* fwd_node = node.source->control_deps[0].get();
+      auto fwd_node_id = idx.node_id(fwd_node);
+      cached_op->Backward(retain_graph, states[fwd_node_id], ndinputs, req, ndoutputs);
+    } else if (createop.count(node.source->op())) {
+      // case 2: node is in createop
 
 Review comment:
   I think this is to handle stateful operators.




[GitHub] zheng-da commented on a change in pull request #13419: [MXNET-1233] Enable dynamic shape in CachedOp

2018-12-11 Thread GitBox
zheng-da commented on a change in pull request #13419: [MXNET-1233] Enable 
dynamic shape in CachedOp
URL: https://github.com/apache/incubator-mxnet/pull/13419#discussion_r240889937
 
 

 ##
 File path: src/imperative/cached_op.cc
 ##
 @@ -834,6 +861,61 @@ OpStatePtr CachedOp::DynamicForward(
   return op_state;
 }
 
+OpStatePtr CachedOp::NaiveForward(
+    const Context& default_ctx,
+    const std::vector<NDArray*>& inputs,
+    const std::vector<NDArray*>& outputs) {
+  using namespace nnvm;
+  using namespace imperative;
+  // Initialize
+  bool recording = Imperative::Get()->is_recording();
+  auto op_state = OpStatePtr::Create<DynamicRuntime>();
+  auto& runtime = op_state.get_state<DynamicRuntime>();
+  {
+    auto state_ptr = GetCachedOpState(default_ctx);
+    auto& state = state_ptr.get_state<CachedOpState>();
+    std::lock_guard<std::mutex> lock(state.mutex);
+    SetForwardGraph(&state.info, recording, inputs);
+    runtime.info.fwd_graph = state.info.fwd_graph;
+  }
+  // build the indexed graph
+  nnvm::Graph& g = runtime.info.fwd_graph;
+  const auto& idx = g.indexed_graph();
+  const size_t num_inputs = idx.input_nodes().size();
+  const size_t num_entries = idx.num_node_entries();
+  std::vector<uint32_t> ref_count = g.GetAttr<std::vector<uint32_t> >(
+    recording ? "full_ref_count" : "forward_ref_count");
+  // construct `arrays`
+  runtime.buff.resize(num_entries);
+  std::vector<NDArray*> arrays;
+  arrays.reserve(num_entries);
+  for (auto& item : runtime.buff) {
+    arrays.push_back(&item);
+  }
 
 Review comment:
   I wonder if we should buffer the arrays from the previous run?




[GitHub] zheng-da commented on a change in pull request #13419: [MXNET-1233] Enable dynamic shape in CachedOp

2018-12-11 Thread GitBox
zheng-da commented on a change in pull request #13419: [MXNET-1233] Enable 
dynamic shape in CachedOp
URL: https://github.com/apache/incubator-mxnet/pull/13419#discussion_r240890307
 
 

 ##
 File path: src/imperative/imperative_utils.cc
 ##
 @@ -22,6 +22,114 @@
 
 namespace mxnet {
 namespace imperative {
+
+void NaiveRunGraph(
+    const bool retain_graph,
+    const Context& default_ctx,
+    const nnvm::IndexedGraph& idx,
+    const std::vector<NDArray*> arrays,
+    size_t node_start, size_t node_end,
+    std::vector<OpReqType>&& array_reqs,
+    std::vector<uint32_t>&& ref_count,
+    std::vector<OpStatePtr> *p_states,
+    const DispatchModeVector &dispatch_modes,
+    bool recording) {
+  using namespace nnvm;
+  using namespace imperative;
+  static auto& createop = nnvm::Op::GetAttr<FCreateOpState>("FCreateOpState");
+  static auto& is_layer_backward = Op::GetAttr<bool>("TIsLayerOpBackward");
+  static const auto bwd_cached_op = Op::Get("_backward_CachedOp");
+
+  const auto imp = Imperative::Get();
+
+  std::vector<OpStatePtr>& states = *p_states;
+
+  for (size_t i = node_start; i < node_end; ++i) {
+    const nnvm::IndexedGraph::Node& node = idx[i];
+    if (node.source->op() == nullptr) {
+      continue;
+    }
+    size_t num_outputs = node.source->num_outputs();
+    // construct `ndinputs`
+    std::vector<NDArray*> ndinputs;
+    ndinputs.reserve(node.inputs.size());
+    for (const auto& j : node.inputs) {
+      ndinputs.emplace_back(arrays[idx.entry_id(j)]);
+      CHECK(!ndinputs.back()->is_none()) << idx[j.node_id].source->attrs.name << " " << j.index;
+    }
+    // construct `ndoutputs` and `req`
+    std::vector<NDArray*> ndoutputs;
+    ndoutputs.reserve(num_outputs);
+    for (size_t j = 0; j < num_outputs; ++j) {
+      size_t eid = idx.entry_id(i, j);
+      ndoutputs.emplace_back(arrays[eid]);
+    }
+    // other auxiliary data
+    Context ctx = GetContext(node.source->attrs, ndinputs, ndoutputs, default_ctx);
+    auto invoke = [&](const OpStatePtr &state) {
+      DispatchMode dispatch_mode = DispatchMode::kUndefined;
+      SetShapeType(ctx, node.source->attrs, ndinputs, ndoutputs, &dispatch_mode);
 
 Review comment:
   Do we still infer shapes here?




[GitHub] TaoLv commented on a change in pull request #13150: support mkl log when dtype is fp32 or fp64

2018-12-11 Thread GitBox
TaoLv commented on a change in pull request #13150: support mkl log when dtype 
is fp32 or fp64
URL: https://github.com/apache/incubator-mxnet/pull/13150#discussion_r240885436
 
 

 ##
 File path: src/operator/tensor/elemwise_unary_op.h
 ##
 @@ -348,6 +352,42 @@ class UnaryOp : public OpBase {
       LogUnimplementedOp(attrs, ctx, inputs, req, outputs);
     }
   }
+
+#if MSHADOW_USE_MKL == 1
+  static inline void MKLLog(MKL_INT size, const float* pIn, float* pOut) {
+    vsLn(size, pIn, pOut);
+  }
+
+  static inline void MKLLog(MKL_INT size, const double* pIn, double* pOut) {
+    vdLn(size, pIn, pOut);
+  }
+#endif
+
+  template<typename xpu>
+  static void LogCompute(const nnvm::NodeAttrs& attrs,
+                         const OpContext& ctx,
+                         const std::vector<TBlob>& inputs,
+                         const std::vector<OpReqType>& req,
+                         const std::vector<TBlob>& outputs) {
+    if (req[0] == kNullOp) return;
+    // if defined MSHADOW_USE_MKL then call mkl log when req is kWriteTo, type_flag is
+    // mshadow::kFloat32 or mshadow::kFloat64 and data size less than or equal MKL_INT_MAX
+#if MSHADOW_USE_MKL == 1
+    auto type_flag = inputs[0].type_flag_;
+    const size_t MKL_INT_MAX = (sizeof(MKL_INT) == sizeof(int)) ? INT_MAX : LLONG_MAX;
+    size_t input_size = inputs[0].Size();
+    if (req[0] == kWriteTo && (type_flag == mshadow::kFloat32
+      || type_flag == mshadow::kFloat64) && input_size <= MKL_INT_MAX) {
 
 Review comment:
   Fix the indentation.




[GitHub] TaoLv commented on a change in pull request #13150: support mkl log when dtype is fp32 or fp64

2018-12-11 Thread GitBox
TaoLv commented on a change in pull request #13150: support mkl log when dtype 
is fp32 or fp64
URL: https://github.com/apache/incubator-mxnet/pull/13150#discussion_r240885471
 
 

 ##
 File path: src/operator/tensor/elemwise_unary_op.h
 ##
 @@ -348,6 +352,42 @@ class UnaryOp : public OpBase {
       LogUnimplementedOp(attrs, ctx, inputs, req, outputs);
     }
   }
+
+#if MSHADOW_USE_MKL == 1
+  static inline void MKLLog(MKL_INT size, const float* pIn, float* pOut) {
+    vsLn(size, pIn, pOut);
+  }
+
+  static inline void MKLLog(MKL_INT size, const double* pIn, double* pOut) {
+    vdLn(size, pIn, pOut);
+  }
+#endif
+
+  template<typename xpu>
+  static void LogCompute(const nnvm::NodeAttrs& attrs,
+                         const OpContext& ctx,
+                         const std::vector<TBlob>& inputs,
+                         const std::vector<OpReqType>& req,
+                         const std::vector<TBlob>& outputs) {
+    if (req[0] == kNullOp) return;
+    // if defined MSHADOW_USE_MKL then call mkl log when req is kWriteTo, type_flag is
+    // mshadow::kFloat32 or mshadow::kFloat64 and data size less than or equal MKL_INT_MAX
+#if MSHADOW_USE_MKL == 1
+    auto type_flag = inputs[0].type_flag_;
+    const size_t MKL_INT_MAX = (sizeof(MKL_INT) == sizeof(int)) ? INT_MAX : LLONG_MAX;
+    size_t input_size = inputs[0].Size();
+    if (req[0] == kWriteTo && (type_flag == mshadow::kFloat32
+      || type_flag == mshadow::kFloat64) && input_size <= MKL_INT_MAX) {
+      MSHADOW_SGL_DBL_TYPE_SWITCH(type_flag, DType, {
+        MKLLog(input_size, inputs[0].dptr<DType>(), outputs[0].dptr<DType>());
+      })
 
 Review comment:
   Need a `;` here.




[GitHub] TaoLv commented on a change in pull request #13150: support mkl log when dtype is fp32 or fp64

2018-12-11 Thread GitBox
TaoLv commented on a change in pull request #13150: support mkl log when dtype 
is fp32 or fp64
URL: https://github.com/apache/incubator-mxnet/pull/13150#discussion_r240885737
 
 

 ##
 File path: src/operator/tensor/elemwise_unary_op.h
 ##
 @@ -34,6 +34,10 @@
 #include "../mxnet_op.h"
 #include "../elemwise_op_common.h"
 #include "../../ndarray/ndarray_function.h"
+#if MSHADOW_USE_MKL == 1
+#include "mkl.h"
+#endif
+#include<climits>
 
 Review comment:
   Need a space before `<`, and put this include before `#include 
"./cast_storage-inl.h"`.




[GitHub] pengzhao-intel commented on a change in pull request #13599: fallback to dense version for grad(reshape), grad(expand_dims)

2018-12-11 Thread GitBox
pengzhao-intel commented on a change in pull request #13599: fallback to dense 
version for grad(reshape), grad(expand_dims)
URL: https://github.com/apache/incubator-mxnet/pull/13599#discussion_r240881956
 
 

 ##
 File path: src/operator/tensor/elemwise_unary_op_basic.cc
 ##
 @@ -236,6 +236,27 @@ NNVM_REGISTER_OP(_backward_copy)
     return std::vector<bool>{true};
   });
 
+NNVM_REGISTER_OP(_backward_reshape)
+.set_num_inputs(1)
+.set_num_outputs(1)
+.set_attr<nnvm::TIsBackward>("TIsBackward", true)
+.set_attr<nnvm::FInplaceOption>("FInplaceOption",
+  [](const NodeAttrs& attrs){
+    return std::vector<std::pair<int, int> >{{0, 0}};
+  })
+.set_attr<FInferStorageType>("FInferStorageType", ElemwiseStorageType<1, 1, false, false, false>)
+.set_attr<FCompute>("FCompute<cpu>", UnaryOp::IdentityCompute<cpu>)
+#if MXNET_USE_MKLDNN == 1
+.set_attr<bool>("TIsMKLDNN", true)
+.set_attr<FResourceRequest>("FResourceRequest", [](const NodeAttrs& n) {
+  return std::vector<ResourceRequest>{ResourceRequest::kTempSpace};
+})
+#endif
 
 Review comment:
   FYI, https://github.com/apache/incubator-mxnet/pull/12980 enabled the forward 
(FW) pass of MKL-DNN-supported reshape, but the backward (BW) pass is still WIP. @huangzhiyuan 




[GitHub] pengzhao-intel edited a comment on issue #13602: Fix for import mxnet taking long time if multiple process launched

2018-12-11 Thread GitBox
pengzhao-intel edited a comment on issue #13602: Fix for import mxnet taking 
long time if multiple process launched
URL: https://github.com/apache/incubator-mxnet/pull/13602#issuecomment-446459523
 
 
   Per [this comment](https://github.com/apache/incubator-mxnet/issues/10560#issuecomment-381514559) 
on #10560, "It sometimes block executing in Ubuntu and always block executing in 
Windows". Several related issues, including import hangs, have also been reported.
   
   Could anyone help verify the functionality of this feature? @mseth10 @azai91 
@lupesko 
   Maybe we should set it off by default. Any ideas?
   
   @cjolivier01 could you provide more background on how auto-tuning works?
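   A minimal sketch of turning the feature off per process, assuming the 
`MXNET_USE_OPERATOR_TUNING` environment variable is the switch that gates this 
auto-tuning pass:

```python
import os

# Disable the operator auto-tuning pass before mxnet is imported
# (assumption: MXNET_USE_OPERATOR_TUNING gates the tuning loop).
os.environ['MXNET_USE_OPERATOR_TUNING'] = '0'

import mxnet as mx  # import should now skip the tuning measurements
```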




[GitHub] zheng-da commented on issue #13599: fallback to dense version for grad(reshape), grad(expand_dims)

2018-12-11 Thread GitBox
zheng-da commented on issue #13599: fallback to dense version for 
grad(reshape), grad(expand_dims)
URL: https://github.com/apache/incubator-mxnet/pull/13599#issuecomment-446459824
 
 
   Otherwise, it looks good to me.
   
   BTW, I don't think it has anything to do with sparse.




[GitHub] pengzhao-intel commented on issue #13602: Fix for import mxnet taking long time if multiple process launched

2018-12-11 Thread GitBox
pengzhao-intel commented on issue #13602: Fix for import mxnet taking long time 
if multiple process launched
URL: https://github.com/apache/incubator-mxnet/pull/13602#issuecomment-446459523
 
 
   Per [this comment](https://github.com/apache/incubator-mxnet/issues/10560#issuecomment-381514559) 
on #10560, "It sometimes block executing in Ubuntu and always block executing in 
Windows". Several related issues, including import hangs, have also been reported.
   
   Could anyone help verify the functionality of this feature? 
   Maybe we should set it off by default. Any ideas?




[GitHub] zheng-da commented on a change in pull request #13599: fallback to dense version for grad(reshape), grad(expand_dims)

2018-12-11 Thread GitBox
zheng-da commented on a change in pull request #13599: fallback to dense 
version for grad(reshape), grad(expand_dims)
URL: https://github.com/apache/incubator-mxnet/pull/13599#discussion_r240880048
 
 

 ##
 File path: src/operator/tensor/elemwise_unary_op_basic.cc
 ##
 @@ -236,6 +236,27 @@ NNVM_REGISTER_OP(_backward_copy)
     return std::vector<bool>{true};
   });
 
+NNVM_REGISTER_OP(_backward_reshape)
+.set_num_inputs(1)
+.set_num_outputs(1)
+.set_attr<nnvm::TIsBackward>("TIsBackward", true)
+.set_attr<nnvm::FInplaceOption>("FInplaceOption",
+  [](const NodeAttrs& attrs){
+    return std::vector<std::pair<int, int> >{{0, 0}};
+  })
+.set_attr<FInferStorageType>("FInferStorageType", ElemwiseStorageType<1, 1, false, false, false>)
+.set_attr<FCompute>("FCompute<cpu>", UnaryOp::IdentityCompute<cpu>)
+#if MXNET_USE_MKLDNN == 1
+.set_attr<bool>("TIsMKLDNN", true)
+.set_attr<FResourceRequest>("FResourceRequest", [](const NodeAttrs& n) {
+  return std::vector<ResourceRequest>{ResourceRequest::kTempSpace};
+})
+#endif
 
 Review comment:
   Does MKL-DNN support reshape?
   
   If you want to optimize for MKL-DNN, you should add FComputeEx.




[GitHub] pengzhao-intel commented on issue #12203: flaky test: test_operator_gpu.test_depthwise_convolution

2018-12-11 Thread GitBox
pengzhao-intel commented on issue #12203: flaky test: 
test_operator_gpu.test_depthwise_convolution
URL: 
https://github.com/apache/incubator-mxnet/issues/12203#issuecomment-446457845
 
 
   @juliusshufan could you help take a look at this test case?




[GitHub] StatML commented on issue #11868: nnpack_fully_connected-inl.h:45:55: error: expected template-name before '<' token > class NNPACKFullyConnectedOp : public FullyConnectedOp<xpu, DType> { >

2018-12-11 Thread GitBox
StatML commented on issue #11868: nnpack_fully_connected-inl.h:45:55: error: 
expected template-name before '<' token > class NNPACKFullyConnectedOp : 
public FullyConnectedOp<xpu, DType> { > ^
URL: 
https://github.com/apache/incubator-mxnet/issues/11868#issuecomment-446457593
 
 
   I just tried on Ubuntu 18.04.1; this issue is still not fixed... :(




[GitHub] jiangzhengkai removed a comment on issue #13623: This convolution is not supported by cudnn

2018-12-11 Thread GitBox
jiangzhengkai removed a comment on issue #13623:  This convolution is not 
supported by cudnn
URL: 
https://github.com/apache/incubator-mxnet/issues/13623#issuecomment-446449640
 
 
   Hi, I'm using the 1.5.0 version of MXNet. The log "This convolution is not 
supported by cudnn" is really annoying. How can I suppress the warning?
   I also tried this, however, it does not help.
   
![image](https://user-images.githubusercontent.com/19780166/49845378-71a71d80-fe01-11e8-8869-af9c97cd55a2.png)
   
   The CUDA version is 8.0.
   
   Here is the cuDNN version:
   
![image](https://user-images.githubusercontent.com/19780166/49845417-9bf8db00-fe01-11e8-892e-2b8ae17757dc.png)
   
   Does the 1.5.0 version need a higher cuDNN version?
   
   Thanks!




[GitHub] jiangzhengkai commented on issue #13623: This convolution is not supported by cudnn

2018-12-11 Thread GitBox
jiangzhengkai commented on issue #13623:  This convolution is not supported by 
cudnn
URL: 
https://github.com/apache/incubator-mxnet/issues/13623#issuecomment-446449640
 
 
   Hi, I'm using the 1.5.0 version of MXNet. The log "This convolution is not 
supported by cudnn" is really annoying. How can I suppress the warning?
   I also tried this, however, it does not help.
   
![image](https://user-images.githubusercontent.com/19780166/49845378-71a71d80-fe01-11e8-8869-af9c97cd55a2.png)
   
   The CUDA version is 8.0.
   
   Here is the cuDNN version:
   
![image](https://user-images.githubusercontent.com/19780166/49845417-9bf8db00-fe01-11e8-892e-2b8ae17757dc.png)
   
   Does the 1.5.0 version need a higher cuDNN version?
   
   Thanks!




[GitHub] jiangzhengkai opened a new issue #13623: This convolution is not supported by cudnn

2018-12-11 Thread GitBox
jiangzhengkai opened a new issue #13623:  This convolution is not supported by 
cudnn
URL: https://github.com/apache/incubator-mxnet/issues/13623
 
 
   Hi, I'm using the 1.5.0 version of MXNet. The log "This convolution is not 
supported by cudnn" is really annoying. How can I suppress the warning?
   I also tried this, however, it does not help.
   
![image](https://user-images.githubusercontent.com/19780166/49845119-6a334480-fe00-11e8-9689-b39804f51581.png)
   
   The CUDA version is 8.0.
   
   Here is the cuDNN version:
   
![image](https://user-images.githubusercontent.com/19780166/49845324-212fc000-fe01-11e8-982b-ad6c3b84dc96.png)
   
   Does the 1.5.0 version need a higher cuDNN version?
   
   Thanks!
   




[GitHub] xinyu-intel opened a new pull request #13622: [WIP]Fix SSD accuracy variance

2018-12-11 Thread GitBox
xinyu-intel opened a new pull request #13622: [WIP]Fix SSD accuracy variance
URL: https://github.com/apache/incubator-mxnet/pull/13622
 
 
   ## Description ##
   This PR fixes an OpenMP bug in `multibox_detection` to avoid accuracy variance 
in the SSD topology.
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [ ] The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to 
the relevant [JIRA issue](https://issues.apache.org/jira/projects/MXNET/issues) 
created (except PRs with tiny changes)
   - [ ] Changes are complete (i.e. I finished coding on this PR)
   - [ ] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [ ] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, 
the source of the dataset, expected performance on the test set, and a reference 
to the original paper if applicable
   - Check the API doc at 
http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [ ] To the best of my knowledge, examples are either not affected by this 
change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [ ] Feature1, tests, (and when applicable, API doc)
   - [ ] Feature2, tests, (and when applicable, API doc)
   
   ## Comments ##
   - If this change is a backward incompatible change, why must this change be 
made.
   - Interesting edge cases to note here
   
   @pengzhao-intel @TaoLv 
   




[GitHub] TaoLv commented on issue #13150: support mkl log when dtype is fp32 or fp64

2018-12-11 Thread GitBox
TaoLv commented on issue #13150: support mkl log when dtype is fp32 or fp64
URL: https://github.com/apache/incubator-mxnet/pull/13150#issuecomment-446430758
 
 
   @XiaotaoChen since #13607 is merged, please rebase the code and retrigger CI. 
Make sure that the unit test for the log operator actually runs into MKL BLAS.
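   One hedged way to check that, assuming MKL's standard `MKL_VERBOSE` switch also 
reports the VML `vsLn`/`vdLn` calls this PR adds:

```python
import os
os.environ['MKL_VERBOSE'] = '1'   # standard MKL switch: logs MKL calls to stdout

import mxnet as mx
x = mx.nd.random.uniform(1, 2, shape=(64, 64))
y = mx.nd.log(x)                  # if MKL is hit, a vsLn entry should be logged
y.wait_to_read()
```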




[GitHub] ifeherva commented on issue #3118: Gradient reversal layer without custom operator

2018-12-11 Thread GitBox
ifeherva commented on issue #3118: Gradient reversal layer without custom 
operator
URL: 
https://github.com/apache/incubator-mxnet/issues/3118#issuecomment-446429492
 
 
   I currently implemented this as a custom operator; however, I think it would be 
of interest to others. Would it make sense to implement it in C++ as a standard or 
contrib operator? If yes, I can PR it.
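   For reference, a minimal sketch of the custom-operator version (identity forward, 
negated gradient backward), using the standard `mx.operator.CustomOp` API with 
illustrative names:

```python
import mxnet as mx

class GradReverse(mx.operator.CustomOp):
    """Identity in the forward pass, negated gradient in the backward pass."""
    def forward(self, is_train, req, in_data, out_data, aux):
        self.assign(out_data[0], req[0], in_data[0])

    def backward(self, req, out_grad, in_data, out_data, in_grad, aux):
        self.assign(in_grad[0], req[0], -out_grad[0])

@mx.operator.register('gradreverse')
class GradReverseProp(mx.operator.CustomOpProp):
    def __init__(self):
        super(GradReverseProp, self).__init__(need_top_grad=True)

    def infer_shape(self, in_shape):
        # one input, one output of the same shape, no aux state
        return in_shape, [in_shape[0]], []

    def create_operator(self, ctx, in_shapes, in_dtypes):
        return GradReverse()

x = mx.sym.Variable('x')
y = mx.sym.Custom(x, op_type='gradreverse')  # usage in a symbol graph
```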




[GitHub] TaoLv commented on issue #12922: Support Quantized Fully Connected by INT8 GEMM

2018-12-11 Thread GitBox
TaoLv commented on issue #12922: Support Quantized Fully Connected by INT8 GEMM
URL: https://github.com/apache/incubator-mxnet/pull/12922#issuecomment-446427791
 
 
   The test case needs to be refined so that it can run into MKL BLAS.




[incubator-mxnet] branch master updated: Fix warning in waitall doc (#13618)

2018-12-11 Thread nswamy
This is an automated email from the ASF dual-hosted git repository.

nswamy pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new 9ce7eab  Fix warning in waitall doc (#13618)
9ce7eab is described below

commit 9ce7eabcbc9575128240f71f79f9f7cce1a19aa7
Author: Anirudh Subramanian 
AuthorDate: Tue Dec 11 17:22:02 2018 -0800

Fix warning in waitall doc (#13618)
---
 python/mxnet/ndarray/ndarray.py | 10 ++
 1 file changed, 6 insertions(+), 4 deletions(-)

diff --git a/python/mxnet/ndarray/ndarray.py b/python/mxnet/ndarray/ndarray.py
index 4e6d0cd..9a62620 100644
--- a/python/mxnet/ndarray/ndarray.py
+++ b/python/mxnet/ndarray/ndarray.py
@@ -157,11 +157,13 @@ def waitall():
 """Wait for all async operations to finish in MXNet.
 
 This function is used for benchmarking only.
+
 .. warning::
-If your code has exceptions, `waitall` can cause silent failures.
-For this reason you should avoid `waitall` in your code.
-Use it only if you are confident that your code is error free.
-Then make sure you call `wait_to_read` on all outputs after `waitall`.
+
+   If your code has exceptions, `waitall` can cause silent failures.
+   For this reason you should avoid `waitall` in your code.
+   Use it only if you are confident that your code is error free.
+   Then make sure you call `wait_to_read` on all outputs after `waitall`.
 """
 check_call(_LIB.MXNDArrayWaitAll())
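
For context, a short sketch of the pattern this warning recommends (illustrative 
arrays; `waitall` for benchmarking only, followed by `wait_to_read` on each output):

```python
import mxnet as mx

a = mx.nd.ones((1000, 1000))
b = mx.nd.dot(a, a)   # enqueued asynchronously, returns immediately

mx.nd.waitall()       # benchmarking only: blocks until every pending op finishes
b.wait_to_read()      # per-output wait; surfaces errors tied to this array
```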
 



[GitHub] nswamy closed pull request #13618: Fix warning in waitall doc

2018-12-11 Thread GitBox
nswamy closed pull request #13618: Fix warning in waitall doc
URL: https://github.com/apache/incubator-mxnet/pull/13618
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/python/mxnet/ndarray/ndarray.py b/python/mxnet/ndarray/ndarray.py
index 4e6d0cdc929..9a62620da85 100644
--- a/python/mxnet/ndarray/ndarray.py
+++ b/python/mxnet/ndarray/ndarray.py
@@ -157,11 +157,13 @@ def waitall():
 """Wait for all async operations to finish in MXNet.
 
 This function is used for benchmarking only.
+
 .. warning::
-If your code has exceptions, `waitall` can cause silent failures.
-For this reason you should avoid `waitall` in your code.
-Use it only if you are confident that your code is error free.
-Then make sure you call `wait_to_read` on all outputs after `waitall`.
+
+   If your code has exceptions, `waitall` can cause silent failures.
+   For this reason you should avoid `waitall` in your code.
+   Use it only if you are confident that your code is error free.
+   Then make sure you call `wait_to_read` on all outputs after `waitall`.
 """
 check_call(_LIB.MXNDArrayWaitAll())
 


 




[GitHub] xinyu-intel commented on issue #12922: Support Quantized Fully Connected by INT8 GEMM

2018-12-11 Thread GitBox
xinyu-intel commented on issue #12922: Support Quantized Fully Connected by 
INT8 GEMM
URL: https://github.com/apache/incubator-mxnet/pull/12922#issuecomment-446425642
 
 
   @lihaofd please rebase the code and trigger the MKL CI.




[GitHub] ciyongch commented on issue #13596: Fix quantize pass error when excluding a quantization supported op

2018-12-11 Thread GitBox
ciyongch commented on issue #13596: Fix quantize pass error when excluding a 
quantization supported op
URL: https://github.com/apache/incubator-mxnet/pull/13596#issuecomment-446422951
 
 
   @roywei No open issue related to it so far. We found this error when trying 
to quantize Resnet50_v1 locally (manually excluding from the model script some ops 
that support quantization). In this case, `NeedQuantize(e.node, excluded_nodes)` is 
much more accurate than `quantized_op_map.count(e.node->op())`. 
   A test case is also added to cover this kind of error.
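   A sketch of the triggering scenario, assuming the `excluded_sym_names` argument 
of `mx.contrib.quantization.quantize_model` and illustrative checkpoint/layer names:

```python
import mxnet as mx
from mxnet.contrib import quantization

# Load a trained ResNet-50 v1 (prefix/epoch are illustrative).
sym, arg_params, aux_params = mx.model.load_checkpoint('resnet50_v1', 0)

# Exclude some quantization-capable ops by name; before this fix, excluding
# an op that *supports* quantization could break the quantize pass.
qsym, qarg_params, qaux_params = quantization.quantize_model(
    sym=sym, arg_params=arg_params, aux_params=aux_params,
    excluded_sym_names=['conv0', 'fc1'],
    calib_mode='none')
```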
   




[incubator-mxnet-site] branch asf-site updated: Bump the publish timestamp.

2018-12-11 Thread zhasheng
This is an automated email from the ASF dual-hosted git repository.

zhasheng pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new e97e209  Bump the publish timestamp.
e97e209 is described below

commit e97e209c1076c972c2185c81469b05f683d7cbb3
Author: mxnet-ci 
AuthorDate: Wed Dec 12 00:59:46 2018 +0000

Bump the publish timestamp.
---
 date.txt | 1 +
 1 file changed, 1 insertion(+)

diff --git a/date.txt b/date.txt
new file mode 100644
index 000..fbd07f0
--- /dev/null
+++ b/date.txt
@@ -0,0 +1 @@
+Wed Dec 12 00:59:46 UTC 2018



[GitHub] roywei commented on issue #13588: Accelerate DGL csr neighbor sampling

2018-12-11 Thread GitBox
roywei commented on issue #13588: Accelerate DGL csr neighbor sampling
URL: https://github.com/apache/incubator-mxnet/pull/13588#issuecomment-446418476
 
 
   @mxnet-label-bot add[Operator, pr-awaiting-review]




[GitHub] anirudh2290 commented on issue #13618: Fix warning in waitall doc

2018-12-11 Thread GitBox
anirudh2290 commented on issue #13618: Fix warning in waitall doc
URL: https://github.com/apache/incubator-mxnet/pull/13618#issuecomment-446418635
 
 
   Preview here: 
http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-13618/1/api/python/ndarray/ndarray.html?highlight=waitall#mxnet.ndarray.waitall




[GitHub] lanking520 opened a new pull request #13621: [MXNET-1251] Basic configuration to do static-linking

2018-12-11 Thread GitBox
lanking520 opened a new pull request #13621: [MXNET-1251] Basic configuration 
to do static-linking
URL: https://github.com/apache/incubator-mxnet/pull/13621
 
 
   ## Description ##
   A base build setup for Ubuntu 14.04 that installs all dependencies.
   @szha @zachgk 
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [ ] The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to 
the relevant [JIRA issue](https://issues.apache.org/jira/projects/MXNET/issues) 
created (except PRs with tiny changes)
   - [ ] Changes are complete (i.e. I finished coding on this PR)
   - [ ] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [ ] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, 
the source of the dataset, expected performance on the test set, and a reference 
to the original paper if applicable
   - Check the API doc at 
http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [ ] To the best of my knowledge, examples are either not affected by this 
change, or have been fixed to be compatible with this change




[GitHub] roywei commented on issue #13588: Accelerate DGL csr neighbor sampling

2018-12-11 Thread GitBox
roywei commented on issue #13588: Accelerate DGL csr neighbor sampling
URL: https://github.com/apache/incubator-mxnet/pull/13588#issuecomment-446418341
 
 
   @BullDemonKing Thanks for the contribution! Could you take a look at the failed 
tests?
   @zheng-da @eric-haibin-lin could you take a look?
   




[GitHub] roywei commented on issue #13590: fix Makefile for rpkg

2018-12-11 Thread GitBox
roywei commented on issue #13590: fix Makefile for rpkg
URL: https://github.com/apache/incubator-mxnet/pull/13590#issuecomment-446418055
 
 
   @mxnet-label-bot [R, pr-awaiting-review]




[GitHub] roywei commented on issue #13590: fix Makefile for rpkg

2018-12-11 Thread GitBox
roywei commented on issue #13590: fix Makefile for rpkg
URL: https://github.com/apache/incubator-mxnet/pull/13590#issuecomment-446417917
 
 
   @jeremiedb Thanks for the contribution, could you take a look at the failed unit 
test?
   ping @anirudhacharya @ankkhedia for review




[GitHub] roywei commented on issue #13591: Add a DGL operator to compute vertex Ids in layers

2018-12-11 Thread GitBox
roywei commented on issue #13591: Add a DGL operator to compute vertex Ids in 
layers
URL: https://github.com/apache/incubator-mxnet/pull/13591#issuecomment-446417396
 
 
   @mxnet-label-bot add[Operator, pr-awaiting-review]




[GitHub] roywei commented on issue #13591: Add a DGL operator to compute vertex Ids in layers

2018-12-11 Thread GitBox
roywei commented on issue #13591: Add a DGL operator to compute vertex Ids in 
layers
URL: https://github.com/apache/incubator-mxnet/pull/13591#issuecomment-446417351
 
 
   @BullDemonKing Thanks for the contribution. @zheng-da @apeforest could you 
take a look?




[GitHub] roywei commented on issue #13596: Fix quantize pass error when excluding a quantization supported op

2018-12-11 Thread GitBox
roywei commented on issue #13596: Fix quantize pass error when excluding a 
quantization supported op
URL: https://github.com/apache/incubator-mxnet/pull/13596#issuecomment-446417029
 
 
   @mxnet-label-bot add [Operator, pr-awaiting-review]
   @ciyongch Thanks for the contribution, is there an issue related to this fix?




[GitHub] roywei commented on issue #13597: [MXNET-1255] update hybridize documentation

2018-12-11 Thread GitBox
roywei commented on issue #13597: [MXNET-1255] update hybridize documentation
URL: https://github.com/apache/incubator-mxnet/pull/13597#issuecomment-446416327
 
 
   @mxnet-label-bot add[Doc, pr-awaiting-review]




[GitHub] zachgk commented on a change in pull request #13619: [MXNET-1231] Allow not using Some in the Scala operators

2018-12-11 Thread GitBox
zachgk commented on a change in pull request #13619: [MXNET-1231] Allow not 
using Some in the Scala operators
URL: https://github.com/apache/incubator-mxnet/pull/13619#discussion_r240843382
 
 

 ##
 File path: 
scala-package/core/src/test/scala/org/apache/mxnet/NDArraySuite.scala
 ##
 @@ -576,4 +576,12 @@ class NDArraySuite extends FunSuite with BeforeAndAfterAll with Matchers {
     assert(arr.internal.toDoubleArray === Array(2d, 2d))
     assert(arr.internal.toByteArray === Array(2.toByte, 2.toByte))
   }
+
+  test("Generated api") {
 
 Review comment:
   We should test both with SomeConversion and without it.




[GitHub] roywei commented on issue #13602: Fix for import mxnet taking long time if multiple process launched

2018-12-11 Thread GitBox
roywei commented on issue #13602: Fix for import mxnet taking long time if 
multiple process launched
URL: https://github.com/apache/incubator-mxnet/pull/13602#issuecomment-446416195
 
 
   @mxnet-label-bot add[Environment Variables, Operator]




[GitHub] lanking520 opened a new pull request #13620: [WIP] add examples and fix the dependency problem

2018-12-11 Thread GitBox
lanking520 opened a new pull request #13620: [WIP] add examples and fix the 
dependency problem
URL: https://github.com/apache/incubator-mxnet/pull/13620
 
 
   ## Description ##
   Add a use case to the Java demo explaining the usage of ParamObject.
   @andrewfayres @zachgk @piyushghai @nswamy 
   
   I am also getting tired of fixing issues in the script (TODO):
   - [ ] Add a CI test for the Java demo script
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [ ] The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to 
the relevant [JIRA issue](https://issues.apache.org/jira/projects/MXNET/issues) 
created (except PRs with tiny changes)
   - [ ] Changes are complete (i.e. I finished coding on this PR)
   - [ ] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [ ] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, 
the source of the dataset, expected performance on the test set, and a reference 
to the original paper if applicable
   - Check the API doc at 
http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [ ] To the best of my knowledge, examples are either not affected by this 
change, or have been fixed to be compatible with this change
   




[GitHub] roywei commented on issue #13604: [WIP] onnx broadcast ops fixes

2018-12-11 Thread GitBox
roywei commented on issue #13604: [WIP] onnx broadcast ops fixes
URL: https://github.com/apache/incubator-mxnet/pull/13604#issuecomment-446415455
 
 
   @Roshrini Thanks for the contribution; it seems one of the ONNX unit tests 
failed.
   @vandanavk for review




[GitHub] roywei commented on issue #13604: [WIP] onnx broadcast ops fixes

2018-12-11 Thread GitBox
roywei commented on issue #13604: [WIP] onnx broadcast ops fixes
URL: https://github.com/apache/incubator-mxnet/pull/13604#issuecomment-446415242
 
 
   @mxnet-label-bot add[ONNX, pr-awaiting-review]




[GitHub] roywei commented on issue #13606: Complimentary gluon DataLoader improvements

2018-12-11 Thread GitBox
roywei commented on issue #13606: Complimentary gluon DataLoader improvements
URL: https://github.com/apache/incubator-mxnet/pull/13606#issuecomment-446415025
 
 
   @mxnet-label-bot add[Data-loading, pr-awaiting-review]




[GitHub] roywei commented on issue #13609: [MXNET-1258]fix unittest for ROIAlign Operator

2018-12-11 Thread GitBox
roywei commented on issue #13609: [MXNET-1258]fix unittest for ROIAlign Operator
URL: https://github.com/apache/incubator-mxnet/pull/13609#issuecomment-446413342
 
 
   @mxnet-label-bot add[CI, Flaky, pr-awaiting-review]




[GitHub] roywei commented on issue #13611: add image resize operator and unit test

2018-12-11 Thread GitBox
roywei commented on issue #13611: add image resize operator and unit test
URL: https://github.com/apache/incubator-mxnet/pull/13611#issuecomment-446412852
 
 
   @mxnet-label-bot add[Operator, pr-awaiting-review]




[GitHub] roywei commented on issue #13612: add pos_weight for SigmoidBinaryCrossEntropyLoss

2018-12-11 Thread GitBox
roywei commented on issue #13612: add pos_weight for 
SigmoidBinaryCrossEntropyLoss
URL: https://github.com/apache/incubator-mxnet/pull/13612#issuecomment-446412567
 
 
   @mxnet-label-bot add[Gluon, pr-awaiting-review]




[GitHub] roywei commented on issue #13612: add pos_weight for SigmoidBinaryCrossEntropyLoss

2018-12-11 Thread GitBox
roywei commented on issue #13612: add pos_weight for 
SigmoidBinaryCrossEntropyLoss
URL: https://github.com/apache/incubator-mxnet/pull/13612#issuecomment-446412389
 
 
   @eureka7mt Thanks for the contribution, could you add a unit test for this 
case?
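   For reference, a sketch of the quantity `pos_weight` controls, computed by hand 
with plain ndarray ops (the PR's actual call signature may differ):

```python
import mxnet as mx

pred = mx.nd.array([0.9, 0.2, 0.7])    # probabilities after sigmoid
label = mx.nd.array([1.0, 0.0, 1.0])
pos_weight = 3.0                       # up-weight positive examples 3x

# weighted binary cross entropy: the positive term is scaled by pos_weight
loss = -(pos_weight * label * pred.log() + (1 - label) * (1 - pred).log())
print(loss.asnumpy())
```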
   




[GitHub] roywei commented on issue #13614: Make to_tensor and normalize to accept 3D or 4D tensor inputs

2018-12-11 Thread GitBox
roywei commented on issue #13614: Make to_tensor and normalize to accept 3D or 
4D tensor inputs
URL: https://github.com/apache/incubator-mxnet/pull/13614#issuecomment-446411763
 
 
   @mxnet-label-bot add[Gluon, Data-loading, Operator]




[GitHub] azai91 commented on issue #13084: Test/mkldnn batch norm op

2018-12-11 Thread GitBox
azai91 commented on issue #13084: Test/mkldnn batch norm op
URL: https://github.com/apache/incubator-mxnet/pull/13084#issuecomment-446411222
 
 
   @Vikas89 added




[GitHub] roywei commented on issue #13618: Fix warning in waitall doc

2018-12-11 Thread GitBox
roywei commented on issue #13618: Fix warning in waitall doc
URL: https://github.com/apache/incubator-mxnet/pull/13618#issuecomment-446409938
 
 
   @mxnet-label-bot add[Website, pr-awaiting-review]




[GitHub] roywei commented on issue #13619: [MXNET-1231] Allow not using Some in the Scala operators

2018-12-11 Thread GitBox
roywei commented on issue #13619: [MXNET-1231] Allow not using Some in the 
Scala operators
URL: https://github.com/apache/incubator-mxnet/pull/13619#issuecomment-446409294
 
 
   @lanking520 Thanks for the contribution! Any documentation on how to use 
SomeConversion?
   @mxnet-label-bot add[Scala, pr-awaiting-review]


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] vandanavk commented on issue #13356: ONNX export: Add Flatten before Gemm

2018-12-11 Thread GitBox
vandanavk commented on issue #13356: ONNX export: Add Flatten before Gemm
URL: https://github.com/apache/incubator-mxnet/pull/13356#issuecomment-446403176
 
 
   @zhreshold for review


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] anirudh2290 commented on a change in pull request #13602: Fix for import mxnet taking long time if multiple process launched

2018-12-11 Thread GitBox
anirudh2290 commented on a change in pull request #13602: Fix for import mxnet 
taking long time if multiple process launched
URL: https://github.com/apache/incubator-mxnet/pull/13602#discussion_r240829402
 
 

 ##
 File path: src/operator/operator_tune-inl.h
 ##
 @@ -56,7 +56,7 @@ namespace op {
 #endif
 #endif  // MXNET_NO_INLINE
 
-#define OUTSIDE_COUNT_SHIFT 9
 
 Review comment:
   Does changing this impact the IsOMPFaster selection in operator_tune.h? Do 
we need to tweak WORKLOAD_COUNT_SHIFT too?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] lanking520 closed pull request #13364: [MXNET-1225] Always use config.mk in make install instructions

2018-12-11 Thread GitBox
lanking520 closed pull request #13364: [MXNET-1225] Always use config.mk in 
make install instructions
URL: https://github.com/apache/incubator-mxnet/pull/13364
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/docs/install/build_from_source.md 
b/docs/install/build_from_source.md
index e41b1d0f180..e807fb44b59 100644
--- a/docs/install/build_from_source.md
+++ b/docs/install/build_from_source.md
@@ -2,6 +2,7 @@
 
 This document explains how to build MXNet from source code.
 
+**For Java/Scala/Clojure, please follow [this guide 
instead](./scala_setup.md)**
 
 ## Overview
 
@@ -27,7 +28,6 @@ MXNet's newest and most popular API is Gluon. Gluon is built 
into the Python bin
 - [Python (includes Gluon)](../api/python/index.html)
 - [C++](../api/c++/index.html)
 - [Clojure](../api/clojure/index.html)
-- Java (coming soon)
 - [Julia](../api/julia/index.html)
 - [Perl](../api/perl/index.html)
 - [R](../api/r/index.html)
@@ -35,6 +35,7 @@ MXNet's newest and most popular API is Gluon. Gluon is built 
into the Python bin
 - [Java](../api/java/index.html)
 
 
+
 ## Build Instructions by Operating System
 
 Detailed instructions are provided per operating system. Each of these guides 
also covers how to install the specific [Language 
Bindings](#installing-mxnet-language-bindings) you require.
@@ -160,7 +161,7 @@ More information on turning these features on or off are 
found in the following
 ## Build Configurations
 
 There is a configuration file for make,
-[`make/config.mk`](https://github.com/apache/incubator-mxnet/blob/master/make/config.mk),
 that contains all the compilation options. You can edit it and then run `make` 
or `cmake`. `cmake` is recommended for building MXNet (and is required to build 
with MKLDNN), however you may use `make` instead.
+[`make/config.mk`](https://github.com/apache/incubator-mxnet/blob/master/make/config.mk),
 that contains all the compilation options. You can edit it and then run `make` 
or `cmake`. `cmake` is recommended for building MXNet (and is required to build 
with MKLDNN), however you may use `make` instead. For building with 
Java/Scala/Clojure, only `make` is supported.
 
 
 
@@ -203,18 +204,18 @@ It is recommended to set environment variable 
NCCL_LAUNCH_MODE to PARALLEL when
 
 ### Build MXNet with C++
 
-* To enable C++ package, just add `USE_CPP_PACKAGE=1` when you run `make` or 
`cmake`.
+* To enable C++ package, just add `USE_CPP_PACKAGE=1` when you run `make` or 
`cmake` (see examples).
 
 
 
 ### Usage Examples
 
-* `-j` runs multiple jobs against multi-core CPUs.
-
 For example, you can specify using all cores on Linux as follows:
 
 ```bash
-cmake -j$(nproc)
+mkdir build && cd build
+cmake -GNinja .
+ninja -v
 ```
 
 
@@ -222,28 +223,36 @@ cmake -j$(nproc)
 * Build MXNet with `cmake` and install with MKL DNN, GPU, and OpenCV support:
 
 ```bash
-cmake -j USE_CUDA=1 USE_CUDA_PATH=/usr/local/cuda USE_CUDNN=1 USE_MKLDNN=1
+mkdir build && cd build
+cmake -DUSE_CUDA=1 -DUSE_CUDA_PATH=/usr/local/cuda -DUSE_CUDNN=1 
-DUSE_MKLDNN=1 -GNinja .
+ninja -v
 ```
 
  Recommended for Systems with NVIDIA GPUs
 * Build with both OpenBLAS, GPU, and OpenCV support:
 
 ```bash
-cmake -j BLAS=open USE_CUDA=1 USE_CUDA_PATH=/usr/local/cuda USE_CUDNN=1
+mkdir build && cd build
+cmake -DBLAS=open -DUSE_CUDA=1 -DUSE_CUDA_PATH=/usr/local/cuda -DUSE_CUDNN=1 
-GNinja .
+ninja -v
 ```
 
  Recommended for Systems with Intel CPUs
 * Build MXNet with `cmake` and install with MKL DNN, and OpenCV support:
 
 ```bash
-cmake -j USE_CUDA=0 USE_MKLDNN=1
+mkdir build && cd build
+cmake -DUSE_CUDA=0 -DUSE_MKLDNN=1 -GNinja .
+ninja -v
 ```
 
  Recommended for Systems with non-Intel CPUs
 * Build MXNet with `cmake` and install with OpenBLAS and OpenCV support:
 
 ```bash
-cmake -j USE_CUDA=0 BLAS=open
+mkdir build && cd build
+cmake -DUSE_CUDA=0 -DBLAS=open -GNinja .
+ninja -v
 ```
 
  Other Examples
@@ -251,20 +260,26 @@ cmake -j USE_CUDA=0 BLAS=open
 * Build without using OpenCV:
 
 ```bash
-cmake USE_OPENCV=0
+mkdir build && cd build
+cmake -DUSE_OPENCV=0 -GNinja .
+ninja -v
 ```
 
 * Build on **macOS** with the default BLAS library (Apple Accelerate) and 
Clang installed with `xcode` (OPENMP is disabled because it is not supported by 
the Apple version of Clang):
 
 ```bash
-cmake -j BLAS=apple USE_OPENCV=0 USE_OPENMP=0
+mkdir build && cd build
+cmake -DBLAS=apple -DUSE_OPENCV=0 -DUSE_OPENMP=0 -GNinja .
+ninja -v
 ```
 
 * To use OpenMP on **macOS** you need to install the Clang compiler, `llvm` 
(the one provided by Apple does not support OpenMP):
 
 ```bash
 brew install llvm
-cmake -j BLAS=apple USE_OPENMP=1
+mkdir build && cd build
+cmake -DBLAS=apple -DUSE_OPENMP=1 -GNinja .
+ninja -v
 ```
 
 
diff --git 

[incubator-mxnet] branch master updated: [MXNET-1225] Always use config.mk in make install instructions (#13364)

2018-12-11 Thread lanking
This is an automated email from the ASF dual-hosted git repository.

lanking pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new 97e0c97  [MXNET-1225] Always use config.mk in make install 
instructions (#13364)
97e0c97 is described below

commit 97e0c972178177011ee928407719b2e002fa116f
Author: Zach Kimberg 
AuthorDate: Tue Dec 11 15:23:13 2018 -0800

[MXNET-1225] Always use config.mk in make install instructions (#13364)

* Always use config.mk in make install instructions

* Specify Cuda 0 for ubuntu with mkldnn

* Scala install doc avoid build_from_source

Minor doc fixes

* Fix build_from_source CMake usage

* CPP Install Instruction with CMake

* Use cmake out of source build
---
 docs/install/build_from_source.md | 41 ++-
 docs/install/c_plus_plus.md   |  3 ++-
 docs/install/java_setup.md|  4 +++-
 docs/install/osx_setup.md |  9 -
 docs/install/scala_setup.md   |  4 +++-
 docs/install/ubuntu_setup.md  | 21 
 6 files changed, 61 insertions(+), 21 deletions(-)

diff --git a/docs/install/build_from_source.md 
b/docs/install/build_from_source.md
index e41b1d0..e807fb4 100644
--- a/docs/install/build_from_source.md
+++ b/docs/install/build_from_source.md
@@ -2,6 +2,7 @@
 
 This document explains how to build MXNet from source code.
 
+**For Java/Scala/Clojure, please follow [this guide 
instead](./scala_setup.md)**
 
 ## Overview
 
@@ -27,7 +28,6 @@ MXNet's newest and most popular API is Gluon. Gluon is built 
into the Python bin
 - [Python (includes Gluon)](../api/python/index.html)
 - [C++](../api/c++/index.html)
 - [Clojure](../api/clojure/index.html)
-- Java (coming soon)
 - [Julia](../api/julia/index.html)
 - [Perl](../api/perl/index.html)
 - [R](../api/r/index.html)
@@ -35,6 +35,7 @@ MXNet's newest and most popular API is Gluon. Gluon is built 
into the Python bin
 - [Java](../api/java/index.html)
 
 
+
 ## Build Instructions by Operating System
 
 Detailed instructions are provided per operating system. Each of these guides 
also covers how to install the specific [Language 
Bindings](#installing-mxnet-language-bindings) you require.
@@ -160,7 +161,7 @@ More information on turning these features on or off are 
found in the following
 ## Build Configurations
 
 There is a configuration file for make,
-[`make/config.mk`](https://github.com/apache/incubator-mxnet/blob/master/make/config.mk),
 that contains all the compilation options. You can edit it and then run `make` 
or `cmake`. `cmake` is recommended for building MXNet (and is required to build 
with MKLDNN), however you may use `make` instead.
+[`make/config.mk`](https://github.com/apache/incubator-mxnet/blob/master/make/config.mk),
 that contains all the compilation options. You can edit it and then run `make` 
or `cmake`. `cmake` is recommended for building MXNet (and is required to build 
with MKLDNN), however you may use `make` instead. For building with 
Java/Scala/Clojure, only `make` is supported.
 
 
 
@@ -203,18 +204,18 @@ It is recommended to set environment variable 
NCCL_LAUNCH_MODE to PARALLEL when
 
 ### Build MXNet with C++
 
-* To enable C++ package, just add `USE_CPP_PACKAGE=1` when you run `make` or 
`cmake`.
+* To enable C++ package, just add `USE_CPP_PACKAGE=1` when you run `make` or 
`cmake` (see examples).
 
 
 
 ### Usage Examples
 
-* `-j` runs multiple jobs against multi-core CPUs.
-
 For example, you can specify using all cores on Linux as follows:
 
 ```bash
-cmake -j$(nproc)
+mkdir build && cd build
+cmake -GNinja .
+ninja -v
 ```
 
 
@@ -222,28 +223,36 @@ cmake -j$(nproc)
 * Build MXNet with `cmake` and install with MKL DNN, GPU, and OpenCV support:
 
 ```bash
-cmake -j USE_CUDA=1 USE_CUDA_PATH=/usr/local/cuda USE_CUDNN=1 USE_MKLDNN=1
+mkdir build && cd build
+cmake -DUSE_CUDA=1 -DUSE_CUDA_PATH=/usr/local/cuda -DUSE_CUDNN=1 
-DUSE_MKLDNN=1 -GNinja .
+ninja -v
 ```
 
  Recommended for Systems with NVIDIA GPUs
 * Build with both OpenBLAS, GPU, and OpenCV support:
 
 ```bash
-cmake -j BLAS=open USE_CUDA=1 USE_CUDA_PATH=/usr/local/cuda USE_CUDNN=1
+mkdir build && cd build
+cmake -DBLAS=open -DUSE_CUDA=1 -DUSE_CUDA_PATH=/usr/local/cuda -DUSE_CUDNN=1 
-GNinja .
+ninja -v
 ```
 
  Recommended for Systems with Intel CPUs
 * Build MXNet with `cmake` and install with MKL DNN, and OpenCV support:
 
 ```bash
-cmake -j USE_CUDA=0 USE_MKLDNN=1
+mkdir build && cd build
+cmake -DUSE_CUDA=0 -DUSE_MKLDNN=1 -GNinja .
+ninja -v
 ```
 
  Recommended for Systems with non-Intel CPUs
 * Build MXNet with `cmake` and install with OpenBLAS and OpenCV support:
 
 ```bash
-cmake -j USE_CUDA=0 BLAS=open
+mkdir build && cd build
+cmake -DUSE_CUDA=0 -DBLAS=open -GNinja .
+ninja -v
 ```
 
  Other Examples
@@ -251,20 +260,26 

[GitHub] lanking520 closed pull request #13493: [MXNET-1224]: improve scala maven jni build.

2018-12-11 Thread GitBox
lanking520 closed pull request #13493: [MXNET-1224]: improve scala maven jni 
build.
URL: https://github.com/apache/incubator-mxnet/pull/13493
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/scala-package/assembly/linux-x86_64-cpu/pom.xml 
b/scala-package/assembly/linux-x86_64-cpu/pom.xml
index abefead175c..1658f36e6bb 100644
--- a/scala-package/assembly/linux-x86_64-cpu/pom.xml
+++ b/scala-package/assembly/linux-x86_64-cpu/pom.xml
@@ -14,6 +14,10 @@
   MXNet Scala Package - Full Linux-x86_64 CPU-only
   jar
 
+  
+${project.parent.parent.basedir}/..
+  
+
   
 
   org.apache.mxnet
diff --git 
a/scala-package/assembly/linux-x86_64-cpu/src/main/assembly/assembly.xml 
b/scala-package/assembly/linux-x86_64-cpu/src/main/assembly/assembly.xml
index a574f8af25d..f4c2017c824 100644
--- a/scala-package/assembly/linux-x86_64-cpu/src/main/assembly/assembly.xml
+++ b/scala-package/assembly/linux-x86_64-cpu/src/main/assembly/assembly.xml
@@ -25,4 +25,10 @@
   
 
   
+  
+
+  ${MXNET_DIR}/lib/libmxnet.so
+  lib/native
+
+  
 
diff --git a/scala-package/assembly/linux-x86_64-gpu/pom.xml 
b/scala-package/assembly/linux-x86_64-gpu/pom.xml
index 96ffa38c6af..c80515e7b10 100644
--- a/scala-package/assembly/linux-x86_64-gpu/pom.xml
+++ b/scala-package/assembly/linux-x86_64-gpu/pom.xml
@@ -14,6 +14,10 @@
   MXNet Scala Package - Full Linux-x86_64 GPU
   jar
 
+  
+${project.parent.parent.basedir}/..
+  
+
   
 
   org.apache.mxnet
diff --git 
a/scala-package/assembly/linux-x86_64-gpu/src/main/assembly/assembly.xml 
b/scala-package/assembly/linux-x86_64-gpu/src/main/assembly/assembly.xml
index 3a064bf9f2c..2aca64bdf1a 100644
--- a/scala-package/assembly/linux-x86_64-gpu/src/main/assembly/assembly.xml
+++ b/scala-package/assembly/linux-x86_64-gpu/src/main/assembly/assembly.xml
@@ -25,4 +25,10 @@
   
 
   
+  
+
+  ${MXNET_DIR}/lib/libmxnet.so
+  lib/native
+
+  
 
diff --git a/scala-package/assembly/osx-x86_64-cpu/main/assembly/assembly.xml 
b/scala-package/assembly/osx-x86_64-cpu/main/assembly/assembly.xml
deleted file mode 100644
index fecafecad31..000
--- a/scala-package/assembly/osx-x86_64-cpu/main/assembly/assembly.xml
+++ /dev/null
@@ -1,30 +0,0 @@
-
-  full
-  
-jar
-  
-  false
-  
-
-  
-*:*:jar
-  
-  /
-  true
-  true
-  runtime
-
-
-  lib/native
-  
${artifact.artifactId}${dashClassifier?}.${artifact.extension}
-  false
-  false
-  false
-  
-*:*:dll:*
-*:*:so:*
-*:*:jnilib:*
-  
-
-  
-
diff --git a/scala-package/assembly/osx-x86_64-cpu/pom.xml 
b/scala-package/assembly/osx-x86_64-cpu/pom.xml
index 5c5733a9a4c..62979a140fd 100644
--- a/scala-package/assembly/osx-x86_64-cpu/pom.xml
+++ b/scala-package/assembly/osx-x86_64-cpu/pom.xml
@@ -14,6 +14,10 @@
   MXNet Scala Package - Full OSX-x86_64 CPU-only
   jar
 
+  
+${project.parent.parent.basedir}/..
+  
+
   
 
   org.apache.mxnet
diff --git 
a/scala-package/assembly/osx-x86_64-cpu/src/main/assembly/assembly.xml 
b/scala-package/assembly/osx-x86_64-cpu/src/main/assembly/assembly.xml
index bdbd09f170c..e9bc3728fcd 100644
--- a/scala-package/assembly/osx-x86_64-cpu/src/main/assembly/assembly.xml
+++ b/scala-package/assembly/osx-x86_64-cpu/src/main/assembly/assembly.xml
@@ -25,4 +25,10 @@
   
 
   
+  
+
+  ${MXNET_DIR}/lib/libmxnet.so
+  lib/native
+
+  
 
diff --git a/scala-package/core/pom.xml b/scala-package/core/pom.xml
index 484fbbd9679..976383f2e7d 100644
--- a/scala-package/core/pom.xml
+++ b/scala-package/core/pom.xml
@@ -12,6 +12,7 @@
 
   
 true
+${project.parent.basedir}/..
   
 
   mxnet-core_2.11
@@ -77,6 +78,9 @@
 
-Djava.library.path=${project.parent.basedir}/native/${platform}/target \
 
-Dlog4j.configuration=file://${project.basedir}/src/test/resources/log4j.properties
   
+  
+${MXNET_DIR}/lib
+  
 
   
   
@@ -88,6 +92,10 @@
 
-Djava.library.path=${project.parent.basedir}/native/${platform}/target
   
   ${skipTests}
+  always
+  
+${MXNET_DIR}/lib
+  
 
   
   
diff --git 
a/scala-package/core/src/main/scala/org/apache/mxnet/util/NativeLibraryLoader.scala
 
b/scala-package/core/src/main/scala/org/apache/mxnet/util/NativeLibraryLoader.scala
index e94d320391f..2ce893b478e 100644
--- 
a/scala-package/core/src/main/scala/org/apache/mxnet/util/NativeLibraryLoader.scala
+++ 
b/scala-package/core/src/main/scala/org/apache/mxnet/util/NativeLibraryLoader.scala
@@ -85,12 +85,10 @@ private[mxnet] object NativeLibraryLoader {
   }
 

[GitHub] anirudh2290 commented on a change in pull request #13602: Fix for import mxnet taking long time if multiple process launched

2018-12-11 Thread GitBox
anirudh2290 commented on a change in pull request #13602: Fix for import mxnet 
taking long time if multiple process launched
URL: https://github.com/apache/incubator-mxnet/pull/13602#discussion_r240818689
 
 

 ##
 File path: docs/faq/env_var.md
 ##
 @@ -226,12 +226,11 @@ Settings for More GPU Parallelism
 Settings for controlling OMP tuning
 -
 - Set ```MXNET_USE_OPERATOR_TUNING=0``` to disable Operator tuning code which 
decides whether to use OMP or not for operator
-  - 
-   * Values: String representation of MXNET_ENABLE_OPERATOR_TUNING environment 
variable
-   *0=disable all
-   *1=enable all
-   *float32, float16, float32=list of types to enable, and disable 
those not listed
-   * refer : 
https://github.com/apache/incubator-mxnet/blob/master/src/operator/operator_tune-inl.h#L444
+   - Values: String representation of MXNET_ENABLE_OPERATOR_TUNING environment 
variable
+   -0=disable all
+   -1=enable all
+   -float32, float16, float32=list of types to enable, and disable 
those not listed
 
 Review comment:
   Can we list the valid types here: "float32", "float16", "float64", "int8", 
"uint8", "int32", "int64"?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[incubator-mxnet] branch master updated: [MXNET-1224]: improve scala maven jni build and packing. (#13493)

2018-12-11 Thread lanking
This is an automated email from the ASF dual-hosted git repository.

lanking pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new b242b0c  [MXNET-1224]: improve scala maven jni build and packing. 
(#13493)
b242b0c is described below

commit b242b0c1fb71da43fba8d6208ee8ca282e735474
Author: Frank Liu 
AuthorDate: Tue Dec 11 15:21:05 2018 -0800

[MXNET-1224]: improve scala maven jni build and packing. (#13493)

Major JNI feature changes. Please find more info here: 
https://cwiki.apache.org/confluence/display/MXNET/Scala+maven+build+improvement
---
 scala-package/assembly/linux-x86_64-cpu/pom.xml|  4 ++
 .../src/main/assembly/assembly.xml |  6 +++
 scala-package/assembly/linux-x86_64-gpu/pom.xml|  4 ++
 .../src/main/assembly/assembly.xml |  6 +++
 .../osx-x86_64-cpu/main/assembly/assembly.xml  | 30 ---
 scala-package/assembly/osx-x86_64-cpu/pom.xml  |  4 ++
 .../osx-x86_64-cpu/src/main/assembly/assembly.xml  |  6 +++
 scala-package/core/pom.xml |  8 +++
 .../apache/mxnet/util/NativeLibraryLoader.scala| 55 ---
 scala-package/examples/pom.xml |  4 ++
 scala-package/infer/pom.xml|  4 ++
 scala-package/init-native/linux-x86_64/pom.xml | 42 +++
 scala-package/init-native/osx-x86_64/pom.xml   | 49 ++---
 scala-package/native/README.md | 63 ++
 scala-package/native/linux-x86_64-cpu/pom.xml  | 25 -
 scala-package/native/linux-x86_64-gpu/pom.xml  | 25 -
 scala-package/native/osx-x86_64-cpu/pom.xml| 50 ++---
 scala-package/pom.xml  |  2 +
 18 files changed, 291 insertions(+), 96 deletions(-)

diff --git a/scala-package/assembly/linux-x86_64-cpu/pom.xml 
b/scala-package/assembly/linux-x86_64-cpu/pom.xml
index abefead..1658f36 100644
--- a/scala-package/assembly/linux-x86_64-cpu/pom.xml
+++ b/scala-package/assembly/linux-x86_64-cpu/pom.xml
@@ -14,6 +14,10 @@
   MXNet Scala Package - Full Linux-x86_64 CPU-only
   jar
 
+  
+${project.parent.parent.basedir}/..
+  
+
   
 
   org.apache.mxnet
diff --git 
a/scala-package/assembly/linux-x86_64-cpu/src/main/assembly/assembly.xml 
b/scala-package/assembly/linux-x86_64-cpu/src/main/assembly/assembly.xml
index a574f8a..f4c2017 100644
--- a/scala-package/assembly/linux-x86_64-cpu/src/main/assembly/assembly.xml
+++ b/scala-package/assembly/linux-x86_64-cpu/src/main/assembly/assembly.xml
@@ -25,4 +25,10 @@
   
 
   
+  
+
+  ${MXNET_DIR}/lib/libmxnet.so
+  lib/native
+
+  
 
diff --git a/scala-package/assembly/linux-x86_64-gpu/pom.xml 
b/scala-package/assembly/linux-x86_64-gpu/pom.xml
index 96ffa38..c80515e 100644
--- a/scala-package/assembly/linux-x86_64-gpu/pom.xml
+++ b/scala-package/assembly/linux-x86_64-gpu/pom.xml
@@ -14,6 +14,10 @@
   MXNet Scala Package - Full Linux-x86_64 GPU
   jar
 
+  
+${project.parent.parent.basedir}/..
+  
+
   
 
   org.apache.mxnet
diff --git 
a/scala-package/assembly/linux-x86_64-gpu/src/main/assembly/assembly.xml 
b/scala-package/assembly/linux-x86_64-gpu/src/main/assembly/assembly.xml
index 3a064bf..2aca64b 100644
--- a/scala-package/assembly/linux-x86_64-gpu/src/main/assembly/assembly.xml
+++ b/scala-package/assembly/linux-x86_64-gpu/src/main/assembly/assembly.xml
@@ -25,4 +25,10 @@
   
 
   
+  
+
+  ${MXNET_DIR}/lib/libmxnet.so
+  lib/native
+
+  
 
diff --git a/scala-package/assembly/osx-x86_64-cpu/main/assembly/assembly.xml 
b/scala-package/assembly/osx-x86_64-cpu/main/assembly/assembly.xml
deleted file mode 100644
index fecafec..000
--- a/scala-package/assembly/osx-x86_64-cpu/main/assembly/assembly.xml
+++ /dev/null
@@ -1,30 +0,0 @@
-
-  full
-  
-jar
-  
-  false
-  
-
-  
-*:*:jar
-  
-  /
-  true
-  true
-  runtime
-
-
-  lib/native
-  
${artifact.artifactId}${dashClassifier?}.${artifact.extension}
-  false
-  false
-  false
-  
-*:*:dll:*
-*:*:so:*
-*:*:jnilib:*
-  
-
-  
-
diff --git a/scala-package/assembly/osx-x86_64-cpu/pom.xml 
b/scala-package/assembly/osx-x86_64-cpu/pom.xml
index 5c5733a..62979a1 100644
--- a/scala-package/assembly/osx-x86_64-cpu/pom.xml
+++ b/scala-package/assembly/osx-x86_64-cpu/pom.xml
@@ -14,6 +14,10 @@
   MXNet Scala Package - Full OSX-x86_64 CPU-only
   jar
 
+  
+${project.parent.parent.basedir}/..
+  
+
   
 
   org.apache.mxnet
diff --git 
a/scala-package/assembly/osx-x86_64-cpu/src/main/assembly/assembly.xml 
b/scala-package/assembly/osx-x86_64-cpu/src/main/assembly/assembly.xml
index bdbd09f..e9bc372 100644
--- a/scala-package/assembly/osx-x86_64-cpu/src/main/assembly/assembly.xml
+++ 


[incubator-mxnet] branch master updated: [MXNET-1155] Add scala packageTest utility (#13046)

2018-12-11 Thread lanking
This is an automated email from the ASF dual-hosted git repository.

lanking pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new a4c97ec  [MXNET-1155] Add scala packageTest utility (#13046)
a4c97ec is described below

commit a4c97eca9f4dc88d9a29d33728c45ea8158a0f9e
Author: Zach Kimberg 
AuthorDate: Tue Dec 11 15:19:06 2018 -0800

[MXNET-1155] Add scala packageTest utility (#13046)

* [MXNET-1155] Add scala packageTest utility

* Clean up utility

* Safe change directory in Makefile for scala

* mvn install file instructions with details
---
 Makefile   |  35 --
 scala-package/.gitignore   |   1 +
 scala-package/examples/pom.xml |  14 +++
 scala-package/packageTest/Makefile |  87 +
 scala-package/packageTest/README.md|  72 +++
 scala-package/packageTest/core/pom.xml |  39 ++
 scala-package/packageTest/core/scripts |   1 +
 scala-package/packageTest/examples/pom.xml |  48 +++
 scala-package/packageTest/examples/scripts |   1 +
 scala-package/packageTest/infer/pom.xml|  38 ++
 scala-package/packageTest/pom.xml  | 196 +
 11 files changed, 523 insertions(+), 9 deletions(-)

diff --git a/Makefile b/Makefile
index 16ea59f..822704e 100644
--- a/Makefile
+++ b/Makefile
@@ -600,11 +600,19 @@ rpkgtest:
Rscript -e 
'res<-covr:::package_coverage("R-package");fileConn<-file(paste("r-package_coverage_",toString(runif(1)),".json"));writeLines(covr:::to_codecov(res),
 fileConn);close(fileConn)'
 
 scalaclean:
-   (cd $(ROOTDIR)/scala-package; \
+   (cd $(ROOTDIR)/scala-package && \
mvn clean -P$(SCALA_PKG_PROFILE),$(SCALA_VERSION_PROFILE))
 
+scalatestcompile:
+   (cd $(ROOTDIR)/scala-package && \
+   mvn test-compile 
-P$(SCALA_PKG_PROFILE),$(SCALA_VERSION_PROFILE) -Dcxx="$(CXX)" \
+   -Dbuild.platform="$(SCALA_PKG_PROFILE)" \
+   -Dcflags="$(CFLAGS)" -Dldflags="$(LDFLAGS)" \
+   -Dcurrent_libdir="$(ROOTDIR)/lib" \
+   -Dlddeps="$(LIB_DEP) $(ROOTDIR)/lib/libmxnet.a")
+
 scalapkg:
-   (cd $(ROOTDIR)/scala-package; \
+   (cd $(ROOTDIR)/scala-package && \
mvn package -P$(SCALA_PKG_PROFILE),$(SCALA_VERSION_PROFILE) 
-Dcxx="$(CXX)" \
-Dbuild.platform="$(SCALA_PKG_PROFILE)" \
-Dcflags="$(CFLAGS)" -Dldflags="$(LDFLAGS)" \
@@ -612,49 +620,58 @@ scalapkg:
-Dlddeps="$(LIB_DEP) $(ROOTDIR)/lib/libmxnet.a")
 
 scalaunittest:
-   (cd $(ROOTDIR)/scala-package; \
+   (cd $(ROOTDIR)/scala-package && \
mvn integration-test 
-P$(SCALA_PKG_PROFILE),$(SCALA_VERSION_PROFILE),unittest -Dcxx="$(CXX)" \
-Dcflags="$(CFLAGS)" -Dldflags="$(LDFLAGS)" \
-Dlddeps="$(LIB_DEP) $(ROOTDIR)/lib/libmxnet.a" 
$(SCALA_TEST_ARGS))
 
 scalaintegrationtest:
-   (cd $(ROOTDIR)/scala-package; \
+   (cd $(ROOTDIR)/scala-package && \
mvn integration-test 
-P$(SCALA_PKG_PROFILE),$(SCALA_VERSION_PROFILE),integrationtest -Dcxx="$(CXX)" \
-Dcflags="$(CFLAGS)" -Dldflags="$(LDFLAGS)" \
-Dlddeps="$(LIB_DEP) $(ROOTDIR)/lib/libmxnet.a" 
$(SCALA_TEST_ARGS))
 
 scalainstall:
-   (cd $(ROOTDIR)/scala-package; \
+   (cd $(ROOTDIR)/scala-package && \
mvn install -P$(SCALA_PKG_PROFILE),$(SCALA_VERSION_PROFILE) 
-DskipTests=true -Dcxx="$(CXX)" \
-Dbuild.platform="$(SCALA_PKG_PROFILE)" \
-Dcflags="$(CFLAGS)" -Dldflags="$(LDFLAGS)" \
-Dlddeps="$(LIB_DEP) $(ROOTDIR)/lib/libmxnet.a")
 
 scalarelease-dryrun:
-   (cd $(ROOTDIR)/scala-package; \
+   (cd $(ROOTDIR)/scala-package && \
mvn release:clean release:prepare -DdryRun=true 
-DautoVersionSubmodules=true \
-Papache-release,$(SCALA_PKG_PROFILE),$(SCALA_VERSION_PROFILE) \
-Darguments=""-Dbuild\.platform=\""$(SCALA_PKG_PROFILE)\""\ 
-DskipTests=true\ -Dcflags=\""$(CFLAGS)\""\ -Dcxx=\""$(CXX)\""\ 
-Dldflags=\""$(LDFLAGS)\""\ -Dlddeps=\""$(LIB_DEP) 
$(ROOTDIR)/lib/libmxnet.a\)
 
 scalarelease-prepare:
-   (cd $(ROOTDIR)/scala-package; \
+   (cd $(ROOTDIR)/scala-package && \
mvn release:clean release:prepare -DautoVersionSubmodules=true \
-Papache-release,$(SCALA_PKG_PROFILE),$(SCALA_VERSION_PROFILE) \
-Darguments=""-Dbuild\.platform=\""$(SCALA_PKG_PROFILE)\""\ 
-DskipTests=true\ -Dcflags=\""$(CFLAGS)\""\ -Dcxx=\""$(CXX)\""\ 
-Dldflags=\""$(LDFLAGS)\""\ -Dlddeps=\""$(LIB_DEP) 
$(ROOTDIR)/lib/libmxnet.a\)
 
 scalarelease-perform:
-   (cd 

[GitHub] lanking520 closed pull request #13046: [MXNET-1155] Add scala packageTest utility

2018-12-11 Thread GitBox
lanking520 closed pull request #13046: [MXNET-1155] Add scala packageTest 
utility
URL: https://github.com/apache/incubator-mxnet/pull/13046
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/Makefile b/Makefile
index 16ea59f3d58..822704e2675 100644
--- a/Makefile
+++ b/Makefile
@@ -600,11 +600,19 @@ rpkgtest:
Rscript -e 
'res<-covr:::package_coverage("R-package");fileConn<-file(paste("r-package_coverage_",toString(runif(1)),".json"));writeLines(covr:::to_codecov(res),
 fileConn);close(fileConn)'
 
 scalaclean:
-   (cd $(ROOTDIR)/scala-package; \
+   (cd $(ROOTDIR)/scala-package && \
mvn clean -P$(SCALA_PKG_PROFILE),$(SCALA_VERSION_PROFILE))
 
+scalatestcompile:
+   (cd $(ROOTDIR)/scala-package && \
+   mvn test-compile 
-P$(SCALA_PKG_PROFILE),$(SCALA_VERSION_PROFILE) -Dcxx="$(CXX)" \
+   -Dbuild.platform="$(SCALA_PKG_PROFILE)" \
+   -Dcflags="$(CFLAGS)" -Dldflags="$(LDFLAGS)" \
+   -Dcurrent_libdir="$(ROOTDIR)/lib" \
+   -Dlddeps="$(LIB_DEP) $(ROOTDIR)/lib/libmxnet.a")
+
 scalapkg:
-   (cd $(ROOTDIR)/scala-package; \
+   (cd $(ROOTDIR)/scala-package && \
mvn package -P$(SCALA_PKG_PROFILE),$(SCALA_VERSION_PROFILE) 
-Dcxx="$(CXX)" \
-Dbuild.platform="$(SCALA_PKG_PROFILE)" \
-Dcflags="$(CFLAGS)" -Dldflags="$(LDFLAGS)" \
@@ -612,49 +620,58 @@ scalapkg:
-Dlddeps="$(LIB_DEP) $(ROOTDIR)/lib/libmxnet.a")
 
 scalaunittest:
-   (cd $(ROOTDIR)/scala-package; \
+   (cd $(ROOTDIR)/scala-package && \
mvn integration-test 
-P$(SCALA_PKG_PROFILE),$(SCALA_VERSION_PROFILE),unittest -Dcxx="$(CXX)" \
-Dcflags="$(CFLAGS)" -Dldflags="$(LDFLAGS)" \
-Dlddeps="$(LIB_DEP) $(ROOTDIR)/lib/libmxnet.a" 
$(SCALA_TEST_ARGS))
 
 scalaintegrationtest:
-   (cd $(ROOTDIR)/scala-package; \
+   (cd $(ROOTDIR)/scala-package && \
mvn integration-test 
-P$(SCALA_PKG_PROFILE),$(SCALA_VERSION_PROFILE),integrationtest -Dcxx="$(CXX)" \
-Dcflags="$(CFLAGS)" -Dldflags="$(LDFLAGS)" \
-Dlddeps="$(LIB_DEP) $(ROOTDIR)/lib/libmxnet.a" 
$(SCALA_TEST_ARGS))
 
 scalainstall:
-   (cd $(ROOTDIR)/scala-package; \
+   (cd $(ROOTDIR)/scala-package && \
mvn install -P$(SCALA_PKG_PROFILE),$(SCALA_VERSION_PROFILE) 
-DskipTests=true -Dcxx="$(CXX)" \
-Dbuild.platform="$(SCALA_PKG_PROFILE)" \
-Dcflags="$(CFLAGS)" -Dldflags="$(LDFLAGS)" \
-Dlddeps="$(LIB_DEP) $(ROOTDIR)/lib/libmxnet.a")
 
 scalarelease-dryrun:
-   (cd $(ROOTDIR)/scala-package; \
+   (cd $(ROOTDIR)/scala-package && \
mvn release:clean release:prepare -DdryRun=true 
-DautoVersionSubmodules=true \
-Papache-release,$(SCALA_PKG_PROFILE),$(SCALA_VERSION_PROFILE) \
-Darguments=""-Dbuild\.platform=\""$(SCALA_PKG_PROFILE)\""\ 
-DskipTests=true\ -Dcflags=\""$(CFLAGS)\""\ -Dcxx=\""$(CXX)\""\ 
-Dldflags=\""$(LDFLAGS)\""\ -Dlddeps=\""$(LIB_DEP) 
$(ROOTDIR)/lib/libmxnet.a\)
 
 scalarelease-prepare:
-   (cd $(ROOTDIR)/scala-package; \
+   (cd $(ROOTDIR)/scala-package && \
mvn release:clean release:prepare -DautoVersionSubmodules=true \
-Papache-release,$(SCALA_PKG_PROFILE),$(SCALA_VERSION_PROFILE) \
-Darguments=""-Dbuild\.platform=\""$(SCALA_PKG_PROFILE)\""\ 
-DskipTests=true\ -Dcflags=\""$(CFLAGS)\""\ -Dcxx=\""$(CXX)\""\ 
-Dldflags=\""$(LDFLAGS)\""\ -Dlddeps=\""$(LIB_DEP) 
$(ROOTDIR)/lib/libmxnet.a\)
 
 scalarelease-perform:
-   (cd $(ROOTDIR)/scala-package; \
+   (cd $(ROOTDIR)/scala-package && \
mvn release:perform -DautoVersionSubmodules=true \
-Papache-release,$(SCALA_PKG_PROFILE),$(SCALA_VERSION_PROFILE) \
-Darguments=""-Dbuild\.platform=\""$(SCALA_PKG_PROFILE)\""\ 
-DskipTests=true\ -Dcflags=\""$(CFLAGS)\""\ -Dcxx=\""$(CXX)\""\ 
-Dldflags=\""$(LDFLAGS)\""\ -Dlddeps=\""$(LIB_DEP) 
$(ROOTDIR)/lib/libmxnet.a\)
 
 scaladeploy:
-   (cd $(ROOTDIR)/scala-package; \
+   (cd $(ROOTDIR)/scala-package && \
mvn deploy 
-Papache-release,$(SCALA_PKG_PROFILE),$(SCALA_VERSION_PROFILE) 
\-DskipTests=true -Dcxx="$(CXX)" \
-Dbuild.platform="$(SCALA_PKG_PROFILE)" \
-Dcflags="$(CFLAGS)" -Dldflags="$(LDFLAGS)" \
-Dlddeps="$(LIB_DEP) $(ROOTDIR)/lib/libmxnet.a")
 
+scaladeploylocal:
+   (cd $(ROOTDIR)/scala-package && \
+   mvn deploy 

[GitHub] ChaiBapchya edited a comment on issue #13039: [MXNET-918] Random module

2018-12-11 Thread GitBox
ChaiBapchya edited a comment on issue #13039: [MXNET-918] Random module
URL: https://github.com/apache/incubator-mxnet/pull/13039#issuecomment-446395402
 
 
   It supports int32 and int64 (defaulting to int32)
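
   For illustration, a short sketch against the API this PR adds:

```python
import mxnet as mx

# dtype defaults to int32; int64 can be requested explicitly
a = mx.nd.random.randint(low=0, high=10, shape=(2, 3))
b = mx.nd.random.randint(low=0, high=10, shape=(2, 3), dtype='int64')
print(a.dtype, b.dtype)  # <class 'numpy.int32'> <class 'numpy.int64'>
```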


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] lanking520 closed pull request #13617: [MXNET-1257] fix the Float not showing correctly problem

2018-12-11 Thread GitBox
lanking520 closed pull request #13617: [MXNET-1257] fix the Float not showing 
correctly problem
URL: https://github.com/apache/incubator-mxnet/pull/13617
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git 
a/scala-package/infer/src/main/scala/org/apache/mxnet/infer/javaapi/Predictor.scala
 
b/scala-package/infer/src/main/scala/org/apache/mxnet/infer/javaapi/Predictor.scala
index 8c48742e6f0..0466693be9b 100644
--- 
a/scala-package/infer/src/main/scala/org/apache/mxnet/infer/javaapi/Predictor.scala
+++ 
b/scala-package/infer/src/main/scala/org/apache/mxnet/infer/javaapi/Predictor.scala
@@ -80,10 +80,11 @@ class Predictor private[mxnet] (val predictor: 
org.apache.mxnet.infer.Predictor)
   An extra List is needed for when the model has 
more than one input.
 * @return  Indexed sequence array of outputs
 */
-  def predict(input: java.util.List[java.util.List[Float]]):
-  java.util.List[java.util.List[Float]] = {
+  def predict(input: java.util.List[java.util.List[java.lang.Float]]):
+  java.util.List[java.util.List[java.lang.Float]] = {
 val in = 
JavaConverters.asScalaIteratorConverter(input.iterator).asScala.toIndexedSeq
-(predictor.predict(in map {a => a.asScala.toArray}) map {b => 
b.toList.asJava}).asJava
+(predictor.predict(in map {a => a.asScala.map(Float2float).toArray})
+  map {b => b.map(float2Float).toList.asJava}).asJava
   }
 
 


 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[incubator-mxnet] branch master updated: fix the Float not showing correctly problem (#13617)

2018-12-11 Thread lanking
This is an automated email from the ASF dual-hosted git repository.

lanking pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new 1f8bb26  fix the Float not showing correctly problem (#13617)
1f8bb26 is described below

commit 1f8bb26b63f030ce9d64d3a45aae0cc216572de0
Author: Lanking 
AuthorDate: Tue Dec 11 15:11:27 2018 -0800

fix the Float not showing correctly problem (#13617)

Merge this PR for 1.4.x
---
 .../src/main/scala/org/apache/mxnet/infer/javaapi/Predictor.scala  | 7 ---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git 
a/scala-package/infer/src/main/scala/org/apache/mxnet/infer/javaapi/Predictor.scala
 
b/scala-package/infer/src/main/scala/org/apache/mxnet/infer/javaapi/Predictor.scala
index 8c48742..0466693 100644
--- 
a/scala-package/infer/src/main/scala/org/apache/mxnet/infer/javaapi/Predictor.scala
+++ 
b/scala-package/infer/src/main/scala/org/apache/mxnet/infer/javaapi/Predictor.scala
@@ -80,10 +80,11 @@ class Predictor private[mxnet] (val predictor: 
org.apache.mxnet.infer.Predictor)
   An extra List is needed for when the model has 
more than one input.
 * @return  Indexed sequence array of outputs
 */
-  def predict(input: java.util.List[java.util.List[Float]]):
-  java.util.List[java.util.List[Float]] = {
+  def predict(input: java.util.List[java.util.List[java.lang.Float]]):
+  java.util.List[java.util.List[java.lang.Float]] = {
 val in = 
JavaConverters.asScalaIteratorConverter(input.iterator).asScala.toIndexedSeq
-(predictor.predict(in map {a => a.asScala.toArray}) map {b => 
b.toList.asJava}).asJava
+(predictor.predict(in map {a => a.asScala.map(Float2float).toArray})
+  map {b => b.map(float2Float).toList.asJava}).asJava
   }
 
 



[GitHub] lanking520 opened a new pull request #13619: [MXNET-1231] Allow not using Some in the Scala operators

2018-12-11 Thread GitBox
lanking520 opened a new pull request #13619: [MXNET-1231] Allow not using Some 
in the Scala operators
URL: https://github.com/apache/incubator-mxnet/pull/13619
 
 
   ## Description ##
   Adding a new util called SomeConversion. Importing it helps reduce all of 
the explicit Some usages.
   @nswamy @yzhliu @andrewfayres @zachgk @piyushghai 
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [ ] The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to 
the relevant [JIRA issue](https://issues.apache.org/jira/projects/MXNET/issues) 
created (except PRs with tiny changes)
   - [ ] Changes are complete (i.e. I finished coding on this PR)
   - [ ] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [ ] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, 
the source of the dataset, expected performance on the test set, and a 
reference to the original paper if applicable
   - Check the API doc at 
http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [ ] To the best of my knowledge, examples are either not affected by this 
change, or have been fixed to be compatible with this change
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] Vikas89 commented on issue #13438: libc getenv is not threadsafe

2018-12-11 Thread GitBox
Vikas89 commented on issue #13438: libc getenv is not threadsafe
URL: 
https://github.com/apache/incubator-mxnet/issues/13438#issuecomment-446396227
 
 
   @anirudh2290 good one!
   I think this is a problem in general. For this particular case we can try 
to use fork handlers:
   pthread_atfork(prepare, parent, child)
   
   In the prepare handler, we should set a bogus env variable; that takes and 
releases the environment lock right before fork(), so the child gets the lock 
in an unlocked state.
   Something like dmlc::setEnv("Bogus", "Bogus") here: 
https://github.com/apache/incubator-mxnet/blob/master/src/initialize.cc#L54
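
   The same fork-handler idea, sketched in Python purely for illustration 
(the real fix would be C++ in src/initialize.cc; the env variable name below 
is made up):

```python
import os

def _prepare():
    # Runs in the parent immediately before fork(): setting an env variable
    # takes and releases libc's environment lock, so the child is unlikely
    # to inherit it in a locked state.
    os.environ["MXNET_BOGUS"] = "bogus"

# Python 3.7+ analogue of registering pthread_atfork handlers
os.register_at_fork(before=_prepare)
```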
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] ChaiBapchya commented on issue #13039: [MXNET-918] Random module

2018-12-11 Thread GitBox
ChaiBapchya commented on issue #13039: [MXNET-918] Random module
URL: https://github.com/apache/incubator-mxnet/pull/13039#issuecomment-446395402
 
 
   It supports int (defaulting to int32)


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] lanking520 commented on issue #13039: [MXNET-918] Random module

2018-12-11 Thread GitBox
lanking520 commented on issue #13039: [MXNET-918] Random module
URL: https://github.com/apache/incubator-mxnet/pull/13039#issuecomment-446394609
 
 
   @ChaiBapchya what is the data type of `low` and `high`? It seems they are 
not being interpreted correctly in Scala.
   @mdespriee please try to pull --rebase upstream master and then make clean 
and build the backend all over again to see if you can reproduce this issue


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] ChaiBapchya commented on issue #13039: [MXNET-918] Random module

2018-12-11 Thread GitBox
ChaiBapchya commented on issue #13039: [MXNET-918] Random module
URL: https://github.com/apache/incubator-mxnet/pull/13039#issuecomment-446393785
 
 
   Operator definition 
   
https://github.com/apache/incubator-mxnet/blob/449e17dbf2ec671037d4b127a28897b157f80bf3/src/operator/random/sample_op.h#L257
   
   RandInt kernel function definition - 
   
https://github.com/apache/incubator-mxnet/blob/449e17dbf2ec671037d4b127a28897b157f80bf3/src/operator/random/sampler.h#L97
   
   CPU and GPU specific code can be found in files - 
   
https://github.com/apache/incubator-mxnet/blob/449e17dbf2ec671037d4b127a28897b157f80bf3/src/operator/random/sample_op.cc#L181
   
   
https://github.com/apache/incubator-mxnet/blob/449e17dbf2ec671037d4b127a28897b157f80bf3/src/operator/random/sample_op.cu#L42


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] stu1130 commented on a change in pull request #13614: Make to_tensor and normalize to accept 3D or 4D tensor inputs

2018-12-11 Thread GitBox
stu1130 commented on a change in pull request #13614: Make to_tensor and 
normalize to accept 3D or 4D tensor inputs
URL: https://github.com/apache/incubator-mxnet/pull/13614#discussion_r240820503
 
 

 ##
 File path: src/operator/image/image_random-inl.h
 ##
 @@ -62,6 +67,23 @@ inline bool ToTensorType(const nnvm::NodeAttrs& attrs,
   return (*in_attrs)[0] != -1;
 }
 
+void ToTensorImpl(const std::vector<TBlob> &inputs,
+                  const std::vector<TBlob> &outputs,
+                  const int length,
+                  const int channel,
+                  const int step = 0) {
+  MSHADOW_TYPE_SWITCH(inputs[0].type_flag_, DType, {
+    float* output = outputs[0].dptr<float>();
+    DType* input = inputs[0].dptr<DType>();
+
 
 Review comment:
   How about checking the DType ourselves, without using the macro?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] sandeep-krishnamurthy commented on a change in pull request #13614: Make to_tensor and normalize to accept 3D or 4D tensor inputs

2018-12-11 Thread GitBox
sandeep-krishnamurthy commented on a change in pull request #13614: Make 
to_tensor and normalize to accept 3D or 4D tensor inputs
URL: https://github.com/apache/incubator-mxnet/pull/13614#discussion_r240819963
 
 

 ##
 File path: src/operator/image/image_random-inl.h
 ##
 @@ -123,28 +159,50 @@ inline bool NormalizeShape(const nnvm::NodeAttrs& attrs,
   return true;
 }
 
+void NormalizeImpl(const std::vector<TBlob> &inputs,
+                   const std::vector<TBlob> &outputs,
+                   const NormalizeParam &param,
+                   const int length,
+                   const int channel,
+                   const int step = 0) {
+    MSHADOW_TYPE_SWITCH(outputs[0].type_flag_, DType, {
+      DType* input = inputs[0].dptr<DType>();
+      DType* output = outputs[0].dptr<DType>();
+
+      for (int i = 0; i < channel; ++i) {
+        DType mean = param.mean[param.mean.ndim() > 1 ? i : 0];
+        DType std_dev = param.std[param.std.ndim() > 1 ? i : 0];
+        for (int j = 0; j < length; ++j) {
+          output[step + i*length + j] = (input[step + i*length + j] - mean) / std_dev;
+        }
+      }
+    });
+}
+
 void Normalize(const nnvm::NodeAttrs &attrs,
                const OpContext &ctx,
                const std::vector<TBlob> &inputs,
                const std::vector<OpReqType> &req,
                const std::vector<TBlob> &outputs) {
   const NormalizeParam &param = nnvm::get<NormalizeParam>(attrs.parsed);
 
-  int nchannels = inputs[0].shape_[0];
-  int length = inputs[0].shape_[1] * inputs[0].shape_[2];
-
-  MSHADOW_TYPE_SWITCH(outputs[0].type_flag_, DType, {
-    DType* input = inputs[0].dptr<DType>();
-    DType* output = outputs[0].dptr<DType>();
-
-    for (int i = 0; i < nchannels; ++i) {
-      DType mean = param.mean[param.mean.ndim() > 1 ? i : 0];
-      DType std = param.std[param.std.ndim() > 1 ? i : 0];
-      for (int j = 0; j < length; ++j) {
-        output[i*length + j] = (input[i*length + j] - mean) / std;
-      }
+  // 3D input (c, h, w)
+  if (inputs[0].ndim() == 3) {
+    const int length = inputs[0].shape_[1] * inputs[0].shape_[2];
+    const int channel = inputs[0].shape_[0];
+    NormalizeImpl(inputs, outputs, param, length, channel);
+  } else if (inputs[0].ndim() == 4) {
+    // 4D input (n, c, h, w)
+    const int batch_size = inputs[0].shape_[0];
+    const int length = inputs[0].shape_[2] * inputs[0].shape_[3];
+    const int channel = inputs[0].shape_[1];
+    const int step = channel*length;
 
 Review comment:
   Done
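
   For reference, the user-facing behavior this PR is after, as a sketch; the 
4D path only works once the change lands:

```python
import mxnet as mx

img   = mx.nd.random.uniform(shape=(3, 32, 32))      # single image (c, h, w)
batch = mx.nd.random.uniform(shape=(8, 3, 32, 32))   # batch (n, c, h, w)

mean, std = (0.485, 0.456, 0.406), (0.229, 0.224, 0.225)
out3d = mx.nd.image.normalize(img, mean=mean, std=std)     # -> (3, 32, 32)
out4d = mx.nd.image.normalize(batch, mean=mean, std=std)   # -> (8, 3, 32, 32)
```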


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] sandeep-krishnamurthy commented on a change in pull request #13614: Make to_tensor and normalize to accept 3D or 4D tensor inputs

2018-12-11 Thread GitBox
sandeep-krishnamurthy commented on a change in pull request #13614: Make 
to_tensor and normalize to accept 3D or 4D tensor inputs
URL: https://github.com/apache/incubator-mxnet/pull/13614#discussion_r240819407
 
 

 ##
 File path: src/operator/image/image_random-inl.h
 ##
 @@ -62,6 +67,23 @@ inline bool ToTensorType(const nnvm::NodeAttrs& attrs,
   return (*in_attrs)[0] != -1;
 }
 
+void ToTensorImpl(const std::vector<TBlob> &inputs,
+                  const std::vector<TBlob> &outputs,
+                  const int length,
+                  const int channel,
+                  const int step = 0) {
+  MSHADOW_TYPE_SWITCH(inputs[0].type_flag_, DType, {
+    float* output = outputs[0].dptr<float>();
+    DType* input = inputs[0].dptr<DType>();
+
 
 Review comment:
   It cannot be used inside the macro.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] marcoabreu commented on issue #13598: More fine-grained operator implementation dispatch & memory planning flow

2018-12-11 Thread GitBox
marcoabreu commented on issue #13598: More fine-grained operator implementation 
dispatch & memory planning flow 
URL: 
https://github.com/apache/incubator-mxnet/issues/13598#issuecomment-446384192
 
 
   Just had a small chat with Haibin. So just to clarify, my idea would be 
rather long-term: to avoid having all the preprocessor directives in 
FInferStorageTypeEx and similar places.
   
   Within the above design, FInferStorageTypeEx would be part of the abstract 
operator interface which each backend would implement. The memory layout 
manager would then invoke that function in the same fashion as described by 
Haibin but the implementation would be in different places.
   
   I totally see that my proposal is way out of scope for the current problem 
and agree that Haibin's method is definitely the best way to go considering 
the constraints of the current design around preprocessor directives.
   
   Just wanted to write down my idea for future cases when the operator 
implementation design might get revisited.
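
   One way that abstraction could look, sketched in Python purely as 
pseudocode for the idea (none of these names exist in MXNet):

```python
from abc import ABC, abstractmethod

class OperatorBackend(ABC):
    """Hypothetical per-backend interface replacing the #if directives."""

    @abstractmethod
    def infer_storage_type_ex(self, attrs, dev_mask, in_types):
        """Return the storage/layout types this backend wants for an op."""

class MKLDNNBackend(OperatorBackend):
    def infer_storage_type_ex(self, attrs, dev_mask, in_types):
        # backend-specific preference lives here instead of behind a
        # compile-time #if MXNET_USE_MKLDNN branch
        return ["default"] * len(in_types)

def plan_memory(backend, attrs, dev_mask, in_types):
    # the memory-planning flow just dispatches to the active backend
    return backend.infer_storage_type_ex(attrs, dev_mask, in_types)
```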


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[incubator-mxnet] branch master updated: #13385 [Clojure] - Turn examples into integration tests (#13554)

2018-12-11 Thread cmeier
This is an automated email from the ASF dual-hosted git repository.

cmeier pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new 449e17d   #13385 [Clojure] - Turn examples into integration tests 
(#13554)
449e17d is described below

commit 449e17dbf2ec671037d4b127a28897b157f80bf3
Author: Nicolas Modrzyk 
AuthorDate: Wed Dec 12 07:17:37 2018 +0900

 #13385 [Clojure] - Turn examples into integration tests (#13554)
---
 .../src/cnn_text_classification/classifier.clj |  14 +-
 .../cnn_text_classification/classifier_test.clj|  44 
 contrib/clojure-package/examples/gan/project.clj   |   3 +-
 .../examples/gan/src/gan/gan_mnist.clj |   6 +-
 .../clojure-package/examples/gan/src/gan/viz.clj   |   4 +-
 .../gan/{project.clj => test/gan/gan_test.clj} |  15 +-
 .../src/imclassification/train_mnist.clj   |  20 +-
 .../test/imclassification/train_mnist_test.clj}|  29 ++-
 .../imclassification/test/test-symbol.json.ref | 105 
 .../project.clj => module/test/mnist_mlp_test.clj} |  19 +-
 .../test/multi_label_test.clj} |  16 +-
 .../neural-style/src/neural_style/core.clj |  22 +-
 .../neural-style/test/neural_style/vgg_19_test.clj |  53 
 .../examples/profiler/src/profiler/core.clj|   6 +-
 .../project.clj => profiler/test/core_test.clj}|  21 +-
 .../profiler/test/profile-matmul-20iter.json.ref   | 271 +
 .../examples/rnn/src/rnn/test_char_rnn.clj |   4 +
 .../examples/rnn/src/rnn/train_char_rnn.clj|   4 +
 .../project.clj => rnn/test/rnn/core_test.clj} |  16 +-
 .../clojure-package/examples/tutorial/.gitignore   |   1 +
 .../clojure-package/examples/tutorial/project.clj  |   2 +
 .../examples/tutorial/src/tutorial/module.clj  |  35 ++-
 .../examples/tutorial/src/tutorial/ndarray.clj |   8 +-
 .../examples/tutorial/src/tutorial/symbol.clj  |  10 +-
 .../test/tutorial/core_test.clj}   |  17 +-
 .../test/visualization/core_test.clj}  |  18 +-
 contrib/clojure-package/integration-tests.sh   |  28 +++
 .../nightly/apache_rat_license_check/rat-excludes  |   4 +-
 28 files changed, 705 insertions(+), 90 deletions(-)

diff --git 
a/contrib/clojure-package/examples/cnn-text-classification/src/cnn_text_classification/classifier.clj
 
b/contrib/clojure-package/examples/cnn-text-classification/src/cnn_text_classification/classifier.clj
index 29ff36f..94fd4f5 100644
--- 
a/contrib/clojure-package/examples/cnn-text-classification/src/cnn_text_classification/classifier.clj
+++ 
b/contrib/clojure-package/examples/cnn-text-classification/src/cnn_text_classification/classifier.clj
@@ -16,7 +16,9 @@
 ;;
 
 (ns cnn-text-classification.classifier
-  (:require [cnn-text-classification.data-helper :as data-helper]
+  (:require [clojure.java.io :as io]
+[clojure.java.shell :refer [sh]]
+[cnn-text-classification.data-helper :as data-helper]
 [org.apache.clojure-mxnet.eval-metric :as eval-metric]
 [org.apache.clojure-mxnet.io :as mx-io]
 [org.apache.clojure-mxnet.module :as m]
@@ -26,12 +28,18 @@
 [org.apache.clojure-mxnet.context :as context])
   (:gen-class))
 
+(def data-dir "data/")
 (def mr-dataset-path "data/mr-data") ;; the MR polarity dataset path
 (def glove-file-path "data/glove/glove.6B.50d.txt")
 (def num-filter 100)
 (def num-label 2)
 (def dropout 0.5)
 
+
+
+(when-not (.exists (io/file (str data-dir)))
+  (do (println "Retrieving data for cnn text classification...") (sh 
"./get_data.sh")))
+
 (defn shuffle-data [test-num {:keys [data label sentence-count sentence-size 
embedding-size]}]
   (println "Shuffling the data and splitting into training and test sets")
   (println {:sentence-count sentence-count
@@ -103,10 +111,10 @@
   ;;; omit max-examples if you want to run all the examples in the movie 
review dataset
 ;; to limit mem consumption set to something like 1000 and adjust test 
size to 100
 (println "Running with context devices of" devs)
-(train-convnet {:devs [(context/cpu)] :embedding-size 50 :batch-size 10 
:test-size 100 :num-epoch 10 :max-examples 1000})
+(train-convnet {:devs devs :embedding-size 50 :batch-size 10 :test-size 
100 :num-epoch 10 :max-examples 1000})
 ;; runs all the examples
 #_(train-convnet {:embedding-size 50 :batch-size 100 :test-size 1000 
:num-epoch 10})))
 
 (comment
-  (train-convnet {:devs [(context/cpu)] :embedding-size 50 :batch-size 10 
:test-size 100 :num-epoch 10 :max-examples 1000}))
+  (train-convnet {:devs devs :embedding-size 50 :batch-size 10 :test-size 100 
:num-epoch 10 :max-examples 1000}))
 
diff --git 
a/contrib/clojure-package/examples/cnn-text-classification/test/cnn_text_classification/classifier_test.clj
 

[GitHub] gigasquid commented on issue #13554: #13385 [Clojure] - Turn examples into integration tests

2018-12-11 Thread GitBox
gigasquid commented on issue #13554: #13385 [Clojure] - Turn examples into 
integration tests
URL: https://github.com/apache/incubator-mxnet/pull/13554#issuecomment-446383710
 
 
   Thanks for taking this on and making it happen!
   
   I'm going to go ahead and merge this, and then we can work on adding it to 
the nightly tests in another PR and see how it runs there.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] gigasquid closed pull request #13554: #13385 [Clojure] - Turn examples into integration tests

2018-12-11 Thread GitBox
gigasquid closed pull request #13554: #13385 [Clojure] - Turn examples into 
integration tests
URL: https://github.com/apache/incubator-mxnet/pull/13554
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git 
a/contrib/clojure-package/examples/cnn-text-classification/src/cnn_text_classification/classifier.clj
 
b/contrib/clojure-package/examples/cnn-text-classification/src/cnn_text_classification/classifier.clj
index 29ff36fe1ec..94fd4f518c6 100644
--- 
a/contrib/clojure-package/examples/cnn-text-classification/src/cnn_text_classification/classifier.clj
+++ 
b/contrib/clojure-package/examples/cnn-text-classification/src/cnn_text_classification/classifier.clj
@@ -16,7 +16,9 @@
 ;;
 
 (ns cnn-text-classification.classifier
-  (:require [cnn-text-classification.data-helper :as data-helper]
+  (:require [clojure.java.io :as io]
+[clojure.java.shell :refer [sh]]
+[cnn-text-classification.data-helper :as data-helper]
 [org.apache.clojure-mxnet.eval-metric :as eval-metric]
 [org.apache.clojure-mxnet.io :as mx-io]
 [org.apache.clojure-mxnet.module :as m]
@@ -26,12 +28,18 @@
 [org.apache.clojure-mxnet.context :as context])
   (:gen-class))
 
+(def data-dir "data/")
 (def mr-dataset-path "data/mr-data") ;; the MR polarity dataset path
 (def glove-file-path "data/glove/glove.6B.50d.txt")
 (def num-filter 100)
 (def num-label 2)
 (def dropout 0.5)
 
+
+
+(when-not (.exists (io/file (str data-dir)))
+  (do (println "Retrieving data for cnn text classification...") (sh 
"./get_data.sh")))
+
 (defn shuffle-data [test-num {:keys [data label sentence-count sentence-size 
embedding-size]}]
   (println "Shuffling the data and splitting into training and test sets")
   (println {:sentence-count sentence-count
@@ -103,10 +111,10 @@
   ;;; omit max-examples if you want to run all the examples in the movie 
review dataset
 ;; to limit mem consumption set to something like 1000 and adjust test 
size to 100
 (println "Running with context devices of" devs)
-(train-convnet {:devs [(context/cpu)] :embedding-size 50 :batch-size 10 
:test-size 100 :num-epoch 10 :max-examples 1000})
+(train-convnet {:devs devs :embedding-size 50 :batch-size 10 :test-size 
100 :num-epoch 10 :max-examples 1000})
 ;; runs all the examples
 #_(train-convnet {:embedding-size 50 :batch-size 100 :test-size 1000 
:num-epoch 10})))
 
 (comment
-  (train-convnet {:devs [(context/cpu)] :embedding-size 50 :batch-size 10 
:test-size 100 :num-epoch 10 :max-examples 1000}))
+  (train-convnet {:devs devs :embedding-size 50 :batch-size 10 :test-size 100 
:num-epoch 10 :max-examples 1000}))
 
diff --git 
a/contrib/clojure-package/examples/cnn-text-classification/test/cnn_text_classification/classifier_test.clj
 
b/contrib/clojure-package/examples/cnn-text-classification/test/cnn_text_classification/classifier_test.clj
new file mode 100644
index 000..918a46f474d
--- /dev/null
+++ 
b/contrib/clojure-package/examples/cnn-text-classification/test/cnn_text_classification/classifier_test.clj
@@ -0,0 +1,44 @@
+;;
+;; Licensed to the Apache Software Foundation (ASF) under one or more
+;; contributor license agreements.  See the NOTICE file distributed with
+;; this work for additional information regarding copyright ownership.
+;; The ASF licenses this file to You under the Apache License, Version 2.0
+;; (the "License"); you may not use this file except in compliance with
+;; the License.  You may obtain a copy of the License at
+;;
+;;http://www.apache.org/licenses/LICENSE-2.0
+;;
+;; Unless required by applicable law or agreed to in writing, software
+;; distributed under the License is distributed on an "AS IS" BASIS,
+;; WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+;; See the License for the specific language governing permissions and
+;; limitations under the License.
+;;
+
+(ns cnn-text-classification.classifier-test
+   (:require 
+   [clojure.test :refer :all]
+   [org.apache.clojure-mxnet.module :as module]
+   [org.apache.clojure-mxnet.ndarray :as ndarray]
+   [org.apache.clojure-mxnet.util :as util]
+   [org.apache.clojure-mxnet.context :as context]
+   [cnn-text-classification.classifier :as classifier]))
+
+;
+; The one and unique classifier test
+;
+(deftest classifier-test
+   (let [train
+(classifier/train-convnet 
+   {:devs [(context/default-context)]
+ :embedding-size 50 
+ :batch-size 10 
+ :test-size 100 
+ :num-epoch 1 
+ :max-examples 1000})]
+(is (= ["data"] (util/scala-vector->vec (module/data-names train
+(is (= 20 (count (ndarray/->vec (-> train 

[GitHub] zachgk commented on a change in pull request #13364: [MXNET-1225] Always use config.mk in make install instructions

2018-12-11 Thread GitBox
zachgk commented on a change in pull request #13364: [MXNET-1225] Always use 
config.mk in make install instructions
URL: https://github.com/apache/incubator-mxnet/pull/13364#discussion_r240802315
 
 

 ##
 File path: docs/install/build_from_source.md
 ##
 @@ -203,68 +204,82 @@ It is recommended to set environment variable 
NCCL_LAUNCH_MODE to PARALLEL when
 
 ### Build MXNet with C++
 
-* To enable C++ package, just add `USE_CPP_PACKAGE=1` when you run `make` or 
`cmake`.
+* To enable C++ package, just add `USE_CPP_PACKAGE=1` when you run `make` or 
`cmake` (see examples).
 
 
 
 ### Usage Examples
 
-* `-j` runs multiple jobs against multi-core CPUs.
-
 For example, you can specify using all cores on Linux as follows:
 
 ```bash
-cmake -j$(nproc)
+mkdir build && cd build
+cmake -GNinja .
+ninja -v
 ```
 
 
  Recommended for Systems with NVIDIA GPUs and Intel CPUs
 * Build MXNet with `cmake` and install with MKL DNN, GPU, and OpenCV support:
 
 ```bash
-cmake -j USE_CUDA=1 USE_CUDA_PATH=/usr/local/cuda USE_CUDNN=1 USE_MKLDNN=1
+mkdir build && cd build
+cmake -DUSE_CUDA=1 -DUSE_CUDA_PATH=/usr/local/cuda -DUSE_CUDNN=1 
-DUSE_MKLDNN=1 -GNinja .
+ninja -v
 ```
 
  Recommended for Systems with NVIDIA GPUs
 * Build with OpenBLAS, GPU, and OpenCV support:
 
 ```bash
-cmake -j BLAS=open USE_CUDA=1 USE_CUDA_PATH=/usr/local/cuda USE_CUDNN=1
+mkdir build && cd build
+cmake -DBLAS=open -DUSE_CUDA=1 -DUSE_CUDA_PATH=/usr/local/cuda -DUSE_CUDNN=1 
-GNinja .
 
 Review comment:
   It is a build system (https://ninja-build.org/). I believe CMake generates 
the ninja configuration files and then ninja performs the actual build.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] piyushghai commented on a change in pull request #13046: [MXNET-1155] Add scala packageTest utility

2018-12-11 Thread GitBox
piyushghai commented on a change in pull request #13046: [MXNET-1155] Add scala 
packageTest utility
URL: https://github.com/apache/incubator-mxnet/pull/13046#discussion_r240801610
 
 

 ##
 File path: scala-package/packageTest/README.md
 ##
 @@ -0,0 +1,72 @@
+# MXNet Scala Package Test
+
+This is a project created to run the test suite on a fully packaged mxnet 
jar. The test suite is found locally but mxnet is from the target jarfile.
+
+## General Setup
+
+To set up the packageTest, you must first build your tests. To build the tests, 
follow these steps from the mxnet main directory:
+
+1. Build MXNet and the scala package from source following the directions 
[here](https://mxnet.incubator.apache.org/install/scala_setup.html#source)
+2. Build the tests by running `make scalatestcompile`.
+3. Follow setup instructions below for your testing goal
+
+## Running
+
+There are three different modes of operation for testing based on the location 
of the jar and where it is coming from:
+
+### Test Installed Jars
+
+If you have a jar file, you can install it to your maven cache 
repository (`~/.m2/repository`). This might be useful if you acquire the .jar 
file from elsewhere. To install, it is easiest to use `mvn install:install-file 
-Dfile=<path-to-file> -DpomFile=<path-to-pomfile>`. If the pom file is not 
available, you can also run `mvn install:install-file -Dfile=<path-to-file> 
-DgroupId=<group-id> -DartifactId=<artifact-id> -Dversion=<version> 
-Dpackaging=<packaging>`. With the full mxnet jar, this might look like `mvn 
install:install-file -Dfile=<path-to-file> -DgroupId=org.apache.mxnet 
-DartifactId=mxnet-full_2.11-linux-x86_64-cpu -Dversion=1.3.0 -Dpackaging=jar`.
+
+You can also run `make scalainstall` to install from a local build.
+
+After installing, run `make testinstall` in the package test directory to run 
the tests.  Note that unless you also install an additional mxnetexamples jar, 
you can only run the unit tests.
+
+### Test Local Deployment
+
+To test the jars that would be produced by a deployment, you can run `make 
scaladeploylocal` from the main mxnet directory. This produces a local snapshot 
located at `scala-package/local-snapshot`. To test this local snapshot, run 
`make testlocal`.
+
+### Remote Repository Snapshot
+
+This mode is to test a jar located in a remote repository. The default 
repository is the apache snapshot repository located at 
`https://repository.apache.org/content/repositories/snapshots`. Note that the 
actual jar in a repository should be located at 
`$repoUrl/org/apache/mxnet/mxnet-full_$scalaVersion-$osMode/$version/*.jar`.
+
+Test the snapshot repo using `make testsnapshot` or a different repo using 
`make testsnapshot MXNET_REPO=$NEW_REPO_URL`.
+
+### Options
+
+You are able to run unit tests, integration tests, or both using this utility. 
To run the unit tests, add the flag `UNIT=1` to make (e.g. `make testsnapshot 
UNIT=1`). Use `INTEGRATION=1` for integration tests. The default behavior is to 
run both the unit and integration tests. However, the integration tests require 
that the mxnet examples be installed in addition to the full mxnet package (see 
test mode instructions above).
+
+As an additional option, you can specify the mxnet version with 
`MXNET_VERSION=1.3.1-SNAPSHOT`.
+
+## Cleaning Up
+
+You can clean temporary files and target artifacts by running `make 
scalaclean`.
+
+## Troubleshooting
+
+### Missing Examples
+
+If the test run fails with the following error
+```
+[ERROR] Failed to execute goal org.scalatest:scalatest-maven-plugin:1.0:test 
(test) on project mxnet-scala-packagetest-examples_2.11: There are test 
failures -> [Help 1]
+[ERROR]
+[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
+[ERROR] Re-run Maven using the -X switch to enable full debug logging.
+[ERROR]
+[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
+[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
+[ERROR]
+[ERROR] After correcting the problems, you can resume the build with the 
command
+[ERROR]   mvn  -rf :mxnet-scala-packagetest-examples_2.11
+Makefile:57: recipe for target 'scalaintegrationtest' failed
+make: *** [scalaintegrationtest] Error 1
+```
+
+and the stack trace begins with the following,
+
+```
+*** RUN ABORTED ***
+  java.lang.NoClassDefFoundError: org/apache/mxnetexamples/Util$
+```
+
+you are missing the mxnetexamples package.  See your test mode installation 
section for details.
 
 Review comment:
   Okay! Makes sense


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] zachgk commented on a change in pull request #13046: [MXNET-1155] Add scala packageTest utility

2018-12-11 Thread GitBox
zachgk commented on a change in pull request #13046: [MXNET-1155] Add scala 
packageTest utility
URL: https://github.com/apache/incubator-mxnet/pull/13046#discussion_r240801313
 
 

 ##
 File path: scala-package/packageTest/README.md
 ##
 @@ -0,0 +1,72 @@
+# MXNet Scala Package Test
+
+This is a project created to run the test suite on a fully packaged mxnet 
jar. The test suite is found locally but mxnet is from the target jarfile.
+
+## General Setup
+
+To set up the packageTest, you must first build your tests. To build the tests, 
follow these steps from the mxnet main directory:
+
+1. Build MXNet and the scala package from source following the directions 
[here](https://mxnet.incubator.apache.org/install/scala_setup.html#source)
+2. Build the tests by running `make scalatestcompile`.
+3. Follow setup instructions below for your testing goal
+
+## Running
+
+There are three different modes of operation for testing based on the location 
of the jar and where it is coming from:
+
+### Test Installed Jars
+
+If you have a jar file, you can install it to your maven cache 
repository (`~/.m2/repository`). This might be useful if you acquire the .jar 
file from elsewhere. To install, it is easiest to use `mvn install:install-file 
-Dfile=<path-to-file> -DpomFile=<path-to-pomfile>`. If the pom file is not 
available, you can also run `mvn install:install-file -Dfile=<path-to-file> 
-DgroupId=<group-id> -DartifactId=<artifact-id> -Dversion=<version> 
-Dpackaging=<packaging>`. With the full mxnet jar, this might look like `mvn 
install:install-file -Dfile=<path-to-file> -DgroupId=org.apache.mxnet 
-DartifactId=mxnet-full_2.11-linux-x86_64-cpu -Dversion=1.3.0 -Dpackaging=jar`.
+
+You can also run `make scalainstall` to install from a local build.
+
+After installing, run `make testinstall` in the package test directory to run 
the tests.  Note that unless you also install an additional mxnetexamples jar, 
you can only run the unit tests.
+
+### Test Local Deployment
+
+To test the jars that would be produced by a deployment, you can run `make 
scaladeploylocal` from the main mxnet directory. This produces a local snapshot 
located at `scala-package/local-snapshot`. To test this local snapshot, run 
`make testlocal`.
+
+### Remote Repository Snapshot
+
+This mode is to test a jar located in a remote repository. The default 
repository is the apache snapshot repository located at 
`https://repository.apache.org/content/repositories/snapshots`. Note that the 
actual jar in a repository should be located at 
`$repoUrl/org/apache/mxnet/mxnet-full_$scalaVersion-$osMode/$version/*.jar`.
+
+Test the snapshot repo using `make testsnapshot` or a different repo using 
`make testsnapshot MXNET_REPO=$NEW_REPO_URL`.
+
+### Options
+
+You are able to run unit tests, integration tests, or both using this utility. 
To run the unit tests, add the flag `UNIT=1` to make (e.g. `make testsnapshot 
UNIT=1`). Use `INTEGRATION=1` for integration tests. The default behavior is to 
run both the unit and integration tests. However, the integration tests require 
that the mxnet examples be installed in addition to the full mxnet package (see 
test mode instructions above).
+
+As an additional option, you can specify the mxnet version with 
`MXNET_VERSION=1.3.1-SNAPSHOT`.
+
+## Cleaning Up
+
+You can clean temporary files and target artifacts by running `make 
scalaclean`.
+
+## Troubleshooting
+
+### Missing Examples
+
+If the test run fails with the following error
+```
+[ERROR] Failed to execute goal org.scalatest:scalatest-maven-plugin:1.0:test 
(test) on project mxnet-scala-packagetest-examples_2.11: There are test 
failures -> [Help 1]
+[ERROR]
+[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
+[ERROR] Re-run Maven using the -X switch to enable full debug logging.
+[ERROR]
+[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
+[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
+[ERROR]
+[ERROR] After correcting the problems, you can resume the build with the 
command
+[ERROR]   mvn  -rf :mxnet-scala-packagetest-examples_2.11
+Makefile:57: recipe for target 'scalaintegrationtest' failed
+make: *** [scalaintegrationtest] Error 1
+```
+
+and the stack trace begins with the following,
+
+```
+*** RUN ABORTED ***
+  java.lang.NoClassDefFoundError: org/apache/mxnetexamples/Util$
+```
+
+you are missing the mxnetexamples package.  See your test mode installation 
section for details.
 
 Review comment:
   In-document linking is not officially part of Markdown. GitHub added the 
feature, but it won't work in other viewers such as IntelliJ. I would rather 
have no link than a link that might not work.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

[GitHub] stu1130 commented on a change in pull request #13614: Make to_tensor and normalize to accept 3D or 4D tensor inputs

2018-12-11 Thread GitBox
stu1130 commented on a change in pull request #13614: Make to_tensor and 
normalize to accept 3D or 4D tensor inputs
URL: https://github.com/apache/incubator-mxnet/pull/13614#discussion_r240792875
 
 

 ##
 File path: src/operator/image/image_random-inl.h
 ##
 @@ -123,28 +159,50 @@ inline bool NormalizeShape(const nnvm::NodeAttrs& attrs,
   return true;
 }
 
+void NormalizeImpl(const std::vector<TBlob> &inputs,
+  const std::vector<TBlob> &outputs,
+  const NormalizeParam &param,
+  const int length,
+  const int channel,
+  const int step = 0) {
+MSHADOW_TYPE_SWITCH(outputs[0].type_flag_, DType, {
+  DType* input = inputs[0].dptr<DType>();
+  DType* output = outputs[0].dptr<DType>();
+
+  for (int i = 0; i < channel; ++i) {
+DType mean = param.mean[param.mean.ndim() > 1 ? i : 0];
+DType std_dev = param.std[param.std.ndim() > 1 ? i : 0];
 
 Review comment:
   mean and std_dev should be float type, as defined by lines 115 and 116
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] stu1130 commented on a change in pull request #13614: Make to_tensor and normalize to accept 3D or 4D tensor inputs

2018-12-11 Thread GitBox
stu1130 commented on a change in pull request #13614: Make to_tensor and 
normalize to accept 3D or 4D tensor inputs
URL: https://github.com/apache/incubator-mxnet/pull/13614#discussion_r240795206
 
 

 ##
 File path: src/operator/image/image_random-inl.h
 ##
 @@ -123,28 +159,50 @@ inline bool NormalizeShape(const nnvm::NodeAttrs& attrs,
   return true;
 }
 
+void NormalizeImpl(const std::vector<TBlob> &inputs,
+  const std::vector<TBlob> &outputs,
+  const NormalizeParam &param,
+  const int length,
+  const int channel,
+  const int step = 0) {
+MSHADOW_TYPE_SWITCH(outputs[0].type_flag_, DType, {
+  DType* input = inputs[0].dptr<DType>();
+  DType* output = outputs[0].dptr<DType>();
+
+  for (int i = 0; i < channel; ++i) {
+DType mean = param.mean[param.mean.ndim() > 1 ? i : 0];
+DType std_dev = param.std[param.std.ndim() > 1 ? i : 0];
+for (int j = 0; j < length; ++j) {
+  output[step + i*length + j] = (input[step + i*length + j] - mean) / 
std_dev;
+}
+  }
+});
+}
+
 void Normalize(const nnvm::NodeAttrs &attrs,
   const OpContext &ctx,
   const std::vector<TBlob> &inputs,
   const std::vector<OpReqType> &req,
   const std::vector<TBlob> &outputs) {
   const NormalizeParam &param = nnvm::get<NormalizeParam>(attrs.parsed);
 
-  int nchannels = inputs[0].shape_[0];
-  int length = inputs[0].shape_[1] * inputs[0].shape_[2];
-
-  MSHADOW_TYPE_SWITCH(outputs[0].type_flag_, DType, {
-DType* input = inputs[0].dptr<DType>();
-DType* output = outputs[0].dptr<DType>();
-
-for (int i = 0; i < nchannels; ++i) {
-  DType mean = param.mean[param.mean.ndim() > 1 ? i : 0];
-  DType std = param.std[param.std.ndim() > 1 ? i : 0];
-  for (int j = 0; j < length; ++j) {
-output[i*length + j] = (input[i*length + j] - mean) / std;
-  }
+  // 3D input (c, h, w)
+  if (inputs[0].ndim() == 3) {
+const int length = inputs[0].shape_[1] * inputs[0].shape_[2];
+const int channel = inputs[0].shape_[0];
+NormalizeImpl(inputs, outputs, param, length, channel);
+  } else if (inputs[0].ndim() == 4) {
+// 4D input (n, c, h, w)
+const int batch_size = inputs[0].shape_[0];
+const int length = inputs[0].shape_[2] * inputs[0].shape_[3];
+const int channel = inputs[0].shape_[1];
+const int step = channel*length;
 
 Review comment:
   nit: 
   ```suggestion
   const int step = channel * length;
   ```
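   The hunk above is cut off right after `step` is computed; presumably the 
4D path then normalizes each sample at its own offset. A self-contained 
sketch of that idea (hypothetical names, plain float arrays instead of TBlob):
   ```
   #include <vector>

   // One NormalizeImpl-style pass over a single sample at offset `step`.
   void normalize_one(const float* in, float* out,
                      const std::vector<float>& mean,
                      const std::vector<float>& std_dev,
                      int length, int channel, int step) {
     for (int i = 0; i < channel; ++i) {
       float m = mean[mean.size() > 1 ? i : 0];
       float s = std_dev[std_dev.size() > 1 ? i : 0];
       for (int j = 0; j < length; ++j)
         out[step + i * length + j] = (in[step + i * length + j] - m) / s;
     }
   }

   // 4D case: advance by step = channel * length elements per batch sample.
   void normalize_4d(const float* in, float* out,
                     const std::vector<float>& mean,
                     const std::vector<float>& std_dev,
                     int batch_size, int channel, int length) {
     const int step = channel * length;
     for (int n = 0; n < batch_size; ++n)
       normalize_one(in, out, mean, std_dev, length, channel, n * step);
   }
   ```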


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] stu1130 commented on a change in pull request #13614: Make to_tensor and normalize to accept 3D or 4D tensor inputs

2018-12-11 Thread GitBox
stu1130 commented on a change in pull request #13614: Make to_tensor and 
normalize to accept 3D or 4D tensor inputs
URL: https://github.com/apache/incubator-mxnet/pull/13614#discussion_r240800204
 
 

 ##
 File path: tests/python/unittest/test_gluon_data_vision.py
 ##
 @@ -19,30 +19,66 @@
 import mxnet.ndarray as nd
 import numpy as np
 from mxnet import gluon
+from mxnet.base import MXNetError
 from mxnet.gluon.data.vision import transforms
 from mxnet.test_utils import assert_almost_equal
 from mxnet.test_utils import almost_equal
-from common import setup_module, with_seed, teardown
-
+from common import assertRaises, setup_module, with_seed, teardown
 
 @with_seed()
 def test_to_tensor():
+# 3D Input
 data_in = np.random.uniform(0, 255, (300, 300, 3)).astype(dtype=np.uint8)
 
 Review comment:
   ```suggestion
   data_in = nd.random.uniform(0, 255, (300, 300, 3)).astype(dtype=np.uint8)
   ```
   Directly initializing an ndarray would be better?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] stu1130 commented on a change in pull request #13614: Make to_tensor and normalize to accept 3D or 4D tensor inputs

2018-12-11 Thread GitBox
stu1130 commented on a change in pull request #13614: Make to_tensor and 
normalize to accept 3D or 4D tensor inputs
URL: https://github.com/apache/incubator-mxnet/pull/13614#discussion_r240795082
 
 

 ##
 File path: src/operator/image/image_random-inl.h
 ##
 @@ -123,28 +159,50 @@ inline bool NormalizeShape(const nnvm::NodeAttrs& attrs,
   return true;
 }
 
+void NormalizeImpl(const std::vector<TBlob> &inputs,
+  const std::vector<TBlob> &outputs,
+  const NormalizeParam &param,
+  const int length,
+  const int channel,
+  const int step = 0) {
+MSHADOW_TYPE_SWITCH(outputs[0].type_flag_, DType, {
+  DType* input = inputs[0].dptr<DType>();
+  DType* output = outputs[0].dptr<DType>();
+
+  for (int i = 0; i < channel; ++i) {
+DType mean = param.mean[param.mean.ndim() > 1 ? i : 0];
+DType std_dev = param.std[param.std.ndim() > 1 ? i : 0];
+for (int j = 0; j < length; ++j) {
+  output[step + i*length + j] = (input[step + i*length + j] - mean) / 
std_dev;
 
 Review comment:
   If the input is int, should it be int or float after normalization? I prefer 
float here
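   A tiny illustration of the truncation issue (hypothetical values, not 
MXNet code): with an integer result type the normalized value collapses to 
zero, so float is the safer output type.
   ```
   #include <iostream>

   int main() {
     int pixel = 128;
     float mean = 127.5f, std_dev = 58.4f;
     // The float result is ~0.00856; casting to int truncates it to 0.
     float as_float = (pixel - mean) / std_dev;
     int as_int = static_cast<int>(as_float);
     std::cout << as_int << " vs " << as_float << "\n";
     return 0;
   }
   ```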


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] stu1130 commented on a change in pull request #13614: Make to_tensor and normalize to accept 3D or 4D tensor inputs

2018-12-11 Thread GitBox
stu1130 commented on a change in pull request #13614: Make to_tensor and 
normalize to accept 3D or 4D tensor inputs
URL: https://github.com/apache/incubator-mxnet/pull/13614#discussion_r240793402
 
 

 ##
 File path: src/operator/image/image_random-inl.h
 ##
 @@ -62,6 +67,23 @@ inline bool ToTensorType(const nnvm::NodeAttrs& attrs,
   return (*in_attrs)[0] != -1;
 }
 
+void ToTensorImpl(const std::vector<TBlob> &inputs,
+const std::vector<TBlob> &outputs,
+const int length,
+const int channel,
+const int step = 0) {
+  MSHADOW_TYPE_SWITCH(inputs[0].type_flag_, DType, {
+  float* output = outputs[0].dptr<float>();
+  DType* input = inputs[0].dptr<DType>();
+
 
 Review comment:
   ```suggestion
   #pragma omp parallel for collapse(2)
   ```
   would this give better performance?
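   One caveat: `collapse(2)` only applies when the two loops are perfectly 
nested, i.e. no statements between the two `for` headers. A self-contained 
sketch of how the pragma could look on the ToTensor loops (hypothetical 
helper; assumes the usual uint8 HWC to float32 CHW conversion scaled to 
[0, 1], which is what to_tensor documents):
   ```
   // Standalone sketch, not the PR's code: convert one image (or one batch
   // sample at offset `step`) from HWC uint8 to CHW float32 in [0, 1].
   void to_tensor_sketch(const unsigned char* input, float* output,
                         int channel, int length, int step) {
     // The loops are perfectly nested, so collapse(2) is legal here and
     // parallelizes over channel * length iterations at once.
     #pragma omp parallel for collapse(2)
     for (int c = 0; c < channel; ++c) {
       for (int j = 0; j < length; ++j) {
         output[step + c * length + j] =
             static_cast<float>(input[step + j * channel + c]) / 255.0f;
       }
     }
   }
   ```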


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] mseth10 edited a comment on issue #12203: flaky test: test_operator_gpu.test_depthwise_convolution

2018-12-11 Thread GitBox
mseth10 edited a comment on issue #12203: flaky test: 
test_operator_gpu.test_depthwise_convolution
URL: 
https://github.com/apache/incubator-mxnet/issues/12203#issuecomment-446366930
 
 
   Reproduction steps (from 
https://cwiki.apache.org/confluence/display/MXNET/Reproducing+test+results):
   
   Spin up a p3.8xlarge instance with the Ubuntu base DLAMI and at least 150GB 
of EBS storage
   
   Cloning and building MXNet:
   - git clone --recursive https://github.com/apache/incubator-mxnet.git
   - cd incubator-mxnet
   - pip3 install -r ci/requirements.txt
   - ci/build.py --platform ubuntu_build_cuda /work/runtime_functions.sh 
build_ubuntu_gpu_mkldnn
   
   Enabling the test - Comment out *line 
[1634](https://github.com/apache/incubator-mxnet/blob/master/tests/python/unittest/test_operator.py#L1634)
 in file tests/python/unittest/test_operator.py.
   `# @unittest.skip("Flaky test 
https://github.com/apache/incubator-mxnet/issues/12203")`
   
   Running only this particular test 10,000 times - Modify *line 
[735](https://github.com/apache/incubator-mxnet/blob/master/ci/docker/runtime_functions.sh#L735)
 in file ci/docker/runtime_functions.sh to
   `MXNET_TEST_COUNT=10000 nosetests-2.7 $NOSE_COVERAGE_ARGUMENTS 
$NOSE_TIMER_ARGUMENTS --with-xunit --xunit-file nosetests_gpu.xml --verbose 
tests/python/gpu/test_operator_gpu.py:test_depthwise_convolution`
   
   - ci/build.py --nvidiadocker --platform ubuntu_gpu 
/work/runtime_functions.sh unittest_ubuntu_python2_gpu
   
   *Line numbers corresponding to commit 
[e25e18f](https://github.com/apache/incubator-mxnet/tree/e25e18f86be04b258973ecf3dcf72d052e7d33e4)


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] anirudh2290 opened a new pull request #13618: Fix warning in waitall doc

2018-12-11 Thread GitBox
anirudh2290 opened a new pull request #13618: Fix warning in waitall doc
URL: https://github.com/apache/incubator-mxnet/pull/13618
 
 
   ## Description ##
   Fixing waitall rendering: 
https://mxnet.incubator.apache.org/api/python/ndarray/ndarray.html?highlight=waitall#mxnet.ndarray.waitall
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [ ] The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to 
the relevant [JIRA issue](https://issues.apache.org/jira/projects/MXNET/issues) 
created (except PRs with tiny changes)
   - [x] Changes are complete (i.e. I finished coding on this PR)
   - [ ] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [x] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, 
the source of the dataset, expected performance on test set and reference to 
the original paper if applicable
   - Check the API doc at 
http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [ ] To the best of my knowledge, examples are either not affected by this 
change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [x] Change API doc for correct rendering of warning.
   
   @nswamy @aaronmarkham 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] mseth10 commented on issue #12203: flaky test: test_operator_gpu.test_depthwise_convolution

2018-12-11 Thread GitBox
mseth10 commented on issue #12203: flaky test: 
test_operator_gpu.test_depthwise_convolution
URL: 
https://github.com/apache/incubator-mxnet/issues/12203#issuecomment-446366930
 
 
   Reproduction steps (from 
https://cwiki.apache.org/confluence/display/MXNET/Reproducing+test+results):
   
   Spin up a p3.8xlarge instance with the Ubuntu base DLAMI and at least 150GB 
of EBS storage
   
   Cloning and building MXNet:
   - git clone --recursive https://github.com/apache/incubator-mxnet.git
   - cd incubator-mxnet
   - pip3 install -r ci/requirements.txt
   - ci/build.py --platform ubuntu_build_cuda /work/runtime_functions.sh 
build_ubuntu_gpu_mkldnn
   
   Enabling the test - Comment out line 
[1634](https://github.com/apache/incubator-mxnet/blob/master/tests/python/unittest/test_operator.py#L1634)
 in file tests/python/unittest/test_operator.py.
   `# @unittest.skip("Flaky test 
https://github.com/apache/incubator-mxnet/issues/12203")`
   
   Running only this particular test 10,000 times - Modify line 
[735](https://github.com/apache/incubator-mxnet/blob/master/ci/docker/runtime_functions.sh#L735)
 in file ci/docker/runtime_functions.sh to
   `MXNET_TEST_COUNT=10000 nosetests-2.7 $NOSE_COVERAGE_ARGUMENTS 
$NOSE_TIMER_ARGUMENTS --with-xunit --xunit-file nosetests_gpu.xml --verbose 
tests/python/gpu/test_operator_gpu.py:test_depthwise_convolution`
   
   - ci/build.py --nvidiadocker --platform ubuntu_gpu 
/work/runtime_functions.sh unittest_ubuntu_python2_gpu


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] marcoabreu edited a comment on issue #13598: More fine-grained operator implementation dispatch & memory planning flow

2018-12-11 Thread GitBox
marcoabreu edited a comment on issue #13598: More fine-grained operator 
implementation dispatch & memory planning flow 
URL: 
https://github.com/apache/incubator-mxnet/issues/13598#issuecomment-446353433
 
 
   Thanks for your very good questions! 
   
   For the operator selection I would think about a design which has something 
similar to a "tuning" or warm-up stage which evaluates the different 
possibilities. Initially, since that revamp would be quite big and 
experimental, I would hardcode an order (e.g. CUDA->AMD->MKLDNN->CPU) which is 
then evaluated and certain backends dropped if they don't support that operator 
or they're simply not present. Later on, there would ideally be a benchmark 
step which evaluates the different possibilities and then chooses the most 
efficient representation of the graph. This evaluation would first start with 
simple benchmarks (with different strategies like memory footprint, power 
consumption, throughput, etc) of each operator backend and then in the next 
stage go one level higher and evaluate groups of operators (up to evaluating 
the entire graph) to accommodate for layout conversion and memcopy overhead. In 
the last iteration, we would have a graph which is the most efficient, but also 
runnable on that hardware, for the requested graph.
   
   There are two ways I could think of backends conflicting:
   1. Mismatching memory layouts
   2. Impossible/unlikely combinations (CUDA or MKL)
   
   To solve number one, I would extend the design to not only have the 
operators abstracted, but also their memory layouts. In the same way as we 
would have an operator registry, we would have a memory layout registry where 
each backend announces their memory layouts (this could be rearranging data or 
moving them to different memory slots like GPU mem) as well as converters. Each 
operator implementation would specify a desired layout (most likely the one 
they registered themselves). Now imagine you have a graph with three operators:
   ```
   Input -> Operator1_CUDA -> Operator2_MKL -> Operator3_MKL -> Output
   ```
   These three operators are from two entirely different backends and have 
their own implementation and memory layouts. Our engine would, during the 
initial analysis of the graph (this step is after the optional graph 
optimization and we assume the graph as final at that point), analyse the 
desired layout of each operator (in this case CUDA and MKL, but it could also 
go a level deeper like CUDA_NHWC etc) and then see whether they are compatible. 
If they are not, the engine would request a converter from the memory layout 
registry. These converters would then be inserted into the graph and the final 
graph would look as follows:
   ```
   Input -> Convert_Standard_CUDA -> Operator1_CUDA -> Convert_CUDA_MKL -> 
Operator2_MKL -> Operator3_MKL -> Convert_MKL_Standard -> Output
   ```
   This way, you will always have compatibility in between the different 
layouts while neither the operators nor the engine will have to care about 
the different backends as that conversion happens in between. When an operator 
receives and outputs data, it expects to be in its "isolated" world. If the 
operators are from the same backend and use the same layout though, this 
conversion is skipped and a performance advantage is achieved.
   Now at this point you could get to O(N!) if you need converters in between 
every single possible memory layout. The trick here is to have a standard 
layout (which we basically already have and is used to input and output data 
from the graphs). Each memory layout has to register at least two converters: 
TO_STANDARD and FROM_STANDARD. This allows compatibility for backends 
where no direct conversion exists. Since this will require two conversions 
(FROM_MEMLAYOUT1_TO_STANDARD and FROM_STANDARD_TO_MEMLAYOUT2), this will have 
additional overhead but keep compatibility high. For common cases, there would 
probably be direct converters. 
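   To make the idea concrete, a minimal sketch of such a registry and the 
conversion-insertion pass could look as follows. All names (Layout, 
LayoutRegistry, OpNode, InsertConversions) are made up for illustration; this 
is not MXNet's actual graph infrastructure.
   ```
   #include <set>
   #include <string>
   #include <utility>
   #include <vector>

   using Layout = std::string;  // e.g. "STANDARD", "CUDA", "MKL"

   struct OpNode {
     std::string name;
     Layout layout;  // the layout this implementation consumes and produces
   };

   class LayoutRegistry {
    public:
     // Each backend registers at least TO_STANDARD and FROM_STANDARD.
     void Register(const Layout& from, const Layout& to) {
       direct_.insert({from, to});
     }
     bool HasDirect(const Layout& from, const Layout& to) const {
       return direct_.count({from, to}) != 0;
     }
    private:
     std::set<std::pair<Layout, Layout>> direct_;
   };

   // Insert conversion nodes between adjacent ops whose layouts differ,
   // falling back to a two-hop conversion through STANDARD when no direct
   // converter was registered.
   std::vector<OpNode> InsertConversions(const std::vector<OpNode>& graph,
                                         const LayoutRegistry& reg) {
     std::vector<OpNode> out;
     for (size_t i = 0; i < graph.size(); ++i) {
       if (i > 0 && graph[i - 1].layout != graph[i].layout) {
         const Layout& from = graph[i - 1].layout;
         const Layout& to = graph[i].layout;
         if (reg.HasDirect(from, to)) {
           out.push_back({"Convert_" + from + "_" + to, to});
         } else {
           out.push_back({"Convert_" + from + "_STANDARD", "STANDARD"});
           out.push_back({"Convert_STANDARD_" + to, to});
         }
       }
       out.push_back(graph[i]);
     }
     return out;
   }
   ```
   Running it over the example graph above (Operator1_CUDA, Operator2_MKL, 
Operator3_MKL) yields exactly the Convert_CUDA_MKL insertion shown, or the 
two-hop detour through STANDARD when only the standard converters exist.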
   
   For the second case where conflicting backends exist, they would simply be 
skipped during the evaluation stage when the engine checks whether an operator 
is actually eligible. So if CUDA is not present, the operator will simply not 
be considered for that graph.
   
   
   The optimization I talked about in the first paragraph could be developed in 
three stages:
   1. Simple hardcoded priority (maybe with ENV var)
   2. Pre-run benchmark that returns the most optimal graph. This will increase 
the startup duration.
   3. Background optimization: Immediately serve requests, but slightly modify 
the graph every now and then in order to approach the most optimal graph. This 
will slightly increase the initial latency (due to initial suboptimal operator 
choice) but will result in the most efficient graph in the end as well
   
   This optimization could either be done as part of the run or also run 
separately (imagine a CD pipeline) and then deployed together with the model to 
avoid 

[GitHub] stu1130 commented on a change in pull request #13611: add image resize operator and unit test

2018-12-11 Thread GitBox
stu1130 commented on a change in pull request #13611: add image resize operator 
and unit test
URL: https://github.com/apache/incubator-mxnet/pull/13611#discussion_r240787276
 
 

 ##
 File path: src/operator/image/image_random-inl.h
 ##
 @@ -147,6 +150,140 @@ void Normalize(const nnvm::NodeAttrs &attrs,
   });
 }
 
+struct ResizeParam : public dmlc::Parameter<ResizeParam> {
+  nnvm::Tuple<int> size;
+  bool keep_ratio;
+  int interp;
+  DMLC_DECLARE_PARAMETER(ResizeParam) {
+DMLC_DECLARE_FIELD(size)
+.set_default(nnvm::Tuple<int>())
+.describe("Size of new image. Could be (width, height) or (size)");
+DMLC_DECLARE_FIELD(keep_ratio)
+.describe("Whether to resize the short edge or both edges to `size`, "
+  "if size is give as an integer.");
+DMLC_DECLARE_FIELD(interp)
+.set_default(1)
+.describe("Interpolation method for resizing. By default uses bilinear"
+"interpolation. See OpenCV's resize function for available choices.");
+  }
+};
+
+inline std::tuple<int, int> GetHeightAndWidth(int data_h,
+  int data_w,
+  const ResizeParam& param) {
+  CHECK_LE(param.size.ndim(), 2)
+  << "Input size dimension must be 1 or 2, but got "
+  << param.size.ndim();
+  int resized_h;
+  int resized_w;
+  if (param.size.ndim() == 1) {
+CHECK_GT(param.size[0], 0)
+  << "Input size should greater than 0, but got "
 
 Review comment:
   It's the input size from Python:
   ```F.image.resize(x, self._size, self._keep, self._interpolation)```
   where `size` is either a single `size` or `(width, height)`.
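   A sketch of how that parameter could be resolved into a concrete (height, 
width), assuming short-edge scaling when `keep_ratio` is set (hypothetical 
helper, not necessarily the PR's final logic):
   ```
   #include <tuple>
   #include <vector>

   // size with 1 element: resize both edges to it, or scale the short edge
   // when keep_ratio is set; 2 elements are interpreted as (width, height).
   std::tuple<int, int> resolve_size(int data_h, int data_w,
                                     const std::vector<int>& size,
                                     bool keep_ratio) {
     if (size.size() == 2)
       return std::make_tuple(size[1], size[0]);  // -> (height, width)
     if (!keep_ratio)
       return std::make_tuple(size[0], size[0]);
     if (data_h < data_w)  // height is the short edge
       return std::make_tuple(
           size[0], static_cast<int>(data_w * (size[0] / static_cast<double>(data_h))));
     return std::make_tuple(
         static_cast<int>(data_h * (size[0] / static_cast<double>(data_w))), size[0]);
   }
   ```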


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] marcoabreu edited a comment on issue #13598: More fine-grained operator implementation dispatch & memory planning flow

2018-12-11 Thread GitBox
marcoabreu edited a comment on issue #13598: More fine-grained operator 
implementation dispatch & memory planning flow 
URL: 
https://github.com/apache/incubator-mxnet/issues/13598#issuecomment-446353433
 
 
   Thanks for your very good questions! 
   
   For the operator selection I would think about a design which has something 
similar to a "tuning" or warm-up stage which evaluates the different 
possibilities. Initially, since that revamp would be quite big and 
experimental, I would hardcode an order (e.g. CUDA->AMD->MKLDNN->CPU) which is 
then evaluated and certain backends dropped if they don't support that operator 
or they're simply not present. Later on, there would ideally be a benchmark 
step which evaluates the different possibilities and then chooses the most 
efficient representation of the graph. This evaluation would first start with 
simple benchmarks (with different strategies like memory footprint, power 
consumption, throughput, etc) of each operator backend and then in the next 
stage go one level higher and evaluate groups of operators (up to evaluating 
the entire graph) to accommodate for layout conversion and memcopy overhead. In 
the last iteration, we would have a graph which is the most efficient, but also 
runnable on that hardware, for the requested graph.
   
   There are two ways I could think of backends conflicting:
   1. Mismatching memory layouts
   2. Impossible/unlikely combinations (CUDA or MKL)
   
   To solve number one, I would extend the design to not only have the 
operators abstracted, but also their memory layouts. In the same way as we 
would have an operator registry, we would have a memory layout registry where 
each backend announces their memory layouts (this could be rearranging data or 
moving them to different memory slots like GPU mem) as well as converters. Each 
operator implementation would specify a desired layout (most likely the one 
they registered themselves). Now imagine you have a graph with three operators:
   ```
   Input -> Operator1_CUDA -> Operator2_MKL -> Operator3_MKL -> Output
   ```
   These three operators are from two entirely different backends and have 
their own implementation and memory layouts. Our engine would, during the 
initial analysis of the graph (this step is after the optional graph 
optimization and we assume the graph as final at that point), analyse the 
desired layout of each operator (in this case CUDA and MKL, but it could also 
go a level deeper like CUDA_NHWC etc) and then see whether they are compatible. 
If they are not, the engine would request a converter from the memory layout 
registry. These converters would then be inserted into the graph and the final 
graph would look as follows:
   ```
   Input -> Convert_Standard_CUDA -> Operator1_CUDA -> Convert_CUDA_MKL -> 
Operator2_MKL -> Operator3_MKL -> Convert_MKL_Standard -> Output
   ```
   This way, you will always have compatibility in between the different 
layouts while neither the operators nor the engine will have to care about 
the different backends as that conversion happens in between. When an operator 
receives and outputs data, it expects to be in its "isolated" world. If the 
operators are from the same backend and use the same layout though, this 
conversion is skipped and a performance advantage is achieved.
   Now at this point you could get to O(N!) if you need converters in between 
every single possible memory layout. The trick here is to have a standard 
layout (which we basically already have and is used to input and output data 
from the graphs). Each memory layout has to register at least two converters: 
TO_STANDARD and FROM_STANDARD. This allows compatibility for backends 
where no direct conversion exists. Since this will require two conversions 
(FROM_MEMLAYOUT1_TO_STANDARD and FROM_STANDARD_TO_MEMLAYOUT2), this will have 
additional overhead but keep compatibility high. For common cases, there would 
probably be direct converters. 
   
   For the second case where conflicting backends exist, they would simply be 
skipped during the evaluation stage when the engine checks whether an operator 
is actually eligible. So if CUDA is not present, the operator will simply not 
be considered for that graph.
   
   
   I hope my prose makes my idea a bit clearer. If you would like, I'm happy to 
draw a small diagram


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] marcoabreu edited a comment on issue #13598: More fine-grained operator implementation dispatch & memory planning flow

2018-12-11 Thread GitBox
marcoabreu edited a comment on issue #13598: More fine-grained operator 
implementation dispatch & memory planning flow 
URL: 
https://github.com/apache/incubator-mxnet/issues/13598#issuecomment-446353433
 
 
   Thanks for your very good questions! 
   
   For the operator selection I would think about a design which has something 
similar to a "tuning" or warm-up stage which evaluates the different 
possibilities. Initially, since that revamp would be quite big and 
experimental, I would hardcode an order (e.g. CUDA->AMD->MKLDNN->CPU) which is 
then evaluated and certain backends dropped if they don't support that operator 
or they're simply not present. Later on, there would ideally be a benchmark 
step which evaluates the different possibilities and then chooses the most 
efficient representation of the graph. This evaluation would first start with 
simple benchmarks (with different strategies like memory footprint, power 
consumption, throughput, etc) of each operator backend and then in the next 
stage go one level higher and evaluate groups of operators (up to evaluating 
the entire graph) to accommodate for layout conversion and memcopy overhead. In 
the last iteration, we would have a graph which is the most efficient, but also 
runnable on that hardware, for the requested graph.
   
   There are two ways I could think of backends conflicting:
   1. Mismatching memory layouts
   2. Impossible/unlikely combinations (CUDA or MKL)
   
   To solve number one, I would extend the design to not only have the 
operators abstracted, but also their memory layouts. In the same way as we 
would have an operator registry, we would have a memory layout registry where 
each backend announces their memory layouts (this could be rearranging data or 
moving them to different memory slots like GPU mem) as well as converters. Each 
operator implementation would specify a desired layout (most likely the one 
they registered themselves). Now imagine you have a graph with three operators:
   ```
   Input -> Operator1_CUDA -> Operator2_MKL -> Operator3_MKL -> Output
   ```
   These three operators are from two entirely different backends and have 
their own implementation and memory layouts. Our engine would, during the 
initial analysis of the graph (this step is after the optional graph 
optimization and we assume the graph as final at that point), analyse the 
desired layout of each operator (in this case CUDA and MKL, but it could also 
go a level deeper like CUDA_NHWC etc) and then see whether they are compatible. 
If they are not, the engine would request a converter from the memory layout 
registry. These converters would then be inserted into the graph and the final 
graph would look as follows:
   ```
   Input -> Convert_Standard_CUDA -> Operator1_CUDA -> Convert_CUDA_MKL -> 
Operator2_MKL -> Operator3_MKL -> Convert_MKL_Standard -> Output
   ```
   This way, you will always have compatibility in between the different 
layouts while neither the operators nor the engine will have to care about 
the different backends as that conversion happens in between. When an operator 
receives and outputs data, it expects to be in its "isolated" world. If the 
operators are from the same backend and use the same layout though, this 
conversion is skipped and a performance advantage is achieved.
   Now at this point you could get to O(N!) if you need converters in between 
every single possible memory layout. The trick here is to have a standard 
layout (which we basically already have and is used to input and output data 
from the graphs). Each memory layout has to register at least two converters: 
TO_STANDARD and FROM_STANDARD. This allows compatibility for backends 
where no direct conversion exists. Since this will require two conversions 
(FROM_MEMLAYOUT1_TO_STANDARD and FROM_STANDARD_TO_MEMLAYOUT2), this will have 
additional overhead but keep compatibility high. For common cases, there would 
probably be direct converters. 
   
   For the second case where conflicting backends exist, they would simply be 
skipped during the evaluation stage when the engine checks whether an operator 
is actually eligible. So if CUDA is not present, the operator will simply not 
be considered for that graph.
   
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] marcoabreu commented on issue #13598: More fine-grained operator implementation dispatch & memory planning flow

2018-12-11 Thread GitBox
marcoabreu commented on issue #13598: More fine-grained operator implementation 
dispatch & memory planning flow 
URL: 
https://github.com/apache/incubator-mxnet/issues/13598#issuecomment-446353433
 
 
   Thanks for your very good questions! 
   
   For the operator selection I would think about a design which has something 
similar to a "tuning" or warm-up stage which evaluates the different 
possibilities. Initially, since that revamp would be quite big and 
experimental, I would hardcode an order (e.g. CUDA->AMD->MKLDNN->CPU) which is 
then evaluated and certain backends dropped if they don't support that operator 
or they're simply not present. Later on, there would ideally be a benchmark 
step which evaluates the different possibilities and then chooses the most 
efficient representation of the graph. This evaluation would first start with 
simple benchmarks (with different strategies like memory footprint, power 
consumption, throughput, etc) of each operator backend and then in the next 
stage go one level higher and evaluate groups of operators (up to evaluating 
the entire graph) to accommodate for layout conversion and memcopy overhead. In 
the last iteration, we would have a graph which is the most efficient, but also 
runnable on that hardware, for the requested graph.
   
   There are two ways I could think of backends conflicting:
   1. Mismatching memory layouts
   2. Impossible/unlikely combinations (CUDA or MKL)
   
   To solve number one, I would extend the design to not only have the 
operators abstracted, but also their memory layouts. In the same way as we 
would have an operator registry, we would have a memory layout registry where 
each backend announces their memory layouts as well as converters. Each 
operator implementation would specify a desired layout (most likely the one 
they registered themselves). Now imagine you have a graph with three operators:
   ```
   Input -> Operator1_CUDA -> Operator2_MKL -> Operator3_MKL -> Output
   ```
   These three operators are from two entirely different backends and have 
their own implementation and memory layouts. Our engine would, during the 
initial analysis of the graph (this step is after the optional graph 
optimization and we assume the graph as final at that point), analyse the 
desired layout of each operator (in this case CUDA and MKL, but it could also 
go a level deeper like CUDA_NHWC etc) and then see whether they are compatible. 
If they are not, the engine would request a converter from the memory layout 
registry. These converters would then be inserted into the graph and the final 
graph would look as follows:
   ```
   Input -> Convert_Standard_CUDA -> Operator1_CUDA -> Convert_CUDA_MKL -> 
Operator2_MKL -> Operator3_MKL -> Convert_MKL_Standard -> Output
   ```
   This way, you will always have compatibility in between the different 
layouts while neither the operators nor the engine will have to care about 
the different backends as that conversion happens in between. When an operator 
receives and outputs data, it expects to be in its "isolated" world. If the 
operators are from the same backend and use the same layout though, this 
conversion is skipped and a performance advantage is achieved.
   Now at this point you could get to O(N!) if you need converters in between 
every single possible memory layout. The trick here is to have a standard 
layout (which we basically already have and is used to input and output data 
from the graphs). Each memory layout has to register at least two converters: 
TO_STANDARD and FROM_STANDARD. This allows compatibility for backends 
where no direct conversion exists. Since this will require two conversions 
(FROM_MEMLAYOUT1_TO_STANDARD and FROM_STANDARD_TO_MEMLAYOUT2), this will have 
additional overhead but keep compatibility high. For common cases, there would 
probably be direct converters. 
   
   For the second case where conflicting backends exist, they would simply be 
skipped during the evaluation stage when the engine checks whether an operator 
is actually eligible. So if CUDA is not present, the operator will simply not 
be considered for that graph.
   
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[incubator-mxnet-site] branch asf-site updated: Bump the publish timestamp.

2018-12-11 Thread zhasheng
This is an automated email from the ASF dual-hosted git repository.

zhasheng pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new ed2ac32  Bump the publish timestamp.
ed2ac32 is described below

commit ed2ac324a26fbc37700dcd2e4e15f386bd8bbc89
Author: mxnet-ci 
AuthorDate: Tue Dec 11 20:32:10 2018 +

Bump the publish timestamp.
---
 date.txt | 1 +
 1 file changed, 1 insertion(+)

diff --git a/date.txt b/date.txt
new file mode 100644
index 000..fde3f17
--- /dev/null
+++ b/date.txt
@@ -0,0 +1 @@
+Tue Dec 11 20:32:10 UTC 2018



[GitHub] mseth10 edited a comment on issue #12203: flaky test: test_operator_gpu.test_depthwise_convolution

2018-12-11 Thread GitBox
mseth10 edited a comment on issue #12203: flaky test: 
test_operator_gpu.test_depthwise_convolution
URL: 
https://github.com/apache/incubator-mxnet/issues/12203#issuecomment-446342867
 
 
   This flaky test issue has previously been identified 
(https://github.com/apache/incubator-mxnet/issues/8712) and fixed 
(https://github.com/apache/incubator-mxnet/pull/10365) for Python2: MKLDNN-CPU. 
During this fix (PR discussion), it was identified that this problem still 
exists for Python2: MKLDNN-GPU.
   
   PR https://github.com/apache/incubator-mxnet/pull/10578 supposedly fixed the 
issue, but as it appears, the test still fails non-deterministically. Can you 
please have a look at this issue? @nihui @xinyu-intel @pengzhao-intel @zheng-da 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] samskalicky commented on issue #13039: [MXNET-918] Random module

2018-12-11 Thread GitBox
samskalicky commented on issue #13039: [MXNET-918] Random module
URL: https://github.com/apache/incubator-mxnet/pull/13039#issuecomment-446345931
 
 
   @ChaiBapchya wrote the randint operator


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] eric-haibin-lin commented on issue #13598: More fine-grained operator implementation dispatch & memory planning flow

2018-12-11 Thread GitBox
eric-haibin-lin commented on issue #13598: More fine-grained operator 
implementation dispatch & memory planning flow 
URL: 
https://github.com/apache/incubator-mxnet/issues/13598#issuecomment-446343940
 
 
   @marcoabreu thanks for the comments. True that the existing infer_storage 
interface and the proposed infer_storage_ex interface both need to write 
backend-specific logic. What kind of abstraction would you like to see? Let's 
say each backend provides one implementation which only concerns that backend 
itself. Now how does MXNet provide a general guide to select/prioritize these 
implementations if it is built with MKLDNN+CUDA+AMDHIP? What order would you 
propose to invoke these functions, and what if one of them conflicts with other 
backends? How does MXNet resolve these conflicts? 
   I do want to limit the discussion to memory planning itself so that 
@DickJC123's work on NHWC can be unblocked as soon as possible. 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] eric-haibin-lin edited a comment on issue #13598: More fine-grained operator implementation dispatch & memory planning flow

2018-12-11 Thread GitBox
eric-haibin-lin edited a comment on issue #13598: More fine-grained operator 
implementation dispatch & memory planning flow 
URL: 
https://github.com/apache/incubator-mxnet/issues/13598#issuecomment-446343940
 
 
   @marcoabreu thanks for the comments. True that the existing infer_storage 
interface and the proposed infer_storage_ex interface both need to write 
backend-specific logic. What kind of abstraction would you like to see? Let's 
say each backend provides one implementation which only concerns that backend 
itself. Now how does MXNet provide a general guide to select/prioritize these 
implementations if it is built with MKLDNN+CUDA+AMDHIP? What order would you 
propose to invoke these functions, and what if one of them conflicts with other 
backends? How does MXNet resolve these conflicts on a per operator basis? 
   I do want to limit the discussion to memory planning itself so that 
@DickJC123's work on NHWC can be unblocked as soon as possible. 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

