[GitHub] lupesko commented on issue #9378: Mxnet C API CPU Mode segmentation fault

2018-01-16 Thread GitBox
lupesko commented on issue #9378: Mxnet C API CPU Mode segmentation fault
URL: 
https://github.com/apache/incubator-mxnet/issues/9378#issuecomment-358222749
 
 
   @pitLog it is reasonable to assume the problem is in your model.
   Try building w/o MKL and see if the issue goes away.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] eric-haibin-lin commented on a change in pull request #9464: refactor logging in infer storage pass

2018-01-16 Thread GitBox
eric-haibin-lin commented on a change in pull request #9464: refactor logging 
in infer storage pass
URL: https://github.com/apache/incubator-mxnet/pull/9464#discussion_r161969301
 
 

 ##
 File path: src/executor/infer_graph_attr_pass.cc
 ##
 @@ -50,7 +50,15 @@ bool ApplyOpInferAttr(const nnvm::Graph& g,
   std::vector<int>* out_attrs,
   DispatchMode* dispatch_mode) {
   const DevMaskVector& dev_masks = g.GetAttr<DevMaskVector>("dev_mask");
-  return finfer(attrs, dev_masks[nid], dispatch_mode, in_attrs, out_attrs);
+  const bool success = finfer(attrs, dev_masks[nid], dispatch_mode, in_attrs, out_attrs);
+  if (!success) {
 
 Review comment:
   Current `finferstorage` returns false only in cases where some ops are 
not implemented at all, such as `_sparse_retain(csr, csr)`. InferStorageType 
only happens once.




[GitHub] eric-haibin-lin commented on a change in pull request #8302: Refactor operators & MKLDNN

2018-01-16 Thread GitBox
eric-haibin-lin commented on a change in pull request #8302: Refactor operators 
& MKLDNN
URL: https://github.com/apache/incubator-mxnet/pull/8302#discussion_r161939535
 
 

 ##
 File path: src/operator/tensor/matrix_op.cc
 ##
 @@ -180,6 +182,51 @@ If the argument `reverse` is set to 1, then the special 
values are inferred from
 .add_argument("data", "NDArray-or-Symbol", "Input data to reshape.")
 .add_arguments(ReshapeParam::__FIELDS__());
 
+static void FlattenEx(const nnvm::NodeAttrs& attrs,
+                      const OpContext& ctx,
+                      const std::vector<NDArray>& inputs,
+                      const std::vector<OpReqType>& req,
+                      const std::vector<NDArray>& outputs) {
+  CHECK_EQ(inputs.size(), 1U);
+  CHECK_EQ(outputs.size(), 1U);
+#if MXNET_USE_MKLDNN == 1
+  const auto in_stype = inputs[0].storage_type();
+  const auto out_stype = outputs[0].storage_type();
+  if (inputs[0].IsMKLDNN()) {
+    MKLDNNCopy(attrs, ctx, inputs[0], req[0], outputs[0]);
+    // If the output is a special MKLDNN layout and the number of dimensions
+    // is larger than 2, we should use the default layout.
+    if (outputs[0].IsMKLDNN() && inputs[0].shape().ndim() > 2)
+      const_cast<NDArray&>(outputs[0]).Reorder2Default();
+    return;
+  } else {
+    // This happens if inputs are supposed to be in MKLDNN format
+    // but MKLDNN doesn't support the data type or the shape. We're
+    // forced to convert it to the default format.
+    FallBackCompute(UnaryOp::IdentityCompute<cpu>, attrs, ctx, inputs, req, outputs);
+    return;
+  }
+#endif
+}
+
+static inline bool FlattenStorageType(const nnvm::NodeAttrs& attrs,
+                                      const int dev_mask,
+                                      DispatchMode* dispatch_mode,
+                                      std::vector<int>* in_attrs,
+                                      std::vector<int>* out_attrs) {
+  CHECK_EQ(in_attrs->size(), 1);
+  CHECK_EQ(out_attrs->size(), 1);
+  bool ret = ElemwiseStorageType<1, 1, false, true, true>(attrs, dev_mask, dispatch_mode,
 
 Review comment:
   Why `ElemwiseStorageType<1, 1, false, true, true>`? The last two tparams 
indicate that csr and row_sparse are both supported, which is not true.




[GitHub] reminisce commented on a change in pull request #9464: refactor logging in infer storage pass

2018-01-16 Thread GitBox
reminisce commented on a change in pull request #9464: refactor logging in 
infer storage pass
URL: https://github.com/apache/incubator-mxnet/pull/9464#discussion_r161967144
 
 

 ##
 File path: src/executor/infer_graph_attr_pass.cc
 ##
 @@ -50,7 +50,15 @@ bool ApplyOpInferAttr(const nnvm::Graph& g,
   std::vector<int>* out_attrs,
   DispatchMode* dispatch_mode) {
   const DevMaskVector& dev_masks = g.GetAttr<DevMaskVector>("dev_mask");
-  return finfer(attrs, dev_masks[nid], dispatch_mode, in_attrs, out_attrs);
+  const bool success = finfer(attrs, dev_masks[nid], dispatch_mode, in_attrs, out_attrs);
+  if (!success) {
 
 Review comment:
   InferShape and InferType would run multiple times if they fail at one time, 
until no more information can be deduced. Does this also apply to 
InferStorageType, or does it only infer once?




[GitHub] reminisce commented on a change in pull request #9464: refactor logging in infer storage pass

2018-01-16 Thread GitBox
reminisce commented on a change in pull request #9464: refactor logging in 
infer storage pass
URL: https://github.com/apache/incubator-mxnet/pull/9464#discussion_r161968768
 
 

 ##
 File path: src/operator/operator_common.h
 ##
 @@ -479,67 +480,11 @@ inline void ParamParser(nnvm::NodeAttrs* attrs) {
   << ") == " << param << ".shape[0] (" << rsp.shape()[0] << ").";  \
   }
 
-/*! \brief get string representation of the operator stypes */
-inline std::string operator_stype_string(const nnvm::NodeAttrs& attrs,
-                                         const int dev_mask,
-                                         const std::vector<int>& in_attrs,
-                                         const std::vector<int>& out_attrs) {
-  std::string result = "";
-  result += "operator = " + attrs.op->name + "\n";
-  result += "input storage types = [";
-  for (const auto attr : in_attrs) {
-    result += common::stype_string(attr) + ", ";
-  }
-  result += "]\n";
-  result += "output storage types = [";
-  for (const auto attr : out_attrs) {
-    result += common::stype_string(attr) + ", ";
-  }
-  result += "]\n";
-  result += "params = {";
-  for (auto kv : attrs.dict) {
-    result += "\"" + kv.first + "\" : " + kv.second + ", ";
+#define LOG_UNIMPLMENTED_OP(attrs, ctx, inputs, req, outputs)  \
 
 Review comment:
   This looks like a function. It's normally preferable to define functions over 
macros, since macros have several drawbacks, such as no type checking, global 
name scope, and no step-into while debugging.




[GitHub] eric-haibin-lin commented on a change in pull request #8302: Refactor operators & MKLDNN

2018-01-16 Thread GitBox
eric-haibin-lin commented on a change in pull request #8302: Refactor operators 
& MKLDNN
URL: https://github.com/apache/incubator-mxnet/pull/8302#discussion_r161939145
 
 

 ##
 File path: tests/python/unittest/test_executor.py
 ##
 @@ -140,22 +140,20 @@ def test_dot():
 
 def test_reshape():
 x = mx.sym.Variable('x')
-y = mx.sym.FullyConnected(x, num_hidden=4)
+y = mx.sym.Dropout(x, p=0.2)
 
 Review comment:
   Why was the FC test replaced? 




[GitHub] reminisce commented on a change in pull request #9464: refactor logging in infer storage pass

2018-01-16 Thread GitBox
reminisce commented on a change in pull request #9464: refactor logging in 
infer storage pass
URL: https://github.com/apache/incubator-mxnet/pull/9464#discussion_r161967547
 
 

 ##
 File path: src/imperative/imperative_utils.h
 ##
 @@ -138,15 +138,23 @@ inline void SetShapeType(const Context& ctx,
   for (auto& i : outputs) {
     out_storage_types.push_back(i->storage_type());
   }
+  bool infer_stype_success;
   if (inferstorage.count(attrs.op)) {
-    CHECK(inferstorage[attrs.op](attrs, ctx.dev_mask(), dispatch_mode,
-                                 &in_storage_types, &out_storage_types));
+    infer_stype_success = inferstorage[attrs.op](attrs, ctx.dev_mask(), dispatch_mode,
+                                                 &in_storage_types, &out_storage_types);
   } else {
     // if infer storage attr is not present, apply the default infer storage function
-    bool success = exec::DefaultStorageType(attrs, ctx.dev_mask(), dispatch_mode,
-                                            &in_storage_types, &out_storage_types);
-    CHECK(success);
+    infer_stype_success = exec::DefaultStorageType(attrs, ctx.dev_mask(), dispatch_mode,
+                                                   &in_storage_types, &out_storage_types);
   }
+  if (!infer_stype_success) {
 
 Review comment:
   nit: Use CHECK to get rid of the `if` check.




[GitHub] eric-haibin-lin commented on a change in pull request #9464: refactor logging in infer storage pass

2018-01-16 Thread GitBox
eric-haibin-lin commented on a change in pull request #9464: refactor logging 
in infer storage pass
URL: https://github.com/apache/incubator-mxnet/pull/9464#discussion_r161964292
 
 

 ##
 File path: src/common/utils.h
 ##
 @@ -374,6 +416,25 @@ inline void LogOnce(const std::string& message) {
   }
 }
 
+/*! \brief log storage fallback event
+ */
+inline void LogStorageFallback(const nnvm::NodeAttrs& attrs,
+                               const int dev_mask,
+                               const std::vector<int>* in_attrs,
+                               const std::vector<int>* out_attrs) {
+  static bool log = dmlc::GetEnv("MXNET_STORAGE_FALLBACK_LOG_VERBOSE", true);
+  if (!log) return;
+  const std::string op_str = operator_stype_string(attrs, dev_mask, *in_attrs, *out_attrs);
+  std::ostringstream os;
+  os << "\nStorage type fallback detected:\n" << op_str
 
 Review comment:
   Yeah I can use a multi-line string instead of all the ``<<``




[GitHub] cjolivier01 commented on a change in pull request #9464: refactor logging in infer storage pass

2018-01-16 Thread GitBox
cjolivier01 commented on a change in pull request #9464: refactor logging in 
infer storage pass
URL: https://github.com/apache/incubator-mxnet/pull/9464#discussion_r161963306
 
 

 ##
 File path: src/common/utils.h
 ##
 @@ -364,6 +364,48 @@ inline std::string dev_type_string(const int dev_type) {
   return "unknown";
 }
 
+/*! \brief get string representation of the operator stypes */
+inline std::string operator_stype_string(const nnvm::NodeAttrs& attrs,
+                                         const int dev_mask,
+                                         const std::vector<int>& in_attrs,
+                                         const std::vector<int>& out_attrs) {
+  std::string result = "";
+  result += "operator = " + attrs.op->name + "\n";
+  result += "input storage types = [";
+  for (const auto attr : in_attrs) {
+    result += stype_string(attr) + ", ";
+  }
+  result += "]\n";
+  result += "output storage types = [";
+  for (const auto attr : out_attrs) {
+    result += stype_string(attr) + ", ";
+  }
+  result += "]\n";
+  result += "params = {";
+  for (auto kv : attrs.dict) {
+    result += "\"" + kv.first + "\" : " + kv.second + ", ";
+  }
+  result += "}\n";
+  result += "context.dev_mask = " + dev_type_string(dev_mask);
+  return result;
+}
+
+/*! \brief get string representation of the operator */
+inline std::string operator_string(const nnvm::NodeAttrs& attrs,
+                                   const OpContext& ctx,
+                                   const std::vector<NDArray>& inputs,
+                                   const std::vector<OpReqType>& req,
+                                   const std::vector<NDArray>& outputs) {
+  std::string result = "";
+  std::vector<int> in_stypes;
 
 Review comment:
   Can you call reserve on these first?




[GitHub] cjolivier01 commented on a change in pull request #9464: refactor logging in infer storage pass

2018-01-16 Thread GitBox
cjolivier01 commented on a change in pull request #9464: refactor logging in 
infer storage pass
URL: https://github.com/apache/incubator-mxnet/pull/9464#discussion_r161963534
 
 

 ##
 File path: src/common/utils.h
 ##
 @@ -374,6 +416,25 @@ inline void LogOnce(const std::string& message) {
   }
 }
 
+/*! \brief log storage fallback event
+ */
+inline void LogStorageFallback(const nnvm::NodeAttrs& attrs,
+                               const int dev_mask,
+                               const std::vector<int>* in_attrs,
+                               const std::vector<int>* out_attrs) {
+  static bool log = dmlc::GetEnv("MXNET_STORAGE_FALLBACK_LOG_VERBOSE", true);
+  if (!log) return;
+  const std::string op_str = operator_stype_string(attrs, dev_mask, *in_attrs, *out_attrs);
+  std::ostringstream os;
+  os << "\nStorage type fallback detected:\n" << op_str
 
 Review comment:
   This might get called a lot, right? If so, do you need all of the `<<` calls, 
or can the string be continued without them?




[GitHub] eric-haibin-lin opened a new pull request #9464: refactor logging in infer storage pass

2018-01-16 Thread GitBox
eric-haibin-lin opened a new pull request #9464: refactor logging in infer 
storage pass
URL: https://github.com/apache/incubator-mxnet/pull/9464
 
 
   ## Description ##
   This PR moves the logging of storage fallback messages and other error 
messages from individual operators' FInferStorage to a common place, by 
checking the value of `dispatch_mode` and the return code of FInferStorage.
   
   @reminisce @anirudh2290 @cjolivier01 
   
   ## Checklist ##
   ### Essentials ###
   - [ ] Passed code style checking (`make lint`)
   - [ ] Changes are complete (i.e. I finished coding on this PR)
   - [ ] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [ ] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, 
the source of the dataset, expected performance on the test set, and a reference 
to the original paper if applicable
   - [ ] To the best of my knowledge, examples are either not affected by this 
change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [ ] Feature1, tests, (and when applicable, API doc)
   - [ ] Feature2, tests, (and when applicable, API doc)
   
   ## Comments ##
   - If this change is a backward incompatible change, why must this change be 
made.
   - Interesting edge cases to note here
   




[GitHub] jeremiedb commented on issue #9399: error: unable to load shared object /mxnet/libs/libmxnet.so

2018-01-16 Thread GitBox
jeremiedb commented on issue #9399:  error: unable to load shared object 
/mxnet/libs/libmxnet.so
URL: 
https://github.com/apache/incubator-mxnet/issues/9399#issuecomment-358201606
 
 
   Seems like an issue with OpenCV. Have you tried compiling with USE_OPENCV = 
0? 
   Was `libopencv-dev` successfully installed?




[GitHub] jeremiedb commented on issue #9453: [R] CNN Memory Leak - Needs to somehow call Garbage Collector

2018-01-16 Thread GitBox
jeremiedb commented on issue #9453: [R] CNN Memory Leak - Needs to somehow call 
Garbage Collector
URL: 
https://github.com/apache/incubator-mxnet/issues/9453#issuecomment-358200492
 
 
   Far from ideal, but as a dirty patch you can add a gc() call within the 
train.model function's training loop here: 
   
https://github.com/apache/incubator-mxnet/blob/master/R-package/R/model.R#L221
   This will require you to essentially copy and call the entire model.R script, 
since some functions depend on functions that aren't exported. 
   It can be worthwhile to add a condition such as if (nbatch %% 10) to avoid 
too many calls to gc() that may slow down training. 




[GitHub] cjolivier01 commented on issue #9396: inference speed drop after updating mxnet from 0.10.0 to 1.0.0

2018-01-16 Thread GitBox
cjolivier01 commented on issue #9396: inference speed drop after updating mxnet 
from 0.10.0 to 1.0.0
URL: 
https://github.com/apache/incubator-mxnet/issues/9396#issuecomment-358196577
 
 
   Apparently not -- was checking whether something CPU-related was the 
bottleneck, but it seems not.




[GitHub] nicklhy commented on issue #9396: inference speed drop after updating mxnet from 0.10.0 to 1.0.0

2018-01-16 Thread GitBox
nicklhy commented on issue #9396: inference speed drop after updating mxnet 
from 0.10.0 to 1.0.0
URL: 
https://github.com/apache/incubator-mxnet/issues/9396#issuecomment-358193712
 
 
   @cjolivier01 Same result after setting it. The GPU usage is still 93~95% 
when testing ResNet101 or ResNet152 (batch_size=1). However, mxnet 0.10.0 can 
easily reach 99-100%.
   
   Notice there are no disk IO or image pre-processing operations in my speed 
test script. The bottleneck should be in the GPU, or more specifically, 
`cudaStreamSynchronize`. I guess "OMP_NUM_THREADS" won't be able to solve this 
problem?




[GitHub] cbalioglu opened a new pull request #9463: Rename kvstore/utils.* to kvstore/kvstore_utils.*

2018-01-16 Thread GitBox
cbalioglu opened a new pull request #9463: Rename kvstore/utils.* to 
kvstore/kvstore_utils.*
URL: https://github.com/apache/incubator-mxnet/pull/9463
 
 
   Using an old version of GCC (e.g. 4.9) as the host compiler for nvcc causes 
compilation failures in multi-threaded builds when there are multiple CUDA 
source files with the same name.
   
   ## Checklist ##
   ### Essentials ###
   - [x] Passed code style checking (`make lint`)
   - [x] Changes are complete (i.e. I finished coding on this PR)
   - [x] To the best of my knowledge, examples are either not affected by this 
change
   
   @ZiyueHuang  @eric-haibin-lin 
   #8732: `src/common/utils.cu` and `src/kvstore/utils.cu`
   




[incubator-mxnet-site] branch asf-site updated: removed torch.html and references. Fixed Nesterov Momentum education link (#46)

2018-01-16 Thread skm
This is an automated email from the ASF dual-hosted git repository.

skm pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new 2075c26  removed torch.html and references. Fixed Nesterov Momentum 
education link (#46)
2075c26 is described below

commit 2075c265433a3163f6dc87a46812d745884089a6
Author: thinksanky <31976455+thinksa...@users.noreply.github.com>
AuthorDate: Tue Jan 16 19:41:35 2018 -0800

removed torch.html and references. Fixed Nesterov Momentum education link 
(#46)
---
 _modules/mxnet/optimizer.html  |   2 +-
 api/python/model.html  |   2 +-
 api/python/optimization.html   |   2 +-
 api/python/optimization/optimization.html  |   2 +-
 faq/develop_and_hack.html  |   1 -
 faq/index.html |   2 -
 faq/torch.html | 315 -
 how_to/develop_and_hack.html   |   1 -
 how_to/index.html  |   2 -
 how_to/torch.html  | 264 -
 versions/0.11.0/api/python/model.html  |   2 +-
 versions/0.11.0/api/python/optimization.html   |   2 +-
 versions/0.11.0/how_to/develop_and_hack.html   |   1 -
 versions/0.11.0/how_to/index.html  |   2 -
 versions/0.12.0/api/python/model.html  |   2 +-
 versions/0.12.0/api/python/optimization.html   |   2 +-
 .../api/python/optimization/optimization.html  |   2 +-
 versions/0.12.0/faq/develop_and_hack.html  |   1 -
 versions/0.12.0/faq/index.html |   2 -
 versions/0.12.0/how_to/develop_and_hack.html   |   1 -
 versions/0.12.0/how_to/index.html  |   2 -
 versions/0.12.1/api/python/model.html  |   2 +-
 versions/0.12.1/api/python/optimization.html   |   2 +-
 .../api/python/optimization/optimization.html  |   2 +-
 versions/0.12.1/faq/develop_and_hack.html  |   1 -
 versions/0.12.1/faq/index.html |   2 -
 versions/0.12.1/how_to/develop_and_hack.html   |   1 -
 versions/0.12.1/how_to/index.html  |   2 -
 versions/master/_modules/mxnet/optimizer.html  |   2 +-
 versions/master/api/python/model.html  |   2 +-
 versions/master/api/python/optimization.html   |   2 +-
 .../api/python/optimization/optimization.html  |   2 +-
 versions/master/how_to/develop_and_hack.html   |   1 -
 versions/master/how_to/index.html  |   1 -
 34 files changed, 16 insertions(+), 618 deletions(-)

diff --git a/_modules/mxnet/optimizer.html b/_modules/mxnet/optimizer.html
index ec3ead4..9b29ac8 100644
--- a/_modules/mxnet/optimizer.html
+++ b/_modules/mxnet/optimizer.html
@@ -1267,7 +1267,7 @@
 
 Much like Adam is essentially RMSprop with momentum,
 Nadam is Adam RMSprop with Nesterov momentum available
-at http://cs229.stanford.edu/proj2015/054_report.pdf.
+at https://openreview.net/pdf?id=OM0jvwB8jIp57ZJjtNEZ.
 
 This optimizer accepts the following parameters in addition to those accepted
 by :class:`.Optimizer`.
diff --git a/api/python/model.html b/api/python/model.html
index 321daa1..bd9979b 100644
--- a/api/python/model.html
+++ b/api/python/model.html
@@ -2187,7 +2187,7 @@
-at <a href="http://cs229.stanford.edu/proj2015/054_report.pdf">http://cs229.stanford.edu/proj2015/054_report.pdf</a>.
+at <a href="https://openreview.net/pdf?id=OM0jvwB8jIp57ZJjtNEZ">https://openreview.net/pdf?id=OM0jvwB8jIp57ZJjtNEZ</a>.
 This optimizer accepts the following parameters in addition to those accepted
 by Optimizer.
 
diff --git a/api/python/optimization.html b/api/python/optimization.html
index bb8cf9c..3c563e6 100644
--- a/api/python/optimization.html
+++ b/api/python/optimization.html
@@ -811,7 +811,7 @@
-at <a href="http://cs229.stanford.edu/proj2015/054_report.pdf">http://cs229.stanford.edu/proj2015/054_report.pdf</a>.
+at <a href="https://openreview.net/pdf?id=OM0jvwB8jIp57ZJjtNEZ">https://openreview.net/pdf?id=OM0jvwB8jIp57ZJjtNEZ</a>.
 This optimizer accepts the following parameters in addition to those accepted
 by Optimizer.
 
diff --git a/api/python/optimization/optimization.html 
b/api/python/optimization/optimization.html
index 03e4c76..4fb684e 100644
--- a/api/python/optimization/optimization.html
+++ b/api/python/optimization/optimization.html
@@ -978,7 +978,7 @@
-at <a href="http://cs229.stanford.edu/proj2015/054_report.pdf">http://cs229.stanford.edu/proj2015/054_report.pdf</a>.
+at <a href="https://openreview.net/pdf?id=OM0jvwB8jIp57ZJjtNEZ">https://openreview.net/pdf?id=OM0jvwB8jIp57ZJjtNEZ</a>.
 This optimizer accepts the following parameters in addition to those accepted
 by Optimizer.
 
diff --git a/faq/develop_and_hack.html b/faq/develop_and_hack.html
index b105ca9..1f06872 100644
--- 

[GitHub] piiswrong commented on a change in pull request #9460: Data-iterator tutorial made python3 compatible.

2018-01-16 Thread GitBox
piiswrong commented on a change in pull request #9460: Data-iterator tutorial 
made python3 compatible.
URL: https://github.com/apache/incubator-mxnet/pull/9460#discussion_r161949534
 
 

 ##
 File path: docs/tutorials/basic/data.md
 ##
 @@ -122,17 +126,17 @@ class SimpleIter(mx.io.DataIter):
 
 @property
 def provide_data(self):
-return self._provide_data
+return zip(self._data_names, self._data_shapes)
 
 @property
 def provide_label(self):
-return self._provide_label
+return zip(self._label_names, self._label_shapes)
 
 Review comment:
   ?




[GitHub] piiswrong commented on a change in pull request #9460: Data-iterator tutorial made python3 compatible.

2018-01-16 Thread GitBox
piiswrong commented on a change in pull request #9460: Data-iterator tutorial 
made python3 compatible.
URL: https://github.com/apache/incubator-mxnet/pull/9460#discussion_r161949442
 
 

 ##
 File path: docs/tutorials/basic/data.md
 ##
 @@ -180,6 +184,30 @@ mod = mx.mod.Module(symbol=net)
 mod.fit(data_iter, num_epoch=5)
 ```
 
+A note on Python 3 usage: many of the methods in mxnet use string for Python 2 and bytes for Python 3.
+In order to keep this tutorial readable, we are going to define a utility function that converts
+string to bytes in a Python 3 environment.
+
+```python
+def str_or_bytes(str):
 
 Review comment:
   mxnet.base.py_str




[GitHub] cjolivier01 commented on issue #9396: inference speed drop after updating mxnet from 0.10.0 to 1.0.0

2018-01-16 Thread GitBox
cjolivier01 commented on issue #9396: inference speed drop after updating mxnet 
from 0.10.0 to 1.0.0
URL: 
https://github.com/apache/incubator-mxnet/issues/9396#issuecomment-358187266
 
 
   (set before running python)




[GitHub] cjolivier01 commented on issue #9396: inference speed drop after updating mxnet from 0.10.0 to 1.0.0

2018-01-16 Thread GitBox
cjolivier01 commented on issue #9396: inference speed drop after updating mxnet 
from 0.10.0 to 1.0.0
URL: 
https://github.com/apache/incubator-mxnet/issues/9396#issuecomment-358187229
 
 
   can you try setting OMP_NUM_THREADS=1 in the environment and try again? does 
it speed up?




[GitHub] TaoLv commented on issue #3946: When predicting, does mxnet provide thread-safe interface?

2018-01-16 Thread GitBox
TaoLv commented on issue #3946: When predicting, does mxnet provide thread-safe 
interface?
URL: 
https://github.com/apache/incubator-mxnet/issues/3946#issuecomment-357873369
 
 
   @piiswrong @eric-haibin-lin Maybe not related. Can mxnet find two 
independent ops in a computation graph and execute them in parallel on two 
cores of one CPU, respectively? Or, if mxnet knows how many cores there are, 
can it give the first half to the first operator and the second half to the 
second operator?




[GitHub] zheng-da commented on a change in pull request #8302: Refactor operators & MKLDNN

2018-01-16 Thread GitBox
zheng-da commented on a change in pull request #8302: Refactor operators & 
MKLDNN
URL: https://github.com/apache/incubator-mxnet/pull/8302#discussion_r161940072
 
 

 ##
 File path: src/storage/cpu_device_storage.h
 ##
 @@ -54,7 +54,11 @@ class CPUDeviceStorage {
   /*!
    * \brief Alignment of allocation.
    */
+#if MXNET_USE_MKLDNN == 1
+  static constexpr size_t alignment_ = 4096;
 
 Review comment:
   MKLDNN requires memory to have alignment > 16. I'm not entirely sure what 
the best alignment is. The MKLDNN library uses 4096, so I use 4096.




[GitHub] zheng-da commented on a change in pull request #8302: Refactor operators & MKLDNN

2018-01-16 Thread GitBox
zheng-da commented on a change in pull request #8302: Refactor operators & 
MKLDNN
URL: https://github.com/apache/incubator-mxnet/pull/8302#discussion_r161939043
 
 

 ##
 File path: include/mxnet/ndarray.h
 ##
 @@ -611,6 +565,64 @@ class NDArray {
       << "CheckAndAllocAuxData is not intended for kDefaultStorage";
     ptr_->CheckAndAllocAuxData(i, aux_shape);
   }
+
+#if MXNET_USE_MKLDNN == 1
+  bool IsMKLDNN() const {
+    return ptr_->IsMKLDNN();
+  }
+  bool IsDefault() const {
+    return ptr_->IsDefault();
+  }
+  /*
+   * All functions below return a raw pointer to mkldnn memory. Actually there
+   * is a shared pointer that hold the memory either in NDArray or in MKLDNN
+   * stream. As long as we call these functions inside an operator, the return
+   * memory is always valid.
+   */
+
+  /*
+   * This function returns mkldnn::memory with the default primitive_desc.
+   */
+  const mkldnn::memory *GetMKLDNNData() const;
+  /*
+   * This function returns mkldnn::memory with the given primitive_desc
+   * as long as the array size meets the required size in the given primitive_desc.
+   */
+  const mkldnn::memory *GetMKLDNNData(
+      const mkldnn::memory::primitive_desc &desc) const;
+  /*
+   * This function returns mkldnn::memory with the given primitive_desc.
+   * The returned mkldnn::memory will have the same physical layout as
+   * the given primitive_desc.
+   */
+  const mkldnn::memory *GetMKLDNNDataReorder(
+      const mkldnn::memory::primitive_desc &desc) const;
+
+  void CopyFrom(const mkldnn::memory &mem);
+  mkldnn::memory *CreateMKLDNNData(
+      const mkldnn::memory::primitive_desc &desc);
+
+  /*
+   * Reorder the memory to the specified layout.
+   */
+  void Reorder(const mkldnn::memory::primitive_desc &desc);
+  void Reorder2Default() {
+    CHECK_EQ(storage_type(), kDefaultStorage);
+    ptr_->Reorder2Default();
+  }
+
+  void InvalidateData() {
+    // When we invalidate data, we don't need to care about the MKLDNN format.
+    ptr_->Mkl_mem_ = nullptr;
+  }
+
+  /*
+   * This function is used inside operators to reshape an array.
+   * It's used by FullyConnected right now.
+   */
+  NDArray ReshapeMKLDNN(const TShape &shape) const;
 
 Review comment:
   If the array stores data in a special layout, Reshape will convert the data 
in the array to the default layout, which allocates memory from malloc 
directly.
   ReshapeMKLDNN won't change the layout of the original array; it always uses 
a temporary memory buffer to store the reordered data.




[incubator-mxnet] branch master updated: Remove empty lines at the end of file (#9459)

2018-01-16 Thread haibin
This is an automated email from the ASF dual-hosted git repository.

haibin pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new 9353c2e  Remove empty lines at the end of file (#9459)
9353c2e is described below

commit 9353c2e067ed5ff7948cfc42ddc874045ae1d464
Author: mbaijal <30911248+mbai...@users.noreply.github.com>
AuthorDate: Tue Jan 16 17:55:23 2018 -0800

Remove empty lines at the end of file (#9459)
---
 NOTICE | 8 
 1 file changed, 8 deletions(-)

diff --git a/NOTICE b/NOTICE
index d532722..a12b99f 100644
--- a/NOTICE
+++ b/NOTICE
@@ -3,11 +3,3 @@
 
 This product includes software developed at
 The Apache Software Foundation (http://www.apache.org/).
-
-
-
-
-
-
-
-

-- 
To stop receiving notification emails like this one, please contact
comm...@mxnet.apache.org.


[GitHub] eric-haibin-lin closed pull request #9459: Remove empty lines at the end of NOTICE file

2018-01-16 Thread GitBox
eric-haibin-lin closed pull request #9459: Remove empty lines at the end of 
NOTICE file
URL: https://github.com/apache/incubator-mxnet/pull/9459
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/NOTICE b/NOTICE
index d5327226ae..a12b99f5b5 100644
--- a/NOTICE
+++ b/NOTICE
@@ -3,11 +3,3 @@
 
 This product includes software developed at
 The Apache Software Foundation (http://www.apache.org/).
-
-
-
-
-
-
-
-


 




[GitHub] zheng-da commented on a change in pull request #8302: Refactor operators & MKLDNN

2018-01-16 Thread GitBox
zheng-da commented on a change in pull request #8302: Refactor operators & 
MKLDNN
URL: https://github.com/apache/incubator-mxnet/pull/8302#discussion_r161937980
 
 

 ##
 File path: include/mxnet/ndarray.h
 ##
 @@ -611,6 +565,64 @@ class NDArray {
  << "CheckAndAllocAuxData is not intended for kDefaultStorage";
 ptr_->CheckAndAllocAuxData(i, aux_shape);
   }
+
+#if MXNET_USE_MKLDNN == 1
+  bool IsMKLDNN() const {
+return ptr_->IsMKLDNN();
+  }
+  bool IsDefault() const {
+return ptr_->IsDefault();
+  }
+  /*
+   * All functions below return a raw pointer to mkldnn memory. There is
+   * actually a shared pointer that holds the memory either in the NDArray
+   * or in the MKLDNN stream. As long as we call these functions inside an
+   * operator, the returned memory is always valid.
+   */
+
+  /*
+   * This function returns mkldnn::memory with the default primitive_desc.
+   */
+  const mkldnn::memory *GetMKLDNNData() const;
+  /*
+   * This function returns mkldnn::memory with the given primitive_desc
+   * as long as the array size meets the required size in the given
+   * primitive_desc.
+   */
+  const mkldnn::memory *GetMKLDNNData(
+      const mkldnn::memory::primitive_desc &desc) const;
+  /*
+   * This function returns mkldnn::memory with the given primitive_desc.
+   * The returned mkldnn::memory will have the same physical layout as
+   * the given primitive_desc.
+   */
+  const mkldnn::memory *GetMKLDNNDataReorder(
+      const mkldnn::memory::primitive_desc &desc) const;
+
+  void CopyFrom(const mkldnn::memory &mem);
+  mkldnn::memory *CreateMKLDNNData(
+      const mkldnn::memory::primitive_desc &desc);
+
+  /*
+   * Reorder the memory to the specified layout.
+   */
+  void Reorder(const mkldnn::memory::primitive_desc &pd);
+  void Reorder2Default() {
+    CHECK_EQ(storage_type(), kDefaultStorage);
+    ptr_->Reorder2Default();
+  }
+
+  void InvalidateData() {
+    // When we invalidate data, we don't need to care about the MKLDNN format.
+    ptr_->Mkl_mem_ = nullptr;
 
 Review comment:
   Maybe I should rename it to ResetMKLDNNData.
   Mkl_mem_ always references the data in shandle and only provides 
information about the data layout and shape. Removing Mkl_mem_ means the 
NDArray will store data in the default format.




[GitHub] nicklhy commented on issue #9396: inference speed drop after updating mxnet from 0.10.0 to 1.0.0

2018-01-16 Thread GitBox
nicklhy commented on issue #9396: inference speed drop after updating mxnet 
from 0.10.0 to 1.0.0
URL: 
https://github.com/apache/incubator-mxnet/issues/9396#issuecomment-358157026
 
 
   @eric-haibin-lin The above results were tested with a ResNet152 json file 
downloaded from 
[http://data.mxnet.io/models/imagenet/resnet/152-layers/resnet-152-symbol.json](http://data.mxnet.io/models/imagenet/resnet/152-layers/resnet-152-symbol.json).
   
   BTW, I also tested 
[ResNet101](http://data.mxnet.io/models/imagenet/resnet/101-layers/resnet-101-symbol.json)
 and got a similar result:
   ```
   MXNet version: 0.10.0
   
   speed test for batch size: 1
avg forward speed: 116.469000 samples/s
avg forward time: mean = 0.008585 s, std = 0.000308 s
   
   speed test for batch size: 4
avg forward speed: 293.537501 samples/s
avg forward time: mean = 0.013625 s, std = 0.000497 s
   
   speed test for batch size: 16
avg forward speed: 407.152972 samples/s
avg forward time: mean = 0.039295 s, std = 0.000929 s
   
   speed test for batch size: 64
avg forward speed: 432.887962 samples/s
avg forward time: mean = 0.147841 s, std = 0.001603 s
   
   speed test for batch size: 128
avg forward speed: 443.354422 samples/s
avg forward time: mean = 0.288704 s, std = 0.003057 s
   ```
   ```
   MXNet version: 1.0.1
   
   speed test for batch size: 1
avg forward speed: 109.414104 samples/s
avg forward time: mean = 0.009137 s, std = 0.000553 s
   
   speed test for batch size: 4
avg forward speed: 288.660234 samples/s
avg forward time: mean = 0.013856 s, std = 0.000231 s
   
   speed test for batch size: 16
avg forward speed: 412.575750 samples/s
avg forward time: mean = 0.038778 s, std = 0.001016 s
   
   speed test for batch size: 64
avg forward speed: 447.079919 samples/s
avg forward time: mean = 0.143147 s, std = 0.001947 s
   
   speed test for batch size: 128
avg forward speed: 458.120138 samples/s
avg forward time: mean = 0.279399 s, std = 0.004146 s
   ```




[GitHub] chinakook commented on issue #9458: Support auto infer input shape in convolution operator

2018-01-16 Thread GitBox
chinakook commented on issue #9458: Support auto infer input shape in 
convolution operator
URL: 
https://github.com/apache/incubator-mxnet/issues/9458#issuecomment-358166890
 
 
   Gluon can support this.




[GitHub] zheng-da commented on a change in pull request #8302: Refactor operators & MKLDNN

2018-01-16 Thread GitBox
zheng-da commented on a change in pull request #8302: Refactor operators & 
MKLDNN
URL: https://github.com/apache/incubator-mxnet/pull/8302#discussion_r161936090
 
 

 ##
 File path: include/mxnet/ndarray.h
 ##
 @@ -534,15 +491,12 @@ class NDArray {
 CHECK_GE(ptr_->shandle.size,
  shape.Size() * mshadow::mshadow_sizeof(dtype))
 << "NDArray.AsArray: target memory size is bigger";
-#if MKL_EXPERIMENTAL == 1
-if (Mkl_mem_ != nullptr) {
-  // convert prv to cpu
-  Mkl_mem_->check_and_prv_to_cpu(ptr_->shandle.dptr);
-}
-#endif
+// We can't reuse memory in a view.
+CHECK(!IsView());
 NDArray ret = *this;
 ret.shape_ = shape;
 ret.dtype_ = dtype;
+ret.reuse_ = true;
 
 Review comment:
   I searched for it in the entire repo, including submodules, and only find 
three locations it's used:
   ```
   src/executor/graph_executor.cc:  data_entry_[i] = src.AsArray(vshape[i], vdtype[i]);
   src/imperative/imperative_utils.h:    *arrays[i] = buff.AsArray(shapes[i], dtypes[i]);
   src/imperative/imperative_utils.h:    *arrays[i] = arrays[mem_plan[i].sid]->AsArray(shapes[i], dtypes[i]);
   ```




[GitHub] mbaijal opened a new pull request #9462: [WIP][ReviewRequired] License Fixes post 1.0.0 Release

2018-01-16 Thread GitBox
mbaijal opened a new pull request #9462: [WIP][ReviewRequired] License Fixes 
post 1.0.0 Release 
URL: https://github.com/apache/incubator-mxnet/pull/9462
 
 
   ## Description ##
   Fixing Licenses based on comments from 1.0.0 Release. Recorded in this wiki 
- 
   https://cwiki.apache.org/confluence/display/MXNET/MXNet+Source+Licenses
   Details can be found in commit messages. 
   
   ## Checklist ##
   ### Essentials ###
   - [ ] Passed code style checking (`make lint`)
   - [ ] Changes are complete (i.e. I finished coding on this PR)
   - [ ] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [ ] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, 
the source of the dataset, expected performance on the test set and a 
reference to the original paper if applicable
   - [ ] To my best knowledge, examples are either not affected by this 
change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [ ] Removed ASF license from im2col.cuh and im2col.h (BSD Licensed)
   
   ## Comments ##
   This is currently an incremental work in progress for 1.0.1 release
   This needs to be reviewed
   




[GitHub] piiswrong commented on a change in pull request #8302: Refactor operators & MKLDNN

2018-01-16 Thread GitBox
piiswrong commented on a change in pull request #8302: Refactor operators & 
MKLDNN
URL: https://github.com/apache/incubator-mxnet/pull/8302#discussion_r161931886
 
 

 ##
 File path: include/mxnet/ndarray.h
 ##
 @@ -611,6 +565,64 @@ class NDArray {
  << "CheckAndAllocAuxData is not intended for kDefaultStorage";
 ptr_->CheckAndAllocAuxData(i, aux_shape);
   }
+
+#if MXNET_USE_MKLDNN == 1
+  bool IsMKLDNN() const {
+return ptr_->IsMKLDNN();
+  }
+  bool IsDefault() const {
+return ptr_->IsDefault();
+  }
+  /*
+   * All functions below return a raw pointer to mkldnn memory. There is
+   * actually a shared pointer that holds the memory either in the NDArray
+   * or in the MKLDNN stream. As long as we call these functions inside an
+   * operator, the returned memory is always valid.
+   */
+
+  /*
+   * This function returns mkldnn::memory with the default primitive_desc.
+   */
+  const mkldnn::memory *GetMKLDNNData() const;
+  /*
+   * This function returns mkldnn::memory with the given primitive_desc
+   * as long as the array size meets the required size in the given
+   * primitive_desc.
+   */
+  const mkldnn::memory *GetMKLDNNData(
+      const mkldnn::memory::primitive_desc &desc) const;
+  /*
+   * This function returns mkldnn::memory with the given primitive_desc.
+   * The returned mkldnn::memory will have the same physical layout as
+   * the given primitive_desc.
+   */
+  const mkldnn::memory *GetMKLDNNDataReorder(
+      const mkldnn::memory::primitive_desc &desc) const;
+
+  void CopyFrom(const mkldnn::memory &mem);
+  mkldnn::memory *CreateMKLDNNData(
+      const mkldnn::memory::primitive_desc &desc);
+
+  /*
+   * Reorder the memory to the specified layout.
+   */
+  void Reorder(const mkldnn::memory::primitive_desc &pd);
+  void Reorder2Default() {
+    CHECK_EQ(storage_type(), kDefaultStorage);
+    ptr_->Reorder2Default();
+  }
+
+  void InvalidateData() {
+    // When we invalidate data, we don't need to care about the MKLDNN format.
+    ptr_->Mkl_mem_ = nullptr;
 
 Review comment:
   What is this function intended to do? Setting Mkl_mem_ to nullptr without 
allocating Storage::Handle will cause the ndarray to contain no data.




[GitHub] piiswrong commented on a change in pull request #8302: Refactor operators & MKLDNN

2018-01-16 Thread GitBox
piiswrong commented on a change in pull request #8302: Refactor operators & 
MKLDNN
URL: https://github.com/apache/incubator-mxnet/pull/8302#discussion_r161928578
 
 

 ##
 File path: src/kvstore/kvstore_local.h
 ##
 @@ -256,7 +256,13 @@ class KVStoreLocal : public KVStore {
 auto validator = [this](const int key, const NDArray& nd) -> bool {
   auto stype = nd.storage_type();
   // valid NDArray
-  if (stype == kDefaultStorage || stype == kRowSparseStorage) return true;
+  auto valid_stype = stype == kDefaultStorage || stype == kRowSparseStorage;
+#if MXNET_USE_MKLDNN == 1
+  // When it's kMKLDNNStorage, it'll be converted to a data layout
+  // compatible with the default storage.
+  valid_stype = valid_stype || stype == kMKLDNNStorage;
 
 Review comment:
   revert?




[GitHub] piiswrong commented on a change in pull request #8302: Refactor operators & MKLDNN

2018-01-16 Thread GitBox
piiswrong commented on a change in pull request #8302: Refactor operators & 
MKLDNN
URL: https://github.com/apache/incubator-mxnet/pull/8302#discussion_r161932726
 
 

 ##
 File path: include/mxnet/ndarray.h
 ##
 @@ -611,6 +565,64 @@ class NDArray {
  << "CheckAndAllocAuxData is not intended for kDefaultStorage";
 ptr_->CheckAndAllocAuxData(i, aux_shape);
   }
+
+#if MXNET_USE_MKLDNN == 1
+  bool IsMKLDNN() const {
 
 Review comment:
   IsMKLDNNData




[GitHub] piiswrong commented on a change in pull request #8302: Refactor operators & MKLDNN

2018-01-16 Thread GitBox
piiswrong commented on a change in pull request #8302: Refactor operators & 
MKLDNN
URL: https://github.com/apache/incubator-mxnet/pull/8302#discussion_r161928812
 
 

 ##
 File path: src/ndarray/ndarray.cc
 ##
 @@ -64,14 +166,55 @@ nnvm::Symbol NDArray::get_autograd_symbol() const {
   return ret;
 }
 
+#if MXNET_USE_MKLDNN == 1
+
+struct EmptyMKLDNNDeleter {
+  void operator()(mkldnn::memory *mem) {
+  }
+};
+
+NDArray NDArray::ReshapeMKLDNN(const TShape &shape) const {
+  CHECK(!is_none()) << "NDArray is not initialized";
+  CHECK_GE(shape_.Size(), shape.Size())
+      << "NDArray.Reshape: target shape size is larger current shape";
+  CHECK_EQ(storage_type(), kDefaultStorage);
+  if (!IsMKLDNN()) {
+    NDArray ret = this->Detach();
+    ret.shape_ = shape;
+    return ret;
+  } else {
+    NDArray ret(shape, ctx(), true, dtype());
+    // We shouldn't submit the reorder primitive here because submit will
+    // be called in operators.
+    auto format = GetDefaultFormat(ptr_->Mkl_mem_->get_primitive_desc().desc());
+    CHECK_NE(format, ptr_->Mkl_mem_->get_primitive_desc().desc().data.format);
+    auto def_pd = GetPrimitiveDesc(ptr_->Mkl_mem_->get_primitive_desc(), format);
+    auto def_mem = TmpMemMgr::Get()->Alloc(def_pd);
+    MKLDNNStream *stream = MKLDNNStream::Get();
+    stream->RegisterMem(ptr_->Mkl_mem_);
+    stream->RegisterPrim(mkldnn::reorder(*ptr_->Mkl_mem_, *def_mem));
+    // def_mem points to a memory region in the temp space. It's only valid
+    // inside an operator. As such, the returned NDArray can only be valid
+    // inside an operator and the shared pointer doesn't need to do anything
+    // when it's destroyed.
+    ret.ptr_->Mkl_mem_ = std::shared_ptr<mkldnn::memory>(def_mem,
+                                                         EmptyMKLDNNDeleter());
 
 Review comment:
   you can use a lambda here. No need to define a struct




[GitHub] piiswrong commented on a change in pull request #8302: Refactor operators & MKLDNN

2018-01-16 Thread GitBox
piiswrong commented on a change in pull request #8302: Refactor operators & 
MKLDNN
URL: https://github.com/apache/incubator-mxnet/pull/8302#discussion_r161926358
 
 

 ##
 File path: include/mxnet/ndarray.h
 ##
 @@ -534,15 +491,12 @@ class NDArray {
 CHECK_GE(ptr_->shandle.size,
  shape.Size() * mshadow::mshadow_sizeof(dtype))
 << "NDArray.AsArray: target memory size is bigger";
-#if MKL_EXPERIMENTAL == 1
-if (Mkl_mem_ != nullptr) {
-  // convert prv to cpu
-  Mkl_mem_->check_and_prv_to_cpu(ptr_->shandle.dptr);
-}
-#endif
+// We can't reuse memory in a view.
+CHECK(!IsView());
 NDArray ret = *this;
 ret.shape_ = shape;
 ret.dtype_ = dtype;
+ret.reuse_ = true;
 
 Review comment:
   Are you sure AsArray is only used for memory reuse?




[GitHub] piiswrong commented on a change in pull request #8302: Refactor operators & MKLDNN

2018-01-16 Thread GitBox
piiswrong commented on a change in pull request #8302: Refactor operators & 
MKLDNN
URL: https://github.com/apache/incubator-mxnet/pull/8302#discussion_r161932918
 
 

 ##
 File path: include/mxnet/ndarray.h
 ##
 @@ -611,6 +565,64 @@ class NDArray {
  << "CheckAndAllocAuxData is not intended for kDefaultStorage";
 ptr_->CheckAndAllocAuxData(i, aux_shape);
   }
+
+#if MXNET_USE_MKLDNN == 1
+  bool IsMKLDNN() const {
+return ptr_->IsMKLDNN();
+  }
+  bool IsDefault() const {
+return ptr_->IsDefault();
+  }
+  /*
+   * All functions below return a raw pointer to mkldnn memory. There is
+   * actually a shared pointer that holds the memory either in the NDArray
+   * or in the MKLDNN stream. As long as we call these functions inside an
+   * operator, the returned memory is always valid.
+   */
+
+  /*
+   * This function returns mkldnn::memory with the default primitive_desc.
+   */
+  const mkldnn::memory *GetMKLDNNData() const;
+  /*
+   * This function returns mkldnn::memory with the given primitive_desc
+   * as long as the array size meets the required size in the given
+   * primitive_desc.
+   */
+  const mkldnn::memory *GetMKLDNNData(
+      const mkldnn::memory::primitive_desc &desc) const;
+  /*
+   * This function returns mkldnn::memory with the given primitive_desc.
+   * The returned mkldnn::memory will have the same physical layout as
+   * the given primitive_desc.
+   */
+  const mkldnn::memory *GetMKLDNNDataReorder(
+      const mkldnn::memory::primitive_desc &desc) const;
+
+  void CopyFrom(const mkldnn::memory &mem);
+  mkldnn::memory *CreateMKLDNNData(
+      const mkldnn::memory::primitive_desc &desc);
+
+  /*
+   * Reorder the memory to the specified layout.
+   */
+  void Reorder(const mkldnn::memory::primitive_desc &pd);
+  void Reorder2Default() {
+    CHECK_EQ(storage_type(), kDefaultStorage);
+    ptr_->Reorder2Default();
+  }
+
+  void InvalidateData() {
+    // When we invalidate data, we don't need to care about the MKLDNN format.
+    ptr_->Mkl_mem_ = nullptr;
+  }
+
+  /*
+   * This function is used inside operators to reshape an array.
+   * It's used by FullyConnected right now.
+   */
+  NDArray ReshapeMKLDNN(const TShape &shape) const;
 
 Review comment:
   BTW why is this different from the existing reshape?




[GitHub] piiswrong commented on a change in pull request #8302: Refactor operators & MKLDNN

2018-01-16 Thread GitBox
piiswrong commented on a change in pull request #8302: Refactor operators & 
MKLDNN
URL: https://github.com/apache/incubator-mxnet/pull/8302#discussion_r161928410
 
 

 ##
 File path: src/executor/graph_executor.cc
 ##
 @@ -693,7 +701,7 @@ static NDArray ReshapeOrCreate(const std::string& name,
 const Context& ctx,
std::unordered_map<std::string, NDArray>* shared_buffer,
-  bool stype_shareable = dest_arg_stype == kDefaultStorage;
+  bool stype_shareable = SharableStorage(dest_arg_stype);
 
 Review comment:
   revert this?




[GitHub] piiswrong commented on a change in pull request #8302: Refactor operators & MKLDNN

2018-01-16 Thread GitBox
piiswrong commented on a change in pull request #8302: Refactor operators & 
MKLDNN
URL: https://github.com/apache/incubator-mxnet/pull/8302#discussion_r161930522
 
 

 ##
 File path: src/storage/cpu_device_storage.h
 ##
 @@ -54,7 +54,11 @@ class CPUDeviceStorage {
   /*!
* \brief Alignment of allocation.
*/
+#if MXNET_USE_MKLDNN == 1
+  static constexpr size_t alignment_ = 4096;
 
 Review comment:
   why?




[GitHub] piiswrong commented on a change in pull request #8302: Refactor operators & MKLDNN

2018-01-16 Thread GitBox
piiswrong commented on a change in pull request #8302: Refactor operators & 
MKLDNN
URL: https://github.com/apache/incubator-mxnet/pull/8302#discussion_r161932822
 
 

 ##
 File path: include/mxnet/ndarray.h
 ##
 @@ -611,6 +565,64 @@ class NDArray {
  << "CheckAndAllocAuxData is not intended for kDefaultStorage";
 ptr_->CheckAndAllocAuxData(i, aux_shape);
   }
+
+#if MXNET_USE_MKLDNN == 1
+  bool IsMKLDNN() const {
+return ptr_->IsMKLDNN();
+  }
+  bool IsDefault() const {
+return ptr_->IsDefault();
+  }
+  /*
+   * All functions below return a raw pointer to mkldnn memory. There is
+   * actually a shared pointer that holds the memory either in the NDArray
+   * or in the MKLDNN stream. As long as we call these functions inside an
+   * operator, the returned memory is always valid.
+   */
+
+  /*
+   * This function returns mkldnn::memory with the default primitive_desc.
+   */
+  const mkldnn::memory *GetMKLDNNData() const;
+  /*
+   * This function returns mkldnn::memory with the given primitive_desc
+   * as long as the array size meets the required size in the given
+   * primitive_desc.
+   */
+  const mkldnn::memory *GetMKLDNNData(
+      const mkldnn::memory::primitive_desc &desc) const;
+  /*
+   * This function returns mkldnn::memory with the given primitive_desc.
+   * The returned mkldnn::memory will have the same physical layout as
+   * the given primitive_desc.
+   */
+  const mkldnn::memory *GetMKLDNNDataReorder(
+      const mkldnn::memory::primitive_desc &desc) const;
+
+  void CopyFrom(const mkldnn::memory &mem);
+  mkldnn::memory *CreateMKLDNNData(
+      const mkldnn::memory::primitive_desc &desc);
+
+  /*
+   * Reorder the memory to the specified layout.
+   */
+  void Reorder(const mkldnn::memory::primitive_desc &pd);
+  void Reorder2Default() {
+    CHECK_EQ(storage_type(), kDefaultStorage);
+    ptr_->Reorder2Default();
+  }
+
+  void InvalidateData() {
+    // When we invalidate data, we don't need to care about the MKLDNN format.
+    ptr_->Mkl_mem_ = nullptr;
+  }
+
+  /*
+   * This function is used inside operators to reshape an array.
+   * It's used by FullyConnected right now.
+   */
+  NDArray ReshapeMKLDNN(const TShape &shape) const;
 
 Review comment:
   MKLDNNDataReshape




[GitHub] piiswrong commented on a change in pull request #8302: Refactor operators & MKLDNN

2018-01-16 Thread GitBox
piiswrong commented on a change in pull request #8302: Refactor operators & 
MKLDNN
URL: https://github.com/apache/incubator-mxnet/pull/8302#discussion_r161932131
 
 

 ##
 File path: include/mxnet/ndarray.h
 ##
 @@ -611,6 +565,64 @@ class NDArray {
  << "CheckAndAllocAuxData is not intended for kDefaultStorage";
 ptr_->CheckAndAllocAuxData(i, aux_shape);
   }
+
+#if MXNET_USE_MKLDNN == 1
+  bool IsMKLDNN() const {
+return ptr_->IsMKLDNN();
+  }
+  bool IsDefault() const {
 
 Review comment:
   Default is different from kDefaultStorage; the name is confusing.
   
   How about IsVectorized()?
   
   Also, move this function out of `#if MXNET_USE_MKLDNN` to avoid excessive 
switches across the codebase.




[GitHub] piiswrong commented on a change in pull request #8302: Refactor operators & MKLDNN

2018-01-16 Thread GitBox
piiswrong commented on a change in pull request #8302: Refactor operators & 
MKLDNN
URL: https://github.com/apache/incubator-mxnet/pull/8302#discussion_r161932398
 
 

 ##
 File path: include/mxnet/ndarray.h
 ##
 @@ -611,6 +565,64 @@ class NDArray {
  << "CheckAndAllocAuxData is not intended for kDefaultStorage";
 ptr_->CheckAndAllocAuxData(i, aux_shape);
   }
+
+#if MXNET_USE_MKLDNN == 1
+  bool IsMKLDNN() const {
+return ptr_->IsMKLDNN();
+  }
+  bool IsDefault() const {
+return ptr_->IsDefault();
+  }
+  /*
+   * All functions below return a raw pointer to mkldnn memory. There is
+   * actually a shared pointer that holds the memory either in the NDArray
+   * or in the MKLDNN stream. As long as we call these functions inside an
+   * operator, the returned memory is always valid.
+   */
+
+  /*
+   * This function returns mkldnn::memory with the default primitive_desc.
+   */
+  const mkldnn::memory *GetMKLDNNData() const;
+  /*
+   * This function returns mkldnn::memory with the given primitive_desc
+   * as long as the array size meets the required size in the given
+   * primitive_desc.
+   */
+  const mkldnn::memory *GetMKLDNNData(
+      const mkldnn::memory::primitive_desc &desc) const;
+  /*
+   * This function returns mkldnn::memory with the given primitive_desc.
+   * The returned mkldnn::memory will have the same physical layout as
+   * the given primitive_desc.
+   */
+  const mkldnn::memory *GetMKLDNNDataReorder(
+      const mkldnn::memory::primitive_desc &desc) const;
+
+  void CopyFrom(const mkldnn::memory &mem);
 
 Review comment:
   CopyFromMKLDNNData




[GitHub] zhangguotai opened a new issue #9461: The fault in the process of running example

2018-01-16 Thread GitBox
zhangguotai opened a new issue #9461: The fault in the process of running 
example
URL: https://github.com/apache/incubator-mxnet/issues/9461
 
 
   I compiled libmxnet.so and image-classification-predict. When running the 
command ./image-classification-predict apple.jpg, I got this failure: 
image-classification-predict: image-classification-predict.cc:238: int 
main(int, char**): Assertion `pred_hnd' failed.
   I looked at the original code:
   MXPredCreate((const char*)json_data.GetBuffer(),
(const char*)param_data.GetBuffer(),
static_cast<int>(param_data.GetLength()),
dev_type,
dev_id,
num_input_nodes,
input_keys,
input_shape_indptr,
input_shape_data,
&pred_hnd);
   assert(pred_hnd);
   How should I modify my code?




[GitHub] piiswrong commented on a change in pull request #8302: Refactor operators & MKLDNN

2018-01-16 Thread GitBox
piiswrong commented on a change in pull request #8302: Refactor operators & 
MKLDNN
URL: https://github.com/apache/incubator-mxnet/pull/8302#discussion_r161932783
 
 

 ##
 File path: include/mxnet/ndarray.h
 ##
 @@ -611,6 +565,64 @@ class NDArray {
  << "CheckAndAllocAuxData is not intended for kDefaultStorage";
 ptr_->CheckAndAllocAuxData(i, aux_shape);
   }
+
+#if MXNET_USE_MKLDNN == 1
+  bool IsMKLDNN() const {
+return ptr_->IsMKLDNN();
+  }
+  bool IsDefault() const {
+return ptr_->IsDefault();
+  }
+  /*
+   * All functions below return a raw pointer to mkldnn memory. Actually there
+   * is a shared pointer that hold the memory either in NDArray or in MKLDNN
+   * stream. As long as we call these functions inside an operator, the return
+   * memory is always valid.
+   */
+
+  /*
+   * This function returns mkldnn::memory with the default primitive_desc.
+   */
+  const mkldnn::memory *GetMKLDNNData() const;
+  /*
+   * This function returns mkldnn::memory with the given primitive_desc
+   * as long as the array size meets the required size in the given 
primitive_desc.
+   */
+  const mkldnn::memory *GetMKLDNNData(
+      const mkldnn::memory::primitive_desc &desc) const;
+  /*
+   * This function returns mkldnn::memory with the given primitive_desc.
+   * The returned mkldnn::memory will have the same physical layout as
+   * the given primitive_desc.
+   */
+  const mkldnn::memory *GetMKLDNNDataReorder(
+      const mkldnn::memory::primitive_desc &desc) const;
+
+  void CopyFrom(const mkldnn::memory &mem);
+  mkldnn::memory *CreateMKLDNNData(
+      const mkldnn::memory::primitive_desc &desc);
+
+  /*
+   * Reorder the memory to the specified layout.
+   */
+  void Reorder(const mkldnn::memory::primitive_desc &desc);
 
 Review comment:
   MKLDNNDataReorder




[GitHub] piiswrong commented on a change in pull request #8302: Refactor operators & MKLDNN

2018-01-16 Thread GitBox
piiswrong commented on a change in pull request #8302: Refactor operators & 
MKLDNN
URL: https://github.com/apache/incubator-mxnet/pull/8302#discussion_r161933076
 
 

 ##
 File path: include/mxnet/ndarray.h
 ##
 @@ -645,6 +657,12 @@ class NDArray {
for csr, aux_handles[0] = indptr, aux_handles[1] = indices
 */
std::vector<Storage::Handle> aux_handles;
+
+#if MXNET_USE_MKLDNN == 1
+/*! This is created when data is stored in MKLDNN format.
+ */
+std::shared_ptr<mkldnn::memory> Mkl_mem_;
 
 Review comment:
   mkl_mem_. No capitalization
   




[GitHub] nicklhy commented on issue #9396: inference speed drop after updating mxnet from 0.10.0 to 1.0.0

2018-01-16 Thread GitBox
nicklhy commented on issue #9396: inference speed drop after updating mxnet 
from 0.10.0 to 1.0.0
URL: 
https://github.com/apache/incubator-mxnet/issues/9396#issuecomment-358157026
 
 
   @eric-haibin-lin The above results are tested with a ResNet152 json file 
downloaded from 
[http://data.mxnet.io/models/imagenet/resnet/152-layers/resnet-152-symbol.json](http://data.mxnet.io/models/imagenet/resnet/152-layers/resnet-152-symbol.json).




[GitHub] szha commented on issue #7987: Per-sample gradients

2018-01-16 Thread GitBox
szha commented on issue #7987: Per-sample gradients
URL: 
https://github.com/apache/incubator-mxnet/issues/7987#issuecomment-358153506
 
 
   @apache/mxnet-committers: This issue has been inactive for the past 90 days. 
It has no label and needs triage.
   
   For general "how-to" questions, our [user forum](https://discuss.mxnet.io/) 
(and [Chinese version](https://discuss.gluon.ai/)) is a good place to get help.




[GitHub] pracheer commented on issue #8339: data iterators tutorial errors

2018-01-16 Thread GitBox
pracheer commented on issue #8339: data iterators tutorial errors
URL: 
https://github.com/apache/incubator-mxnet/issues/8339#issuecomment-358152063
 
 
   Apologies for the delay. Here is the pull request for it:
   https://github.com/apache/incubator-mxnet/pull/9460




[GitHub] pracheer commented on issue #9460: Data-iterator tutorial made python3 compatible.

2018-01-16 Thread GitBox
pracheer commented on issue #9460: Data-iterator tutorial made python3 
compatible.
URL: https://github.com/apache/incubator-mxnet/pull/9460#issuecomment-358151885
 
 
   @aaronmarkham @eric-haibin-lin @piiswrong 




[GitHub] pracheer opened a new pull request #9460: Data-iterator tutorial made python3 compatible.

2018-01-16 Thread GitBox
pracheer opened a new pull request #9460: Data-iterator tutorial made python3 
compatible.
URL: https://github.com/apache/incubator-mxnet/pull/9460
 
 
   ## Description ##
   I faced 2 main issues while executing the 
http://mxnet.incubator.apache.org/tutorials/basic/data.html tutorial on Python 3:
   1. The `zip` function changed in Python 3: it returns an iterator, which is 
exhausted once it has been iterated over. More info: 
https://stackoverflow.com/questions/31683959/the-zip-function-in-python-3/31684038#31684038
   2. Some of the methods in MXNet expect a parameter of type `str` in Python 2 
but `bytes` in Python 3.
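Both Python 3 pitfalls above can be reproduced without MXNet; a minimal sketch (the list contents and the `softmax_label` key are illustrative, not taken from the tutorial):

```python
# Point 1: in Python 3, zip() returns a one-shot iterator, so a second
# traversal silently yields nothing.
labels = [0, 1, 2]
names = ['a', 'b', 'c']

pairs = zip(labels, names)
first_pass = list(pairs)
second_pass = list(pairs)  # the iterator is already exhausted
assert first_pass == [(0, 'a'), (1, 'b'), (2, 'c')]
assert second_pass == []

# If the pairs must be traversed more than once, materialize them up front:
pairs = list(zip(labels, names))
assert list(pairs) == first_pass  # safe to iterate repeatedly

# Point 2: an API that accepted str on Python 2 may require bytes on
# Python 3; encode explicitly at the boundary.
key = 'softmax_label'
assert key.encode('utf-8') == b'softmax_label'
```

Wrapping `zip(...)` in `list(...)` at creation time is the simplest fix whenever the data is small enough to hold in memory.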
   
   ## Checklist ##
   ### Essentials ###
   - [ ] Passed code style checking (`make lint`)
   - [Y] Changes are complete (i.e. I finished coding on this PR)
   - [ ] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [ ] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, 
the source of the dataset, expected performance on test set and reference to 
the original paper if applicable
   - [Y] To my best knowledge, examples are either not affected by this 
change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [ ] Feature1, tests, (and when applicable, API doc)
   - [ ] Feature2, tests, (and when applicable, API doc)
   
   ## Comments ##
   - If this change is a backward incompatible change, why must this change be 
made.
   - Interesting edge cases to note here
   




[GitHub] thinksanky commented on issue #46: Removed torch.html and all its references. Fixed Nesterov Momentum education ?

2018-01-16 Thread GitBox
thinksanky commented on issue #46: Removed torch.html and all its references. 
Fixed Nesterov Momentum education ?
URL: 
https://github.com/apache/incubator-mxnet-site/pull/46#issuecomment-358150760
 
 
   @sandeep-krishnamurthy  - please help merge this PR.
   Also fixing the Master branch to keep this in parity.




[GitHub] thinksanky commented on issue #46: Removed torch.html and all its references. Fixed Nesterov Momentum education ?

2018-01-16 Thread GitBox
thinksanky commented on issue #46: Removed torch.html and all its references. 
Fixed Nesterov Momentum education ?
URL: 
https://github.com/apache/incubator-mxnet-site/pull/46#issuecomment-358150539
 
 
   @sandeep-krishnamurthy,




[GitHub] thinksanky opened a new pull request #46: Removed torch.html and all its references. Fixed Nesterov Momentum education ?

2018-01-16 Thread GitBox
thinksanky opened a new pull request #46: Removed torch.html and all its 
references. Fixed Nesterov Momentum education ?
URL: https://github.com/apache/incubator-mxnet-site/pull/46
 
 
   Fixed the broken links for the following -
   - Torch html is not used anymore. So removed torch.html and all its 
references. One of the broken links reported was in torch.html.
   - Fixed the optimizer documentation to point to the working link.




[GitHub] zheng-da commented on a change in pull request #8302: Refactor operators & MKLDNN

2018-01-16 Thread GitBox
zheng-da commented on a change in pull request #8302: Refactor operators & 
MKLDNN
URL: https://github.com/apache/incubator-mxnet/pull/8302#discussion_r161921308
 
 

 ##
 File path: src/common/exec_utils.h
 ##
 @@ -43,19 +43,61 @@ namespace common {
   indices are not recorded
  * \return true if any source NDArray need to cast storage
  */
-inline bool SetupDefaultBlobs(const std::vector<NDArray>& src,
-                              std::vector<TBlob> *blobs,
-                              std::vector<NDArray> *temp_src,
-                              std::vector<NDArray> *temp_dst,
-                              std::unordered_map<uint32_t, uint32_t> *idx_map = nullptr) {
+inline bool SetupDefaultBlobsIn(const std::vector<NDArray>& src,
+                                const std::vector<NDArray> *bufs,
+                                std::vector<TBlob> *blobs,
+                                std::vector<NDArray> *temp_src,
+                                std::vector<NDArray> *temp_dst,
+                                std::unordered_map<uint32_t, uint32_t> *idx_map) {
   bool require_cast = false;
   for (size_t i = 0; i < src.size(); i++) {
 auto& nd = src[i];
-if (nd.storage_type() != kDefaultStorage) {
-  if (idx_map != nullptr) {
-(*idx_map)[i] = temp_dst->size();
-  }
-  NDArray temp(nd.shape(), nd.ctx(), false, nd.dtype());
+bool is_default = nd.storage_type() == kDefaultStorage;
+#if MXNET_USE_MKLDNN == 1
+// We have to make sure it's default storage and default layout.
+is_default = nd.IsDefault();
+#endif
+if (!is_default) {
+  (*idx_map)[i] = temp_dst->size();
+  NDArray temp = bufs != nullptr ? bufs->at(i) : NDArray(nd.shape(), 
nd.ctx(),
+ true, nd.dtype());
+#if MXNET_USE_MKLDNN == 1
+  CHECK(temp.IsDefault());
+#endif
+  temp_src->emplace_back(nd);
+  temp_dst->emplace_back(temp);
+  blobs->emplace_back(temp.data());
+  require_cast = true;
+} else {
+  blobs->push_back(nd.data());
+}
+  }
+  return require_cast;
+}
+
+inline bool SetupDefaultBlobsOut(const std::vector<NDArray>& src,
+                                 const std::vector<OpReqType> &req,
+                                 const std::vector<NDArray> *bufs,
+                                 std::vector<TBlob> *blobs,
+                                 std::vector<NDArray> *temp_src,
+                                 std::vector<NDArray> *temp_dst) {
+  bool require_cast = false;
+  for (size_t i = 0; i < src.size(); i++) {
+auto& nd = src[i];
+bool is_default = nd.storage_type() == kDefaultStorage;
+#if MXNET_USE_MKLDNN == 1
+// If it's writeTo, we don't need to worry whether it contains valid data.
+if (req[i] == kWriteTo && is_default)
 
 Review comment:
   The goal is to remove Mkl_ptr_ in the NDArray when an NDArray is reused.
   When we use WriteTo, the output array shouldn't contain any valid data, and 
we should notify the NDArray of this. Otherwise, NDArray always thinks 
the data should be stored in a special layout.




[GitHub] zheng-da commented on issue #8302: Refactor operators & MKLDNN

2018-01-16 Thread GitBox
zheng-da commented on issue #8302: Refactor operators & MKLDNN
URL: https://github.com/apache/incubator-mxnet/pull/8302#issuecomment-358145000
 
 
   @eric-haibin-lin the illegal instruction issue in CI has been resolved. The 
solution is in the latest commit.




[GitHub] sandeep-krishnamurthy commented on issue #42: Fix gpu install instructions

2018-01-16 Thread GitBox
sandeep-krishnamurthy commented on issue #42: Fix gpu install instructions
URL: 
https://github.com/apache/incubator-mxnet-site/pull/42#issuecomment-358138951
 
 
   Yes. Thanks.




[GitHub] eric-haibin-lin commented on issue #8852: Environment variable MXNET_EXEC_BULK_EXEC_INFERENCE=0 does not work

2018-01-16 Thread GitBox
eric-haibin-lin commented on issue #8852: Environment variable 
MXNET_EXEC_BULK_EXEC_INFERENCE=0 does not work
URL: 
https://github.com/apache/incubator-mxnet/issues/8852#issuecomment-358133146
 
 
   Resolved by #9055 




[GitHub] eric-haibin-lin closed issue #8852: Environment variable MXNET_EXEC_BULK_EXEC_INFERENCE=0 does not work

2018-01-16 Thread GitBox
eric-haibin-lin closed issue #8852: Environment variable 
MXNET_EXEC_BULK_EXEC_INFERENCE=0 does not work
URL: https://github.com/apache/incubator-mxnet/issues/8852
 
 
   




[GitHub] eric-haibin-lin commented on issue #8592: Simple bind doesnt infer the provided stype for non default stypes

2018-01-16 Thread GitBox
eric-haibin-lin commented on issue #8592: Simple bind doesnt infer the provided 
stype for non default stypes
URL: 
https://github.com/apache/incubator-mxnet/issues/8592#issuecomment-358132319
 
 
   I think that was caused by a typo in the test:
   ```
   >>> xv = mx.symbol.Variable('x')
   >>> y = mx.symbol.identity(xv)
   >>> executor = y.simple_bind(ctx=mx.cpu(), x=(4, 10), **stype_dict**={'x': 
'csr'})
   >>> executor.forward()
   [
   ]
   >>> outputs = executor.outputs
   >>> inputs = executor.arg_arrays
   >>>
   >>> print outputs[0].stype
   csr
   >>> print inputs[0].stype
   csr
   ```




[GitHub] eric-haibin-lin closed issue #8592: Simple bind doesnt infer the provided stype for non default stypes

2018-01-16 Thread GitBox
eric-haibin-lin closed issue #8592: Simple bind doesnt infer the provided stype 
for non default stypes
URL: https://github.com/apache/incubator-mxnet/issues/8592
 
 
   




[GitHub] eric-haibin-lin commented on a change in pull request #8302: Refactor operators & MKLDNN

2018-01-16 Thread GitBox
eric-haibin-lin commented on a change in pull request #8302: Refactor operators 
& MKLDNN
URL: https://github.com/apache/incubator-mxnet/pull/8302#discussion_r161899483
 
 

 ##
 File path: include/mxnet/ndarray.h
 ##
 @@ -61,6 +62,9 @@ enum NDArrayStorageType {
   kDefaultStorage, // dense
   kRowSparseStorage,   // row sparse
   kCSRStorage, // csr
+#if MXNET_USE_MKLDNN == 1
+  kMKLDNNStorage,  // MKLDNN
 
 Review comment:
   Should this be removed? 




[GitHub] eric-haibin-lin commented on a change in pull request #8302: Refactor operators & MKLDNN

2018-01-16 Thread GitBox
eric-haibin-lin commented on a change in pull request #8302: Refactor operators 
& MKLDNN
URL: https://github.com/apache/incubator-mxnet/pull/8302#discussion_r161903292
 
 

 ##
 File path: src/common/exec_utils.h
 ##
 @@ -43,19 +43,61 @@ namespace common {
   indices are not recorded
  * \return true if any source NDArray need to cast storage
  */
-inline bool SetupDefaultBlobs(const std::vector<NDArray>& src,
-                              std::vector<TBlob> *blobs,
-                              std::vector<NDArray> *temp_src,
-                              std::vector<NDArray> *temp_dst,
-                              std::unordered_map<uint32_t, uint32_t> *idx_map = nullptr) {
+inline bool SetupDefaultBlobsIn(const std::vector<NDArray>& src,
+                                const std::vector<NDArray> *bufs,
+                                std::vector<TBlob> *blobs,
+                                std::vector<NDArray> *temp_src,
+                                std::vector<NDArray> *temp_dst,
+                                std::unordered_map<uint32_t, uint32_t> *idx_map) {
   bool require_cast = false;
   for (size_t i = 0; i < src.size(); i++) {
 auto& nd = src[i];
-if (nd.storage_type() != kDefaultStorage) {
-  if (idx_map != nullptr) {
-(*idx_map)[i] = temp_dst->size();
-  }
-  NDArray temp(nd.shape(), nd.ctx(), false, nd.dtype());
+bool is_default = nd.storage_type() == kDefaultStorage;
+#if MXNET_USE_MKLDNN == 1
+// We have to make sure it's default storage and default layout.
+is_default = nd.IsDefault();
+#endif
+if (!is_default) {
+  (*idx_map)[i] = temp_dst->size();
+  NDArray temp = bufs != nullptr ? bufs->at(i) : NDArray(nd.shape(), 
nd.ctx(),
+ true, nd.dtype());
+#if MXNET_USE_MKLDNN == 1
+  CHECK(temp.IsDefault());
+#endif
+  temp_src->emplace_back(nd);
+  temp_dst->emplace_back(temp);
+  blobs->emplace_back(temp.data());
+  require_cast = true;
+} else {
+  blobs->push_back(nd.data());
+}
+  }
+  return require_cast;
+}
+
+inline bool SetupDefaultBlobsOut(const std::vector<NDArray>& src,
+                                 const std::vector<OpReqType> &req,
+                                 const std::vector<NDArray> *bufs,
+                                 std::vector<TBlob> *blobs,
+                                 std::vector<NDArray> *temp_src,
+                                 std::vector<NDArray> *temp_dst) {
+  bool require_cast = false;
+  for (size_t i = 0; i < src.size(); i++) {
+auto& nd = src[i];
+bool is_default = nd.storage_type() == kDefaultStorage;
+#if MXNET_USE_MKLDNN == 1
+// If it's writeTo, we don't need to worry whether it contains valid data.
+if (req[i] == kWriteTo && is_default)
 
 Review comment:
   why invalidate the data here?




[GitHub] eric-haibin-lin commented on a change in pull request #8302: Refactor operators & MKLDNN

2018-01-16 Thread GitBox
eric-haibin-lin commented on a change in pull request #8302: Refactor operators 
& MKLDNN
URL: https://github.com/apache/incubator-mxnet/pull/8302#discussion_r161900792
 
 

 ##
 File path: include/mxnet/ndarray.h
 ##
 @@ -789,15 +810,22 @@ class NDArray {
 // size is the number of bytes
 void CheckAndAlloc(uint64_t dbytes) {
   CHECK_EQ(kDefaultStorage, storage_type)
-  << "CheckAndAlloc(dbytes) is not intended for kDefaultStorage";
+  << "CheckAndAlloc(dbytes) is not intended for kDefaultStorage";
 
 Review comment:
   That was an incorrect error msg: `CheckAndAlloc(dbytes) is **only** intended 
for kDefaultStorage`




[GitHub] eric-haibin-lin commented on a change in pull request #8302: Refactor operators & MKLDNN

2018-01-16 Thread GitBox
eric-haibin-lin commented on a change in pull request #8302: Refactor operators 
& MKLDNN
URL: https://github.com/apache/incubator-mxnet/pull/8302#discussion_r161905366
 
 

 ##
 File path: src/ndarray/ndarray.cc
 ##
 @@ -64,14 +166,55 @@ nnvm::Symbol NDArray::get_autograd_symbol() const {
   return ret;
 }
 
+#if MXNET_USE_MKLDNN == 1
+
+struct EmptyMKLDNNDeleter {
+  void operator()(mkldnn::memory *mem) {
+  }
+};
+
+NDArray NDArray::ReshapeMKLDNN(const TShape ) const {
+  CHECK(!is_none()) << "NDArray is not initialized";
+  CHECK_GE(shape_.Size(), shape.Size())
+    << "NDArray.Reshape: target shape size is larger than current shape";
+  CHECK_EQ(storage_type(), kDefaultStorage);
+  if (!IsMKLDNN()) {
+NDArray ret = this->Detach();
+ret.shape_ = shape;
+return ret;
+  } else {
+NDArray ret(shape, ctx(), true, dtype());
+// We shouldn't submit the reorder primitive here because submit will
+// be called in operators.
+auto format = 
GetDefaultFormat(ptr_->Mkl_mem_->get_primitive_desc().desc());
+CHECK_NE(format, ptr_->Mkl_mem_->get_primitive_desc().desc().data.format);
+auto def_pd = GetPrimitiveDesc(ptr_->Mkl_mem_->get_primitive_desc(), 
format);
+auto def_mem = TmpMemMgr::Get()->Alloc(def_pd);
+MKLDNNStream *stream = MKLDNNStream::Get();
+stream->RegisterMem(ptr_->Mkl_mem_);
+stream->RegisterPrim(mkldnn::reorder(*ptr_->Mkl_mem_, *def_mem));
+// def_mem points to a memory region in the temp space. It's only valid
+// inside an operator. As such, the returned NDArray can only be valid
+// inside an operator and the shared point doesn't need to do anything
+// when it's destroyed.
ret.ptr_->Mkl_mem_ = std::shared_ptr<mkldnn::memory>(def_mem,
                                                     EmptyMKLDNNDeleter());
+ret.ptr_->shandle.dptr = def_mem->get_data_handle();
+ret.ptr_->shandle.size = def_mem->get_primitive_desc().get_size();
+ret.ptr_->delay_alloc = false;
+ret.ptr_->static_data = true;
+ret.byte_offset_ = byte_offset_;
+return ret;
+  }
+}
+
+#endif
+
 NDArray NDArray::Reshape(const TShape ) const {
   CHECK(!is_none()) << "NDArray is not initialized";
-  auto stype = storage_type();
-  // reshape is not supported for non-default ndarray with dismatching shapes
-  CHECK((shape_ == shape) || stype == kDefaultStorage)
-<< "Reshape for storage type " << stype << " is not implemented yet";
   CHECK_GE(shape_.Size(), shape.Size())
    << "NDArray.Reshape: target shape size is larger than current shape";
+  CHECK_EQ(storage_type(), kDefaultStorage);
 
 Review comment:
   `Reshape(sparse_ndarray, sparse_ndarray.shape())` should still work and 
return itself




[GitHub] eric-haibin-lin commented on a change in pull request #8302: Refactor operators & MKLDNN

2018-01-16 Thread GitBox
eric-haibin-lin commented on a change in pull request #8302: Refactor operators 
& MKLDNN
URL: https://github.com/apache/incubator-mxnet/pull/8302#discussion_r161901479
 
 

 ##
 File path: include/mxnet/ndarray.h
 ##
 @@ -823,20 +851,24 @@ class NDArray {
 // storage shape is also updated
 // if data is already allocated, try reuse the storage. Otherwise, free 
the current one
 // and allocate new storage
-inline void CheckAndAllocData(const TShape , int dtype) {
-  CHECK_NE(aux_shapes.size(), 0) << "data is expected to be allocated 
after aux_data";
-  auto dbytes = shape.Size() * mshadow::mshadow_sizeof(dtype);
-  if (shandle.size < dbytes) {
-// free storage if necessary and alloc again
-if (shandle.size > 0) Storage::Get()->Free(shandle);
-// init storage
-shandle = Storage::Get()->Alloc(dbytes, ctx);
-  }
-  // init shape
-  storage_shape = shape;
-  // delay_alloc is only set when data storage handle is present
-  delay_alloc = false;
+void CheckAndAllocData(const TShape , int dtype);
+
+#if MXNET_USE_MKLDNN == 1
+// Have MKL memory reference to the data in the default storage
+// or create memory for MKLDNN.
+void SetMKLMem(const TShape , int dtype);
+void ResetMKLMem() {
+  // If Mkl_mem_ isn't referencing to shandle, we need to reset Mkl_mem_.
 
 Review comment:
   isn't `Mkl_mem_` always referencing to shandle?




[GitHub] eric-haibin-lin commented on a change in pull request #8302: Refactor operators & MKLDNN

2018-01-16 Thread GitBox
eric-haibin-lin commented on a change in pull request #8302: Refactor operators 
& MKLDNN
URL: https://github.com/apache/incubator-mxnet/pull/8302#discussion_r161900341
 
 

 ##
 File path: include/mxnet/ndarray.h
 ##
 @@ -611,6 +565,64 @@ class NDArray {
  << "CheckAndAllocAuxData is not intended for kDefaultStorage";
 ptr_->CheckAndAllocAuxData(i, aux_shape);
   }
+
+#if MXNET_USE_MKLDNN == 1
+  bool IsMKLDNN() const {
+return ptr_->IsMKLDNN();
+  }
+  bool IsDefault() const {
+return ptr_->IsDefault();
+  }
+  /*
+   * All functions below return a raw pointer to mkldnn memory. Actually there
+   * is a shared pointer that holds the memory, either in the NDArray or in the
+   * MKLDNN stream. As long as we call these functions inside an operator, the
+   * returned memory is always valid.
+   */
+
+  /*
+   * This function returns mkldnn::memory with the default primitive_desc.
+   */
+  const mkldnn::memory *GetMKLDNNData() const;
+  /*
+   * This function returns mkldnn::memory with the given primitive_desc
+   * as long as the array size meets the required size in the given
+   * primitive_desc.
+   */
+  const mkldnn::memory *GetMKLDNNData(
+      const mkldnn::memory::primitive_desc &desc) const;
+  /*
+   * This function returns mkldnn::memory with the given primitive_desc.
+   * The returned mkldnn::memory will have the same physical layout as
+   * the given primitive_desc.
+   */
+  const mkldnn::memory *GetMKLDNNDataReorder(
+      const mkldnn::memory::primitive_desc &desc) const;
+
+  void CopyFrom(const mkldnn::memory &mem);
 
 Review comment:
   Do you mind adding the missing documentation for these two functions?




[GitHub] eric-haibin-lin commented on a change in pull request #8302: Refactor operators & MKLDNN

2018-01-16 Thread GitBox
eric-haibin-lin commented on a change in pull request #8302: Refactor operators 
& MKLDNN
URL: https://github.com/apache/incubator-mxnet/pull/8302#discussion_r161899924
 
 

 ##
 File path: include/mxnet/ndarray.h
 ##
 @@ -184,11 +138,17 @@ class NDArray {
  const TBlob &data, const std::vector<TBlob> &aux_data, int dev_id)
  : ptr_(std::make_shared<Chunk>(stype, data, aux_data, dev_id)), shape_(shape),
    dtype_(data.type_flag_), storage_type_(stype), entry_({nullptr, 0, 0}) {
-#if MKL_EXPERIMENTAL == 1
-    Mkl_mem_ = std::make_shared<MKLMemHolder>();
-#endif
   }
 
+  inline bool IsView() const {
+// Sparse arrays don't have a view.
+if (storage_type() == kRowSparseStorage || storage_type() == kCSRStorage)
 
 Review comment:
   In case more sparse storage types are added in the future, I suggest changing it to `if (storage_type() != kDefaultStorage)`




[GitHub] eric-haibin-lin commented on a change in pull request #8302: Refactor operators & MKLDNN

2018-01-16 Thread GitBox
eric-haibin-lin commented on a change in pull request #8302: Refactor operators 
& MKLDNN
URL: https://github.com/apache/incubator-mxnet/pull/8302#discussion_r161904685
 
 

 ##
 File path: src/executor/graph_executor.cc
 ##
 @@ -54,6 +54,14 @@ GraphExecutor::~GraphExecutor() {
   }
 }
 
+inline bool SharableStorage(NDArrayStorageType stype) {
 
 Review comment:
   Not used anymore?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] eric-haibin-lin commented on a change in pull request #8302: Refactor operators & MKLDNN

2018-01-16 Thread GitBox
eric-haibin-lin commented on a change in pull request #8302: Refactor operators 
& MKLDNN
URL: https://github.com/apache/incubator-mxnet/pull/8302#discussion_r161902810
 
 

 ##
 File path: src/common/utils.cc
 ##
 @@ -41,5 +41,21 @@ void CastStorageDispatch(const OpContext& ctx,
   mxnet::op::CastStorageComputeImpl(ctx, input, output);
 }
 
+std::string stype_string(const int x) {
+  switch (x) {
+case kDefaultStorage:
+  return "default";
+case kCSRStorage:
+  return "csr";
+case kRowSparseStorage:
+  return "row_sparse";
+#if MXNET_USE_MKLDNN == 1
+case kMKLDNNStorage:
 
 Review comment:
   kMKLDNNStorage should be removed
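For readers skimming the diff, the mapping that `stype_string` implements can be sketched in Python. The integer codes below are an assumption based on MXNet's `NDArrayStorageType` enum ordering (`kDefaultStorage = 0`, `kRowSparseStorage = 1`, `kCSRStorage = 2`); verify against include/mxnet/ndarray.h before relying on them:

```python
# Illustrative Python mirror of the stype_string() switch above.
# Enum values are assumed from NDArrayStorageType; treat as illustrative.
STYPE_NAMES = {
    0: "default",     # kDefaultStorage
    1: "row_sparse",  # kRowSparseStorage
    2: "csr",         # kCSRStorage
}

def stype_string(x):
    """Return the human-readable name for a storage-type code."""
    return STYPE_NAMES.get(x, "unknown")

print(stype_string(0))  # default
print(stype_string(2))  # csr
```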




[GitHub] eric-haibin-lin commented on issue #9449: Can sequential module be distributed into mulit machines and how?

2018-01-16 Thread GitBox
eric-haibin-lin commented on issue #9449: Can sequential module be distributed 
into mulit machines and how?
URL: 
https://github.com/apache/incubator-mxnet/issues/9449#issuecomment-358120806
 
 
   I'm afraid not yet. What's your use case and why are you considering it?




[GitHub] mbaijal opened a new pull request #9459: Remove empty lines at the end of file

2018-01-16 Thread GitBox
mbaijal opened a new pull request #9459: Remove empty lines at the end of file
URL: https://github.com/apache/incubator-mxnet/pull/9459
 
 
   ## Description ##
   Remove blank lines from end of NOTICE file as requested in the 1.0.0 vote 
thread. 
   Issue 10 discussed here - 
https://github.com/apache/incubator-mxnet/issues/8913 
   
   ## Checklist ##
   ### Essentials ###
   - [ ] Passed code style checking (`make lint`)
   - [ ] Changes are complete (i.e. I finished coding on this PR)
   - [ ] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [ ] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, the 
source of the dataset, expected performance on the test set, and a reference to 
the original paper if applicable
   - [ ] To the best of my knowledge, examples are either not affected by this 
change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [ ] Removed empty lines from NOTICE
   
   ## Comments ##
   
   




[GitHub] eric-haibin-lin commented on issue #8914: The custom operator not supported for group context?

2018-01-16 Thread GitBox
eric-haibin-lin commented on issue #8914: The custom operator not supported for 
group context?
URL: 
https://github.com/apache/incubator-mxnet/issues/8914#issuecomment-358111349
 
 
   @mg0880gm could you try latest version of MXNet and update the `__init__` 
function of your `XXCustomOpProp` from 
   ```
   def __init__(self):
   ```
   to 
   ```
   def __init__(self, **kwargs):
   ```
   And see if it works? 
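   For context on why this helps: when a custom operator is created with extra attributes, those attributes are forwarded as keyword arguments to the `CustomOpProp` subclass's constructor, so an `__init__` that accepts no kwargs raises a `TypeError`. The Python mechanics can be shown without MXNet (the class names below are hypothetical stand-ins, not MXNet APIs):

```python
# Stand-ins for a CustomOpProp subclass; names are hypothetical.
class StrictProp:
    def __init__(self):            # accepts no keyword arguments
        pass

class FlexibleProp:
    def __init__(self, **kwargs):  # tolerates framework-supplied attributes
        self.kwargs = kwargs

def instantiate(prop_cls, **op_attrs):
    # Mimics a framework constructing the prop with operator attributes.
    return prop_cls(**op_attrs)

try:
    instantiate(StrictProp, some_attr="1")
except TypeError as e:
    print("StrictProp failed:", e)

prop = instantiate(FlexibleProp, some_attr="1")
print("FlexibleProp ok:", prop.kwargs)  # {'some_attr': '1'}
```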




[GitHub] eric-haibin-lin commented on issue #8302: Refactor operators & MKLDNN

2018-01-16 Thread GitBox
eric-haibin-lin commented on issue #8302: Refactor operators & MKLDNN
URL: https://github.com/apache/incubator-mxnet/pull/8302#issuecomment-358109197
 
 
   @marcoabreu @zheng-da what's the action item to resolve the illegal 
instruction issue on CI? Do we need to update the setup in CI? 




[GitHub] sandeep-krishnamurthy opened a new issue #9458: Support auto infer input shape in convolution operator

2018-01-16 Thread GitBox
sandeep-krishnamurthy opened a new issue #9458: Support auto infer input shape 
in convolution operator
URL: https://github.com/apache/incubator-mxnet/issues/9458
 
 
   MXNet does not support automatically inferring the input shape for the 
convolution operator. It would be very convenient for users to have this 
functionality.
   
   Apart from the advantage to users, this is a limiting factor for the MXNet 
backend to Keras. Keras allows users to specify "None" so that the backend 
framework infers the input shape, which is supported by the other backends (TF 
and CNTK).
   
   An example ResNet test case for this use case can be found here - 
https://github.com/keras-team/keras/blob/master/tests/keras/applications/applications_test.py#L57




[GitHub] eric-haibin-lin commented on issue #9396: inference speed drop after updating mxnet from 0.10.0 to 1.0.0

2018-01-16 Thread GitBox
eric-haibin-lin commented on issue #9396: inference speed drop after updating 
mxnet from 0.10.0 to 1.0.0
URL: 
https://github.com/apache/incubator-mxnet/issues/9396#issuecomment-358107218
 
 
   Hi @nicklhy what network are you using? Does it include any custom 
operators? 




[GitHub] zhreshold commented on issue #8582: Yolo2 operator

2018-01-16 Thread GitBox
zhreshold commented on issue #8582: Yolo2 operator
URL: https://github.com/apache/incubator-mxnet/pull/8582#issuecomment-358102363
 
 
   @eric-haibin-lin CI build problem solved, could you have a look?




[GitHub] zheng-da commented on issue #8302: Refactor operators & MKLDNN

2018-01-16 Thread GitBox
zheng-da commented on issue #8302: Refactor operators & MKLDNN
URL: https://github.com/apache/incubator-mxnet/pull/8302#issuecomment-358086101
 
 
   The problem that mkldnn+GPU can't run on G3 is caused by the fact that 
mkldnn is compiled with avx512 instructions on C5 but G3 doesn't support this 
instruction set. The problem is solved after I compiled it with avx2 on C5.




[GitHub] piiswrong closed pull request #9447: Fix broken links in model_parallel_lstm.md

2018-01-16 Thread GitBox
piiswrong closed pull request #9447: Fix broken links in model_parallel_lstm.md
URL: https://github.com/apache/incubator-mxnet/pull/9447
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/docs/faq/model_parallel_lstm.md b/docs/faq/model_parallel_lstm.md
index 4a02288d5e..1e367eb5f2 100644
--- a/docs/faq/model_parallel_lstm.md
+++ b/docs/faq/model_parallel_lstm.md
@@ -65,15 +65,15 @@ Although the LSTM layers consume less memory than the 
decoder/encoder layers, th
 Thus, the partition on the left will be faster than the one on the right
 because the workload is more evenly distributed.
 
-Currently, the layer partition is implemented in 
[lstm.py](https://github.com/eric-haibin-lin/mxnet/blob/master/example/model-parallel-lstm/lstm.py#L187)
 and configured in 
[lstm_ptb.py](https://github.com/eric-haibin-lin/mxnet/blob/master/example/model-parallel-lstm/lstm.py#L187)
 using the `group2ctx` option.
+Currently, the layer partition is implemented in 
[lstm.py](https://github.com/apache/incubator-mxnet/blob/master/example/model-parallel/lstm/lstm.py#L171)
 and configured in 
[lstm_ptb.py](https://github.com/apache/incubator-mxnet/blob/master/example/model-parallel/lstm/lstm_ptb.py#L97-L102)
 using the `group2ctx` option.
 
 ## Apply Bucketing to Model Parallelism
 
 To achieve model parallelism while using bucketing,
 you need to unroll an LSTM model for each bucket
 to obtain an executor for each.
-For details about how the model is bound, see 
[lstm.py](https://github.com/eric-haibin-lin/mxnet/blob/master/example/model-parallel-lstm/lstm.py#L154).
+For details about how the model is bound, see 
[lstm.py](https://github.com/apache/incubator-mxnet/blob/master/example/model-parallel/lstm/lstm.py#L225-L235).
 
 On the other hand, because model parallelism partitions the model/layers,
 the input data has to be transformed/transposed to the agreed shape.
-For more details, see 
[bucket_io](https://github.com/eric-haibin-lin/mxnet/blob/master/example/model-parallel-lstm/lstm.py#L154).
+For more details, see 
[bucket_io](https://github.com/apache/incubator-mxnet/blob/master/example/rnn/old/bucket_io.py).


 




[incubator-mxnet] branch master updated: Update model_parallel_lstm.md (#9447)

2018-01-16 Thread jxie
This is an automated email from the ASF dual-hosted git repository.

jxie pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new a182aec  Update model_parallel_lstm.md (#9447)
a182aec is described below

commit a182aecee5185a276fac6a0ec3c6825f93c94a22
Author: Haibin Lin 
AuthorDate: Tue Jan 16 11:53:30 2018 -0800

Update model_parallel_lstm.md (#9447)
---
 docs/faq/model_parallel_lstm.md | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/docs/faq/model_parallel_lstm.md b/docs/faq/model_parallel_lstm.md
index 4a02288..1e367eb 100644
--- a/docs/faq/model_parallel_lstm.md
+++ b/docs/faq/model_parallel_lstm.md
@@ -65,15 +65,15 @@ Although the LSTM layers consume less memory than the 
decoder/encoder layers, th
 Thus, the partition on the left will be faster than the one on the right
 because the workload is more evenly distributed.
 
-Currently, the layer partition is implemented in 
[lstm.py](https://github.com/eric-haibin-lin/mxnet/blob/master/example/model-parallel-lstm/lstm.py#L187)
 and configured in 
[lstm_ptb.py](https://github.com/eric-haibin-lin/mxnet/blob/master/example/model-parallel-lstm/lstm.py#L187)
 using the `group2ctx` option.
+Currently, the layer partition is implemented in 
[lstm.py](https://github.com/apache/incubator-mxnet/blob/master/example/model-parallel/lstm/lstm.py#L171)
 and configured in 
[lstm_ptb.py](https://github.com/apache/incubator-mxnet/blob/master/example/model-parallel/lstm/lstm_ptb.py#L97-L102)
 using the `group2ctx` option.
 
 ## Apply Bucketing to Model Parallelism
 
 To achieve model parallelism while using bucketing,
 you need to unroll an LSTM model for each bucket
 to obtain an executor for each.
-For details about how the model is bound, see 
[lstm.py](https://github.com/eric-haibin-lin/mxnet/blob/master/example/model-parallel-lstm/lstm.py#L154).
+For details about how the model is bound, see 
[lstm.py](https://github.com/apache/incubator-mxnet/blob/master/example/model-parallel/lstm/lstm.py#L225-L235).
 
 On the other hand, because model parallelism partitions the model/layers,
 the input data has to be transformed/transposed to the agreed shape.
-For more details, see 
[bucket_io](https://github.com/eric-haibin-lin/mxnet/blob/master/example/model-parallel-lstm/lstm.py#L154).
+For more details, see 
[bucket_io](https://github.com/apache/incubator-mxnet/blob/master/example/rnn/old/bucket_io.py).

-- 
To stop receiving notification emails like this one, please contact
['"comm...@mxnet.apache.org" '].


[incubator-mxnet] branch master updated: correct usage of bool arguments from command line (#8880)

2018-01-16 Thread jxie
This is an automated email from the ASF dual-hosted git repository.

jxie pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new 89b4fa0  correct usage of bool arguments from command line (#8880)
89b4fa0 is described below

commit 89b4fa0648405d755355957f99a2c794a3555f46
Author: Rahul Huilgol 
AuthorDate: Tue Jan 16 11:51:41 2018 -0800

correct usage of bool arguments from command line (#8880)

* correct boolean args in arg parsing

Signed-off-by: Rahul 

* remove print

Signed-off-by: Rahul 

* remove debug changes

Signed-off-by: Rahul 

* using store_true/false

Signed-off-by: Rahul 
---
 example/caffe/caffe_net.py  |  4 ++--
 example/cnn_chinese_text_classification/text_cnn.py |  4 ++--
 example/cnn_text_classification/text_cnn.py |  4 ++--
 example/reinforcement-learning/dqn/dqn_demo.py  |  4 ++--
 example/rnn/bucketing/cudnn_lstm_bucketing.py   |  4 ++--
 example/ssd/demo.py |  8 
 example/ssd/deploy.py   |  4 ++--
 example/ssd/evaluate.py |  8 
 example/ssd/tools/prepare_dataset.py|  4 ++--
 example/ssd/train.py|  8 
 tools/im2rec.py | 15 ---
 11 files changed, 34 insertions(+), 33 deletions(-)
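The pitfall behind these changes: `argparse` with `type=bool` simply calls `bool()` on the raw command-line string, and every non-empty string (including `'False'`) is truthy. The `store_true` action fixes this by keying off the flag's presence. A minimal reproduction, independent of the examples touched here:

```python
import argparse

# Buggy pattern: type=bool converts the raw string, so '--flag False' is truthy.
buggy = argparse.ArgumentParser()
buggy.add_argument('--flag', type=bool, default=False)
print(buggy.parse_args(['--flag', 'False']).flag)   # True (surprising!)

# Fixed pattern from this commit: the flag's presence decides the value.
fixed = argparse.ArgumentParser()
fixed.add_argument('--flag', action='store_true')
print(fixed.parse_args([]).flag)          # False
print(fixed.parse_args(['--flag']).flag)  # True
```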

diff --git a/example/caffe/caffe_net.py b/example/caffe/caffe_net.py
index 0dc4770..0459c90 100644
--- a/example/caffe/caffe_net.py
+++ b/example/caffe/caffe_net.py
@@ -77,8 +77,8 @@ def parse_args():
                        help='the cnn to use (mlp | lenet | <path to network json file>')
 parser.add_argument('--caffe-loss', type=int, default=0,
 help='Use CaffeLoss symbol')
-parser.add_argument('--caffe-data', type=bool, default=False,
-help='Use Caffe input-data layer (True | False)')
+parser.add_argument('--caffe-data', action='store_true',
+help='Use Caffe input-data layer only if specified')
 parser.add_argument('--data-dir', type=str, default='mnist/',
 help='the input data directory')
 parser.add_argument('--gpus', type=str,
diff --git a/example/cnn_chinese_text_classification/text_cnn.py 
b/example/cnn_chinese_text_classification/text_cnn.py
index 2a78fd2..4598a52 100644
--- a/example/cnn_chinese_text_classification/text_cnn.py
+++ b/example/cnn_chinese_text_classification/text_cnn.py
@@ -38,8 +38,8 @@ logger = logging.getLogger(__name__)
 
 parser = argparse.ArgumentParser(description="CNN for text classification",
  
formatter_class=argparse.ArgumentDefaultsHelpFormatter)
-parser.add_argument('--pretrained-embedding', type=bool, default=False,
-help='use pre-trained word2vec')
+parser.add_argument('--pretrained-embedding', action='store_true',
+help='use pre-trained word2vec only if specified')
 parser.add_argument('--num-embed', type=int, default=300,
 help='embedding layer size')
 parser.add_argument('--gpus', type=str, default='',
diff --git a/example/cnn_text_classification/text_cnn.py 
b/example/cnn_text_classification/text_cnn.py
index d88a8e6..9ad9443 100644
--- a/example/cnn_text_classification/text_cnn.py
+++ b/example/cnn_text_classification/text_cnn.py
@@ -31,8 +31,8 @@ logging.basicConfig(level=logging.DEBUG)
 
 parser = argparse.ArgumentParser(description="CNN for text classification",
  
formatter_class=argparse.ArgumentDefaultsHelpFormatter)
-parser.add_argument('--pretrained-embedding', type=bool, default=False,
-help='use pre-trained word2vec')
+parser.add_argument('--pretrained-embedding', action='store_true',
+help='use pre-trained word2vec only if specified')
 parser.add_argument('--num-embed', type=int, default=300,
 help='embedding layer size')
 parser.add_argument('--gpus', type=str, default='',
diff --git a/example/reinforcement-learning/dqn/dqn_demo.py 
b/example/reinforcement-learning/dqn/dqn_demo.py
index 750da7a..8655d5c 100644
--- a/example/reinforcement-learning/dqn/dqn_demo.py
+++ b/example/reinforcement-learning/dqn/dqn_demo.py
@@ -55,8 +55,8 @@ def main():
 help='Eps of the AdaGrad optimizer')
 parser.add_argument('--clip-gradient', required=False, type=float, 
default=None,
 help='Clip threshold of the AdaGrad optimizer')
-parser.add_argument('--double-q', required=False, type=bool, default=False,
-help='Use Double DQN')
+parser.add_argument('--double-q', 

[GitHub] piiswrong closed pull request #8880: correct usage of bool arguments from command line

2018-01-16 Thread GitBox
piiswrong closed pull request #8880: correct usage of bool arguments from 
command line
URL: https://github.com/apache/incubator-mxnet/pull/8880
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/example/caffe/caffe_net.py b/example/caffe/caffe_net.py
index 0dc4770a24..0459c901e1 100644
--- a/example/caffe/caffe_net.py
+++ b/example/caffe/caffe_net.py
@@ -77,8 +77,8 @@ def parse_args():
                        help='the cnn to use (mlp | lenet | <path to network json file>')
 parser.add_argument('--caffe-loss', type=int, default=0,
 help='Use CaffeLoss symbol')
-parser.add_argument('--caffe-data', type=bool, default=False,
-help='Use Caffe input-data layer (True | False)')
+parser.add_argument('--caffe-data', action='store_true',
+help='Use Caffe input-data layer only if specified')
 parser.add_argument('--data-dir', type=str, default='mnist/',
 help='the input data directory')
 parser.add_argument('--gpus', type=str,
diff --git a/example/cnn_chinese_text_classification/text_cnn.py 
b/example/cnn_chinese_text_classification/text_cnn.py
index 2a78fd2578..4598a52e66 100644
--- a/example/cnn_chinese_text_classification/text_cnn.py
+++ b/example/cnn_chinese_text_classification/text_cnn.py
@@ -38,8 +38,8 @@
 
 parser = argparse.ArgumentParser(description="CNN for text classification",
  
formatter_class=argparse.ArgumentDefaultsHelpFormatter)
-parser.add_argument('--pretrained-embedding', type=bool, default=False,
-help='use pre-trained word2vec')
+parser.add_argument('--pretrained-embedding', action='store_true',
+help='use pre-trained word2vec only if specified')
 parser.add_argument('--num-embed', type=int, default=300,
 help='embedding layer size')
 parser.add_argument('--gpus', type=str, default='',
diff --git a/example/cnn_text_classification/text_cnn.py 
b/example/cnn_text_classification/text_cnn.py
index d88a8e6994..9ad9443984 100644
--- a/example/cnn_text_classification/text_cnn.py
+++ b/example/cnn_text_classification/text_cnn.py
@@ -31,8 +31,8 @@
 
 parser = argparse.ArgumentParser(description="CNN for text classification",
  
formatter_class=argparse.ArgumentDefaultsHelpFormatter)
-parser.add_argument('--pretrained-embedding', type=bool, default=False,
-help='use pre-trained word2vec')
+parser.add_argument('--pretrained-embedding', action='store_true',
+help='use pre-trained word2vec only if specified')
 parser.add_argument('--num-embed', type=int, default=300,
 help='embedding layer size')
 parser.add_argument('--gpus', type=str, default='',
diff --git a/example/reinforcement-learning/dqn/dqn_demo.py 
b/example/reinforcement-learning/dqn/dqn_demo.py
index 750da7a69a..8655d5cb55 100644
--- a/example/reinforcement-learning/dqn/dqn_demo.py
+++ b/example/reinforcement-learning/dqn/dqn_demo.py
@@ -55,8 +55,8 @@ def main():
 help='Eps of the AdaGrad optimizer')
 parser.add_argument('--clip-gradient', required=False, type=float, 
default=None,
 help='Clip threshold of the AdaGrad optimizer')
-parser.add_argument('--double-q', required=False, type=bool, default=False,
-help='Use Double DQN')
+parser.add_argument('--double-q', action='store_true',
+help='Use Double DQN only if specified')
 parser.add_argument('--wd', required=False, type=float, default=0.0,
 help='Weight of the L2 Regularizer')
 parser.add_argument('-c', '--ctx', required=False, type=str, default='gpu',
diff --git a/example/rnn/bucketing/cudnn_lstm_bucketing.py 
b/example/rnn/bucketing/cudnn_lstm_bucketing.py
index e9c3237f26..84cfc9d438 100644
--- a/example/rnn/bucketing/cudnn_lstm_bucketing.py
+++ b/example/rnn/bucketing/cudnn_lstm_bucketing.py
@@ -33,8 +33,8 @@
 help='hidden layer size')
 parser.add_argument('--num-embed', type=int, default=200,
 help='embedding layer size')
-parser.add_argument('--bidirectional', type=bool, default=False,
-help='whether to use bidirectional layers')
+parser.add_argument('--bidirectional', action='store_true',
+help='uses bidirectional layers if specified')
 parser.add_argument('--gpus', type=str,
 help='list of gpus to run, e.g. 0 or 0,2,5. empty means 
using cpu. ' \
  'Increase batch size when using multiple gpus for 
best performance.')
diff --git a/example/ssd/demo.py b/example/ssd/demo.py
index 965f2ecec5..8106eb553a 100644
--- 

[GitHub] szha commented on issue #6257: Merge AMD-HIP port

2018-01-16 Thread GitBox
szha commented on issue #6257: Merge AMD-HIP port
URL: 
https://github.com/apache/incubator-mxnet/issues/6257#issuecomment-358082623
 
 
   @sriharikarnam submodules are versioned by commit hash, so you may go ahead 
and make changes in the submodules first, and once merged, update the 
submodules in mxnet.




[GitHub] cjolivier01 commented on a change in pull request #9369: UT fix for Windows

2018-01-16 Thread GitBox
cjolivier01 commented on a change in pull request #9369: UT fix for Windows
URL: https://github.com/apache/incubator-mxnet/pull/9369#discussion_r161864850
 
 

 ##
 File path: src/operator/nn/batch_norm-inl.h
 ##
 @@ -56,7 +56,7 @@ constexpr int DEFAULT_AXIS = 1;
 }  // namespace batchnorm
 
 /*! \brief Parameters for BatchNorm operator */
-struct BatchNormParam : public dmlc::Parameter<BatchNormParam> {
+struct MXNET_API BatchNormParam : public dmlc::Parameter<BatchNormParam> {
 
 Review comment:
   Ok, I get it now -- for the unit tests. I think it's better to link statically?




[GitHub] cjolivier01 commented on a change in pull request #9369: UT fix for Windows

2018-01-16 Thread GitBox
cjolivier01 commented on a change in pull request #9369: UT fix for Windows
URL: https://github.com/apache/incubator-mxnet/pull/9369#discussion_r161858079
 
 

 ##
 File path: tests/CMakeLists.txt
 ##
 @@ -21,7 +21,7 @@ if(NOT MSVC)
 endif()
 
 # FIXME MSVC unit test linking issue
-if(GTEST_FOUND AND NOT MSVC)
+if(GTEST_FOUND)
 
 Review comment:
   This works now? Nice!




[GitHub] cjolivier01 commented on a change in pull request #9369: UT fix for Windows

2018-01-16 Thread GitBox
cjolivier01 commented on a change in pull request #9369: UT fix for Windows
URL: https://github.com/apache/incubator-mxnet/pull/9369#discussion_r161857886
 
 

 ##
 File path: src/operator/nn/batch_norm-inl.h
 ##
 @@ -56,7 +56,7 @@ constexpr int DEFAULT_AXIS = 1;
 }  // namespace batchnorm
 
 /*! \brief Parameters for BatchNorm operator */
-struct BatchNormParam : public dmlc::Parameter<BatchNormParam> {
+struct MXNET_API BatchNormParam : public dmlc::Parameter<BatchNormParam> {
 
 Review comment:
   I don't see MXNET_API used outside of the main mxnet include directory. Why 
have it internally like this?




[GitHub] cjolivier01 commented on a change in pull request #9369: UT fix for Windows

2018-01-16 Thread GitBox
cjolivier01 commented on a change in pull request #9369: UT fix for Windows
URL: https://github.com/apache/incubator-mxnet/pull/9369#discussion_r161857543
 
 

 ##
 File path: src/operator/nn/batch_norm-imp.h
 ##
 @@ -0,0 +1,22 @@
+
+#ifndef MXNET_OPERATOR_NN_BATCH_NORM_IMP_H_
 
 Review comment:
   Is this just a move?




[GitHub] thinksanky commented on issue #43: fixed version drop down selection menu bar for 0.12.0 and 0.12.1 vers?

2018-01-16 Thread GitBox
thinksanky commented on issue #43: fixed version drop down selection menu bar 
for 0.12.0 and 0.12.1 vers?
URL: 
https://github.com/apache/incubator-mxnet-site/pull/43#issuecomment-358075434
 
 
   @szha - This change only affects the versioned (static) websites, which do 
not run nightly builds, so they are not changed.




[GitHub] thinksanky commented on issue #42: Fix gpu install instructions

2018-01-16 Thread GitBox
thinksanky commented on issue #42: Fix gpu install instructions
URL: 
https://github.com/apache/incubator-mxnet-site/pull/42#issuecomment-358074588
 
 
   When you say MXNet code base, do you mean master? Master already had these 
changes, so I didn't have to change this.




[GitHub] cjolivier01 commented on issue #9369: UT fix for Windows

2018-01-16 Thread GitBox
cjolivier01 commented on issue #9369: UT fix for Windows
URL: https://github.com/apache/incubator-mxnet/pull/9369#issuecomment-358073203
 
 
   Would it be better to just have the Windows unit tests link statically like 
the Unix tests do? That way, you wouldn't need to export internal classes.




[incubator-mxnet-site] branch asf-site updated (8d0313b -> 942cfb7)

2018-01-16 Thread haibin
This is an automated email from the ASF dual-hosted git repository.

haibin pushed a change to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git.


from 8d0313b  Nightly build
 add 6ae4ef8  fixed broken links for scala index page
 new 942cfb7  Merge pull request #45 from 
thinksanky/fix-broken-links-01152018

The 1 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 api/scala/index.html | 8 
 versions/0.11.0/api/scala/index.html | 8 
 versions/0.12.0/api/scala/index.html | 8 
 versions/0.12.1/api/scala/index.html | 8 
 4 files changed, 16 insertions(+), 16 deletions(-)

-- 
To stop receiving notification emails like this one, please contact
['"comm...@mxnet.apache.org" '].


[incubator-mxnet-site] 01/01: Merge pull request #45 from thinksanky/fix-broken-links-01152018

2018-01-16 Thread haibin
This is an automated email from the ASF dual-hosted git repository.

haibin pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git

commit 942cfb71559a9f4c645df60ff5b8ceadbe80cc45
Merge: 8d0313b 6ae4ef8
Author: Haibin Lin 
AuthorDate: Tue Jan 16 11:05:53 2018 -0800

Merge pull request #45 from thinksanky/fix-broken-links-01152018

fixed broken links for scala index page

 api/scala/index.html | 8 
 versions/0.11.0/api/scala/index.html | 8 
 versions/0.12.0/api/scala/index.html | 8 
 versions/0.12.1/api/scala/index.html | 8 
 4 files changed, 16 insertions(+), 16 deletions(-)



[GitHub] eric-haibin-lin closed pull request #45: fixed broken links for scala index page

2018-01-16 Thread GitBox
eric-haibin-lin closed pull request #45: fixed broken links for scala index page
URL: https://github.com/apache/incubator-mxnet-site/pull/45
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/api/scala/index.html b/api/scala/index.html
index 97b66a51..8e53b9ea 100644
--- a/api/scala/index.html
+++ b/api/scala/index.html
@@ -206,8 +206,8 @@ 
 Resources
 https://mxnet.incubator.apache.org/api/scala/docs/index.html;>MXNet Scala 
API Documentation
 https://mxnet.incubator.apache.org/tutorials/scala/mnist.html;>Handwritten
 Digit Classification in Scala
-https://github.com/dmlc/mxnet/blob/master/scala-package/examples/src/main/scala/ml/dmlc/mxnetexamples/neuralstyle/NeuralStyle.scala;>Neural
 Style in Scala on MXNet
-https://github.com/dmlc/mxnet/tree/master/scala-package/examples/src/main/scala/ml/dmlc/mxnetexamples;>More
 Scala Examples
+https://github.com/dmlc/mxnet/blob/master/scala-package/examples/src/main/scala/org/apache/mxnetexamples/neuralstyle/NeuralStyle.scala;>Neural
 Style in Scala on MXNet
+https://github.com/dmlc/mxnet/tree/master/scala-package/examples/src/main/scala/org/apache/mxnetexamples;>More
 Scala Examples
 
 
 
@@ -264,8 +264,8 @@ 
 
 https://mxnet.incubator.apache.org/api/scala/docs/index.html;>MXNet Scala 
API Documentation
 https://mxnet.incubator.apache.org/tutorials/scala/mnist.html;>Handwritten
 Digit Classification in Scala
-https://github.com/dmlc/mxnet/blob/master/scala-package/examples/src/main/scala/ml/dmlc/mxnetexamples/neuralstyle/NeuralStyle.scala;>Neural
 Style in Scala on MXNet
-https://github.com/dmlc/mxnet/tree/master/scala-package/examples/src/main/scala/ml/dmlc/mxnetexamples;>More
 Scala Examples
+https://github.com/dmlc/mxnet/blob/master/scala-package/examples/src/main/scala/org/apache/mxnetexamples/neuralstyle/NeuralStyle.scala;>Neural
 Style in Scala on MXNet
+https://github.com/dmlc/mxnet/tree/master/scala-package/examples/src/main/scala/org/apache/mxnetexamples;>More
 Scala Examples
 
 
 
diff --git a/versions/0.11.0/api/scala/index.html 
b/versions/0.11.0/api/scala/index.html
index 6f45666b..e66d2878 100644
--- a/versions/0.11.0/api/scala/index.html
+++ b/versions/0.11.0/api/scala/index.html
@@ -162,8 +162,8 @@ 
 Resources
 https://mxnet.incubator.apache.org/api/scala/docs/index.html;>MXNet Scala 
API Documentation
 https://mxnet.incubator.apache.org/tutorials/scala/mnist.html;>Handwritten
 Digit Classification in Scala
-https://github.com/dmlc/mxnet/blob/master/scala-package/examples/src/main/scala/ml/dmlc/mxnetexamples/neuralstyle/NeuralStyle.scala;>Neural
 Style in Scala on MXNet
-https://github.com/dmlc/mxnet/tree/master/scala-package/examples/src/main/scala/ml/dmlc/mxnetexamples;>More
 Scala Examples
+https://github.com/dmlc/mxnet/blob/master/scala-package/examples/src/main/scala/org/apache/mxnetexamples/neuralstyle/NeuralStyle.scala;>Neural
 Style in Scala on MXNet
+https://github.com/dmlc/mxnet/tree/master/scala-package/examples/src/main/scala/org/apache/mxnetexamples;>More
 Scala Examples
 
 
 
@@ -216,8 +216,8 @@ 
 
 https://mxnet.incubator.apache.org/api/scala/docs/index.html;>MXNet Scala 
API Documentation
 https://mxnet.incubator.apache.org/tutorials/scala/mnist.html;>Handwritten
 Digit Classification in Scala
-https://github.com/dmlc/mxnet/blob/master/scala-package/examples/src/main/scala/ml/dmlc/mxnetexamples/neuralstyle/NeuralStyle.scala;>Neural
 Style in Scala on MXNet
-https://github.com/dmlc/mxnet/tree/master/scala-package/examples/src/main/scala/ml/dmlc/mxnetexamples;>More
 Scala Examples
+https://github.com/dmlc/mxnet/blob/master/scala-package/examples/src/main/scala/org/apache/mxnetexamples/neuralstyle/NeuralStyle.scala;>Neural
 Style in Scala on MXNet
+https://github.com/dmlc/mxnet/tree/master/scala-package/examples/src/main/scala/org/apache/mxnetexamples;>More
 Scala Examples
 
 
 
diff --git a/versions/0.12.0/api/scala/index.html 
b/versions/0.12.0/api/scala/index.html
index 88d05831..d117a92a 100644
--- a/versions/0.12.0/api/scala/index.html
+++ b/versions/0.12.0/api/scala/index.html
@@ -206,8 +206,8 @@ 
 Resources
 https://mxnet.incubator.apache.org/api/scala/docs/index.html;>MXNet Scala 
API Documentation
 https://mxnet.incubator.apache.org/tutorials/scala/mnist.html;>Handwritten
 Digit Classification in Scala
-https://github.com/dmlc/mxnet/blob/master/scala-package/examples/src/main/scala/ml/dmlc/mxnetexamples/neuralstyle/NeuralStyle.scala;>Neural
 Style in Scala on MXNet
-https://github.com/dmlc/mxnet/tree/master/scala-package/examples/src/main/scala/ml/dmlc/mxnetexamples;>More
 Scala Examples
+https://github.com/dmlc/mxnet/blob/master/scala-package/examples/src/main/scala/org/apache/mxnetexamples/neuralstyle/NeuralStyle.scala;>Neural
 Style in Scala on MXNet

[GitHub] thinksanky commented on issue #45: fixed broken links for scala index page

2018-01-16 Thread GitBox
thinksanky commented on issue #45: fixed broken links for scala index page
URL: 
https://github.com/apache/incubator-mxnet-site/pull/45#issuecomment-358068795
 
 
   @eric-haibin-lin - please take a look at this. Thanks




[GitHub] thinksanky opened a new pull request #45: fixed broken links for scala index page

2018-01-16 Thread GitBox
thinksanky opened a new pull request #45: fixed broken links for scala index 
page
URL: https://github.com/apache/incubator-mxnet-site/pull/45
 
 
   ## Description ##
   Fixed the Scala API index.html to reference the right URLs.




[incubator-mxnet] branch master updated: text: use _contan, import modules, and keep code style consistent (#9446)

2018-01-16 Thread jxie
This is an automated email from the ASF dual-hosted git repository.

jxie pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new b396bc3  text: use _contan, import modules, and keep code style 
consistent (#9446)
b396bc3 is described below

commit b396bc38397e3a26fd6d496deeca8ca50568c0c2
Author: Aston Zhang 
AuthorDate: Tue Jan 16 10:57:10 2018 -0800

text: use _contan, import modules, and keep code style consistent (#9446)

* text: Change constant to _contant and import modules in tests

* import modules only

* 80 chars per line

* Ensure consistency for 100-char per line

* rebuild
---
 .../contrib/text/{constants.py => _constants.py}   |   0
 python/mxnet/contrib/text/embedding.py | 315 ++
 python/mxnet/contrib/text/glossary.py  |  88 ++--
 python/mxnet/contrib/text/indexer.py   |  91 ++--
 python/mxnet/contrib/text/utils.py |  20 +-
 tests/python/unittest/test_contrib_text.py | 467 ++---
 6 files changed, 429 insertions(+), 552 deletions(-)
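The "import modules only" change in this commit swaps direct class imports 
(`from .indexer import TokenIndexer`) for module imports (`from . import 
indexer`), so names are module-qualified at the call site. A minimal sketch of 
the convention, using only the standard library rather than mxnet's own 
modules (`json` and `dumps_sorted` below are illustrative, not from mxnet):

```python
# Sketch of the "import modules only" style adopted in this commit:
# import the module, then qualify the name at the point of use.
import json


def dumps_sorted(obj):
    """Serialize with sorted keys; json.dumps is module-qualified, so the
    origin of the function is visible at the call site."""
    return json.dumps(obj, sort_keys=True)


print(dumps_sorted({"b": 1, "a": 2}))  # prints {"a": 2, "b": 1}
```

Beyond readability, importing the module rather than the name defers attribute 
lookup to call time, which sidesteps circular-import problems between sibling 
modules such as `embedding.py` and `indexer.py`.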

diff --git a/python/mxnet/contrib/text/constants.py 
b/python/mxnet/contrib/text/_constants.py
similarity index 100%
rename from python/mxnet/contrib/text/constants.py
rename to python/mxnet/contrib/text/_constants.py
diff --git a/python/mxnet/contrib/text/embedding.py 
b/python/mxnet/contrib/text/embedding.py
index 2996f1e..adba867 100644
--- a/python/mxnet/contrib/text/embedding.py
+++ b/python/mxnet/contrib/text/embedding.py
@@ -29,37 +29,34 @@ import tarfile
 import warnings
 import zipfile
 
-from . import constants as C
-from .indexer import TokenIndexer
+from . import _constants as C
+from . import indexer
 from ... import ndarray as nd
 from ... import registry
 
 
-class TokenEmbedding(TokenIndexer):
+class TokenEmbedding(indexer.TokenIndexer):
 """Token embedding base class.
 
 
-To load token embeddings from an externally hosted pre-trained
-token embedding file, such as those of GloVe and FastText, use
-`TokenEmbedding.create(embedding_name, pretrained_file_name)`. To get all
-the available `embedding_name` and `pretrained_file_name`, use
+To load token embeddings from an externally hosted pre-trained token 
embedding file, such as
+those of GloVe and FastText, use `TokenEmbedding.create(embedding_name, 
pretrained_file_name)`.
+To get all the available `embedding_name` and `pretrained_file_name`, use
 `TokenEmbedding.get_embedding_and_pretrained_file_names()`.
 
-Alternatively, to load embedding vectors from a custom pre-trained token
-embedding file, use :class:`~mxnet.text.embedding.CustomEmbedding`.
+Alternatively, to load embedding vectors from a custom pre-trained token 
embedding file, use
+:class:`~mxnet.text.embedding.CustomEmbedding`.
 
-For every unknown token, if its representation `self.unknown_token` is
-encountered in the pre-trained token embedding file, index 0 of
-`self.idx_to_vec` maps to the pre-trained token embedding vector loaded 
from
-the file; otherwise, index 0 of `self.idx_to_vec` maps to the token
-embedding vector initialized by `init_unknown_vec`.
+For every unknown token, if its representation `self.unknown_token` is 
encountered in the
+pre-trained token embedding file, index 0 of `self.idx_to_vec` maps to the 
pre-trained token
+embedding vector loaded from the file; otherwise, index 0 of 
`self.idx_to_vec` maps to the
+token embedding vector initialized by `init_unknown_vec`.
 
-If a token is encountered multiple times in the pre-trained token embedding
-file, only the first-encountered token embedding vector will be loaded and
-the rest will be skipped.
+If a token is encountered multiple times in the pre-trained token 
embedding file, only the
+first-encountered token embedding vector will be loaded and the rest will 
be skipped.
 
-For the same token, its index and embedding vector may vary across 
different
-instances of :class:`~mxnet.text.embedding.TokenEmbedding`.
+For the same token, its index and embedding vector may vary across 
different instances of
+:class:`~mxnet.text.embedding.TokenEmbedding`.
 
 
 Properties
@@ -67,20 +64,18 @@ class TokenEmbedding(TokenIndexer):
 token_to_idx : dict mapping str to int
 A dict mapping each token to its index integer.
 idx_to_token : list of strs
-A list of indexed tokens where the list indices and the token indices
-are aligned.
+A list of indexed tokens where the list indices and the token indices 
are aligned.
 unknown_token : hashable object
-The representation for any unknown token. In other words, any
-unknown token will be indexed as the same representation.
+The representation for any unknown 

[GitHub] piiswrong closed pull request #9446: text: use _contan, import modules, and keep code style consistent

2018-01-16 Thread GitBox
piiswrong closed pull request #9446: text: use _contan, import modules, and 
keep code style consistent
URL: https://github.com/apache/incubator-mxnet/pull/9446
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/python/mxnet/contrib/text/constants.py 
b/python/mxnet/contrib/text/_constants.py
similarity index 100%
rename from python/mxnet/contrib/text/constants.py
rename to python/mxnet/contrib/text/_constants.py
diff --git a/python/mxnet/contrib/text/embedding.py 
b/python/mxnet/contrib/text/embedding.py
index 2996f1ea9f..adba867223 100644
--- a/python/mxnet/contrib/text/embedding.py
+++ b/python/mxnet/contrib/text/embedding.py
@@ -29,37 +29,34 @@
 import warnings
 import zipfile
 
-from . import constants as C
-from .indexer import TokenIndexer
+from . import _constants as C
+from . import indexer
 from ... import ndarray as nd
 from ... import registry
 
 
-class TokenEmbedding(TokenIndexer):
+class TokenEmbedding(indexer.TokenIndexer):
 """Token embedding base class.
 
 
-To load token embeddings from an externally hosted pre-trained
-token embedding file, such as those of GloVe and FastText, use
-`TokenEmbedding.create(embedding_name, pretrained_file_name)`. To get all
-the available `embedding_name` and `pretrained_file_name`, use
+To load token embeddings from an externally hosted pre-trained token 
embedding file, such as
+those of GloVe and FastText, use `TokenEmbedding.create(embedding_name, 
pretrained_file_name)`.
+To get all the available `embedding_name` and `pretrained_file_name`, use
 `TokenEmbedding.get_embedding_and_pretrained_file_names()`.
 
-Alternatively, to load embedding vectors from a custom pre-trained token
-embedding file, use :class:`~mxnet.text.embedding.CustomEmbedding`.
+Alternatively, to load embedding vectors from a custom pre-trained token 
embedding file, use
+:class:`~mxnet.text.embedding.CustomEmbedding`.
 
-For every unknown token, if its representation `self.unknown_token` is
-encountered in the pre-trained token embedding file, index 0 of
-`self.idx_to_vec` maps to the pre-trained token embedding vector loaded 
from
-the file; otherwise, index 0 of `self.idx_to_vec` maps to the token
-embedding vector initialized by `init_unknown_vec`.
+For every unknown token, if its representation `self.unknown_token` is 
encountered in the
+pre-trained token embedding file, index 0 of `self.idx_to_vec` maps to the 
pre-trained token
+embedding vector loaded from the file; otherwise, index 0 of 
`self.idx_to_vec` maps to the
+token embedding vector initialized by `init_unknown_vec`.
 
-If a token is encountered multiple times in the pre-trained token embedding
-file, only the first-encountered token embedding vector will be loaded and
-the rest will be skipped.
+If a token is encountered multiple times in the pre-trained token 
embedding file, only the
+first-encountered token embedding vector will be loaded and the rest will 
be skipped.
 
-For the same token, its index and embedding vector may vary across 
different
-instances of :class:`~mxnet.text.embedding.TokenEmbedding`.
+For the same token, its index and embedding vector may vary across 
different instances of
+:class:`~mxnet.text.embedding.TokenEmbedding`.
 
 
 Properties
@@ -67,20 +64,18 @@ class TokenEmbedding(TokenIndexer):
 token_to_idx : dict mapping str to int
 A dict mapping each token to its index integer.
 idx_to_token : list of strs
-A list of indexed tokens where the list indices and the token indices
-are aligned.
+A list of indexed tokens where the list indices and the token indices 
are aligned.
 unknown_token : hashable object
-The representation for any unknown token. In other words, any
-unknown token will be indexed as the same representation.
+The representation for any unknown token. In other words, any unknown 
token will be indexed
+as the same representation.
 reserved_tokens : list of strs or None
 A list of reserved tokens that will always be indexed.
 vec_len : int
 The length of the embedding vector for each token.
 idx_to_vec : mxnet.ndarray.NDArray
-For all the indexed tokens in this embedding, this NDArray maps each
-token's index to an embedding vector. The largest valid index maps
-to the initialized embedding vector for every reserved token, such as 
an
-unknown_token token and a padding token.
+For all the indexed tokens in this embedding, this NDArray maps each 
token's index to an
+embedding vector. The largest valid index maps to the initialized 
embedding vector for every
+  

[GitHub] eric-haibin-lin commented on issue #9452: A new deep learning visualization tool from baidu support MXNet!

2018-01-16 Thread GitBox
eric-haibin-lin commented on issue #9452: A new deep learning visualization 
tool from baidu support MXNet!
URL: 
https://github.com/apache/incubator-mxnet/issues/9452#issuecomment-358062669
 
 
   @reminisce @zihaolucky 



