[GitHub] thbupt commented on issue #9420: add use_global_stats in nn.BatchNorm

2018-02-25 Thread GitBox
thbupt commented on issue #9420: add use_global_stats in nn.BatchNorm
URL: https://github.com/apache/incubator-mxnet/pull/9420#issuecomment-368417931
 
 
   @7oud I have the same question. I think use_global_stats=True should be used when you finetune a pretrained model such as ResNet or VGG.
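   A minimal Gluon sketch of what that looks like (assuming the `use_global_stats` parameter added by this PR is available; the toy network is illustrative):
   ```python
   import mxnet as mx
   from mxnet.gluon import nn

   # With use_global_stats=True, BatchNorm normalizes with the stored running
   # mean/variance even during training, which is the usual choice when
   # finetuning a pretrained backbone with small batches.
   net = nn.HybridSequential()
   net.add(nn.Conv2D(16, kernel_size=3),
           nn.BatchNorm(use_global_stats=True),
           nn.Activation('relu'))
   net.initialize()
   x = mx.nd.random.uniform(shape=(1, 3, 32, 32))
   with mx.autograd.record():
       y = net(x)  # running statistics are used, not per-batch statistics
   ```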




[GitHub] cjolivier01 commented on issue #9880: TVM bridge support to JIT NDArray Function by TVM

2018-02-25 Thread GitBox
cjolivier01 commented on issue #9880: TVM bridge support to JIT NDArray 
Function by TVM
URL: https://github.com/apache/incubator-mxnet/pull/9880#issuecomment-368417298
 
 
   LGTM




[GitHub] cjolivier01 commented on a change in pull request #9880: TVM bridge support to JIT NDArray Function by TVM

2018-02-25 Thread GitBox
cjolivier01 commented on a change in pull request #9880: TVM bridge support to 
JIT NDArray Function by TVM
URL: https://github.com/apache/incubator-mxnet/pull/9880#discussion_r170513959
 
 

 ##
 File path: src/nnvm/tvm_bridge.cc
 ##
 @@ -0,0 +1,180 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file tvm_bridge.cc
+ * \brief Bridge to run TVM's PackedFunc in MXNet's async engine.
+ *
+ *  This bridge is mainly used to expose MXNet's async engine push to
+ *  TVM. It only uses the TVM runtime in header-only mode, which means
+ *  there are no link dependencies.
+ *
+ *  Support for TVM is optional even when this code
+ *  is always compiled and built with the project.
+ *  We choose this strategy because we do not yet want
+ *  llvm as a dependency (which TVM uses). So instead we expose hooks
+ *  to TVM and let users use this feature when they have TVM installed.
+ *
+ *  We do require TVM and MXNet to be built with the same C++ ABI of std::function.
+ */
+#define TVM_RUNTIME_HEADER_ONLY 1
+#include <tvm/runtime/packed_func.h>
+#include <mxnet/c_api.h>
+#include <mxnet/ndarray.h>
+#include <mxnet/engine.h>
+
+#include <memory>
+
+namespace mxnet {
+
+using tvm::runtime::PackedFunc;
+using tvm::runtime::TVMArgs;
+using tvm::runtime::TVMRetValue;
+
+/*!
+ * \brief Async functor object that wraps a PackedFunc and the
+ *  calling arguments of the function.
+ */
+class TVMFunctor {
+ public:
+  // constructor
+  explicit TVMFunctor(PackedFunc func, PackedFunc fset_stream)
+      : func_(func), fset_stream_(fset_stream) {}
+
+  void Init(const TVMArgs& args,
+            const std::vector<int>& const_loc,
+            std::vector<Engine::VarHandle>* const_vars,
+            std::vector<Engine::VarHandle>* mutate_vars) {
+    values_.clear();
+    type_codes_.clear();
+    values_.insert(values_.end(), args.values, args.values + args.size());
+    type_codes_.insert(
+        type_codes_.end(), args.type_codes, args.type_codes + args.size());
+
+    size_t const_loc_ptr = 0;
+    for (int i = 0; i < args.size(); ++i) {
+      if (args.type_codes[i] == kTVMNDArrayTypeCode) {
+        const NDArray& nd =
+            static_cast<NDArray*>(args.values[i].v_handle)[0];
+        // We cannot set the value until
+        type_codes_[i] = kArrayHandle;
+        array_data_.push_back(nd);
+        array_loc_.push_back(i);
+        // check if there is read or mutate
+        // by default assume we mutate the array.
+        if (const_loc_ptr < const_loc.size() &&
+            i == const_loc[const_loc_ptr]) {
+          const_vars->push_back(nd.var());
 
 Review comment:
   ok




[GitHub] cjolivier01 commented on a change in pull request #9880: TVM bridge support to JIT NDArray Function by TVM

2018-02-25 Thread GitBox
cjolivier01 commented on a change in pull request #9880: TVM bridge support to 
JIT NDArray Function by TVM
URL: https://github.com/apache/incubator-mxnet/pull/9880#discussion_r170513895
 
 

 ##
 File path: src/nnvm/tvm_bridge.cc
 ##
 @@ -0,0 +1,180 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file tvm_bridge.cc
+ * \brief Bridge to run TVM's PackedFunc in MXNet's async engine.
+ *
+ *  This bridge is mainly used to expose MXNet's async engine push to
+ *  TVM. It only uses the TVM runtime in header-only mode, which means
+ *  there are no link dependencies.
+ *
+ *  Support for TVM is optional even when this code
+ *  is always compiled and built with the project.
+ *  We choose this strategy because we do not yet want
+ *  llvm as a dependency (which TVM uses). So instead we expose hooks
+ *  to TVM and let users use this feature when they have TVM installed.
+ *
+ *  We do require TVM and MXNet to be built with the same C++ ABI of std::function.
+ */
+#define TVM_RUNTIME_HEADER_ONLY 1
+#include <tvm/runtime/packed_func.h>
+#include <mxnet/c_api.h>
+#include <mxnet/ndarray.h>
+#include <mxnet/engine.h>
+
+#include <memory>
+
+namespace mxnet {
+
+using tvm::runtime::PackedFunc;
+using tvm::runtime::TVMArgs;
+using tvm::runtime::TVMRetValue;
+
+/*!
+ * \brief Async functor object that wraps a PackedFunc and the
+ *  calling arguments of the function.
+ */
+class TVMFunctor {
+ public:
+  // constructor
+  explicit TVMFunctor(PackedFunc func, PackedFunc fset_stream)
+      : func_(func), fset_stream_(fset_stream) {}
+
+  void Init(const TVMArgs& args,
+            const std::vector<int>& const_loc,
+            std::vector<Engine::VarHandle>* const_vars,
+            std::vector<Engine::VarHandle>* mutate_vars) {
+    values_.clear();
+    type_codes_.clear();
+    values_.insert(values_.end(), args.values, args.values + args.size());
+    type_codes_.insert(
+        type_codes_.end(), args.type_codes, args.type_codes + args.size());
+
+    size_t const_loc_ptr = 0;
+    for (int i = 0; i < args.size(); ++i) {
+      if (args.type_codes[i] == kTVMNDArrayTypeCode) {
+        const NDArray& nd =
+            static_cast<NDArray*>(args.values[i].v_handle)[0];
+        // We cannot set the value until
+        type_codes_[i] = kArrayHandle;
+        array_data_.push_back(nd);
+        array_loc_.push_back(i);
+        // check if there is read or mutate
+        // by default assume we mutate the array.
+        if (const_loc_ptr < const_loc.size() &&
+            i == const_loc[const_loc_ptr]) {
+          const_vars->push_back(nd.var());
 
 Review comment:
   ik




[GitHub] eric-haibin-lin opened a new pull request #9887: Non-blocking row_sparse_pull

2018-02-25 Thread GitBox
eric-haibin-lin opened a new pull request #9887: Non-blocking row_sparse_pull 
URL: https://github.com/apache/incubator-mxnet/pull/9887
 
 
   ## Description ##
   This PR adds async execution support for kv.row_sparse_pull. A usage sketch follows the list below.
   The operation used to block because it requires unique row_ids, whose shape cannot be inferred ahead of time. This PR stores the unique row_ids in a row_sparse NDArray, whose data shape can be changed at run time when the pull is executed asynchronously.
   
   - Removed the `use_copy` param in `BroadcastRowSparse` - it is essentially the same as calling `Broadcast`
   - Removed `CopyRetainedRowsToGPU` and always use `SparseRetain`, because `CopyRetainedRowsToGPU` has high invocation overhead and `SparseRetain` has improved performance
   - Revised test cases to test against shapes/dtypes commonly used by users
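   For reference, a minimal sketch of the call that becomes non-blocking with this PR (the key name, shapes, and row ids below are illustrative, not taken from the PR):
   ```python
   import mxnet as mx

   kv = mx.kv.create('local')
   shape = (8, 2)
   kv.init('w', mx.nd.ones(shape).tostype('row_sparse'))

   # Pull only the requested rows. With async support the call can return
   # before the copy finishes; reading `out` synchronizes with the engine.
   row_ids = mx.nd.array([0, 3], dtype='int64')
   out = mx.nd.sparse.zeros('row_sparse', shape)
   kv.row_sparse_pull('w', out=out, row_ids=row_ids)
   print(out)
   ```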
   
   ## Checklist ##
   ### Essentials ###
   - [ ] Passed code style checking (`make lint`)
   - [ ] Changes are complete (i.e. I finished coding on this PR)
   - [ ] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [ ] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, the source of the dataset, expected performance on the test set, and a reference to the original paper if applicable
   - [ ] To the best of my knowledge, examples are either not affected by this change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [ ] Feature1, tests, (and when applicable, API doc)
   - [ ] Feature2, tests, (and when applicable, API doc)
   
   ## Comments ##
   - If this change is a backward incompatible change, why must this change be 
made.
   - Interesting edge cases to note here
   
   




[GitHub] eric-haibin-lin commented on issue #8922: fix a bug in sparse batch loader when batch size is extremely large

2018-02-25 Thread GitBox
eric-haibin-lin commented on issue #8922: fix a bug in sparse batch loader when 
batch size is extremely large
URL: https://github.com/apache/incubator-mxnet/pull/8922#issuecomment-368408689
 
 
   Closing it for now until the test is fixed




[GitHub] eric-haibin-lin closed pull request #8922: fix a bug in sparse batch loader when batch size is extremely large

2018-02-25 Thread GitBox
eric-haibin-lin closed pull request #8922: fix a bug in sparse batch loader 
when batch size is extremely large
URL: https://github.com/apache/incubator-mxnet/pull/8922
 
 
   

This is a PR merged from a forked repository. As GitHub hides the original
diff on merge, it is displayed below for the sake of provenance:

diff --git a/src/io/iter_sparse_batchloader.h b/src/io/iter_sparse_batchloader.h
index d5c9bd2f45..e5e9c1fe38 100644
--- a/src/io/iter_sparse_batchloader.h
+++ b/src/io/iter_sparse_batchloader.h
@@ -68,53 +68,36 @@ class SparseBatchLoader : public BatchLoader, public SparseIIterator<TBlobBatch>
     // if overflown from previous round, directly return false, until BeforeFirst() is called
     if (num_overflow_ != 0) return false;
     index_t top = 0;
-    inst_cache_.clear();
+    offsets_.clear();
     while (sparse_base_->Next()) {
-      inst_cache_.emplace_back(sparse_base_->Value());
-      if (inst_cache_.size() >= param_.batch_size) break;
-    }
-    // no more data instance
-    if (inst_cache_.size() == 0) {
-      return false;
+      const DataInst& inst = sparse_base_->Value();
+      // initialize the data buffer, only called once
+      if (data_.size() == 0) this->InitData(inst);
+      // initialize the number of elements in each buffer, called once per batch
+      if (offsets_.size() == 0) offsets_.resize(inst.data.size(), 0);
+      CopyData(inst, top);
+      if (++top >= param_.batch_size) {
+        SetOutputShape();
+        return true;
+      }
     }
-    if (inst_cache_.size() < param_.batch_size) {
-      CHECK_GT(param_.round_batch, 0);
+    if (top != 0) {
+      CHECK_NE(param_.round_batch, 0)
+          << "round_batch = False is not supported for sparse data iterator";
       num_overflow_ = 0;
       sparse_base_->BeforeFirst();
-      for (; inst_cache_.size() < param_.batch_size; ++num_overflow_) {
+      for (; top < param_.batch_size; ++top, ++num_overflow_) {
         CHECK(sparse_base_->Next()) << "number of input must be bigger than batch size";
-        inst_cache_.emplace_back(sparse_base_->Value());
-      }
-    }
-    out_.num_batch_padd = num_overflow_;
-    CHECK_EQ(inst_cache_.size(), param_.batch_size);
-    this->InitDataFromBatch();
-    for (size_t j = 0; j < inst_cache_.size(); j++) {
-      const auto& d = inst_cache_[j];
-      out_.inst_index[top] = d.index;
-      // TODO(haibin) double check the type?
-      int64_t unit_size = 0;
-      for (size_t i = 0; i < d.data.size(); ++i) {
-        // indptr tensor
-        if (IsIndPtr(i)) {
-          auto indptr = data_[i].get<cpu, 1, int64_t>();
-          if (j == 0) indptr[0] = 0;
-          indptr[j + 1] = indptr[j] + unit_size;
-          offsets_[i] = j;
-        } else {
-          // indices and values tensor
-          unit_size = d.data[i].shape_.Size();
-          MSHADOW_TYPE_SWITCH(data_[i].type_flag_, DType, {
-            const auto begin = offsets_[i];
-            const auto end = offsets_[i] + unit_size;
-            mshadow::Copy(data_[i].get<cpu, 1, DType>().Slice(begin, end),
-                          d.data[i].get_with_shape<cpu, 1, DType>(mshadow::Shape1(unit_size)));
-          });
-          offsets_[i] += unit_size;
-        }
+        const DataInst& inst = sparse_base_->Value();
+        // copy data
+        CopyData(inst, top);
       }
+      SetOutputShape();
+      out_.num_batch_padd = num_overflow_;
+      return true;
     }
-    return true;
+    // no more data instance
+    return false;
   }
 
   virtual const TBlobBatch &Value(void) const {
@@ -138,14 +121,14 @@ class SparseBatchLoader : public BatchLoader, public SparseIIterator<TBlobBatch>
  private:
   /*! \brief base sparse iterator */
   SparseIIterator<DataInst> *sparse_base_;
-  /*! \brief data instances */
-  std::vector<DataInst> inst_cache_;
   /*! \brief data storage type */
   NDArrayStorageType data_stype_;
   /*! \brief data label type */
   NDArrayStorageType label_stype_;
-  /*! \brief tensor offset for slicing */
+  /*! \brief tensor offsets for slicing */
   std::vector<size_t> offsets_;
+  /*! \brief tensor dtypes */
+  std::vector<int> dtypes_;
 
   // check whether ith position is the indptr tensor for a CSR tensor
   inline bool IsIndPtr(size_t i) {
@@ -157,44 +140,100 @@ class SparseBatchLoader : public BatchLoader, public SparseIIterator<TBlobBatch>
       return true;
     }
     // label indptr
-    if (i == label_indptr_offset && label_stype_ == kCSRStorage && data_stype_ == kCSRStorage) {
+    if (i == label_indptr_offset && label_stype_ == kCSRStorage &&
+        data_stype_ == kCSRStorage) {
       return true;
     }
     return false;
   }
 
   // initialize the data holder by using the first batch
-  inline void InitDataFromBatch() {
+  inline void InitData(const DataInst& first_batch) {
     CHECK(data_stype_ == kCSRStorage || label_stype_ == kCSRStorage);
-    CHECK_GT(inst_cache_.size(), 0);
 

[GitHub] moveforever commented on issue #9819: Sometime MXDataIter load data quickly, sometime it load data slowly?

2018-02-25 Thread GitBox
moveforever commented on issue #9819: Sometime MXDataIter load data quickly, 
sometime it load data slowly?
URL: 
https://github.com/apache/incubator-mxnet/issues/9819#issuecomment-368396978
 
 
   The network is not complex; it only includes 5 fully connected hidden layers.




[GitHub] moveforever commented on issue #9819: Sometime MXDataIter load data quickly, sometime it load data slowly?

2018-02-25 Thread GitBox
moveforever commented on issue #9819: Sometime MXDataIter load data quickly, 
sometime it load data slowly?
URL: 
https://github.com/apache/incubator-mxnet/issues/9819#issuecomment-368396075
 
 
   I implemented the Iter as follows:
   
![15984851272413857](https://user-images.githubusercontent.com/5248288/36654869-2c65667c-1afa-11e8-81f4-8d6c0cf77188.jpg)
   I mainly implemented KBIter, similar to LibSVMIter. I implemented KBIter and KBParser, then modified SparsePrefetcherIter and SparseBatchLoader, and I added some base data structures in dmlc-core as follows:

   
![image](https://user-images.githubusercontent.com/5248288/36654965-cfd729da-1afa-11e8-820d-d044992a88a7.png)
   
![image](https://user-images.githubusercontent.com/5248288/36654975-e42a958e-1afa-11e8-9c75-c5ef53fdec94.png)
   
   






[GitHub] moveforever commented on issue #9819: Sometime MXDataIter load data quickly, sometime it load data slowly?

2018-02-25 Thread GitBox
moveforever commented on issue #9819: Sometime MXDataIter load data quickly, 
sometime it load data slowly?
URL: 
https://github.com/apache/incubator-mxnet/issues/9819#issuecomment-368395245
 
 
   I measured the time consumed by every step, as shown below:
   
![image](https://user-images.githubusercontent.com/5248288/36654797-d2207dfa-1af9-11e8-98b2-af5977200753.png)
   Loading data is the most time-consuming step, taking about 90 percent of one batch update.
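   A minimal sketch of that kind of measurement, timing only the iterator's next() calls (the NDArrayIter below stands in for the custom KBIter):
   ```python
   import time
   import mxnet as mx

   data_iter = mx.io.NDArrayIter(mx.nd.ones((10000, 100)), batch_size=256)
   load_time = 0.0
   start = time.time()
   while True:
       t0 = time.time()
       try:
           batch = next(data_iter)   # data loading step
       except StopIteration:
           break
       load_time += time.time() - t0
       # forward/backward/update would run here
   total = time.time() - start
   print('loading: %.3fs of %.3fs total' % (load_time, total))
   ```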




[GitHub] moveforever commented on issue #9819: Sometime MXDataIter load data quickly, sometime it load data slowly?

2018-02-25 Thread GitBox
moveforever commented on issue #9819: Sometime MXDataIter load data quickly, 
sometime it load data slowly?
URL: 
https://github.com/apache/incubator-mxnet/issues/9819#issuecomment-368394556
 
 
   Each line of my data includes dense and sparse parts, as shown below:
   ![1519619151 1](https://user-images.githubusercontent.com/5248288/36654678-4b27bf2a-1af9-11e8-9e97-52ab41df5289.png)
   The first column is the label, the second is dense (csv), and the third is sparse (libsvm).




[GitHub] moveforever commented on issue #9819: Sometime MXDataIter load data quickly, sometime it load data slowly?

2018-02-25 Thread GitBox
moveforever commented on issue #9819: Sometime MXDataIter load data quickly, 
sometime it load data slowly?
URL: 
https://github.com/apache/incubator-mxnet/issues/9819#issuecomment-368394300
 
 
   My data format includes sparse data:
   
![image](https://user-images.githubusercontent.com/5248288/36654508-1f325ef8-1af8-11e8-9259-778ea2f578ee.png)
   






[GitHub] tqchen commented on a change in pull request #9880: TVM bridge support to JIT NDArray Function by TVM

2018-02-25 Thread GitBox
tqchen commented on a change in pull request #9880: TVM bridge support to JIT 
NDArray Function by TVM
URL: https://github.com/apache/incubator-mxnet/pull/9880#discussion_r170495717
 
 

 ##
 File path: src/nnvm/tvm_bridge.cc
 ##
 @@ -0,0 +1,180 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file tvm_bridge.cc
+ * \brief Bridge to run TVM's PackedFunc in MXNet's async engine.
+ *
+ *  This bridge is mainly used to expose MXNet's async engine push to
+ *  TVM. It only uses the TVM runtime in header-only mode, which means
+ *  there are no link dependencies.
+ *
+ *  Support for TVM is optional even when this code
+ *  is always compiled and built with the project.
+ *  We choose this strategy because we do not yet want
+ *  llvm as a dependency (which TVM uses). So instead we expose hooks
+ *  to TVM and let users use this feature when they have TVM installed.
+ *
+ *  We do require TVM and MXNet to be built with the same C++ ABI of std::function.
+ */
+#define TVM_RUNTIME_HEADER_ONLY 1
+#include <tvm/runtime/packed_func.h>
+#include <mxnet/c_api.h>
+#include <mxnet/ndarray.h>
+#include <mxnet/engine.h>
+
+#include <memory>
+
+namespace mxnet {
+
+using tvm::runtime::PackedFunc;
+using tvm::runtime::TVMArgs;
+using tvm::runtime::TVMRetValue;
+
+/*!
+ * \brief Async functor object that wraps a PackedFunc and the
+ *  calling arguments of the function.
+ */
+class TVMFunctor {
+ public:
+  // constructor
+  explicit TVMFunctor(PackedFunc func, PackedFunc fset_stream)
+      : func_(func), fset_stream_(fset_stream) {}
+
+  void Init(const TVMArgs& args,
+            const std::vector<int>& const_loc,
+            std::vector<Engine::VarHandle>* const_vars,
+            std::vector<Engine::VarHandle>* mutate_vars) {
+    values_.clear();
+    type_codes_.clear();
+    values_.insert(values_.end(), args.values, args.values + args.size());
+    type_codes_.insert(
+        type_codes_.end(), args.type_codes, args.type_codes + args.size());
+
+    size_t const_loc_ptr = 0;
+    for (int i = 0; i < args.size(); ++i) {
+      if (args.type_codes[i] == kTVMNDArrayTypeCode) {
+        const NDArray& nd =
+            static_cast<NDArray*>(args.values[i].v_handle)[0];
+        // We cannot set the value until
+        type_codes_[i] = kArrayHandle;
+        array_data_.push_back(nd);
+        array_loc_.push_back(i);
+        // check if there is read or mutate
+        // by default assume we mutate the array.
+        if (const_loc_ptr < const_loc.size() &&
+            i == const_loc[const_loc_ptr]) {
+          const_vars->push_back(nd.var());
+          ++const_loc_ptr;
+        } else {
+          mutate_vars->push_back(nd.var());
+        }
+      } else {
+        CHECK_LT(args.type_codes[i], kTVMType)
+            << "Only allow POD type in mxnet async call";
+      }
+    }
+  }
+
+  Context ctx() {
+    return array_data_[0].ctx();
+  }
+
+  void Run(const RunContext& rctx) {
+    // setup DLTensor
+    for (size_t i = 0; i < array_loc_.size(); ++i) {
+      values_[array_loc_[i]].v_handle =
+          const_cast<DLTensor*>(&(array_data_[i].data().dltensor()));
+    }
+    // run the packed function
+    TVMRetValue rv;
+    TVMArgs args(&values_[0], &type_codes_[0], values_.size());
+    if (ctx().dev_type == Context::kGPU) {
+#if MXNET_USE_CUDA
+      // pass stream via last argument.
+      void* strm = static_cast<void*>(rctx.get_stream<gpu>()->stream_);
+      int dev_type = kDLGPU;
+      fset_stream_(dev_type, rctx.ctx.dev_id, strm);
+      func_.CallPacked(args, &rv);
+      fset_stream_(dev_type, rctx.ctx.dev_id, nullptr);
+#else
+      LOG(FATAL) << "Please compile with CUDA enabled for cuda features";
+#endif
+    } else {
+      func_.CallPacked(args, &rv);
+    }
+  }
+
+ private:
+  /*! \brief The function */
+  PackedFunc func_;
+  /*! \brief Set stream */
+  PackedFunc fset_stream_;
+  /*! \brief Values field */
+  std::vector<TVMValue> values_;
+  /*! \brief type code field */
+  std::vector<int> type_codes_;
+  /*! \brief arrays field */
+  std::vector<NDArray> array_data_;
+  /*! \brief position of array in arguments */
+  std::vector<int> array_loc_;
+};
+
+
+// Wrap a TVM function to a function that invokes MXNet's Engine
+// It does two things: call the engine properly
+// set up the NDArray to DLTensor during invocation.
+void WrapAsyncCall(TVMArgs wrap_args, TVMRetValue* wrap_rv) {
+  PackedFunc f = 

[GitHub] tqchen commented on a change in pull request #9880: TVM bridge support to JIT NDArray Function by TVM

2018-02-25 Thread GitBox
tqchen commented on a change in pull request #9880: TVM bridge support to JIT 
NDArray Function by TVM
URL: https://github.com/apache/incubator-mxnet/pull/9880#discussion_r170495669
 
 

 ##
 File path: src/nnvm/tvm_bridge.cc
 ##
 @@ -0,0 +1,180 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file tvm_bridge.cc
+ * \brief Bridge to run TVM's PackedFunc in MXNet's async engine.
+ *
+ *  This bridge is mainly used to expose MXNet's async engine push to
+ *  TVM. It only uses the TVM runtime in header-only mode, which means
+ *  there are no link dependencies.
+ *
+ *  Support for TVM is optional even when this code
+ *  is always compiled and built with the project.
+ *  We choose this strategy because we do not yet want
+ *  llvm as a dependency (which TVM uses). So instead we expose hooks
+ *  to TVM and let users use this feature when they have TVM installed.
+ *
+ *  We do require TVM and MXNet to be built with the same C++ ABI of std::function.
+ */
+#define TVM_RUNTIME_HEADER_ONLY 1
+#include <tvm/runtime/packed_func.h>
+#include <mxnet/c_api.h>
+#include <mxnet/ndarray.h>
+#include <mxnet/engine.h>
+
+#include <memory>
+
+namespace mxnet {
+
+using tvm::runtime::PackedFunc;
+using tvm::runtime::TVMArgs;
+using tvm::runtime::TVMRetValue;
+
+/*!
+ * \brief Async functor object that wraps a PackedFunc and the
+ *  calling arguments of the function.
+ */
+class TVMFunctor {
+ public:
+  // constructor
+  explicit TVMFunctor(PackedFunc func, PackedFunc fset_stream)
+      : func_(func), fset_stream_(fset_stream) {}
+
+  void Init(const TVMArgs& args,
+            const std::vector<int>& const_loc,
+            std::vector<Engine::VarHandle>* const_vars,
+            std::vector<Engine::VarHandle>* mutate_vars) {
+    values_.clear();
+    type_codes_.clear();
+    values_.insert(values_.end(), args.values, args.values + args.size());
+    type_codes_.insert(
+        type_codes_.end(), args.type_codes, args.type_codes + args.size());
+
+    size_t const_loc_ptr = 0;
+    for (int i = 0; i < args.size(); ++i) {
+      if (args.type_codes[i] == kTVMNDArrayTypeCode) {
+        const NDArray& nd =
+            static_cast<NDArray*>(args.values[i].v_handle)[0];
+        // We cannot set the value until
+        type_codes_[i] = kArrayHandle;
+        array_data_.push_back(nd);
+        array_loc_.push_back(i);
+        // check if there is read or mutate
+        // by default assume we mutate the array.
+        if (const_loc_ptr < const_loc.size() &&
+            i == const_loc[const_loc_ptr]) {
+          const_vars->push_back(nd.var());
 
 Review comment:
   we don't know the size of vector before hand




[GitHub] sxjscience commented on issue #9872: A bug in an example in the python API document

2018-02-25 Thread GitBox
sxjscience commented on issue #9872: A bug in an example in the python API 
document
URL: 
https://github.com/apache/incubator-mxnet/issues/9872#issuecomment-368390268
 
 
   Sorry for not submitting the fix.




[GitHub] dotelos opened a new issue #9872: A bug in an example in the python API document

2018-02-25 Thread GitBox
dotelos opened a new issue #9872: A bug in an example in the python API document
URL: https://github.com/apache/incubator-mxnet/issues/9872
 
 
   This is an example found in the doc for 
[mxnet.autograd.Function](https://mxnet.incubator.apache.org/api/python/autograd.html#mxnet.autograd.Function).
   ```python
   class sigmoid(Function):
       def forward(self, x):
           y = 1 / (1 + mx.nd.exp(-x))
           self.save_for_backward(y)
           return y

       def backward(self, dy):
           # backward takes as many inputs as forward's return value,
           # and returns as many NDArrays as forward's arguments.
           y, = self.saved_tensors
           return y * (1-y)
   ```
   The `backward` method should return `dy * y * (1-y)` instead of `y * (1-y)` 
according to the chain rule.
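   A corrected `backward` (only this method changes):
   ```python
   def backward(self, dy):
       y, = self.saved_tensors
       return dy * y * (1 - y)  # chain rule: scale the local gradient by dy
   ```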




[GitHub] cjolivier01 commented on a change in pull request #9880: TVM bridge support to JIT NDArray Function by TVM

2018-02-25 Thread GitBox
cjolivier01 commented on a change in pull request #9880: TVM bridge support to 
JIT NDArray Function by TVM
URL: https://github.com/apache/incubator-mxnet/pull/9880#discussion_r170493947
 
 

 ##
 File path: src/nnvm/tvm_bridge.cc
 ##
 @@ -0,0 +1,180 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file tvm_bridge.cc
+ * \brief Bridge to run TVM's PackedFunc in MXNet's async engine.
+ *
+ *  This bridge is mainly used to expose MXNet's async engine push to
+ *  TVM. It only uses the TVM runtime in header-only mode, which means
+ *  there are no link dependencies.
+ *
+ *  Support for TVM is optional even when this code
+ *  is always compiled and built with the project.
+ *  We choose this strategy because we do not yet want
+ *  llvm as a dependency (which TVM uses). So instead we expose hooks
+ *  to TVM and let users use this feature when they have TVM installed.
+ *
+ *  We do require TVM and MXNet to be built with the same C++ ABI of std::function.
+ */
+#define TVM_RUNTIME_HEADER_ONLY 1
+#include <tvm/runtime/packed_func.h>
+#include <mxnet/c_api.h>
+#include <mxnet/ndarray.h>
+#include <mxnet/engine.h>
+
+#include <memory>
+
+namespace mxnet {
+
+using tvm::runtime::PackedFunc;
+using tvm::runtime::TVMArgs;
+using tvm::runtime::TVMRetValue;
+
+/*!
+ * \brief Async functor object that wraps a PackedFunc and the
+ *  calling arguments of the function.
+ */
+class TVMFunctor {
+ public:
+  // constructor
+  explicit TVMFunctor(PackedFunc func, PackedFunc fset_stream)
+      : func_(func), fset_stream_(fset_stream) {}
+
+  void Init(const TVMArgs& args,
+            const std::vector<int>& const_loc,
+            std::vector<Engine::VarHandle>* const_vars,
+            std::vector<Engine::VarHandle>* mutate_vars) {
+    values_.clear();
+    type_codes_.clear();
+    values_.insert(values_.end(), args.values, args.values + args.size());
+    type_codes_.insert(
+        type_codes_.end(), args.type_codes, args.type_codes + args.size());
+
+    size_t const_loc_ptr = 0;
+    for (int i = 0; i < args.size(); ++i) {
+      if (args.type_codes[i] == kTVMNDArrayTypeCode) {
+        const NDArray& nd =
+            static_cast<NDArray*>(args.values[i].v_handle)[0];
+        // We cannot set the value until
+        type_codes_[i] = kArrayHandle;
+        array_data_.push_back(nd);
+        array_loc_.push_back(i);
+        // check if there is read or mutate
+        // by default assume we mutate the array.
+        if (const_loc_ptr < const_loc.size() &&
+            i == const_loc[const_loc_ptr]) {
+          const_vars->push_back(nd.var());
+          ++const_loc_ptr;
+        } else {
+          mutate_vars->push_back(nd.var());
+        }
+      } else {
+        CHECK_LT(args.type_codes[i], kTVMType)
+            << "Only allow POD type in mxnet async call";
+      }
+    }
+  }
+
+  Context ctx() {
+    return array_data_[0].ctx();
+  }
+
+  void Run(const RunContext& rctx) {
+    // setup DLTensor
+    for (size_t i = 0; i < array_loc_.size(); ++i) {
+      values_[array_loc_[i]].v_handle =
+          const_cast<DLTensor*>(&(array_data_[i].data().dltensor()));
+    }
+    // run the packed function
+    TVMRetValue rv;
+    TVMArgs args(&values_[0], &type_codes_[0], values_.size());
+    if (ctx().dev_type == Context::kGPU) {
+#if MXNET_USE_CUDA
+      // pass stream via last argument.
+      void* strm = static_cast<void*>(rctx.get_stream<gpu>()->stream_);
+      int dev_type = kDLGPU;
+      fset_stream_(dev_type, rctx.ctx.dev_id, strm);
+      func_.CallPacked(args, &rv);
+      fset_stream_(dev_type, rctx.ctx.dev_id, nullptr);
+#else
+      LOG(FATAL) << "Please compile with CUDA enabled for cuda features";
+#endif
+    } else {
+      func_.CallPacked(args, &rv);
+    }
+  }
+
+ private:
+  /*! \brief The function */
+  PackedFunc func_;
+  /*! \brief Set stream */
+  PackedFunc fset_stream_;
+  /*! \brief Values field */
+  std::vector<TVMValue> values_;
+  /*! \brief type code field */
+  std::vector<int> type_codes_;
+  /*! \brief arrays field */
+  std::vector<NDArray> array_data_;
+  /*! \brief position of array in arguments */
+  std::vector<int> array_loc_;
+};
+
+
+// Wrap a TVM function to a function that invokes MXNet's Engine
+// It does two things: call the engine properly
+// set up the NDArray to DLTensor during invocation.
+void WrapAsyncCall(TVMArgs wrap_args, TVMRetValue* wrap_rv) {
+  PackedFunc f = 

[GitHub] cjolivier01 commented on a change in pull request #9880: TVM bridge support to JIT NDArray Function by TVM

2018-02-25 Thread GitBox
cjolivier01 commented on a change in pull request #9880: TVM bridge support to 
JIT NDArray Function by TVM
URL: https://github.com/apache/incubator-mxnet/pull/9880#discussion_r170493808
 
 

 ##
 File path: src/nnvm/tvm_bridge.cc
 ##
 @@ -0,0 +1,180 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file tvm_bridge.cc
+ * \brief Bridge to run TVM's PackedFunc in MXNet's async engine.
+ *
+ *  This bridge is mainly used to expose MXNet's async engine push to
+ *  TVM. It only uses the TVM runtime in header-only mode, which means
+ *  there are no link dependencies.
+ *
+ *  Support for TVM is optional even when this code
+ *  is always compiled and built with the project.
+ *  We choose this strategy because we do not yet want
+ *  llvm as a dependency (which TVM uses). So instead we expose hooks
+ *  to TVM and let users use this feature when they have TVM installed.
+ *
+ *  We do require TVM and MXNet to be built with the same C++ ABI of std::function.
+ */
+#define TVM_RUNTIME_HEADER_ONLY 1
+#include <tvm/runtime/packed_func.h>
+#include <mxnet/c_api.h>
+#include <mxnet/ndarray.h>
+#include <mxnet/engine.h>
+
+#include <memory>
+
+namespace mxnet {
+
+using tvm::runtime::PackedFunc;
+using tvm::runtime::TVMArgs;
+using tvm::runtime::TVMRetValue;
+
+/*!
+ * \brief Async functor object that wraps a PackedFunc and the
+ *  calling arguments of the function.
+ */
+class TVMFunctor {
+ public:
+  // constructor
+  explicit TVMFunctor(PackedFunc func, PackedFunc fset_stream)
+      : func_(func), fset_stream_(fset_stream) {}
+
+  void Init(const TVMArgs& args,
+            const std::vector<int>& const_loc,
+            std::vector<Engine::VarHandle>* const_vars,
+            std::vector<Engine::VarHandle>* mutate_vars) {
+    values_.clear();
+    type_codes_.clear();
+    values_.insert(values_.end(), args.values, args.values + args.size());
+    type_codes_.insert(
+        type_codes_.end(), args.type_codes, args.type_codes + args.size());
+
+    size_t const_loc_ptr = 0;
+    for (int i = 0; i < args.size(); ++i) {
+      if (args.type_codes[i] == kTVMNDArrayTypeCode) {
+        const NDArray& nd =
+            static_cast<NDArray*>(args.values[i].v_handle)[0];
+        // We cannot set the value until
+        type_codes_[i] = kArrayHandle;
+        array_data_.push_back(nd);
+        array_loc_.push_back(i);
+        // check if there is read or mutate
+        // by default assume we mutate the array.
+        if (const_loc_ptr < const_loc.size() &&
+            i == const_loc[const_loc_ptr]) {
+          const_vars->push_back(nd.var());
 
 Review comment:
   is this called a lot in performance-sensitive areas? should we do a 
reserve()?




[GitHub] cjolivier01 commented on a change in pull request #9880: TVM bridge support to JIT NDArray Function by TVM

2018-02-25 Thread GitBox
cjolivier01 commented on a change in pull request #9880: TVM bridge support to 
JIT NDArray Function by TVM
URL: https://github.com/apache/incubator-mxnet/pull/9880#discussion_r170493864
 
 

 ##
 File path: src/nnvm/tvm_bridge.cc
 ##
 @@ -0,0 +1,180 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file tvm_bridge.cc
+ * \brief Bridge to run TVM's PackedFunc in MXNet's async engine.
+ *
+ *  This bridge is mainly used to expose MXNet's async engine push to
+ *  TVM. It only uses the TVM runtime in header-only mode, which means
+ *  there are no link dependencies.
+ *
+ *  Support for TVM is optional even when this code
+ *  is always compiled and built with the project.
+ *  We choose this strategy because we do not yet want
+ *  llvm as a dependency (which TVM uses). So instead we expose hooks
+ *  to TVM and let users use this feature when they have TVM installed.
+ *
+ *  We do require TVM and MXNet to be built with the same C++ ABI of std::function.
+ */
+#define TVM_RUNTIME_HEADER_ONLY 1
+#include <tvm/runtime/packed_func.h>
+#include <mxnet/c_api.h>
+#include <mxnet/ndarray.h>
+#include <mxnet/engine.h>
+
+#include <memory>
+
+namespace mxnet {
+
+using tvm::runtime::PackedFunc;
+using tvm::runtime::TVMArgs;
+using tvm::runtime::TVMRetValue;
+
+/*!
+ * \brief Async functor object that wraps a PackedFunc and the
+ *  calling arguments of the function.
+ */
+class TVMFunctor {
+ public:
+  // constructor
+  explicit TVMFunctor(PackedFunc func, PackedFunc fset_stream)
+      : func_(func), fset_stream_(fset_stream) {}
+
+  void Init(const TVMArgs& args,
+            const std::vector<int>& const_loc,
+            std::vector<Engine::VarHandle>* const_vars,
+            std::vector<Engine::VarHandle>* mutate_vars) {
+    values_.clear();
+    type_codes_.clear();
+    values_.insert(values_.end(), args.values, args.values + args.size());
+    type_codes_.insert(
+        type_codes_.end(), args.type_codes, args.type_codes + args.size());
+
+    size_t const_loc_ptr = 0;
+    for (int i = 0; i < args.size(); ++i) {
+      if (args.type_codes[i] == kTVMNDArrayTypeCode) {
+        const NDArray& nd =
+            static_cast<NDArray*>(args.values[i].v_handle)[0];
+        // We cannot set the value until
+        type_codes_[i] = kArrayHandle;
+        array_data_.push_back(nd);
+        array_loc_.push_back(i);
+        // check if there is read or mutate
+        // by default assume we mutate the array.
+        if (const_loc_ptr < const_loc.size() &&
+            i == const_loc[const_loc_ptr]) {
+          const_vars->push_back(nd.var());
 
 Review comment:
   (for all vectors here)




[GitHub] aaronmarkham commented on a change in pull request #9878: Docs build all versions refactor

2018-02-25 Thread GitBox
aaronmarkham commented on a change in pull request #9878: Docs build all 
versions refactor
URL: https://github.com/apache/incubator-mxnet/pull/9878#discussion_r170493135
 
 

 ##
 File path: docs/build_version_doc/build_all_version.sh
 ##
 @@ -59,27 +65,8 @@ for tag in $tag_list; do
     make clean
     cd docs
     make clean
-    make html USE_OPENMP=0 || exit 1
-    python build_version_doc/AddVersion.py --file_path "_build/html/" --current_version "$tag" || exit 1
-
-    if [ $tag != 'master' ]
-    then
-        python build_version_doc/AddPackageLink.py --file_path "_build/html/get_started/install.html" \
-            --current_version "$tag" || exit 1
-    fi
-
-    if [ $version_num == 0 ]
-    then
-        cp -a _build/html/. "../../$built"
-    else
-        file_loc="../../$built/versions/$tag"
-        mkdir "$file_loc"
-        cp -a _build/html/. "$file_loc"
-    fi
+    make html USE_OPENMP=1 || exit 1
 
     ((++version_num))
 done
-
-mv "$tag_file" "../../$built/tag.txt"
-cd ../..
-rm -rf "$mxnet_folder"
+
 
 Review comment:
   Added.




[GitHub] lx75249 commented on issue #9809: fix optimizer bug in CPP-Package

2018-02-25 Thread GitBox
lx75249 commented on issue #9809: fix optimizer bug in CPP-Package
URL: https://github.com/apache/incubator-mxnet/pull/9809#issuecomment-368388316
 
 
   Unfortunately we don't have static constructors in C++, and that's why the initialization looks so odd. Emulating a static constructor would make the code more confusing, and registering optimizers in `Find` is about as acceptable as creating an instance in a singleton getter.




[GitHub] dotelos commented on issue #9872: A bug in an example in the python API document

2018-02-25 Thread GitBox
dotelos commented on issue #9872: A bug in an example in the python API document
URL: 
https://github.com/apache/incubator-mxnet/issues/9872#issuecomment-368386176
 
 
   @sxjscience Please fix the example in the doc. 
https://github.com/apache/incubator-mxnet/blob/master/python/mxnet/autograd.py#L375




[GitHub] tqchen commented on issue #9880: TVM bridge support to JIT NDArray Function by TVM

2018-02-25 Thread GitBox
tqchen commented on issue #9880: TVM bridge support to JIT NDArray Function by 
TVM
URL: https://github.com/apache/incubator-mxnet/pull/9880#issuecomment-368382610
 
 
   @marcoabreu addressed the comments. The current test case already covers the CPU and GPU API use-cases of the async engine wrapping.
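   For context, a sketch of how the bridge is exercised from Python (based on the pattern in this PR's test; `to_mxnet_func` is TVM's contrib wrapper, and the kernel below is illustrative):
   ```python
   import tvm
   import mxnet as mx
   from tvm.contrib import mxnet as mxnet_bridge

   # build a trivial TVM kernel: y = x + 1
   n = tvm.var('n')
   x = tvm.placeholder((n,), name='x')
   y = tvm.compute(x.shape, lambda i: x[i] + 1.0, name='y')
   f = tvm.build(tvm.create_schedule(y.op), [x, y], target='llvm')

   # wrap it so calls are pushed through MXNet's async engine;
   # const_loc marks arguments that are only read, not mutated
   mx_f = mxnet_bridge.to_mxnet_func(f, const_loc=[0])
   a = mx.nd.ones((10,))
   b = mx.nd.zeros((10,))
   mx_f(a, b)            # asynchronous: returns before the kernel finishes
   print(b.asnumpy())    # reading the output synchronizes
   ```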




[GitHub] tqchen commented on a change in pull request #9880: TVM bridge support to JIT NDArray Function by TVM

2018-02-25 Thread GitBox
tqchen commented on a change in pull request #9880: TVM bridge support to JIT 
NDArray Function by TVM
URL: https://github.com/apache/incubator-mxnet/pull/9880#discussion_r170490477
 
 

 ##
 File path: include/mxnet/tensor_blob.h
 ##
 @@ -36,8 +36,15 @@
 #include <utility>
 #include <algorithm>
 #include "./base.h"
+
 namespace mxnet {
 
+// redefine DLPack enumeration to be backward compatible.
+const int kCPU = kDLCPU;
 
 Review comment:
   good catch, will change to constexpr




[GitHub] tqchen commented on a change in pull request #9880: TVM bridge support to JIT NDArray Function by TVM

2018-02-25 Thread GitBox
tqchen commented on a change in pull request #9880: TVM bridge support to JIT 
NDArray Function by TVM
URL: https://github.com/apache/incubator-mxnet/pull/9880#discussion_r170490405
 
 

 ##
 File path: include/mxnet/tensor_blob.h
 ##
 @@ -36,8 +36,15 @@
 #include <utility>
 #include <algorithm>
 #include "./base.h"
+
 namespace mxnet {
 
+// redefine DLPack enumeration to be backward compatible.
+const int kCPU = kDLCPU;
+const int kGPU = kDLGPU;
+// extension type code under TVM function.
+const int kTVMNDArrayTypeCode = 19;
 
 Review comment:
   It is picked to be the last reserved enumerator for NNVM




[GitHub] tqchen commented on a change in pull request #9880: TVM bridge support to JIT NDArray Function by TVM

2018-02-25 Thread GitBox
tqchen commented on a change in pull request #9880: TVM bridge support to JIT 
NDArray Function by TVM
URL: https://github.com/apache/incubator-mxnet/pull/9880#discussion_r170490389
 
 

 ##
 File path: include/mxnet/tensor_blob.h
 ##
 @@ -36,8 +36,15 @@
 #include <utility>
 #include <algorithm>
 #include "./base.h"
+
 namespace mxnet {
 
+// redefine DLPack enumeration to be backward compatible.
+const int kCPU = kDLCPU;
+const int kGPU = kDLGPU;
+// extension type code under TVM function.
+const int kTVMNDArrayTypeCode = 19;
 
 Review comment:
   This enumerator is allocated on the TVM side and reserved for the MXNet and NNVM projects, so it is not arbitrarily chosen: 
https://github.com/dmlc/tvm/blob/master/include/tvm/runtime/c_runtime_api.h#L97
   




[GitHub] yajiedesign commented on issue #9798: fix cmake

2018-02-25 Thread GitBox
yajiedesign commented on issue #9798: fix cmake
URL: https://github.com/apache/incubator-mxnet/pull/9798#issuecomment-368384301
 
 
   It would be more appropriate to use a capitalized name, like CUDA_TOOLSET_LIST.




[GitHub] cjolivier01 commented on a change in pull request #9880: TVM bridge support to JIT NDArray Function by TVM

2018-02-25 Thread GitBox
cjolivier01 commented on a change in pull request #9880: TVM bridge support to 
JIT NDArray Function by TVM
URL: https://github.com/apache/incubator-mxnet/pull/9880#discussion_r170489601
 
 

 ##
 File path: include/mxnet/tensor_blob.h
 ##
 @@ -36,8 +36,15 @@
 #include <utility>
 #include <algorithm>
 #include "./base.h"
+
 namespace mxnet {
 
+// redefine DLPack enumeration to be backward compatible.
+const int kCPU = kDLCPU;
+const int kGPU = kDLGPU;
+// extension type code under TVM function.
+const int kTVMNDArrayTypeCode = 19;
 
 Review comment:
   would it make sense to make it an enumerator?




[GitHub] cjolivier01 commented on a change in pull request #9880: TVM bridge support to JIT NDArray Function by TVM

2018-02-25 Thread GitBox
cjolivier01 commented on a change in pull request #9880: TVM bridge support to 
JIT NDArray Function by TVM
URL: https://github.com/apache/incubator-mxnet/pull/9880#discussion_r170489831
 
 

 ##
 File path: include/mxnet/tensor_blob.h
 ##
 @@ -36,8 +36,15 @@
 #include <utility>
 #include <algorithm>
 #include "./base.h"
+
 namespace mxnet {
 
+// redefine DLPack enumeration to be backward compatible.
+const int kCPU = kDLCPU;
+const int kGPU = kDLGPU;
+// extension type code under TVM function.
+const int kTVMNDArrayTypeCode = 19;
 
 Review comment:
   Mostly because the 19 seems arbitrary, and it might be extended to other numbers in the future; in that case, an enum could help manage accidental overlap.
   Although my assumptions here may not be correct.




[GitHub] cjolivier01 commented on a change in pull request #9880: TVM bridge support to JIT NDArray Function by TVM

2018-02-25 Thread GitBox
cjolivier01 commented on a change in pull request #9880: TVM bridge support to 
JIT NDArray Function by TVM
URL: https://github.com/apache/incubator-mxnet/pull/9880#discussion_r170489543
 
 

 ##
 File path: include/mxnet/tensor_blob.h
 ##
 @@ -36,8 +36,15 @@
 #include <utility>
 #include <algorithm>
 #include "./base.h"
+
 namespace mxnet {
 
+// redefine DLPack enumeration to be backward compatible.
+const int kCPU = kDLCPU;
 
 Review comment:
   should this be constexpr? what keeps it from generating an integer in the 
data segment for each file compiled?




[GitHub] tqchen commented on a change in pull request #9880: TVM bridge support to JIT NDArray Function by TVM

2018-02-25 Thread GitBox
tqchen commented on a change in pull request #9880: TVM bridge support to JIT 
NDArray Function by TVM
URL: https://github.com/apache/incubator-mxnet/pull/9880#discussion_r170489245
 
 

 ##
 File path: tests/ci_build/install/ubuntu_install_tvm.sh
 ##
 @@ -0,0 +1,38 @@
+#!/usr/bin/env bash
+
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+# Build and install TVM
+cd /tmp
+git clone https://github.com/dmlc/tvm/ --recursive
 
 Review comment:
   I am aware of that; changed to use a fixed tag.




[GitHub] tqchen commented on issue #9880: TVM bridge support to JIT NDArray Function by TVM

2018-02-25 Thread GitBox
tqchen commented on issue #9880: TVM bridge support to JIT NDArray Function by 
TVM
URL: https://github.com/apache/incubator-mxnet/pull/9880#issuecomment-368382610
 
 
   Addressed the comments. The current test case already covers the CPU and GPU 
API use cases of the async engine wrapping.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] aaronmarkham commented on a change in pull request #9878: Docs build all versions refactor

2018-02-25 Thread GitBox
aaronmarkham commented on a change in pull request #9878: Docs build all 
versions refactor
URL: https://github.com/apache/incubator-mxnet/pull/9878#discussion_r170488869
 
 

 ##
 File path: docs/build_version_doc/setup_docs_ubuntu.sh
 ##
 @@ -0,0 +1,42 @@
+# If you need to build <= v0.12.0 then use a Python 2 environment
+# mxdoc.py - a sphinx extension, was not Python 3 compatible in the old 
versions
+# source activate mxnet_p27
+
+# Install dependencies
+sudo apt-get update
 
 Review comment:
   This is taking a really long time to test, so that's why an update hasn't 
landed yet! I had the benefit of some deps being handled by the DL AMI, but in a 
vanilla container I had a lot of troubleshooting to do. That being said, I'm 
hoping I'm near the end of testing the scripts inside a container for every tag 
we need.
   
   I'm quite tempted to modify the scripts, though, so that they take the tag 
as an input and we do a `RUN` for each; that way we can use the Docker cache 
once a tag's build steps have completed successfully.
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] aaronmarkham commented on a change in pull request #9878: Docs build all versions refactor

2018-02-25 Thread GitBox
aaronmarkham commented on a change in pull request #9878: Docs build all 
versions refactor
URL: https://github.com/apache/incubator-mxnet/pull/9878#discussion_r170488626
 
 

 ##
 File path: docs/build_version_doc/setup_docker.sh
 ##
 @@ -0,0 +1,17 @@
+# Setup Docker
 
 Review comment:
   Ok, I'm removing this.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] cjolivier01 commented on a change in pull request #9798: fix cmake

2018-02-25 Thread GitBox
cjolivier01 commented on a change in pull request #9798: fix cmake
URL: https://github.com/apache/incubator-mxnet/pull/9798#discussion_r170487327
 
 

 ##
 File path: CMakeLists.txt
 ##
 @@ -181,14 +190,6 @@ include_directories(${CMAKE_CURRENT_SOURCE_DIR}/src)
 if(USE_CUDA)
   find_package(CUDA REQUIRED)
 
 Review comment:
   One problem was that without find_package(CUDA), headers weren't being found 
for compilation.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] cjolivier01 commented on issue #9798: fix cmake

2018-02-25 Thread GitBox
cjolivier01 commented on issue #9798: fix cmake
URL: https://github.com/apache/incubator-mxnet/pull/9798#issuecomment-368379579
 
 
   is direct usage of __cuda_toolset documented?
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] cjolivier01 commented on issue #9798: fix cmake

2018-02-25 Thread GitBox
cjolivier01 commented on issue #9798: fix cmake
URL: https://github.com/apache/incubator-mxnet/pull/9798#issuecomment-368378873
 
 
   BTW, setting a double-underscore variable like __cuda_toolset at the top level 
looks suspicious. I don't know of any other CMake packages that require such a 
thing.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] cjolivier01 commented on a change in pull request #9798: fix cmake

2018-02-25 Thread GitBox
cjolivier01 commented on a change in pull request #9798: fix cmake
URL: https://github.com/apache/incubator-mxnet/pull/9798#discussion_r170486378
 
 

 ##
 File path: CMakeLists.txt
 ##
 @@ -46,6 +46,15 @@ if(USE_CUDA AND NOT USE_OLDCMAKECUDA)
 (${CMAKE_VERSION} VERSION_GREATER "3.9.0") OR (${CMAKE_VERSION} 
VERSION_EQUAL "3.9.0")
   )
 )
+
+set(__cuda_toolset "auto" "7.5" "8.0" "9.0")
 
 Review comment:
   Right, I mean without any of these changes: the problem is that it just picks the 
latest one?
   The current implementation allows for setting the CUDA toolset version, 
doesn't it?
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] sunyonggang commented on issue #9622: Unable to reproduce the published mAP for example/ssd with VGGNET model VOC0712 data

2018-02-25 Thread GitBox
sunyonggang commented on issue #9622: Unable to reproduce the published mAP for 
example/ssd with VGGNET model VOC0712  data
URL: 
https://github.com/apache/incubator-mxnet/issues/9622#issuecomment-368375650
 
 
   I trained the example with all default params, but with only 2 GPUs.
   The example `VGG16_reduced 300x300` reports 0.778, but I get only 0.72 at 
epoch 100.
   Then I changed learning_rate to 0.001; mAP was only 0.71 at epoch 200. -_-
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] TaoLv opened a new pull request #9886: Remove useless code in ndarray.h

2018-02-25 Thread GitBox
TaoLv opened a new pull request #9886: Remove useless code in ndarray.h
URL: https://github.com/apache/incubator-mxnet/pull/9886
 
 
   ## Description ##
   Remove useless code in ndarray.h
   
   ## Checklist ##
   ### Essentials ###
   - [x] Passed code style checking (`make lint`)
   - [x] Changes are complete (i.e. I finished coding on this PR)
   - [ ] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [ ] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, 
the source of the dataset, expected performance on the test set, and a reference 
to the original paper if applicable
   - [ ] To the best of my knowledge, examples are either not affected by this 
change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [ ] Feature1, tests, (and when applicable, API doc)
   - [ ] Feature2, tests, (and when applicable, API doc)
   
   ## Comments ##
   - If this change is a backward incompatible change, why must this change be 
made.
   - Interesting edge cases to note here
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] yajiedesign commented on a change in pull request #9798: fix cmake

2018-02-25 Thread GitBox
yajiedesign commented on a change in pull request #9798: fix cmake
URL: https://github.com/apache/incubator-mxnet/pull/9798#discussion_r170477580
 
 

 ##
 File path: CMakeLists.txt
 ##
 @@ -46,6 +46,15 @@ if(USE_CUDA AND NOT USE_OLDCMAKECUDA)
 (${CMAKE_VERSION} VERSION_GREATER "3.9.0") OR (${CMAKE_VERSION} 
VERSION_EQUAL "3.9.0")
   )
 )
+
+set(__cuda_toolset "auto" "7.5" "8.0" "9.0")
 
 Review comment:
   My build machine has multiple versions of CUDA installed, and it needs to be 
compiled with different versions of CUDA, like cu80, cu90, cu91, etc. 
Auto-detection always uses the latest CUDA version.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] yajiedesign commented on a change in pull request #9798: fix cmake

2018-02-25 Thread GitBox
yajiedesign commented on a change in pull request #9798: fix cmake
URL: https://github.com/apache/incubator-mxnet/pull/9798#discussion_r170477137
 
 

 ##
 File path: CMakeLists.txt
 ##
 @@ -46,6 +46,15 @@ if(USE_CUDA AND NOT USE_OLDCMAKECUDA)
 (${CMAKE_VERSION} VERSION_GREATER "3.9.0") OR (${CMAKE_VERSION} 
VERSION_EQUAL "3.9.0")
   )
 )
+
+set(__cuda_toolset "auto" "7.5" "8.0" "9.0")
 
 Review comment:
   or add 9.1?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] cjolivier01 commented on a change in pull request #9798: fix cmake

2018-02-25 Thread GitBox
cjolivier01 commented on a change in pull request #9798: fix cmake
URL: https://github.com/apache/incubator-mxnet/pull/9798#discussion_r170477110
 
 

 ##
 File path: CMakeLists.txt
 ##
 @@ -46,6 +46,15 @@ if(USE_CUDA AND NOT USE_OLDCMAKECUDA)
 (${CMAKE_VERSION} VERSION_GREATER "3.9.0") OR (${CMAKE_VERSION} 
VERSION_EQUAL "3.9.0")
   )
 )
+
+set(__cuda_toolset "auto" "7.5" "8.0" "9.0")
 
 Review comment:
   It already is auto-detecting and offering an override option.
   Why doesn't it work on your machine when it works elsewhere?
   Also, there should be very little above the main config options.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] yajiedesign commented on a change in pull request #9798: fix cmake

2018-02-25 Thread GitBox
yajiedesign commented on a change in pull request #9798: fix cmake
URL: https://github.com/apache/incubator-mxnet/pull/9798#discussion_r170477010
 
 

 ##
 File path: CMakeLists.txt
 ##
 @@ -46,6 +46,15 @@ if(USE_CUDA AND NOT USE_OLDCMAKECUDA)
 (${CMAKE_VERSION} VERSION_GREATER "3.9.0") OR (${CMAKE_VERSION} 
VERSION_EQUAL "3.9.0")
   )
 )
+
+set(__cuda_toolset "auto" "7.5" "8.0" "9.0")
 
 Review comment:
   You mean you want to automatically detect the installed version of CUDA?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] ascust opened a new issue #9885: A question about Operator "crop" and "slice".

2018-02-25 Thread GitBox
ascust opened a new issue #9885: A question about Operator "crop" and "slice".
URL: https://github.com/apache/incubator-mxnet/issues/9885
 
 
   In the document, it says "crop is deprecated. Use slice instead". But I 
think "slice" is not a complete alternative to "crop", because "crop" can take a 
"reference" symbol as an input, while "slice" can only crop with fixed parameters. 
Sometimes the input can have various sizes, and we cannot define everything 
in advance. Is "crop" going to be kept in future releases?
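
   For illustration, a minimal sketch of the difference described above: 
`mx.sym.Crop` following a reference symbol versus `mx.sym.slice` taking fixed 
bounds (the concrete shapes are made-up values):

   ```python
   import mxnet as mx

   data = mx.sym.Variable("data")  # spatial size not known in advance
   ref = mx.sym.Variable("ref")    # reference symbol carrying the target size

   # crop can match a reference symbol's spatial size at runtime:
   cropped = mx.sym.Crop(data, ref, num_args=2)

   # slice needs begin/end fixed when the graph is built:
   sliced = mx.sym.slice(data, begin=(0, 0, 0, 0), end=(1, 3, 224, 224))
   ```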


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] cjolivier01 commented on a change in pull request #9798: fix cmake

2018-02-25 Thread GitBox
cjolivier01 commented on a change in pull request #9798: fix cmake
URL: https://github.com/apache/incubator-mxnet/pull/9798#discussion_r170476526
 
 

 ##
 File path: CMakeLists.txt
 ##
 @@ -46,6 +46,15 @@ if(USE_CUDA AND NOT USE_OLDCMAKECUDA)
 (${CMAKE_VERSION} VERSION_GREATER "3.9.0") OR (${CMAKE_VERSION} 
VERSION_EQUAL "3.9.0")
   )
 )
+
+set(__cuda_toolset "auto" "7.5" "8.0" "9.0")
 
 Review comment:
   This can't go here, above all of the config options.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] cjolivier01 commented on a change in pull request #9798: fix cmake

2018-02-25 Thread GitBox
cjolivier01 commented on a change in pull request #9798: fix cmake
URL: https://github.com/apache/incubator-mxnet/pull/9798#discussion_r170476483
 
 

 ##
 File path: CMakeLists.txt
 ##
 @@ -47,10 +47,14 @@ if(USE_CUDA AND NOT USE_OLDCMAKECUDA)
   )
 )
 
-set(__cuda_toolset "7.5" "8.0" "9.0")
 
 Review comment:
   I thought this was removed already.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] marcoabreu commented on a change in pull request #9880: TVM bridge support to JIT NDArray Function by TVM

2018-02-25 Thread GitBox
marcoabreu commented on a change in pull request #9880: TVM bridge support to 
JIT NDArray Function by TVM
URL: https://github.com/apache/incubator-mxnet/pull/9880#discussion_r170474738
 
 

 ##
 File path: tests/ci_build/install/ubuntu_install_tvm.sh
 ##
 @@ -0,0 +1,38 @@
+#!/usr/bin/env bash
+
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+# Build and install TVM
+cd /tmp
+git clone https://github.com/dmlc/tvm/ --recursive
 
 Review comment:
   Are you aware that the result of this script is being cached indefinitely? 
In that case, it would be better to specify a stable version instead of master, 
as otherwise environments may differ across slaves.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] marcoabreu commented on a change in pull request #9880: TVM bridge support to JIT NDArray Function by TVM

2018-02-25 Thread GitBox
marcoabreu commented on a change in pull request #9880: TVM bridge support to 
JIT NDArray Function by TVM
URL: https://github.com/apache/incubator-mxnet/pull/9880#discussion_r170474680
 
 

 ##
 File path: tests/ci_build/install/ubuntu_install_tvm.sh
 ##
 @@ -0,0 +1,38 @@
+#!/usr/bin/env bash
+
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+# Build and install TVM
+cd /tmp
+git clone https://github.com/dmlc/tvm/ --recursive
+cd tvm
+cp make/config.mk .
+echo USE_CUDA=1 >> config.mk
+echo LLVM_CONFIG=llvm-config-5.0 >> config.mk
+echo USE_RPC=1 >> config.mk
+echo USE_GRAPH_RUNTIME=1 >> config.mk
+echo CUDA_PATH=/usr/local/cuda >> config.mk
+make -j10
 
 Review comment:
   Please make use of all CPU cores


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] marcoabreu commented on a change in pull request #9880: TVM bridge support to JIT NDArray Function by TVM

2018-02-25 Thread GitBox
marcoabreu commented on a change in pull request #9880: TVM bridge support to 
JIT NDArray Function by TVM
URL: https://github.com/apache/incubator-mxnet/pull/9880#discussion_r170474564
 
 

 ##
 File path: tests/python/gpu/test_tvm_bridge.py
 ##
 @@ -0,0 +1,57 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+"""Test TVM bridge, only enable this when TVM is available"""
+import mxnet as mx
+import numpy as np
+
+def test_tvm_bridge():
+    # only enable test if TVM is available
+    try:
+        import tvm
+        import tvm.contrib.mxnet
+        import topi
+    except ImportError:
+        return
 
 Review comment:
   TVM*


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] marcoabreu commented on a change in pull request #9880: TVM bridge support to JIT NDArray Function by TVM

2018-02-25 Thread GitBox
marcoabreu commented on a change in pull request #9880: TVM bridge support to 
JIT NDArray Function by TVM
URL: https://github.com/apache/incubator-mxnet/pull/9880#discussion_r170474555
 
 

 ##
 File path: tests/python/gpu/test_tvm_bridge.py
 ##
 @@ -0,0 +1,57 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+"""Test TVM bridge, only enable this when TVM is available"""
+import mxnet as mx
+import numpy as np
+
+def test_tvm_bridge():
+    # only enable test if TVM is available
+    try:
+        import tvm
+        import tvm.contrib.mxnet
+        import topi
+    except ImportError:
+        return
 
 Review comment:
   Print message that test is not run because of missing tv


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] johnbroughton2017 commented on issue #9884: How to speed up prediction run time? Copying gpu->cpu takes a long time

2018-02-25 Thread GitBox
johnbroughton2017 commented on issue #9884: How to speed up prediction run 
time? Copying gpu->cpu takes a long time
URL: 
https://github.com/apache/incubator-mxnet/issues/9884#issuecomment-368356112
 
 
   Follow-up.
   
   Found this more interesting. Using caffenet instead of resnet50, it looks 
like this:
   
   ```
 batch size   mod.forward() (ms)   mod.get_outputs...asnumpy() (ms)
 ----------   ------------------   --------------------------------
  16          156.6                 61.3
  32          183.4                 28.9
  48          166.4                 25.3
  64          166.7                 32.1
  80          171.3                 38.6
  96          181.8                 33.4
 112          181.4                 41.6
 128          188.2                 46.8
 144          236.5                 61.2
 160          193.1                 54.4
 176          195.8                 61.8
 192          198.9                 65.9
 208          196.7                 70.3
 224          199.5                 75.3
 240          203.5                 77.4
 256          206                   81.9
   
   ```
   The output dimension should be the same, but for some reason the data copying 
time is reduced a lot. I cannot figure out why.
   
   -- John


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] tqchen commented on issue #9880: TVM bridge support to JIT NDArray Function by TVM

2018-02-25 Thread GitBox
tqchen commented on issue #9880: TVM bridge support to JIT NDArray Function by 
TVM
URL: https://github.com/apache/incubator-mxnet/pull/9880#issuecomment-368355290
 
 
   Testcase and ci added


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] johnbroughton2017 opened a new issue #9884: How to speed up prediction run time? Copying gpu->cpu takes a long time

2018-02-25 Thread GitBox
johnbroughton2017 opened a new issue #9884: How to speed up prediction run 
time? Copying gpu->cpu takes a long time
URL: https://github.com/apache/incubator-mxnet/issues/9884
 
 
   Hi all, 
   
   Doing prediction using MXNet has two major parts: the forward pass, and copying 
the results from GPU to CPU memory, as
   ```
   mod.forward(Batch([mx.nd.array(data)]))
   prob = mod.get_outputs(0)[0][0].asnumpy()
   ```
   
   I did a quick timing based on batch size (see below). It seems like the 
second operation above takes a lot of time as the batch size increases.
   
 batch size   mod.forward() (ms)   mod.get_outputs...asnumpy() (ms)
 ----------   ------------------   --------------------------------
  16            5.8                  30.1
  32           10.5                  51.1
  48           14                    78.7
  64           17.8                  95.6
  80           33.2                 121.3
  96           36.2                 147.5
 112           41.3                 174.3
 128           46.4                 245.5
 144           52                   219
 160           56.9                 241.2
 176           64.9                 267.4
 192           69.5                 329.1
 208           73.4                 317.1
 224           80.7                 337.4
 240           83.4                 446.7
 256           93.4                 380.7
   
   I don't understand this, because copying data from GPU to CPU should be 
really fast. For example, the following code takes only 0.1 ms to run.
   ```
   # speed test
   import time
   import mxnet as mx
   a = mx.nd.random_uniform(shape=(256, 3, 224, 224), ctx=mx.cpu())
   b = mx.nd.random_uniform(shape=(256, 3, 224, 224), ctx=mx.gpu())
   
   t0 = time.time()
   b.copyto(a)
   print time.time()-t0
   ```
   
   Am I doing this the wrong way? Any help is highly appreciated. Thanks.
   
   -- John
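
   For reference, a minimal timing sketch under one assumption (an editorial one, 
not established in this report): MXNet's engine executes asynchronously, so 
`copyto()` returns before the device-to-host copy finishes, and the blocking 
`asnumpy()` call absorbs both the remaining forward work and the copy. Forcing 
synchronization separates the two costs:

   ```python
   import time
   import mxnet as mx  # requires a GPU build to run

   src = mx.nd.random_uniform(shape=(256, 3, 224, 224), ctx=mx.gpu())
   dst = mx.nd.zeros((256, 3, 224, 224), ctx=mx.cpu())

   mx.nd.waitall()        # drain pending work before starting the clock
   t0 = time.time()
   src.copyto(dst)
   dst.wait_to_read()     # block until the copy has actually completed
   print(time.time() - t0)
   ```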
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] marcoabreu commented on issue #9880: TVM bridge support to JIT NDArray Function by TVM

2018-02-25 Thread GitBox
marcoabreu commented on issue #9880: TVM bridge support to JIT NDArray Function 
by TVM
URL: https://github.com/apache/incubator-mxnet/pull/9880#issuecomment-368342939
 
 
   Don't worry about that. We are currently looking into ccache integration, 
which should reduce the impact by a lot - especially if only GCC, but not nvcc, 
is being used.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] tqchen commented on issue #9880: TVM bridge support to JIT NDArray Function by TVM

2018-02-25 Thread GitBox
tqchen commented on issue #9880: TVM bridge support to JIT NDArray Function by 
TVM
URL: https://github.com/apache/incubator-mxnet/pull/9880#issuecomment-368342384
 
 
   I just mean the cost of building TVM's LLVM dependency. I don't want to 
directly introduce an additional burden to the CI while this is purely 
optional. Anyway, I get your point and will see if we can do a test with TVM's 
minimum-dependency build.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] marcoabreu commented on issue #9880: TVM bridge support to JIT NDArray Function by TVM

2018-02-25 Thread GitBox
marcoabreu commented on issue #9880: TVM bridge support to JIT NDArray Function 
by TVM
URL: https://github.com/apache/incubator-mxnet/pull/9880#issuecomment-368341971
 
 
   I don't see any issues in building a dependency; we're doing this in a lot 
of cases. The test execution would be part of the integration test stage, while 
any compilation happens during the build stage.
   
   Well, if we want to advertise MXNet as being compatible with TVM, then it should 
be properly tested. What kind of discussion would you expect?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] tqchen commented on issue #9880: TVM bridge support to JIT NDArray Function by TVM

2018-02-25 Thread GitBox
tqchen commented on issue #9880: TVM bridge support to JIT NDArray Function by 
TVM
URL: https://github.com/apache/incubator-mxnet/pull/9880#issuecomment-368341749
 
 
   This being said, I totally agree that having proper testing is important. 
That is why there are already test cases that get locally verified for these 
changes (an optional test, on the TVM side of the changes, that only runs when 
both are available). So the correctness and quality of the code change are 
covered by 
[test_mxnet_bridge.py](https://github.com/tqchen/tvm/blob/master/tests/python/contrib/test_mxnet_bridge.py).
 
   
   The only question is whether we want to bring that test case directly into this 
PR now; that involves bringing TVM's build into the current Jenkins pipeline, 
which I think deserves some discussion before we do so.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] marcoabreu commented on issue #9883: added function for loading content of nd_array files

2018-02-25 Thread GitBox
marcoabreu commented on issue #9883: added function for loading content of 
nd_array files
URL: https://github.com/apache/incubator-mxnet/pull/9883#issuecomment-368341672
 
 
   Exactly; usually we test the C backend in Python. I'm not familiar with the
   Cpp package, but maybe that could be another place to test your
   modifications.
   
   In the end there has to be some type of interface that exposes this
   function to the language bindings, right?
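
   For illustration, a rough sketch of how Python tests typically reach the C 
backend, assuming the usual ctypes helpers in mxnet.base:

   ```python
   import ctypes
   from mxnet.base import _LIB, check_call  # _LIB is the loaded libmxnet

   # e.g. calling a C API function directly and checking its return code
   ver = ctypes.c_int()
   check_call(_LIB.MXGetVersion(ctypes.byref(ver)))
   print("libmxnet version code:", ver.value)
   ```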
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] tqchen commented on issue #9880: TVM bridge support to JIT NDArray Function by TVM

2018-02-25 Thread GitBox
tqchen commented on issue #9880: TVM bridge support to JIT NDArray Function by 
TVM
URL: https://github.com/apache/incubator-mxnet/pull/9880#issuecomment-368341278
 
 
   Just to be clear, it is the way the TVM bridge works that creates this special 
situation. This PR requires joint changes in both repos, and the feature won't 
be available until the changes in TVM and MXNet are both made. 
   
   Unlike MKL-DNN, the user does not have to switch on USE_TVM as a hard dependency, 
but can directly use this feature when both TVM and MXNet are available. When 
the user does not have TVM, this won't affect them at all (unlike cases like 
MKL-DNN, which requires the user to install MKL-DNN by default).
   
   I can add a minimal MXNet-side test case that verifies the bridge function 
exists. A more thorough test, however, would require bringing TVM's build into 
MXNet's CI, which is a decision that I think needs more discussion.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] dabraude commented on issue #9883: added function for loading content of nd_array files

2018-02-25 Thread GitBox
dabraude commented on issue #9883: added function for loading content of 
nd_array files
URL: https://github.com/apache/incubator-mxnet/pull/9883#issuecomment-368341114
 
 
   @marcoabreu  Where should the test case be? With grep I couldn't find the C 
ones for loading an array, only the ones in the Python wrapper tests. 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] tqchen commented on issue #9880: TVM bridge support to JIT NDArray Function by TVM

2018-02-25 Thread GitBox
tqchen commented on issue #9880: TVM bridge support to JIT NDArray Function by 
TVM
URL: https://github.com/apache/incubator-mxnet/pull/9880#issuecomment-368338632
 
 
   I have detailed my reasoning for not yet adding a test case to this PR. The TVM 
bridge depends on a header-only component of TVM and does not have to link 
against the TVM runtime, so merging this won't introduce any additional burden 
to MXNet's runtime. 
   
   This feature can only be used when TVM and MXNet are both available on the 
system.
   
   If we are open to bringing TVM (with the LLVM dependency) in as part of CI, we can 
propose another PR to change the Jenkinsfile (to add LLVM as part of the build) 
and bring the test case into MXNet CI.
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] tqchen commented on a change in pull request #9880: TVM bridge support to JIT NDArray Function by TVM

2018-02-25 Thread GitBox
tqchen commented on a change in pull request #9880: TVM bridge support to JIT 
NDArray Function by TVM
URL: https://github.com/apache/incubator-mxnet/pull/9880#discussion_r170464450
 
 

 ##
 File path: python/mxnet/ndarray/ndarray.py
 ##
 @@ -174,8 +174,14 @@ class NDArray(NDArrayBase):
     __slots__ = []
     # make numpy functions return NDArray instead of numpy object array
     __array_priority__ = 1000.0
+    # used by tvm bridge
+    _tvm_tcode = 19
     # pylint: disable= no-member, undefined-variable
 
+    @property
+    def _tvm_handle(self):
+        return self.handle.value
 
 Review comment:
   This is a handle exposed for TVM's PackedFunc convention interface, to 
allow arbitrary positional-argument calls without adding a new C API. 
Specifically, the wrapped function is a TVM PackedFunc that recognizes 
NDArray as an extension object and passes the addresses of the NDArray handles 
correctly to the arguments.
   
   It is later received here: 
https://github.com/apache/incubator-mxnet/pull/9880/files#diff-3aa2a3c799e125e086769bc1d5f6490aR74
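
   For illustration, a rough sketch of how such a wrapped function is used from 
the MXNet side, patterned on tests/python/gpu/test_tvm_bridge.py in this PR. It is 
a hedged sketch: it assumes a TVM build whose bridge side exposes 
`tvm.contrib.mxnet.to_mxnet_func`, and it uses the 2018-era TVM API.

   ```python
   import mxnet as mx
   import tvm
   import tvm.contrib.mxnet
   import topi  # noqa: F401 -- registers TOPI operator schedules

   # Build a tiny elementwise-add kernel with TVM.
   shape = (20,)
   x = tvm.placeholder(shape, name="x")
   y = tvm.placeholder(shape, name="y")
   z = tvm.compute(shape, lambda i: x[i] + y[i], name="z")
   s = tvm.create_schedule(z.op)
   f = tvm.build(s, [x, y, z], target="llvm")

   # Wrap the PackedFunc: NDArray arguments are recognized via the
   # extension type code, and their handles are pushed onto MXNet's
   # async engine rather than executed synchronously.
   mx_add = tvm.contrib.mxnet.to_mxnet_func(f)

   a = mx.nd.ones(shape)
   b = mx.nd.ones(shape)
   c = mx.nd.zeros(shape)
   mx_add(a, b, c)     # queued on the engine
   print(c.asnumpy())  # blocks until the result is ready
   ```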


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] szha commented on a change in pull request #9880: TVM bridge support to JIT NDArray Function by TVM

2018-02-25 Thread GitBox
szha commented on a change in pull request #9880: TVM bridge support to JIT 
NDArray Function by TVM
URL: https://github.com/apache/incubator-mxnet/pull/9880#discussion_r170464185
 
 

 ##
 File path: python/mxnet/ndarray/ndarray.py
 ##
 @@ -174,8 +174,14 @@ class NDArray(NDArrayBase):
     __slots__ = []
     # make numpy functions return NDArray instead of numpy object array
     __array_priority__ = 1000.0
+    # used by tvm bridge
+    _tvm_tcode = 19
     # pylint: disable= no-member, undefined-variable
 
+    @property
+    def _tvm_handle(self):
+        return self.handle.value
 
 Review comment:
   what's this for?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] dabraude commented on a change in pull request #9883: added function for loading content of nd_array files

2018-02-25 Thread GitBox
dabraude commented on a change in pull request #9883: added function for 
loading content of nd_array files
URL: https://github.com/apache/incubator-mxnet/pull/9883#discussion_r170463783
 
 

 ##
 File path: src/c_api/c_api.cc
 ##
 @@ -322,6 +322,38 @@ int MXNDArrayLoad(const char* fname,
   API_END();
 }
 
+int MXNDArrayLoadFileContent(const void *nd_file,
 
 Review comment:
   Changed to the suggested name, which is better.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] dabraude commented on a change in pull request #9883: added function for loading content of nd_array files

2018-02-25 Thread GitBox
dabraude commented on a change in pull request #9883: added function for 
loading content of nd_array files
URL: https://github.com/apache/incubator-mxnet/pull/9883#discussion_r170463743
 
 

 ##
 File path: src/c_api/c_api.cc
 ##
 @@ -322,6 +322,38 @@ int MXNDArrayLoad(const char* fname,
   API_END();
 }
 
+int MXNDArrayLoadFileContent(const void *nd_file,
+size_t size,
+mx_uint *out_size,
+NDArrayHandle** out_arr,
+mx_uint *out_name_size,
+const char*** out_names) {
+  MXAPIThreadLocalEntry *ret = MXAPIThreadLocalStore::Get();
+  ret->ret_vec_str.clear();
+  API_BEGIN();
+  std::vector<NDArray> data;
+  std::vector<std::string> &names = ret->ret_vec_str;
+  {
+    std::unique_ptr<dmlc::MemoryFixedSizeStream> fi(new dmlc::MemoryFixedSizeStream((void*)nd_file, size));  // NOLINT(*)
+    mxnet::NDArray::Load(fi.get(), &data, &names);
 
 Review comment:
   Just checked multiple versions of this; it will return -1 with the last 
error message being:
   `Check failed: header == kMXAPINDArrayListMagic Invalid NDArray file format`
   Python can check for that pretty easily.
   
   Just added a check against NULL pointers, which I didn't think of originally.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] tqchen commented on issue #9880: TVM bridge support to JIT NDArray Function by TVM

2018-02-25 Thread GitBox
tqchen commented on issue #9880: TVM bridge support to JIT NDArray Function by 
TVM
URL: https://github.com/apache/incubator-mxnet/pull/9880#issuecomment-368327775
 
 
   The tests now pass. @piiswrong @szha, can you review?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] cjolivier01 commented on a change in pull request #9883: added function for loading content of nd_array files

2018-02-25 Thread GitBox
cjolivier01 commented on a change in pull request #9883: added function for 
loading content of nd_array files
URL: https://github.com/apache/incubator-mxnet/pull/9883#discussion_r170460340
 
 

 ##
 File path: src/c_api/c_api.cc
 ##
 @@ -322,6 +322,38 @@ int MXNDArrayLoad(const char* fname,
   API_END();
 }
 
+int MXNDArrayLoadFileContent(const void *nd_file,
 
 Review comment:
   It's not really a file, right? More like LoadFromBuffer or something, right?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] cjolivier01 commented on a change in pull request #9883: added function for loading content of nd_array files

2018-02-25 Thread GitBox
cjolivier01 commented on a change in pull request #9883: added function for 
loading content of nd_array files
URL: https://github.com/apache/incubator-mxnet/pull/9883#discussion_r170460309
 
 

 ##
 File path: src/c_api/c_api.cc
 ##
 @@ -322,6 +322,38 @@ int MXNDArrayLoad(const char* fname,
   API_END();
 }
 
+int MXNDArrayLoadFileContent(const void *nd_file,
+size_t size,
+mx_uint *out_size,
+NDArrayHandle** out_arr,
+mx_uint *out_name_size,
+const char*** out_names) {
+  MXAPIThreadLocalEntry *ret = MXAPIThreadLocalStore::Get();
+  ret->ret_vec_str.clear();
+  API_BEGIN();
+  std::vector<NDArray> data;
+  std::vector<std::string> &names = ret->ret_vec_str;
+  {
+    std::unique_ptr<dmlc::MemoryFixedSizeStream> fi(new dmlc::MemoryFixedSizeStream((void*)nd_file, size));  // NOLINT(*)
+    mxnet::NDArray::Load(fi.get(), &data, &names);
 
 Review comment:
   What happens if there is an error or formatting problem, from the Python 
perspective? How catastrophic is that?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] cjolivier01 commented on a change in pull request #9883: added function for loading content of nd_array files

2018-02-25 Thread GitBox
cjolivier01 commented on a change in pull request #9883: added function for 
loading content of nd_array files
URL: https://github.com/apache/incubator-mxnet/pull/9883#discussion_r170460275
 
 

 ##
 File path: src/c_api/c_api.cc
 ##
 @@ -322,6 +322,38 @@ int MXNDArrayLoad(const char* fname,
   API_END();
 }
 
+int MXNDArrayLoadFileContent(const void *nd_file,
+size_t size,
+mx_uint *out_size,
+NDArrayHandle** out_arr,
+mx_uint *out_name_size,
+const char*** out_names) {
+  MXAPIThreadLocalEntry *ret = MXAPIThreadLocalStore::Get();
+  ret->ret_vec_str.clear();
+  API_BEGIN();
+  std::vector<NDArray> data;
+  std::vector<std::string> &names = ret->ret_vec_str;
+  {
+    std::unique_ptr<dmlc::MemoryFixedSizeStream> fi(new dmlc::MemoryFixedSizeStream((void*)nd_file, size));  // NOLINT(*)
 
 Review comment:
   nit: why can't this continue on the next line, so it doesn't require NOLINT?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] tqchen commented on issue #9880: TVM bridge support to JIT NDArray Function by TVM

2018-02-25 Thread GitBox
tqchen commented on issue #9880: TVM bridge support to JIT NDArray Function by 
TVM
URL: https://github.com/apache/incubator-mxnet/pull/9880#issuecomment-368327775
 
 
   The R test failure appears to be unrelated to this commit. 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] dabraude commented on issue #9860: [WIP] CMake NNPack support

2018-02-25 Thread GitBox
dabraude commented on issue #9860: [WIP] CMake NNPack support
URL: https://github.com/apache/incubator-mxnet/pull/9860#issuecomment-368323497
 
 
   @cjolivier01 I need to create a thread pool for NNPack; should I do 
something similar to the CpuEngine which is used by MKLDNN? 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] 7oud commented on issue #9420: add use_global_stats in nn.BatchNorm

2018-02-25 Thread GitBox
7oud commented on issue #9420: add use_global_stats in nn.BatchNorm
URL: https://github.com/apache/incubator-mxnet/pull/9420#issuecomment-368320260
 
 
   @szha @tornadomeet If training with use_global_stats=True, it seems all the 
moving_mean = 0 and moving_var = 1 in the trained model; is that right? Then 
batch norm turns into a scalar shift op. In what situation should 
use_global_stats=True be used?
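
   A minimal sketch of the flag's documented behavior (editorial, not an answer 
from the thread): with use_global_stats=True the layer always normalizes with the 
stored moving_mean/moving_var instead of per-batch statistics, so those stats are 
not updated during training and keep their initial values (0 and 1) unless loaded 
from a pretrained model.

   ```python
   from mxnet.gluon import nn

   bn = nn.BatchNorm()                              # per-batch stats; moving stats updated
   bn_frozen = nn.BatchNorm(use_global_stats=True)  # always uses the stored stats
   ```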


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] eric-haibin-lin commented on issue #9819: Sometime MXDataIter load data quickly, sometime it load data slowly?

2018-02-25 Thread GitBox
eric-haibin-lin commented on issue #9819: Sometime MXDataIter load data 
quickly, sometime it load data slowly?
URL: 
https://github.com/apache/incubator-mxnet/issues/9819#issuecomment-368314938
 
 
   I think discuss.mxnet.io is a good place to discuss questions like this. 
   
   How did you implement the data iterator? Usually it's not a problem if it's 
not the bottleneck of your network. 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] iblis17 commented on issue #9677: Refactor operators and add MKLDNN

2018-02-25 Thread GitBox
iblis17 commented on issue #9677: Refactor operators and add MKLDNN
URL: https://github.com/apache/incubator-mxnet/pull/9677#issuecomment-368280425
 
 
   @marcoabreu 
   About the reason for hosting the Julia code in another repository:
   Julia's package manager is built on top of git, and it requires a specific 
structure in the cloned dir.
   For example, we have `MXNet.jl`; we clone it as `MXNet`, and it should have 
the following dir structure:
   ```
   MXNet/                # the pkg name
     |- src/
     |  |- MXNet.jl      # the pkg entry point; must be named the same as the pkg
     |  |- other.jl ... etc
     |- test/
     |  |- runtests.jl   # test cases entry point
   ```
   Since git cannot check out a subdir the way svn does, the only choice is to put 
the Julia binding in a separate repo.
   
   
   
   > and thus not part of our PR validation chain?
   
   I tried to ping the developers several times on both GitHub and Slack, but did 
not make progress.
   See: 
   - https://github.com/apache/incubator-mxnet/pull/8727
   - https://github.com/apache/incubator-mxnet/pull/8175
   
   I finished the patch already, but I do not have permission to trigger the 
new Jenkins script via a PR 
(https://github.com/apache/incubator-mxnet/pull/8175#issuecomment-336340005).
   
   I beg for help.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] dabraude opened a new pull request #9883: added function for loading content of nd_array files

2018-02-25 Thread GitBox
dabraude opened a new pull request #9883: added function for loading content of 
nd_array files
URL: https://github.com/apache/incubator-mxnet/pull/9883
 
 
   ## Description ##
   Adds a function for loading the content of an NDArray file.
   
   ## Checklist ##
   ### Essentials ###
   - [ ] Passed code style checking (`make lint`)
   - [ ] Changes are complete (i.e. I finished coding on this PR)
   - [ ] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [ ] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, 
the source of the dataset, expected performance on the test set, and a reference 
to the original paper if applicable
   - [ ] To the best of my knowledge, examples are either not affected by this 
change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - Added function MXNDArrayLoadFileContent
   
   
   ## Comments ##
   - As far as I can determine, it is not possible with the current API to 
load the content of an NDArray file without knowing a lot of details 
about the file format, saving the dtypes, and using both c_api.h and 
c_predict_api.h. 
   - Not having this function is inconsistent with the symbol-loading API, which 
allows you to load from a file by name or from a string containing the JSON 
content (see the sketch after this list).
   - Required for issue #9827
   - Not sure if you are happy with the names, but that is the best I could come 
up with.
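
   For context, a small sketch of the asymmetry mentioned above, using only the 
existing Python-level API (file names are placeholders):

   ```python
   import mxnet as mx

   # Symbols can be loaded from a file path or from an in-memory string:
   sym_from_file = mx.sym.load("model-symbol.json")
   sym_from_str = mx.sym.load_json(open("model-symbol.json").read())

   # NDArrays can currently only be loaded from a file path:
   arrays = mx.nd.load("model-0000.params")
   # There is no in-memory counterpart at the Python level; the new
   # MXNDArrayLoadFileContent C call is the backend piece that would enable one.
   ```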
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[incubator-mxnet] branch master updated: remove MKL_EXPERIMENTAL and update make files for MKL-DNN (#9810)

2018-02-25 Thread marcoabreu
This is an automated email from the ASF dual-hosted git repository.

marcoabreu pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new 5c5a904  remove MKL_EXPERIMENTAL and update make files for MKL-DNN 
(#9810)
5c5a904 is described below

commit 5c5a904209900e21b20ca206b043ea7a8252ebfc
Author: Ashok Emani 
AuthorDate: Sun Feb 25 02:05:57 2018 -0800

remove MKL_EXPERIMENTAL and update make files for MKL-DNN (#9810)

* replace MKL2017 references with MKL-DNN

* remove MKLML_ROOT

* MKL_README.md for Full MKL

* update test_mkldnn

* update Jenkinsfile

* update jenkins

* trigger Jenkins with new changes

* trigger Jenkins with new changes
---
 Jenkinsfile| 18 +
 MKL_README.md  | 43 --
 docker_multiarch/arm.crosscompile.android.mk   | 20 +-
 docker_multiarch/arm.crosscompile.mk   | 22 +--
 docs/faq/perf.md   |  5 +--
 example/image-classification/benchmark_score.py|  2 +-
 make/config.mk | 16 +---
 make/osx.mk|  2 +-
 tests/python/cpu/{test_mklml.py => test_mkldnn.py} | 18 -
 9 files changed, 35 insertions(+), 111 deletions(-)

diff --git a/Jenkinsfile b/Jenkinsfile
index 81ddb73..c23bbbf 100644
--- a/Jenkinsfile
+++ b/Jenkinsfile
@@ -24,6 +24,7 @@
 mx_lib = 'lib/libmxnet.so, lib/libmxnet.a, dmlc-core/libdmlc.a, 
nnvm/lib/libnnvm.a'
 // mxnet cmake libraries, in cmake builds we do not produce a libnvvm static 
library by default.
 mx_cmake_lib = 'build/libmxnet.so, build/libmxnet.a, 
build/dmlc-core/libdmlc.a, build/tests/mxnet_unit_tests, 
build/3rdparty/openmp/runtime/src/libomp.so'
+mx_cmake_mkldnn_lib = 'build/libmxnet.so, build/libmxnet.a, 
build/dmlc-core/libdmlc.a, build/tests/mxnet_unit_tests, 
build/3rdparty/openmp/runtime/src/libomp.so, 
build/3rdparty/mkldnn/src/libmkldnn.so, 
build/3rdparty/mkldnn/src/libmkldnn.so.0'
 mx_mkldnn_lib = 'lib/libmxnet.so, lib/libmxnet.a, lib/libiomp5.so, 
lib/libmklml_gnu.so, lib/libmkldnn.so, lib/libmkldnn.so.0, 
lib/libmklml_intel.so, dmlc-core/libdmlc.a, nnvm/lib/libnnvm.a'
 // command to start a docker container
 docker_run = 'tests/ci_build/ci_build.sh'
@@ -260,6 +261,23 @@ try {
 }
   }
 },
+'GPU: CMake MKLDNN': {
+  node('mxnetlinux-cpu') {
+ws('workspace/build-cmake-mkldnn-gpu') {
+  init_git()
+  def defines = """\
+-DUSE_CUDA=1   \
+-DUSE_CUDNN=1  \
+-DUSE_MKLML_MKL=1  \
+-DUSE_MKLDNN=1 \
+-DCMAKE_BUILD_TYPE=Release \
+"""
+def flag = "-v"
+cmake("build_cuda", defines, flag)
+  pack_lib('cmake_mkldnn_gpu', mx_cmake_mkldnn_lib)
+}
+  }
+},
 'GPU: CMake': {
   node('mxnetlinux-cpu') {
 ws('workspace/build-cmake-gpu') {
diff --git a/MKL_README.md b/MKL_README.md
index 0f97416..5374adb 100644
--- a/MKL_README.md
+++ b/MKL_README.md
@@ -17,46 +17,3 @@ Installing and enabling the full MKL installation enables 
MKL support for all op
 
   5. Run 'sudo python setup.py install'
 
-# MKL2017 PLUGIN
-
-MKL2017 is an INTEL released library to accelerate Deep Neural Network (DNN) 
applications on Intel architecture.
-
-MKL2017_ML is a subset of MKL2017 and only contains DNN acceleration feature, 
MKL2017 release cycle is longer then MKL2017_ML and MKL2017_ML support latest 
feature
-
-This README shows the user how to setup and install MKL2017 library with mxnet.
-
-## Build/Install MXNet with MKL:
-
-  1. Enable USE_MKL2017=1 in make/config.mk
-
-1.1 By default, MKL_2017_EXPRIEMENTAL=0. If setting 
MKL_2017_EXPRIEMENTAL=1, MKL buffer will be created and transferred between 
layers to achiever much higher performance.
-
-1.2 By default, MKLML_ROOT=/usr/local, MKL2017_ML will be used
-
-  1.2.1 when excute make, Makefile will execute "prepare_mkl.sh" to 
download the MKL2017_ML library under 
-
-  1.2.2 manually steps for download MKL2017_ML problem
-
-1.2.2.1 wget 
https://github.com/dmlc/web-data/raw/master/mxnet/mklml-release/mklml_lnx_.tgz
-
-1.2.2.2 tar zxvf mklml_lnx_.tgz
-
-1.2.2.3 cp -rf mklml_lnx_/* /
-
-  1.2.3 Set LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$MKLML_ROOT/lib
-
-1.3 If setting USE_BLAS=mkl
-
-  1.3.1 mshadow can also utilize mkl blas function in mklml package  
-
-1.4 MKL version compatibility
-
-1.3.2.1 If you already have MKL installed and MKLROOT being set in 
your system, by default, it will not attempt to download the latest mklml 
package unless you unset MKLROOT. 
-
-  

[GitHub] marcoabreu closed pull request #9810: remove MKL_EXPERIMENTAL and update make files for MKL-DNN

2018-02-25 Thread GitBox
marcoabreu closed pull request #9810: remove MKL_EXPERIMENTAL and update make 
files for MKL-DNN
URL: https://github.com/apache/incubator-mxnet/pull/9810
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/Jenkinsfile b/Jenkinsfile
index 17d546c87f4..a20d9db545c 100644
--- a/Jenkinsfile
+++ b/Jenkinsfile
@@ -24,6 +24,7 @@
 mx_lib = 'lib/libmxnet.so, lib/libmxnet.a, dmlc-core/libdmlc.a, 
nnvm/lib/libnnvm.a'
 // mxnet cmake libraries, in cmake builds we do not produce a libnvvm static 
library by default.
 mx_cmake_lib = 'build/libmxnet.so, build/libmxnet.a, 
build/dmlc-core/libdmlc.a, build/tests/mxnet_unit_tests, 
build/3rdparty/openmp/runtime/src/libomp.so'
+mx_cmake_mkldnn_lib = 'build/libmxnet.so, build/libmxnet.a, 
build/dmlc-core/libdmlc.a, build/tests/mxnet_unit_tests, 
build/3rdparty/openmp/runtime/src/libomp.so, 
build/3rdparty/mkldnn/src/libmkldnn.so, 
build/3rdparty/mkldnn/src/libmkldnn.so.0'
 mx_mkldnn_lib = 'lib/libmxnet.so, lib/libmxnet.a, lib/libiomp5.so, 
lib/libmklml_gnu.so, lib/libmkldnn.so, lib/libmkldnn.so.0, 
lib/libmklml_intel.so, dmlc-core/libdmlc.a, nnvm/lib/libnnvm.a'
 // command to start a docker container
 docker_run = 'tests/ci_build/ci_build.sh'
@@ -260,6 +261,23 @@ try {
 }
   }
 },
+'GPU: CMake MKLDNN': {
+  node('mxnetlinux-cpu') {
+ws('workspace/build-cmake-mkldnn-gpu') {
+  init_git()
+  def defines = """\
+-DUSE_CUDA=1   \
+-DUSE_CUDNN=1  \
+-DUSE_MKLML_MKL=1  \
+-DUSE_MKLDNN=1 \
+-DCMAKE_BUILD_TYPE=Release \
+"""
+def flag = "-v"
+cmake("build_cuda", defines, flag)
+  pack_lib('cmake_mkldnn_gpu', mx_cmake_mkldnn_lib)
+}
+  }
+},
 'GPU: CMake': {
   node('mxnetlinux-cpu') {
 ws('workspace/build-cmake-gpu') {
diff --git a/MKL_README.md b/MKL_README.md
index 0f97416ac36..5374adb8e42 100644
--- a/MKL_README.md
+++ b/MKL_README.md
@@ -17,46 +17,3 @@ Installing and enabling the full MKL installation enables 
MKL support for all op
 
   5. Run 'sudo python setup.py install'
 
-# MKL2017 PLUGIN
-
-MKL2017 is an INTEL released library to accelerate Deep Neural Network (DNN) 
applications on Intel architecture.
-
-MKL2017_ML is a subset of MKL2017 and only contains DNN acceleration feature, 
MKL2017 release cycle is longer then MKL2017_ML and MKL2017_ML support latest 
feature
-
-This README shows the user how to setup and install MKL2017 library with mxnet.
-
-## Build/Install MXNet with MKL:
-
-  1. Enable USE_MKL2017=1 in make/config.mk
-
-1.1 By default, MKL_2017_EXPRIEMENTAL=0. If setting 
MKL_2017_EXPRIEMENTAL=1, MKL buffer will be created and transferred between 
layers to achiever much higher performance.
-
-1.2 By default, MKLML_ROOT=/usr/local, MKL2017_ML will be used
-
-  1.2.1 when excute make, Makefile will execute "prepare_mkl.sh" to 
download the MKL2017_ML library under 
-
-  1.2.2 manually steps for download MKL2017_ML problem
-
-1.2.2.1 wget 
https://github.com/dmlc/web-data/raw/master/mxnet/mklml-release/mklml_lnx_.tgz
-
-1.2.2.2 tar zxvf mklml_lnx_.tgz
-
-1.2.2.3 cp -rf mklml_lnx_/* /
-
-  1.2.3 Set LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$MKLML_ROOT/lib
-
-1.3 If setting USE_BLAS=mkl
-
-  1.3.1 mshadow can also utilize mkl blas function in mklml package  
-
-1.4 MKL version compatibility
-
-1.3.2.1 If you already have MKL installed and MKLROOT being set in 
your system, by default, it will not attempt to download the latest mklml 
package unless you unset MKLROOT. 
-
-  2. Run 'make -jX'
-   
-  3. Navigate into the python directory
-  
-  4. Run 'sudo python setup.py install'
-
-
diff --git a/docker_multiarch/arm.crosscompile.android.mk 
b/docker_multiarch/arm.crosscompile.android.mk
index 36b8e9bed79..0302c5cf25a 100644
--- a/docker_multiarch/arm.crosscompile.android.mk
+++ b/docker_multiarch/arm.crosscompile.android.mk
@@ -82,21 +82,6 @@ USE_OPENCV = 0
 # use openmp for parallelization
 USE_OPENMP = 1
 
-# MKL ML Library for Intel CPU/Xeon Phi
-# Please refer to MKL_README.md for details
-
-# MKL ML Library folder, need to be root for /usr/local
-# Change to User Home directory for standard user
-# For USE_BLAS!=mkl only
-MKLML_ROOT=/usr/local
-
-# whether use MKL2017 library
-USE_MKL2017 = 0
-
-# whether use MKL2017 experimental feature for high performance
-# Prerequisite USE_MKL2017=1
-USE_MKL2017_EXPERIMENTAL = 0
-
 # whether use NNPACK library
 USE_NNPACK = 0
 
@@ -115,13 +100,10 @@ USE_LAPACK_PATH =
 USE_INTEL_PATH = NONE
 
 # If use MKL only for BLAS, choose static link automatically to allow python 
wrapper
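
The new mx_cmake_mkldnn_lib list in the Jenkinsfile hunk above enumerates the
artifacts the MKL-DNN CMake build is expected to produce and pack_lib to
archive. A minimal sketch, assuming it is run from the repository root after
the build completes, that checks each artifact exists; only the file names
come from the diff, the script itself is illustrative:

import os

# Artifact list copied from the mx_cmake_mkldnn_lib definition in the diff above.
MX_CMAKE_MKLDNN_LIB = [
    "build/libmxnet.so",
    "build/libmxnet.a",
    "build/dmlc-core/libdmlc.a",
    "build/tests/mxnet_unit_tests",
    "build/3rdparty/openmp/runtime/src/libomp.so",
    "build/3rdparty/mkldnn/src/libmkldnn.so",
    "build/3rdparty/mkldnn/src/libmkldnn.so.0",
]

# Report any artifact the build failed to produce.
missing = [p for p in MX_CMAKE_MKLDNN_LIB if not os.path.isfile(p)]
if missing:
    raise SystemExit("MKL-DNN build incomplete, missing: " + ", ".join(missing))
print("All %d expected MKL-DNN build artifacts are present." % len(MX_CMAKE_MKLDNN_LIB))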

[GitHub] eric-haibin-lin opened a new pull request #9882: Add force_deterministic option for sparse embedding

2018-02-25 Thread GitBox
eric-haibin-lin opened a new pull request #9882: Add force_deterministic option 
for sparse embedding
URL: https://github.com/apache/incubator-mxnet/pull/9882
 
 
   ## Description ##
  (Brief description of what this PR is about)
   reopen #9846
   
   ## Checklist ##
   ### Essentials ###
   - [ ] Passed code style checking (`make lint`)
   - [ ] Changes are complete (i.e. I finished coding on this PR)
   - [ ] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [ ] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
  - For new examples, README.md is added to explain what the example does, 
the source of the dataset, expected performance on the test set, and a 
reference to the original paper if applicable
  - [ ] To the best of my knowledge, examples are either not affected by this 
change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [ ] Feature1, tests, (and when applicable, API doc)
   - [ ] Feature2, tests, (and when applicable, API doc)
   
   ## Comments ##
   - If this change is a backward incompatible change, why must this change be 
made.
   - Interesting edge cases to note here
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] eric-haibin-lin closed pull request #9846: [WIP] Fix non-determinism in sparse embedding

2018-02-25 Thread GitBox
eric-haibin-lin closed pull request #9846: [WIP] Fix non-determinism in sparse 
embedding
URL: https://github.com/apache/incubator-mxnet/pull/9846
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/src/operator/tensor/indexing_op-inl.cuh 
b/src/operator/tensor/indexing_op-inl.cuh
index 4458151f178..4df1fd451ec 100644
--- a/src/operator/tensor/indexing_op-inl.cuh
+++ b/src/operator/tensor/indexing_op-inl.cuh
@@ -38,7 +38,7 @@ namespace mxnet {
 namespace op {
 const int kWarpSize = 32;
 
-template<int SZ, typename DType, typename IdxType>
+template<int SZ, bool lookup, typename DType, typename IdxType>
 __global__ void AddTakeGradLargeBatchKernel(DType* dst,
// If idx_start == NULL, then 
in-kernel edge
// detection is used
@@ -47,7 +47,9 @@ __global__ void AddTakeGradLargeBatchKernel(DType* dst,
const int* idx_start_size_ptr,
const IdxType *sorted, const 
IdxType *index,
const DType *src,
-   int ymax, int xmax) {
+   int ymax, int xmax,
+   // table to look up positions of 
row_ids in dst
+   const nnvm::dim_t *lookup_table) {
   // Size of the shared memory is [blockDim.x*SZ*blockDim.y]*sizeof(DType)
   extern __shared__ char sh_grad_weight_char[];
   DType* sh_grad_weight = (DType*)sh_grad_weight_char;
@@ -125,7 +127,8 @@ __global__ void AddTakeGradLargeBatchKernel(DType* dst,
 }
 
 const int start_feature = threadIdx.x + blockIdx.x * blockDim.x * SZ;
-const int dst_row = sorted_value * xmax;
+// Lookup inclusive prefix sum table if necessary
+const int dst_row = (lookup ? (lookup_table[sorted_value] - 1) : 
sorted_value) * xmax;
 
 int num_idx = idx_end - idx_begin;
 int idx0 = idx_begin + threadIdx.y*num_idx/blockDim.y;
@@ -179,7 +182,6 @@ __global__ void AddTakeGradLargeBatchKernel(DType* dst,
 }
   }
 }
-
   }
 }
 
@@ -199,6 +201,73 @@ AddTakeGradLargeBatchWorkspaceSize(size_t num_keys) {
   return (unique_bytes + counts_bytes + num_runs_bytes + temporary_bytes);
 }
 
+template<bool lookup, typename IndexType, typename DType>
+inline void AddTakeGradLargeBatchKernelLaunch(mshadow::Tensor<gpu, 2, DType> dst,
+  const mshadow::Tensor<gpu, 1, IndexType>& sorted,
+  const mshadow::Tensor<gpu, 1, IndexType>& index,
+  const mshadow::Tensor<gpu, 2, DType> &src,
+  IndexType* sum_counts_ptr,
+  int* num_runs_ptr,
+  const nnvm::dim_t* lookup_table) {
+  cudaStream_t stream = mshadow::Stream<gpu>::GetStream(dst.stream_);
+  const int num_unique_est = min(dst.size(0), src.size(0));
+  const int max_nthread = 128;
+  const int num_y = max(src.size(0)/num_unique_est, 1);
+  const int block_dim_x = kWarpSize;
+  const int block_dim_y = min(num_y, max_nthread/block_dim_x);
+  const int SZ = min((src.size(1) + block_dim_x - 1) / block_dim_x, 4);
+  const int grid_dim_x = (src.size(1) + block_dim_x * SZ - 1) / (block_dim_x * 
SZ);
+  const int grid_dim_y = min(num_unique_est, mshadow::cuda::kBaseGridNum);
+  dim3 dimBlock(block_dim_x, block_dim_y);
+  dim3 dimGrid(grid_dim_x, grid_dim_y);
+  // Maximum shared memory usage: 128*4*sizeof(DType), which is 4K for 64bit 
DType elements
+  int shmem_size = dimBlock.x*SZ*dimBlock.y*sizeof(DType);
+
+  CHECK_EQ(dst.size(1), src.size(1)) << "AddTakeGradLargeBatch: shape 
mismatch";
+  CHECK_EQ(index.size(0), src.size(0)) << "AddTakeGradLargeBatch: shape 
mismatch";
+  mshadow::cuda::CheckLaunchParam(dimGrid, dimBlock, "AddTakeGradLargeBatch");
+
+  switch (SZ) {
+case 1:
+AddTakeGradLargeBatchKernel<1, lookup, DType>
+<<<dimGrid, dimBlock, shmem_size, stream>>>
+(dst.dptr_, sum_counts_ptr, num_runs_ptr,
+ sorted.dptr_, index.dptr_, src.dptr_,
+ static_cast<int>(src.size(0)),
+ static_cast<int>(src.size(1)), lookup_table);
+break;
+case 2:
+AddTakeGradLargeBatchKernel<2, lookup, DType>
+<<<dimGrid, dimBlock, shmem_size, stream>>>
+(dst.dptr_, sum_counts_ptr, num_runs_ptr,
+ sorted.dptr_, index.dptr_, src.dptr_,
+ static_cast<int>(src.size(0)),
+ static_cast<int>(src.size(1)), lookup_table);
+break;
+case 3:
+AddTakeGradLargeBatchKernel<3, lookup, DType>
+<<<dimGrid, dimBlock, shmem_size, stream>>>
+(dst.dptr_, sum_counts_ptr, num_runs_ptr,
+ sorted.dptr_, index.dptr_, src.dptr_,
+ 
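
In the diff above, each gradient row's destination is computed as
lookup_table[sorted_value] - 1, where lookup_table is an inclusive prefix sum
over which row ids occur; that pins every unique row id to a fixed compacted
row, independent of thread scheduling, which is what makes the accumulation
deterministic. A minimal NumPy sketch of that indexing scheme, with
illustrative names not taken from the PR:

import numpy as np

def deterministic_scatter_add(src, idx, num_rows):
    """Accumulate rows of src into a compacted dst, with one fixed
    destination row per unique index value."""
    order = np.argsort(idx, kind="stable")   # sort indices, as the kernel assumes
    sorted_idx = idx[order]
    present = np.zeros(num_rows, dtype=np.int64)
    present[sorted_idx] = 1
    lookup_table = np.cumsum(present)        # inclusive prefix sum, as in the kernel
    num_unique = lookup_table[-1]
    dst = np.zeros((num_unique, src.shape[1]), dtype=src.dtype)
    for pos, row in zip(order, sorted_idx):
        dst[lookup_table[row] - 1] += src[pos]   # fixed compacted row per unique id
    return dst, lookup_table

grad = np.ones((5, 3), dtype=np.float32)
rows = np.array([4, 1, 4, 2, 1])
out, table = deterministic_scatter_add(grad, rows, num_rows=6)
# out has 3 rows (for ids 1, 2, 4); repeated ids always accumulate in sorted order.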

[GitHub] eric-haibin-lin opened a new issue #9881: Inconsistent weight decay logics in multiple optimizers

2018-02-25 Thread GitBox
eric-haibin-lin opened a new issue #9881: Inconsistent weight decay logics in 
multiple optimizers
URL: https://github.com/apache/incubator-mxnet/issues/9881
 
 
   ### wd applied before clip_gradient by the optimizer
   - RMSProp
   - Adamax
   - Nadam
   - FTML
   
   ### wd applied after clip_gradient by the optimizer
   - SGD
   - Signum
   - LBSGD
   - DCASGD
   - NAG
   - SGLD
   - Adam
   
  ### wd applied after clip_gradient by the optimizer, and not used to update 
the states 
   - AdaDelta 
  - AdaGrad
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services