[GitHub] sandeep-krishnamurthy commented on issue #12152: [MXNET-696] Fix profiler executer when memonger is used

2018-08-14 Thread GitBox
sandeep-krishnamurthy commented on issue #12152: [MXNET-696] Fix profiler 
executer when memonger is used
URL: https://github.com/apache/incubator-mxnet/pull/12152#issuecomment-413101637
 
 
   @anirudhacharya - If your comments have been addressed, is this good to go?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] sandeep-krishnamurthy commented on a change in pull request #12131: [MXNET-737][WIP] Add last batch handle for imageiter

2018-08-14 Thread GitBox
sandeep-krishnamurthy commented on a change in pull request #12131: 
[MXNET-737][WIP] Add last batch handle for imageiter
URL: https://github.com/apache/incubator-mxnet/pull/12131#discussion_r210175397
 
 

 ##
 File path: python/mxnet/image/image.py
 ##
 @@ -1149,22 +1158,44 @@ def __init__(self, batch_size, data_shape, label_width=1,
         else:
             self.auglist = aug_list
         self.cur = 0
+        self._is_allowed_reading = True
+        self._cached_data = None
+        # handle the last batch
+        if self.seq and last_batch == 'discard':
+            new_seq_n = len(self.seq) - len(self.seq) % batch_size
+            self.seq = self.seq[:new_seq_n]
+
+        self.last_batch = last_batch
+        self.num_image = len(self.seq) if self.seq is not None else None
         self.reset()
 
     def reset(self):
         """Resets the iterator to the beginning of the data."""
-        if self.shuffle:
-            random.shuffle(self.seq)
+        if self.last_batch != 'roll_over' or \
+            self._is_allowed_reading is True:
+            if self.imgrec is not None:
+                self.imgrec.reset()
+            self.cur = 0
+            self._is_allowed_reading = True
 
 Review comment:
   why do we need to set this again?




[GitHub] sandeep-krishnamurthy commented on a change in pull request #12131: [MXNET-737][WIP] Add last batch handle for imageiter

2018-08-14 Thread GitBox
sandeep-krishnamurthy commented on a change in pull request #12131: 
[MXNET-737][WIP] Add last batch handle for imageiter
URL: https://github.com/apache/incubator-mxnet/pull/12131#discussion_r210176075
 
 

 ##
 File path: python/mxnet/image/image.py
 ##
 @@ -1207,8 +1237,28 @@ def next(self):
         except StopIteration:
             if not i:
                 raise StopIteration
+            return i
 
-        return io.DataBatch([batch_data], [batch_label], batch_size - i)
+    def next(self):
+        """Returns the next batch of data."""
+        batch_size = self.batch_size
+        c, h, w = self.data_shape
 
 Review comment:
   Are we assuming always channels_first?
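   For context, a minimal hedged sketch of how the unpacking could be made layout-aware instead of assuming channels-first; the `layout` argument and the NHWC branch are hypothetical and not part of this PR:

   ```
   # Hypothetical sketch: unpack data_shape according to an explicit layout
   # rather than assuming channels-first (C, H, W).
   def unpack_data_shape(data_shape, layout='NCHW'):
       if layout == 'NCHW':
           c, h, w = data_shape
       elif layout == 'NHWC':
           h, w, c = data_shape
       else:
           raise ValueError('unsupported layout: %s' % layout)
       return c, h, w

   print(unpack_data_shape((3, 224, 224)))           # -> (3, 224, 224)
   print(unpack_data_shape((224, 224, 3), 'NHWC'))   # -> (3, 224, 224)
   ```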




[GitHub] sandeep-krishnamurthy commented on a change in pull request #12131: [MXNET-737][WIP] Add last batch handle for imageiter

2018-08-14 Thread GitBox
sandeep-krishnamurthy commented on a change in pull request #12131: 
[MXNET-737][WIP] Add last batch handle for imageiter
URL: https://github.com/apache/incubator-mxnet/pull/12131#discussion_r210176367
 
 

 ##
 File path: tests/python/unittest/test_image.py
 ##
 @@ -130,29 +129,59 @@ def test_color_normalize(self):
                                                  mx.nd.array(mean), mx.nd.array(std))
             assert_almost_equal(mx_result.asnumpy(), (src - mean) / std, atol=1e-3)
 
-
     def test_imageiter(self):
         def check_imageiter(dtype='float32'):
             im_list = [[np.random.randint(0, 5), x] for x in TestImage.IMAGES]
-            test_iter = mx.image.ImageIter(2, (3, 224, 224), label_width=1, imglist=im_list,
-                                           path_root='', dtype=dtype)
-            for _ in range(3):
-                for batch in test_iter:
-                    pass
-                test_iter.reset()
-
-            # test with list file
             fname = './data/test_imageiter.lst'
-            file_list = ['\t'.join([str(k), str(np.random.randint(0, 5)), x]) \
-                for k, x in enumerate(TestImage.IMAGES)]
+            file_list = ['\t'.join([str(k), str(np.random.randint(0, 5)), x])
+                         for k, x in enumerate(TestImage.IMAGES)]
             with open(fname, 'w') as f:
                 for line in file_list:
                     f.write(line + '\n')
+
+            test_list = ['imglist', 'path_imglist']
+
+            for test in test_list:
+                imglist = im_list if test == 'imglist' else None
+                path_imglist = fname if test == 'path_imglist' else None
+
+                test_iter = mx.image.ImageIter(2, (3, 224, 224), label_width=1, imglist=imglist,
+                                               path_imglist=path_imglist, path_root='', dtype=dtype)
+                for _ in range(3):
+                    for batch in test_iter:
+                        pass
 
 Review comment:
   can we assert the returned batch size and shape here?
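   A minimal sketch of the kind of assertion being asked for, assuming `test_iter` is the iterator built in the quoted test (batch size 2, data_shape (3, 224, 224)); the exact label shape may differ:

   ```
   for _ in range(3):
       for batch in test_iter:
           # each yielded batch should carry the configured batch size and shape
           assert batch.data[0].shape == (2, 3, 224, 224)
           # label_width=1, so the first label dimension should be the batch size
           assert batch.label[0].shape[0] == 2
       test_iter.reset()
   ```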




[GitHub] sandeep-krishnamurthy commented on a change in pull request #12131: [MXNET-737][WIP] Add last batch handle for imageiter

2018-08-14 Thread GitBox
sandeep-krishnamurthy commented on a change in pull request #12131: 
[MXNET-737][WIP] Add last batch handle for imageiter
URL: https://github.com/apache/incubator-mxnet/pull/12131#discussion_r210175561
 
 

 ##
 File path: python/mxnet/image/image.py
 ##
 @@ -1149,22 +1158,44 @@ def __init__(self, batch_size, data_shape, label_width=1,
         else:
             self.auglist = aug_list
         self.cur = 0
+        self._is_allowed_reading = True
+        self._cached_data = None
+        # handle the last batch
+        if self.seq and last_batch == 'discard':
+            new_seq_n = len(self.seq) - len(self.seq) % batch_size
+            self.seq = self.seq[:new_seq_n]
+
+        self.last_batch = last_batch
+        self.num_image = len(self.seq) if self.seq is not None else None
         self.reset()
 
     def reset(self):
         """Resets the iterator to the beginning of the data."""
-        if self.shuffle:
-            random.shuffle(self.seq)
+        if self.last_batch != 'roll_over' or \
+            self._is_allowed_reading is True:
+            if self.imgrec is not None:
+                self.imgrec.reset()
+            self.cur = 0
+            self._is_allowed_reading = True
+
+    def hard_reset(self):
+        """Resets the iterator and ignore roll over data"""
         if self.imgrec is not None:
             self.imgrec.reset()
         self.cur = 0
+        self._is_allowed_reading = True
 
     def next_sample(self):
         """Helper function for reading in next sample."""
+        if self._is_allowed_reading is False:
+            raise StopIteration
 
 Review comment:
   Please add user comprehensible message when raising errors.
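   A minimal sketch of what the requested message could look like (the exact wording is hypothetical):

   ```
   def next_sample(self):
       """Helper function for reading in next sample."""
       if self._is_allowed_reading is False:
           # Hypothetical wording -- tell the user how to recover.
           raise StopIteration('ImageIter is exhausted for the current epoch; '
                               'call reset() before iterating again.')
   ```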




[GitHub] sandeep-krishnamurthy commented on a change in pull request #12131: [MXNET-737][WIP] Add last batch handle for imageiter

2018-08-14 Thread GitBox
sandeep-krishnamurthy commented on a change in pull request #12131: 
[MXNET-737][WIP] Add last batch handle for imageiter
URL: https://github.com/apache/incubator-mxnet/pull/12131#discussion_r210175889
 
 

 ##
 File path: python/mxnet/image/image.py
 ##
 @@ -1149,22 +1158,44 @@ def __init__(self, batch_size, data_shape, label_width=1,
         else:
             self.auglist = aug_list
         self.cur = 0
+        self._is_allowed_reading = True
+        self._cached_data = None
+        # handle the last batch
+        if self.seq and last_batch == 'discard':
+            new_seq_n = len(self.seq) - len(self.seq) % batch_size
+            self.seq = self.seq[:new_seq_n]
+
+        self.last_batch = last_batch
+        self.num_image = len(self.seq) if self.seq is not None else None
         self.reset()
 
     def reset(self):
         """Resets the iterator to the beginning of the data."""
-        if self.shuffle:
-            random.shuffle(self.seq)
+        if self.last_batch != 'roll_over' or \
+            self._is_allowed_reading is True:
+            if self.imgrec is not None:
+                self.imgrec.reset()
+            self.cur = 0
+            self._is_allowed_reading = True
+
+    def hard_reset(self):
+        """Resets the iterator and ignore roll over data"""
         if self.imgrec is not None:
             self.imgrec.reset()
         self.cur = 0
+        self._is_allowed_reading = True
 
     def next_sample(self):
         """Helper function for reading in next sample."""
+        if self._is_allowed_reading is False:
+            raise StopIteration
         if self.seq is not None:
-            if self.cur >= len(self.seq):
+            if self.cur < self.num_image:
+                idx = self.seq[self.cur]
+            else:
+                if self.last_batch != 'discard':
 
 Review comment:
   should this be != discard and == rollover?
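   For reference, a small hedged sketch of how the three `last_batch` modes could be branched explicitly, which is roughly what this question is getting at; this is illustrative only, not the PR's code:

   ```
   # Hypothetical sketch: make each last_batch mode explicit instead of
   # testing only for "not 'discard'".
   def handle_epoch_end(last_batch):
       if last_batch == 'discard':
           raise StopIteration
       elif last_batch == 'roll_over':
           return 'cache the partial batch for the next epoch'
       elif last_batch == 'pad':
           return 'pad the partial batch up to batch_size'
       else:
           raise ValueError('unknown last_batch mode: %s' % last_batch)

   print(handle_epoch_end('pad'))
   ```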




[GitHub] haojin2 commented on issue #12173: Revert "update dmlc-core (#12129)"

2018-08-14 Thread GitBox
haojin2 commented on issue #12173: Revert "update dmlc-core (#12129)"
URL: https://github.com/apache/incubator-mxnet/pull/12173#issuecomment-413098789
 
 
   @marcoabreu The build for this PR has passed, but the status is not reflected 
on this page. Can you take a look?




[GitHub] TaoLv commented on a change in pull request #11148: [MXNET-679] Refactor handling BLAS libraries with cmake

2018-08-14 Thread GitBox
TaoLv commented on a change in pull request #11148: [MXNET-679] Refactor 
handling BLAS libraries with cmake
URL: https://github.com/apache/incubator-mxnet/pull/11148#discussion_r210174235
 
 

 ##
 File path: src/operator/rnn_impl.h
 ##
 @@ -994,7 +998,6 @@ void GruForwardTraining(DType* ws,
   DType* bx_l = bx;
   DType* bh_l = bh;
   DType* y_tmp = x_ptr;
-  unsigned int seed_ = 17 + rand() % 4096;  // NOLINT(runtime/threadsafe_fn)
 
 Review comment:
   I think this line is copied from 
https://github.com/apache/incubator-mxnet/blob/master/src/operator/nn/dropout-inl.h#L85.
 So do you think we also need to make the same change in dropout?




[GitHub] pengzhao-intel commented on issue #12058: MKLDNN can be turned off with env var

2018-08-14 Thread GitBox
pengzhao-intel commented on issue #12058: MKLDNN can be turned off with env var
URL: https://github.com/apache/incubator-mxnet/pull/12058#issuecomment-413097839
 
 
   I looked into the code. The message should be at the top level of the op, but 
there seems to be no proper place for it now.
   So, how about waiting until #12019 is merged? Then we would only need to change 
`InferStorageType`, where we can write the message too.




[GitHub] TaoLv commented on issue #11224: ‘make lint’ is broken under python2

2018-08-14 Thread GitBox
TaoLv commented on issue #11224: ‘make lint’ is broken under python2
URL: 
https://github.com/apache/incubator-mxnet/issues/11224#issuecomment-413097338
 
 
   Thank you all for fixing this issue. @sandeep-krishnamurthy feel free to 
close.




[GitHub] azai91 commented on a change in pull request #11831: [MXNET-484] MKLDNN C++ test for LRN operator

2018-08-14 Thread GitBox
azai91 commented on a change in pull request #11831: [MXNET-484] MKLDNN C++ 
test for LRN operator
URL: https://github.com/apache/incubator-mxnet/pull/11831#discussion_r210172622
 
 

 ##
 File path: tests/cpp/operator/mkldnn.cc
 ##
 @@ -1094,6 +1191,99 @@ void TestConcatOp(const OpAttrs &attrs, VerifyFunc verify_fn,
   }
 }
 
+// compares output of fcompute with fcomputex
+void TestOpEx(const OpAttrs &forward_attrs, const OpAttrs &backwards_attrs) {
+  std::vector<NDArray*> inputs(forward_attrs.num_inputs);
+  std::vector<NDArray*> outputs(forward_attrs.num_outputs);
+  std::vector<NDArray*> ex_outputs(forward_attrs.num_outputs);
+
+  std::vector<NDArray*> backwards_input(backwards_attrs.num_inputs);
+  std::vector<NDArray*> backwards_outputs(backwards_attrs.num_outputs);
+  std::vector<NDArray*> backwards_ex_outputs(backwards_attrs.num_outputs);
+
+
+  std::vector<OpReqType> req(forward_attrs.num_outputs);
+  std::vector<OpReqType> back_req(backwards_attrs.num_outputs);
+  std::vector<DispatchMode> dispatches = forward_attrs.dispatches;
+
+  TestArrayShapes tas = GetTestArrayShapes();
+  std::vector<mkldnn::memory::primitive_desc> pds = tas.pds;
+
+  std::vector<NDArrayAttrs> in_arrs = GetTestInputArrays(forward_attrs.input_types, true);
+  std::vector<std::vector<NDArrayAttrs>> out_arrs(forward_attrs.num_outputs);
+  std::vector<std::vector<NDArrayAttrs>> ex_out_arrs(forward_attrs.num_outputs);
+
+  if (forward_attrs.requests.find(OpReqType::kWriteTo) != forward_attrs.requests.end()) {
+    for (int i1 = 0; i1 < in_arrs.size(); i1++) {
+      auto in_arr = in_arrs[i1];
+
+      if (in_arr.arr.shape().ndim() != 4)
+        continue;
+
+      //  cannot pool / lrn / conv if dims are not default
+      if (in_arr.arr.IsMKLDNNData())
+        continue;
 
 Review comment:
   I remember now - it's because we can't compare any operation that involves a 
spatial component, since we will be comparing against fcompute, which uses the 
default format.




[GitHub] Faldict commented on issue #12142: Failed to import MXNet built with TensorRT

2018-08-14 Thread GitBox
Faldict commented on issue #12142: Failed to import MXNet built with TensorRT
URL: 
https://github.com/apache/incubator-mxnet/issues/12142#issuecomment-413095946
 
 
   @KellenSunderland I uninstalled protobuf 3.5.1 and rebuilt the whole 
toolchain. MXNet can now be imported successfully. It seems that you 
should constrain the protobuf version strictly.
   
   Furthermore, I tried to run a TensorRT baseline. I used the test code 
`incubator-mxnet/tests/python/tensorrt/test_tensorrt_lenet5.py` but got an 
unexpected error:
   
   ```
   “python3 test_tensorrt_lenet5.py” terminated by signal SIGSEGV (Address boundary error)
   ``` 
   
   When I set some breakpoints, I found that this error occurs while executing this line:
   
   ```
   executor = mx.contrib.tensorrt.tensorrt_bind(sym, ctx=mx.gpu(0),
                                                all_params=all_params,
                                                data=data_size,
                                                softmax_label=(batch_size,),
                                                grad_req='null',
                                                force_rebind=True)
   ```
   
   where the symbol and parameters are trained by running `python3 
lenet5_train.py`. How can I solve this problem?
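   One general way to narrow down a SIGSEGV that happens inside a Python call is the standard-library `faulthandler` module, which prints the Python stack at the moment of the crash; a debugging sketch, not specific to TensorRT:

   ```
   # Add near the top of test_tensorrt_lenet5.py, before the crashing call.
   import faulthandler
   faulthandler.enable()  # dump a Python-level traceback on SIGSEGV
   ```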




[GitHub] nswamy closed pull request #9771: Modify NDArrayIter constructor to receive tuple (i.e. dict in Python)…

2018-08-14 Thread GitBox
nswamy closed pull request #9771: Modify NDArrayIter constructor to receive 
tuple (i.e. dict in Python)…
URL: https://github.com/apache/incubator-mxnet/pull/9771
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/scala-package/core/src/main/scala/ml/dmlc/mxnet/io/NDArrayIter.scala b/scala-package/core/src/main/scala/ml/dmlc/mxnet/io/NDArrayIter.scala
index e7dd51b190a..91d66e5c236 100644
--- a/scala-package/core/src/main/scala/ml/dmlc/mxnet/io/NDArrayIter.scala
+++ b/scala-package/core/src/main/scala/ml/dmlc/mxnet/io/NDArrayIter.scala
@@ -38,15 +38,14 @@ import scala.collection.immutable.ListMap
  * the size of data does not match batch_size. Roll over is intended
  * for training and can cause problems if used for prediction.
  */
-class NDArrayIter (data: IndexedSeq[NDArray], label: IndexedSeq[NDArray] = IndexedSeq.empty,
-  private val dataBatchSize: Int = 1, shuffle: Boolean = false,
-  lastBatchHandle: String = "pad",
-  dataName: String = "data", labelName: String = "label") extends DataIter {
+class NDArrayIter (data: IndexedSeq[(String, NDArray)], label: IndexedSeq[(String, NDArray)],
+  private val dataBatchSize: Int, shuffle: Boolean,
+  lastBatchHandle: String) extends DataIter {
   private val logger = LoggerFactory.getLogger(classOf[NDArrayIter])
 
 
-  private val (_dataList: IndexedSeq[NDArray],
-  _labelList: IndexedSeq[NDArray]) = {
+  private val (_dataList: IndexedSeq[(String, NDArray)],
+  _labelList: IndexedSeq[(String, NDArray)]) = {
 // data should not be null and size > 0
 require(data != null && data.size > 0,
   "data should not be null and data.size should not be zero")
@@ -59,13 +58,13 @@ class NDArrayIter (data: IndexedSeq[NDArray], label: IndexedSeq[NDArray] = Index
 
 // discard final part if lastBatchHandle equals discard
 if (lastBatchHandle.equals("discard")) {
-  val dataSize = data(0).shape(0)
+  val dataSize = data(0)._2.shape(0)
   require(dataBatchSize <= dataSize,
 "batch_size need to be smaller than data size when not padding.")
   val keepSize = dataSize - dataSize % dataBatchSize
-  val dataList = data.map(ndArray => {ndArray.slice(0, keepSize)})
+  val dataList = data.map { case(name, ndArray) => (name, {ndArray.slice(0, keepSize)}) }
   if (!label.isEmpty) {
-val labelList = label.map(ndArray => {ndArray.slice(0, keepSize)})
+val labelList = label.map { case(name, ndArray) => (name, {ndArray.slice(0, keepSize)}) }
 (dataList, labelList)
   } else {
 (dataList, label)
@@ -75,10 +74,25 @@ class NDArrayIter (data: IndexedSeq[NDArray], label: IndexedSeq[NDArray] = Index
 }
   }
 
+  def this(
+  data: IndexedSeq[NDArray],
+  label: IndexedSeq[NDArray] = IndexedSeq.empty,
+  dataBatchSize: Int = 1,
+  shuffle: Boolean = false,
+  lastBatchHandle: String = "pad",
+  dataName: String = "data",
+  labelName: String = "label") = {
+this(
+  IO.initData(data, false, dataName),
+  IO.initData(label, true, labelName),
+  dataBatchSize,
+  shuffle,
+  lastBatchHandle)
+  }
 
-  val initData: IndexedSeq[(String, NDArray)] = IO.initData(_dataList, false, dataName)
-  val initLabel: IndexedSeq[(String, NDArray)] = IO.initData(_labelList, true, labelName)
-  val numData = _dataList(0).shape(0)
+  val initData: IndexedSeq[(String, NDArray)] = _dataList
+  val initLabel: IndexedSeq[(String, NDArray)] = _labelList
+  val numData = _dataList(0)._2.shape(0)
   val numSource = initData.size
   var cursor = -dataBatchSize
 
@@ -173,7 +187,7 @@ class NDArrayIter (data: IndexedSeq[NDArray], label: IndexedSeq[NDArray] = Index
* @return the data of current batch
*/
   override def getData(): IndexedSeq[NDArray] = {
-_getData(_dataList)
+_getData(_dataList.map(_._2))
   }
 
   /**
@@ -181,7 +195,7 @@ class NDArrayIter (data: IndexedSeq[NDArray], label: IndexedSeq[NDArray] = Index
* @return the label of current batch
*/
   override def getLabel(): IndexedSeq[NDArray] = {
-_getData(_labelList)
+_getData(_labelList.map(_._2))
   }
 
   /**


 




[GitHub] nswamy commented on issue #9771: Modify NDArrayIter constructor to receive tuple (i.e. dict in Python)…

2018-08-14 Thread GitBox
nswamy commented on issue #9771: Modify NDArrayIter constructor to receive 
tuple (i.e. dict in Python)…
URL: https://github.com/apache/incubator-mxnet/pull/9771#issuecomment-413093757
 
 
   @parallelgithub I am closing this PR since we have a fix in place. We are 
making some more changes to NDArrayIterator in this PR: [DataDesc, 
NDArrayIter](https://github.com/apache/incubator-mxnet/pull/11844). It would be 
great to get your feedback on that.




[GitHub] reminisce commented on a change in pull request #12157: Subgraph API for integrating accelerators with MXNet

2018-08-14 Thread GitBox
reminisce commented on a change in pull request #12157: Subgraph API for 
integrating accelerators with MXNet
URL: https://github.com/apache/incubator-mxnet/pull/12157#discussion_r210170495
 
 

 ##
 File path: src/operator/subgraph/default_subgraph_property.h
 ##
 @@ -0,0 +1,81 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+#ifndef MXNET_OPERATOR_SUBGRAPH_DEFAULT_SUBGRAPH_PROPERTY_H_
+#define MXNET_OPERATOR_SUBGRAPH_DEFAULT_SUBGRAPH_PROPERTY_H_
+
+#include 
+#include 
+#include "./common.h"
+#include "./subgraph_property.h"
+
+namespace mxnet {
+namespace op {
+
+/*
+ * This selects nodes for a subgraph that only contains operators
+ * in a given set and it visits nodes via both input and output links.
+ */
+class ContainOpSelector: public SubgraphSelector {
+ public:
+  explicit ContainOpSelector(const std::unordered_set<std::string>& op_names)
+      : op_names_(op_names) {}
+
+  virtual bool Select(const nnvm::Node &n) {
+    return !n.is_variable() && op_names_.count(n.op()->name);
+  }
+
+  virtual bool SelectInput(const nnvm::Node &n, const nnvm::Node &new_node) {
+    return !new_node.is_variable() && op_names_.count(new_node.op()->name);
+  }
+
+  virtual bool SelectOutput(const nnvm::Node &n, const nnvm::Node &new_node) {
+    return !new_node.is_variable() && op_names_.count(new_node.op()->name);
+  }
+ private:
+  const std::unordered_set<std::string>& op_names_;
+};
+
+/*
+ * This subgraph property finds a subgraph whose nodes have only operators
+ * within a set. The operators in the subgraph will be executed by _default_subgraph_op.
+ */
+class DefaultSubgraphProperty: public SubgraphProperty {
+ public:
+  static SubgraphPropertyPtr Create() { return std::make_shared<DefaultSubgraphProperty>(); }
+  virtual nnvm::NodePtr CreateSubgraphNode(const nnvm::Symbol &sym,
+                                           const int subgraph_id = 0) const {
+    nnvm::NodePtr n = nnvm::Node::Create();
+    n->attrs.op = Op::Get("_default_subgraph_op");
+    n->attrs.name = "_default_subgraph_op" + std::to_string(subgraph_id);
+    n->attrs.subgraphs.push_back(std::make_shared<nnvm::Symbol>(sym));
 
 Review comment:
   One thing you can do to enforce the consistency is defining some enums as 
indices of forward graph, backward graph, fused graph, etc in 
`attrs.subgraphs`, and always use those enums to get the desired subgraphs. It 
also increases the code readability.




[GitHub] ZhennanQin commented on a change in pull request #12157: Subgraph API for integrating accelerators with MXNet

2018-08-14 Thread GitBox
ZhennanQin commented on a change in pull request #12157: Subgraph API for 
integrating accelerators with MXNet
URL: https://github.com/apache/incubator-mxnet/pull/12157#discussion_r210169930
 
 

 ##
 File path: src/operator/subgraph/default_subgraph_property.h
 ##
 @@ -0,0 +1,81 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+#ifndef MXNET_OPERATOR_SUBGRAPH_DEFAULT_SUBGRAPH_PROPERTY_H_
+#define MXNET_OPERATOR_SUBGRAPH_DEFAULT_SUBGRAPH_PROPERTY_H_
+
+#include 
+#include 
+#include "./common.h"
+#include "./subgraph_property.h"
+
+namespace mxnet {
+namespace op {
+
+/*
+ * This selects nodes for a subgraph that only contains operators
+ * in a given set and it visits nodes via both input and output links.
+ */
+class ContainOpSelector: public SubgraphSelector {
+ public:
+  explicit ContainOpSelector(const std::unordered_set<std::string>& op_names)
+      : op_names_(op_names) {}
+
+  virtual bool Select(const nnvm::Node &n) {
+    return !n.is_variable() && op_names_.count(n.op()->name);
+  }
+
+  virtual bool SelectInput(const nnvm::Node &n, const nnvm::Node &new_node) {
+    return !new_node.is_variable() && op_names_.count(new_node.op()->name);
+  }
+
+  virtual bool SelectOutput(const nnvm::Node &n, const nnvm::Node &new_node) {
+    return !new_node.is_variable() && op_names_.count(new_node.op()->name);
+  }
+ private:
+  const std::unordered_set<std::string>& op_names_;
+};
+
+/*
+ * This subgraph property finds a subgraph whose nodes have only operators
+ * within a set. The operators in the subgraph will be executed by _default_subgraph_op.
+ */
+class DefaultSubgraphProperty: public SubgraphProperty {
+ public:
+  static SubgraphPropertyPtr Create() { return std::make_shared<DefaultSubgraphProperty>(); }
+  virtual nnvm::NodePtr CreateSubgraphNode(const nnvm::Symbol &sym,
+                                           const int subgraph_id = 0) const {
+    nnvm::NodePtr n = nnvm::Node::Create();
+    n->attrs.op = Op::Get("_default_subgraph_op");
+    n->attrs.name = "_default_subgraph_op" + std::to_string(subgraph_id);
+    n->attrs.subgraphs.push_back(std::make_shared<nnvm::Symbol>(sym));
 
 Review comment:
   We can ensure the consistency between the subgraph property and the subgraph 
operator, but we also need to ensure that nobody changes it after subgraph 
partitioning. It's better to add some checks for this consistency in case it breaks.




[GitHub] azai91 commented on issue #12058: MKLDNN can be turned off with env var

2018-08-14 Thread GitBox
azai91 commented on issue #12058: MKLDNN can be turned off with env var
URL: https://github.com/apache/incubator-mxnet/pull/12058#issuecomment-413092105
 
 
   No problem. Do you know the best place to put that info? I asked around last 
week and there doesn't seem to be a consensus.




[GitHub] nswamy commented on issue #11925: mxnet.ndarray.stack : PLEASE UPDATE DOC

2018-08-14 Thread GitBox
nswamy commented on issue #11925: mxnet.ndarray.stack : PLEASE UPDATE DOC
URL: 
https://github.com/apache/incubator-mxnet/issues/11925#issuecomment-413092030
 
 
   Thanks @liyujiel; he has added a Python example for stack now. Closing the issue.




[GitHub] nswamy closed issue #11925: mxnet.ndarray.stack : PLEASE UPDATE DOC

2018-08-14 Thread GitBox
nswamy closed issue #11925: mxnet.ndarray.stack : PLEASE UPDATE DOC
URL: https://github.com/apache/incubator-mxnet/issues/11925
 
 
   




[GitHub] Oh233 opened a new pull request #12178: fix flasky unittest for deformable psroi pooling

2018-08-14 Thread GitBox
Oh233 opened a new pull request #12178: fix flasky unittest for deformable 
psroi pooling
URL: https://github.com/apache/incubator-mxnet/pull/12178
 
 
   ## Description ##
   This PR aims to fix the problem raised by issue #11713. 
   
   ## Checklist ##
   ### Essentials ###
   - [x] Changes are complete (i.e. I finished coding on this PR)
   - [x] All changes have test coverage:
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [x] Code is well-documented: 
   - [x] To the best of my knowledge, examples are either not affected by this 
change, or have been fixed to be compatible with this change
   
   ### Changes ###
   The deformable PS ROI pooling operator includes several corner cases at which 
the function is not differentiable. Therefore, during the unit test, we need to 
add some checks to ensure the input values are within the smooth regions.
   
   ## Comments ##
   The issue and solution were discussed with @ankkhedia, 
@sandeep-krishnamurthy, and @YuwenXiong, and the fix was run by @ankkhedia about 
2500 times on his local machine.
   
   @ankkhedia could you help review it?




[incubator-mxnet] branch master updated: update ndarray stack Doc for #11925 (#12015)

2018-08-14 Thread nswamy
This is an automated email from the ASF dual-hosted git repository.

nswamy pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new b95835a  update ndarray stack Doc for #11925 (#12015)
b95835a is described below

commit b95835a701479b8e8f03c5673fb8d35236f4434f
Author: Alex Li 
AuthorDate: Tue Aug 14 21:33:36 2018 -0700

update ndarray stack Doc for #11925 (#12015)

* update ndarray stack Doc
---
 python/mxnet/ndarray_doc.py | 15 +++
 1 file changed, 15 insertions(+)

diff --git a/python/mxnet/ndarray_doc.py b/python/mxnet/ndarray_doc.py
index 0c51036..9d6258a 100644
--- a/python/mxnet/ndarray_doc.py
+++ b/python/mxnet/ndarray_doc.py
@@ -105,6 +105,21 @@ class BroadcastToDoc(NDArrayDoc):
     (2L, 2L, 2L, 3L)
     """
 
+class StackDoc(NDArrayDoc):
+    """
+    Example
+
+    Join a sequence of arrays along a new axis.
+    >>> x = mx.nd.array([1, 2])
+    >>> y = mx.nd.array([3, 4])
+    >>> stack(x, y)
+    [[1, 2],
+     [3, 4]]
+    >>> stack(x, y, axis=1)
+    [[1, 3],
+     [2, 4]]
+    """
+
 class CustomDoc(NDArrayDoc):
     """
     Example



[GitHub] nswamy closed pull request #12015: update ndarray stack Doc for #11925

2018-08-14 Thread GitBox
nswamy closed pull request #12015: update ndarray stack Doc for #11925
URL: https://github.com/apache/incubator-mxnet/pull/12015
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/python/mxnet/ndarray_doc.py b/python/mxnet/ndarray_doc.py
index 0c51036d820..9d6258a89a3 100644
--- a/python/mxnet/ndarray_doc.py
+++ b/python/mxnet/ndarray_doc.py
@@ -105,6 +105,21 @@ class BroadcastToDoc(NDArrayDoc):
     (2L, 2L, 2L, 3L)
     """
 
+class StackDoc(NDArrayDoc):
+    """
+    Example
+
+    Join a sequence of arrays along a new axis.
+    >>> x = mx.nd.array([1, 2])
+    >>> y = mx.nd.array([3, 4])
+    >>> stack(x, y)
+    [[1, 2],
+     [3, 4]]
+    >>> stack(x, y, axis=1)
+    [[1, 3],
+     [2, 4]]
+    """
+
 class CustomDoc(NDArrayDoc):
     """
     Example


 




[GitHub] reminisce commented on a change in pull request #12157: Subgraph API for integrating accelerators with MXNet

2018-08-14 Thread GitBox
reminisce commented on a change in pull request #12157: Subgraph API for 
integrating accelerators with MXNet
URL: https://github.com/apache/incubator-mxnet/pull/12157#discussion_r210168425
 
 

 ##
 File path: src/operator/subgraph/default_subgraph_property.h
 ##
 @@ -0,0 +1,81 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+#ifndef MXNET_OPERATOR_SUBGRAPH_DEFAULT_SUBGRAPH_PROPERTY_H_
+#define MXNET_OPERATOR_SUBGRAPH_DEFAULT_SUBGRAPH_PROPERTY_H_
+
+#include 
+#include 
+#include "./common.h"
+#include "./subgraph_property.h"
+
+namespace mxnet {
+namespace op {
+
+/*
+ * This selects nodes for a subgraph that only contains operators
+ * in a given set and it visits nodes via both input and output links.
+ */
+class ContainOpSelector: public SubgraphSelector {
+ public:
+  explicit ContainOpSelector(const std::unordered_set<std::string>& op_names)
+      : op_names_(op_names) {}
+
+  virtual bool Select(const nnvm::Node &n) {
+    return !n.is_variable() && op_names_.count(n.op()->name);
+  }
+
+  virtual bool SelectInput(const nnvm::Node &n, const nnvm::Node &new_node) {
+    return !new_node.is_variable() && op_names_.count(new_node.op()->name);
+  }
+
+  virtual bool SelectOutput(const nnvm::Node &n, const nnvm::Node &new_node) {
+    return !new_node.is_variable() && op_names_.count(new_node.op()->name);
+  }
+ private:
+  const std::unordered_set<std::string>& op_names_;
+};
+
+/*
+ * This subgraph property finds a subgraph whose nodes have only operators
+ * within a set. The operators in the subgraph will be executed by _default_subgraph_op.
+ */
+class DefaultSubgraphProperty: public SubgraphProperty {
+ public:
+  static SubgraphPropertyPtr Create() { return std::make_shared<DefaultSubgraphProperty>(); }
+  virtual nnvm::NodePtr CreateSubgraphNode(const nnvm::Symbol &sym,
+                                           const int subgraph_id = 0) const {
+    nnvm::NodePtr n = nnvm::Node::Create();
+    n->attrs.op = Op::Get("_default_subgraph_op");
+    n->attrs.name = "_default_subgraph_op" + std::to_string(subgraph_id);
+    n->attrs.subgraphs.push_back(std::make_shared<nnvm::Symbol>(sym));
 
 Review comment:
   As far as I know, there is no fixed convention of doing this. @zheng-da 
   I don't think you need to worry about this if you can ensure the consistency 
between your specific subgraph property and subgraph operator.




[GitHub] pengzhao-intel commented on issue #12058: MKLDNN can be turned off with env var

2018-08-14 Thread GitBox
pengzhao-intel commented on issue #12058: MKLDNN can be turned off with env var
URL: https://github.com/apache/incubator-mxnet/pull/12058#issuecomment-413090931
 
 
   Looks good. 
   
   One tip: could we add an info message to let the user know that MKLDNN is 
now disabled by "MXNET_MKLDNN_ENABLED=0" and that it can be switched back on 
with the env setting?
   
   I'm afraid the user will forget this env setting in some bash file and then 
won't get the expected performance.
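   For reference, a small usage sketch of the environment variable discussed in this PR; it assumes the backend reads MXNET_MKLDNN_ENABLED when the library is loaded, so the variable is set before the import:

   ```
   # Hypothetical usage sketch: turn MKLDNN off for this process only.
   import os
   os.environ['MXNET_MKLDNN_ENABLED'] = '0'

   import mxnet as mx  # MKLDNN code paths should now be skipped
   ```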




[GitHub] ZhennanQin commented on a change in pull request #12157: Subgraph API for integrating accelerators with MXNet

2018-08-14 Thread GitBox
ZhennanQin commented on a change in pull request #12157: Subgraph API for 
integrating accelerators with MXNet
URL: https://github.com/apache/incubator-mxnet/pull/12157#discussion_r210168099
 
 

 ##
 File path: src/operator/subgraph/default_subgraph_property.h
 ##
 @@ -0,0 +1,81 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+#ifndef MXNET_OPERATOR_SUBGRAPH_DEFAULT_SUBGRAPH_PROPERTY_H_
+#define MXNET_OPERATOR_SUBGRAPH_DEFAULT_SUBGRAPH_PROPERTY_H_
+
+#include 
+#include 
+#include "./common.h"
+#include "./subgraph_property.h"
+
+namespace mxnet {
+namespace op {
+
+/*
+ * This selects nodes for a subgraph that only contains operators
+ * in a given set and it visits nodes via both input and output links.
+ */
+class ContainOpSelector: public SubgraphSelector {
+ public:
+  explicit ContainOpSelector(const std::unordered_set<std::string>& op_names)
+      : op_names_(op_names) {}
+
+  virtual bool Select(const nnvm::Node &n) {
+    return !n.is_variable() && op_names_.count(n.op()->name);
+  }
+
+  virtual bool SelectInput(const nnvm::Node &n, const nnvm::Node &new_node) {
+    return !new_node.is_variable() && op_names_.count(new_node.op()->name);
+  }
+
+  virtual bool SelectOutput(const nnvm::Node &n, const nnvm::Node &new_node) {
+    return !new_node.is_variable() && op_names_.count(new_node.op()->name);
+  }
+ private:
+  const std::unordered_set<std::string>& op_names_;
+};
+
+/*
+ * This subgraph property finds a subgraph whose nodes have only operators
+ * within a set. The operators in the subgraph will be executed by _default_subgraph_op.
+ */
+class DefaultSubgraphProperty: public SubgraphProperty {
+ public:
+  static SubgraphPropertyPtr Create() { return std::make_shared<DefaultSubgraphProperty>(); }
+  virtual nnvm::NodePtr CreateSubgraphNode(const nnvm::Symbol &sym,
+                                           const int subgraph_id = 0) const {
+    nnvm::NodePtr n = nnvm::Node::Create();
+    n->attrs.op = Op::Get("_default_subgraph_op");
+    n->attrs.name = "_default_subgraph_op" + std::to_string(subgraph_id);
+    n->attrs.subgraphs.push_back(std::make_shared<nnvm::Symbol>(sym));
 
 Review comment:
   I heard that control dependencies will use attrs.subgraphs as well. I'm not 
sure how this attribute will be used for the dependency purpose. Do we have any 
convention defining the order of attrs.subgraphs? E.g. always put the subgraph 
symbol at index 0, followed by the dependency-purpose nodes.




[GitHub] vishaalkapoor commented on issue #12165: [MXAPPS-581] Enable remaining tests in CI.

2018-08-14 Thread GitBox
vishaalkapoor commented on issue #12165: [MXAPPS-581] Enable remaining tests in 
CI.
URL: https://github.com/apache/incubator-mxnet/pull/12165#issuecomment-413090449
 
 
   Hi @indhub, would you mind taking a look? This is the last PR for the 
Straight Dope CI. Thank you!




[GitHub] reminisce commented on a change in pull request #12157: Subgraph API for integrating accelerators with MXNet

2018-08-14 Thread GitBox
reminisce commented on a change in pull request #12157: Subgraph API for 
integrating accelerators with MXNet
URL: https://github.com/apache/incubator-mxnet/pull/12157#discussion_r210167394
 
 

 ##
 File path: src/operator/subgraph/default_subgraph_property.h
 ##
 @@ -0,0 +1,81 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+#ifndef MXNET_OPERATOR_SUBGRAPH_DEFAULT_SUBGRAPH_PROPERTY_H_
+#define MXNET_OPERATOR_SUBGRAPH_DEFAULT_SUBGRAPH_PROPERTY_H_
+
+#include 
+#include 
+#include "./common.h"
+#include "./subgraph_property.h"
+
+namespace mxnet {
+namespace op {
+
+/*
+ * This selects nodes for a subgraph that only contains operators
+ * in a given set and it visits nodes via both input and output links.
+ */
+class ContainOpSelector: public SubgraphSelector {
+ public:
+  explicit ContainOpSelector(const std::unordered_set<std::string>& op_names)
+      : op_names_(op_names) {}
+
+  virtual bool Select(const nnvm::Node &n) {
+    return !n.is_variable() && op_names_.count(n.op()->name);
+  }
+
+  virtual bool SelectInput(const nnvm::Node &n, const nnvm::Node &new_node) {
+    return !new_node.is_variable() && op_names_.count(new_node.op()->name);
+  }
+
+  virtual bool SelectOutput(const nnvm::Node &n, const nnvm::Node &new_node) {
+    return !new_node.is_variable() && op_names_.count(new_node.op()->name);
+  }
+ private:
+  const std::unordered_set<std::string>& op_names_;
+};
+
+/*
+ * This subgraph property finds a subgraph whose nodes have only operators
+ * within a set. The operators in the subgraph will be executed by _default_subgraph_op.
+ */
+class DefaultSubgraphProperty: public SubgraphProperty {
+ public:
+  static SubgraphPropertyPtr Create() { return std::make_shared<DefaultSubgraphProperty>(); }
+  virtual nnvm::NodePtr CreateSubgraphNode(const nnvm::Symbol &sym,
+                                           const int subgraph_id = 0) const {
+    nnvm::NodePtr n = nnvm::Node::Create();
+    n->attrs.op = Op::Get("_default_subgraph_op");
+    n->attrs.name = "_default_subgraph_op" + std::to_string(subgraph_id);
+    n->attrs.subgraphs.push_back(std::make_shared<nnvm::Symbol>(sym));
 
 Review comment:
   `n` is a newly created node and its `attrs.subgraphs` is empty before 
`push_back` is called. It's actually up to you to decide which index is for the 
`sym` in your specific subgraph property. Just make sure you get the correct 
subgraph in your subgraph operator implementation.




[GitHub] ZhennanQin commented on a change in pull request #12157: Subgraph API for integrating accelerators with MXNet

2018-08-14 Thread GitBox
ZhennanQin commented on a change in pull request #12157: Subgraph API for 
integrating accelerators with MXNet
URL: https://github.com/apache/incubator-mxnet/pull/12157#discussion_r209838212
 
 

 ##
 File path: src/operator/subgraph/default_subgraph_property.h
 ##
 @@ -0,0 +1,81 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+#ifndef MXNET_OPERATOR_SUBGRAPH_DEFAULT_SUBGRAPH_PROPERTY_H_
+#define MXNET_OPERATOR_SUBGRAPH_DEFAULT_SUBGRAPH_PROPERTY_H_
+
+#include 
+#include 
+#include "./common.h"
+#include "./subgraph_property.h"
+
+namespace mxnet {
+namespace op {
+
+/*
+ * This selects nodes for a subgraph that only contains operators
+ * in a given set and it visits nodes via both input and output links.
+ */
+class ContainOpSelector: public SubgraphSelector {
+ public:
+  explicit ContainOpSelector(const std::unordered_set<std::string>& op_names)
+      : op_names_(op_names) {}
+
+  virtual bool Select(const nnvm::Node &n) {
+    return !n.is_variable() && op_names_.count(n.op()->name);
+  }
+
+  virtual bool SelectInput(const nnvm::Node &n, const nnvm::Node &new_node) {
+    return !new_node.is_variable() && op_names_.count(new_node.op()->name);
+  }
+
+  virtual bool SelectOutput(const nnvm::Node &n, const nnvm::Node &new_node) {
+    return !new_node.is_variable() && op_names_.count(new_node.op()->name);
+  }
+ private:
+  const std::unordered_set<std::string>& op_names_;
+};
+
+/*
+ * This subgraph property finds a subgraph whose nodes have only operators
+ * within a set. The operators in the subgraph will be executed by _default_subgraph_op.
+ */
+class DefaultSubgraphProperty: public SubgraphProperty {
+ public:
+  static SubgraphPropertyPtr Create() { return std::make_shared<DefaultSubgraphProperty>(); }
+  virtual nnvm::NodePtr CreateSubgraphNode(const nnvm::Symbol &sym,
+                                           const int subgraph_id = 0) const {
+    nnvm::NodePtr n = nnvm::Node::Create();
+    n->attrs.op = Op::Get("_default_subgraph_op");
+    n->attrs.name = "_default_subgraph_op" + std::to_string(subgraph_id);
+    n->attrs.subgraphs.push_back(std::make_shared<nnvm::Symbol>(sym));
 
 Review comment:
   The subgraph symbol is pushed back into n->attrs.subgraphs, so how can we guarantee 
it is at index 0? Because DefaultSubgraphOpNumInputs() always tries to get the 
symbol from index 0.




[GitHub] vdantu commented on issue #12169: Remove fixed seed for test_huber_loss test

2018-08-14 Thread GitBox
vdantu commented on issue #12169: Remove fixed seed for test_huber_loss test
URL: https://github.com/apache/incubator-mxnet/pull/12169#issuecomment-413088172
 
 
   @haojin2 : 
   Yes. I ran it on a C5 instance. 
   ```
   $ MXNET_TEST_COUNT=1 nosetests --logging-level=DEBUG --verbose -s 
test_loss:test_huber_loss
   ..
   [DEBUG] 9983 of 1: Setting test np/mx/python random seeds, use 
MXNET_TEST_SEED=1965318993 to reproduce.
   [DEBUG] 9984 of 1: Setting test np/mx/python random seeds, use 
MXNET_TEST_SEED=1194083721 to reproduce.
   [DEBUG] 9985 of 1: Setting test np/mx/python random seeds, use 
MXNET_TEST_SEED=1678853894 to reproduce.
   [DEBUG] 9986 of 1: Setting test np/mx/python random seeds, use 
MXNET_TEST_SEED=337733731 to reproduce.
   [DEBUG] 9987 of 1: Setting test np/mx/python random seeds, use 
MXNET_TEST_SEED=398932915 to reproduce.
   [DEBUG] 9988 of 1: Setting test np/mx/python random seeds, use 
MXNET_TEST_SEED=1080350796 to reproduce.
   [DEBUG] 9989 of 1: Setting test np/mx/python random seeds, use 
MXNET_TEST_SEED=1684094617 to reproduce.
   [DEBUG] 9990 of 1: Setting test np/mx/python random seeds, use 
MXNET_TEST_SEED=975416777 to reproduce.
   [DEBUG] 9991 of 1: Setting test np/mx/python random seeds, use 
MXNET_TEST_SEED=567315453 to reproduce.
   [DEBUG] 9992 of 1: Setting test np/mx/python random seeds, use 
MXNET_TEST_SEED=1846054431 to reproduce.
   [DEBUG] 9993 of 1: Setting test np/mx/python random seeds, use 
MXNET_TEST_SEED=1292035281 to reproduce.
   [DEBUG] 9994 of 1: Setting test np/mx/python random seeds, use 
MXNET_TEST_SEED=210087911 to reproduce.
   [DEBUG] 9995 of 1: Setting test np/mx/python random seeds, use 
MXNET_TEST_SEED=380889040 to reproduce.
   [DEBUG] 9996 of 1: Setting test np/mx/python random seeds, use 
MXNET_TEST_SEED=75634236 to reproduce.
   [DEBUG] 9997 of 1: Setting test np/mx/python random seeds, use 
MXNET_TEST_SEED=194151385 to reproduce.
   [DEBUG] 9998 of 1: Setting test np/mx/python random seeds, use 
MXNET_TEST_SEED=405701400 to reproduce.
   [DEBUG]  of 1: Setting test np/mx/python random seeds, use 
MXNET_TEST_SEED=282875827 to reproduce.
   [DEBUG] 1 of 1: Setting test np/mx/python random seeds, use 
MXNET_TEST_SEED=1726976487 to reproduce.
   ok
   
   --
   Ran 1 test in 5911.351s
   
   OK
   ```




[GitHub] szhengac opened a new pull request #12177: Add worker_fn argument to multiworker function

2018-08-14 Thread GitBox
szhengac opened a new pull request #12177: Add worker_fn argument to 
multiworker function
URL: https://github.com/apache/incubator-mxnet/pull/12177
 
 
   ## Description ##
   This PR adds a worker_fn argument to _MultiWorkerIter. The default value will 
be worker_loop, so existing examples will not be affected. This allows us to 
accept outputs in a user-defined format from the sampler, improving the flexibility 
of the dataloader. This will be used in Gluon NLP: 
https://github.com/dmlc/gluon-nlp/pull/280. 
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [ ] The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to 
the relevant [JIRA issue](https://issues.apache.org/jira/projects/MXNET/issues) 
created (except PRs with tiny changes)
   - [x] Changes are complete (i.e. I finished coding on this PR)
   - [x] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [x] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, 
the source of the dataset, expected performance on test set and reference to 
the original paper if applicable
   - Check the API doc at 
http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [x] To the best of my knowledge, examples are either not affected by this 
change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - Add worker_fn argument to _MultiWorkerIter. 
   
   ## Comments ##
   - If this change is a backward incompatible change, why must this change be 
made.
   - Interesting edge cases to note here
   




[GitHub] nswamy commented on a change in pull request #12166: Module predict API can accept NDArray as input

2018-08-14 Thread GitBox
nswamy commented on a change in pull request #12166: Module predict API can 
accept NDArray as input
URL: https://github.com/apache/incubator-mxnet/pull/12166#discussion_r210164680
 
 

 ##
 File path: python/mxnet/module/base_module.py
 ##
 @@ -333,7 +334,7 @@ def predict(self, eval_data, num_batch=None, merge_batches=True, reset=True,
 
         Parameters
         ----------
-        eval_data : DataIter
+        eval_data : DataIter or NDArray or ndarray
 
 Review comment:
   Got that, thanks! I think it is useful.
   I was suggesting being explicit: `evalData: DataIter or NDArray or numpy 
array`. I should have used a few more words to describe what I was asking :)




[GitHub] azai91 commented on a change in pull request #12166: Module predict API can accept NDArray as input

2018-08-14 Thread GitBox
azai91 commented on a change in pull request #12166: Module predict API can 
accept NDArray as input
URL: https://github.com/apache/incubator-mxnet/pull/12166#discussion_r210163595
 
 

 ##
 File path: python/mxnet/module/base_module.py
 ##
 @@ -333,7 +334,7 @@ def predict(self, eval_data, num_batch=None, merge_batches=True, reset=True,
 
         Parameters
         ----------
-        eval_data : DataIter
+        eval_data : DataIter or NDArray or ndarray
 
 Review comment:
   Yes, the current API makes it difficult for a developer to get a model up 
and running. Given a `predict` API, I think it is reasonable for most 
developers to assume that it would accept an ndarray of some sort. Also, our 
documentation for this API is currently not very instructive: 
https://mxnet.apache.org/api/python/module/module.html (the example at the top 
of the page says that it accepts an nd_iter). If you go to the documentation 
for predict, it links to the DataIter class 
(https://mxnet.apache.org/api/python/io/io.html#mxnet.io.DataIter) without 
showing how to use it.
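   For illustration, a minimal sketch of the difference under discussion, 
assuming `mod` is an already-bound Module (a hypothetical name, not part of 
this PR):
   ```python
   import mxnet as mx
   import numpy as np

   data = np.random.rand(4, 100).astype('float32')

   # Today the predict API expects an iterator, so the array must be wrapped:
   it = mx.io.NDArrayIter(data, batch_size=2)
   # out = mod.predict(it)

   # With the change discussed in this PR, the wrapping step would no longer
   # be needed and an NDArray / numpy array could be passed directly:
   # out = mod.predict(mx.nd.array(data))
   ```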


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] pengzhao-intel commented on issue #12157: Subgraph API for integrating accelerators with MXNet

2018-08-14 Thread GitBox
pengzhao-intel commented on issue #12157: Subgraph API for integrating 
accelerators with MXNet
URL: https://github.com/apache/incubator-mxnet/pull/12157#issuecomment-413080951
 
 
   Thanks @reminisce, it's really helpful for us :)
   
   @zhennanqin @lvtao, please take a look at the compatibility of the current 
code with our interface :) 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] chinakook commented on issue #12176: MXPredReshape bug: need to reshape softmax_label

2018-08-14 Thread GitBox
chinakook commented on issue #12176: MXPredReshape bug: need to reshape 
softmax_label
URL: https://github.com/apache/incubator-mxnet/pull/12176#issuecomment-413079986
 
 
   In the inference stage, there is no need to use labels or SoftmaxOutput.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] shravankumar147 commented on issue #12114: Do we have pyspark support in mxnet.

2018-08-14 Thread GitBox
shravankumar147 commented on issue #12114: Do we have pyspark support in mxnet.
URL: 
https://github.com/apache/incubator-mxnet/issues/12114#issuecomment-413073776
 
 
   Thanks, I will be looking forward.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] sandeep-krishnamurthy closed issue #11224: ‘make lint’ is broken under python2

2018-08-14 Thread GitBox
sandeep-krishnamurthy closed issue #11224: ‘make lint’ is broken under python2
URL: https://github.com/apache/incubator-mxnet/issues/11224
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] chsin opened a new pull request #12176: MXPredReshape bug: need to reshape softmax_label

2018-08-14 Thread GitBox
chsin opened a new pull request #12176: MXPredReshape bug: need to reshape 
softmax_label
URL: https://github.com/apache/incubator-mxnet/pull/12176
 
 
   ## Description ##
   
   The fix in #11493 allows reshape only for networks that don't use 
SoftmaxOutput, so none of the pretrained image classification networks, e.g. 
inception, can be reshaped. The current test did not catch this because it 
only used gluon blocks. The reason SoftmaxOutput is weird is that it requires 
the label "softmax_label" to be reshaped even though "softmax_label" is only 
used as an input for training and should not even be looked at for 
prediction, [which has been mentioned in other contexts 
before](https://stackoverflow.com/questions/44947104/mxnet-label-shapes-dont-match-names-specified-by-label-names).
 I don't know the best way to deal with SoftmaxOutput's label reshape 
requirement because I can't find a way to get the label names. I am sending 
this pull request with an edit that allows resnet and inception to be 
reshaped, along with a test that covers this edge case, but I hope someone 
can comment on a more robust solution.
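   As a minimal Python sketch (not part of this PR) of why the label shape is 
tied to the data shape: SoftmaxOutput silently adds a `softmax_label` input 
whose inferred shape follows the batch size, so reshaping the data alone 
leaves the label shape stale.
   ```python
   import mxnet as mx

   data = mx.sym.Variable('data')
   fc = mx.sym.FullyConnected(data, num_hidden=10)
   net = mx.sym.SoftmaxOutput(fc, name='softmax')

   # 'softmax_label' shows up as an implicit argument of the symbol, and its
   # shape is inferred from the batch dimension of 'data'.
   arg_shapes, _, _ = net.infer_shape(data=(2, 100))
   print(dict(zip(net.list_arguments(), arg_shapes)))
   # ... 'softmax_label': (2,) ... a new data shape implies a new label shape.
   ```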
   
   I found this issue when I made the following edits to 
[image-classification-predict.cc:223](https://github.com/apache/incubator-mxnet/blob/master/example/image-classification/predict-cpp/image-classification-predict.cc#L223).
 Before this change, the following would print out -1, and I traced it to 
`c_predict_api.cc:304`, where `newShape.Size() != arr.shape().Size()` when 
`arg_names[i]` is `softmax_label`: 
   ```c++
  // Create Predictor
  PredictorHandle old_hnd = nullptr;
  const mx_uint old_input_shape_data[4] = { 2,
    static_cast<mx_uint>(channels),
    static_cast<mx_uint>(height),
    static_cast<mx_uint>(width) };
  MXPredCreate(static_cast<const char*>(json_data.GetBuffer()),
               static_cast<const char*>(param_data.GetBuffer()),
               static_cast<int>(param_data.GetLength()),
               dev_type,
               dev_id,
               num_input_nodes,
               input_keys,
               input_shape_indptr,
               old_input_shape_data,
               &old_hnd);
  assert(old_hnd);

  int e = MXPredReshape(num_input_nodes,
                        input_keys,
                        input_shape_indptr,
                        input_shape_data,
                        old_hnd,
                        &pred_hnd);  // pred_hnd: the handle used by the rest of the example
  printf("%d\n", e);
   ```
   
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [ ] Changes are complete (i.e. I finished coding on this PR)
   - [ ] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [ ] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, 
the source of the dataset, expected performance on test set and reference to 
the original paper if applicable
   - Check the API doc at 
http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [ ] To the best of my knowledge, examples are either not affected by this 
change, or have been fixed to be compatible with this change


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[incubator-mxnet] branch master updated: RAT check readme updated (#12170)

2018-08-14 Thread nswamy
This is an automated email from the ASF dual-hosted git repository.

nswamy pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new 8506783  RAT check readme updated (#12170)
8506783 is described below

commit 85067835e92525b498dc703711aa7411fa3e4043
Author: Roshani Nagmote 
AuthorDate: Tue Aug 14 18:48:26 2018 -0700

RAT check readme updated (#12170)
---
 tests/nightly/apache_rat_license_check/README.md | 8 
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/tests/nightly/apache_rat_license_check/README.md 
b/tests/nightly/apache_rat_license_check/README.md
index 04def91..e8578a8 100755
--- a/tests/nightly/apache_rat_license_check/README.md
+++ b/tests/nightly/apache_rat_license_check/README.md
@@ -14,7 +14,7 @@ The following commands can be used to run a Apache RAT check 
locally -
 
 Docker based 1-click-method:
 ```
-ci/build.py --platform ubuntu_rat /work/runtime_functions.sh 
nightly_test_rat_check
+ci/build.py -p ubuntu_rat nightly_test_rat_check
 ```
 
 Manual method:
@@ -25,8 +25,8 @@ sudo apt-get install maven -y #>/dev/null
 #install svn
 sudo apt-get install subversion -y #>/dev/null
 
-#download RAT
-svn co http://svn.apache.org/repos/asf/creadur/rat/trunk/ #>/dev/null
+#download RAT 0.12 version
+svn co 
http://svn.apache.org/repos/asf/creadur/rat/tags/apache-rat-project-0.12-RC3/ 
#>/dev/null
 
 #cd into correct directory
 cd trunk
@@ -38,5 +38,5 @@ mvn install #>/dev/null
 cd apache-rat/target
 
 #run Apache RAT check on the src
-java -jar apache-rat-0.13-SNAPSHOT.jar -E  -d 

+java -jar apache-rat-0.12.jar -E  -d 

 ```



[GitHub] nswamy closed pull request #12170: Apache RAT check readme updated

2018-08-14 Thread GitBox
nswamy closed pull request #12170: Apache RAT check readme updated
URL: https://github.com/apache/incubator-mxnet/pull/12170
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/tests/nightly/apache_rat_license_check/README.md 
b/tests/nightly/apache_rat_license_check/README.md
index 04def917636..e8578a85722 100755
--- a/tests/nightly/apache_rat_license_check/README.md
+++ b/tests/nightly/apache_rat_license_check/README.md
@@ -14,7 +14,7 @@ The following commands can be used to run a Apache RAT check 
locally -
 
 Docker based 1-click-method:
 ```
-ci/build.py --platform ubuntu_rat /work/runtime_functions.sh 
nightly_test_rat_check
+ci/build.py -p ubuntu_rat nightly_test_rat_check
 ```
 
 Manual method:
@@ -25,8 +25,8 @@ sudo apt-get install maven -y #>/dev/null
 #install svn
 sudo apt-get install subversion -y #>/dev/null
 
-#download RAT
-svn co http://svn.apache.org/repos/asf/creadur/rat/trunk/ #>/dev/null
+#download RAT 0.12 version
+svn co 
http://svn.apache.org/repos/asf/creadur/rat/tags/apache-rat-project-0.12-RC3/ 
#>/dev/null
 
 #cd into correct directory
 cd trunk
@@ -38,5 +38,5 @@ mvn install #>/dev/null
 cd apache-rat/target
 
 #run Apache RAT check on the src
-java -jar apache-rat-0.13-SNAPSHOT.jar -E  -d 

+java -jar apache-rat-0.12.jar -E  -d 

 ```


 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] nswamy commented on a change in pull request #12166: Module predict API can accept NDArray as input

2018-08-14 Thread GitBox
nswamy commented on a change in pull request #12166: Module predict API can 
accept NDArray as input
URL: https://github.com/apache/incubator-mxnet/pull/12166#discussion_r210146170
 
 

 ##
 File path: python/mxnet/module/base_module.py
 ##
 @@ -333,7 +334,7 @@ def predict(self, eval_data, num_batch=None, 
merge_batches=True, reset=True,
 
 Parameters
 --
-eval_data : DataIter
+eval_data : DataIter or NDArray or ndarray
 
 Review comment:
   DataIter or NDArray or numpy array?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] apeforest opened a new pull request #12175: Update codeowner

2018-08-14 Thread GitBox
apeforest opened a new pull request #12175: Update codeowner
URL: https://github.com/apache/incubator-mxnet/pull/12175
 
 
   ## Description ##
   Add myself to some of the backend modules


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] apeforest opened a new pull request #12174: [MXNET-806] Report error when shape mismatch in "where" operator

2018-08-14 Thread GitBox
apeforest opened a new pull request #12174: [MXNET-806] Report error when shape 
mismatch in "where" operator
URL: https://github.com/apache/incubator-mxnet/pull/12174
 
 
   ## Description ##
   ```mx.sym.where(cond, x, y)``` still functions given a shape-mismatched 
cond, whereas ```mx.nd.where(cond, x, y)``` will throw an exception. We should 
make sure the behavior is the same.
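   A small illustrative snippet of the imperative behavior referenced above 
(shapes chosen here only for illustration):
   ```python
   import mxnet as mx

   x = mx.nd.ones((2, 3))
   y = mx.nd.zeros((2, 3))
   bad_cond = mx.nd.ones((3, 2))   # matches neither x/y nor a length-2 vector

   # The imperative API already rejects the mismatch:
   try:
       mx.nd.where(bad_cond, x, y)
   except mx.base.MXNetError:
       print('nd.where rejected the mismatched condition shape')

   # mx.sym.where accepted the same mismatch before this fix; the PR adds the
   # equivalent shape check to the symbolic path.
   ```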
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [X] The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to 
the relevant [JIRA issue](https://issues.apache.org/jira/projects/MXNET/issues) 
created (except PRs with tiny changes)
   - [X] Changes are complete (i.e. I finished coding on this PR)
   - [X] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness
   test_operator.py:test_where():test_invalid_shape()
   - [ ] Code is well-documented: 
   - This change is consistent with the current API documentation. Therefore, 
no additional documentation should be needed.
   - Check the API doc at 
http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [X] To the best of my knowledge, examples are either not affected by this 
change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [X] where operator with mismatched shape should throw exception
   
   ## Comments ##


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] samskalicky commented on issue #12091: [MXNET-792] Fix for issue #9816 with dropout operator and RNG

2018-08-14 Thread GitBox
samskalicky commented on issue #12091: [MXNET-792] Fix for issue #9816 with 
dropout operator and RNG
URL: https://github.com/apache/incubator-mxnet/pull/12091#issuecomment-413056672
 
 
   @eric-haibin-lin I did some more investigation and it seems that the CPU RNG 
ranges over (0,1] rather than [0,1) as initially thought. I was able to get 
random values of 1.0 using seed 976064129 for test_operator.py:test_dropout on 
the CPU. This means that if dropout=0, then pkeep=1 (the goal is to keep 
everything, no dropout), yet a value will still get dropped for a random draw 
of 1.0 when using less-than thresholding. Changing the code to use <= for GPU 
and < for CPU doesn't work. 
   
   If dropout=1, then pkeep=0. This triggers the 2nd term in the mask 
computation: (1.0f/pkeep). In this case that term becomes NaN (divide by 
zero), propagating out to the output values. 
   
   The current checked-in code, which uses <= for both CPU and GPU, is valid 
and currently passes both the CPU and GPU test_dropout evaluations.
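   A toy NumPy re-creation of the thresholding described above (not the actual 
MXNet kernel) that shows both failure modes:
   ```python
   import numpy as np

   def dropout_mask(rand_vals, pkeep):
       # keep an element when its draw is <= pkeep; rescale kept ones by 1/pkeep
       keep = rand_vals <= np.float32(pkeep)
       return keep * (np.float32(1.0) / np.float32(pkeep))

   rand = np.array([0.3, 1.0], dtype=np.float32)  # a (0, 1] RNG can draw 1.0
   print(dropout_mask(rand, pkeep=1.0))  # [1. 1.] -- nothing dropped with '<='
   print(dropout_mask(rand, pkeep=0.0))  # 1/pkeep divides by zero; NaN propagates
   ```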


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] hcho3 opened a new pull request #12173: Revert "update dmlc-core (#12129)"

2018-08-14 Thread GitBox
hcho3 opened a new pull request #12173: Revert "update dmlc-core (#12129)"
URL: https://github.com/apache/incubator-mxnet/pull/12173
 
 
   ## Description ##
   Reverts apache/incubator-mxnet#12129
   
   This is to fix the double free memory error in #12139. The memory error had 
been inside CSVIter but remained undetected until a recent modification of 
CSVParser.
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [x] Changes are complete (i.e. I finished coding on this PR)
   - [x] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [x] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, 
the source of the dataset, expected performance on test set and reference to 
the original paper if applicable
   - Check the API doc at 
http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [x] To the best of my knowledge, examples are either not affected by this 
change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [ ] Revert dmlc-core to last working point
   
   ## Comments ##
   - This is a temporary measure. We'll need to come back to CSVIter and 
address the memory error. AddressSanitizer reveals double-free issues in 
CSVIter.
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] haojin2 commented on issue #12169: Remove fixed seed for test_huber_loss test

2018-08-14 Thread GitBox
haojin2 commented on issue #12169: Remove fixed seed for test_huber_loss test
URL: https://github.com/apache/incubator-mxnet/pull/12169#issuecomment-413055097
 
 
   Have you also run on CPU?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] hcho3 closed pull request #12172: Revert "update dmlc-core for security reason"

2018-08-14 Thread GitBox
hcho3 closed pull request #12172: Revert "update dmlc-core for security reason"
URL: https://github.com/apache/incubator-mxnet/pull/12172
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/3rdparty/dmlc-core b/3rdparty/dmlc-core
index 958c22b32c1..649be18a8c5 160000
--- a/3rdparty/dmlc-core
+++ b/3rdparty/dmlc-core
@@ -1 +1 @@
-Subproject commit 958c22b32c116ec967a9247d09eddb9c21ea6d4f
+Subproject commit 649be18a8c55c48517861d67158a45dec54992ee


 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] sad- commented on issue #11858: Update contribute.md (Fix links to subscribe for users and contributors)

2018-08-14 Thread GitBox
sad- commented on issue #11858: Update contribute.md (Fix links to subscribe 
for users and contributors)
URL: https://github.com/apache/incubator-mxnet/pull/11858#issuecomment-413054247
 
 
   @marcoabreu I see you tried closing and reopening and that still did not 
trigger the CI. Any idea why that did not work? Did you mean closing out this 
one entirely and opening a different one?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] hcho3 opened a new pull request #12172: Revert "update dmlc-core for security reason"

2018-08-14 Thread GitBox
hcho3 opened a new pull request #12172: Revert "update dmlc-core for security 
reason"
URL: https://github.com/apache/incubator-mxnet/pull/12172
 
 
   Reverts apache/incubator-mxnet#12129
   
   This is to fix the double free memory error in #12139.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] vandanavk commented on issue #12137: [MXNET-696] Fix undefined name errors

2018-08-14 Thread GitBox
vandanavk commented on issue #12137: [MXNET-696] Fix undefined name errors
URL: https://github.com/apache/incubator-mxnet/pull/12137#issuecomment-413052969
 
 
   @anirudhacharya I don't think so. It may take considerable time and space 
to test all the examples in the CI build.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] safrooze edited a comment on issue #12116: Excessive memory allocation without static_alloc

2018-08-14 Thread GitBox
safrooze edited a comment on issue #12116: Excessive memory allocation without 
static_alloc
URL: 
https://github.com/apache/incubator-mxnet/issues/12116#issuecomment-413052714
 
 
   The network does indeed run out of memory with larger loop count.
   ```
   mxnet.base.MXNetError: [23:50:33] 
src/storage/./pooled_storage_manager.h:119: cudaMalloc failed: out of memory
   ```
   I also tried setting `MXNET_GPU_MEM_POOL_RESERVE=100` and that has an 
interesting behavior: the peak memory usage doesn't change, but at the point 
where memory would typically stabilize, it resets back to ~4GB, climbs back 
up, and resets again, repeating this pattern. Needless to say, the performance 
is also a lot slower (~4x) because of the continuous memory allocations.
   
   I should mention that inference is composed of two hybridized networks. For 
each inference instance, the first network is called once and then the next 
network is called several times with fixed input shapes. The peak memory usage 
is a function of number of times the second network is called (i.e. the number 
of loop iterations). Without setting `MXNET_GPU_MEM_POOL_RESERVE`, if the 
network doesn't run out of memory for each inference instance, the memory 
utilization (i.e. buffer pool size) stabilizes and stays constant for 
subsequent inference runs.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] safrooze commented on issue #12116: Excessive memory allocation without static_alloc

2018-08-14 Thread GitBox
safrooze commented on issue #12116: Excessive memory allocation without 
static_alloc
URL: 
https://github.com/apache/incubator-mxnet/issues/12116#issuecomment-413052714
 
 
   The network does indeed run out of memory with larger loop count.
   ```
   mxnet.base.MXNetError: [23:50:33] 
src/storage/./pooled_storage_manager.h:119: cudaMalloc failed: out of memory
   ```
   I also tried setting `MXNET_GPU_MEM_POOL_RESERVE=100` and that has an 
interesting behavior: the peak memory usage doesn't change, but at the point 
where memory would typically stabilize, it resets back to ~4GB, climbs back 
up, and resets again, repeating this pattern.
   
   I should mention that inference is composed of two hybridized networks. For 
each inference instance, the first network is called once and then the next 
network is called several times with fixed input shapes. The peak memory usage 
is a function of number of times the second network is called (i.e. the number 
of loop iterations). Without setting `MXNET_GPU_MEM_POOL_RESERVE`, if the 
network doesn't run out of memory for each inference instance, the memory 
utilization (i.e. buffer pool size) stabilizes and stays constant for 
subsequent inference runs.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] szha edited a comment on issue #12153: documentation changes. added full reference

2018-08-14 Thread GitBox
szha edited a comment on issue #12153: documentation changes. added full 
reference
URL: https://github.com/apache/incubator-mxnet/pull/12153#issuecomment-413051268
 
 
   @yuxiangw it's not possible unfortunately. Try adding `# coding: utf-8` at 
the top of python/mxnet/optimizer.py file


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] szha commented on issue #12153: documentation changes. added full reference

2018-08-14 Thread GitBox
szha commented on issue #12153: documentation changes. added full reference
URL: https://github.com/apache/incubator-mxnet/pull/12153#issuecomment-413051268
 
 
   @yuxiangw it's not possible unfortunately. Try adding `# coding=utf-8` at 
the top of python/mxnet/optimizer.py file


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] szha commented on issue #12116: Excessive memory allocation without static_alloc

2018-08-14 Thread GitBox
szha commented on issue #12116: Excessive memory allocation without static_alloc
URL: 
https://github.com/apache/incubator-mxnet/issues/12116#issuecomment-413050809
 
 
   You can do `export MXNET_GPU_MEM_POOL_RESERVE=100` to disable the memory 
pool.
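   A minimal sketch of doing the same from inside a script, assuming the 
variable is set before MXNet makes its first GPU allocation:
   ```python
   import os
   os.environ['MXNET_GPU_MEM_POOL_RESERVE'] = '100'  # reserve 100% -> nothing is pooled

   import mxnet as mx
   x = mx.nd.ones((1024, 1024), ctx=mx.gpu(0))  # freed buffers go straight back to CUDA
   ```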


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] access2rohit commented on issue #12164: Removed fixed seed for test_nadam.

2018-08-14 Thread GitBox
access2rohit commented on issue #12164: Removed fixed seed for test_nadam.
URL: https://github.com/apache/incubator-mxnet/pull/12164#issuecomment-413050610
 
 
   @eric-haibin-lin 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] vandanavk commented on issue #8191: RNN BucketSentenceIter - Must load all data into memory?

2018-08-14 Thread GitBox
vandanavk commented on issue #8191: RNN BucketSentenceIter - Must load all data 
into memory?
URL: 
https://github.com/apache/incubator-mxnet/issues/8191#issuecomment-413050652
 
 
   @wcollins-ebsco were you able to find a way to do this?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] larroy commented on issue #12161: [WIP] A solution to prevent zombie containers locally and in CI

2018-08-14 Thread GitBox
larroy commented on issue #12161: [WIP] A solution to prevent zombie containers 
locally and in CI
URL: https://github.com/apache/incubator-mxnet/pull/12161#issuecomment-413050071
 
 
   @marcoabreu I don't think there would be any problem if it's executed by 
CI. What are you worried about specifically?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] andrewfayres commented on issue #10867: Scala Module API resize is leaking memory on the native size.

2018-08-14 Thread GitBox
andrewfayres commented on issue #10867: Scala Module API resize is leaking 
memory on the native size. 
URL: 
https://github.com/apache/incubator-mxnet/issues/10867#issuecomment-413049656
 
 
   @jessebrizzi I've got what I believe to be a working fix in my 
[repo](https://github.com/andrewfayres/incubator-mxnet/tree/reshape_bug). I 
need to do some more thorough testing and add some automated testing to this 
before submitting a PR but my preliminary testing is looking good. Feel free to 
take a look and let me know if you find any issues.
   
   I'll work on moving all of this over to the native code as soon as I get a 
few other things off my backlog.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] eric-haibin-lin opened a new issue #12171: LBSGD doc not rendering correctly

2018-08-14 Thread GitBox
eric-haibin-lin opened a new issue #12171: LBSGD doc not rendering correctly
URL: https://github.com/apache/incubator-mxnet/issues/12171
 
 
   Screenshot of the broken rendering: 
https://user-images.githubusercontent.com/5545640/44124041-07d4bf4a-9fe0-11e8-8463-f963e3e4af83.png
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] lanking520 commented on issue #12140: Impossible to provide arguments to random_normal in scala ?

2018-08-14 Thread GitBox
lanking520 commented on issue #12140: Impossible to provide arguments to 
random_normal in scala ?
URL: 
https://github.com/apache/incubator-mxnet/issues/12140#issuecomment-413047911
 
 
   Actually, if you are interested, you could be the contributor who gets this 
into the Scala package. It needs some wrappers, similar to the Python side, to 
expose the following APIs:
   ```
   _random_exponential
   _random_gamma
   _random_generalized_negative_binomial
   _random_negative_binomial
   _random_normal
   _random_poisson
   _random_uniform
   ```
   These can be code-generated from the macro side. Please let me know if you 
are interested.
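   For comparison, the Python package already wraps these internal operators; 
a short sketch of the Python-side calls the Scala wrappers would mirror:
   ```python
   import mxnet as mx

   # _random_normal / _random_uniform are exposed with named arguments:
   a = mx.nd.random.normal(loc=0.0, scale=2.0, shape=(3, 4))
   b = mx.nd.random.uniform(low=-1.0, high=1.0, shape=(3, 4))
   print(a.shape, b.shape)
   ```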


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] ankkhedia commented on issue #3833: Is it possible for Penn Treebank Language model using MXnet In R apply on new input?

2018-08-14 Thread GitBox
ankkhedia commented on issue #3833: Is it possible for Penn Treebank Language 
model using MXnet In R apply on new input?
URL: 
https://github.com/apache/incubator-mxnet/issues/3833#issuecomment-413047166
 
 
   Hi @SINsing, thanks for trying out MXNetR. Could you please provide more 
context for the question? 
   What does your new input data look like, and what exactly do you want to 
do, so that we may help you with the issue?
   
   @nswamy @sandeep-krishnamurthy Could you please tag this issue as Pending 
Requester Info


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] ankkhedia commented on issue #12002: How do I properly dimensionalize my array and tune `rnn.graph.unroll` to make the LSTM work for this multidimensional sequence

2018-08-14 Thread GitBox
ankkhedia commented on issue #12002: How do I properly dimensionalize my array 
and tune `rnn.graph.unroll` to make the LSTM work for this multidimensional 
sequence
URL: 
https://github.com/apache/incubator-mxnet/issues/12002#issuecomment-413045951
 
 
   Hi @alexmosc, thanks for using MXNetR. I can help you with the question if 
I understand the exact use case and have some sort of sample dataset.
   I want to understand what exactly you are trying to learn. When the unroll 
config is set to seq-to-one, there is one output for each sequence. So, if you 
have standard n_samples*seq_len training examples, your label should be a 
vector of size n_samples. [Example: sentiment analysis, 
http://dmlc.ml/rstats/2017/10/11/rnn-bucket-mxnet-R.html]
   
   It works very differently when the unroll config is set to one-to-one. Here 
the training data arrays and training label arrays should be exactly the same 
size. In other words, the inputs as well as the outputs are sequences and we 
learn on that data. [Example: time series, 
https://jeremiedb.github.io/mxnet_R_bucketing/TimeSeries_CPU]
   
   So, if you could give me a better idea of what exactly your train data and 
labels are and what you want the model to learn, I can try helping you.
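   A shape-only sketch of the two configurations described above (array names 
are made up for illustration):
   ```python
   import numpy as np

   n_samples, seq_len, vocab = 1000, 50, 5000

   # seq-to-one (e.g. sentiment analysis): one label per sequence
   X_s2o = np.random.randint(0, vocab, size=(n_samples, seq_len))
   y_s2o = np.random.randint(0, 2, size=(n_samples,))   # shape (n_samples,)

   # one-to-one (e.g. time series): a label for every time step
   X_o2o = np.random.rand(n_samples, seq_len)
   y_o2o = np.random.rand(n_samples, seq_len)            # same shape as inputs
   ```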
   
   @nswamy @sandeep-krishnamurthy Could you please tag the issue as Pending 
Requester Info?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] Roshrini opened a new pull request #12170: Apache RAT check readme updated

2018-08-14 Thread GitBox
Roshrini opened a new pull request #12170: Apache RAT check readme updated
URL: https://github.com/apache/incubator-mxnet/pull/12170
 
 
   ## Description ##
   The readme was outdated and had references to the RAT 0.13 version, which 
is currently broken. 
   https://github.com/apache/incubator-mxnet/pull/12148
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [ ] Changes are complete (i.e. I finished coding on this PR)
   - [ ] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, 
the source of the dataset, expected performance on test set and reference to 
the original paper if applicable
   - Check the API doc at 
http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [ ] To the best of my knowledge, examples are either not affected by this 
change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [ ] Feature1, tests, (and when applicable, API doc)
   - [ ] Feature2, tests, (and when applicable, API doc)
   
   ## Comments ##
   - If this change is a backward incompatible change, why must this change be 
made.
   - Interesting edge cases to note here
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] vandanavk commented on issue #11480: Image classfication example has wrong accuracy metric.

2018-08-14 Thread GitBox
vandanavk commented on issue #11480: Image classfication example has wrong 
accuracy metric.
URL: 
https://github.com/apache/incubator-mxnet/issues/11480#issuecomment-413044469
 
 
   @hxhxhx88 Upon further investigation, it was found that this observation is 
expected behavior. 
   
   "INFO:root:Epoch[1] Train-accuracy=" is not the epoch accuracy - the log is 
misleading (Ref: https://github.com/apache/incubator-mxnet/pull/10437). The 
plan is to remove this print statement altogether.
   
   The per-batch log is based on a user-specified value that prints accuracy 
at regular intervals (`--disp-batches` in fit.py).


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] vdantu opened a new pull request #12169: Remove fixed seed for test_huber_loss test

2018-08-14 Thread GitBox
vdantu opened a new pull request #12169: Remove fixed seed for test_huber_loss 
test
URL: https://github.com/apache/incubator-mxnet/pull/12169
 
 
   ## Description ##
   (Brief description on what this PR is about)
   Fix for issue #11696 
   Removed the fixed seed for the test "test_huber_loss". Ran this test 10,000 
times on a GPU host and didn't see any issue. 
   
   Ran:
   ```
$ MXNET_TEST_COUNT=10000 nosetests --logging-level=DEBUG --verbose -s 
test_operator_gpu:test_huber_loss
   ...
   [DEBUG] 9996 of 10000: Setting test np/mx/python random seeds, use 
MXNET_TEST_SEED=966105224 to reproduce.
   [DEBUG] 9997 of 10000: Setting test np/mx/python random seeds, use 
MXNET_TEST_SEED=1565596372 to reproduce.
   [DEBUG] 9998 of 10000: Setting test np/mx/python random seeds, use 
MXNET_TEST_SEED=1471737541 to reproduce.
   [DEBUG] 9999 of 10000: Setting test np/mx/python random seeds, use 
MXNET_TEST_SEED=1224967521 to reproduce.
   [DEBUG] 10000 of 10000: Setting test np/mx/python random seeds, use 
MXNET_TEST_SEED=1901892319 to reproduce.
   ok
   
   --
   Ran 1 test in 10227.998s
   
   OK
   $
   ```
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [ ] The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to 
the relevant [JIRA issue](https://issues.apache.org/jira/projects/MXNET/issues) 
created (except PRs with tiny changes)
   - [ ] Changes are complete (i.e. I finished coding on this PR)
   - [ ] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [ ] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, 
the source of the dataset, expected performance on test set and reference to 
the original paper if applicable
   - Check the API doc at 
http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [ ] To the best of my knowledge, examples are either not affected by this 
change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [ ] Feature1, tests, (and when applicable, API doc)
   - [ ] Feature2, tests, (and when applicable, API doc)
   
   ## Comments ##
   - If this change is a backward incompatible change, why must this change be 
made.
   - Interesting edge cases to note here
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] yuxiangw commented on issue #12153: documentation changes. added full reference

2018-08-14 Thread GitBox
yuxiangw commented on issue #12153: documentation changes. added full reference
URL: https://github.com/apache/incubator-mxnet/pull/12153#issuecomment-413038251
 
 
   @szha I don't know what's going on, but I updated only the doc string. 
Could you ignore Jenkins and merge it? Thanks!


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] piiswrong commented on issue #12116: Excessive memory allocation without static_alloc

2018-08-14 Thread GitBox
piiswrong commented on issue #12116: Excessive memory allocation without 
static_alloc
URL: 
https://github.com/apache/incubator-mxnet/issues/12116#issuecomment-413038076
 
 
   Can you demonstrate a case where it actually fails with OOM?
   We have a memory pool that caches freed memory, so the memory usage you see 
in nvidia-smi may not be the same as the actual usage.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] azai91 commented on issue #12166: Module predict API can accept NDArray as input

2018-08-14 Thread GitBox
azai91 commented on issue #12166: Module predict API can accept NDArray as input
URL: https://github.com/apache/incubator-mxnet/pull/12166#issuecomment-413037058
 
 
   @eric-haibin-lin 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] larroy commented on issue #12118: fix potential floating number overflow, enable float16

2018-08-14 Thread GitBox
larroy commented on issue #12118: fix potential floating number overflow, 
enable float16
URL: https://github.com/apache/incubator-mxnet/pull/12118#issuecomment-413036415
 
 
   http://pubs.opengroup.org/onlinepubs/7908799/xsh/limits.h.html
   OK, it seems the negative range is not well specified.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] lanking520 commented on a change in pull request #11844: [MXNET-689] add DataDesc type for the Scala Package

2018-08-14 Thread GitBox
lanking520 commented on a change in pull request #11844: [MXNET-689] add 
DataDesc type for the Scala Package
URL: https://github.com/apache/incubator-mxnet/pull/11844#discussion_r210122099
 
 

 ##
 File path: scala-package/core/src/main/scala/org/apache/mxnet/IO.scala
 ##
 @@ -332,8 +379,9 @@ abstract class DataPack() extends Iterable[DataBatch] {
 
 // Named data desc description contains name, shape, type and other extended 
attributes.
 case class DataDesc(name: String, shape: Shape,
-dtype: DType = Base.MX_REAL_TYPE, layout: String = "NCHW") 
{
-  require(shape.length == layout.length, ("number of dimensions in shape :%d 
with" +
+dtype: DType = Base.MX_REAL_TYPE, layout: String = 
Layout.UNDEFINED) {
 
 Review comment:
   It breaks backwards compatibility; users need to construct a `DataDesc` 
with `new DataDesc` instead of calling `DataDesc` directly.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] zheng-da commented on issue #12151: fix a minor bug in while_loop

2018-08-14 Thread GitBox
zheng-da commented on issue #12151: fix a minor bug in while_loop
URL: https://github.com/apache/incubator-mxnet/pull/12151#issuecomment-413033314
 
 
   @szha a test is added to trigger the bug.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] haojin2 commented on issue #11983: Add handling for grad req type other than kNullOp for indices

2018-08-14 Thread GitBox
haojin2 commented on issue #11983: Add handling for grad req type other than 
kNullOp for indices
URL: https://github.com/apache/incubator-mxnet/pull/11983#issuecomment-413033126
 
 
   @eric-haibin-lin Done with the unit test; I still have to borrow 
@junrushao1994's example.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] anirudhacharya commented on a change in pull request #12152: [MXNET-696] Fix profiler executer when memonger is used

2018-08-14 Thread GitBox
anirudhacharya commented on a change in pull request #12152: [MXNET-696] Fix 
profiler executer when memonger is used
URL: https://github.com/apache/incubator-mxnet/pull/12152#discussion_r210118296
 
 

 ##
 File path: example/profiler/README.md
 ##
 @@ -5,7 +5,9 @@ Please refer to [this 
link](http://mxnet.incubator.apache.org/faq/perf.html?high
 for visualizing profiling results and make sure that you have installed a 
version of MXNet compiled
 with `USE_PROFILER=1`.
 
-- profiler_executor.py. To run this example, simply type `python 
profiler_executor.py` in terminal.
+- profiler_executor.py. To run this example,
+1. clone mxnet-memonger (git clone 
https://github.com/dmlc/mxnet-memonger.git). Add path to mxnet-memonger to 
PYTHONPATH
 
 Review comment:
   then please change it accordingly.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] vandanavk commented on a change in pull request #12152: [MXNET-696] Fix profiler executer when memonger is used

2018-08-14 Thread GitBox
vandanavk commented on a change in pull request #12152: [MXNET-696] Fix 
profiler executer when memonger is used
URL: https://github.com/apache/incubator-mxnet/pull/12152#discussion_r210117696
 
 

 ##
 File path: example/profiler/README.md
 ##
 @@ -5,7 +5,9 @@ Please refer to [this 
link](http://mxnet.incubator.apache.org/faq/perf.html?high
 for visualizing profiling results and make sure that you have installed a 
version of MXNet compiled
 with `USE_PROFILER=1`.
 
-- profiler_executor.py. To run this example, simply type `python 
profiler_executor.py` in terminal.
+- profiler_executor.py. To run this example,
+1. clone mxnet-memonger (git clone 
https://github.com/dmlc/mxnet-memonger.git). Add path to mxnet-memonger to 
PYTHONPATH
 
 Review comment:
   As of now, this is the way to use mxnet-memonger 
(https://github.com/dmlc/mxnet-memonger/issues/4). An alternative might be to 
add it to the 3rdparty folder?
   
   Yes, I meant 'export PYTHONPATH=$PYTHONPATH:/path/to/mxnet-memonger'. 
Something like this.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] vandanavk commented on a change in pull request #12152: [MXNET-696] Fix profiler executer when memonger is used

2018-08-14 Thread GitBox
vandanavk commented on a change in pull request #12152: [MXNET-696] Fix 
profiler executer when memonger is used
URL: https://github.com/apache/incubator-mxnet/pull/12152#discussion_r210117830
 
 

 ##
 File path: example/profiler/README.md
 ##
 @@ -5,7 +5,9 @@ Please refer to [this 
link](http://mxnet.incubator.apache.org/faq/perf.html?high
 for visualizing profiling results and make sure that you have installed a 
version of MXNet compiled
 with `USE_PROFILER=1`.
 
-- profiler_executor.py. To run this example, simply type `python 
profiler_executor.py` in terminal.
+- profiler_executor.py. To run this example,
+1. clone mxnet-memonger (git clone 
https://github.com/dmlc/mxnet-memonger.git). Add path to mxnet-memonger to 
PYTHONPATH
+2. type `python profiler_executor.py` in terminal.
 
 Review comment:
   Will make this change and resubmit. Thanks


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] szha commented on issue #12015: update ndarray stack Doc for #11925

2018-08-14 Thread GitBox
szha commented on issue #12015: update ndarray stack Doc for #11925
URL: https://github.com/apache/incubator-mxnet/pull/12015#issuecomment-413019028
 
 
   @liyujiel can you do a rebase on the current master?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] cclauss commented on a change in pull request #12137: [MXNET-696] Fix undefined name errors

2018-08-14 Thread GitBox
cclauss commented on a change in pull request #12137: [MXNET-696] Fix undefined 
name errors
URL: https://github.com/apache/incubator-mxnet/pull/12137#discussion_r210104553
 
 

 ##
 File path: example/deep-embedded-clustering/model.py
 ##
 @@ -22,7 +22,7 @@
 import numpy as np
 try:
 import cPickle as pickle
 
 Review comment:
   This is correct usage... We first try to use __cPickle__, which is compiled 
C code in Python 2 and is nice and fast, but if we fail to import that 
(because it is not pip installed on Python 2 or because we are running on 
Python 3) then we use the normal standard-library pickle.
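   The full idiom, for reference:
   ```python
   try:
       import cPickle as pickle   # compiled C implementation, Python 2 only
   except ImportError:
       import pickle              # Python 3: pickle already uses the C accelerator
   ```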


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] liyujiel commented on issue #12015: update ndarray stack Doc for #11925

2018-08-14 Thread GitBox
liyujiel commented on issue #12015: update ndarray stack Doc for #11925
URL: https://github.com/apache/incubator-mxnet/pull/12015#issuecomment-413016022
 
 
   @szha 
   
   I get this error: *** Error in `python3.6': double free or corruption 
(fasttop): 0x7f58cc000950 ***
   
   I think there is a problem with CI. Same problem: tensorflow/tensorflow#6968 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] anirudhacharya commented on a change in pull request #12152: [MXNET-696] Fix profiler executer when memonger is used

2018-08-14 Thread GitBox
anirudhacharya commented on a change in pull request #12152: [MXNET-696] Fix 
profiler executer when memonger is used
URL: https://github.com/apache/incubator-mxnet/pull/12152#discussion_r210102664
 
 

 ##
 File path: example/profiler/README.md
 ##
 @@ -5,7 +5,9 @@ Please refer to [this 
link](http://mxnet.incubator.apache.org/faq/perf.html?high
 for visualizing profiling results and make sure that you have installed a 
version of MXNet compiled
 with `USE_PROFILER=1`.
 
-- profiler_executor.py. To run this example, simply type `python 
profiler_executor.py` in terminal.
+- profiler_executor.py. To run this example,
+1. clone mxnet-memonger (git clone 
https://github.com/dmlc/mxnet-memonger.git). Add path to mxnet-memonger to 
PYTHONPATH
 
 Review comment:
   Why do we need to clone the ``mxnet-memonger`` repo? 
   
   And in this line - "Add path to mxnet-memonger to PYTHONPATH" - do you mean 
"Add the ``mxnet-memonger`` folder to PYTHONPATH"?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] anirudhacharya commented on a change in pull request #12152: [MXNET-696] Fix profiler executer when memonger is used

2018-08-14 Thread GitBox
anirudhacharya commented on a change in pull request #12152: [MXNET-696] Fix 
profiler executer when memonger is used
URL: https://github.com/apache/incubator-mxnet/pull/12152#discussion_r210102097
 
 

 ##
 File path: example/profiler/README.md
 ##
 @@ -5,7 +5,9 @@ Please refer to [this 
link](http://mxnet.incubator.apache.org/faq/perf.html?high
 for visualizing profiling results and make sure that you have installed a 
version of MXNet compiled
 with `USE_PROFILER=1`.
 
-- profiler_executor.py. To run this example, simply type `python 
profiler_executor.py` in terminal.
+- profiler_executor.py. To run this example,
+1. clone mxnet-memonger (git clone 
https://github.com/dmlc/mxnet-memonger.git). Add path to mxnet-memonger to 
PYTHONPATH
+2. type `python profiler_executor.py` in terminal.
 
 Review comment:
   Format these lines like this - 
   
   - To run ``profiler_executor.py`` example,
 - clone mxnet-memonger (git clone 
https://github.com/dmlc/mxnet-memonger.git). Add path to mxnet-memonger to 
PYTHONPATH
 - type `python profiler_executor.py` in terminal.
 It will generate a json file named `profile_executor_5iter.json`.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] apeforest edited a comment on issue #12168: Error in Operator implementation guide

2018-08-14 Thread GitBox
apeforest edited a comment on issue #12168: Error in Operator implementation 
guide
URL: 
https://github.com/apache/incubator-mxnet/issues/12168#issuecomment-413012933
 
 
   @mxnet-label-bot [Doc]


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] apeforest commented on issue #12168: Error in Operator implementation guide

2018-08-14 Thread GitBox
apeforest commented on issue #12168: Error in Operator implementation guide
URL: 
https://github.com/apache/incubator-mxnet/issues/12168#issuecomment-413012933
 
 
   @mxnet-label-bot [Documentation]


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] apeforest opened a new issue #12168: Error in Operator implementation guide

2018-08-14 Thread GitBox
apeforest opened a new issue #12168: Error in Operator implementation guide
URL: https://github.com/apache/incubator-mxnet/issues/12168
 
 
   ## Description
   Error in the operator implementation guide:
   https://mxnet.incubator.apache.org/faq/add_op_in_backend.html
   
   It's correct in the markdown file: 
https://github.com/apache/incubator-mxnet/blob/master/docs/faq/add_op_in_backend.md
   
   ## Error Message:
   "Note that forward and backward functions are registered with attribute key 
FCompute, rather than FCompute."


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] anirudhacharya commented on a change in pull request #12137: [MXNET-696] Fix undefined name errors

2018-08-14 Thread GitBox
anirudhacharya commented on a change in pull request #12137: [MXNET-696] Fix 
undefined name errors
URL: https://github.com/apache/incubator-mxnet/pull/12137#discussion_r210093713
 
 

 ##
 File path: example/deep-embedded-clustering/model.py
 ##
 @@ -22,7 +22,7 @@
 import numpy as np
 try:
 import cPickle as pickle
 
 Review comment:
   Why use cPickle? Why not just pickle? In fact, cPickle is not available in 
Python 3, and most of cPickle's functionality is available in pickle.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] nswamy commented on a change in pull request #11844: [MXNET-689] add DataDesc type for the Scala Package

2018-08-14 Thread GitBox
nswamy commented on a change in pull request #11844: [MXNET-689] add DataDesc 
type for the Scala Package
URL: https://github.com/apache/incubator-mxnet/pull/11844#discussion_r210092258
 
 

 ##
 File path: 
scala-package/core/src/main/scala/org/apache/mxnet/io/NDArrayIter.scala
 ##
 @@ -42,27 +43,40 @@ import scala.collection.immutable.ListMap
 class NDArrayIter(data: IndexedSeq[(String, NDArray)],
   label: IndexedSeq[(String, NDArray)],
   private val dataBatchSize: Int, shuffle: Boolean,
-  lastBatchHandle: String) extends DataIter {
-
+  lastBatchHandle: String,
+  dataDType: DType, labelDType: DType,
 
 Review comment:
   like this better.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] yzhliu commented on a change in pull request #11844: [MXNET-689] add DataDesc type for the Scala Package

2018-08-14 Thread GitBox
yzhliu commented on a change in pull request #11844: [MXNET-689] add DataDesc 
type for the Scala Package
URL: https://github.com/apache/incubator-mxnet/pull/11844#discussion_r210064760
 
 

 ##
 File path: scala-package/core/src/main/scala/org/apache/mxnet/IO.scala
 ##
 @@ -352,7 +402,16 @@ object DataDesc {
* for each data-parallelism device.
*/
   def getBatchAxis(layout: Option[String]): Int = {
-layout.map(_.indexOf('N')).getOrElse(0)
+if (layout.isEmpty|| layout.get == Layout.UNDEFINED) {
+  logger.info("Found Undefined Layout, will use default index 0")
 
 Review comment:
   logger.warn
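   
   For illustration, a minimal sketch of how the suggested change could look, 
reconstructed from the removed `layout.map(_.indexOf('N')).getOrElse(0)` line 
above; `logger` and `Layout.UNDEFINED` are taken from the file under review, 
and the exact body may differ in the final PR:
   ```scala
   // Sketch only: warn instead of info when the layout is undefined, since
   // silently falling back to batch axis 0 may hide a user error.
   def getBatchAxis(layout: Option[String]): Int = {
     if (layout.isEmpty || layout.get == Layout.UNDEFINED) {
       logger.warn("Found undefined layout, will use default index 0")
       0
     } else {
       layout.get.indexOf('N')
     }
   }
   ```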


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] yzhliu commented on a change in pull request #11844: [MXNET-689] add DataDesc type for the Scala Package

2018-08-14 Thread GitBox
yzhliu commented on a change in pull request #11844: [MXNET-689] add DataDesc 
type for the Scala Package
URL: https://github.com/apache/incubator-mxnet/pull/11844#discussion_r210087276
 
 

 ##
 File path: 
scala-package/examples/src/main/scala/org/apache/mxnetexamples/rnn/BucketIo.scala
 ##
 @@ -94,8 +96,22 @@ object BucketIo {
   class BucketSentenceIter(
   path: String, vocab: Map[String, Int], var buckets: IndexedSeq[Int],
   _batchSize: Int, private val initStates: IndexedSeq[(String, (Int, 
Int))],
-  seperateChar: String = "  ", text2Id: Text2Id = defaultText2Id,
-  readContent: ReadContent = defaultReadContent) extends DataIter {
+  seperateChar: String, text2Id: Text2Id,
+  readContent: ReadContent,
+  dataLayout: String,
+  labelLayout: String,
+  dataDType : DType,
+  labelDType: DType) extends DataIter {
+
+// scalastyle:off
+def this(path: String, vocab: Map[String, Int], buckets: IndexedSeq[Int],
+_batchSize: Int, initStates: IndexedSeq[(String, (Int, Int))],
+seperateChar: String = "  ", text2Id: Text2Id = defaultText2Id,
+readContent: ReadContent = defaultReadContent) {
 
 Review comment:
   code-style: add indent for args


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] yzhliu commented on a change in pull request #11844: [MXNET-689] add DataDesc type for the Scala Package

2018-08-14 Thread GitBox
yzhliu commented on a change in pull request #11844: [MXNET-689] add DataDesc 
type for the Scala Package
URL: https://github.com/apache/incubator-mxnet/pull/11844#discussion_r210060938
 
 

 ##
 File path: scala-package/core/src/main/scala/org/apache/mxnet/IO.scala
 ##
 @@ -332,8 +379,9 @@ abstract class DataPack() extends Iterable[DataBatch] {
 
 // Named data desc description contains name, shape, type and other extended 
attributes.
 case class DataDesc(name: String, shape: Shape,
-dtype: DType = Base.MX_REAL_TYPE, layout: String = "NCHW") 
{
-  require(shape.length == layout.length, ("number of dimensions in shape :%d 
with" +
+dtype: DType = Base.MX_REAL_TYPE, layout: String = 
Layout.UNDEFINED) {
 
 Review comment:
   Can we add a constructor that allows Java users to easily create a DataDesc 
without specifying `dtype` and `layout`?
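   
   For illustration, a minimal sketch of such a convenience constructor, 
assuming the `Shape`, `DType`, `Base` and `Layout` declarations already used 
in this file; the exact form is an assumption, not the merged API:
   ```scala
   // Sketch only: Java callers cannot use Scala default arguments, so an
   // auxiliary constructor taking just name and shape fills in the defaults.
   case class DataDesc(name: String, shape: Shape,
       dtype: DType = Base.MX_REAL_TYPE, layout: String = Layout.UNDEFINED) {
     def this(name: String, shape: Shape) =
       this(name, shape, Base.MX_REAL_TYPE, Layout.UNDEFINED)
   }
   ```
   Java code could then write `new DataDesc("data", shape)` without touching 
`dtype` or `layout`.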


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] yzhliu commented on a change in pull request #11844: [MXNET-689] add DataDesc type for the Scala Package

2018-08-14 Thread GitBox
yzhliu commented on a change in pull request #11844: [MXNET-689] add DataDesc 
type for the Scala Package
URL: https://github.com/apache/incubator-mxnet/pull/11844#discussion_r210066179
 
 

 ##
 File path: 
scala-package/core/src/main/scala/org/apache/mxnet/io/NDArrayIter.scala
 ##
 @@ -285,12 +317,37 @@ object NDArrayIter {
   this
 }
 
+/**
+  * Set the dtype.
+  * @param dataDType The dtype of the data, default is Float32
+  * @param labelDType The dtype of the label, default is Int32
+  * @return this
+  */
+def setDType(dataDType: DType, labelDType: DType): Builder = {
 
 Review comment:
   Suggest separating this into `setDataDtype` and `setLabelDtype`.
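   
   For illustration, a minimal sketch of the split setters, assuming the 
Builder keeps the `dataDType`/`labelDType` fields that the quoted `setDType` 
writes to; the method names follow the reviewer's suggestion and are not the 
merged API:
   ```scala
   // Sketch only: one fluent setter per dtype, so callers can override the
   // data and label dtypes independently.
   def setDataDtype(dataDType: DType): Builder = {
     this.dataDType = dataDType
     this
   }
   
   def setLabelDtype(labelDType: DType): Builder = {
     this.labelDType = labelDType
     this
   }
   ```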


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] vandanavk commented on issue #11224: ‘make lint’ is broken under python2

2018-08-14 Thread GitBox
vandanavk commented on issue #11224: ‘make lint’ is broken under python2
URL: 
https://github.com/apache/incubator-mxnet/issues/11224#issuecomment-413001434
 
 
   @TaoLv dmlc-core has been pulled in. The fix is in incubator-mxnet now.
   
   @sandeep-krishnamurthy Could you close this issue?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] azai91 opened a new pull request #12167: Add test to check that binded is not set when exception thrown

2018-08-14 Thread GitBox
azai91 opened a new pull request #12167: Add test to check that binded is not 
set when exception thrown
URL: https://github.com/apache/incubator-mxnet/pull/12167
 
 
   ## Description ##
   Added a test for the previous PR 
https://github.com/apache/incubator-mxnet/pull/12155, which sets the binded 
flag only after setup is complete.
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [ ] The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to 
the relevant [JIRA issue](https://issues.apache.org/jira/projects/MXNET/issues) 
created (except PRs with tiny changes)
   - [ ] Changes are complete (i.e. I finished coding on this PR)
   - [ ] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [ ] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, 
the source of the dataset, the expected performance on the test set, and a 
reference to the original paper if applicable
   - Check the API doc at 
http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [ ] To the best of my knowledge, examples are either not affected by this 
change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [ ] Add test to check that binded is set correctly.
   
   ## Comments ##
   - If this change is a backward incompatible change, why must this change be 
made.
   - Interesting edge cases to note here
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] lanking520 edited a comment on issue #12140: Impossible to provide arguments to random_normal in scala ?

2018-08-14 Thread GitBox
lanking520 edited a comment on issue #12140: Impossible to provide arguments to 
random_normal in scala ?
URL: 
https://github.com/apache/incubator-mxnet/issues/12140#issuecomment-412979383
 
 
   @mdespriee 
   
https://mxnet.incubator.apache.org/api/python/symbol/symbol.html#mxnet.symbol.random_normal
   
https://mxnet.incubator.apache.org/api/python/symbol/symbol.html#mxnet.symbol.random.normal
   
   I got your point. The Scala package currently does not have a Random 
module; we will think about supporting that!


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] andrewfayres commented on a change in pull request #11844: [MXNET-689] add DataDesc type for the Scala Package

2018-08-14 Thread GitBox
andrewfayres commented on a change in pull request #11844: [MXNET-689] add 
DataDesc type for the Scala Package
URL: https://github.com/apache/incubator-mxnet/pull/11844#discussion_r210085855
 
 

 ##
 File path: 
scala-package/core/src/main/scala/org/apache/mxnet/io/NDArrayIter.scala
 ##
 @@ -42,27 +43,40 @@ import scala.collection.immutable.ListMap
 class NDArrayIter(data: IndexedSeq[(String, NDArray)],
   label: IndexedSeq[(String, NDArray)],
   private val dataBatchSize: Int, shuffle: Boolean,
-  lastBatchHandle: String) extends DataIter {
-
+  lastBatchHandle: String,
+  dataDType: DType, labelDType: DType,
 
 Review comment:
   Can we put the DType and Label in the same IndexedSeq as the NDArray? It 
would help make this a little less prone to bugs because the two would be 
coupled together.
   
   Something like:
   ```scala
   class NDArrayIter(data: IndexedSeq[(DataDesc, NDArray)],
   label: IndexedSeq[(DataDesc, NDArray)],
   val dataBatchSize: Int,
   lastBatchHandle: String)
   ```


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] apeforest commented on issue #11084: Undefined Behavior of mx.sym.where with shape-mismatched cond

2018-08-14 Thread GitBox
apeforest commented on issue #11084: Undefined Behavior of mx.sym.where with 
shape-mismatched cond
URL: 
https://github.com/apache/incubator-mxnet/issues/11084#issuecomment-412993771
 
 
   I have created a JIRA https://issues.apache.org/jira/browse/MXNET-806 to 
track this bug and am working on it this week.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] apeforest edited a comment on issue #11084: Undefined Behavior of mx.sym.where with shape-mismatched cond

2018-08-14 Thread GitBox
apeforest edited a comment on issue #11084: Undefined Behavior of mx.sym.where 
with shape-mismatched cond
URL: 
https://github.com/apache/incubator-mxnet/issues/11084#issuecomment-412993771
 
 
   I have created a JIRA https://issues.apache.org/jira/browse/MXNET-806 to 
track this bug and am working on it this week.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] szha commented on issue #12160: Remove conflicting llvm OpenMP from cmake builds

2018-08-14 Thread GitBox
szha commented on issue #12160: Remove conflicting llvm OpenMP from cmake builds
URL: https://github.com/apache/incubator-mxnet/pull/12160#issuecomment-412993609
 
 
   You may want to check this on macOS, where the "special" clang doesn't 
provide the -fopenmp option.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] kalpitdixit edited a comment on issue #9171: MXNet: Using FusedRNNCell with its "bidirectional" flag turned True, can lead to hanging of training run.

2018-08-14 Thread GitBox
kalpitdixit edited a comment on issue #9171: MXNet: Using FusedRNNCell with its 
"bidirectional" flag turned True, can lead to hanging of training run.
URL: 
https://github.com/apache/incubator-mxnet/issues/9171#issuecomment-412992466
 
 
   @vandanavk 
   Re-ran my code on the latest version of MXNet. This issue does not happen 
any longer.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] kalpitdixit closed issue #9171: MXNet: Using FusedRNNCell with its "bidirectional" flag turned True, can lead to hanging of training run.

2018-08-14 Thread GitBox
kalpitdixit closed issue #9171: MXNet: Using FusedRNNCell with its 
"bidirectional" flag turned True, can lead to hanging of training run.
URL: https://github.com/apache/incubator-mxnet/issues/9171
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] kalpitdixit commented on issue #9171: MXNet: Using FusedRNNCell with its "bidirectional" flag turned True, can lead to hanging of training run.

2018-08-14 Thread GitBox
kalpitdixit commented on issue #9171: MXNet: Using FusedRNNCell with its 
"bidirectional" flag turned True, can lead to hanging of training run.
URL: 
https://github.com/apache/incubator-mxnet/issues/9171#issuecomment-412992466
 
 
   Re-ran my code on the latest version of MXNet. This issue does not happen 
any longer.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] lanking520 commented on a change in pull request #11844: [MXNET-689] add DataDesc type for the Scala Package

2018-08-14 Thread GitBox
lanking520 commented on a change in pull request #11844: [MXNET-689] add 
DataDesc type for the Scala Package
URL: https://github.com/apache/incubator-mxnet/pull/11844#discussion_r210079765
 
 

 ##
 File path: 
scala-package/core/src/main/scala/org/apache/mxnet/io/PrefetchingIter.scala
 ##
 @@ -68,6 +70,42 @@ class PrefetchingIter(
 }
   }
 
+  private val _provideDataDesc: IndexedSeq[DataDesc] = {
+if (dataNames == null) {
+  iters.map(_.provideDataDesc).foldLeft(IndexedSeq[DataDesc]()) { (acc, 
elem) =>
+acc ++ elem
+  }
+} else {
+  iters.zipWithIndex.map(tu => (tu._1.provideDataDesc, tu._2))
+.map(m =>
+  m._1.map(t =>
+new DataDesc(dataNames(m._2)(t.name), t.shape, t.dtype, t.layout)
+  )
+)
+.foldLeft(IndexedSeq[DataDesc]()) { (acc, elem) =>
+  acc ++ elem
+}
+}
+  }
+
+  private val _provideLabelDesc: IndexedSeq[DataDesc] = {
+if (dataNames == null) {
+  iters.map(_.provideLabelDesc).foldLeft(IndexedSeq[DataDesc]()) { (acc, 
elem) =>
+acc ++ elem
+  }
+} else {
+  iters.zipWithIndex.map(tu => (tu._1.provideDataDesc, tu._2))
 
 Review comment:
   Nice!


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

