[GitHub] pengzhao-intel commented on a change in pull request #11129: [MXNET-497] fix bugs in MKLDNN operators to handle the kAddTo request

2018-06-25 Thread GitBox
pengzhao-intel commented on a change in pull request #11129: [MXNET-497] fix 
bugs in MKLDNN operators to handle the kAddTo request
URL: https://github.com/apache/incubator-mxnet/pull/11129#discussion_r198020569
 
 

 ##
 File path: src/operator/nn/mkldnn/mkldnn_base.cc
 ##
 @@ -141,13 +212,16 @@ void CommitOutput(const NDArray &arr, const mkldnn_output_t &res) {
   if (res.first == CopyBack) {
 const_cast<NDArray &>(arr).CopyFrom(*res.second);
   } else if (res.first == AddBack) {
+auto res_memory = res.second;
+auto target_pd = arr.GetMKLDNNData()->get_primitive_desc();
 auto mem = arr.GetMKLDNNData(res.second->get_primitive_desc());
-CHECK(mem != nullptr);
-// We have to allocate new memory for the sum result.
-auto sum_res = TmpMemMgr::Get()->Alloc(
-res.second->get_primitive_desc());
-op::MKLDNNSum(*res.second, *mem, *sum_res);
-const_cast<NDArray &>(arr).CopyFrom(*sum_res);
+if (mem == nullptr) {
+  auto tmp_memory = TmpMemMgr::Get()->Alloc(target_pd);
+  MKLDNNCopy(*res_memory, tmp_memory);
+  res_memory = tmp_memory;
 
 Review comment:
   As I understand it, `MKLDNNCopy` already reorders `res_memory` into 
`tmp_memory`, so why do we need to assign `tmp_memory` back to `res_memory`?
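   For readers following along: the `kAddTo` request this PR handles means the operator's result must be accumulated into the existing output rather than overwrite it (the `CopyBack` case). A minimal NumPy sketch of these write-request semantics (illustrative only, not the MKLDNN code — `commit_output` is a hypothetical name):

   ```python
   import numpy as np

   def commit_output(arr, result, req):
       """Illustrative sketch of MXNet write-request semantics."""
       if req == "kWriteTo":    # CopyBack: overwrite the destination
           arr[:] = result
       elif req == "kAddTo":    # AddBack: accumulate into the destination;
           arr[:] += result     # in MKLDNN the layouts may need reordering first
       return arr

   out = np.ones(3)
   commit_output(out, np.full(3, 2.0), "kAddTo")
   print(out)  # [3. 3. 3.]
   ```

   The reorder step under discussion exists because the two memories may use different MKLDNN layouts, so a plain elementwise sum is only valid after both operands share the target primitive descriptor.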


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] zhanghang1989 commented on issue #9989: Cannot train example gluon style transfer

2018-06-25 Thread GitBox
zhanghang1989 commented on issue #9989: Cannot train example gluon style 
transfer
URL: 
https://github.com/apache/incubator-mxnet/issues/9989#issuecomment-400175517
 
 
   The main problem that stops hybridization is the Gram matrix calculation, 
which reads the shape: 
https://github.com/zhanghang1989/MXNet-Gluon-Style-Transfer/blob/master/net.py#L159-L164
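   For context, the Gram (feature-correlation) matrix used in style transfer can be sketched as follows — a NumPy illustration of the computation, not the Gluon code in `net.py`. The `x.shape` read is the step that blocks hybridization, since hybridized symbols carry no concrete shape:

   ```python
   import numpy as np

   def gram_matrix(x):
       """Gram matrix of a feature map x with shape (batch, channels, h, w).

       Flattens the spatial dims and correlates channels; reading x.shape is
       what prevents this block from hybridizing in Gluon.
       """
       b, c, h, w = x.shape
       feats = x.reshape(b, c, h * w)
       # batch matrix product: (b, c, hw) @ (b, hw, c) -> (b, c, c)
       gram = feats @ feats.transpose(0, 2, 1)
       return gram / (c * h * w)

   x = np.random.rand(2, 3, 4, 4)
   print(gram_matrix(x).shape)  # (2, 3, 3)
   ```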




[GitHub] jeremiedb commented on issue #11374: [MXNET-563] Refactor R optimizers to fix memory leak

2018-06-25 Thread GitBox
jeremiedb commented on issue #11374: [MXNET-563] Refactor R optimizers to fix 
memory leak
URL: https://github.com/apache/incubator-mxnet/pull/11374#issuecomment-400175274
 
 
   @anirudhacharya Sure, I'll add tests. 
   It would be great if you could jump in as well. I was expecting to add the 
missing Adagrad and Adadelta optimizers within a week in order to match 
existing functionalities as soon as possible. Would you be willing to look at 
the non-mutable NDArrays, which were actually the root cause that led to 
refactoring the optimizers into symbolic execution? Thanks!




[GitHub] reminisce commented on a change in pull request #11325: Added TensorRT runtime integration

2018-06-25 Thread GitBox
reminisce commented on a change in pull request #11325: Added TensorRT runtime 
integration
URL: https://github.com/apache/incubator-mxnet/pull/11325#discussion_r198010854
 
 

 ##
 File path: src/operator/contrib/tensorrt-inl.h
 ##
 @@ -0,0 +1,140 @@
+#ifndef MXNET_OPERATOR_CONTRIB_TENSORRT_INL_H_
+#define MXNET_OPERATOR_CONTRIB_TENSORRT_INL_H_
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * Copyright (c) 2018 by Contributors
+ * \file tensorrt-inl.h
+ * \brief TensorRT Operator
+ * \author Marek Kolodziej, Clement Fuji Tsang
+*/
+
+#if MXNET_USE_TENSORRT
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include 
+#include 
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include "../operator_common.h"
+#include "../../common/utils.h"
+#include "../../common/serialization.h"
+#include "../../executor/exec_pass.h"
+#include "../../executor/graph_executor.h"
+#include "../../executor/onnx_to_tensorrt.h"
+
+namespace mxnet {
+namespace op {
+
+using namespace nnvm;
+using namespace ::onnx;
+using int64 = ::google::protobuf::int64;
+
+namespace tensorrt {
+  enum class TypeIO { Inputs = 0, Outputs = 1 };
+  using NameToIdx_t = std::map;
+  using InferenceTuple_t = std::tuple;
+  using InferenceMap_t = std::map;
+}  // namespace tensorrt
+
+using trt_name_to_idx = std::map;
+
+struct TRTParam : public dmlc::Parameter<TRTParam> {
+  std::string serialized_onnx_graph;
+  std::string serialized_input_map;
+  std::string serialized_output_map;
+  tensorrt::NameToIdx_t input_map;
+  tensorrt::InferenceMap_t output_map;
+  ::onnx::ModelProto onnx_pb_graph;
+
+  TRTParam() {}
+
+  TRTParam(const ::onnx::ModelProto& onnx_graph,
+   const tensorrt::InferenceMap_t& input_map,
+   const tensorrt::NameToIdx_t& output_map) {
+common::Serialize(input_map, &serialized_input_map);
+common::Serialize(output_map, &serialized_output_map);
+onnx_graph.SerializeToString(&serialized_onnx_graph);
+  }
+
+DMLC_DECLARE_PARAMETER(TRTParam) {
+DMLC_DECLARE_FIELD(serialized_onnx_graph)
+.describe("Serialized ONNX graph");
+DMLC_DECLARE_FIELD(serialized_input_map)
+.describe("Map from inputs to topological order as input.");
+DMLC_DECLARE_FIELD(serialized_output_map)
+.describe("Map from outputs to order in g.outputs.");
+  }
+};
+
+struct TRTEngineParam {
+  nvinfer1::IExecutionContext* trt_executor;
+  std::vector > binding_map;
+};
+
+OpStatePtr TRTCreateState(const nnvm::NodeAttrs& attrs, Context ctx,
+  const std::vector<TShape>& ishape,
+  const std::vector<int>& itype);
+
+template<typename xpu>
+void TRTCompute(const OpStatePtr& state, const OpContext& ctx,
 
 Review comment:
   IMHO, it's the framework's job to throw error messages like this. Registering a 
CPU version of stateful FCompute for TRT doesn't sound semantically correct, 
even though it would print an error message in the Forward function. If the 
framework's error message is not informative enough, we can always improve it.




[GitHub] nicklhy opened a new issue #11400: how to train models with multiple gpus in C++

2018-06-25 Thread GitBox
nicklhy opened a new issue #11400: how to train models with multiple gpus in C++
URL: https://github.com/apache/incubator-mxnet/issues/11400
 
 
   The `mx.mod.Module` provides a convenient high-level API for model training 
in Python. But for some reasons, I need to train my models in a pure C++ 
environment. I was wondering if it is also possible to support multiple GPU 
devices with the interfaces in `cpp-package/include/mxnet-cpp`. Currently, I can 
only find the `Executor` in `cpp-package/include/mxnet-cpp/executor.h`, which 
can set a single context variable each time.
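   For background, the data-parallel scheme that `mx.mod.Module` implements — one executor per device, each processing a slice of the batch, with gradients aggregated — can be sketched in plain Python (a hypothetical illustration, not the `mxnet-cpp` API):

   ```python
   def data_parallel_step(batch, num_devices, grad_fn, w):
       """One training step split across num_devices simulated devices.

       Each 'device' computes gradients on its slice of the batch; the
       gradients are then summed, mirroring what one Executor per context
       would do when driving multiple GPUs from C++.
       """
       slices = [batch[i::num_devices] for i in range(num_devices)]
       grads = [grad_fn(s, w) for s in slices if s]
       return sum(grads)  # aggregated gradient, ready for the optimizer

   # gradient of 0.5 * (w - x)^2 summed over the samples in a slice
   grad_fn = lambda xs, w: sum(w - x for x in xs)
   batch = [1.0, 2.0, 3.0, 4.0]
   print(data_parallel_step(batch, 2, grad_fn, 0.0))  # -10.0
   ```

   In C++ this would correspond to binding one `Executor` per `Context`, copying batch slices to each, and summing gradient arrays before the update.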




[GitHub] haojin2 commented on a change in pull request #11389: [MXNET-566] Fix flaky test_operator_gpu.test_sparse_dot

2018-06-25 Thread GitBox
haojin2 commented on a change in pull request #11389: [MXNET-566] Fix flaky 
test_operator_gpu.test_sparse_dot
URL: https://github.com/apache/incubator-mxnet/pull/11389#discussion_r198005802
 
 

 ##
 File path: src/operator/tensor/dot-inl.cuh
 ##
 @@ -1053,7 +1053,7 @@ inline void DotDnsCsrDnsImpl(const OpContext& ctx, const 
gpu& gpu_dev,
   TBlob csr_indices = rhs.aux_data(csr::kIdx);
   TBlob csr_indptr = rhs.aux_data(csr::kIndPtr);
   if (!rhs.storage_initialized()) {
-FillZerosCsrImpl(s, *ret);
 
 Review comment:
   Maybe it's failing in `set_aux_shape()` here: 
https://github.com/apache/incubator-mxnet/blob/b2ccd34ad2801b6c87c957c28ad718562a4c5b6e/src/operator/tensor/init_op.h#L359?
 I can run a quick check and let you know the result.




[GitHub] eric-haibin-lin commented on a change in pull request #11330: [MXNET-537] add_n(dense, csr, dense) = dense and add_n([dense, csr, rsp]*, dense, [dense, csr, rsp]*) = dense on CPU & GPU

2018-06-25 Thread GitBox
eric-haibin-lin commented on a change in pull request #11330: [MXNET-537] 
add_n(dense, csr, dense) = dense and add_n([dense, csr, rsp]*, dense, [dense, 
csr, rsp]*) = dense on CPU & GPU
URL: https://github.com/apache/incubator-mxnet/pull/11330#discussion_r198001502
 
 

 ##
 File path: src/ndarray/ndarray_function.cu
 ##
 @@ -185,6 +187,101 @@ void ElementwiseSumRspImpl(mshadow::Stream* s,
   });
 }
 
+void ElementwiseSumDnsCsrDnsImpl(mshadow::Stream<gpu>* s,
+ const Resource& rsc,
+ const std::vector& nds,
+ NDArray* out) {
+  using namespace mxnet::op;
+  using namespace mxnet::op::mxnet_op;
+  const TBlob& out_data = out->data();
+  MSHADOW_TYPE_SWITCH(out->dtype(), DType, {  // data type
+Kernel::Launch(
+  s, out_data.Size(), out_data.dptr(), kWriteTo, 
nds[0].data().dptr(),
+  nds[2].data().dptr());
+const TBlob& csr_data = nds[1].data();
+const TBlob& csr_indices = nds[1].aux_data(csr::kIdx);
+const TBlob& csr_indptr = nds[1].aux_data(csr::kIndPtr);
+const nnvm::dim_t num_rows = nds[1].shape()[0];
+const nnvm::dim_t num_cols = nds[1].shape()[1];
+MSHADOW_IDX_TYPE_SWITCH(csr_indices.type_flag_, IType, {  // indices type
+  MSHADOW_IDX_TYPE_SWITCH(csr_indptr.type_flag_, CType, {  // indptr type
+if (nds[1].storage_initialized()) {
+  Kernel, 
gpu>::Launch(
+s, 32 * num_rows, out_data.dptr(), out_data.dptr(),
+csr_data.dptr(), csr_indices.dptr(),
+csr_indptr.dptr(), num_rows, num_cols);
+}
+  });
+});
+  });
+}
+
+void ElementwiseSumContainsDnsImpl(mshadow::Stream<gpu>* s,
+ const Resource& rsc,
+ const std::vector& nds,
+ NDArray* out) {
+  using namespace mxnet::op;
+  using namespace mxnet::op::mxnet_op;
+  const TBlob& out_data = out->data();
+  MSHADOW_TYPE_SWITCH(out->dtype(), DType, {  // data type
+for (size_t i = 0; i < nds.size(); ++i) {
+  const NDArray& nd = nds[i];
+  const nnvm::dim_t num_rows = nd.shape()[0];
+  const nnvm::dim_t num_cols = nd.shape()[1];
+  const TBlob& nd_data = nd.data();
+
+  if (i == 0) {
+if (nd.storage_type() == kDefaultStorage) {
+  Kernel, gpu>::Launch(
+s, out_data.Size(), out_data.dptr(), nd_data.dptr());
+  continue;
+} else {
+  Kernel::Launch(s, out_data.Size(), 
out_data.dptr());
+}
+  }
+
+  switch (nd.storage_type()) {
+case kDefaultStorage: {
+  Kernel, gpu>::Launch(
+s, out_data.Size(), out_data.dptr(), out_data.dptr(),
+nd_data.dptr());
+  break;
+}
+case kCSRStorage: {
+  const TBlob& nd_indices = nd.aux_data(csr::kIdx);
+  const TBlob& nd_indptr = nd.aux_data(csr::kIndPtr);
+  MSHADOW_IDX_TYPE_SWITCH(nd_indices.type_flag_, IType, {  // indices 
type
+MSHADOW_IDX_TYPE_SWITCH(nd_indptr.type_flag_, CType, {  // indptr 
type
+  if (nd.storage_initialized()) {
+Kernel, gpu>::Launch(
+  s, 32 * num_rows, out_data.dptr(), 
out_data.dptr(),
 
 Review comment:
   Suggest using a const variable with a meaningful name instead of 32.
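   The suggestion is about the magic number `32` in the kernel launch. Sketched in Python for brevity (the constant name and its meaning — one GPU warp of threads cooperating per CSR row — are assumptions, not taken from the PR):

   ```python
   # Hypothetical named constant replacing the magic number 32 in the launch size.
   THREADS_PER_ROW = 32  # assumed: one warp of threads cooperates on each row

   def launch_size(num_rows):
       """Total number of GPU threads to launch for a row-parallel CSR kernel."""
       return THREADS_PER_ROW * num_rows

   print(launch_size(100))  # 3200
   ```

   The same intent in the C++ would be a `constexpr` at namespace scope, so the launch expression documents itself at every call site.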




[GitHub] mkolod commented on issue #11380: Add ability to query cuDNN BatchNorm min. epsilon. Allow ONNX importer to use cuDNN BN if chosen eps >= cuDNN min. eps.

2018-06-25 Thread GitBox
mkolod commented on issue #11380: Add ability to query cuDNN BatchNorm min. 
epsilon. Allow ONNX importer to use cuDNN BN if chosen eps >= cuDNN min. eps.
URL: https://github.com/apache/incubator-mxnet/pull/11380#issuecomment-400164515
 
 
   @marcoabreu It seems that all tests pass on all platforms except 
Windows-GPU, which is failing on all tests with `CUDA: unspecified launch 
failure`. The issue may be with the Windows runner.




[GitHub] samhodge commented on issue #9989: Cannot train example gluon style transfer

2018-06-25 Thread GitBox
samhodge commented on issue #9989: Cannot train example gluon style transfer
URL: 
https://github.com/apache/incubator-mxnet/issues/9989#issuecomment-400163481
 
 
   Thanks for the swift response @zhanghang1989 
   
   I was hoping to serialise the model and run in C++ as a hybrid model.
   
   Do you have any suggestions as to how to run the model in C++, besides pybind11?




[GitHub] eric-haibin-lin commented on a change in pull request #11389: [MXNET-566] Fix flaky test_operator_gpu.test_sparse_dot

2018-06-25 Thread GitBox
eric-haibin-lin commented on a change in pull request #11389: [MXNET-566] Fix 
flaky test_operator_gpu.test_sparse_dot
URL: https://github.com/apache/incubator-mxnet/pull/11389#discussion_r198000365
 
 

 ##
 File path: src/operator/tensor/dot-inl.cuh
 ##
 @@ -1053,7 +1053,7 @@ inline void DotDnsCsrDnsImpl(const OpContext& ctx, const 
gpu& gpu_dev,
   TBlob csr_indices = rhs.aux_data(csr::kIdx);
   TBlob csr_indptr = rhs.aux_data(csr::kIndPtr);
   if (!rhs.storage_initialized()) {
-FillZerosCsrImpl(s, *ret);
 
 Review comment:
   Why is there a segmentation fault instead of a LOG(FATAL) error message? The 
`CheckAndAllocData` function used by `FillZerosCsrImpl` does check the stype: 
https://github.com/apache/incubator-mxnet/blob/master/include/mxnet/ndarray.h#L621




[GitHub] larroy commented on issue #11359: Flaky test test_io:test_ImageRecordIter_seed_augmentation

2018-06-25 Thread GitBox
larroy commented on issue #11359: Flaky test 
test_io:test_ImageRecordIter_seed_augmentation
URL: 
https://github.com/apache/incubator-mxnet/issues/11359#issuecomment-400157621
 
 
   
http://jenkins.mxnet-ci.amazon-ml.com/blue/organizations/jenkins/incubator-mxnet/detail/PR-11382/6/pipeline




[GitHub] ijkguo commented on issue #11373: update rcnn example

2018-06-25 Thread GitBox
ijkguo commented on issue #11373: update rcnn example
URL: https://github.com/apache/incubator-mxnet/pull/11373#issuecomment-400157323
 
 
   @ZiyueHuang MutableModule is now gone.




[GitHub] larroy edited a comment on issue #11205: Clojure Contrib Package

2018-06-25 Thread GitBox
larroy edited a comment on issue #11205: Clojure Contrib Package
URL: https://github.com/apache/incubator-mxnet/pull/11205#issuecomment-400152945
 
 
   Hi @gigasquid, CI runs stuff in docker containers. There's a Dockerfile 
which defines the platform; in your case you should use ubuntu_cpu.
   
   The steps to add your code to CI would be the following:
   
   1. Add your clojure dependencies (jvm, lein etc.) to 
`ci/docker/install/ubuntu_clojure.sh` (check how it's done with scala in 
`ci/docker/install/ubuntu_scala.sh`). These are used to build the container.
   
   2. Add one function to `ci/docker/runtime_functions.sh` that builds your 
package. This is the entry point that is triggered inside the docker container.
   
   and then add a stage into the Jenkinsfile:
   
   See for example this PR which adds an additional Android stage:
   
   
https://github.com/apache/incubator-mxnet/pull/11382/files#diff-58231b16fdee45a03a4ee3cf94a9f2c3L486
   
   To test locally run:
   
   ```
   ci/build.py --platform ubuntu_cpu --shm-size 500m /work/runtime_functions.sh unittest_ubuntu_cpu_scala
   ```
   
   but just use the function for clojure that was just created.
   
   `ci/build.py --platform ubuntu_cpu -i` will put you inside the container, 
so you can check what steps you need to take and add to the 
runtime_functions.sh script.
   
   You can also run build.py from OS X with docker; you might need to increase 
the memory in preferences. Let me know if you have any issues. 
   
   
   PS: to install docker on an ubuntu machine you can do the following:
   ```
   function install_docker_ubuntu() {
 #apt-get -y install docker docker.io
 export DEBIAN_FRONTEND=noninteractive
 curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add 
-
 add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/ubuntu \
   $(lsb_release -cs) \
  stable"
 apt-get update
 apt-get -y install docker-ce
 # Nvidia docker
 wget -P /tmp 
https://github.com/NVIDIA/nvidia-docker/releases/download/v1.0.1/nvidia-docker_1.0.1-1_amd64.deb
 dpkg -i /tmp/nvidia-docker*.deb && rm /tmp/nvidia-docker*.deb
   
 # Restart docker
 service docker restart
   
 # Add ubuntu to docker group
 usermod -a -G docker ubuntu
   }
   ```
   






[incubator-mxnet-site] branch asf-site updated: Bump the publish timestamp.

2018-06-25 Thread zhasheng
This is an automated email from the ASF dual-hosted git repository.

zhasheng pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new 987250c  Bump the publish timestamp.
987250c is described below

commit 987250c67fabe01012540acd9181ec4dd0992730
Author: mxnet-ci 
AuthorDate: Tue Jun 26 01:37:16 2018 +

Bump the publish timestamp.
---
 date.txt | 1 +
 1 file changed, 1 insertion(+)

diff --git a/date.txt b/date.txt
new file mode 100644
index 000..6164f23
--- /dev/null
+++ b/date.txt
@@ -0,0 +1 @@
+Tue Jun 26 01:37:16 UTC 2018



[GitHub] azai91 commented on issue #11129: [MXNET-497] fix bugs in MKLDNN operators to handle the kAddTo request

2018-06-25 Thread GitBox
azai91 commented on issue #11129: [MXNET-497] fix bugs in MKLDNN operators to 
handle the kAddTo request
URL: https://github.com/apache/incubator-mxnet/pull/11129#issuecomment-400145869
 
 
   @zheng-da @pengzhao-intel updated PR. please take a look when you have time.




[GitHub] TaoLv commented on issue #11399: [WIP] Add Fused Vanilla RNN and dropout

2018-06-25 Thread GitBox
TaoLv commented on issue #11399: [WIP] Add Fused Vanilla RNN and dropout
URL: https://github.com/apache/incubator-mxnet/pull/11399#issuecomment-400145788
 
 
   Please remove [WIP] from the title and add the JIRA number to it. 
https://issues.apache.org/jira/browse/MXNET-107




[GitHub] lihaofd opened a new pull request #11399: [WIP] Add Fused Vanilla RNN and dropout

2018-06-25 Thread GitBox
lihaofd opened a new pull request #11399: [WIP] Add Fused Vanilla RNN and 
dropout
URL: https://github.com/apache/incubator-mxnet/pull/11399
 
 
   ## Description ##
   This PR creates a fused vanilla RNN (tanh/relu) operator and adds dropout 
support for GRU/LSTM/vRNN on CPU.
   @pengzhao-intel, @TaoLv 
   
   ## Feature changes ##
   ### New features ###
   - Single-layer/multi-layer and unidirectional/bidirectional vanilla 
RNN (tanh/relu), including both forward and backward computation.
   - Support for dropout in GRU/LSTM/vRNN
   
   ### Unit-test changes ###
   - Create a new test case in tests/python/unittests/test_operator.py.
   - Update the test case in example/rnn/bucketing/cudnn_rnn_bucketing.py.
   - Check consistency with the original RNNCell implementation.
   
   ### Performance ###
   We have tested the performance of FusedRNN and the non-fused RNNCell on a 
local Skylake-8180 with 2 sockets and 56 cores, using MKL as the BLAS library.
   The test input size comes from the DS2 default parameters (seq_length = 300, 
batch_size = 20, input_size = 800, hidden_size = 800).
   
   Layer=1 bidirectional = False
   
   | API | Inference time (fwd, samples/sec) | Training time (fwd + bwd, samples/sec) |
   | --- | :-: | :-: |
   | rnn.RNNCell - NoFusedRNN(Tanh, CPU) | 492.61 | 198.02 |
   | this PR - FusedRNN(Tanh, CPU) | 952.38 | 318.98 |
   | speedup | 1.93x | 1.61x |
   
   | API | Inference time (fwd, samples/sec) | Training time (fwd + bwd, samples/sec) |
   | --- | :-: | :-: |
   | rnn.RNNCell - NoFusedRNN(Relu, CPU) | 277.78 | 104.17 |
   | this PR - FusedRNN(Relu, CPU) | 740.74 | 177 |
   | speedup | 2.67x | 1.7x |
   
   Layer=5 bidirectional = True
   
   | API | Inference time (fwd, samples/sec) | Training time (fwd + bwd, samples/sec) |
   | --- | :-: | :-: |
   | rnn.RNNCell - NoFusedRNN(Tanh, CPU) | 38.91 | 22.73 |
   | rnn.RNNCell (Tanh, cuda) | 47.85 | 26.95 |
   | rnn.RNNCell (Tanh, cudnn) | 208.33 | 81.63 |
   | this PR - FusedRNN(Tanh, CPU) | 104.17 | 34.01 |
   | speedup - this PR/RNNCell (Tanh, CPU) | 267.7% | 149.7% |
   | speedup - this PR/RNNCell (Tanh, cuda) | 217.7% | 126.2% |
   | speedup - this PR/RNNCell (Tanh, cudnn) | 50% | 41.7% |
   
   | API | Inference time (fwd, samples/sec) | Training time (fwd + bwd, samples/sec) |
   | --- | :-: | :-: |
   | rnn.RNNCell - NoFusedRNN(Relu, CPU) | 40.73 | 22.6 |
   | rnn.RNNCell (Relu, cuda) | 52.91 | 26.81 |
   | rnn.RNNCell (Relu, cudnn) | 206.83 | 82.64 |
   | this PR - FusedRNN(Relu, CPU) | 134.23 | 35.97 |
   | speedup - this PR/RNNCell (Relu, CPU) | 329.5% | 159.2% |
   | speedup - this PR/RNNCell (Relu, cuda) | 253.7% | 134.2% |
   | speedup - this PR/RNNCell (Relu, cudnn) | 64.9% | 43.5% |
   
   ### Convergence Curve ###
   We have tested the convergence of FusedGRU/LSTM (dropout = 0.5) on our 
CPU (Skylake-8180, 2 sockets, 56 cores) and a GPU (P100) by using 
example/rnn/bucketing/cudnn_rnn_bucketing.py.
   The test input size is layer = 3, batch_size = 32, num-embed = 800, 
num-hidden = 800, num-epochs = 20.



[GitHub] gigasquid commented on a change in pull request #11205: Clojure Contrib Package

2018-06-25 Thread GitBox
gigasquid commented on a change in pull request #11205: Clojure Contrib Package
URL: https://github.com/apache/incubator-mxnet/pull/11205#discussion_r197981747
 
 

 ##
 File path: 
contrib/clojure-package/examples/cnn-text-classification/src/cnn_text_classification/classifier.clj
 ##
 @@ -0,0 +1,112 @@
+;;
+;; Licensed to the Apache Software Foundation (ASF) under one or more
+;; contributor license agreements.  See the NOTICE file distributed with
+;; this work for additional information regarding copyright ownership.
+;; The ASF licenses this file to You under the Apache License, Version 2.0
+;; (the "License"); you may not use this file except in compliance with
+;; the License.  You may obtain a copy of the License at
+;;
+;;http://www.apache.org/licenses/LICENSE-2.0
+;;
+;; Unless required by applicable law or agreed to in writing, software
+;; distributed under the License is distributed on an "AS IS" BASIS,
+;; WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+;; See the License for the specific language governing permissions and
+;; limitations under the License.
+;;
+
+(ns cnn-text-classification.classifier
+  (:require [cnn-text-classification.data-helper :as data-helper]
+[org.apache.clojure-mxnet.eval-metric :as eval-metric]
+[org.apache.clojure-mxnet.io :as mx-io]
+[org.apache.clojure-mxnet.module :as m]
+[org.apache.clojure-mxnet.ndarray :as ndarray]
+[org.apache.clojure-mxnet.optimizer :as optimizer]
+[org.apache.clojure-mxnet.symbol :as sym]
+[org.apache.clojure-mxnet.context :as context])
+  (:gen-class))
+
+(def mr-dataset-path "data/mr-data") ;; the MR polarity dataset path
+(def glove-file-path "data/glove/glove.6B.50d.txt")
+
+(defn shuffle-data [test-num {:keys [data label sentence-count sentence-size 
embedding-size]}]
+  (println "Shuffling the data and splitting into training and test sets")
+  (println {:sentence-count sentence-count
+:sentence-size sentence-size
+:embedding-size embedding-size})
+  (let [shuffled (shuffle (map (fn [d l] [d l]) data label))
+train-num (- (count shuffled) test-num)
+training (into [] (take train-num shuffled))
+test (into [] (drop train-num shuffled))]
+{:training {:data  (ndarray/array (into [] (flatten (mapv (fn [v] (first 
v)) training)))
+  [train-num 1 sentence-size 
embedding-size]) ;; has to be channel x y
+:label (ndarray/array (into [] (flatten (mapv (fn [v] (last v) 
) training)))
+  [train-num])}
+ :test {:data  (ndarray/array (into [] (flatten (mapv (fn [v] (first v)) 
test)))
+  [test-num 1 sentence-size embedding-size]) 
;; has to be channel x y
+:label (ndarray/array (into [] (flatten (mapv (fn [v] (last v) ) 
test)))
+  [test-num])}}))
+
+;;; convnet with multiple filter sizes
+;; from Convolutional Neural Networks for Sentence Classification by Yoon Kim
+(defn get-multi-filter-convnet [num-embed sentence-size batch-size]
+  (let [filter-list [3 4 5]
+num-filter 100
+num-label 2
+dropout 0.5
+input-x (sym/variable "data")
+polled-outputs (mapv (fn [filter-size]
+   (as-> (sym/convolution {:data input-x
+   :kernel [filter-size 
num-embed]
+   :num-filter 
num-filter}) data
+ (sym/activation {:data data :act-type "relu"})
+ (sym/pooling {:data data
+   :pool-type "max"
 
 Review comment:
   not sure exactly what you are proposing.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services
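The `shuffle-data` helper quoted in the review above shuffles paired (data, label) examples together and splits them into training and test sets. The same logic can be sketched in Python (an illustrative translation; the function name and signature are mine, not part of the Clojure package):

```python
import random

def shuffle_and_split(data, labels, test_num, seed=None):
    # Pair each example with its label, shuffle the pairs together,
    # then split off the last `test_num` pairs as the test set.
    pairs = list(zip(data, labels))
    random.Random(seed).shuffle(pairs)
    train_num = len(pairs) - test_num
    train, test = pairs[:train_num], pairs[train_num:]
    return {
        "training": {"data": [d for d, _ in train],
                     "label": [l for _, l in train]},
        "test": {"data": [d for d, _ in test],
                 "label": [l for _, l in test]},
    }
```

Shuffling before splitting matters here because the MR polarity examples are loaded grouped by class; without it the test set would be dominated by one label.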


[GitHub] toddsundsted opened a new issue #11398: Floating Point Exception after Array Creation

2018-06-25 Thread GitBox
toddsundsted opened a new issue #11398: Floating Point Exception after Array 
Creation
URL: https://github.com/apache/incubator-mxnet/issues/11398
 
 
   ## Description
   Expressions like `nd.random_uniform(shape=[5, 5, -2])` and 
`nd.random_uniform(shape=[5, 5, 0])` cause the runtime to crash (the former 
with `std::bad_alloc`, the latter with `Floating point exception: 8`). It's a 
problem in versions 1.1.0 to 1.1.3 (master).
   
   ## Environment info (Required)
   ```
   --Python Info--
   Version  : 3.6.4
   Compiler : GCC 4.2.1 Compatible Clang 4.0.1 (tags/RELEASE_401/final)
   Build: ('default', 'Jan 16 2018 12:04:33')
   Arch : ('64bit', '')
   Pip Info---
   Version  : 10.0.1
   Directory: /Users/tsundsted/miniconda3/lib/python3.6/site-packages/pip
   --MXNet Info---
   Version  : 1.1.0
   Directory: /Users/tsundsted/miniconda3/lib/python3.6/site-packages/mxnet
   Hashtag not found. Not installed from pre-built package.
   --System Info--
   Platform : Darwin-16.7.0-x86_64-i386-64bit
   system   : Darwin
   node : Todds-MacBook-Pro.local
   release  : 16.7.0
   version  : Darwin Kernel Version 16.7.0: Fri Apr 27 17:59:46 PDT 2018; 
root:xnu-3789.73.13~1/RELEASE_X86_64
   --Hardware Info--
   machine  : x86_64
   processor: i386
   b'machdep.cpu.extfeatures: SYSCALL XD 1GBPAGE EM64T LAHF LZCNT RDTSCP TSCI'
   b'machdep.cpu.leaf7_features: SMEP ERMS RDWRFSGS TSC_THREAD_OFFSET BMI1 AVX2 
BMI2 INVPCID FPU_CSDS'
   b'machdep.cpu.features: FPU VME DE PSE TSC MSR PAE MCE CX8 APIC SEP MTRR PGE 
MCA CMOV PAT PSE36 CLFSH DS ACPI MMX FXSR SSE SSE2 SS HTT TM PBE SSE3 PCLMULQDQ 
DTES64 MON DSCPL VMX SMX EST TM2 SSSE3 FMA CX16 TPR PDCM SSE4.1 SSE4.2 x2APIC 
MOVBE POPCNT AES PCID XSAVE OSXSAVE SEGLIM64 TSCTMR AVX1.0 RDRAND F16C'
   b'machdep.cpu.brand_string: Intel(R) Core(TM) i7-4960HQ CPU @ 2.60GHz'
   --Network Test--
   Setting timeout: 10
   Timing for MXNet: https://github.com/apache/incubator-mxnet, DNS: 0.0176 
sec, LOAD: 0.4752 sec.
   Timing for Gluon Tutorial(en): http://gluon.mxnet.io, DNS: 0.4092 sec, LOAD: 
0.6132 sec.
   Timing for Gluon Tutorial(cn): https://zh.gluon.ai, DNS: 3.4790 sec, LOAD: 
0.7560 sec.
   Timing for FashionMNIST: 
https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/dataset/fashion-mnist/train-labels-idx1-ubyte.gz,
 DNS: 0.1225 sec, LOAD: 0.8626 sec.
   Timing for PYPI: https://pypi.python.org/pypi/pip, DNS: 0.0179 sec, LOAD: 
0.4299 sec.
   Timing for Conda: https://repo.continuum.io/pkgs/free/, DNS: 0.0157 sec, 
LOAD: 0.3484 sec.
   ```
   
   I'm using Python.
   
   ## Error Message:
   ```
   $ python
   Python 3.6.4 |Anaconda, Inc.| (default, Jan 16 2018, 12:04:33)
   [GCC 4.2.1 Compatible Clang 4.0.1 (tags/RELEASE_401/final)] on darwin
   Type "help", "copyright", "credits" or "license" for more information.
   >>> import mxnet as mx
   >>> import mxnet.ndarray as nd
   >>> nd.random_uniform(shape=[5, 5, -2])
   Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
 File 
"/Users/tsundsted/miniconda3/lib/python3.6/site-packages/mxnet/ndarray/ndarray.py",
 line 182, in __repr__
   return '\n%s\n<%s %s @%s>' % (str(self.asnumpy()),
 File 
"/Users/tsundsted/miniconda3/lib/python3.6/site-packages/mxnet/ndarray/ndarray.py",
 line 1793, in asnumpy
   libc++abi.dylib: terminating with uncaught exception of type std::bad_alloc: 
std::bad_alloc
   Abort trap: 6
   ```
   ```
   $ python
   Python 3.6.4 |Anaconda, Inc.| (default, Jan 16 2018, 12:04:33)
   [GCC 4.2.1 Compatible Clang 4.0.1 (tags/RELEASE_401/final)] on darwin
   Type "help", "copyright", "credits" or "license" for more information.
   >>> import mxnet as mx
   >>> import mxnet.ndarray as nd
   >>> nd.random_uniform(shape=[5, 5, 0])
   Floating point exception: 8
   ```
   ## Minimum reproducible example
   Any shape with a non-positive dimension size: for example, 
`nd.random_uniform(shape=[5, 5, -2])` and `nd.random_uniform(shape=[5, 5, 0])`.
   
   ## Steps to reproduce
   1. run python
   2. `import mxnet as mx`
   3. `import mxnet.ndarray as nd`
   4. `nd.random_uniform(shape=[5, 5, 0])`
   
   ## What have you tried to solve it?
   Created https://github.com/apache/incubator-mxnet/pull/11397
   




[GitHub] gigasquid commented on a change in pull request #11205: Clojure Contrib Package

2018-06-25 Thread GitBox
gigasquid commented on a change in pull request #11205: Clojure Contrib Package
URL: https://github.com/apache/incubator-mxnet/pull/11205#discussion_r197981330
 
 

 ##
 File path: 
contrib/clojure-package/examples/cnn-text-classification/src/cnn_text_classification/classifier.clj
 ##
 @@ -0,0 +1,112 @@
+;;
+;; Licensed to the Apache Software Foundation (ASF) under one or more
+;; contributor license agreements.  See the NOTICE file distributed with
+;; this work for additional information regarding copyright ownership.
+;; The ASF licenses this file to You under the Apache License, Version 2.0
+;; (the "License"); you may not use this file except in compliance with
+;; the License.  You may obtain a copy of the License at
+;;
+;;http://www.apache.org/licenses/LICENSE-2.0
+;;
+;; Unless required by applicable law or agreed to in writing, software
+;; distributed under the License is distributed on an "AS IS" BASIS,
+;; WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+;; See the License for the specific language governing permissions and
+;; limitations under the License.
+;;
+
+(ns cnn-text-classification.classifier
+  (:require [cnn-text-classification.data-helper :as data-helper]
+[org.apache.clojure-mxnet.eval-metric :as eval-metric]
+[org.apache.clojure-mxnet.io :as mx-io]
+[org.apache.clojure-mxnet.module :as m]
+[org.apache.clojure-mxnet.ndarray :as ndarray]
+[org.apache.clojure-mxnet.optimizer :as optimizer]
+[org.apache.clojure-mxnet.symbol :as sym]
+[org.apache.clojure-mxnet.context :as context])
+  (:gen-class))
+
+(def mr-dataset-path "data/mr-data") ;; the MR polarity dataset path
+(def glove-file-path "data/glove/glove.6B.50d.txt")
+
+(defn shuffle-data [test-num {:keys [data label sentence-count sentence-size 
embedding-size]}]
+  (println "Shuffling the data and splitting into training and test sets")
+  (println {:sentence-count sentence-count
+:sentence-size sentence-size
+:embedding-size embedding-size})
+  (let [shuffled (shuffle (map (fn [d l] [d l]) data label))
+train-num (- (count shuffled) test-num)
+training (into [] (take train-num shuffled))
+test (into [] (drop train-num shuffled))]
+{:training {:data  (ndarray/array (into [] (flatten (mapv (fn [v] (first 
v)) training)))
 
 Review comment:
   I think `#()` here is preferable. Sometimes, when I spent too much time 
translating Scala code to Clojure, my brain got a bit fuzzy - will fix   




[GitHub] gigasquid commented on a change in pull request #11205: Clojure Contrib Package

2018-06-25 Thread GitBox
gigasquid commented on a change in pull request #11205: Clojure Contrib Package
URL: https://github.com/apache/incubator-mxnet/pull/11205#discussion_r197980344
 
 

 ##
 File path: contrib/clojure-package/examples/cnn-text-classification/README.md
 ##
 @@ -0,0 +1,38 @@
+# cnn-text-classification
+
+An example of text classification using CNN
+
+To use, you must download the MR polarity dataset and put it in the path 
specified in the mr-dataset-path
+The dataset can be obtained here: 
[https://github.com/yoonkim/CNN_sentence](https://github.com/yoonkim/CNN_sentence).
 The two files `rt-polarity.neg`
+and `rt-polarity.pos` must be put in a directory. For example, 
`data/mr-data/rt-polarity.neg`.
+
+You also must download the glove word embeddings. The suggested one to use is 
the smaller 50 dimension one
 
 Review comment:
   Word2vec is available in the demo as well - but I haven't been able to test 
that yet. I can put that on the Needs Help page 
https://cwiki.apache.org/confluence/display/MXNET/Clojure+Package+Contribution+Needs




[GitHub] gigasquid commented on a change in pull request #11205: Clojure Contrib Package

2018-06-25 Thread GitBox
gigasquid commented on a change in pull request #11205: Clojure Contrib Package
URL: https://github.com/apache/incubator-mxnet/pull/11205#discussion_r197979848
 
 

 ##
 File path: contrib/clojure-package/examples/cnn-text-classification/README.md
 ##
 @@ -0,0 +1,38 @@
 
 Review comment:
   It was smaller and could fit into my laptop memory :)




[GitHub] gigasquid commented on a change in pull request #11205: Clojure Contrib Package

2018-06-25 Thread GitBox
gigasquid commented on a change in pull request #11205: Clojure Contrib Package
URL: https://github.com/apache/incubator-mxnet/pull/11205#discussion_r197979725
 
 

 ##
 File path: contrib/clojure-package/README.md
 ##
 @@ -0,0 +1,203 @@
+# Clojure MXNet
+
+A Clojure package for the MXNet Deep Learning library
+
+## Introduction
+
+MXNet is a first-class, modern deep learning library that AWS has officially 
picked as its deep learning library of choice. It supports multiple languages 
on a first-class basis and is incubating as an Apache project.
+
+The motivation for creating a Clojure package is to be able to open the deep 
learning library to the Clojure ecosystem and build bridges for future 
development and innovation for the community. It provides all the needed tools 
including low level and high level apis, dynamic graphs, and things like GAN 
and natural language support.
+
+For high leverage, the Clojure package has been built on the existing Scala 
package using interop. This has allowed rapid development and close parity with 
the Scala functionality. This also leaves the door open to directly developing 
code against the jni-bindings with Clojure in the future in an incremental 
fashion, using the test suites as a refactoring guide.
+
+## Current State and Plans
+
+The Clojure package is nearing the end of its first development milestone 
which is to achieve a close parity with the Scala package and to potentially be 
included into the main project for official Clojure language support.
+
+What is needed now is alpha testing on both OSX and Linux to discover any 
bugs, rough edges, and generally harden it before an official PR is opened on 
the main project.
+
+Help with this effort is greatly appreciated and contributors will be 
recognized in the project README.
+
+Testing instructions can be found in the Testing.md
+
+## Getting Started
+
+The following systems are supported:
+
+- OSX cpu
+- Linux cpu
+- Linux gpu
+
+There are two ways to get going. The first and easiest is to use the 
pre-built jars from Maven. The second is to build from source. In both cases, 
you will need to install the prerequisites and dependencies (like OpenCV).
+
+It's been tested on AWS Deep Learning AMI and OSX High Sierra 10.13.4
+
+
+### Prerequisites
+
+**If you are using the AWS Deep Learning Ubuntu or Linux AMI you should be 
good to go without doing anything on this step.**
+
+
+Follow the instructions from 
https://mxnet.incubator.apache.org/install/osx_setup.html or 
https://mxnet.incubator.apache.org/install/ubuntu_setup.html
+about _Prepare Environment for GPU Installation_
+and _Install MXNet dependencies_
+
+
+ Cloning the repo and running from source
+
+To use the prebuilt jars, you will need to replace the native version of the 
line in the project dependencies with your configuration.
+
+`[org.apache.mxnet/mxnet-full_2.11-linux-x86_64-gpu "1.2.0"]`
+or
+`[org.apache.mxnet/mxnet-full_2.11-linux-x86_64-cpu "1.2.0"]`
+or
+`[org.apache.mxnet/mxnet-full_2.11-osx-x86_64-cpu "1.2.0"]`
+
+
+```clojure
+
+(ns tutorial.ndarray
+  (:require [org.apache.clojure-mxnet.ndarray :as ndarray]
+[org.apache.clojure-mxnet.context :as context]))
+
+;;Create NDArray
+(def a (ndarray/zeros [100 50])) ;;all zero array of dimension 100 x 50
+(def b (ndarray/ones [256 32 128 1])) ;; all one array of dimension
+(def c (ndarray/array [1 2 3 4 5 6] [2 3])) ;; array with contents of a shape 
2 x 3
+
+;;; There are also ways to convert to a vec or get the shape as an object or 
vec
+(ndarray/->vec c) ;=> [1.0 2.0 3.0 4.0 5.0 6.0]
+```
+
+See the examples/tutorial section for more.
+
+
+The jars from Maven contain the needed MXNet native binaries. On startup, 
the native libraries are extracted from the jar and copied into a temporary 
location on your path. On termination, they are deleted.
+
+If you want details on the flags (opencv version and cuda version of the 
jars), they are documented here 
https://cwiki.apache.org/confluence/display/MXNET/MXNet-Scala+Release+Process
+
+
+### Build from MXNET Source
+
+Check out the latest sha from the main package
+
+`git clone --recursive https://github.com/dmlc/mxnet ~/mxnet`
+`cd ~/mxnet`
+
+
+`git checkout tags/1.2.0 -b release-1.2.0`
+
+`git submodule update --init --recursive`
+
+Sometimes it is useful to use this script to do a hard clean
+https://gist.github.com/nicktoumpelis/11214362
+
+
+Go here to do the base package installation 
https://mxnet.incubator.apache.org/install/index.html
+
+ Run `make scalapkg` then `make scalainstall`
+
+then replace the correct jar for your architecture in the project.clj, example 
`[ml.dmlc.mxnet/mxnet-full_2.11-osx-x86_64-cpu "1.0.1-SNAPSHOT"]`
+
+ Test your installation
+
+To test your installation, you should run `lein test`. This will run the test 
suite (CPU) for the clojure package.
+
+
+ Generation of NDArray and Symbol apis
+
+The bulk of the ndarray and symbol apis are generated via java reflection 

[GitHub] toddsundsted opened a new pull request #11397: Check Shape

2018-06-25 Thread GitBox
toddsundsted opened a new pull request #11397: Check Shape
URL: https://github.com/apache/incubator-mxnet/pull/11397
 
 
   ## Description ##
   The `NDArray` constructors do not ensure that shape dimensions are all 
positive numbers. In Python, at least, expressions like 
`nd.random_uniform(shape=[5, 5, -2])` and `nd.random_uniform(shape=[5, 5, 0])` 
cause the runtime to crash.
   
   ## Checklist ##
   ### Essentials ###
   - [X] Changes are complete (i.e. I finished coding on this PR)
   - [X] Code is well-documented: 
    - [X] To my best knowledge, examples are either not affected by this 
change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - Create a method `CheckShape()` and invoke it in every constructor.
   
   ## Comments ##
   I didn't see unit tests for `NDArray`. I'd be happy to use or create unit 
tests, if that is desired.
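   The `CheckShape()` validation described in the Changes section lives in the 
C++ constructors; the same guard can be sketched at the Python level (an 
illustrative sketch — `checked_shape` is a hypothetical name, not an MXNet 
API):

```python
def checked_shape(shape):
    # Reject shapes with non-positive dimensions before they reach an
    # NDArray constructor (illustrative; not the actual C++ CheckShape).
    shape = tuple(int(dim) for dim in shape)
    for i, dim in enumerate(shape):
        if dim <= 0:
            raise ValueError(
                "dimension %d of shape %s must be positive, got %d"
                % (i, shape, dim))
    return shape
```

   With such a guard, a shape like `[5, 5, -2]` raises a `ValueError` instead 
of crashing the runtime.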
   




[GitHub] gigasquid commented on a change in pull request #11205: Clojure Contrib Package

2018-06-25 Thread GitBox
gigasquid commented on a change in pull request #11205: Clojure Contrib Package
URL: https://github.com/apache/incubator-mxnet/pull/11205#discussion_r197979462
 
 

 ##
 File path: contrib/clojure-package/README.md
 ##
 @@ -0,0 +1,203 @@
+# Clojure MXNet
+
+A clojure package to the MXNet Deep Learning library
+
+## Introduction
+
+MXNet is a first class, modern deep learning library that AWS has officially 
picked as its chosen library. It supports multiple languages on a first class 
basis and is incubating as an Apache project.
+
+The motivation for creating a Clojure package is to be able to open the deep 
learning library to the Clojure ecosystem and build bridges for future 
development and innovation for the community. It provides all the needed tools 
including low level and high level apis, dynamic graphs, and things like GAN 
and natural language support.
+
+For high leverage, the Clojure package has been built on the existing Scala 
package using interop. This has allowed rapid development and close parity with 
the Scala functionality. This also leaves the door open to directly developing 
code against the jni-bindings with Clojure in the future in an incremental 
fashion, using the test suites as a refactoring guide.
+
+## Current State and Plans
+
+The Clojure package is nearing the end of its first development milestone 
which is to achieve a close parity with the Scala package and to potentially be 
included into the main project for official Clojure language support.
+
+What is needed now is alpha testing on both OSX and Linux to discover any 
bugs, rough edges, and generally harden it before an official PR is opened on 
the main project.
+
+Help with this effort is greatly appreciated and contributors will be 
recognized in the project README.
+
+Testing instructions can be found in the Testing.md
+
+## Getting Started
+
+The following systems are supported:
+
+- OSX cpu
+- Linux cpu
+- Linux gpu
+
+There are two ways of getting going. The first way is the easiest and that is 
to use the pre-built jars from Maven. The second way is to build from source. 
In both cases, you will need to load the prereqs and dependencies, (like 
opencv).
+
+It's been tested on AWS Deep Learning AMI and OSX High Sierra 10.13.4
+
+
+### Prerequisites
+
+**If you are using the AWS Deep Learning Ubuntu or Linux AMI you should be 
good to go without doing anything on this step.**
+
+
+Follow the instructions from 
https://mxnet.incubator.apache.org/install/osx_setup.html or 
https://mxnet.incubator.apache.org/install/ubuntu_setup.html
+about _Prepare Environment for GPU Installation_
+and _Install MXNet dependencies_
+
+
+ Cloning the repo and running from source
+
+To use the prebuilt jars, you will need to replace the native version of the 
line in the project dependencies with your configuration.
+
+`[org.apache.mxnet/mxnet-full_2.11-linux-x86_64-gpu "1.2.0"]`
+or
+`[org.apache.mxnet/mxnet-full_2.11-linux-x86_64-cpu "1.2.0"]`
+or
+`[org.apache.mxnet/mxnet-full_2.11-osx-x86_64-cpu "1.2.0"]`
+
+
+```clojure
+
+(ns tutorial.ndarray
+  (:require [org.apache.clojure-mxnet.ndarray :as ndarray]
+[org.apache.clojure-mxnet.context :as context]))
+
+;;Create NDArray
+(def a (ndarray/zeros [100 50])) ;;all zero arrray of dimension 100 x 50
+(def b (ndarray/ones [256 32 128 1])) ;; all one array of dimension
+(def c (ndarray/array [1 2 3 4 5 6] [2 3])) ;; array with contents of a shape 
2 x 3
+
+;;; There are also ways to convert to a vec or get the shape as an object or 
vec
+(ndarray/->vec c) ;=> [1.0 2.0 3.0 4.0 5.0 6.0]
+```
+
+See the examples/tutorial section for more.
+
+
+The jars from maven with the needed MXNet native binaries in it. On startup, 
the native libraries are extracted from the jar and copied into a temporary 
location on your path. On termination, they are deleted.
+
+If you want details on the flags (opencv verison and cuda version of the 
jars), they are documented here 
https://cwiki.apache.org/confluence/display/MXNET/MXNet-Scala+Release+Process
+
+
+### Build from MXNET Source
+
+Checkout the latest sha from the main package
+
+`git clone --recursive https://github.com/dmlc/mxnet ~/mxnet`
+`cd ~/mxnet`
+
+
+`git checkout tags/1.2.0 -b release-1.2.0`
+
+`git submodule update --init --recursive`
+
+Sometimes it useful to use this script to clean hard
+https://gist.github.com/nicktoumpelis/11214362
+
+
+Go here to do the base package installation 
https://mxnet.incubator.apache.org/install/index.html
+
+ Run `make scalapkg` then `make scalainstall`
+
+then replace the correct jar for your architecture in the project.clj, example 
`[ml.dmlc.mxnet/mxnet-full_2.11-osx-x86_64-cpu "1.0.1-SNAPSHOT"]`
+
+ Test your installation
+
+To test your installation, you should run `lein test`. This will run the test 
suite (CPU) for the clojure package.
+
+
+ Generation of NDArray and Symbol apis
+
+The bulk of the ndarray and symbol apis are generated via java reflection 

[GitHub] spidyDev commented on issue #9857: C++ test Core dump DROPOUT_PERF.TimingGPU

2018-06-25 Thread GitBox
spidyDev commented on issue #9857: C++ test Core dump DROPOUT_PERF.TimingGPU
URL: 
https://github.com/apache/incubator-mxnet/issues/9857#issuecomment-400130297
 
 
   @marcoabreu I ran this test ~1000 times and couldn't replicate the failure. 
Can we close this issue?




[GitHub] piiswrong closed pull request #10931: [MXNET-349] Histogram Operator

2018-06-25 Thread GitBox
piiswrong closed pull request #10931: [MXNET-349] Histogram Operator
URL: https://github.com/apache/incubator-mxnet/pull/10931
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/python/mxnet/ndarray/ndarray.py b/python/mxnet/ndarray/ndarray.py
index f017d7e65e7..002ce3ebbc2 100644
--- a/python/mxnet/ndarray/ndarray.py
+++ b/python/mxnet/ndarray/ndarray.py
@@ -46,7 +46,7 @@
"ones", "add", "arange", "eye", "divide", "equal", "full", 
"greater", "greater_equal",
"imdecode", "lesser", "lesser_equal", "logical_and", "logical_or", 
"logical_xor",
"maximum", "minimum", "moveaxis", "modulo", "multiply", 
"not_equal", "onehot_encode",
-   "power", "subtract", "true_divide", "waitall", "_new_empty_handle"]
+   "power", "subtract", "true_divide", "waitall", "_new_empty_handle", 
"histogram"]
 
 _STORAGE_TYPE_UNDEFINED = -1
 _STORAGE_TYPE_DEFAULT = 0
@@ -3740,3 +3740,36 @@ def empty(shape, ctx=None, dtype=None):
 if dtype is None:
 dtype = mx_real_t
 return NDArray(handle=_new_alloc_handle(shape, ctx, False, dtype))
+
+
+# pylint: disable= redefined-builtin
+def histogram(a, bins=10, range=None):
+"""Compute the histogram of the input data.
+
+Parameters
+--
+a : NDArray
+Input data. The histogram is computed over the flattened array.
+bins : int or sequence of scalars
+If bins is an int, it defines the number of equal-width bins in the
+given range (10, by default). If bins is a sequence, it defines the 
bin edges,
+including the rightmost edge, allowing for non-uniform bin widths.
+range : (float, float), optional
+The lower and upper range of the bins. If not provided, range is 
simply (a.min(), a.max()).
+Values outside the range are ignored. The first element of the range 
must be less than or
+equal to the second. range affects the automatic bin computation as 
well, the range will
+be equally divided by the number of bins.
+"""
+
+# pylint: disable= no-member, protected-access
+if isinstance(bins, NDArray):
+return _internal._histogram(data=a, bins=bins)
+elif isinstance(bins, integer_types):
+if range is None:
+warnings.warn("range is not specified, using numpy's result "
+  "to ensure consistency with numpy")
+res, bin_bounds = np.histogram(a.asnumpy(), bins=bins)
+return array(res), array(bin_bounds)
+return _internal._histogram(data=a, bin_cnt=bins, range=range)
+raise ValueError("bins argument should be either an integer or an NDArray")
+# pylint: enable= no-member, protected-access, redefined-builtin
diff --git a/python/mxnet/symbol/symbol.py b/python/mxnet/symbol/symbol.py
index 7e5b52770fe..c5e2f5cb77d 100644
--- a/python/mxnet/symbol/symbol.py
+++ b/python/mxnet/symbol/symbol.py
@@ -34,7 +34,7 @@
 
 from ..attribute import AttrScope
 from ..base import _LIB, numeric_types, c_array, c_array_buf, c_str, 
c_str_array, c_handle_array
-from ..base import mx_uint, py_str, string_types
+from ..base import mx_uint, py_str, string_types, integer_types
 from ..base import NDArrayHandle, ExecutorHandle, SymbolHandle
 from ..base import check_call, MXNetError, NotImplementedForSymbol
 from ..context import Context, current_context
@@ -47,7 +47,8 @@
 from ._internal import SymbolBase, _set_symbol_class
 
 __all__ = ["Symbol", "var", "Variable", "Group", "load", "load_json",
-   "pow", "maximum", "minimum", "hypot", "eye", "zeros", "ones", 
"full", "arange"]
+   "pow", "maximum", "minimum", "hypot", "eye", "zeros", "ones", 
"full", "arange",
+   "histogram"]
 
 
 class Symbol(SymbolBase):
@@ -2864,4 +2865,29 @@ def arange(start, stop=None, step=1.0, repeat=1, 
name=None, dtype=None):
 return _internal._arange(start=start, stop=stop, step=step, repeat=repeat,
  name=name, dtype=dtype)
 
+def histogram(a, bins=10, range=None, **kwargs):
+"""Compute the histogram of the input data.
+
+Parameters
+--
+a : NDArray
+Input data. The histogram is computed over the flattened array.
+bins : int or sequence of scalars
+If bins is an int, it defines the number of equal-width bins in the
+given range (10, by default). If bins is a sequence, it defines the 
bin edges,
+including the rightmost edge, allowing for non-uniform bin widths.
+range : (float, float), required if bins is an integer
+The lower and upper range of the bins. If not provided, range is 
simply (a.min(), a.max()).
+Values outside the range are ignored. The first element of the range 
must be less than or
+equal to the 

[incubator-mxnet] branch master updated: [MXNET-349] Histogram Operator (#10931)

2018-06-25 Thread jxie
This is an automated email from the ASF dual-hosted git repository.

jxie pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new ed7e360  [MXNET-349] Histogram Operator (#10931)
ed7e360 is described below

commit ed7e3602a8046646582c0c681b70d9556f5fa0a4
Author: Hao Jin 
AuthorDate: Mon Jun 25 16:45:32 2018 -0700

[MXNET-349] Histogram Operator (#10931)

* implementation of histogram operator

* address code reviews and code re-design

* add exception for invalid inputs

* address code reviews

* add symbol and symbolic forward check for histogram
---
 python/mxnet/ndarray/ndarray.py  |  35 +-
 python/mxnet/symbol/symbol.py|  30 -
 src/common/cuda_utils.h  |  30 +
 src/operator/tensor/histogram-inl.h  | 172 +++
 src/operator/tensor/histogram.cc | 159 +
 src/operator/tensor/histogram.cu | 111 +
 src/operator/tensor/util/tensor_util-inl.cuh |   4 +-
 tests/python/unittest/test_operator.py   |  34 ++
 8 files changed, 571 insertions(+), 4 deletions(-)

diff --git a/python/mxnet/ndarray/ndarray.py b/python/mxnet/ndarray/ndarray.py
index f017d7e..002ce3e 100644
--- a/python/mxnet/ndarray/ndarray.py
+++ b/python/mxnet/ndarray/ndarray.py
@@ -46,7 +46,7 @@ __all__ = ["NDArray", "concatenate", "_DTYPE_NP_TO_MX", 
"_DTYPE_MX_TO_NP", "_GRA
"ones", "add", "arange", "eye", "divide", "equal", "full", 
"greater", "greater_equal",
"imdecode", "lesser", "lesser_equal", "logical_and", "logical_or", 
"logical_xor",
"maximum", "minimum", "moveaxis", "modulo", "multiply", 
"not_equal", "onehot_encode",
-   "power", "subtract", "true_divide", "waitall", "_new_empty_handle"]
+   "power", "subtract", "true_divide", "waitall", "_new_empty_handle", 
"histogram"]
 
 _STORAGE_TYPE_UNDEFINED = -1
 _STORAGE_TYPE_DEFAULT = 0
@@ -3740,3 +3740,36 @@ def empty(shape, ctx=None, dtype=None):
 if dtype is None:
 dtype = mx_real_t
 return NDArray(handle=_new_alloc_handle(shape, ctx, False, dtype))
+
+
+# pylint: disable= redefined-builtin
+def histogram(a, bins=10, range=None):
+"""Compute the histogram of the input data.
+
+Parameters
+--
+a : NDArray
+Input data. The histogram is computed over the flattened array.
+bins : int or sequence of scalars
+If bins is an int, it defines the number of equal-width bins in the
+given range (10, by default). If bins is a sequence, it defines the 
bin edges,
+including the rightmost edge, allowing for non-uniform bin widths.
+range : (float, float), optional
+The lower and upper range of the bins. If not provided, range is 
simply (a.min(), a.max()).
+Values outside the range are ignored. The first element of the range 
must be less than or
+equal to the second. range affects the automatic bin computation as 
well, the range will
+be equally divided by the number of bins.
+"""
+
+# pylint: disable= no-member, protected-access
+if isinstance(bins, NDArray):
+return _internal._histogram(data=a, bins=bins)
+elif isinstance(bins, integer_types):
+if range is None:
+warnings.warn("range is not specified, using numpy's result "
+  "to ensure consistency with numpy")
+res, bin_bounds = np.histogram(a.asnumpy(), bins=bins)
+return array(res), array(bin_bounds)
+return _internal._histogram(data=a, bin_cnt=bins, range=range)
+raise ValueError("bins argument should be either an integer or an NDArray")
+# pylint: enable= no-member, protected-access, redefined-builtin
diff --git a/python/mxnet/symbol/symbol.py b/python/mxnet/symbol/symbol.py
index 7e5b527..c5e2f5c 100644
--- a/python/mxnet/symbol/symbol.py
+++ b/python/mxnet/symbol/symbol.py
@@ -34,7 +34,7 @@ import numpy as _numpy
 
 from ..attribute import AttrScope
 from ..base import _LIB, numeric_types, c_array, c_array_buf, c_str, c_str_array, c_handle_array
-from ..base import mx_uint, py_str, string_types
+from ..base import mx_uint, py_str, string_types, integer_types
 from ..base import NDArrayHandle, ExecutorHandle, SymbolHandle
 from ..base import check_call, MXNetError, NotImplementedForSymbol
 from ..context import Context, current_context
@@ -47,7 +47,8 @@ from . import op
 from ._internal import SymbolBase, _set_symbol_class
 
 __all__ = ["Symbol", "var", "Variable", "Group", "load", "load_json",
-           "pow", "maximum", "minimum", "hypot", "eye", "zeros", "ones", "full", "arange"]
+           "pow", "maximum", "minimum", "hypot", "eye", "zeros", "ones", "full", "arange",
+           "histogram"]
 
 
 class Symbol(SymbolBase):
@@ -2864,4 +2865,29 @@
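Since the new `histogram` falls back to `np.histogram` whenever `bins` is an int and no `range` is given, plain numpy is enough to preview what callers will get back. A minimal sketch (the sample data below is made up for illustration):

```python
import numpy as np

# Mirrors the fallback path in the diff above: with an integer `bins` and no
# `range`, numpy bins over (a.min(), a.max()) using `bins` equal-width buckets.
data = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
counts, edges = np.histogram(data, bins=4)
# `counts` has `bins` entries summing to data.size; `edges` has bins + 1
# entries, and the last bucket includes the rightmost edge.
```

The two returned arrays correspond to the `array(res), array(bin_bounds)` pair that the MXNet wrapper converts back to NDArrays.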

[GitHub] azai91 edited a comment on issue #11371: [MXNET-486] Create CPP test for concat MKLDNN operator

2018-06-25 Thread GitBox
azai91 edited a comment on issue #11371: [MXNET-486] Create CPP test for concat 
MKLDNN operator
URL: https://github.com/apache/incubator-mxnet/pull/11371#issuecomment-400086783
 
 
   @zheng-da @pengzhao-intel please review when you have time.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[incubator-mxnet] branch master updated: Fix flaky test test_operator.test_binary_op due to numerical errors (#11259)

2018-06-25 Thread zhasheng
This is an automated email from the ASF dual-hosted git repository.

zhasheng pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new 21ff36b  Fix flaky test test_operator.test_binary_op due to numerical errors (#11259)
21ff36b is described below

commit 21ff36b06bf47ff2ac4145ce60ec1fe5dd14ce1d
Author: Pedro Larroy <928489+lar...@users.noreply.github.com>
AuthorDate: Mon Jun 25 16:35:23 2018 -0700

Fix flaky test test_operator.test_binary_op due to numerical errors (#11259)

Use float64 computations as the reference numpy implementation operates in double and not float.
f64(f32(f64(.))) % f64(f32(f64(.))) is not the same as f64(.) % f64(.) due to limited precision.

fixes #9853
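The `f64(f32(f64(.)))` chain described above can be reproduced in a few lines of plain numpy; the operands below are hand-picked to make the effect obvious and are not taken from the failing seeds:

```python
import numpy as np

# 1 + 1e-8 rounds to exactly 1.0 in float32, so the float32-rounded operands
# land in a different "lap" of the modulo than the float64 reference does:
# the two remainders differ by roughly 0.1, far beyond any test tolerance.
a64, b64 = 1.0 + 1e-8, 0.1
r_ref = a64 % b64                                                   # ~1e-8
r_f32 = np.float64(np.float32(a64)) % np.float64(np.float32(b64))   # ~0.0999999
```

This is why the fixed test casts both operands to float64 before applying `%`.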
---
 tests/python/unittest/test_operator.py | 50 --
 1 file changed, 36 insertions(+), 14 deletions(-)

diff --git a/tests/python/unittest/test_operator.py b/tests/python/unittest/test_operator.py
index fbd3886..287d830 100644
--- a/tests/python/unittest/test_operator.py
+++ b/tests/python/unittest/test_operator.py
@@ -1550,6 +1550,7 @@ def gen_broadcast_data_int(idx):
 def gen_binary_data(dummy):
     ndim = np.random.randint(1, 6)
     shape = np.random.randint(1, 6, size=(ndim,))
+    #print("gen shape {}".format(shape))
     return [np.random.random(shape), np.random.random(shape)]
 
 
@@ -1562,27 +1563,46 @@ def check_binary_op_forward(symbol, baseline, gen_data, rtol=1e-3, atol=1e-5, mx
     sample_num = 200
     for i in range(sample_num):
         d = gen_data(i)
-        x = baseline(d[0], d[1])
         y = symbol.bind(default_context(), args={'a': mx.nd.array(d[0]), 'b': mx.nd.array(d[1])})
         y.forward(is_train=True)
         y = y.outputs[0].asnumpy()
+        x = baseline(d[0], d[1]).astype(y.dtype)
+
+        #np.set_printoptions(precision=20)
+
+        a = d[0]
+        b = d[1]
+        #print("a: {} {}".format(a.dtype, a))
+        #print("a: {} {}".format(b.dtype, b))
+
+        #print("x: {} {}".format(x.dtype, x))
+        #print("y: {} {}".format(y.dtype, y))
         if mx_nd_func is not None:
             d0 = mx.nd.array(d[0], dtype=d[0].dtype)
             d1 = mx.nd.array(d[1], dtype=d[1].dtype)
             assert_almost_equal(y, mx_nd_func(d0, d1).asnumpy(), rtol=rtol, atol=atol)
         idx = np.abs(x-y) > atol+rtol*np.abs(x)
         if idx.any():
-            print('found precision problem')
+            import binascii
+            np.set_printoptions(precision=20)
+            logging.error('found precision problem:')
             d[0] = np.broadcast_to(d[0], x.shape)
             d[1] = np.broadcast_to(d[1], x.shape)
-            print('a: {}'.format(d[0][idx]))
-            print('b: {}'.format(d[1][idx]))
-            import struct
-            print('a hex: {}'.format(struct.pack('d', d[0][idx]).encode('hex')))
-            print('b hex: {}'.format(struct.pack('d', np.broadcast_to(d[1], x.shape)[idx]).encode('hex')))
-            print('in baseline(a, b): {}'.format(x[idx]))
-            print('in symbol(a, b): {}'.format(y[idx]))
-            print('diff: {}'.format(np.abs(x-y)[idx] - atol-rtol*np.abs(x)[idx]))
+            logging.error('input a: {}'.format(d[0][idx]))
+            logging.error('input b: {}'.format(d[1][idx]))
+            logging.error("output x: {} {}".format(x.dtype, x))
+            logging.error("output y: {} {}".format(y.dtype, y))
+            def ftohex(xs):
+                import struct
+                return list(map(lambda x: binascii.hexlify(struct.pack('d', x)), xs.flatten()))
+            logging.error('output x in baseline(a, b): {}'.format(x[idx]))
+            logging.error('output y in symbol(a, b): {}'.format(y[idx]))
+            logging.error('output x in baseline(a,b) hex: {}'.format(ftohex(x[idx])))
+            logging.error('output y in symbol(a,b) hex: {}'.format(ftohex(y[idx])))
+            logging.error('input a hex: {}'.format(ftohex(d[0][idx])))
+            logging.error('input a hex: {}'.format(ftohex(d[1][idx])))
+
+            logging.error('diff: {}'.format(np.abs(x-y)[idx] - atol-rtol*np.abs(x)[idx]))
         assert_allclose(y, x, rtol=rtol, atol=atol)
 
 
@@ -1641,10 +1661,13 @@ def test_binary_op():
     check_binary_op_backward(c, lambda g_out, a, b: (g_out / b, - g_out * a / (b * b)), gen_binary_data)
 
     def test_bmod(a, b):
-        c = a % b
+        # Python and numpy operate only in double so to avoid numerical errors we have to use
+        # doubles as well. This was a flaky test before when using float32. seed 1688524483, 1768433044
+        #c = a % b
+        c = mx.sym.cast(a, dtype='float64') % mx.sym.cast(b, dtype='float64')
         # '%' is sensitive to the precision of the calculation.  Force numpy to match mxnet's float32.
-        # Issue exposed with seed 1768433044
-

[GitHub] szha closed issue #9853: Flaky test_operator.test_binary_op

2018-06-25 Thread GitBox
szha closed issue #9853: Flaky test_operator.test_binary_op
URL: https://github.com/apache/incubator-mxnet/issues/9853
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] szha closed pull request #11259: [MXNET-184] Fix flaky test test_operator.test_binary_op due to numerical errors

2018-06-25 Thread GitBox
szha closed pull request #11259: [MXNET-184] Fix flaky test 
test_operator.test_binary_op due to numerical errors
URL: https://github.com/apache/incubator-mxnet/pull/11259
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/tests/python/unittest/test_operator.py b/tests/python/unittest/test_operator.py
index 67426693436..c1f6ba0dcf9 100644
--- a/tests/python/unittest/test_operator.py
+++ b/tests/python/unittest/test_operator.py
@@ -60,14 +60,14 @@ def check_rnn_consistency(cell1, cell2, T, N, I, H, grad_req):
 
     dy = mx.random.uniform(shape=mod1.get_outputs()[0].shape)
     mod1.backward(out_grads=[dy])
-    mod2.backward(out_grads=[dy]) 
+    mod2.backward(out_grads=[dy])
     if grad_req != 'null':
         assert_allclose(mod1.get_input_grads()[0].asnumpy(), mod2.get_input_grads()[0].asnumpy(), rtol=1e-2, atol=1e-4)
     else:
         assert(mod1.get_input_grads()[0] == None)
         assert(mod2.get_input_grads()[0] == None)
-    
-    
+
+
 
 @with_seed()
 def test_lstm_sym():
@@ -77,7 +77,7 @@ def test_lstm_sym():
     stack.add(mx.rnn.LSTMCell(H, prefix='l0_'))
     stack.add(mx.rnn.LSTMCell(H, prefix='l1_'))
     stack.add(mx.rnn.LSTMCell(H, prefix='l2_'))
-    
+
     check_rnn_consistency(fused, stack, T, N, I, H, 'write')
     check_rnn_consistency(fused, stack, T, N, I, H, 'add')
     check_rnn_consistency(fused, stack, T, N, I, H, 'null')
@@ -120,21 +120,21 @@ def test_gru_sym():
 @with_seed()
 def test_gru_bidirectional():
     T, N, I, H = 5, 20, 800, 800
-    
+
     fused = mx.rnn.FusedRNNCell(H, num_layers=2, mode='gru',
                                 bidirectional=True, get_next_state=True, prefix='')
-    
+
     stack = mx.rnn.SequentialRNNCell()
     stack.add(mx.rnn.BidirectionalCell(
         mx.rnn.GRUCell(H, prefix='l0_'),
         mx.rnn.GRUCell(H, prefix='r0_'),
-        output_prefix='bi_gru_0_'))
-    
+        output_prefix='bi_gru_0_'))
+
     stack.add(mx.rnn.BidirectionalCell(
         mx.rnn.GRUCell(H, prefix='l1_'),
         mx.rnn.GRUCell(H, prefix='r1_'),
         output_prefix='bi_gru_1_'))
-    
+
     check_rnn_consistency(fused, stack, T, N, I, H, 'write')
     check_rnn_consistency(fused, stack, T, N, I, H, 'add')
     check_rnn_consistency(fused, stack, T, N, I, H, 'null')
@@ -1553,6 +1553,7 @@ def gen_broadcast_data_int(idx):
 def gen_binary_data(dummy):
     ndim = np.random.randint(1, 6)
     shape = np.random.randint(1, 6, size=(ndim,))
+    #print("gen shape {}".format(shape))
     return [np.random.random(shape), np.random.random(shape)]
 
 
@@ -1565,27 +1566,46 @@ def check_binary_op_forward(symbol, baseline, gen_data, rtol=1e-3, atol=1e-5, mx
     sample_num = 200
     for i in range(sample_num):
         d = gen_data(i)
-        x = baseline(d[0], d[1])
         y = symbol.bind(default_context(), args={'a': mx.nd.array(d[0]), 'b': mx.nd.array(d[1])})
         y.forward(is_train=True)
         y = y.outputs[0].asnumpy()
+        x = baseline(d[0], d[1]).astype(y.dtype)
+
+        #np.set_printoptions(precision=20)
+
+        a = d[0]
+        b = d[1]
+        #print("a: {} {}".format(a.dtype, a))
+        #print("a: {} {}".format(b.dtype, b))
+
+        #print("x: {} {}".format(x.dtype, x))
+        #print("y: {} {}".format(y.dtype, y))
         if mx_nd_func is not None:
             d0 = mx.nd.array(d[0], dtype=d[0].dtype)
             d1 = mx.nd.array(d[1], dtype=d[1].dtype)
             assert_almost_equal(y, mx_nd_func(d0, d1).asnumpy(), rtol=rtol, atol=atol)
         idx = np.abs(x-y) > atol+rtol*np.abs(x)
         if idx.any():
-            print('found precision problem')
+            import binascii
+            np.set_printoptions(precision=20)
+            logging.error('found precision problem:')
             d[0] = np.broadcast_to(d[0], x.shape)
             d[1] = np.broadcast_to(d[1], x.shape)
-            print('a: {}'.format(d[0][idx]))
-            print('b: {}'.format(d[1][idx]))
-            import struct
-            print('a hex: {}'.format(struct.pack('d', d[0][idx]).encode('hex')))
-            print('b hex: {}'.format(struct.pack('d', np.broadcast_to(d[1], x.shape)[idx]).encode('hex')))
-            print('in baseline(a, b): {}'.format(x[idx]))
-            print('in symbol(a, b): {}'.format(y[idx]))
-            print('diff: {}'.format(np.abs(x-y)[idx] - atol-rtol*np.abs(x)[idx]))
+            logging.error('input a: {}'.format(d[0][idx]))
+            logging.error('input b: {}'.format(d[1][idx]))
+            logging.error("output x: {} {}".format(x.dtype, x))
+            logging.error("output y: {} {}".format(y.dtype, y))
+            def ftohex(xs):
+                import struct
+                return 

[GitHub] gigasquid commented on a change in pull request #11205: Clojure Contrib Package

2018-06-25 Thread GitBox
gigasquid commented on a change in pull request #11205: Clojure Contrib Package
URL: https://github.com/apache/incubator-mxnet/pull/11205#discussion_r197973457
 
 

 ##
 File path: contrib/clojure-package/README.md
 ##
 @@ -0,0 +1,203 @@
+# Clojure MXNet
+
+A Clojure package for the MXNet Deep Learning library
+
+## Introduction
+
+MXNet is a first class, modern deep learning library that AWS has officially picked as its chosen library. It supports multiple languages on a first class basis and is incubating as an Apache project.
+
+The motivation for creating a Clojure package is to be able to open the deep learning library to the Clojure ecosystem and build bridges for future development and innovation for the community. It provides all the needed tools including low level and high level apis, dynamic graphs, and things like GAN and natural language support.
+
+For high leverage, the Clojure package has been built on the existing Scala package using interop. This has allowed rapid development and close parity with the Scala functionality. This also leaves the door open to directly developing code against the jni-bindings with Clojure in the future in an incremental fashion, using the test suites as a refactoring guide.
+
+## Current State and Plans
+
+The Clojure package is nearing the end of its first development milestone, which is to achieve close parity with the Scala package and to potentially be included into the main project for official Clojure language support.
+
+What is needed now is alpha testing on both OSX and Linux to discover any bugs, rough edges, and generally harden it before an official PR is opened on the main project.
+
+Help with this effort is greatly appreciated and contributors will be recognized in the project README.
+
+Testing instructions can be found in the Testing.md
+
+## Getting Started
+
+The following systems are supported:
+
+- OSX cpu
+- Linux cpu
+- Linux gpu
+
+There are two ways of getting going. The first way is the easiest and that is to use the pre-built jars from Maven. The second way is to build from source. In both cases, you will need to load the prereqs and dependencies (like opencv).
+
+It's been tested on the AWS Deep Learning AMI and OSX High Sierra 10.13.4
+
+
+### Prerequisites
+
+**If you are using the AWS Deep Learning Ubuntu or Linux AMI you should be good to go without doing anything on this step.**
+
+
+Follow the instructions from https://mxnet.incubator.apache.org/install/osx_setup.html or https://mxnet.incubator.apache.org/install/ubuntu_setup.html
+about _Prepare Environment for GPU Installation_
+and _Install MXNet dependencies_
+
+
+#### Cloning the repo and running from source
+
+To use the prebuilt jars, you will need to replace the native version of the line in the project dependencies with your configuration.
+
+`[org.apache.mxnet/mxnet-full_2.11-linux-x86_64-gpu "1.2.0"]`
+or
+`[org.apache.mxnet/mxnet-full_2.11-linux-x86_64-cpu "1.2.0"]`
+or
+`[org.apache.mxnet/mxnet-full_2.11-osx-x86_64-cpu "1.2.0"]`
+
+
+```clojure
+
+(ns tutorial.ndarray
+  (:require [org.apache.clojure-mxnet.ndarray :as ndarray]
+            [org.apache.clojure-mxnet.context :as context]))
+
+;;Create NDArray
+(def a (ndarray/zeros [100 50])) ;;all zero array of dimension 100 x 50
+(def b (ndarray/ones [256 32 128 1])) ;; all one array of dimension
+(def c (ndarray/array [1 2 3 4 5 6] [2 3])) ;; array with contents of a shape 2 x 3
+
+;;; There are also ways to convert to a vec or get the shape as an object or vec
+(ndarray/->vec c) ;=> [1.0 2.0 3.0 4.0 5.0 6.0]
+```
+
+See the examples/tutorial section for more.
+
+
+The jars from Maven ship with the needed MXNet native binaries in them. On startup, the native libraries are extracted from the jar and copied into a temporary location on your path. On termination, they are deleted.
+
+If you want details on the flags (opencv version and cuda version of the jars), they are documented here https://cwiki.apache.org/confluence/display/MXNET/MXNet-Scala+Release+Process
+
+
+### Build from MXNET Source
+
+Checkout the latest sha from the main package
+
+`git clone --recursive https://github.com/dmlc/mxnet ~/mxnet`
+`cd ~/mxnet`
+
+
+`git checkout tags/1.2.0 -b release-1.2.0`
+
+`git submodule update --init --recursive`
+
+Sometimes it is useful to use this script to clean hard
+https://gist.github.com/nicktoumpelis/11214362
+
+
+Go here to do the base package installation https://mxnet.incubator.apache.org/install/index.html
+
+#### Run `make scalapkg` then `make scalainstall`
+
+then replace the correct jar for your architecture in the project.clj, for example `[ml.dmlc.mxnet/mxnet-full_2.11-osx-x86_64-cpu "1.0.1-SNAPSHOT"]`
+
+#### Test your installation
+
+To test your installation, you should run `lein test`. This will run the test suite (CPU) for the clojure package.
+
+
+#### Generation of NDArray and Symbol apis
+
+The bulk of the ndarray and symbol apis are generated via java reflection 

[GitHub] mkolod commented on a change in pull request #11325: Added TensorRT runtime integration

2018-06-25 Thread GitBox
mkolod commented on a change in pull request #11325: Added TensorRT runtime 
integration
URL: https://github.com/apache/incubator-mxnet/pull/11325#discussion_r197972947
 
 

 ##
 File path: Makefile
 ##
 @@ -94,6 +94,14 @@ else
 endif
 CFLAGS += -I$(TPARTYDIR)/mshadow/ -I$(TPARTYDIR)/dmlc-core/include -fPIC 
-I$(NNVM_PATH)/include -I$(DLPACK_PATH)/include -I$(TPARTYDIR)/tvm/include 
-Iinclude $(MSHADOW_CFLAGS)
 LDFLAGS = -pthread $(MSHADOW_LDFLAGS) $(DMLC_LDFLAGS)
+
+
+ifeq ($(USE_TENSORRT), 1)
 
 Review comment:
  @KellenSunderland I agree. Should the CMake build be part of the initial PR or a subsequent one?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] mkolod commented on a change in pull request #11325: Added TensorRT runtime integration

2018-06-25 Thread GitBox
mkolod commented on a change in pull request #11325: Added TensorRT runtime 
integration
URL: https://github.com/apache/incubator-mxnet/pull/11325#discussion_r197972245
 
 

 ##
 File path: src/executor/tensorrt_pass.cc
 ##
 @@ -0,0 +1,583 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * Copyright (c) 2018 by Contributors
+ * \file tensorrt_pass.cc
+ * \brief Replace TRT compatible subgraphs by TRT engines
+ * \author Clement Fuji Tsang
+ */
+
+#if MXNET_USE_TENSORRT
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include "./onnx_to_tensorrt.h"
+#include "./exec_pass.h"
+#include "../operator/contrib/nnvm_to_onnx-inl.h"
+
+namespace mxnet {
+namespace exec {
+
+using NodePtr = nnvm::NodePtr;
+
+/*!
+ * \brief Custom graph class, which will contain bi-directional nodes
+ * we need to compute DFS and reverse DFS for graph partitioning
+ */
+class BidirectionalGraph {
+ public:
+  struct Node {
+nnvm::Node* nnvmptr;
+std::vector inputs;
+std::vector outputs;
+  };
+  std::vector nodes;
+  std::unordered_map nnvm2nid;
+  std::vector outputs;
+  static const std::unordered_set unconditionalTRTop;
+
+  explicit BidirectionalGraph(const Graph ) {
+auto& idx = g.indexed_graph();
+auto num_nodes = idx.num_nodes();
+nodes.reserve(num_nodes);
+nnvm2nid.reserve(num_nodes);
+outputs.reserve(idx.outputs().size());
+DFSVisit(g.outputs, [this](const nnvm::NodePtr& n) {
+  BidirectionalGraph::Node new_node;
+  new_node.nnvmptr = n.get();
+  nnvm2nid[n.get()] = static_cast(nodes.size());
+  nodes.emplace_back(std::move(new_node));
+});
+for (const auto& it : nnvm2nid) {
+  nnvm::Node* nnvmnode = it.first;
+  uint32_t nid = it.second;
+  for (auto& n : nnvmnode->inputs) {
+uint32_t input_nid = nnvm2nid[n.node.get()];
+nodes[input_nid].outputs.emplace_back([nid]);
+nodes[nid].inputs.emplace_back([input_nid]);
+  }
+}
+for (auto& e : g.outputs) {
+  uint32_t nid = nnvm2nid[e.node.get()];
+  outputs.emplace_back([nid]);
+}
+  }
+
+  template 
+  void DFS(const std::vector& heads, bool reverse, FVisit fvisit) {
+std::unordered_set visited;
+std::deque stack(heads.begin(), heads.end());
+visited.reserve(heads.size());
+while (!stack.empty()) {
+  Node* vertex = stack.back();
+  stack.pop_back();
+  if (visited.count(vertex) == 0) {
+visited.insert(vertex);
+fvisit(vertex);
+std::vector nexts = reverse ? vertex->inputs : vertex->outputs;
+for (Node* node : nexts) {
+  if (visited.count(node) == 0) {
+stack.emplace_back(node);
+  }
+}
+  }
+}
+  }
+
+  using t_pairset = std::pair, 
std::unordered_set>;
+  using t_pairvec = std::pair, std::vector>;
+  using t_uncomp_map = std::unordered_map>;
+
+  std::unordered_set naive_grow_subgraph(Node* head,
+std::unordered_set* 
set_unused,
+t_uncomp_map* uncomp_map) {
+std::unordered_set subgraph;
+std::unordered_set uncomp_set;
+std::deque stack;
+stack.emplace_back(head);
+while (!stack.empty()) {
+  Node* vertex = stack.back();
+  stack.pop_back();
+  if (set_unused->count(vertex) && !uncomp_set.count(vertex)) {
+set_unused->erase(vertex);
+subgraph.insert(vertex);
+uncomp_set.insert((*uncomp_map)[vertex].begin(), 
(*uncomp_map)[vertex].end());
+for (Node* input : vertex->inputs) {
+  if (set_unused->count(input) && !uncomp_set.count(input)) {
+stack.emplace_back(input);
+  }
+}
+for (Node* output : vertex->outputs) {
+  if (set_unused->count(output) && !uncomp_set.count(output)) {
+stack.emplace_back(output);
+  }
+}
+  }
+}
+return subgraph;
+  }
+
+  std::vector> get_subsets(
+std::unordered_map* const params_map) {
+std::vector> subgraphs;
+std::unordered_set set_nonTRTnodes;
+std::unordered_set set_allnodes(nodes.size());
+std::vector separation_sets;
+for (Node& node : nodes) 
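For readers skimming the quoted `BidirectionalGraph::DFS` above: it is a plain iterative, stack-based depth-first traversal that walks `outputs` edges normally and `inputs` edges when `reverse` is set. A hedged Python sketch of the same idea (node names and adjacency dicts are invented for illustration):

```python
def dfs(heads, inputs, outputs, reverse=False):
    """Iterative DFS; follows input edges when reverse=True, else output edges."""
    visited, order = set(), []
    stack = list(heads)
    while stack:
        v = stack.pop()
        if v in visited:
            continue
        visited.add(v)
        order.append(v)
        # Mirror of `reverse ? vertex->inputs : vertex->outputs` in the C++.
        nexts = inputs[v] if reverse else outputs[v]
        stack.extend(n for n in nexts if n not in visited)
    return order

# Tiny chain graph a -> b -> c
outs = {"a": ["b"], "b": ["c"], "c": []}
ins = {"a": [], "b": ["a"], "c": ["b"]}
# dfs(["a"], ins, outs) walks forward; dfs(["c"], ins, outs, reverse=True)
# walks backward from the output node, as the TRT partitioning pass needs.
```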

[GitHub] mkolod commented on a change in pull request #11325: Added TensorRT runtime integration

2018-06-25 Thread GitBox
mkolod commented on a change in pull request #11325: Added TensorRT runtime 
integration
URL: https://github.com/apache/incubator-mxnet/pull/11325#discussion_r197972135
 
 

 ##
 File path: src/executor/tensorrt_pass.cc
 ##
 @@ -0,0 +1,583 @@

[GitHub] mkolod commented on a change in pull request #11325: Added TensorRT runtime integration

2018-06-25 Thread GitBox
mkolod commented on a change in pull request #11325: Added TensorRT runtime 
integration
URL: https://github.com/apache/incubator-mxnet/pull/11325#discussion_r197971193
 
 

 ##
 File path: src/executor/tensorrt_pass.cc
 ##
 @@ -0,0 +1,583 @@

[GitHub] mkolod commented on a change in pull request #11325: Added TensorRT runtime integration

2018-06-25 Thread GitBox
mkolod commented on a change in pull request #11325: Added TensorRT runtime 
integration
URL: https://github.com/apache/incubator-mxnet/pull/11325#discussion_r197969467
 
 

 ##
 File path: include/mxnet/executor.h
 ##
 @@ -152,19 +152,19 @@ class Executor {
   static Executor* SimpleBind(nnvm::Symbol symbol,
   const Context& default_ctx,
   const std::map& group2ctx,
-  const std::vector& in_arg_ctxes,
-  const std::vector& arg_grad_ctxes,
-  const std::vector& aux_state_ctxes,
-  const std::unordered_map& 
arg_shape_map,
-  const std::unordered_map& 
arg_dtype_map,
-  const std::unordered_map& 
arg_stype_map,
-  const std::vector& grad_req_types,
-  const std::unordered_set& 
param_names,
+  std::vector* in_arg_ctxes,
 
 Review comment:
  @reminisce  Because if things are to be mutated, they need to be pointers, not non-const references (per the linter rules). Given your earlier comments about SimpleBindEx rather than modifying SimpleBind, this will be addressed there rather than modifying it here.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] mkolod commented on a change in pull request #11325: Added TensorRT runtime integration

2018-06-25 Thread GitBox
mkolod commented on a change in pull request #11325: Added TensorRT runtime 
integration
URL: https://github.com/apache/incubator-mxnet/pull/11325#discussion_r197969124
 
 

 ##
 File path: include/mxnet/executor.h
 ##
 @@ -152,19 +152,19 @@ class Executor {
   static Executor* SimpleBind(nnvm::Symbol symbol,
   const Context& default_ctx,
   const std::map& group2ctx,
-  const std::vector& in_arg_ctxes,
-  const std::vector& arg_grad_ctxes,
-  const std::vector& aux_state_ctxes,
-  const std::unordered_map& 
arg_shape_map,
-  const std::unordered_map& 
arg_dtype_map,
-  const std::unordered_map& 
arg_stype_map,
-  const std::vector& grad_req_types,
-  const std::unordered_set& 
param_names,
+  std::vector* in_arg_ctxes,
+  std::vector* arg_grad_ctxes,
+  std::vector* aux_state_ctxes,
+  std::unordered_map* 
arg_shape_map,
+  std::unordered_map* 
arg_dtype_map,
+  std::unordered_map* 
arg_stype_map,
+  std::vector* grad_req_types,
+  std::unordered_set* param_names,
   std::vector* in_args,
   std::vector* arg_grads,
   std::vector* aux_states,
   std::unordered_map*
-shared_data_arrays = nullptr,
+  shared_data_arrays = nullptr,
 
 Review comment:
   OK




[GitHub] mkolod commented on a change in pull request #11325: Added TensorRT runtime integration

2018-06-25 Thread GitBox
mkolod commented on a change in pull request #11325: Added TensorRT runtime 
integration
URL: https://github.com/apache/incubator-mxnet/pull/11325#discussion_r197967101
 
 

 ##
 File path: src/common/serialization.h
 ##
 @@ -0,0 +1,526 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * Copyright (c) 2015 by Contributors
+ * \file serialization.h
+ * \brief Serialization of some STL and nnvm data-structures
+ * \author Clement Fuji Tsang
+ */
+
+#ifndef MXNET_COMMON_SERIALIZATION_H_
+#define MXNET_COMMON_SERIALIZATION_H_
+
+#include 
+#include 
+#include 
+#include 
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+
+namespace mxnet {
+namespace common {
+
+template
+inline size_t serialized_size(const T& obj);
+
+template
+inline size_t serialized_size(const nnvm::Tuple& obj);
+
+template
+inline size_t serialized_size(const std::vector& obj);
+
+template
+inline size_t serialized_size(const std::pair& obj);
+
+template
+inline size_t serialized_size(const std::map& obj);
+
+template
+inline size_t serialized_size(const std::unordered_map& obj);
+
+template
+inline size_t serialized_size(const std::set& obj);
+
+template
+inline size_t serialized_size(const std::unordered_set& obj);
+
+template<>
+inline size_t serialized_size(const std::string& obj);
+
+template
+inline size_t serialized_size(const std::tuple& obj);
+
+template
+inline void serialize(const T& obj, char** buffer);
+
+template
+inline void serialize(const nnvm::Tuple& obj, char** buffer);
+
+template
+inline void serialize(const std::vector& obj, char** buffer);
+
+template
+inline void serialize(const std::pair& obj, char** buffer);
+
+template
+inline void serialize(const std::map& obj, char** buffer);
+
+template
+inline void serialize(const std::unordered_map& obj, char** buffer);
+
+template
+inline void serialize(const std::set& obj, char** buffer);
+
+template
+inline void serialize(const std::unordered_set& obj, char** buffer);
+
+template<>
+inline void serialize(const std::string& obj, char** buffer);
+
+template
+inline void serialize(const std::tuple& obj, char** buffer);
+
+template
+inline void deserialize(T* obj, const std::string& buffer, size_t* curr_pos);
+
+template
+inline void deserialize(nnvm::Tuple* obj, const std::string& buffer, 
size_t* curr_pos);
+
+template
+inline void deserialize(std::vector* obj, const std::string& buffer, 
size_t* curr_pos);
+
+template
+inline void deserialize(std::pair* obj, const std::string& buffer, 
size_t* curr_pos);
+
+template
+inline void deserialize(std::map* obj, const std::string& buffer, 
size_t* curr_pos);
+
+template
+inline void deserialize(std::unordered_map* obj, const std::string& 
buffer, size_t* curr_pos);
+
+template
+inline void deserialize(std::set* obj, const std::string& buffer, size_t* 
curr_pos);
+
+template
+inline void deserialize(std::unordered_set* obj, const std::string& buffer, 
size_t* curr_pos);
+
+template<>
+inline void deserialize(std::string* obj, const std::string& buffer, size_t* 
curr_pos);
+
+template
+inline void deserialize(std::tuple* obj, const std::string& buffer, 
size_t* curr_pos);
+
+
+template
+struct is_cont {
+  static const bool value = !std::is_pod::value;
+};
+
+template
+inline size_t serialized_size(const T& obj) {
+  return sizeof(T);
+}
+
+template
+inline size_t serialized_size(const nnvm::Tuple& obj) {
+  if (is_cont::value) {
+size_t sum_val = 4;
+for (auto& el : obj) {
+  sum_val += serialized_size(el);
+}
+return sum_val;
+  } else {
+return 4 + (obj.ndim() * sizeof(T));
+  }
+}
+
+template
+inline size_t serialized_size(const std::vector& obj) {
+  if (is_cont::value) {
+size_t sum_val = 4;
+for (T i : obj) {
+  sum_val += serialized_size(i);
+}
+return sum_val;
+  } else {
+return sizeof(T) * obj.size() + 4;
+  }
+}
+
+template
+inline size_t serialized_size(const std::pair& obj) {
+  return serialized_size(obj.first) + serialized_size(obj.second);
+}
+
+template
+inline size_t serialized_size(const std::map& obj) {
+  size_t sum_val = 4;
+  if (is_cont::value && is_cont::value) {
+for (auto p : obj) {
+  sum_val += 
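The overload family in the diff above sizes nested containers recursively: POD element types cost a flat `sizeof(T)` each, while containers cost a 4-byte length prefix plus the recursively computed size of their elements. A rough Python analogue of that scheme (the fixed 4-byte widths follow the constants visible in the diff; the function name and type dispatch are only illustrative):

```python
def serialized_size(obj):
    # POD-like scalars: fixed 4 bytes (stand-in for sizeof(T)).
    if isinstance(obj, int):
        return 4
    # Strings: 4-byte length prefix plus the raw bytes.
    if isinstance(obj, str):
        return 4 + len(obj)
    # Sequence/set containers: length prefix plus each element.
    if isinstance(obj, (list, tuple, set)):
        return 4 + sum(serialized_size(el) for el in obj)
    # Maps: length prefix plus every key/value pair.
    if isinstance(obj, dict):
        return 4 + sum(serialized_size(k) + serialized_size(v)
                       for k, v in obj.items())
    raise TypeError("unsupported type: %r" % type(obj))

print(serialized_size([1, 2, 3]))    # -> 16  (4 + 3 * 4)
print(serialized_size({"ab": [1]}))  # -> 18  (4 + (4 + 2) + (4 + 4))
```

The C++ version gets the same effect at compile time: the `is_cont` trait picks the per-element recursion only for non-POD element types, and falls back to `size * sizeof(T)` otherwise.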

[GitHub] mkolod commented on a change in pull request #11325: Added TensorRT runtime integration

2018-06-25 Thread GitBox
mkolod commented on a change in pull request #11325: Added TensorRT runtime 
integration
URL: https://github.com/apache/incubator-mxnet/pull/11325#discussion_r197966899
 
 

 ##
 File path: docs/api/python/contrib/tensorrt.md
 ##
 @@ -0,0 +1,117 @@
+# MxNet-TensorRT Runtime Integration
+## What is this?
+
+This document describes how to use the 
[MxNet](http://mxnet.incubator.apache.org/)-[TensorRT](https://developer.nvidia.com/tensorrt)
 runtime integration to accelerate model inference.
+
+## Why is TensorRT integration useful? 
+
+TensorRT can greatly speed up inference of deep learning models. One 
experiment on a Titan V (V100) GPU shows that with MxNet 1.2, we can get an 
approximately 3x speed-up when running inference of the ResNet-50 model on the 
CIFAR-10 dataset in single precision (fp32). As batch sizes and image sizes go 
up (for CNN inference), the benefit may be less, but in general, TensorRT helps 
especially in cases which have:
+- many bandwidth-bound layers (e.g. pointwise operations) that benefit from 
GPU kernel fusion
+- inference use cases which have tight latency requirements and where the 
client application can't wait for large batches to be queued up
+- embedded systems, where memory constraints are tighter than on servers
+- when performing inference in reduced precision, especially for integer (e.g. 
int8) inference. 
+
+In the past, the main hindrance for the user wishing to benefit from TensorRT 
was the fact that the model needed to be exported from the framework first. 
Once the model got exported through some means (NNVM to TensorRT graph rewrite, 
via ONNX, etc.), one had to then write a TensorRT client application, which 
would feed the data into the TensorRT engine. Since at that point the model was 
independent of the original framework, and since TensorRT could only compute 
the neural network layers but the user had to bring their own data pipeline, 
this increased the burden on the user and reduced the likelihood of 
reproducibility (e.g. different frameworks may have slightly different data 
pipelines, or flexibility of data pipeline operation ordering). Moreover, since 
frameworks typically support more operators than TensorRT, one could have to 
resort to TensorRT plugins for operations that aren't already available via the 
TensorRT graph API.  
+
+The current experimental runtime integration of TensorRT with MxNet resolves 
the above concerns by ensuring that:
+- the graph is still executed by MxNet
+- the MxNet data pipeline is preserved
+- the TensorRT runtime integration logic partitions the graph into subgraphs 
that are either TensorRT compatible or incompatible
+- the graph partitioner collects the TensorRT-compatible subgraphs, hands them 
over to TensorRT, and substitutes the TensorRT compatible subgraph with a 
TensorRT library call, represented as a TensorRT node in NNVM.
+- if a node is not TensorRT compatible, it won't be extracted and substituted 
with a TensorRT call, and will still execute within MxNet
+
+The above points ensure that we find a compromise between the flexibility of 
MxNet, and fast inference in TensorRT, without putting a burden on the user to 
learn how TensorRT APIs work, without the need to write one's own client 
application and data pipeline, etc.
+
+## How do I build MxNet with TensorRT integration?
+
+Building MxNet together with TensorRT is somewhat complex. The recipe will 
hopefully be simplified in the near future, but for now, it's easiest to build 
a Docker container with a Ubuntu 16.04 base. This Dockerfile can be found under 
the ci subdirectory of the MxNet repository. You can build the container as 
follows:
+
+```
+docker build -f ci/docker/Dockerfile.build.ubuntu_gpu_tensorrt -t mxnet_with_tensorrt .
+```
+
+Next, we can run this container as follows (don't forget to install 
[nvidia-docker](https://github.com/NVIDIA/nvidia-docker)):
+
+```no-highlight
+nvidia-docker run -ti --rm mxnet_with_tensorrt
+```
+
+After starting the container, you will find yourself in the /opt/mxnet 
directory by default.
+
+## Running a "hello, world" model / unit test:
+
+You can then run the LeNet-5 unit test, which will train LeNet-5 on MNIST, and 
subsequently run inference in MxNet, as well as using the MxNet-TensorRT 
runtime integration, and compare the results. The test can be run as follows:
+
+```no-highlight
+python tests/python/tensorrt/test_tensorrt_lenet5.py
+```
+
+You should get a result similar to the following:
+
+```no-highlight
+Running inference in MxNet
+[03:31:18] src/operator/nn/./cudnn/./cudnn_algoreg-inl.h:107: Running 
performance tests to find the best convolution algorithm, this can take a 
while... (setting env variable MXNET_CUDNN_AUTOTUNE_DEFAULT to 0 to disable)
+Running inference in MxNet-TensorRT
+[03:31:18] src/operator/contrib/nnvm_to_onnx.cc:152: ONNX graph construction 
complete.
+Building TensorRT engine, FP16 available:1
+Max batch size: 1024
+Max workspace size: 1024 

[GitHub] mkolod commented on a change in pull request #11325: Added TensorRT runtime integration

2018-06-25 Thread GitBox
mkolod commented on a change in pull request #11325: Added TensorRT runtime 
integration
URL: https://github.com/apache/incubator-mxnet/pull/11325#discussion_r197966796
 
 

 ##
 File path: src/operator/contrib/nnvm_to_onnx-inl.h
 ##
 @@ -0,0 +1,156 @@
+#ifndef MXNET_OPERATOR_CONTRIB_NNVM_TO_ONNX_INL_H_
+#define MXNET_OPERATOR_CONTRIB_NNVM_TO_ONNX_INL_H_
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * Copyright (c) 2018 by Contributors
+ * \file tensorrt-inl.h
+ * \brief TensorRT Operator
+ * \author Marek Kolodziej, Clement Fuji Tsang
+*/
+
+#if MXNET_USE_TENSORRT
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include 
+#include 
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include "./tensorrt-inl.h"
+#include "../operator_common.h"
+#include "../../common/utils.h"
+#include "../../common/serialization.h"
+
+namespace mxnet {
+namespace op {
+namespace nnvm_to_onnx {
+
+using namespace nnvm;
+using namespace ::onnx;
+using int64 = ::google::protobuf::int64;
+
+std::unordered_map GetPlaceholderShapes(const 
ShapeVector& shape_inputs,
+const nnvm::IndexedGraph& ig);
+
+std::unordered_map GetOutputLookup(const 
nnvm::IndexedGraph& ig);
+
+void ConvertPlaceholder(
+  const std::string& node_name,
+  const std::unordered_map& placeholder_shapes,
+  GraphProto* const graph_proto);
+
+void ConvertConstant(GraphProto* const graph_proto,
+  const std::string& node_name,
+  std::unordered_map* const shared_buffer);
+
+void ConvertOutput(op::tensorrt::InferenceMap_t* const trt_output_map,
+   GraphProto* const graph_proto,
+   const std::unordered_map::iterator& 
out_iter,
+   const std::string& node_name,
+   const nnvm::Graph& g,
+   const StorageTypeVector& storage_types,
+   const DTypeVector& dtypes);
+
+typedef void (*ConverterFunction)(NodeProto *node_proto,
+  const NodeAttrs ,
+  const nnvm::IndexedGraph ,
+  const array_view 
);
+
+
+// Forward declarations
+void ConvertConvolution(
+NodeProto *node_proto,
+const NodeAttrs ,
+const nnvm::IndexedGraph ,
+const array_view );
+
+
+void ConvertPooling(NodeProto *node_proto,
+const NodeAttrs ,
+const nnvm::IndexedGraph ,
+const array_view );
+
+void ConvertActivation(NodeProto *node_proto,
+   const NodeAttrs ,
+   const nnvm::IndexedGraph ,
+   const array_view );
+
+void ConvertFullyConnected(NodeProto *node_proto,
+   const NodeAttrs ,
+   const nnvm::IndexedGraph ,
+   const array_view );
+
+void ConvertSoftmaxOutput(NodeProto *node_proto,
+  const NodeAttrs ,
+  const nnvm::IndexedGraph ,
+  const array_view );
+
+void ConvertFlatten(NodeProto *node_proto,
+const NodeAttrs ,
+const nnvm::IndexedGraph ,
+const array_view );
+
+void ConvertBatchNorm(NodeProto *node_proto,
+const NodeAttrs ,
+const nnvm::IndexedGraph ,
+const array_view );
+
+void ConvertElementwiseAdd(NodeProto *node_proto,
+const NodeAttrs ,
+const nnvm::IndexedGraph ,
+const array_view );
+
+TRTParam ConvertNnvmGraphToOnnx(
+const nnvm::Graph ,
+std::unordered_map *const shared_buffer);
+
+static const std::unordered_map converter_map 
= {
 
 Review comment:
   @eric-haibin-lin Yes, so far. TensorRT supports more operators, so the list 
will be expanded once the initial integration is in place.



[GitHub] mkolod commented on a change in pull request #11325: Added TensorRT runtime integration

2018-06-25 Thread GitBox
mkolod commented on a change in pull request #11325: Added TensorRT runtime 
integration
URL: https://github.com/apache/incubator-mxnet/pull/11325#discussion_r197966636
 
 

 ##
 File path: src/common/serialization.h
 ##
 @@ -0,0 +1,526 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * Copyright (c) 2015 by Contributors
+ * \file serialization.h
+ * \brief Serialization of some STL and nnvm data-structures
+ * \author Clement Fuji Tsang
+ */
+
+#ifndef MXNET_COMMON_SERIALIZATION_H_
+#define MXNET_COMMON_SERIALIZATION_H_
+
+#include 
+#include 
+#include 
+#include 
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+
+namespace mxnet {
+namespace common {
+
+template
+inline size_t serialized_size(const T& obj);
 
 Review comment:
   @eric-haibin-lin It would make sense to increase test coverage for this 
independently. Will add it to the to-do list for polishing up the PR.




[GitHub] mkolod commented on a change in pull request #11325: Added TensorRT runtime integration

2018-06-25 Thread GitBox
mkolod commented on a change in pull request #11325: Added TensorRT runtime 
integration
URL: https://github.com/apache/incubator-mxnet/pull/11325#discussion_r197966380
 
 

 ##
 File path: python/mxnet/cuda_utils.py
 ##
 @@ -0,0 +1,90 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+# Copyright (c) 2015 by Contributors
+# File: serialization.h
+# Purpose: Functions to query GPU count, arch, etc.
+# Author: Dick Carter
+
+"""Provides information on the visible CUDA GPUs on the system."""
+# pylint: disable=broad-except
+# As a stand-alone program, it prints a list of unique cuda SM architectures
+import ctypes as C
+from ctypes.util import find_library
+
+def cint(init_val=0):
 
 Review comment:
   @eric-haibin-lin Good point, the Ctypes utils could just be moved to base, 
and then reused in cuda_utils.
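
For illustration, here is a minimal version of the kind of ctypes helper under discussion: a wrapper producing a mutable C int that GPU-query code can pass by reference as an out-parameter. The name `cint` follows the diff; everything beyond it (including the CUDA calls it would feed) is an assumption, omitted so the sketch runs anywhere:

```python
import ctypes as C

def cint(init_val=0):
    # Mutable C int, suitable as an out-parameter via C.byref(...)
    # when calling into a shared library such as libcudart.
    return C.c_int(init_val)

count = cint()
count.value = 2   # a real C API would fill this through C.byref(count)
print(count.value)  # -> 2
```

Keeping such small ctypes shims in a shared base module, as suggested, avoids each GPU utility re-declaring the same wrappers.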




[GitHub] szha commented on issue #11340: [MXNET-559] Scripts for running the Broken link checker job

2018-06-25 Thread GitBox
szha commented on issue #11340: [MXNET-559] Scripts for running the Broken 
link checker job
URL: https://github.com/apache/incubator-mxnet/pull/11340#issuecomment-400118911
 
 
   @marcoabreu CI passed. Should this be merged?




[GitHub] szha closed issue #11353: Flaky test test_gluon_trainer.test_trainer_reset_kv

2018-06-25 Thread GitBox
szha closed issue #11353: Flaky test test_gluon_trainer.test_trainer_reset_kv
URL: https://github.com/apache/incubator-mxnet/issues/11353
 
 
   




[incubator-mxnet] branch master updated: Fix #11353 (#11360)

2018-06-25 Thread zhasheng
This is an automated email from the ASF dual-hosted git repository.

zhasheng pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new 619e4bd  Fix #11353 (#11360)
619e4bd is described below

commit 619e4bded058e4bf77029bc30b395275d72f7907
Author: Haibin Lin 
AuthorDate: Mon Jun 25 15:34:54 2018 -0700

Fix #11353 (#11360)

* Update test_gluon_trainer.py

* Update test_gluon_trainer.py

* Update test_gluon_trainer.py

* Update test_gluon_trainer.py

* Update test_gluon_trainer.py

* trigger

* Run 10 times

* Update test_gluon_trainer.py

* run 10K times

* test_trainer_reset_kv didn't fail for 10K time . 2nd Trigger.

* test_trainer_reset_kv didn't fail for 10K times. 3rd Trigger.

* remove for loop
---
 tests/python/unittest/test_gluon_trainer.py | 1 +
 1 file changed, 1 insertion(+)

diff --git a/tests/python/unittest/test_gluon_trainer.py 
b/tests/python/unittest/test_gluon_trainer.py
index 1c59cea..eac9fad 100644
--- a/tests/python/unittest/test_gluon_trainer.py
+++ b/tests/python/unittest/test_gluon_trainer.py
@@ -190,6 +190,7 @@ def test_trainer_reset_kv():
 trainer.step(1)
 assert trainer._kvstore.type == kv
 # load would reset kvstore
+mx.nd.waitall()
 params.load('test_trainer_reset_kv.params')
 assert trainer._kvstore is None
 assert trainer._kv_initialized is False
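
The one-line fix (`mx.nd.waitall()` before `params.load`) closes a race: MXNet executes operations asynchronously, so the parameter file produced by the preceding save may not be complete when `load` reads it. The same race, sketched with a plain background thread standing in for the MXNet engine (stdlib only, so it runs without MXNet installed):

```python
import os
import tempfile
import threading
import time

def save_params(path):
    time.sleep(0.05)            # pretend the write is still in flight
    with open(path, "w") as f:
        f.write("weights")

path = os.path.join(tempfile.mkdtemp(), "params")
writer = threading.Thread(target=save_params, args=(path,))
writer.start()
writer.join()                   # analogue of mx.nd.waitall()
with open(path) as f:
    print(f.read())             # -> weights
```

Without the `join()` (i.e. without `waitall()`), the read can run before the file exists, which is exactly the intermittent failure the flaky test showed.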



[GitHub] szha closed pull request #11360: Fix #11353

2018-06-25 Thread GitBox
szha closed pull request #11360: Fix #11353
URL: https://github.com/apache/incubator-mxnet/pull/11360
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/tests/python/unittest/test_gluon_trainer.py 
b/tests/python/unittest/test_gluon_trainer.py
index 1c59ceaa093..eac9fad45f5 100644
--- a/tests/python/unittest/test_gluon_trainer.py
+++ b/tests/python/unittest/test_gluon_trainer.py
@@ -190,6 +190,7 @@ def check_trainer_reset_kv(kv):
 trainer.step(1)
 assert trainer._kvstore.type == kv
 # load would reset kvstore
+mx.nd.waitall()
 params.load('test_trainer_reset_kv.params')
 assert trainer._kvstore is None
 assert trainer._kv_initialized is False


 




[GitHub] ankkhedia edited a comment on issue #10274: test_ndarray.test_reduce fails in v1.0.0

2018-06-25 Thread GitBox
ankkhedia edited a comment on issue #10274: test_ndarray.test_reduce fails in 
v1.0.0
URL: 
https://github.com/apache/incubator-mxnet/issues/10274#issuecomment-400113596
 
 
   Tested on master for d6813efa2206afb5be98c2da16dd6e2efaf44cda using gcc-6 
(Ubuntu 6.4.0-17ubuntu1~16.04) 6.4.0 20180424, could not reproduce. It's 
probably an ARM-specific issue on edge devices (Raspberry Pi).




[GitHub] hcho3 opened a new pull request #11396: Fix flaky test test_operator_gpu.test_batchnorm_with_type

2018-06-25 Thread GitBox
hcho3 opened a new pull request #11396: Fix flaky test 
test_operator_gpu.test_batchnorm_with_type
URL: https://github.com/apache/incubator-mxnet/pull/11396
 
 
   ## Description ##
   Addresses #10087. See [#9916 
(comment-371736378)](https://github.com/apache/incubator-mxnet/issues/9916#issuecomment-371736378)
 for a justification for this change.
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [ ] The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to 
the relevant [JIRA issue](https://issues.apache.org/jira/projects/MXNET/issues) 
created (except PRs with tiny changes)
   - [x] Changes are complete (i.e. I finished coding on this PR)
   




[incubator-mxnet] branch master updated: Don't fail storing test results if test suite got aborted (#11363) (#11391)

2018-06-25 Thread zhasheng
This is an automated email from the ASF dual-hosted git repository.

zhasheng pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new cdb01fc  Don't fail storing test results if test suite got aborted 
(#11363) (#11391)
cdb01fc is described below

commit cdb01fc72ec5c8973a5ed48076380721db50ffa8
Author: Marco de Abreu 
AuthorDate: Tue Jun 26 00:26:41 2018 +0200

Don't fail storing test results if test suite got aborted (#11363) (#11391)

* Dont fail during artifact storage

* Update Jenkinsfile

* Update Jenkinsfile
---
 Jenkinsfile | 21 +
 1 file changed, 13 insertions(+), 8 deletions(-)

diff --git a/Jenkinsfile b/Jenkinsfile
index 44aad8e..10fdf1d 100644
--- a/Jenkinsfile
+++ b/Jenkinsfile
@@ -97,18 +97,23 @@ def publish_test_coverage() {
 }
 
 def collect_test_results_unix(original_file_name, new_file_name) {
-echo 'Saving python test results for ' + new_file_name
-// Rename file to make it distinguishable. Unfortunately, it's not 
possible to get STAGE_NAME in a parallel stage
-sh 'cp ' + original_file_name + ' ' + new_file_name
-archiveArtifacts artifacts: new_file_name
+if (fileExists(original_file_name)) {
+// Rename file to make it distinguishable. Unfortunately, it's not 
possible to get STAGE_NAME in a parallel stage
+// Thus, we have to pick a name manually and rename the files so that 
they can be stored separately.
+sh 'cp ' + original_file_name + ' ' + new_file_name
+archiveArtifacts artifacts: new_file_name
+}
 }
 
 def collect_test_results_windows(original_file_name, new_file_name) {
-echo 'Saving python test results for ' + new_file_name
 // Rename file to make it distinguishable. Unfortunately, it's not 
possible to get STAGE_NAME in a parallel stage
-bat 'xcopy ' + original_file_name + ' ' + new_file_name + '*'
-archiveArtifacts artifacts: new_file_name
-} 
+// Thus, we have to pick a name manually and rename the files so that they 
can be stored separately.
+if (fileExists(original_file_name)) {
+bat 'xcopy ' + original_file_name + ' ' + new_file_name + '*'
+archiveArtifacts artifacts: new_file_name
+}
+}
+
 
 def docker_run(platform, function_name, use_nvidia, shared_mem = '500m') {
   def command = "ci/build.py --docker-registry ${env.DOCKER_CACHE_REGISTRY} 
%USE_NVIDIA% --platform %PLATFORM% --shm-size %SHARED_MEM% 
/work/runtime_functions.sh %FUNCTION_NAME%"
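
The Groovy change above boils down to one guard: skip the copy-and-archive step when the result file never materialized because the suite was aborted. A hedged Python sketch of the same pattern (the function and file names are illustrative, not part of the Jenkinsfile):

```python
import os
import shutil
import tempfile

def collect_test_results(original, renamed):
    # Only rename/archive when the test run actually produced a file;
    # an aborted suite then no longer fails the pipeline at this step.
    if not os.path.exists(original):
        return False            # nothing to store
    shutil.copy(original, renamed)
    return True                 # archiveArtifacts would run here

d = tempfile.mkdtemp()
src = os.path.join(d, "nosetests.xml")
print(collect_test_results(src, os.path.join(d, "ut.xml")))  # -> False
with open(src, "w") as f:
    f.write("<testsuite/>")
print(collect_test_results(src, os.path.join(d, "ut.xml")))  # -> True
```

This is the same reasoning as the Jenkinsfile's `fileExists(...)` check wrapping `sh 'cp ...'` / `archiveArtifacts`: make the post-processing step tolerant of an upstream abort instead of turning it into a second failure.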



[GitHub] szha closed pull request #11391: Don't fail storing test results if test suite got aborted (#11363)

2018-06-25 Thread GitBox
szha closed pull request #11391: Don't fail storing test results if test suite 
got aborted (#11363)
URL: https://github.com/apache/incubator-mxnet/pull/11391
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/Jenkinsfile b/Jenkinsfile
index 44aad8e006e..10fdf1d6cfa 100644
--- a/Jenkinsfile
+++ b/Jenkinsfile
@@ -97,18 +97,23 @@ def publish_test_coverage() {
 }
 
 def collect_test_results_unix(original_file_name, new_file_name) {
-echo 'Saving python test results for ' + new_file_name
-// Rename file to make it distinguishable. Unfortunately, it's not 
possible to get STAGE_NAME in a parallel stage
-sh 'cp ' + original_file_name + ' ' + new_file_name
-archiveArtifacts artifacts: new_file_name
+if (fileExists(original_file_name)) {
+// Rename file to make it distinguishable. Unfortunately, it's not 
possible to get STAGE_NAME in a parallel stage
+// Thus, we have to pick a name manually and rename the files so that 
they can be stored separately.
+sh 'cp ' + original_file_name + ' ' + new_file_name
+archiveArtifacts artifacts: new_file_name
+}
 }
 
 def collect_test_results_windows(original_file_name, new_file_name) {
-echo 'Saving python test results for ' + new_file_name
 // Rename file to make it distinguishable. Unfortunately, it's not 
possible to get STAGE_NAME in a parallel stage
-bat 'xcopy ' + original_file_name + ' ' + new_file_name + '*'
-archiveArtifacts artifacts: new_file_name
-} 
+// Thus, we have to pick a name manually and rename the files so that they 
can be stored separately.
+if (fileExists(original_file_name)) {
+bat 'xcopy ' + original_file_name + ' ' + new_file_name + '*'
+archiveArtifacts artifacts: new_file_name
+}
+}
+
 
 def docker_run(platform, function_name, use_nvidia, shared_mem = '500m') {
   def command = "ci/build.py --docker-registry ${env.DOCKER_CACHE_REGISTRY} 
%USE_NVIDIA% --platform %PLATFORM% --shm-size %SHARED_MEM% 
/work/runtime_functions.sh %FUNCTION_NAME%"


 




[GitHub] ankkhedia commented on issue #10274: test_ndarray.test_reduce fails in v1.0.0

2018-06-25 Thread GitBox
ankkhedia commented on issue #10274: test_ndarray.test_reduce fails in v1.0.0
URL: 
https://github.com/apache/incubator-mxnet/issues/10274#issuecomment-400113596
 
 
   Tested on master for d6813efa2206afb5be98c2da16dd6e2efaf44cda using gcc-6 
(Ubuntu 6.4.0-17ubuntu1~16.04) 6.4.0 20180424, could not reproduce 




[GitHub] vrakesh edited a comment on issue #11367: Segfault when running test_operator_gpu.test_sparse_dot many times

2018-06-25 Thread GitBox
vrakesh edited a comment on issue #11367: Segfault when running 
test_operator_gpu.test_sparse_dot many times
URL: 
https://github.com/apache/incubator-mxnet/issues/11367#issuecomment-400102780
 
 
   @haojin2 sounds good   thanks




[GitHub] vrakesh commented on issue #11367: Segfault when running test_operator_gpu.test_sparse_dot many times

2018-06-25 Thread GitBox
vrakesh commented on issue #11367: Segfault when running 
test_operator_gpu.test_sparse_dot many times
URL: 
https://github.com/apache/incubator-mxnet/issues/11367#issuecomment-400102780
 
 
   @haojin2 ah sounds good :)




[GitHub] zhreshold commented on issue #9974: DataLoader with workers not compatible with ImageRecordDataset

2018-06-25 Thread GitBox
zhreshold commented on issue #9974: DataLoader with workers not compatible with 
ImageRecordDataset
URL: 
https://github.com/apache/incubator-mxnet/issues/9974#issuecomment-400101752
 
 
   https://github.com/apache/incubator-mxnet/pull/11370




[GitHub] DickJC123 commented on issue #11395: Check failed: e == cudaSuccess CUDA: unspecified launch failure

2018-06-25 Thread GitBox
DickJC123 commented on issue #11395: Check failed: e == cudaSuccess CUDA: 
unspecified launch failure
URL: 
https://github.com/apache/incubator-mxnet/issues/11395#issuecomment-400101599
 
 
   In a private communication, you indicated this was seen on all platforms.  
Here you tag it as 'Windows'.  Please clarify.




[GitHub] DickJC123 commented on issue #11341: Deterministic cudnn algorithms

2018-06-25 Thread GitBox
DickJC123 commented on issue #11341: Deterministic cudnn algorithms
URL: 
https://github.com/apache/incubator-mxnet/issues/11341#issuecomment-400098491
 
 
   I often use tests/jenkins/run_test_ubuntu.sh to compile MXNet and run the 
regression tests.  You may need to set DEV=0 in that script to get past compile 
warnings treated as errors.




[GitHub] marcoabreu commented on issue #11394: Flaky test on Python2 Windows

2018-06-25 Thread GitBox
marcoabreu commented on issue #11394: Flaky test on Python2 Windows
URL: 
https://github.com/apache/incubator-mxnet/issues/11394#issuecomment-400094588
 
 
   Ah, sorry. I have documented it at 
https://github.com/apache/incubator-mxnet/issues/11395. 
   
   I already engaged with Nvidia about that one but forgot to create GitHub 
issue.




[GitHub] marcoabreu commented on issue #11395: Check failed: e == cudaSuccess CUDA: unspecified launch failure

2018-06-25 Thread GitBox
marcoabreu commented on issue #11395: Check failed: e == cudaSuccess CUDA: 
unspecified launch failure
URL: 
https://github.com/apache/incubator-mxnet/issues/11395#issuecomment-400094621
 
 
   @DickJC123 




[GitHub] marcoabreu opened a new issue #11395: Check failed: e == cudaSuccess CUDA: unspecified launch failure

2018-06-25 Thread GitBox
marcoabreu opened a new issue #11395: Check failed: e == cudaSuccess CUDA: 
unspecified launch failure
URL: https://github.com/apache/incubator-mxnet/issues/11395
 
 
   Sometimes, our slaves get corrupted and suddenly all tests start to fail. 
This is unrelated to the tests themselves.
   
   
http://jenkins.mxnet-ci.amazon-ml.com/blue/organizations/jenkins/incubator-mxnet/detail/PR-11377/5/pipeline/
   
   ```
   ======================================================================
   
   ERROR: test_operator_gpu.test_op_roi_align
   
   ----------------------------------------------------------------------
   
   Traceback (most recent call last):
   
 File "C:\Anaconda3\envs\py2\lib\site-packages\nose\case.py", line 197, in 
runTest
   
   self.test(*self.arg)
   
 File "C:\Anaconda3\envs\py2\lib\site-packages\nose\util.py", line 620, in 
newfunc
   
   return func(*arg, **kw)
   
 File 
"C:\jenkins_slave\workspace\ut-python-gpu\tests\python\gpu\../unittest\common.py",
 line 157, in test_new
   
   orig_test(*args, **kwargs)
   
 File 
"C:\jenkins_slave\workspace\ut-python-gpu\tests\python\gpu\../unittest\test_operator.py",
 line 6269, in test_op_roi_align
   
   test_roi_align_value()
   
 File 
"C:\jenkins_slave\workspace\ut-python-gpu\tests\python\gpu\../unittest\test_operator.py",
 line 6230, in test_roi_align_value
   
   data = mx.nd.array(np.arange(N*C*W*H).reshape((N,C,H,W)), ctx=ctx, dtype 
= dtype)
   
 File 
"C:\jenkins_slave\workspace\ut-python-gpu\pkg_vc14_gpu\python\mxnet\ndarray\utils.py",
 line 146, in array
   
   return _array(source_array, ctx=ctx, dtype=dtype)
   
 File 
"C:\jenkins_slave\workspace\ut-python-gpu\pkg_vc14_gpu\python\mxnet\ndarray\ndarray.py",
 line 2357, in array
   
   arr[:] = source_array
   
 File 
"C:\jenkins_slave\workspace\ut-python-gpu\pkg_vc14_gpu\python\mxnet\ndarray\ndarray.py",
 line 444, in __setitem__
   
   self._set_nd_basic_indexing(key, value)
   
 File 
"C:\jenkins_slave\workspace\ut-python-gpu\pkg_vc14_gpu\python\mxnet\ndarray\ndarray.py",
 line 710, in _set_nd_basic_indexing
   
   self._sync_copyfrom(value)
   
 File 
"C:\jenkins_slave\workspace\ut-python-gpu\pkg_vc14_gpu\python\mxnet\ndarray\ndarray.py",
 line 876, in _sync_copyfrom
   
   ctypes.c_size_t(source_array.size)))
   
 File 
"C:\jenkins_slave\workspace\ut-python-gpu\pkg_vc14_gpu\python\mxnet\base.py", 
line 210, in check_call
   
   raise MXNetError(py_str(_LIB.MXGetLastError()))
   
   MXNetError: [06:35:08] 
c:\jenkins_slave\workspace\build-gpu\3rdparty\mshadow\mshadow\./tensor_gpu-inl.h:69:
 Check failed: e == cudaSuccess CUDA: unspecified launch failure
   
   ---------- >> begin captured logging << ----------
   
   common: INFO: Setting test np/mx/python random seeds, use 
MXNET_TEST_SEED=1046236735 to reproduce.
   
   ---------- >> end captured logging << ----------
   ```




[GitHub] marcoabreu closed issue #11064: Flaky test: test_operator.test_op_roi_align

2018-06-25 Thread GitBox
marcoabreu closed issue #11064: Flaky test: test_operator.test_op_roi_align
URL: https://github.com/apache/incubator-mxnet/issues/11064
 
 
   




[GitHub] marcoabreu removed a comment on issue #11394: Flaky test on Python2 Windows

2018-06-25 Thread GitBox
marcoabreu removed a comment on issue #11394: Flaky test on Python2 Windows
URL: 
https://github.com/apache/incubator-mxnet/issues/11394#issuecomment-400093569
 
 
   Oh sorry, that run is a duplicate of 
https://github.com/apache/incubator-mxnet/issues/11064 
   
   I have reopened the issue for you. Please document your findings there.
   
   P.S. In future, please paste the log into the ticket to prevent people from 
having to access our website.
   




[GitHub] eric-haibin-lin opened a new issue #11064: Flaky test: test_operator.test_op_roi_align

2018-06-25 Thread GitBox
eric-haibin-lin opened a new issue #11064: Flaky test: 
test_operator.test_op_roi_align
URL: https://github.com/apache/incubator-mxnet/issues/11064
 
 
   ```
   ======================================================================
   
   FAIL: test_operator.test_op_roi_align
   
   ----------------------------------------------------------------------
   
   Traceback (most recent call last):
   
 File "/usr/local/lib/python3.5/dist-packages/nose/case.py", line 198, in 
runTest
   
   self.test(*self.arg)
   
 File "/work/mxnet/tests/python/unittest/common.py", line 157, in test_new
   
   orig_test(*args, **kwargs)
   
 File "/work/mxnet/tests/python/unittest/test_operator.py", line 6170, in 
test_op_roi_align
   
   test_roi_align_value()
   
 File "/work/mxnet/tests/python/unittest/test_operator.py", line 6149, in 
test_roi_align_value
   
   assert np.allclose(data.grad.asnumpy(), dx, atol = 1e-6), 
np.abs(data.grad.asnumpy() - dx).max()
   
   AssertionError: 1.3150275e-06
   
   ---------- >> begin captured logging << ----------
   
   common: INFO: Setting test np/mx/python random seeds, use 
MXNET_TEST_SEED=1619190489 to reproduce.
   
   ---------- >> end captured logging << ----------
   
   ```
   
http://jenkins.mxnet-ci.amazon-ml.com/blue/organizations/jenkins/incubator-mxnet/detail/PR-11058/1/pipeline
   




[GitHub] marcoabreu commented on issue #11394: Flaky test on Python2 Windows

2018-06-25 Thread GitBox
marcoabreu commented on issue #11394: Flaky test on Python2 Windows
URL: 
https://github.com/apache/incubator-mxnet/issues/11394#issuecomment-400093569
 
 
   Oh sorry, that run is a duplicate of 
https://github.com/apache/incubator-mxnet/issues/11064 
   
   I have reopened the issue for you. Please document your findings there.
   
   P.S. In future, please paste the log into the ticket to prevent people from 
having to access our website.
   




[GitHub] szha commented on a change in pull request #11370: fix recordfile dataset with multi worker

2018-06-25 Thread GitBox
szha commented on a change in pull request #11370: fix recordfile dataset with 
multi worker
URL: https://github.com/apache/incubator-mxnet/pull/11370#discussion_r197938947
 
 

 ##
 File path: tests/python/unittest/test_gluon_data.py
 ##
 @@ -72,6 +72,18 @@ def test_recordimage_dataset():
 assert x.shape[0] == 1 and x.shape[3] == 3
 assert y.asscalar() == i
 
+with_seed()
 
 Review comment:
   @with_seed




[GitHub] lanking520 commented on issue #11394: Flaky test on Python2 Windows

2018-06-25 Thread GitBox
lanking520 commented on issue #11394: Flaky test on Python2 Windows
URL: 
https://github.com/apache/incubator-mxnet/issues/11394#issuecomment-400091572
 
 
   @marcoabreu this one is not a dupe: http://jenkins.mxnet-ci.amazon-ml.com/blue/organizations/jenkins/incubator-mxnet/detail/PR-11377/5/pipeline




[GitHub] nswamy commented on issue #11315: Support Double precision type in Scala

2018-06-25 Thread GitBox
nswamy commented on issue #11315: Support Double precision type in Scala
URL: 
https://github.com/apache/incubator-mxnet/issues/11315#issuecomment-400090838
 
 
   This is both a bug and a feature request, since accuracy will drop when using 
Float32 instead of Float64 on a model




[GitHub] marcoabreu commented on issue #11394: Flaky test on Python2 Windows

2018-06-25 Thread GitBox
marcoabreu commented on issue #11394: Flaky test on Python2 Windows
URL: 
https://github.com/apache/incubator-mxnet/issues/11394#issuecomment-400090376
 
 
   Duplicate of https://github.com/apache/incubator-mxnet/issues/11353




[GitHub] marcoabreu closed issue #11394: Flaky test on Python2 Windows

2018-06-25 Thread GitBox
marcoabreu closed issue #11394: Flaky test on Python2 Windows
URL: https://github.com/apache/incubator-mxnet/issues/11394
 
 
   




[GitHub] aaronmarkham commented on issue #11180: [MXNET-503] Website landing page for MMS, PR II

2018-06-25 Thread GitBox
aaronmarkham commented on issue #11180: [MXNET-503] Website landing page for 
MMS, PR II
URL: https://github.com/apache/incubator-mxnet/pull/11180#issuecomment-400090248
 
 
   Closing for now. Will add this to an ecosystem page soon.




[GitHub] azai91 commented on issue #11371: [MXNET-486] Create CPP test for concat MKLDNN operator

2018-06-25 Thread GitBox
azai91 commented on issue #11371: [MXNET-486] Create CPP test for concat MKLDNN 
operator
URL: https://github.com/apache/incubator-mxnet/pull/11371#issuecomment-400086783
 
 
   @zheng-da please review when you have time.




[GitHub] wenyangchu edited a comment on issue #11341: Deterministic cudnn algorithms

2018-06-25 Thread GitBox
wenyangchu edited a comment on issue #11341: Deterministic cudnn algorithms
URL: 
https://github.com/apache/incubator-mxnet/issues/11341#issuecomment-400075286
 
 
   Hi @DickJC123 , I have little knowledge on the regression test in mxnet. 
Could you please let me know how you ran the test? Thank you!




[GitHub] frankfliu commented on issue #11394: Flaky test on Python2 Windows

2018-06-25 Thread GitBox
frankfliu commented on issue #11394: Flaky test on Python2 Windows
URL: 
https://github.com/apache/incubator-mxnet/issues/11394#issuecomment-400084661
 
 
   Thanks for the question.
   @sandeep-krishnamurthy requesting this be labeled under Flaky and Test.




[GitHub] frankfliu commented on issue #11393: Validation Accuracy is higher than training accuracy.

2018-06-25 Thread GitBox
frankfliu commented on issue #11393: Validation Accuracy is higher than 
training accuracy. 
URL: 
https://github.com/apache/incubator-mxnet/issues/11393#issuecomment-400083102
 
 
   Hi @absalama thanks for the question, @sandeep-krishnamurthy requesting this 
be labeled under Question




[GitHub] azai91 closed pull request #11328: [MXNET-549] MKLDNNSum can handle variable number of inputs

2018-06-25 Thread GitBox
azai91 closed pull request #11328: [MXNET-549] MKLDNNSum can handle variable 
number of inputs
URL: https://github.com/apache/incubator-mxnet/pull/11328
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/src/operator/nn/mkldnn/mkldnn_base.cc b/src/operator/nn/mkldnn/mkldnn_base.cc
index b182aa0b68d..a05a3218911 100644
--- a/src/operator/nn/mkldnn/mkldnn_base.cc
+++ b/src/operator/nn/mkldnn/mkldnn_base.cc
@@ -146,7 +146,8 @@ void CommitOutput(const NDArray &arr, const mkldnn_output_t &res) {
     // We have to allocate new memory for the sum result.
     auto sum_res = TmpMemMgr::Get()->Alloc(
         res.second->get_primitive_desc());
-    op::MKLDNNSum(*res.second, *mem, *sum_res);
+    std::vector<mkldnn::memory> in_mems = {*res.second, *mem};
+    op::MKLDNNSum(in_mems, *sum_res);
     const_cast<NDArray &>(arr).CopyFrom(*sum_res);
   }
 }
diff --git a/src/operator/nn/mkldnn/mkldnn_copy.cc b/src/operator/nn/mkldnn/mkldnn_copy.cc
index 75e51aff006..d6a12f01610 100644
--- a/src/operator/nn/mkldnn/mkldnn_copy.cc
+++ b/src/operator/nn/mkldnn/mkldnn_copy.cc
@@ -50,7 +50,8 @@ void MKLDNNCopy(const nnvm::NodeAttrs& attrs, const OpContext &ctx,
     if (out_mem == nullptr)
       out_mem = out_data.GetMKLDNNData();
     auto sum_res = TmpMemMgr::Get()->Alloc(out_mem->get_primitive_desc());
-    MKLDNNSum(*in_mem, *out_mem, *sum_res);
+    std::vector<mkldnn::memory> in_mems = {*in_mem, *out_mem};
+    MKLDNNSum(in_mems, *sum_res);
     const_cast<NDArray &>(out_data).CopyFrom(*sum_res);
   } else {
     const_cast<NDArray &>(out_data).CopyFrom(*in_mem);
diff --git a/src/operator/nn/mkldnn/mkldnn_ops-inl.h b/src/operator/nn/mkldnn/mkldnn_ops-inl.h
index 50937706d93..850fc509d23 100644
--- a/src/operator/nn/mkldnn/mkldnn_ops-inl.h
+++ b/src/operator/nn/mkldnn/mkldnn_ops-inl.h
@@ -104,8 +104,8 @@ void MKLDNNActivationBackward(const nnvm::NodeAttrs& attrs, const OpContext &ctx,
                               const NDArray &out_grad, const NDArray &in_data,
                               const OpReqType &req, const NDArray &in_grad);
 
-void MKLDNNSum(const mkldnn::memory &arr1, const mkldnn::memory &arr2,
-               const mkldnn::memory &out);
+void MKLDNNSum(std::vector<mkldnn::memory> &in_mems,
+               const mkldnn::memory &out);
 
 }  // namespace op
 }  // namespace mxnet
diff --git a/src/operator/nn/mkldnn/mkldnn_sum.cc b/src/operator/nn/mkldnn/mkldnn_sum.cc
index c51e1081d69..00f2a323510 100644
--- a/src/operator/nn/mkldnn/mkldnn_sum.cc
+++ b/src/operator/nn/mkldnn/mkldnn_sum.cc
@@ -23,6 +23,7 @@
  * \author Da Zheng
 */
 #include 
+#include <vector>
 
 #include "./mkldnn_ops-inl.h"
 #include "./mkldnn_base-inl.h"
@@ -31,16 +32,16 @@
 namespace mxnet {
 namespace op {
 
-void MKLDNNSum(const mkldnn::memory &arr1, const mkldnn::memory &arr2,
+void MKLDNNSum(std::vector<mkldnn::memory> &in_mems,
                const mkldnn::memory &out) {
-  std::vector<mkldnn::memory::primitive_desc> input_pds(2);
-  std::vector<float> scales(2, 1);
+  std::vector<mkldnn::memory::primitive_desc> input_pds(in_mems.size());
+  std::vector<float> scales(in_mems.size(), 1);
   std::vector<mkldnn::primitive::at> inputs;
-  input_pds[0] = arr1.get_primitive_desc();
-  input_pds[1] = arr2.get_primitive_desc();
-  CHECK(input_pds[0] == input_pds[1]);
-  inputs.push_back(arr1);
-  inputs.push_back(arr2);
+  for (int i = 0; i < in_mems.size(); i++) {
+    input_pds[i] = in_mems[i].get_primitive_desc();
+    inputs.push_back(in_mems[i]);
+    if (i > 0) CHECK(input_pds[i] == input_pds[i-1]);
+  }
   // TODO(zhengda) I need to reorder memory here.
   mkldnn::sum::primitive_desc sum_pd(scales, input_pds);
   MKLDNNStream::Get()->RegisterPrim(mkldnn::sum(sum_pd, inputs, out));
@@ -54,7 +55,7 @@ void MKLDNNSumForward(const nnvm::NodeAttrs& attrs, const OpContext &ctx,
   }
 
   TmpMemMgr::Get()->Init(ctx.requested[0]);
-  std::vector<mkldnn::primitive::at> in_prims;
+  std::vector<mkldnn::memory> in_prims;
   std::vector<mkldnn::memory::primitive_desc> in_pds(inputs.size());
   std::vector<float> scales(inputs.size(), 1);
   in_prims.reserve(inputs.size());
@@ -70,11 +71,10 @@ void MKLDNNSumForward(const nnvm::NodeAttrs& attrs, const OpContext &ctx,
     in_prims.push_back(*in_mem);
     in_pds[i] = in_mem->get_primitive_desc();
   }
-
   mkldnn::sum::primitive_desc pdesc(scales, in_pds);
   auto mem = CreateMKLDNNMem(out_data, pdesc.dst_primitive_desc(), req, &inputs[0]);
   MKLDNNStream *stream = MKLDNNStream::Get();
-  stream->RegisterPrim(mkldnn::sum(pdesc, in_prims, *mem.second));
+  MKLDNNSum(in_prims, *mem.second);
   CommitOutput(out_data, mem);
   stream->Submit();
 }
diff --git a/tests/cpp/operator/mkldnn.cc b/tests/cpp/operator/mkldnn.cc
index 82fee67b114..45c03d61e62 100644
--- a/tests/cpp/operator/mkldnn.cc
+++ b/tests/cpp/operator/mkldnn.cc
@@ -799,7 +799,8 @@ TEST(MKLDNN_BASE, MKLDNNSum) {
       if (out_mem == nullptr)
         continue;
       PrintVerifyMsg(in_arr, in_arr);
-      op::MKLDNNSum(*in_mem1, *in_mem2, *out_mem);
+      std::vector<mkldnn::memory> in_mems = {*in_mem1, *in_mem2};
+      op::MKLDNNSum(in_mems, *out_mem);
   

[GitHub] lanking520 opened a new issue #11394: Flaky test on Python2 Windows

2018-06-25 Thread GitBox
lanking520 opened a new issue #11394: Flaky test on Python2 Windows
URL: https://github.com/apache/incubator-mxnet/issues/11394
 
 
   ## Description
   CI Flaky on Python and Windows
   @haojin2 @marcoabreu 
   
   
http://jenkins.mxnet-ci.amazon-ml.com/blue/organizations/jenkins/incubator-mxnet/detail/PR-11377/6/pipeline
   
   
http://jenkins.mxnet-ci.amazon-ml.com/blue/organizations/jenkins/incubator-mxnet/detail/PR-11377/5/pipeline
   




[GitHub] haojin2 commented on a change in pull request #11356: [MXNET-560][WIP] Add temperature parameter in Softmax and SoftmaxOutput operator

2018-06-25 Thread GitBox
haojin2 commented on a change in pull request #11356: [MXNET-560][WIP] Add 
temperature parameter in Softmax and SoftmaxOutput operator
URL: https://github.com/apache/incubator-mxnet/pull/11356#discussion_r197924316
 
 

 ##
 File path: src/operator/nn/softmax-inl.h
 ##
 @@ -127,7 +137,7 @@ inline void SoftmaxGrad(Stream *s, DType *out, DType *ograd,
 #ifdef __CUDACC__
 template
 __global__ void softmax_compute_kernel(DType *in, DType *out, index_t M, int axis,
-                                       Shape<ndim> sshape, Shape<ndim> stride) {
+                                       Shape<ndim> sshape, Shape<ndim> stride, float temperature) {
 
 Review comment:
   Adding a `const` qualifier can be a safety net: it explicitly tells the 
compiler about your assumption that you are not changing the input value at all 
in the function.
   Say you have a function like this:
   ```c++
   int foo(int a) {
     return a;
   }
   ```
   Here your assumption is that the original input should be returned without any 
changes, and that is essential for the correct behavior of this function.
   Now if someone happens to change the function to:
   ```c++
   int foo(int a) {
     a++;  // <- new code that happens to change the value of a, and will 
affect correctness
     return a;
   }
   ```
   the compiler will not complain, because there is no `const` qualifier here. 
So an extra `const` qualifier is not strictly necessary, but it can be helpful.




[GitHub] wenyangchu commented on issue #11341: Deterministic cudnn algorithms

2018-06-25 Thread GitBox
wenyangchu commented on issue #11341: Deterministic cudnn algorithms
URL: 
https://github.com/apache/incubator-mxnet/issues/11341#issuecomment-400075286
 
 
   Hi @DickJC123 , I have little knowledge on the regression test in mxnet. 
Could you please let me know how you ran the test?




[GitHub] absalama opened a new issue #11393: Validation Accuracy is higher than training accuracy.

2018-06-25 Thread GitBox
absalama opened a new issue #11393: Validation Accuracy is higher than training 
accuracy. 
URL: https://github.com/apache/incubator-mxnet/issues/11393
 
 
   I am training ImageNet (1k) on AlexNet. I used the im2rec tool to split off 5% 
of the training data to be used in the validation phase. The result is two 
sets of record files (I use chunks): one set for training and one set for 
validation. 
   
   The log shows the following: 
   
   ```
   top_k_accuracy_5=0.159258cross-entropy=5.465672
   INFO:root:Epoch[0] Batch [9400]  Speed: 395.94 samples/sec   
accuracy=0.052461   top_k_accuracy_5=0.160195   cross-entropy=5.454822
   INFO:root:Epoch[0] Train-accuracy=0.056324
   INFO:root:Epoch[0] Train-top_k_accuracy_5=0.165848
   INFO:root:Epoch[0] Train-cross-entropy=5.416635
   INFO:root:Epoch[0] Time cost=3079.019
   INFO:root:Saved checkpoint to "mxnet_alexnet_single_gpu_all_data_set_256-0001.params"
   INFO:root:Epoch[0] Validation-accuracy=0.078869
   INFO:root:Epoch[0] Validation-top_k_accuracy_5=0.216859
   INFO:root:Epoch[0] Validation-cross-entropy=5.142231
   ```
   
   The validation accuracy here is higher than the training accuracy, and the gap 
grows with further epochs (as of epoch 10, when I am writing this issue, 
validation accuracy is about 7% higher than training accuracy). 
   
   **The commands used for data preprocessing:**
   `python3 im2rec.py --list --recursive --chunks 1024 --train-ratio 0.95 ${IMAGENET_ROOT}/record_io_all_raw_data/metadata-train256/imagenet1k ${IMAGENET_EXTRACTED}/train`
   
   `python3 im2rec.py --resize 256 --quality 95 --num-thread 16 ${IMAGENET_ROOT}/record_io_all_raw_data/metadata-train256/imagenet1k ${IMAGENET_EXTRACTED}/train`
   
   `python3 im2rec.py --resize 256 --quality 95 --num-thread 16 ${IMAGENET_ROOT}/record_io_all_raw_data/metadata-val256/imagenet1k ${IMAGENET_EXTRACTED}/train`
   
   **The arguments used for training:** 
   
   ```
Namespace(batch_size=128, benchmark=0, data_nthreads=4, data_train='/work/projects/Project00755/datasets/imagenet/record_io_all_raw_data/train256/', data_train_idx='', data_val='/work/projects/Project00755/datasets/imagenet/record_io_all_raw_data/val256/', data_val_idx='', disp_batches=200, dtype='float32', gc_threshold=0.5, gc_type='none', gpus='0', image_shape='3,227,227', initializer='default', kv_store='device', load_epoch=None, loss='ce', lr=0.01, lr_factor=0.1, lr_step_epochs='30,60', macrobatch_size=0, max_random_aspect_ratio=0.25, max_random_h=36, max_random_l=50, max_random_rotate_angle=10, max_random_s=50, max_random_scale=1, max_random_shear_ratio=0.1, min_random_scale=1, model_prefix='mxnet_alexnet_single_gpu_all_data_set_256', mom=0.9, monitor=0, network='alexnet', num_classes=1000, num_epochs=80, num_examples=1216718, num_layers=8, optimizer='sgd', pad_size=0, random_crop=1, random_mirror=1, rgb_mean='123.68,116.779,103.939', save_period=1, test_io=0, top_k=5, warmup_epochs=5, warmup_strategy='linear', wd=0.0005)
   ```
   
   Any help will be appreciated. 
   Thanks 
   
   
   




[GitHub] marcoabreu commented on issue #11359: Flaky test test_io:test_ImageRecordIter_seed_augmentation

2018-06-25 Thread GitBox
marcoabreu commented on issue #11359: Flaky test 
test_io:test_ImageRecordIter_seed_augmentation
URL: 
https://github.com/apache/incubator-mxnet/issues/11359#issuecomment-400069936
 
 
   I would have to check, but that could be a good start for further 
investigation.




[GitHub] KellenSunderland closed pull request #11362: WIP

2018-06-25 Thread GitBox
KellenSunderland closed pull request #11362: WIP
URL: https://github.com/apache/incubator-mxnet/pull/11362
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/.gitmodules b/.gitmodules
index 9aeb1c75498..836d824a6f5 100644
--- a/.gitmodules
+++ b/.gitmodules
@@ -26,3 +26,6 @@
 [submodule "3rdparty/tvm"]
path = 3rdparty/tvm
url = https://github.com/dmlc/tvm
+[submodule "3rdparty/onnx-tensorrt"]
+   path = 3rdparty/onnx-tensorrt
+   url = https://github.com/onnx/onnx-tensorrt.git
diff --git a/3rdparty/onnx-tensorrt b/3rdparty/onnx-tensorrt
new file mode 160000
index 000..e7be19cff37
--- /dev/null
+++ b/3rdparty/onnx-tensorrt
@@ -0,0 +1 @@
+Subproject commit e7be19cff377a95817503e8525e20de34cdc574a
diff --git a/Jenkinsfile b/Jenkinsfile
index cc839171f86..452c002fd3c 100644
--- a/Jenkinsfile
+++ b/Jenkinsfile
@@ -340,6 +340,17 @@ try {
 }
   }
 },
+'TensorRT': {
+  node('mxnetlinux-cpu') {
+ws('workspace/build-tensorrt') {
+  timeout(time: max_time, unit: 'MINUTES') {
+init_git()
+docker_run('ubuntu_gpu_tensorrt', 'build_ubuntu_gpu_tensorrt', 
false)
+pack_lib('tensorrt')
+  }
+}
+  }
+},
 'Build CPU windows':{
   node('mxnetwindows-cpu') {
 timeout(time: max_time, unit: 'MINUTES') {
diff --git a/Makefile b/Makefile
index 67aaa7cf707..6a9b34cccf1 100644
--- a/Makefile
+++ b/Makefile
@@ -94,6 +94,14 @@ else
 endif
 CFLAGS += -I$(TPARTYDIR)/mshadow/ -I$(TPARTYDIR)/dmlc-core/include -fPIC 
-I$(NNVM_PATH)/include -I$(DLPACK_PATH)/include -I$(TPARTYDIR)/tvm/include 
-Iinclude $(MSHADOW_CFLAGS)
 LDFLAGS = -pthread $(MSHADOW_LDFLAGS) $(DMLC_LDFLAGS)
+
+
+ifeq ($(USE_TENSORRT), 1)
+   CFLAGS +=  -I$(ROOTDIR) -I$(TPARTYDIR) 
-DONNX_NAMESPACE=$(ONNX_NAMESPACE) -DMXNET_USE_TENSORRT=1
+   LDFLAGS += -lprotobuf -pthread -lonnx -lonnx_proto -lnvonnxparser 
-lnvonnxparser_runtime -lnvinfer -lnvinfer_plugin
+endif
+# -L/usr/local/lib
+
 ifeq ($(DEBUG), 1)
NVCCFLAGS += -std=c++11 -Xcompiler -D_FORCE_INLINES -g -G -O0 -ccbin 
$(CXX) $(MSHADOW_NVCCFLAGS)
 else
diff --git a/amalgamation/amalgamation.py b/amalgamation/amalgamation.py
index 52d775b7692..a3c28f7118e 100644
--- a/amalgamation/amalgamation.py
+++ b/amalgamation/amalgamation.py
@@ -23,13 +23,12 @@
 import platform
 
 blacklist = [
-'Windows.h', 'cublas_v2.h', 'cuda/tensor_gpu-inl.cuh',
-'cuda_runtime.h', 'cudnn.h', 'cudnn_lrn-inl.h', 'curand.h', 
'curand_kernel.h',
-'glog/logging.h', 'io/azure_filesys.h', 'io/hdfs_filesys.h', 
'io/s3_filesys.h',
-'kvstore_dist.h', 'mach/clock.h', 'mach/mach.h',
-'malloc.h', 'mkl.h', 'mkl_cblas.h', 'mkl_vsl.h', 'mkl_vsl_functions.h',
-'nvml.h', 'opencv2/opencv.hpp', 'sys/stat.h', 'sys/types.h', 'cuda.h', 
'cuda_fp16.h',
-'omp.h', 'execinfo.h', 'packet/sse-inl.h', 'emmintrin.h', 
'thrust/device_vector.h',
+'Windows.h', 'cublas_v2.h', 'cuda/tensor_gpu-inl.cuh', 'cuda_runtime.h', 
'cudnn.h',
+'cudnn_lrn-inl.h', 'curand.h', 'curand_kernel.h', 'glog/logging.h', 
'io/azure_filesys.h',
+'io/hdfs_filesys.h', 'io/s3_filesys.h', 'kvstore_dist.h', 'mach/clock.h', 
'mach/mach.h',
+'malloc.h', 'mkl.h', 'mkl_cblas.h', 'mkl_vsl.h', 'mkl_vsl_functions.h', 
'NvInfer.h', 'nvml.h',
+'opencv2/opencv.hpp', 'sys/stat.h', 'sys/types.h', 'cuda.h', 
'cuda_fp16.h', 'omp.h',
+'onnx/onnx.pb.h', 'execinfo.h', 'packet/sse-inl.h', 'emmintrin.h', 
'thrust/device_vector.h',
 'cusolverDn.h', 'internal/concurrentqueue_internal_debug.h', 
'relacy/relacy_std.hpp',
 'relacy_shims.h', 'ittnotify.h', 'shared_mutex'
 ]
@@ -150,6 +149,7 @@ def expand(x, pending, stage):
 h not in sysheaders and
 'mkl' not in h and
 'nnpack' not in h and
+'tensorrt' not in h and
 not h.endswith('.cuh')): sysheaders.append(h)
 else:
 expand.treeDepth += 1
diff --git a/ci/docker/Dockerfile.build.ubuntu_gpu_tensorrt 
b/ci/docker/Dockerfile.build.ubuntu_gpu_tensorrt
new file mode 100755
index 000..255da316041
--- /dev/null
+++ b/ci/docker/Dockerfile.build.ubuntu_gpu_tensorrt
@@ -0,0 +1,41 @@
+# -*- mode: dockerfile -*-
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by 

[GitHub] marcoabreu commented on issue #11390: [MXNET-23] add README for test directory

2018-06-25 Thread GitBox
marcoabreu commented on issue #11390: [MXNET-23] add README for test directory
URL: https://github.com/apache/incubator-mxnet/pull/11390#issuecomment-400068052
 
 
   See 
https://cwiki.apache.org/confluence/display/MXNET/Reproducing+test+results for 
reference




[GitHub] marcoabreu commented on issue #11390: [MXNET-23] add README for test directory

2018-06-25 Thread GitBox
marcoabreu commented on issue #11390: [MXNET-23] add README for test directory
URL: https://github.com/apache/incubator-mxnet/pull/11390#issuecomment-400066497
 
 
   Hi, thanks for improving the instructions!
   
   Could you please also include the Docker-based method? This way, people 
don't have to set up local dependencies but can simply use the Docker container.




[GitHub] spidyDev commented on a change in pull request #11390: [MXNET-23] add README for test directory

2018-06-25 Thread GitBox
spidyDev commented on a change in pull request #11390: [MXNET-23] add README 
for test directory
URL: https://github.com/apache/incubator-mxnet/pull/11390#discussion_r197912493
 
 

 ##
 File path: tests/README.md
 ##
 @@ -0,0 +1,45 @@
+# Testing MXNET
+
+## Running CPP Tests
+
+1. Install [cmake](https://cmake.org/install/)
+1. Create a build directory in the root of the mxnet project
+```
+mkdir build
+cd build
+```
+1. Generate your Makefile and build along with the tests with cmake (specify 
appropriate flags)
+```
+cmake -DUSE_CUDNN=ON -DUSE_CUDA=ON -DUSE_MKLDNN=ON -DBLAS=Open 
-DCMAKE_BUILD_TYPE=Debug .. && make
+```
+1.  Run tests
+```
+ctest --verbose
+```
+
+1. The following will run all the tests in the `cpp` directory. To run just 
your test file, replace the following in your `tests/CMakeLists.txt`
+```
+file(GLOB_RECURSE UNIT_TEST_SOURCE "cpp/*.cc" "cpp/*.h")
+```
+with
+```
+file(GLOB_RECURSE UNIT_TEST_SOURCE "cpp/test_main.cc" "cpp/{YOUR TEST 
FILE}")
 
 Review comment:
   Just to be clear :  
   {COMPLETE PATH TO YOUR TEST FILE}




[GitHub] anirudhacharya removed a comment on issue #11380: Add ability to query cuDNN BatchNorm min. epsilon. Allow ONNX importer to use cuDNN BN if chosen eps >= cuDNN min. eps.

2018-06-25 Thread GitBox
anirudhacharya removed a comment on issue #11380: Add ability to query cuDNN 
BatchNorm min. epsilon. Allow ONNX importer to use cuDNN BN if chosen eps >= 
cuDNN min. eps.
URL: https://github.com/apache/incubator-mxnet/pull/11380#issuecomment-400063961
 
 
   yes, I can make this change.




[GitHub] anirudhacharya commented on issue #11380: Add ability to query cuDNN BatchNorm min. epsilon. Allow ONNX importer to use cuDNN BN if chosen eps >= cuDNN min. eps.

2018-06-25 Thread GitBox
anirudhacharya commented on issue #11380: Add ability to query cuDNN BatchNorm 
min. epsilon. Allow ONNX importer to use cuDNN BN if chosen eps >= cuDNN min. 
eps.
URL: https://github.com/apache/incubator-mxnet/pull/11380#issuecomment-400063961
 
 
   yes, I can make this change.




[GitHub] leezu opened a new pull request #11392: Document AdaGrad eps as initial history accumulator value

2018-06-25 Thread GitBox
leezu opened a new pull request #11392: Document AdaGrad eps as initial history 
accumulator value
URL: https://github.com/apache/incubator-mxnet/pull/11392
 
 
   See  #11223




[GitHub] marcoabreu commented on issue #11340: [MXNET-559] Scripts for running the Broken link checker job

2018-06-25 Thread GitBox
marcoabreu commented on issue #11340: [MXNET-559] Scripts for running the  
Broken link checker job
URL: https://github.com/apache/incubator-mxnet/pull/11340#issuecomment-400062796
 
 
   I have retriggered CI. Please force push your PR until CI passes.




[GitHub] marcoabreu opened a new pull request #11391: Don't fail storing test results if test suite got aborted (#11363)

2018-06-25 Thread GitBox
marcoabreu opened a new pull request #11391: Don't fail storing test results if 
test suite got aborted (#11363)
URL: https://github.com/apache/incubator-mxnet/pull/11391
 
 
   ## Description ##
   Address https://github.com/apache/incubator-mxnet/issues/11363 
   
   Don't fail if the test result file does not exist. This might happen if a test 
suite failed and thus the following test suites got aborted. 
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [ ] The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to 
the relevant [JIRA issue](https://issues.apache.org/jira/projects/MXNET/issues) 
created (except PRs with tiny changes)
   - [x] Changes are complete (i.e. I finished coding on this PR)
   - [x] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [x] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, 
the source of the dataset, expected performance on test set and reference to 
the original paper if applicable
   - Check the API doc at 
http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [x] To my best knowledge, examples are either not affected by this 
change, or have been fixed to be compatible with this change
   
   ### Changes ###
   
   ## Comments ##
   
   




[GitHub] haojin2 commented on a change in pull request #11356: [MXNET-560][WIP] Add temperature parameter in Softmax and SoftmaxOutput operator

2018-06-25 Thread GitBox
haojin2 commented on a change in pull request #11356: [MXNET-560][WIP] Add 
temperature parameter in Softmax and SoftmaxOutput operator
URL: https://github.com/apache/incubator-mxnet/pull/11356#discussion_r197905846
 
 

 ##
 File path: src/operator/nn/softmax-inl.h
 ##
 @@ -145,23 +155,36 @@ __global__ void softmax_compute_kernel(DType *in, DType 
*out, index_t M, int axi
   __syncthreads();
 
   red::sum::SetInitValue(smem[x]);
-  for (index_t i = x; i < M; i += x_size) {
-red::sum::Reduce(smem[x], static_cast(expf(in[base + i*sa] - 
smax)));
+  if (temperature == 1.0) {
+for (index_t i = x; i < M; i += x_size) {
+  red::sum::Reduce(smem[x], static_cast(expf(in[base + i*sa] - 
smax)));
+}
+  } else {
+for (index_t i = x; i < M; i += x_size) {
+  red::sum::Reduce(smem[x], static_cast(expf((in[base + i*sa] - 
smax)/temperature)));
+}
   }
+
   __syncthreads();
   cuda::Reduce1D(smem);
   __syncthreads();
   DType ssum = smem[0];
   __syncthreads();
 
-  for (index_t i = x; i < M; i += x_size) {
-out[base + i*sa] = OP::Map(in[base + i*sa] - smax, ssum);
+  if (temperature == 1.0) {
 
 Review comment:
   @apeforest Could you provide data on which version is faster? We have a 
`test_speed` function ready for benchmarking: 
https://github.com/apache/incubator-mxnet/blob/a7952f0b3218363a9520aa606f43db94a34c55b8/python/mxnet/test_utils.py#L1133


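For context on the review thread above: the hunk duplicates the reduction loops so the common `temperature == 1.0` path avoids a per-element division. A NumPy sketch of the temperature-scaled softmax being computed (the function name is mine, not part of the PR):

```python
import numpy as np

def softmax_with_temperature(x, temperature=1.0, axis=-1):
    """Temperature-scaled softmax; temperature=1.0 reduces to plain softmax."""
    x = np.asarray(x, dtype=np.float64)
    # Subtract the row max for numerical stability (the `in[...] - smax` term
    # in the kernel), then divide by the temperature before exponentiating.
    z = (x - np.max(x, axis=axis, keepdims=True)) / temperature
    e = np.exp(z)
    return e / np.sum(e, axis=axis, keepdims=True)
```

A larger temperature flattens the output distribution toward uniform; `temperature=1.0` recovers the standard softmax.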


[incubator-mxnet] branch master updated: [MXNET-538] Add XUnit test result publishing to windows (#11348)

2018-06-25 Thread marcoabreu
This is an automated email from the ASF dual-hosted git repository.

marcoabreu pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new 225f71f  [MXNET-538] Add XUnit test result publishing to windows 
(#11348)
225f71f is described below

commit 225f71f744ac5e7bd29868b6d3ba0e4fe2527c43
Author: Marco de Abreu 
AuthorDate: Mon Jun 25 20:57:39 2018 +0200

[MXNET-538] Add XUnit test result publishing to windows (#11348)

* Add test result publishing to windows

* Fix names of files

* Fix syntax of xcopy on Windows
---
 Jenkinsfile | 20 +---
 1 file changed, 9 insertions(+), 11 deletions(-)

diff --git a/Jenkinsfile b/Jenkinsfile
index cc83917..44aad8e 100644
--- a/Jenkinsfile
+++ b/Jenkinsfile
@@ -106,7 +106,7 @@ def collect_test_results_unix(original_file_name, 
new_file_name) {
 def collect_test_results_windows(original_file_name, new_file_name) {
 echo 'Saving python test results for ' + new_file_name
 // Rename file to make it distinguishable. Unfortunately, it's not 
possible to get STAGE_NAME in a parallel stage
-bat 'xcopy ' + original_file_name + ' ' + new_file_name
+bat 'xcopy ' + original_file_name + ' ' + new_file_name + '*'
 archiveArtifacts artifacts: new_file_name
 } 
 
@@ -786,8 +786,7 @@ try {
 del /S /Q ${env.WORKSPACE}\\pkg_vc14_cpu\\python\\*.pyc
 C:\\mxnet\\test_cpu.bat"""
 } finally {
-  // We are unable to modify test_cpu.bat, so we can't track test 
failures on Windows
-  // collect_test_results_windows('nosetests.xml', 
'nosetests_windows_python2_cpu.xml')
+  collect_test_results_windows('nosetests_unittest.xml', 
'nosetests_unittest_windows_python2_cpu.xml')
 }
   }
 }
@@ -809,8 +808,7 @@ try {
 del /S /Q ${env.WORKSPACE}\\pkg_vc14_cpu\\python\\*.pyc
 C:\\mxnet\\test_cpu.bat"""
 } finally {
-  // We are unable to modify test_cpu.bat, so we can't track test 
failures on Windows
-  // collect_test_results_windows('nosetests.xml', 
'nosetests_windows_python3_cpu.xml')
+  collect_test_results_windows('nosetests_unittest.xml', 
'nosetests_unittest_windows_python3_cpu.xml')
 }
   }
 }
@@ -832,8 +830,8 @@ try {
 del /S /Q ${env.WORKSPACE}\\pkg_vc14_gpu\\python\\*.pyc
 C:\\mxnet\\test_gpu.bat"""
 } finally {
-  // We are unable to modify test_cpu.bat, so we can't track test 
failures on Windows
-  // collect_test_results_windows('nosetests.xml', 
'nosetests_windows_python2_gpu.xml')
+  collect_test_results_windows('nosetests_gpu_forward.xml', 
'nosetests_gpu_forward_windows_python2_gpu.xml')
+  collect_test_results_windows('nosetests_gpu_operator.xml', 
'nosetests_gpu_operator_windows_python2_gpu.xml')
 }
   }
 }
@@ -855,8 +853,8 @@ try {
 del /S /Q ${env.WORKSPACE}\\pkg_vc14_gpu\\python\\*.pyc
 C:\\mxnet\\test_gpu.bat"""
 } finally {
-  // We are unable to modify test_cpu.bat, so we can't track test 
failures on Windows
-  // collect_test_results_windows('nosetests.xml', 
'nosetests_windows_python3_gpu.xml')
+  collect_test_results_windows('nosetests_gpu_forward.xml', 
'nosetests_gpu_forward_windows_python3_gpu.xml')
+  collect_test_results_windows('nosetests_gpu_operator.xml', 
'nosetests_gpu_operator_windows_python3_gpu.xml')   
 }
   }
 }
@@ -878,8 +876,8 @@ try {
 del /S /Q ${env.WORKSPACE}\\pkg_vc14_gpu_mkldnn\\python\\*.pyc
 C:\\mxnet\\test_gpu.bat"""
 } finally {
-  // We are unable to modify test_cpu.bat, so we can't track test 
failures on Windows
-  // collect_test_results_windows('nosetests.xml', 
'nosetests_windows_python3_mkldnn_Gpu.xml')
+  collect_test_results_windows('nosetests_gpu_forward.xml', 
'nosetests_gpu_forward_windows_python3_gpu_mkldnn.xml')
+  collect_test_results_windows('nosetests_gpu_operator.xml', 
'nosetests_gpu_operator_windows_python3_gpu_mkldnn.xml')
 }
   }
 }
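For context, the `xcopy` tweak in this commit only copies each XUnit file under a stage-specific name so parallel stages don't collide; the trailing `*` tells `xcopy` the destination is a file name, which suppresses its file-or-directory prompt. A portable sketch of the same step (names are illustrative, not MXNet code):

```python
import shutil
from pathlib import Path

def collect_test_results(original_file_name, new_file_name):
    """Copy an XUnit result file under a distinguishable, stage-specific
    name so parallel CI stages do not overwrite each other's artifacts."""
    shutil.copyfile(original_file_name, new_file_name)
    return Path(new_file_name).exists()
```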



[GitHub] marcoabreu closed pull request #11348: [MXNET-538] Add XUnit test result publishing to windows

2018-06-25 Thread GitBox
marcoabreu closed pull request #11348: [MXNET-538] Add XUnit test result 
publishing to windows
URL: https://github.com/apache/incubator-mxnet/pull/11348
 
 
   


diff --git a/Jenkinsfile b/Jenkinsfile
index cc839171f86..44aad8e006e 100644
--- a/Jenkinsfile
+++ b/Jenkinsfile
@@ -106,7 +106,7 @@ def collect_test_results_unix(original_file_name, 
new_file_name) {
 def collect_test_results_windows(original_file_name, new_file_name) {
 echo 'Saving python test results for ' + new_file_name
 // Rename file to make it distinguishable. Unfortunately, it's not 
possible to get STAGE_NAME in a parallel stage
-bat 'xcopy ' + original_file_name + ' ' + new_file_name
+bat 'xcopy ' + original_file_name + ' ' + new_file_name + '*'
 archiveArtifacts artifacts: new_file_name
 } 
 
@@ -786,8 +786,7 @@ try {
 del /S /Q ${env.WORKSPACE}\\pkg_vc14_cpu\\python\\*.pyc
 C:\\mxnet\\test_cpu.bat"""
 } finally {
-  // We are unable to modify test_cpu.bat, so we can't track test 
failures on Windows
-  // collect_test_results_windows('nosetests.xml', 
'nosetests_windows_python2_cpu.xml')
+  collect_test_results_windows('nosetests_unittest.xml', 
'nosetests_unittest_windows_python2_cpu.xml')
 }
   }
 }
@@ -809,8 +808,7 @@ try {
 del /S /Q ${env.WORKSPACE}\\pkg_vc14_cpu\\python\\*.pyc
 C:\\mxnet\\test_cpu.bat"""
 } finally {
-  // We are unable to modify test_cpu.bat, so we can't track test 
failures on Windows
-  // collect_test_results_windows('nosetests.xml', 
'nosetests_windows_python3_cpu.xml')
+  collect_test_results_windows('nosetests_unittest.xml', 
'nosetests_unittest_windows_python3_cpu.xml')
 }
   }
 }
@@ -832,8 +830,8 @@ try {
 del /S /Q ${env.WORKSPACE}\\pkg_vc14_gpu\\python\\*.pyc
 C:\\mxnet\\test_gpu.bat"""
 } finally {
-  // We are unable to modify test_cpu.bat, so we can't track test 
failures on Windows
-  // collect_test_results_windows('nosetests.xml', 
'nosetests_windows_python2_gpu.xml')
+  collect_test_results_windows('nosetests_gpu_forward.xml', 
'nosetests_gpu_forward_windows_python2_gpu.xml')
+  collect_test_results_windows('nosetests_gpu_operator.xml', 
'nosetests_gpu_operator_windows_python2_gpu.xml')
 }
   }
 }
@@ -855,8 +853,8 @@ try {
 del /S /Q ${env.WORKSPACE}\\pkg_vc14_gpu\\python\\*.pyc
 C:\\mxnet\\test_gpu.bat"""
 } finally {
-  // We are unable to modify test_cpu.bat, so we can't track test 
failures on Windows
-  // collect_test_results_windows('nosetests.xml', 
'nosetests_windows_python3_gpu.xml')
+  collect_test_results_windows('nosetests_gpu_forward.xml', 
'nosetests_gpu_forward_windows_python3_gpu.xml')
+  collect_test_results_windows('nosetests_gpu_operator.xml', 
'nosetests_gpu_operator_windows_python3_gpu.xml')   
 }
   }
 }
@@ -878,8 +876,8 @@ try {
 del /S /Q ${env.WORKSPACE}\\pkg_vc14_gpu_mkldnn\\python\\*.pyc
 C:\\mxnet\\test_gpu.bat"""
 } finally {
-  // We are unable to modify test_cpu.bat, so we can't track test 
failures on Windows
-  // collect_test_results_windows('nosetests.xml', 
'nosetests_windows_python3_mkldnn_Gpu.xml')
+  collect_test_results_windows('nosetests_gpu_forward.xml', 
'nosetests_gpu_forward_windows_python3_gpu_mkldnn.xml')
+  collect_test_results_windows('nosetests_gpu_operator.xml', 
'nosetests_gpu_operator_windows_python3_gpu_mkldnn.xml')
 }
   }
 }


 




[GitHub] azai91 opened a new pull request #11390: [MXNET-23] add README for test directory

2018-06-25 Thread GitBox
azai91 opened a new pull request #11390: [MXNET-23] add README for test 
directory
URL: https://github.com/apache/incubator-mxnet/pull/11390
 
 
   ## Description ##
   Add a README to onboard developers for building and running the CPP tests
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [ ] The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to 
the relevant [JIRA issue](https://issues.apache.org/jira/projects/MXNET/issues) 
created (except PRs with tiny changes)
   - [ ] Changes are complete (i.e. I finished coding on this PR)
   - [ ] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [ ] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, 
the source of the dataset, expected performance on test set and reference to 
the original paper if applicable
   - Check the API doc at 
http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [ ] To my best knowledge, examples are either not affected by this 
change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [ ] add readme
   
   ## Comments ##
   - If this change is a backward incompatible change, why must this change be 
made.
   - Interesting edge cases to note here
   




[GitHub] leleamol commented on issue #11340: [MXNET-559] Scripts for running the Broken link checker job

2018-06-25 Thread GitBox
leleamol commented on issue #11340: [MXNET-559] Scripts for running the  Broken 
link checker job
URL: https://github.com/apache/incubator-mxnet/pull/11340#issuecomment-400058320
 
 
   @marcoabreu @sandeep-krishnamurthy @bhavinthaker @eric-haibin-lin 
   
   Hi everybody,
   The PR is approved for merge. Could someone with committer access please 
merge it?
   The CI reports failures in two tests (Python 2: GPU Win and Python 3: 
MKLDNN CPU) that are unrelated to the changes in this PR; these checks are 
blocking the merge.
   





[GitHub] leezu closed pull request #11223: Allow specifying AdaGrad initial accumulator value

2018-06-25 Thread GitBox
leezu closed pull request #11223: Allow specifying AdaGrad initial accumulator 
value
URL: https://github.com/apache/incubator-mxnet/pull/11223
 
 
   


diff --git a/python/mxnet/optimizer.py b/python/mxnet/optimizer.py
index 0c3fc904fb1..e7727b7e586 100644
--- a/python/mxnet/optimizer.py
+++ b/python/mxnet/optimizer.py
@@ -1091,14 +1091,20 @@ class AdaGrad(Optimizer):
 --
 eps: float, optional
 Small value to avoid division by 0.
+initial_accumulator_value: float, default 0
+The Adagrad state is initially set to this value.
 
 """
-def __init__(self, eps=1e-7, **kwargs):
+def __init__(self, eps=1e-7, initial_accumulator_value=0, **kwargs):
 super(AdaGrad, self).__init__(**kwargs)
 self.float_stable_eps = eps
+self.initial_accumulator_value = initial_accumulator_value
 
 def create_state(self, index, weight):
-return zeros(weight.shape, weight.context, stype=weight.stype)  # 
history
+history = zeros(weight.shape, weight.context, stype=weight.stype)
+if self.initial_accumulator_value:
+history[:] = self.initial_accumulator_value
+return history
 
 def update(self, index, weight, grad, state):
 assert(isinstance(weight, NDArray))
diff --git a/tests/python/unittest/test_optimizer.py 
b/tests/python/unittest/test_optimizer.py
index fba10fb522a..cd516738130 100644
--- a/tests/python/unittest/test_optimizer.py
+++ b/tests/python/unittest/test_optimizer.py
@@ -15,6 +15,7 @@
 # specific language governing permissions and limitations
 # under the License.
 
+import itertools
 import numpy as np
 import mxnet as mx
 import mxnet.lr_scheduler as lr_scheduler
@@ -991,12 +992,16 @@ class PyAdaGrad(mx.optimizer.Optimizer):
 Small value to avoid division by 0.
 
 """
-def __init__(self, eps=1e-7, **kwargs):
+def __init__(self, eps=1e-7, initial_accumulator_value=0, **kwargs):
 super(PyAdaGrad, self).__init__(**kwargs)
 self.float_stable_eps = eps
+self.initial_accumulator_value = initial_accumulator_value
 
 def create_state(self, index, weight):
-return mx.nd.zeros(weight.shape, weight.context, stype=weight.stype)
+history = mx.nd.zeros(weight.shape, weight.context, stype=weight.stype)
+if self.initial_accumulator_value:
+history[:] = self.initial_accumulator_value
+return history
 
 def update(self, index, weight, grad, state):
 self._update_count(index)
@@ -1020,21 +1025,21 @@ def test_adagrad():
 cg_options = [{}, {'clip_gradient': 0.4}, {'clip_gradient': 0.5}]
 rg_options = [{}, {'rescale_grad': 0.14}, {'rescale_grad': 0.8}]
 wd_options = [{}, {'wd': 0.0}]
-for dtype in [np.float32]:
-for eps_option in eps_options:
-for cg_option in cg_options:
-for rg_option in rg_options:
-for wd_option in wd_options:
-kwarg = {}
-kwarg.update(eps_option)
-kwarg.update(cg_option)
-kwarg.update(rg_option)
-kwarg.update(wd_option)
-compare_optimizer(opt1(**kwarg), opt2(**kwarg), shape, 
dtype)
-if wd_option.get('wd', 0.0) == 0.0:
-compare_optimizer(opt1(**kwarg), opt2(**kwarg), 
shape, dtype,
-  w_stype='row_sparse', 
g_stype='row_sparse')
+acc_options = [{}, {'initial_accumulator_value': 1.0}]
 
+for dtype in [np.float32]:
+for eps_option, cg_option, rg_option, wd_option, acc_option in 
itertools.product(
+eps_options, cg_options, rg_options, wd_options, acc_options):
+kwarg = {}
+kwarg.update(eps_option)
+kwarg.update(cg_option)
+kwarg.update(rg_option)
+kwarg.update(wd_option)
+kwarg.update(acc_option)
+compare_optimizer(opt1(**kwarg), opt2(**kwarg), shape, dtype)
+if wd_option.get('wd', 0.0) == 0.0:
+compare_optimizer(opt1(**kwarg), opt2(**kwarg), shape, dtype,
+  w_stype='row_sparse', g_stype='row_sparse')
 
 
 if __name__ == '__main__':


 


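The diff above pre-fills the AdaGrad history with `initial_accumulator_value`. A NumPy sketch of what the optimizer then does per step (function names are mine; the real `update` also applies `rescale_grad`, `clip_gradient`, and weight decay):

```python
import numpy as np

def create_state(shape, initial_accumulator_value=0.0):
    """AdaGrad history, optionally pre-filled (the point of this PR)."""
    return np.full(shape, initial_accumulator_value)

def adagrad_update(weight, grad, history, lr=0.01, eps=1e-7):
    """One AdaGrad step: accumulate squared gradients, scale the update."""
    history += grad * grad
    weight -= lr * grad / (np.sqrt(history) + eps)
    return weight, history
```

A nonzero starting history damps the very first updates, since the denominator never starts at `eps` alone.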


[GitHub] Roshrini commented on issue #11380: Add ability to query cuDNN BatchNorm min. epsilon. Allow ONNX importer to use cuDNN BN if chosen eps >= cuDNN min. eps.

2018-06-25 Thread GitBox
Roshrini commented on issue #11380: Add ability to query cuDNN BatchNorm min. 
epsilon. Allow ONNX importer to use cuDNN BN if chosen eps >= cuDNN min. eps.
URL: https://github.com/apache/incubator-mxnet/pull/11380#issuecomment-400052737
 
 
   Thanks @mkolod for making this change. This will definitely be helpful.
   @anirudhacharya @spidyDev 




[GitHub] haojin2 commented on issue #11367: Segfault when running test_operator_gpu.test_sparse_dot many times

2018-06-25 Thread GitBox
haojin2 commented on issue #11367: Segfault when running 
test_operator_gpu.test_sparse_dot many times
URL: 
https://github.com/apache/incubator-mxnet/issues/11367#issuecomment-400050189
 
 
   I already submitted a fix @vrakesh 



