[GitHub] perdasilva commented on a change in pull request #14144: Fixes installation nightly test

2019-02-13 Thread GitBox
perdasilva commented on a change in pull request #14144: Fixes installation 
nightly test
URL: https://github.com/apache/incubator-mxnet/pull/14144#discussion_r256721251
 
 

 ##
 File path: tests/jenkins/run_test_installation_docs.sh
 ##
 @@ -250,6 +250,20 @@ function set_instruction_set() {
 ${sorted_indexes[$end_buildfromsource_command_index]})
 }
 
+# given a build commands string, filter any build commands that ought not be executed
+# during the test. An example would be git clone'ing the mxnet repository. You want to
 
 Review comment:
   I'll update the comment - the "git clone'ing" was meant to reference the git clone command. But fair comment anyway.
   
   The issue is that the build-from-source commands include cloning the repository, which checks out the master branch. The problem occurs when we are testing build-from-source commands pinned to v1.3.x against the master branch. This is why the nightly is failing.
   
   The chosen approach to mitigate this was to filter out these git commands (and the cd into the incubator-mxnet directory). This means that the build-from-source commands will run directly on the version of the repository being tested by Jenkins, not against master.
   
   Does it make sense?
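The filtering described above could be sketched like this (Python for illustration only; `filter_build_commands` is a hypothetical name mirroring the bash helper's intent, and splitting on `;` is a simplification):

```python
import re

def filter_build_commands(commands: str) -> str:
    """Drop commands that must not run during the docs test.

    The build-from-source instructions clone the repo and cd into it,
    which would check out master instead of the commit under test.
    """
    # Patterns for commands to skip (assumed; mirror the bash filter's targets)
    skip = [re.compile(r"^git clone\b"), re.compile(r"^cd incubator-mxnet\b")]
    kept = []
    for cmd in commands.split(";"):
        cmd = cmd.strip()
        if cmd and not any(p.match(cmd) for p in skip):
            kept.append(cmd)
    return "; ".join(kept)
```

For example, `filter_build_commands("git clone https://github.com/apache/incubator-mxnet.git; cd incubator-mxnet; make -j4")` keeps only `make -j4`, so the build runs against the checkout Jenkins already has.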


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] szha commented on issue #14156: V1.4.x RAT check fix

2019-02-13 Thread GitBox
szha commented on issue #14156: V1.4.x RAT check fix
URL: https://github.com/apache/incubator-mxnet/pull/14156#issuecomment-463523688
 
 
   
http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-14156/3/faq/index.html




[GitHub] junrushao1994 commented on issue #14018: [MXNET-1315] Add checks for dynamic-shaped operators in CachedOp

2019-02-13 Thread GitBox
junrushao1994 commented on issue #14018: [MXNET-1315] Add checks for 
dynamic-shaped operators in CachedOp
URL: https://github.com/apache/incubator-mxnet/pull/14018#issuecomment-463520555
 
 
   @zheng-da Finally got CI to pass. Could you help merge this PR? Thanks!




[GitHub] mxnet-label-bot commented on issue #14157: Inconsistent handling for nan

2019-02-13 Thread GitBox
mxnet-label-bot commented on issue #14157: Inconsistent handling for nan 
URL: 
https://github.com/apache/incubator-mxnet/issues/14157#issuecomment-463505234
 
 
   Hey, this is the MXNet Label Bot. 
Thank you for submitting the issue! I will try and suggest some labels so 
that the appropriate MXNet community members can help resolve it. 
Here are my recommended labels: Bug




[GitHub] eric-haibin-lin opened a new issue #14157: Inconsistent handling for nan

2019-02-13 Thread GitBox
eric-haibin-lin opened a new issue #14157: Inconsistent handling for nan 
URL: https://github.com/apache/incubator-mxnet/issues/14157
 
 
   ```
   >>> np.maximum([1],[np.nan])
   array([nan])
   >>> np.maximum([np.nan],[1])
   array([nan])
   
   >>> a = mx.nd.array([np.nan],ctx=mx.gpu()); b=mx.nd.array([3],ctx=mx.gpu())
   >>> mx.nd.maximum(a,b)
   
   [3.]
   <NDArray 1 @gpu(0)>
   
   >>> mx.nd.maximum(b,a)
   
   [nan]
   <NDArray 1 @gpu(0)>
   ```
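For comparison, NumPy itself exposes both behaviors explicitly, and each one is symmetric in argument order, which is the consistency the issue asks for:

```python
import numpy as np

# np.maximum propagates nan regardless of argument order...
assert np.isnan(np.maximum([1.0], [np.nan])[0])
assert np.isnan(np.maximum([np.nan], [1.0])[0])

# ...while np.fmax ignores nan, also symmetrically.
assert np.fmax([1.0], [np.nan])[0] == 1.0
assert np.fmax([np.nan], [1.0])[0] == 1.0
```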




[GitHub] szha edited a comment on issue #14141: [v1.4.x] Update MKL-DNN to fix the OSX build issue

2019-02-13 Thread GitBox
szha edited a comment on issue #14141: [v1.4.x] Update MKL-DNN to fix the OSX 
build issue
URL: https://github.com/apache/incubator-mxnet/pull/14141#issuecomment-463468100
 
 
   Errors:
   
http://jenkins.mxnet-ci.amazon-ml.com/blue/organizations/jenkins/incubator-mxnet/detail/PR-14141/5/pipeline#step-1982-log-1537
 (test_stn)
   
http://jenkins.mxnet-ci.amazon-ml.com/blue/organizations/jenkins/mxnet-validation%2Fcentos-gpu/detail/PR-14141/4/pipeline/#step-97-log-1536
 (test_stn)
   
http://jenkins.mxnet-ci.amazon-ml.com/blue/organizations/jenkins/incubator-mxnet/detail/PR-14141/5/pipeline/1121#step-1540-log-2421
 (R package coverage, seems transient)
   
http://jenkins.mxnet-ci.amazon-ml.com/blue/organizations/jenkins/mxnet-validation%2Fwebsite/detail/PR-14141/4/pipeline/#step-118-log-1420
 (clojure, @gigasquid does it look like the problem you've fixed on master?) 
(update: I cherry-picked the clojure doc fixes to v1.4.x branch)




[GitHub] ciyongch commented on a change in pull request #14128: MKLDNN based Quantized FullyConnected Operator and its fusion

2019-02-13 Thread GitBox
ciyongch commented on a change in pull request #14128: MKLDNN based Quantized 
FullyConnected Operator and its fusion
URL: https://github.com/apache/incubator-mxnet/pull/14128#discussion_r256697213
 
 

 ##
 File path: src/operator/nn/mkldnn/mkldnn_fully_connected-inl.h
 ##
 @@ -0,0 +1,135 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file mkldnn_fully_connected-inl.h
+ * \brief
+ * \author
+*/
+
+#ifndef MXNET_OPERATOR_NN_MKLDNN_MKLDNN_FULLY_CONNECTED_INL_H_
+#define MXNET_OPERATOR_NN_MKLDNN_MKLDNN_FULLY_CONNECTED_INL_H_
+
+#if MXNET_USE_MKLDNN == 1
+
+#include <vector>
+#include <string>
+#include "../fully_connected-inl.h"
+#include "./mkldnn_base-inl.h"
+
+namespace mxnet {
+namespace op {
+
+struct MKLDNNFCParam: public dmlc::Parameter<MKLDNNFCParam> {
+  bool quantized;
+  bool fuse_requantize;
+  bool fuse_dequantize;
+  bool with_relu;
+  dmlc::optional<float> min_calib_range;  // min float value calculated from calibration dataset
+  dmlc::optional<float> max_calib_range;  // max float value calculated from calibration dataset
+
+  DMLC_DECLARE_PARAMETER(MKLDNNFCParam) {
+    DMLC_DECLARE_FIELD(quantized).set_default(false)
+    .describe("enable quantization");
+    DMLC_DECLARE_FIELD(fuse_requantize).set_default(false)
+    .describe("Whether to fuse requantize");
+    DMLC_DECLARE_FIELD(fuse_dequantize).set_default(false)
+    .describe("Whether to fuse dequantize");
+    DMLC_DECLARE_FIELD(with_relu).set_default(false)
+    .describe("Add post relu");
+    DMLC_DECLARE_FIELD(min_calib_range)
+    .set_default(dmlc::optional<float>())
+    .describe("The minimum scalar value in the form of float32 obtained "
+              "through calibration. If present, it will be used to by "
+              "quantized fullyconnected op to calculate primitive scale");
+    DMLC_DECLARE_FIELD(max_calib_range)
+    .set_default(dmlc::optional<float>())
+    .describe("The maximum scalar value in the form of float32 obtained "
+              "through calibration. If present, it will be used to by "
+              "quantized fullyconnected op to calculate primitive scale");
+  }
+};
+
+struct MKLDNNFCFullParam {
+  FullyConnectedParam default_param;
+  MKLDNNFCParam mkldnn_param;
+  std::vector<float> output_scales = {0.0};
+  std::vector<float> requantize_scales = {0.0};
+};
+
+mkldnn::inner_product_forward::primitive_desc GetFCFwdImpl(
+    const MKLDNNFCFullParam &full_param, const bool is_train,
+    const NDArray &data, const NDArray &weight, const NDArray *bias,
+    const mkldnn::memory::desc &out_md);
+
+class MKLDNNFullyConnectedForward {
+ public:
+  mkldnn::inner_product_forward::primitive_desc fwd_pd;
+
+  MKLDNNFullyConnectedForward(const MKLDNNFCFullParam &full_param, bool is_train,
+                              const NDArray &data, const NDArray &weight,
+                              const NDArray *bias,
+                              const mkldnn::memory::desc &out_md)
+      : fwd_pd(GetFCFwdImpl(full_param, is_train, data, weight, bias, out_md)) {}
+
+  void SetNewMem(const mkldnn::memory &data, const mkldnn::memory &weight,
+                 const mkldnn::memory *bias, const mkldnn::memory &output);
+
+  const mkldnn::inner_product_forward &GetFwd() const {
+    return *fwd;
+  }
+
+ private:
+  std::shared_ptr<mkldnn::inner_product_forward> fwd;
+  std::shared_ptr<mkldnn::memory> data;
+  std::shared_ptr<mkldnn::memory> weight;
+  std::shared_ptr<mkldnn::memory> bias;
+  std::shared_ptr<mkldnn::memory> out;
+};
+
+typedef ParamOpSign<MKLDNNFCFullParam> MKLDNNFullyconSignature;
+
+MKLDNNFullyConnectedForward &GetFCFwd(
+    const FullyConnectedParam &param, const bool is_train,
 
 Review comment:
   sure, will fix.
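For background on the `min_calib_range`/`max_calib_range` fields in the quoted header: the calibrated range is what determines the int8 quantization scale. A simplified sketch (not the operator's exact formula):

```python
def int8_scale(min_calib: float, max_calib: float) -> float:
    # Symmetric int8 quantization: map the larger calibrated magnitude to 127.
    data_range = max(abs(min_calib), abs(max_calib))
    return 127.0 / data_range

# e.g. a calibrated range of [-1.0, 2.0] yields a scale of 63.5
assert int8_scale(-1.0, 2.0) == 63.5
```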




[GitHub] ciyongch commented on a change in pull request #14128: MKLDNN based Quantized FullyConnected Operator and its fusion

2019-02-13 Thread GitBox
ciyongch commented on a change in pull request #14128: MKLDNN based Quantized 
FullyConnected Operator and its fusion
URL: https://github.com/apache/incubator-mxnet/pull/14128#discussion_r256696845
 
 

 ##
 File path: src/operator/nn/mkldnn/mkldnn_fully_connected-inl.h
 ##
 @@ -0,0 +1,135 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file mkldnn_fully_connected-inl.h
+ * \brief
+ * \author
+*/
+
+#ifndef MXNET_OPERATOR_NN_MKLDNN_MKLDNN_FULLY_CONNECTED_INL_H_
+#define MXNET_OPERATOR_NN_MKLDNN_MKLDNN_FULLY_CONNECTED_INL_H_
+
+#if MXNET_USE_MKLDNN == 1
+
+#include <vector>
+#include <string>
+#include "../fully_connected-inl.h"
+#include "./mkldnn_base-inl.h"
+
+namespace mxnet {
+namespace op {
+
+struct MKLDNNFCParam: public dmlc::Parameter<MKLDNNFCParam> {
+  bool quantized;
+  bool fuse_requantize;
+  bool fuse_dequantize;
+  bool with_relu;
+  dmlc::optional<float> min_calib_range;  // min float value calculated from calibration dataset
+  dmlc::optional<float> max_calib_range;  // max float value calculated from calibration dataset
+
+  DMLC_DECLARE_PARAMETER(MKLDNNFCParam) {
+    DMLC_DECLARE_FIELD(quantized).set_default(false)
+    .describe("enable quantization");
+    DMLC_DECLARE_FIELD(fuse_requantize).set_default(false)
+    .describe("Whether to fuse requantize");
+    DMLC_DECLARE_FIELD(fuse_dequantize).set_default(false)
+    .describe("Whether to fuse dequantize");
+    DMLC_DECLARE_FIELD(with_relu).set_default(false)
+    .describe("Add post relu");
+    DMLC_DECLARE_FIELD(min_calib_range)
+    .set_default(dmlc::optional<float>())
+    .describe("The minimum scalar value in the form of float32 obtained "
+              "through calibration. If present, it will be used to by "
+              "quantized fullyconnected op to calculate primitive scale");
+    DMLC_DECLARE_FIELD(max_calib_range)
+    .set_default(dmlc::optional<float>())
+    .describe("The maximum scalar value in the form of float32 obtained "
+              "through calibration. If present, it will be used to by "
+              "quantized fullyconnected op to calculate primitive scale");
+  }
+};
+
+struct MKLDNNFCFullParam {
+  FullyConnectedParam default_param;
+  MKLDNNFCParam mkldnn_param;
+  std::vector<float> output_scales = {0.0};
+  std::vector<float> requantize_scales = {0.0};
+};
+
+mkldnn::inner_product_forward::primitive_desc GetFCFwdImpl(
+    const MKLDNNFCFullParam &full_param, const bool is_train,
+    const NDArray &data, const NDArray &weight, const NDArray *bias,
+    const mkldnn::memory::desc &out_md);
+
+class MKLDNNFullyConnectedForward {
+ public:
+  mkldnn::inner_product_forward::primitive_desc fwd_pd;
+
+  MKLDNNFullyConnectedForward(const MKLDNNFCFullParam &full_param, bool is_train,
+                              const NDArray &data, const NDArray &weight,
+                              const NDArray *bias,
+                              const mkldnn::memory::desc &out_md)
+      : fwd_pd(GetFCFwdImpl(full_param, is_train, data, weight, bias, out_md)) {}
+
+  void SetNewMem(const mkldnn::memory &data, const mkldnn::memory &weight,
+                 const mkldnn::memory *bias, const mkldnn::memory &output);
+
+  const mkldnn::inner_product_forward &GetFwd() const {
+    return *fwd;
+  }
+
+ private:
+  std::shared_ptr<mkldnn::inner_product_forward> fwd;
 
 Review comment:
   sure, will fix.




[GitHub] ciyongch commented on a change in pull request #14128: MKLDNN based Quantized FullyConnected Operator and its fusion

2019-02-13 Thread GitBox
ciyongch commented on a change in pull request #14128: MKLDNN based Quantized 
FullyConnected Operator and its fusion
URL: https://github.com/apache/incubator-mxnet/pull/14128#discussion_r256696803
 
 

 ##
 File path: python/mxnet/initializer.py
 ##
 @@ -245,6 +254,10 @@ def _init_weight(self, name, arr):
 """Abstract method to Initialize weight."""
 raise NotImplementedError("Must override it")
 
+def _init_quantized_weight(self, _, arr):
+_arr = random.randint(-127, 127, dtype='int32').asnumpy()
 
 Review comment:
   Yes, the `int8` dtype is a limitation of the current `randint`.
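A sketch of the workaround being discussed: draw in int32 (a dtype `randint` supports) and cast down once the values fit the int8 range. NumPy is used here for illustration only:

```python
import numpy as np

# Draw in int32 over [-127, 127], then cast to int8 (values already fit).
# Note: NumPy's randint upper bound is exclusive, hence 128.
arr32 = np.random.randint(-127, 128, size=(4,), dtype='int32')
arr8 = arr32.astype('int8')
assert arr8.dtype == np.int8
assert ((arr8 >= -127) & (arr8 <= 127)).all()
```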




[incubator-mxnet] branch v1.4.x updated (97ce14d -> 12608d4)

2019-02-13 Thread zhasheng
This is an automated email from the ASF dual-hosted git repository.

zhasheng pushed a change to branch v1.4.x
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from 97ce14d  add license (#14155)
 new 0cce0fe  The latest version of leiningen has a dependency problem with 
codox (#14132)
 new 12608d4  upgrade codox to work with lein 2.9.0 (#14133)

The 8972 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 ci/docker/install/ubuntu_clojure.sh | 1 +
 contrib/clojure-package/project.clj | 2 +-
 docs/build_version_doc/setup_docs_ubuntu.sh | 1 +
 3 files changed, 3 insertions(+), 1 deletion(-)



[GitHub] hey-yahei commented on issue #13986: [Quantization]'Symbol' object has no attribute 'get_backend_symbol'

2019-02-13 Thread GitBox
hey-yahei commented on issue #13986: [Quantization]'Symbol' object has no 
attribute 'get_backend_symbol'
URL: 
https://github.com/apache/incubator-mxnet/issues/13986#issuecomment-463482946
 
 
   I tried installing the nightly version, but when I import mxnet in Python, a segmentation fault occurs:
   ```bash
   Python 3.6.7 |Anaconda, Inc.| (default, Oct 23 2018, 19:16:44)
   [GCC 7.3.0] on linux
   Type "help", "copyright", "credits" or "license" for more information.
   >>> import mxnet
   
   Segmentation fault: 11
   
   Stack trace returned 0 entries:
   ```




[GitHub] ChaiBapchya commented on a change in pull request #14144: Fixes installation nightly test

2019-02-13 Thread GitBox
ChaiBapchya commented on a change in pull request #14144: Fixes installation 
nightly test
URL: https://github.com/apache/incubator-mxnet/pull/14144#discussion_r256685757
 
 

 ##
 File path: tests/jenkins/run_test_installation_docs.sh
 ##
 @@ -250,6 +250,20 @@ function set_instruction_set() {
 ${sorted_indexes[$end_buildfromsource_command_index]})
 }
 
+# given a build commands string, filter any build commands that ought not be executed
+# during the test. An example would be git clone'ing the mxnet repository. You want to
 
 Review comment:
   Also, I would like to rephrase the sentence "You want to run the tests against the current commit being tested in Jenkins" - I don't understand what role "filter_build_command" plays in this example.




[GitHub] ChaiBapchya commented on a change in pull request #14144: Fixes installation nightly test

2019-02-13 Thread GitBox
ChaiBapchya commented on a change in pull request #14144: Fixes installation 
nightly test
URL: https://github.com/apache/incubator-mxnet/pull/14144#discussion_r256685485
 
 

 ##
 File path: tests/jenkins/run_test_installation_docs.sh
 ##
 @@ -250,6 +250,20 @@ function set_instruction_set() {
 ${sorted_indexes[$end_buildfromsource_command_index]})
 }
 
+# given a build commands string, filter any build commands that ought not be executed
+# during the test. An example would be git clone'ing the mxnet repository. You want to
 
 Review comment:
   nitpick: cloning* and MXNet*




[GitHub] szha opened a new pull request #14156: V1.4.x RAT check fix

2019-02-13 Thread GitBox
szha opened a new pull request #14156: V1.4.x RAT check fix
URL: https://github.com/apache/incubator-mxnet/pull/14156
 
 
   ## Description ##
   #14142 #14148 
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [ ] The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to 
the relevant [JIRA issue](https://issues.apache.org/jira/projects/MXNET/issues) 
created (except PRs with tiny changes)
   - [ ] Changes are complete (i.e. I finished coding on this PR)
   - [ ] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [ ] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain the what the example does, 
the source of the dataset, expected performance on test set and reference to 
the original paper if applicable
   - Check the API doc at 
http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [ ] To the best of my knowledge, examples are either not affected by this change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [ ] Feature1, tests, (and when applicable, API doc)
   - [ ] Feature2, tests, (and when applicable, API doc)
   
   ## Comments ##
   - If this change is a backward incompatible change, why must this change be 
made.
   - Interesting edge cases to note here
   




[incubator-mxnet] branch master updated: disable flaky integration test (#14151)

2019-02-13 Thread zhasheng
This is an automated email from the ASF dual-hosted git repository.

zhasheng pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new e356586  disable flaky integration test (#14151)
e356586 is described below

commit e35658628617dbf1a078805f767002e7e589c282
Author: Carin Meier 
AuthorDate: Wed Feb 13 22:34:37 2019 -0500

disable flaky integration test (#14151)
---
 contrib/clojure-package/integration-tests.sh | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/contrib/clojure-package/integration-tests.sh 
b/contrib/clojure-package/integration-tests.sh
index ce480a5..3f80ea5 100755
--- a/contrib/clojure-package/integration-tests.sh
+++ b/contrib/clojure-package/integration-tests.sh
@@ -26,7 +26,7 @@ lein install
 # then run through the examples 
 EXAMPLES_HOME=${MXNET_HOME}/contrib/clojure-package/examples
 # use AWK pattern for blacklisting
-TEST_CASES=`find ${EXAMPLES_HOME} -name test | awk '!/dontselect1|dontselect2/'`
+TEST_CASES=`find ${EXAMPLES_HOME} -name test | awk '!/dontselect1|cnn-text-classification/'`
 for i in $TEST_CASES ; do
  cd ${i} && lein test
-done
\ No newline at end of file
+done
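The AWK pattern above blacklists any test path matching the alternation and keeps the rest. The same selection logic in Python, for illustration:

```python
import re

# Equivalent of awk '!/dontselect1|cnn-text-classification/': keep non-matching paths.
blacklist = re.compile(r"dontselect1|cnn-text-classification")

paths = [
    "examples/cnn-text-classification/test",
    "examples/imclassification/test",
]
selected = [p for p in paths if not blacklist.search(p)]
assert selected == ["examples/imclassification/test"]
```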



[GitHub] szha merged pull request #14151: [Clojure] Disable flaky integration test

2019-02-13 Thread GitBox
szha merged pull request #14151: [Clojure] Disable flaky integration test
URL: https://github.com/apache/incubator-mxnet/pull/14151
 
 
   




[incubator-mxnet] branch v1.4.x updated: add license (#14155)

2019-02-13 Thread zhasheng
This is an automated email from the ASF dual-hosted git repository.

zhasheng pushed a commit to branch v1.4.x
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/v1.4.x by this push:
 new 97ce14d  add license (#14155)
97ce14d is described below

commit 97ce14d5aa7321bf9e06b976bffc9e75f820bd05
Author: Lanking 
AuthorDate: Wed Feb 13 19:33:38 2019 -0800

add license (#14155)
---
 scala-package/assembly/linux-x86_64-cpu/pom.xml  | 16 
 .../linux-x86_64-cpu/src/main/assembly/assembly.xml  | 16 
 scala-package/assembly/linux-x86_64-gpu/pom.xml  | 16 
 .../linux-x86_64-gpu/src/main/assembly/assembly.xml  | 16 
 .../assembly/osx-x86_64-cpu/main/assembly/assembly.xml   | 16 
 scala-package/assembly/osx-x86_64-cpu/pom.xml| 16 
 .../osx-x86_64-cpu/src/main/assembly/assembly.xml| 16 
 scala-package/assembly/pom.xml   | 16 
 scala-package/assembly/src/javadoc.xml   | 16 
 scala-package/assembly/src/source.xml| 16 
 scala-package/core/pom.xml   | 16 
 scala-package/examples/pom.xml   | 16 
 scala-package/infer/pom.xml  | 16 
 scala-package/init-native/linux-x86_64/pom.xml   | 16 
 scala-package/init-native/osx-x86_64/pom.xml | 16 
 scala-package/init-native/pom.xml| 16 
 scala-package/init/pom.xml   | 16 
 scala-package/macros/pom.xml | 16 
 scala-package/mxnet-demo/java-demo/pom.xml   | 16 
 scala-package/mxnet-demo/scala-demo/pom.xml  | 16 
 scala-package/native/linux-x86_64-cpu/pom.xml| 16 
 scala-package/native/linux-x86_64-gpu/pom.xml| 16 
 scala-package/native/osx-x86_64-cpu/pom.xml  | 16 
 scala-package/native/pom.xml | 16 
 scala-package/pom.xml| 16 
 scala-package/spark/pom.xml  | 16 
 tests/nightly/apache_rat_license_check/rat-excludes  |  1 -
 27 files changed, 416 insertions(+), 1 deletion(-)

diff --git a/scala-package/assembly/linux-x86_64-cpu/pom.xml 
b/scala-package/assembly/linux-x86_64-cpu/pom.xml
index fbc0ab0..05db3a5 100644
--- a/scala-package/assembly/linux-x86_64-cpu/pom.xml
+++ b/scala-package/assembly/linux-x86_64-cpu/pom.xml
@@ -1,4 +1,20 @@
 
+
 http://maven.apache.org/POM/4.0.0";
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance";
  xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 
http://maven.apache.org/xsd/maven-4.0.0.xsd";>
diff --git 
a/scala-package/assembly/linux-x86_64-cpu/src/main/assembly/assembly.xml 
b/scala-package/assembly/linux-x86_64-cpu/src/main/assembly/assembly.xml
index a574f8a..9f28706 100644
--- a/scala-package/assembly/linux-x86_64-cpu/src/main/assembly/assembly.xml
+++ b/scala-package/assembly/linux-x86_64-cpu/src/main/assembly/assembly.xml
@@ -1,3 +1,19 @@
+
 
   full
   
diff --git a/scala-package/assembly/linux-x86_64-gpu/pom.xml 
b/scala-package/assembly/linux-x86_64-gpu/pom.xml
index a1a9480..708b1c4 100644
--- a/scala-package/assembly/linux-x86_64-gpu/pom.xml
+++ b/scala-package/assembly/linux-x86_64-gpu/pom.xml
@@ -1,4 +1,20 @@
 
+
 http://maven.apache.org/POM/4.0.0";
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance";
  xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 
http://maven.apache.org/xsd/maven-4.0.0.xsd";>
diff --git 
a/scala-package/assembly/linux-x86_64-gpu/src/main/assembly/assembly.xml 
b/scala-package/assembly/linux-x86_64-gpu/src/main/assembly/assembly.xml
index 3a064bf..2b65a8c 100644
--- a/scala-package/assembly/linux-x86_64-gpu/src/main/assembly/assembly.xml
+++ b/scala-package/assembly/linux-x86_64-gpu/src/main/assembly/assembly.xml
@@ -1,3 +1,19 @@
+
 
   full
   
diff --git a/scala-package/assembly/osx-x86_64-cpu/main/assembly/assembly.xml 
b/scala-package/assembly/osx-x86_64-cpu/main/assembly/assembly.xml
index fecafec..d0550a3 100644
--- a/scala-package/assembly/osx-x86_64-cpu/main/assembly/assembly.xml
+++ b/scala-package/assembly/osx-x86_64-cpu/main/assembly/assembly.xml
@@ -1,3 +1,19 @@
+
 
   full
   
diff --git a/scala-package/assembly/osx-x86_64-cpu/pom.xml 
b/scala-package/assembly/osx-x86_64-cpu/pom.xml
index bb6af03..2f80dd7 100644
--- a/scala-package/assembly/osx-x86_64-cpu/pom.xml
+++ b/scala-package/assembly/osx-x86_64-cpu/pom.xml
@@ -1,4 +1,20 @@
 
+
 http:/

[GitHub] szha merged pull request #14155: [v1.4.x] add license to pom files

2019-02-13 Thread GitBox
szha merged pull request #14155: [v1.4.x] add license to pom files
URL: https://github.com/apache/incubator-mxnet/pull/14155
 
 
   




[GitHub] szha commented on issue #14155: [v1.4.x] add license to pom files

2019-02-13 Thread GitBox
szha commented on issue #14155: [v1.4.x] add license to pom files
URL: https://github.com/apache/incubator-mxnet/pull/14155#issuecomment-463474653
 
 
   Errors:
   
http://jenkins.mxnet-ci.amazon-ml.com/blue/organizations/jenkins/mxnet-validation%2Fcentos-gpu/detail/PR-14155/1/pipeline#step-97-log-1537
 (test_stn)
   
http://jenkins.mxnet-ci.amazon-ml.com/blue/organizations/jenkins/incubator-mxnet/detail/PR-14155/1/pipeline/1112#step-1838-log-1537
 (test_stn)
   
http://jenkins.mxnet-ci.amazon-ml.com/blue/organizations/jenkins/mxnet-validation%2Fwebsite/detail/PR-14155/1/pipeline#step-114-log-1420
 (clojure doc, also happened in another PR)
   
http://jenkins.mxnet-ci.amazon-ml.com/blue/organizations/jenkins/incubator-mxnet/detail/PR-14155/1/pipeline/1121#step-1815-log-2420
 (R covr)
   
   No issue found for scala.




[GitHub] szha commented on a change in pull request #13749: Add NHWC layout support to Pooling (cpu, gpu cuda, gpu cuDNN)

2019-02-13 Thread GitBox
szha commented on a change in pull request #13749: Add NHWC layout support to 
Pooling (cpu, gpu cuda, gpu cuDNN)
URL: https://github.com/apache/incubator-mxnet/pull/13749#discussion_r256679681
 
 

 ##
 File path: python/mxnet/gluon/nn/conv_layers.py
 ##
 @@ -716,7 +718,8 @@ class MaxPool1D(_Pooling):
 If padding is non-zero, then the input is implicitly
 zero-padded on both sides for padding number of points.
 layout : str, default 'NCW'
-Dimension ordering of data and weight. Only supports 'NCW' layout for now.
+Dimension ordering of data and weight. Only supports 'NCW' and 'NWC'
+(only with cuDNN) layouts for now.
 
 Review comment:
   It's a bit ambiguous whether it's the class that requires cuDNN, or only the second layout.
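For what it's worth, the two layouts differ only in axis ordering (the cuDNN requirement in the PR applies to the 'NWC' layout). A NumPy sketch of the distinction:

```python
import numpy as np

# NCW: (batch, channel, width); NWC: (batch, width, channel)
x_ncw = np.zeros((2, 3, 8))        # batch=2, channels=3, width=8
x_nwc = x_ncw.transpose(0, 2, 1)   # same data, NWC axis ordering
assert x_nwc.shape == (2, 8, 3)
```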




[incubator-mxnet] branch v1.4.x updated: [v1.4.x] Update MKL-DNN to fix the OSX build issue (#14141)

2019-02-13 Thread zhasheng
This is an automated email from the ASF dual-hosted git repository.

zhasheng pushed a commit to branch v1.4.x
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/v1.4.x by this push:
 new 766ca04  [v1.4.x] Update MKL-DNN to fix the OSX build issue (#14141)
766ca04 is described below

commit 766ca044869be5009047fd16712c57aa1260b409
Author: Tao Lv 
AuthorDate: Thu Feb 14 11:01:58 2019 +0800

[v1.4.x] Update MKL-DNN to fix the OSX build issue (#14141)

* update mkldnn to 0.17.x

* update mkldnn to 0.17.4

* empty commit
---
 3rdparty/mkldnn | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/3rdparty/mkldnn b/3rdparty/mkldnn
index a7c5f53..722901c 160000
--- a/3rdparty/mkldnn
+++ b/3rdparty/mkldnn
@@ -1 +1 @@
-Subproject commit a7c5f53832acabade6e5086e72c960adedb3c38a
+Subproject commit 722901c9aaefa579698df778d061d4848ab8c3e3



[GitHub] szha merged pull request #14141: [v1.4.x] Update MKL-DNN to fix the OSX build issue

2019-02-13 Thread GitBox
szha merged pull request #14141: [v1.4.x] Update MKL-DNN to fix the OSX build 
issue
URL: https://github.com/apache/incubator-mxnet/pull/14141
 
 
   




[GitHub] szha commented on issue #14141: [v1.4.x] Update MKL-DNN to fix the OSX build issue

2019-02-13 Thread GitBox
szha commented on issue #14141: [v1.4.x] Update MKL-DNN to fix the OSX build 
issue
URL: https://github.com/apache/incubator-mxnet/pull/14141#issuecomment-463468179
 
 
   None looks related to mkldnn.




[GitHub] szha commented on issue #14141: [v1.4.x] Update MKL-DNN to fix the OSX build issue

2019-02-13 Thread GitBox
szha commented on issue #14141: [v1.4.x] Update MKL-DNN to fix the OSX build 
issue
URL: https://github.com/apache/incubator-mxnet/pull/14141#issuecomment-463468100
 
 
   Errors:
   
http://jenkins.mxnet-ci.amazon-ml.com/blue/organizations/jenkins/incubator-mxnet/detail/PR-14141/5/pipeline#step-1982-log-1537
 (test_stn)
   
http://jenkins.mxnet-ci.amazon-ml.com/blue/organizations/jenkins/mxnet-validation%2Fcentos-gpu/detail/PR-14141/4/pipeline/#step-97-log-1536
 (test_stn)
   
http://jenkins.mxnet-ci.amazon-ml.com/blue/organizations/jenkins/incubator-mxnet/detail/PR-14141/5/pipeline/1121#step-1540-log-2421
 (R package coverage, seems transient)
   
http://jenkins.mxnet-ci.amazon-ml.com/blue/organizations/jenkins/mxnet-validation%2Fwebsite/detail/PR-14141/4/pipeline/#step-118-log-1420
 (clojure, @gigasquid does it look like the problem you've fixed on master?)




[GitHub] TaoLv commented on issue #13574: Replaced rand_r with std:: random generation

2019-02-13 Thread GitBox
TaoLv commented on issue #13574: Replaced rand_r with std:: random generation
URL: https://github.com/apache/incubator-mxnet/pull/13574#issuecomment-463455399
 
 
   Looks like the change is extracted from #11148. I reviewed the code before; the related comment and response are here: 
https://github.com/apache/incubator-mxnet/pull/11148/files#r210174235




[GitHub] szha commented on a change in pull request #14130: Refine runtime feature discovery python API and add documentation to …

2019-02-13 Thread GitBox
szha commented on a change in pull request #14130: Refine runtime feature 
discovery python API and add documentation to …
URL: https://github.com/apache/incubator-mxnet/pull/14130#discussion_r256663427
 
 

 ##
 File path: docs/api/python/libinfo/libinfo.md
 ##
 @@ -0,0 +1,51 @@
+# Run-Time Feature detection / Library info
+
+```eval_rst
+.. currentmodule:: mxnet.runtime
+```
+
+## Overview
+
+The libinfo functionality allows to check for compile-time features supported 
by the library.
+
+### Example usage
+
+```
+In [1]: import mxnet as mx
+   ...: import mxnet.runtime
+   ...: fs = mx.runtime.Features()
+
+In [2]: fs
+Out[2]: [✖ CUDA, ✖ CUDNN, ✖ NCCL, ✖ CUDA_RTC, ✖ TENSORRT, ✔ CPU_SSE, ✔ 
CPU_SSE2, ✔ CPU_SSE3, ✔ CPU_SSE4_1, ✔ CPU_SSE4_2, ✖ CPU_SSE4A, ✔ CPU_AVX, ✖ 
CPU_AVX2, ✖ OPENMP, ✖ SSE, ✔ F16C, ✖ JEMALLOC, ✔ BLAS_OPEN, ✖ BLAS_ATLAS, ✖ 
BLAS_MKL, ✖ BLAS_APPLE, ✔ LAPACK, ✖ MKLDNN, ✔ OPENCV, ✖ CAFFE, ✖ PROFILER, ✖ 
DIST_KVSTORE, ✖ CXX14, ✔ SIGNAL_HANDLER, ✔ DEBUG]
+
+In [3]: fs['CUDA'].enabled
+Out[3]: False
+
+In [4]: fs.is_enabled('CPU_SSE')
+Out[4]: True
+
+In [5]: fs.is_enabled('CUDA')
+Out[5]: False
+
+In [6]:
+```
+
+
+```eval_rst
+.. autosummary::
+:nosignatures:
+
+LibFeature
 
 Review comment:
   outdated.




[GitHub] szha commented on issue #14141: [v1.4.x] Update MKL-DNN to fix the OSX build issue

2019-02-13 Thread GitBox
szha commented on issue #14141: [v1.4.x] Update MKL-DNN to fix the OSX build 
issue
URL: https://github.com/apache/incubator-mxnet/pull/14141#issuecomment-463450799
 
 
   @TaoLv I cherry-picked the fix. I will merge this PR if there's no other 
issue.




[incubator-mxnet] branch v1.4.x updated: fix test_stn (#14063)

2019-02-13 Thread zhasheng
This is an automated email from the ASF dual-hosted git repository.

zhasheng pushed a commit to branch v1.4.x
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/v1.4.x by this push:
 new 9ba5119  fix test_stn (#14063)
9ba5119 is described below

commit 9ba51194abb569d69d01a5ccb4be7576e886c2b9
Author: Sheng Zha 
AuthorDate: Sun Feb 3 19:39:49 2019 -0800

fix test_stn (#14063)
---
 tests/python/unittest/test_operator.py | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/tests/python/unittest/test_operator.py 
b/tests/python/unittest/test_operator.py
index 0915739..a20f267 100644
--- a/tests/python/unittest/test_operator.py
+++ b/tests/python/unittest/test_operator.py
@@ -2529,7 +2529,8 @@ def test_flip():
 
 @with_seed()
 def test_stn():
-np.set_printoptions(threshold=np.nan)
+import sys
+np.set_printoptions(threshold=sys.maxsize)
 num_filter = 2  # conv of loc net
 kernel = (3, 3)  # conv of loc net
 num_hidden = 6  # fc of loc net



[GitHub] lanking520 opened a new pull request #14155: add license

2019-02-13 Thread GitBox
lanking520 opened a new pull request #14155: add license
URL: https://github.com/apache/incubator-mxnet/pull/14155
 
 
   ## Description ##
   Add License to XML
   @szha @zachgk @aaronmarkham @piyushghai 
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [ ] The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to 
the relevant [JIRA issue](https://issues.apache.org/jira/projects/MXNET/issues) 
created (except PRs with tiny changes)
   - [ ] Changes are complete (i.e. I finished coding on this PR)
   - [ ] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [ ] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, the source of the dataset, expected performance on the test set, and a reference to the original paper if applicable
   - Check the API doc at 
http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [ ] To the best of my knowledge, examples are either not affected by this change, or have been fixed to be compatible with this change




[GitHub] drivanov opened a new pull request #14154: Improving the run time of the tests which use assert_almost_equal OR check_numeric_gradient functions

2019-02-13 Thread GitBox
drivanov opened a new pull request #14154: Improving the run time of the tests 
which use assert_almost_equal OR check_numeric_gradient functions
URL: https://github.com/apache/incubator-mxnet/pull/14154
 
 
   ## Description ##
   - The analog of `numpy.allclose(a, b, rtol=1e-05, atol=1e-08, equal_nan=False)` is implemented as an MXNet operator:
   ```
   mx.nd.contrib.allclose(a, b, rtol, atol, equal_nan)
   ```
   - For now, besides the unit test `test_allclose_function()`, this method is 
used only in
   ```
   def assert_almost_equal(a, b, rtol=None, atol=None, names=('a', 'b'), 
equal_nan=False)
   ```
   where parameters **a** and/or **b** can also be given as `mx.nd.array`(s):
   ```
   Parameters
   --
   a : np.ndarray or mx.nd.array
   b : np.ndarray or mx.nd.array
   ```
   - When calling `assert_almost_equal`, no explicit `asnumpy()` conversion is needed anymore. It will be done automatically (in `assert_almost_equal`) if 
(a) **a** or **b** has no attribute "context", OR 
(b) these attributes are different.
   
   - Eliminating the `asnumpy()` conversions and using `mx.nd.contrib.allclose(...)` for `mx.nd.array`s achieves a 5-7x speedup for GPU tests that use long arrays.
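   For reference, `numpy.allclose` applies the element-wise tolerance rule `|a - b| <= atol + rtol * |b|`; assuming the new operator mirrors NumPy's semantics, a minimal pure-Python sketch of that check looks like:

```python
def allclose(a, b, rtol=1e-05, atol=1e-08):
    """Element-wise |a - b| <= atol + rtol * |b|, mirroring numpy.allclose."""
    return all(abs(x - y) <= atol + rtol * abs(y) for x, y in zip(a, b))

print(allclose([1.0, 2.0], [1.0, 2.0 + 1e-9]))  # True: within tolerance
print(allclose([1.0, 2.0], [1.0, 2.1]))         # False: 0.1 exceeds tolerance
```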
   
   - The MXNet operator `mx.nd.contrib.approx_gradient(...)` was implemented for calculating **one** OR **all** coordinates of the numerical approximation of the gradient vector. This operator is called in `numeric_grad(...)`, which is called by the widely used `check_numeric_gradient(...)`

   - A new parameter `use_batch` was added to
   ```
   def check_numeric_gradient(... use_batch=False):
   ```
   It allows calculating the approximation of the gradient in a _batch mode_. 
   - Unfortunately, our current implementation of _batch mode_ **DOES NOT** support all operations. For instance, it is impossible to use it   
   (a) for operations similar to matrix multiplications;  
   (b) when the shape of the output of the operation is NOT the same as the shapes of its input parameters;   
   (c) in some other cases;   
   
   Currently we are using the _batch mode_ **only** when we call
   ```
   def test_elemwise_binary_ops():
   . . .
   if skip_gradient_check is not True:
   check_numeric_gradient(test, location,
  grad_stype_dict=grad_stypes, 
use_batch=True)
   ```
   - For _elemwise_ operations the _batch mode_ used in 
`check_numeric_gradient(...)` gives significant (3.5-5x on CPU and 6-10x on 
GPU) speedup. 
   
   - Even when we cannot use _batch mode_, we still see the run-time improvement associated with the use of the new MXNet operators:
   ```
mx.nd.contrib.approx_gradient(...)
mx.nd.contrib.allclose(...)
   ```
   There are two causes of this improvement:
   (a) the number of `asnumpy()` conversions used inside `check_numeric_gradient(...)` is now significantly lower than before;
   (b) more calculations are made on GPUs.  
   
   - In general, we see 1.58x improvement in runtime for the test suite 
`L0_self_test`:
   ```
   Now:  Ran 520 tests in  994.935s
   Before:   Ran 520 tests in 1574.341s 
   ```
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [x] Changes are complete (i.e. I finished coding on this PR)
   - [x] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - [x] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - [x] To the best of my knowledge, examples are either not affected by this change, or have been fixed to be compatible with this change
   
   ### Changes ###
   
   ## Comments ##
   Perhaps it makes sense:
   - to make it possible to use (OR just start to use) _batch mode_ for other operations;
   - to eliminate the `asnumpy()` conversions used when the `assert_almost_equal` function is called.
   
   Unfortunately, it's not an easy task, because `assert_almost_equal` is used in many different places. But it should make the Python code cleaner, and some tests will run much faster, especially those where both parameters **a** and **b** are `mx.nd.array`s
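   As background for the gradient checks discussed above, the standard central-difference approximation is `grad_i ≈ (f(x + eps·e_i) - f(x - eps·e_i)) / (2·eps)`; the _batch mode_ amounts to perturbing all coordinates in one pass instead of looping over them. A minimal per-coordinate sketch (illustrative only, not the MXNet implementation):

```python
def numeric_grad(f, x, eps=1e-5):
    """Central-difference gradient of scalar f at point x (a list of floats)."""
    grad = []
    for i in range(len(x)):
        xp = list(x); xp[i] += eps  # perturb coordinate i up
        xm = list(x); xm[i] -= eps  # perturb coordinate i down
        grad.append((f(xp) - f(xm)) / (2 * eps))
    return grad

# gradient of f(x) = x0^2 + 3*x1 at (2, 5) is (4, 3)
g = numeric_grad(lambda v: v[0] ** 2 + 3 * v[1], [2.0, 5.0])
print([round(c, 4) for c in g])  # [4.0, 3.0]
```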
   




[GitHub] TaoLv commented on issue #14141: [v1.4.x] Update MKL-DNN to fix the OSX build issue

2019-02-13 Thread GitBox
TaoLv commented on issue #14141: [v1.4.x] Update MKL-DNN to fix the OSX build 
issue
URL: https://github.com/apache/incubator-mxnet/pull/14141#issuecomment-463446166
 
 
   Thank you for your attention @lanking520, @piyushghai. The test_stn issue was fixed on the master branch several days ago: 
https://github.com/apache/incubator-mxnet/pull/14063. But the fix has not been ported 
to the v1.4.x branch.




[GitHub] ptrendx opened a new pull request #14153: Fix shape inference pass

2019-02-13 Thread GitBox
ptrendx opened a new pull request #14153: Fix shape inference pass
URL: https://github.com/apache/incubator-mxnet/pull/14153
 
 
   ## Description ##
   
   This PR fixes shape inference, which currently produces wrong results in some cases.
   
   An example problematic case (as it works in current version of MXNet, before 
applying this change):
   If I do
   ```
   import mxnet as mx
   
   data = mx.sym.Variable('data', shape=(1,0,512,512))
   weight = mx.sym.Variable('weight')
   cdata = mx.sym.cast(data, dtype='float16')
   cweight = mx.sym.cast(weight, dtype='float16')
   test = mx.sym.Convolution(data=cdata, weight=cweight,layout='NCHW', pad=(3, 
3), num_filter=64, stride=(2, 2), no_bias=True, dilate=(1, 1), kernel=(7, 7), 
num_group=1)
   
   print(test.infer_shape_partial())
   ```
   I get expected result:
   ```
   ([(1, 0, 512, 512), (64, 0, 7, 7)], [(1, 64, 256, 256)], [])
   ```
   but when I change H and W dimensions in the shape to 0s,
   ```
   import mxnet as mx
   
   data = mx.sym.Variable('data', shape=(1,0,0,0))
   weight = mx.sym.Variable('weight')
   cdata = mx.sym.cast(data, dtype='float16')
   cweight = mx.sym.cast(weight, dtype='float16')
   test = mx.sym.Convolution(data=cdata, weight=cweight,layout='NCHW', pad=(3, 
3), num_filter=64, stride=(2, 2), no_bias=True, dilate=(1, 1), kernel=(7, 7), 
num_group=1)
   
   print(test.infer_shape_partial())
   ```
   I get
   ```
   ([(1, 0, 0, 0), ()], [(1, 64, 0, 0)], [])
   ```
   so the shape of the weight changed to `()`.
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [x] Changes are complete (i.e. I finished coding on this PR)
   - [ ] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - [ ] Code is well-documented: 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - [x] To the best of my knowledge, examples are either not affected by this change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [x] Change to the way passes are done so that both forward and backward 
inference is performed every time - I'm not sure if this is necessary - 
@eric-haibin-lin, thoughts? 
   - [x] Change to the way shape inference works. Currently a shape contributes 1 to `num_unknown` if it has at least 1 zero. After the change, the number of 0 elements is added to `num_unknown` - that way the shape inference pass does not end prematurely if only some of the elements of a shape were deduced.
   
   ## Comments ##
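   The second change above can be illustrated with a toy counter: before, a shape contributed at most 1 to `num_unknown` no matter how many of its dimensions were still 0, so a pass that resolved only some dimensions looked like no progress. Counting individual zero dimensions makes partial progress visible. A hypothetical Python sketch (not the actual C++ implementation):

```python
def num_unknown_old(shapes):
    # old heuristic: 1 per shape containing any unknown (0) dimension
    return sum(any(d == 0 for d in s) for s in shapes)

def num_unknown_new(shapes):
    # new heuristic: 1 per unknown (0) dimension
    return sum(d == 0 for s in shapes for d in s)

before = [(1, 0, 0, 0), (64, 0, 7, 7)]
after  = [(1, 0, 512, 512), (64, 0, 7, 7)]  # H, W deduced; channels still unknown
print(num_unknown_old(before), num_unknown_old(after))  # 2 2 (no visible progress)
print(num_unknown_new(before), num_unknown_new(after))  # 4 2 (progress visible)
```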
   




[GitHub] leleamol commented on issue #14149: [MXNET-1323] CPP GPU test running too long

2019-02-13 Thread GitBox
leleamol commented on issue #14149: [MXNET-1323] CPP GPU test running too long
URL: https://github.com/apache/incubator-mxnet/pull/14149#issuecomment-463443282
 
 
   #13924 




[GitHub] songziqin opened a new issue #14152: data layer grad problem

2019-02-13 Thread GitBox
songziqin opened a new issue #14152: data layer grad problem
URL: https://github.com/apache/incubator-mxnet/issues/14152
 
 
   The C++ code is as follows:
args_["data"] = NDArray(Shape(params.batch_size, params.dim), ctx);
network_.InferArgsMap(ctx, &args_, args_);
exec_.reset(network_.SimpleBind(ctx, args_));
   
   I find that exec_.get()->grad_dict() returns an all-zero gradient for the data layer,
   and I want to know whether, with the code above, I can get the actual
   data layer gradient. Thanks.




[GitHub] songziqin closed issue #14129: NDArray C++ construct problem

2019-02-13 Thread GitBox
songziqin closed issue #14129: NDArray C++  construct problem 
URL: https://github.com/apache/incubator-mxnet/issues/14129
 
 
   




[GitHub] ZhennanQin commented on issue #14150: Fix entropy for uint8

2019-02-13 Thread GitBox
ZhennanQin commented on issue #14150: Fix entropy for uint8
URL: https://github.com/apache/incubator-mxnet/pull/14150#issuecomment-463438268
 
 
   This is a point fix for uint8 and won't affect GPU accuracy, so a test isn't needed.




[GitHub] gigasquid opened a new pull request #14151: [Clojure] Disable flaky integration test

2019-02-13 Thread GitBox
gigasquid opened a new pull request #14151: [Clojure] Disable flaky integration 
test
URL: https://github.com/apache/incubator-mxnet/pull/14151
 
 
   ## Description ##
   Disable flaky Clojure integration test until we can figure out the right fix.
   
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [X] Changes are complete (i.e. I finished coding on this PR)
   - [X] To the best of my knowledge, examples are either not affected by this change, or have been fixed to be compatible with this change
   
   See issue for details: https://github.com/apache/incubator-mxnet/issues/14069
   




[GitHub] marcoabreu commented on issue #14127: Fixes libjpeg-turbo dependency under Ubuntu 16.04

2019-02-13 Thread GitBox
marcoabreu commented on issue #14127: Fixes libjpeg-turbo dependency under 
Ubuntu 16.04
URL: https://github.com/apache/incubator-mxnet/pull/14127#issuecomment-463436898
 
 
   Should we maybe only turn it on for a few builds to increase the diversity?




[GitHub] ZhennanQin opened a new pull request #14150: Fix entropy for uint8

2019-02-13 Thread GitBox
ZhennanQin opened a new pull request #14150: Fix entropy for uint8
URL: https://github.com/apache/incubator-mxnet/pull/14150
 
 
   ## Description ##
   Discussed in https://github.com/apache/incubator-mxnet/pull/13697.
   
   @reminisce @zheng-da  @pengzhao-intel @TaoLv 
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [ ] The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to 
the relevant [JIRA issue](https://issues.apache.org/jira/projects/MXNET/issues) 
created (except PRs with tiny changes)
   - [ ] Changes are complete (i.e. I finished coding on this PR)
   - [ ] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [ ] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, the source of the dataset, expected performance on the test set, and a reference to the original paper if applicable
   - Check the API doc at 
http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [ ] To the best of my knowledge, examples are either not affected by this change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [ ] Feature1, tests, (and when applicable, API doc)
   - [ ] Feature2, tests, (and when applicable, API doc)
   
   ## Comments ##
   - If this change is a backward incompatible change, why must this change be 
made.
   - Interesting edge cases to note here
   




[GitHub] lanking520 commented on issue #14141: [v1.4.x] Update MKL-DNN to fix the OSX build issue

2019-02-13 Thread GitBox
lanking520 commented on issue #14141: [v1.4.x] Update MKL-DNN to fix the OSX 
build issue
URL: https://github.com/apache/incubator-mxnet/pull/14141#issuecomment-463433953
 
 
   Retriggering the test again; not sure if there is an issue with CentOS, but I did see multiple failures in the CentOS testing.




[GitHub] piyushghai commented on issue #14141: [v1.4.x] Update MKL-DNN to fix the OSX build issue

2019-02-13 Thread GitBox
piyushghai commented on issue #14141: [v1.4.x] Update MKL-DNN to fix the OSX 
build issue
URL: https://github.com/apache/incubator-mxnet/pull/14141#issuecomment-463433835
 
 
   @TaoLv 
   A unit test seems to be failing on this PR : 
   
   ```test_operator_gpu.test_stn```
   ``` ValueError: threshold must be numeric and non-NAN, try sys.maxsize for 
untruncated representation ```
   
   Not sure if this is related to your changes in this PR. 
   




[GitHub] larroy commented on issue #14130: Refine runtime feature discovery python API and add documentation to …

2019-02-13 Thread GitBox
larroy commented on issue #14130: Refine runtime feature discovery python API 
and add documentation to …
URL: https://github.com/apache/incubator-mxnet/pull/14130#issuecomment-463433302
 
 
   I made the suggested changes; please have another look and merge if you are good with it. Thanks.




[GitHub] larroy commented on a change in pull request #14130: Refine runtime feature discovery python API and add documentation to …

2019-02-13 Thread GitBox
larroy commented on a change in pull request #14130: Refine runtime feature 
discovery python API and add documentation to …
URL: https://github.com/apache/incubator-mxnet/pull/14130#discussion_r256647405
 
 

 ##
 File path: tests/python/unittest/test_runtime.py
 ##
 @@ -29,6 +29,17 @@ def test_libinfo_features():
 ok_(type(features) is list)
 ok_(len(features) > 0)
 
+def test_is_enabled():
 
 Review comment:
   Not sure I get what the intention is.




[GitHub] larroy commented on a change in pull request #14130: Refine runtime feature discovery python API and add documentation to …

2019-02-13 Thread GitBox
larroy commented on a change in pull request #14130: Refine runtime feature 
discovery python API and add documentation to …
URL: https://github.com/apache/incubator-mxnet/pull/14130#discussion_r256647257
 
 

 ##
 File path: python/mxnet/runtime.py
 ##
 @@ -28,21 +29,48 @@ class LibFeature(ctypes.Structure):
 Compile time feature description
 """
 _fields_ = [
-("name", ctypes.c_char_p),
+("_name", ctypes.c_char_p),
 ("index", ctypes.c_uint32),
 ("enabled", ctypes.c_bool)
 ]
 
+@property
+def name(self):
+return self._name.decode()
+
+def __repr__(self):
+if self.enabled:
+return "✔ {}".format(self.name)
+else:
+return "✖ {}".format(self.name)
+
 def libinfo_features():
 """
 Check the library for compile-time features. The list of features are 
maintained in libinfo.h and libinfo.cc
 
 Returns
 ---
-A list of class LibFeature indicating which features are available and 
enabled
+:return: list of class LibFeature indicating which features are available 
and enabled
 """
 lib_features = ctypes.POINTER(LibFeature)()
 lib_features_size = ctypes.c_size_t()
 check_call(_LIB.MXLibInfoFeatures(ctypes.byref(lib_features), 
ctypes.byref(lib_features_size)))
 feature_list = [lib_features[i] for i in range(lib_features_size.value)]
 return feature_list
+
+def is_enabled(tocheck):
+"""
+Check for a particular feature by name
+
+Parameters
+--
+:param x: str The name of a valid feature as string for example 'CUDA'
+
+Returns
+---
+:return: bool True if it's enabled, False if it's disabled, RuntimeError 
if the feature is not known
+"""
+feature_dict = {f.name: f.enabled for f in libinfo_features()}
+if tocheck not in feature_dict:
+raise RuntimeError("Feature '{}' is unknown, known features are: 
{}".format(tocheck, list(feature_dict.keys())))
+return feature_dict[tocheck]
 
 Review comment:
   @apeforest I just removed it.




[GitHub] szha merged pull request #14148: fix website build

2019-02-13 Thread GitBox
szha merged pull request #14148: fix website build
URL: https://github.com/apache/incubator-mxnet/pull/14148
 
 
   




[GitHub] sandeep-krishnamurthy commented on issue #14139: Performance improvement in Normalize GPU Kernel

2019-02-13 Thread GitBox
sandeep-krishnamurthy commented on issue #14139: Performance improvement in 
Normalize GPU Kernel
URL: https://github.com/apache/incubator-mxnet/pull/14139#issuecomment-463429984
 
 
   > Basically lgtm, please make minor revision and once CI passes, we can merge
   
   Done. Will create the ToTensor refactoring PR in a day or two. Thanks again for your time and fast turnaround on all PR reviews.




[GitHub] zhreshold commented on issue #14139: Performance improvement in Normalize GPU Kernel

2019-02-13 Thread GitBox
zhreshold commented on issue #14139: Performance improvement in Normalize GPU 
Kernel
URL: https://github.com/apache/incubator-mxnet/pull/14139#issuecomment-463428442
 
 
   Basically lgtm, please make minor revision and once CI passes, we can merge




[GitHub] zhreshold commented on a change in pull request #14139: Performance improvement in Normalize GPU Kernel

2019-02-13 Thread GitBox
zhreshold commented on a change in pull request #14139: Performance improvement 
in Normalize GPU Kernel
URL: https://github.com/apache/incubator-mxnet/pull/14139#discussion_r256643424
 
 

 ##
 File path: src/operator/image/image_random.cu
 ##
 @@ -99,18 +111,134 @@ void ToTensorImplCUDA(mshadow::Stream *s,
 W = input.size(2);
 C = input.size(3);
 blocks = N > 0 ? N : 1;
-blocks = N;
 }
-// One block per image.
-// Number of threads = (32, 32) is optimal, because,
-// computation is minimal and overhead of CUDA preparing
-// all threads is minimal.
+
 ToTensorCudaKernel
-<<>>(input, output,
+<<>>(input, output,
 
 Review comment:
   back to (32, 32), we can address it later. 




[GitHub] sandeep-krishnamurthy commented on a change in pull request #14139: Performance improvement in Normalize GPU Kernel

2019-02-13 Thread GitBox
sandeep-krishnamurthy commented on a change in pull request #14139: Performance 
improvement in Normalize GPU Kernel
URL: https://github.com/apache/incubator-mxnet/pull/14139#discussion_r256643087
 
 

 ##
 File path: src/operator/image/image_random.cu
 ##
 @@ -99,18 +111,134 @@ void ToTensorImplCUDA(mshadow::Stream *s,
 W = input.size(2);
 C = input.size(3);
 blocks = N > 0 ? N : 1;
-blocks = N;
 }
-// One block per image.
-// Number of threads = (32, 32) is optimal, because,
-// computation is minimal and overhead of CUDA preparing
-// all threads is minimal.
+
 ToTensorCudaKernel
-<<>>(input, output,
+<<>>(input, output,
 
 Review comment:
   Sure, that is already work in progress. Should this PR wait till then, or should I 
undo dim3(H, cuda::kMaxThreadsPerBlock / H) back to dim3(32, 32)? Let me know the next 
steps for this PR. Thanks!
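For readers following along, the trade-off between the two launch configurations under discussion can be sketched numerically. This is an illustrative Python sketch, not MXNet code; `K_MAX_THREADS_PER_BLOCK = 1024` is the usual CUDA per-block limit and the `H` values are made up:

```python
# Compare the two thread-block shapes discussed in this review thread.
K_MAX_THREADS_PER_BLOCK = 1024  # typical CUDA limit; hardware-dependent

def block_dims_fixed():
    # dim3(32, 32): always 32 * 32 = 1024 threads, regardless of image size.
    return (32, 32)

def block_dims_by_height(H):
    # dim3(H, kMaxThreadsPerBlock / H): one thread column per image row,
    # but the total thread count can drop below the limit for awkward H.
    return (H, K_MAX_THREADS_PER_BLOCK // H)

for H in (8, 32, 224):
    x, y = block_dims_by_height(H)
    print("H=%d -> dim3(%d, %d), %d threads" % (H, x, y, x * y))
```

The fixed (32, 32) shape keeps the block fully populated for any image, which is why the review settles on reverting to it for now.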




[GitHub] szha commented on issue #14149: [MXNET-1323] CPP GPU test running too long

2019-02-13 Thread GitBox
szha commented on issue #14149: [MXNET-1323] CPP GPU test running too long
URL: https://github.com/apache/incubator-mxnet/pull/14149#issuecomment-463425938
 
 
   @lanking520 master is broken right now




[GitHub] lanking520 commented on issue #14149: [MXNET-1323] CPP GPU test running too long

2019-02-13 Thread GitBox
lanking520 commented on issue #14149: [MXNET-1323] CPP GPU test running too long
URL: https://github.com/apache/incubator-mxnet/pull/14149#issuecomment-463425183
 
 
   Could you please rebase with master? It seems there is a RAT license problem.
   




[GitHub] zhreshold commented on a change in pull request #14139: Performance improvement in Normalize GPU Kernel

2019-02-13 Thread GitBox
zhreshold commented on a change in pull request #14139: Performance improvement 
in Normalize GPU Kernel
URL: https://github.com/apache/incubator-mxnet/pull/14139#discussion_r256639710
 
 

 ##
 File path: src/operator/image/image_random.cu
 ##
 @@ -99,18 +111,134 @@ void ToTensorImplCUDA(mshadow::Stream *s,
 W = input.size(2);
 C = input.size(3);
 blocks = N > 0 ? N : 1;
-blocks = N;
 }
-// One block per image.
-// Number of threads = (32, 32) is optimal, because,
-// computation is minimal and overhead of CUDA preparing
-// all threads is minimal.
+
 ToTensorCudaKernel
-<<>>(input, output,
+<<>>(input, output,
 
 Review comment:
   please fix ToTensor similarly in a separate PR.




[GitHub] lanking520 commented on issue #14149: [MXNET-1323] CPP GPU test running too long

2019-02-13 Thread GitBox
lanking520 commented on issue #14149: [MXNET-1323] CPP GPU test running too long
URL: https://github.com/apache/incubator-mxnet/pull/14149#issuecomment-463423440
 
 
   Awesome, could you also add a link to the issue? 




[GitHub] gigasquid commented on issue #14069: [Test Failure] Clojure: CPU Integration cnn-text-classification.classifier-test/classifier-with-embeddings-test

2019-02-13 Thread GitBox
gigasquid commented on issue #14069: [Test Failure] Clojure: CPU Integration 
cnn-text-classification.classifier-test/classifier-with-embeddings-test
URL: 
https://github.com/apache/incubator-mxnet/issues/14069#issuecomment-463423273
 
 
   We had another report of this - so I'm going to disable it until we can 
figure out a good way to prevent the problem.
   
   Possible solutions:
   - check in a small fake "glove" test file that only has a couple of lines.
   - put glove somewhere out on S3 for more stability and fetch that from CI 
instead.
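The first option can be sketched as follows. This is a hypothetical script, not the repo's actual test fixture; the filename, words, and vector dimension are all assumptions:

```python
# Write a tiny fake GloVe-format file: one "word v1 v2 ... vn" entry per line,
# so the classifier test can load embeddings without downloading the real file.
fake_vectors = {
    "the": [0.1, 0.2, 0.3],
    "cat": [0.4, 0.5, 0.6],
}

with open("glove.fake.3d.txt", "w") as f:
    for word, vec in fake_vectors.items():
        f.write(word + " " + " ".join(str(v) for v in vec) + "\n")

# Reading it back follows the same line format as the real GloVe files.
with open("glove.fake.3d.txt") as f:
    lines = f.read().splitlines()
print(len(lines))
```

A couple of lines like this are enough to exercise the embedding-loading code path deterministically in CI.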
   




[incubator-mxnet] branch master updated: upgrade codox to work with lein 2.9.0 (#14133)

2019-02-13 Thread cmeier
This is an automated email from the ASF dual-hosted git repository.

cmeier pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new 85d3fa3  upgrade codox to work with lein 2.9.0 (#14133)
85d3fa3 is described below

commit 85d3fa34901c8c31815aa59ae5e125e3c6feea9b
Author: Carin Meier 
AuthorDate: Wed Feb 13 18:44:55 2019 -0500

upgrade codox to work with lein 2.9.0 (#14133)
---
 contrib/clojure-package/project.clj | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/contrib/clojure-package/project.clj 
b/contrib/clojure-package/project.clj
index 61d39e2..e2b999d 100644
--- a/contrib/clojure-package/project.clj
+++ b/contrib/clojure-package/project.clj
@@ -36,7 +36,7 @@
  [org.apache.logging.log4j/log4j-api "2.8.1"]
  [org.slf4j/slf4j-log4j12 "1.7.25" :exclusions 
[org.slf4j/slf4j-api]]]
   :pedantic? :skip
-  :plugins [[lein-codox "0.10.3" :exclusions [org.clojure/clojure]]
+  :plugins [[lein-codox "0.10.6" :exclusions [org.clojure/clojure]]
 [lein-cloverage "1.0.10" :exclusions [org.clojure/clojure]]
 [lein-cljfmt "0.5.7"]]
   :codox {:namespaces [#"^org\.apache\.clojure-mxnet\.(?!gen).*"]}



[GitHub] gigasquid merged pull request #14133: [Clojure] upgrade codox to work with lein 2.9.0

2019-02-13 Thread GitBox
gigasquid merged pull request #14133: [Clojure] upgrade codox to work with lein 
2.9.0
URL: https://github.com/apache/incubator-mxnet/pull/14133
 
 
   




[GitHub] sandeep-krishnamurthy commented on a change in pull request #14139: Performance improvement in Normalize GPU Kernel

2019-02-13 Thread GitBox
sandeep-krishnamurthy commented on a change in pull request #14139: Performance 
improvement in Normalize GPU Kernel
URL: https://github.com/apache/incubator-mxnet/pull/14139#discussion_r256637657
 
 

 ##
 File path: src/operator/image/image_random.cu
 ##
 @@ -99,18 +111,242 @@ void ToTensorImplCUDA(mshadow::Stream *s,
 W = input.size(2);
 C = input.size(3);
 blocks = N > 0 ? N : 1;
-blocks = N;
 }
-// One block per image.
-// Number of threads = (32, 32) is optimal, because,
-// computation is minimal and overhead of CUDA preparing
-// all threads is minimal.
+
 ToTensorCudaKernel
-<<>>(input, output,
+<<>>(input, output,
 req, N, H, W, C, normalize_factor);
 MSHADOW_CUDA_POST_KERNEL_CHECK(ToTensorCudaKernel);
 }
 
+// Normalize Kernel for 3D input
+template
+__global__ void
+__launch_bounds__(cuda::kMaxThreadsPerBlock, 1)
+NormalizeCudaKernel(const Tensor input,
+const Tensor output,
+const int req,
+const int N,
+const int H,
+const int W,
+const int C,
+const float mean_d0,
+const float mean_d1,
+const float mean_d2,
+const float std_d0,
+const float std_d1,
+const float std_d2) {
+// We process one image per thread block.
+// In 3D case, we have only 1 block i.e., blockIdx.x
+// We do not use it.
+
+float mean = mean_d0;
+float std = std_d0;
+for (int c = 0; c < C; ++c) {
+switch (c) {
+case 0 : mean = mean_d0;
+ std = std_d0;
+ break;
+case 1 : mean = mean_d1;
+ std = std_d1;
+ break;
+case 2 : mean = mean_d2;
+ std = std_d2;
+ break;
+}
+for (int h = threadIdx.y; h < H; h += blockDim.y) {
+for (int w = threadIdx.x; w < W; w += blockDim.x) {
+KERNEL_ASSIGN(output[c][h][w], req,
+  (input[c][h][w] - mean) / std);
+}
+}
+}
+}
+
+// Normalize Kernel for 4D input
+template
+__global__ void
+__launch_bounds__(cuda::kMaxThreadsPerBlock, 1)
+NormalizeCudaKernel(const Tensor input,
+const Tensor output,
+const int req,
+const int N,
+const int H,
+const int W,
+const int C,
+const float mean_d0,
+const float mean_d1,
+const float mean_d2,
+const float std_d0,
+const float std_d1,
+const float std_d2) {
+// We process one image per thread block.
+const int n = blockIdx.x;
+
+float mean = mean_d0;
+float std = std_d0;
+for (int c = 0; c < C; ++c) {
+switch (c) {
+case 0 : mean = mean_d0;
+ std = std_d0;
+ break;
+case 1 : mean = mean_d1;
+ std = std_d1;
+ break;
+case 2 : mean = mean_d2;
+ std = std_d2;
+ break;
+}
+for (int h = threadIdx.y; h < H; h += blockDim.y) {
+for (int w = threadIdx.x; w < W; w += blockDim.x) {
+KERNEL_ASSIGN(output[n][c][h][w], req,
+  (input[n][c][h][w] -  mean) / std);
+}
+}
+}
+}
+
+template
+void NormalizeImplCUDA(mshadow::Stream *s,
+   const T input,
+   const T output,
+   const int req,
+   const float mean_d0,
+   const float mean_d1,
+   const float mean_d2,
+   const float std_d0,
+   const float std_d1,
+   const float std_d2) {
+int blocks, H, W, C, N;
+cudaStream_t stream = mshadow::Stream::GetStream(s);
+if (std::is_same>::value) {
+// 3D Input - (C, H, W)
+N = 0;
+C = input.size(0);
+H = input.size(1);
+W = input.size(2);
+blocks = 1;
+} else {
+// 4D Input - (N, C, H, W)
+N = input.size(0);
+C = input.size(1);
+H = input.size(2);
+W = input.size(3);
+blocks = N > 0 ? N : 1;
+}
+// One block per image.
+NormalizeCudaKernel
+<<>>(input, 
output,
+req, N, H, W, C, mean_d0, mean_d1, mean_d2,
+std_d0, std_d1, std_d2);
+MSHADOW_CUDA_POST_KERNEL_CHECK(NormalizeCudaKernel);
+}
+
+// Normalize Backward Kernel for 3D input
+template
 
 Review comment:
   Fixed and made it a 1D kernel. Thanks for the pointer.
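The per-channel arithmetic in the Normalize kernels in the diff above boils down to `(input - mean) / std` with one mean/std per channel, mirroring the `mean_d0`/`mean_d1`/`mean_d2` switch. A NumPy sketch of that math (shapes and values are illustrative, not taken from the PR):

```python
import numpy as np

def normalize_chw(img, means, stds):
    # img: (C, H, W); means/stds hold one scalar per channel,
    # like mean_d0/mean_d1/mean_d2 and std_d0/std_d1/std_d2 in the kernel.
    out = np.empty_like(img, dtype=np.float32)
    for c in range(img.shape[0]):
        out[c] = (img[c] - means[c]) / stds[c]
    return out

img = np.ones((3, 2, 2), dtype=np.float32)
out = normalize_chw(img, means=[1.0, 0.5, 0.0], stds=[1.0, 0.5, 2.0])
print(out[0, 0, 0], out[1, 0, 0], out[2, 0, 0])  # 0.0 1.0 0.5
```

The 4D (N, C, H, W) case simply applies the same per-channel formula to each image in the batch, which is why the CUDA code launches one block per image.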

[GitHub] sandeep-krishnamurthy commented on a change in pull request #14139: Performance improvement in Normalize GPU Kernel

2019-02-13 Thread GitBox
sandeep-krishnamurthy commented on a change in pull request #14139: Performance 
improvement in Normalize GPU Kernel
URL: https://github.com/apache/incubator-mxnet/pull/14139#discussion_r256637700
 
 

 ##
 File path: src/operator/image/image_random.cu
 ##
 @@ -99,18 +111,242 @@ void ToTensorImplCUDA(mshadow::Stream *s,
 W = input.size(2);
 C = input.size(3);
 blocks = N > 0 ? N : 1;
-blocks = N;
 }
-// One block per image.
-// Number of threads = (32, 32) is optimal, because,
-// computation is minimal and overhead of CUDA preparing
-// all threads is minimal.
+
 ToTensorCudaKernel
-<<>>(input, output,
+<<>>(input, output,
 req, N, H, W, C, normalize_factor);
 MSHADOW_CUDA_POST_KERNEL_CHECK(ToTensorCudaKernel);
 }
 
+// Normalize Kernel for 3D input
+template
+__global__ void
+__launch_bounds__(cuda::kMaxThreadsPerBlock, 1)
+NormalizeCudaKernel(const Tensor input,
+const Tensor output,
+const int req,
+const int N,
+const int H,
+const int W,
+const int C,
+const float mean_d0,
+const float mean_d1,
+const float mean_d2,
+const float std_d0,
+const float std_d1,
+const float std_d2) {
+// We process one image per thread block.
+// In 3D case, we have only 1 block i.e., blockIdx.x
+// We do not use it.
+
+float mean = mean_d0;
+float std = std_d0;
+for (int c = 0; c < C; ++c) {
+switch (c) {
+case 0 : mean = mean_d0;
+ std = std_d0;
+ break;
+case 1 : mean = mean_d1;
+ std = std_d1;
+ break;
+case 2 : mean = mean_d2;
+ std = std_d2;
+ break;
+}
+for (int h = threadIdx.y; h < H; h += blockDim.y) {
+for (int w = threadIdx.x; w < W; w += blockDim.x) {
+KERNEL_ASSIGN(output[c][h][w], req,
+  (input[c][h][w] - mean) / std);
+}
+}
+}
+}
+
+// Normalize Kernel for 4D input
+template
+__global__ void
+__launch_bounds__(cuda::kMaxThreadsPerBlock, 1)
+NormalizeCudaKernel(const Tensor input,
+const Tensor output,
+const int req,
+const int N,
+const int H,
+const int W,
+const int C,
+const float mean_d0,
+const float mean_d1,
+const float mean_d2,
+const float std_d0,
+const float std_d1,
+const float std_d2) {
+// We process one image per thread block.
+const int n = blockIdx.x;
+
+float mean = mean_d0;
+float std = std_d0;
+for (int c = 0; c < C; ++c) {
+switch (c) {
+case 0 : mean = mean_d0;
+ std = std_d0;
+ break;
+case 1 : mean = mean_d1;
+ std = std_d1;
+ break;
+case 2 : mean = mean_d2;
+ std = std_d2;
+ break;
+}
+for (int h = threadIdx.y; h < H; h += blockDim.y) {
+for (int w = threadIdx.x; w < W; w += blockDim.x) {
+KERNEL_ASSIGN(output[n][c][h][w], req,
+  (input[n][c][h][w] -  mean) / std);
+}
+}
+}
+}
+
+template
+void NormalizeImplCUDA(mshadow::Stream *s,
+   const T input,
+   const T output,
+   const int req,
+   const float mean_d0,
+   const float mean_d1,
+   const float mean_d2,
+   const float std_d0,
+   const float std_d1,
+   const float std_d2) {
+int blocks, H, W, C, N;
+cudaStream_t stream = mshadow::Stream::GetStream(s);
+if (std::is_same>::value) {
+// 3D Input - (C, H, W)
+N = 0;
+C = input.size(0);
+H = input.size(1);
+W = input.size(2);
+blocks = 1;
+} else {
+// 4D Input - (N, C, H, W)
+N = input.size(0);
+C = input.size(1);
+H = input.size(2);
+W = input.size(3);
+blocks = N > 0 ? N : 1;
+}
+// One block per image.
+NormalizeCudaKernel
+<<>>(input, 
output,
+req, N, H, W, C, mean_d0, mean_d1, mean_d2,
+std_d0, std_d1, std_d2);
+MSHADOW_CUDA_POST_KERNEL_CHECK(NormalizeCudaKernel);
+}
+
+// Normalize Backward Kernel for 3D input
+template
+__global__ void
+__launch_bounds__(cuda::kMaxThreadsPerBlock, 1)
+Nor

[GitHub] leleamol commented on issue #14149: [MXNET-1323] CPP GPU test running too long

2019-02-13 Thread GitBox
leleamol commented on issue #14149: [MXNET-1323] CPP GPU test running too long
URL: https://github.com/apache/incubator-mxnet/pull/14149#issuecomment-463419896
 
 
   @marcoabreu @lanking520 
   I have created this PR to shorten the time it takes to run the ci_tests.sh.
   But the sanity check is failing. It doesn't seem to be related to my change. 
Can you please take a look?
   




[GitHub] larroy commented on issue #13907: Fixes downloading of data in cpp-package/example/get_data.sh

2019-02-13 Thread GitBox
larroy commented on issue #13907: Fixes downloading of data in 
cpp-package/example/get_data.sh
URL: https://github.com/apache/incubator-mxnet/pull/13907#issuecomment-463415783
 
 
   @anirudh2290 @apeforest 




[GitHub] larroy closed pull request #13957: Always go through cmake/ChooseBlas.cmake, now we only execute this lo…

2019-02-13 Thread GitBox
larroy closed pull request #13957: Always go through cmake/ChooseBlas.cmake, 
now we only execute this lo…
URL: https://github.com/apache/incubator-mxnet/pull/13957
 
 
   




[GitHub] leleamol opened a new pull request #14149: [MXNET-1323] CPP GPU test running too long

2019-02-13 Thread GitBox
leleamol opened a new pull request #14149: [MXNET-1323] CPP GPU test running 
too long
URL: https://github.com/apache/incubator-mxnet/pull/14149
 
 
   
   
   ## Description ##
   The CPP GPU test is running too long. The examples in ci_test.sh are taking a long 
time to finish. This change reduces the number of epochs for which the examples run, 
and updates the mlp example to accept the number of epochs as an argument. By default 
the example was running for 15000 epochs.
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [y] The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to 
the relevant [JIRA issue](https://issues.apache.org/jira/projects/MXNET/issues) 
created (except PRs with tiny changes)
   - [ ] Changes are complete (i.e. I finished coding on this PR)
   - [y] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [ ] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, 
the source of the dataset, expected performance on test set and reference to 
the original paper if applicable
   - Check the API doc at 
http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [ ] To the best of my knowledge, examples are either not affected by this 
change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [ ] Feature1, tests, (and when applicable, API doc)
   - [ ] Feature2, tests, (and when applicable, API doc)
   
   ## Comments ##
   - If this change is a backward incompatible change, why must this change be 
made.
   - Interesting edge cases to note here
   
   Note that the change is made to reduce the time to run the examples in 
ci_tests.sh. 
   




[GitHub] szha commented on issue #14148: [WIP] fix website build

2019-02-13 Thread GitBox
szha commented on issue #14148: [WIP] fix website build
URL: https://github.com/apache/incubator-mxnet/pull/14148#issuecomment-463405825
 
 
   @aaronmarkham that worked. 
http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-14148/6/faq/index.html




[GitHub] eric-haibin-lin commented on a change in pull request #14098: softmax for fp16 with fp32 accumulator

2019-02-13 Thread GitBox
eric-haibin-lin commented on a change in pull request #14098: softmax for fp16 
with fp32 accumulator
URL: https://github.com/apache/incubator-mxnet/pull/14098#discussion_r256613039
 
 

 ##
 File path: src/operator/nn/softmax.cc
 ##
 @@ -102,15 +102,36 @@ Example::
 .set_attr("FComputeEx", SoftmaxComputeExCPU)
 .set_attr("FInferStorageType", SoftmaxStorageType)
 #endif
-.set_attr("FGradient", 
ElemwiseGradUseOut{"_backward_softmax"})
+.set_attr("FGradient", SoftmaxFGradient{"_backward_softmax"})
+.set_attr("FInferType", SoftmaxOpType)
+.set_num_inputs(1)
+.set_num_outputs(1)
+.set_attr("FInferShape", ElemwiseShape<1, 1>)
+.set_attr("FInplaceOption",
+  [](const NodeAttrs& attrs){
+return std::vector >{{0, 0}};
+  })
+.add_argument("data", "NDArray-or-Symbol", "The input array.")
 .add_arguments(SoftmaxParam::__FIELDS__());
 
-MXNET_OPERATOR_REGISTER_BINARY(_backward_softmax)
+NNVM_REGISTER_OP(_backward_softmax)
+.set_num_inputs(3)
+.set_num_outputs(1)
+.set_attr("FListInputNames",
+  [](const NodeAttrs& attrs) {
+return std::vector{"ograd", "data", "output"};
+  })
+.set_attr("FInferShape", SoftmaxGradOpShape)
+.set_attr("FInferType", SoftmaxGradOpType)
+.set_attr("FInplaceOption", SoftmaxGradOpInplaceOption)
+.add_argument("ograd", "NDArray-or-Symbol", "gradient of output")
 
 Review comment:
   Adding an argument with `NDArray-or-Symbol[]` should be sufficient. Reference 
impl: 
https://github.com/apache/incubator-mxnet/blob/master/src/operator/tensor/elemwise_sum.cc
  




[GitHub] eric-haibin-lin commented on a change in pull request #14098: softmax for fp16 with fp32 accumulator

2019-02-13 Thread GitBox
eric-haibin-lin commented on a change in pull request #14098: softmax for fp16 
with fp32 accumulator
URL: https://github.com/apache/incubator-mxnet/pull/14098#discussion_r256612825
 
 

 ##
 File path: src/operator/nn/softmax.cc
 ##
 @@ -102,15 +102,36 @@ Example::
 .set_attr("FComputeEx", SoftmaxComputeExCPU)
 .set_attr("FInferStorageType", SoftmaxStorageType)
 #endif
-.set_attr("FGradient", 
ElemwiseGradUseOut{"_backward_softmax"})
+.set_attr("FGradient", SoftmaxFGradient{"_backward_softmax"})
+.set_attr("FInferType", SoftmaxOpType)
+.set_num_inputs(1)
+.set_num_outputs(1)
+.set_attr("FInferShape", ElemwiseShape<1, 1>)
+.set_attr("FInplaceOption",
+  [](const NodeAttrs& attrs){
+return std::vector >{{0, 0}};
+  })
+.add_argument("data", "NDArray-or-Symbol", "The input array.")
 .add_arguments(SoftmaxParam::__FIELDS__());
 
-MXNET_OPERATOR_REGISTER_BINARY(_backward_softmax)
+NNVM_REGISTER_OP(_backward_softmax)
+.set_num_inputs(3)
 
 Review comment:
   num inputs may be 2 or 3 depending on params.dtype. Reference impl: 
https://github.com/apache/incubator-mxnet/blob/master/src/operator/tensor/elemwise_sum.cc
 




[GitHub] eric-haibin-lin commented on a change in pull request #14098: softmax for fp16 with fp32 accumulator

2019-02-13 Thread GitBox
eric-haibin-lin commented on a change in pull request #14098: softmax for fp16 
with fp32 accumulator
URL: https://github.com/apache/incubator-mxnet/pull/14098#discussion_r256604956
 
 

 ##
 File path: src/operator/nn/softmax-inl.h
 ##
 @@ -275,11 +293,90 @@ inline void SoftmaxGrad(Stream *s, DType *out, 
DType *ograd,
 struct SoftmaxParam : public dmlc::Parameter {
   int axis;
   dmlc::optional temperature;
+  dmlc::optional dtype;
   DMLC_DECLARE_PARAMETER(SoftmaxParam) {
 DMLC_DECLARE_FIELD(axis).set_default(-1)
-  .describe("The axis along which to compute softmax.");
+.describe("The axis along which to compute softmax.");
 DMLC_DECLARE_FIELD(temperature).set_default(dmlc::optional())
-  .describe("Temperature parameter in softmax");
+.describe("Temperature parameter in softmax");
+DMLC_DECLARE_FIELD(dtype)
+.add_enum("float16", mshadow::kFloat16)
+.add_enum("float32", mshadow::kFloat32)
+.add_enum("float64", mshadow::kFloat64)
+.set_default(dmlc::optional())
+.describe("DType of the output in case this can't be inferred. "
+  "Defaults to the same as input's dtype if not defined 
(dtype=None).");
+  }
+};
+
+static inline bool softmax_has_dtype_override(const nnvm::NodeAttrs& attrs) {
+  const SoftmaxParam& param = nnvm::get(attrs.parsed);
+  return param.dtype.has_value() && param.dtype.value() != -1;
+}
+
+static inline bool SoftmaxOpType(const nnvm::NodeAttrs& attrs,
+ std::vector* in_attrs,
+ std::vector* out_attrs) {
+  CHECK_EQ(in_attrs->size(), 1);
+  CHECK_EQ(out_attrs->size(), 1);
+  const SoftmaxParam& param = nnvm::get(attrs.parsed);
+
+  if (softmax_has_dtype_override(attrs)) {
+TYPE_ASSIGN_CHECK(*out_attrs, 0, param.dtype.value());
+type_assign(&(*in_attrs)[0], (*out_attrs)[0]);
+return true;
+  } else {
+return ElemwiseType<1, 1>(attrs, in_attrs, out_attrs);
+  }
+}
+
+static inline bool SoftmaxGradOpShape(const nnvm::NodeAttrs& attrs,
+  std::vector *in_attrs,
+  std::vector *out_attrs) {
+  if (softmax_has_dtype_override(attrs)) {
+return ElemwiseShape<3, 1>(attrs, in_attrs, out_attrs);
+  } else {
+return ElemwiseShape<2, 1>(attrs, in_attrs, out_attrs);
+  }
+}
+
+static inline bool SoftmaxGradOpType(const nnvm::NodeAttrs& attrs,
+ std::vector* in_attrs,
+ std::vector* out_attrs) {
+  if (softmax_has_dtype_override(attrs)) {
+int in_dtype = (*in_attrs)[1];
 
 Review comment:
   Can we add some sanity checks for vector length before accessing attrs[1] 
and attrs[2]? 
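The requested guard amounts to checking the vector length before indexing. A Python sketch of the idea (in the C++ code this would be a `CHECK_GE` on `in_attrs->size()`; the function and attribute names here are illustrative, not the PR's exact code):

```python
def softmax_grad_op_type(in_attrs, has_dtype_override):
    # Guard: the dtype-override path reads in_attrs[1] and in_attrs[2]
    # (ograd, data, output), so fail loudly if fewer inputs were wired up.
    if has_dtype_override:
        assert len(in_attrs) >= 3, "expected (ograd, data, output) inputs"
        in_dtype = in_attrs[1]
        out_dtype = in_attrs[2]
        return in_dtype, out_dtype
    # Without the override there are only two inputs to reconcile.
    assert len(in_attrs) >= 2, "expected (ograd, output) inputs"
    return in_attrs[0], in_attrs[1]

print(softmax_grad_op_type(["float16", "float16", "float32"], True))
```

A check like this turns a silent out-of-bounds read into an immediate, diagnosable failure when the operator is registered with the wrong number of inputs.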




[GitHub] aaronmarkham commented on issue #14148: [WIP] fix website build

2019-02-13 Thread GitBox
aaronmarkham commented on issue #14148: [WIP] fix website build
URL: https://github.com/apache/incubator-mxnet/pull/14148#issuecomment-463397438
 
 
   Try this instead:
   ```
   
   
   
   
   
   
   
   
   
   
   
   
   
   
   
   ```
   I have it working here: 
http://34.201.8.176/versions/fix_web/api/clojure/index.html
   




[GitHub] szha commented on a change in pull request #14066: Add dtype visualization to plot_network

2019-02-13 Thread GitBox
szha commented on a change in pull request #14066: Add dtype visualization to 
plot_network
URL: https://github.com/apache/incubator-mxnet/pull/14066#discussion_r256610834
 
 

 ##
 File path: python/mxnet/visualization.py
 ##
 @@ -271,14 +275,21 @@ def plot_network(symbol, title="plot", 
save_format='pdf', shape=None, node_attrs
 raise ImportError("Draw network requires graphviz library")
 if not isinstance(symbol, Symbol):
 raise TypeError("symbol must be a Symbol")
+internals = symbol.get_internals()
 draw_shape = False
 if shape is not None:
 draw_shape = True
-interals = symbol.get_internals()
-_, out_shapes, _ = interals.infer_shape(**shape)
+_, out_shapes, _ = internals.infer_shape(**shape)
 if out_shapes is None:
 raise ValueError("Input shape is incomplete")
-shape_dict = dict(zip(interals.list_outputs(), out_shapes))
+shape_dict = dict(zip(internals.list_outputs(), out_shapes))
+draw_type = False
+if dtype is not None:
+draw_type = True
+_, out_types, _ = internals.infer_type(**dtype)
+if out_types is None:
+raise ValueError("Input type is incomplete")
 
 Review comment:
   OK, that's fine




[GitHub] ptrendx commented on a change in pull request #14097: Relaxing type requirements for slice_like op

2019-02-13 Thread GitBox
ptrendx commented on a change in pull request #14097: Relaxing type 
requirements for slice_like op
URL: https://github.com/apache/incubator-mxnet/pull/14097#discussion_r256608217
 
 

 ##
 File path: tests/python/unittest/test_operator.py
 ##
 @@ -2515,6 +2515,20 @@ def test_slice_like():
 assert_allclose(xx, xgrad.asnumpy())
 assert_allclose(xgrad1.asnumpy(), 
mx.nd.zeros_like(xgrad1).asnumpy())
 
+@with_seed()
+def test_slice_like_different_types():
+x = [[  1.,   2.,   3.,   4.],
 
 Review comment:
   Those parameters do not matter for this test (and they are already tested in 
the dedicated test of `slice_like` op functionality) - the values for `x` and 
`y` were taken from the example from `slice_like` documentation. The only thing 
that matters is the type of `y` in this test. Before this PR this code would 
not work - it would throw an exception during type inference. With this PR, 
this code works (as it should).
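For reference, `slice_like` crops the first array down to the second array's shape, which only requires `y`'s shape, never its dtype. A NumPy sketch of the semantics (this is not the MXNet implementation, just the behavior the relaxed type requirement enables):

```python
import numpy as np

def slice_like(x, y):
    # Crop x to y's shape along every axis; y's values and dtype are unused,
    # which is why requiring x and y to share a dtype was too strict.
    slices = tuple(slice(0, dim) for dim in y.shape)
    return x[slices]

x = np.arange(1.0, 13.0).reshape(3, 4)   # float array, values as in the docs example
y = np.zeros((2, 3), dtype=np.int32)     # a different dtype is now fine
print(slice_like(x, y))
```

The output keeps `x`'s dtype and takes only its shape from `y`, which is exactly what the new test asserts.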




[GitHub] ptrendx commented on a change in pull request #14066: Add dtype visualization to plot_network

2019-02-13 Thread GitBox
ptrendx commented on a change in pull request #14066: Add dtype visualization 
to plot_network
URL: https://github.com/apache/incubator-mxnet/pull/14066#discussion_r256606296
 
 

 ##
 File path: python/mxnet/visualization.py
 ##
 @@ -271,14 +275,21 @@ def plot_network(symbol, title="plot", 
save_format='pdf', shape=None, node_attrs
 raise ImportError("Draw network requires graphviz library")
 if not isinstance(symbol, Symbol):
 raise TypeError("symbol must be a Symbol")
+internals = symbol.get_internals()
 draw_shape = False
 if shape is not None:
 draw_shape = True
-interals = symbol.get_internals()
-_, out_shapes, _ = interals.infer_shape(**shape)
+_, out_shapes, _ = internals.infer_shape(**shape)
 if out_shapes is None:
 raise ValueError("Input shape is incomplete")
-shape_dict = dict(zip(interals.list_outputs(), out_shapes))
+shape_dict = dict(zip(internals.list_outputs(), out_shapes))
+draw_type = False
+if dtype is not None:
+draw_type = True
+_, out_types, _ = internals.infer_type(**dtype)
+if out_types is None:
+raise ValueError("Input type is incomplete")
 
 Review comment:
   It is more complicated though, as you don't really need to provide a type for 
all the inputs. The actual solution for this problem would be to implement 
infer_type_partial (similar to infer_shape_partial) and use it here to 
understand which shapes and types could not be inferred.
   I think this would be good to do in another PR.




[GitHub] lanking520 commented on issue #14141: [v1.4.x] Update MKL-DNN to fix the OSX build issue

2019-02-13 Thread GitBox
lanking520 commented on issue #14141: [v1.4.x] Update MKL-DNN to fix the OSX 
build issue
URL: https://github.com/apache/incubator-mxnet/pull/14141#issuecomment-463389550
 
 
   Restarted :), @marcoabreu could you also grant CI access to @TaoLv so he 
will be able to retrigger the test next time




[GitHub] piyushghai commented on issue #13926: [v1.4.x] Fix gtest build

2019-02-13 Thread GitBox
piyushghai commented on issue #13926: [v1.4.x] Fix gtest build
URL: https://github.com/apache/incubator-mxnet/pull/13926#issuecomment-463388764
 
 
   @marcoabreu @lebeg Any concerns here with this PR for v1.4.x release ? 




[GitHub] aaronmarkham commented on issue #14148: [WIP] fix website build

2019-02-13 Thread GitBox
aaronmarkham commented on issue #14148: [WIP] fix website build
URL: https://github.com/apache/incubator-mxnet/pull/14148#issuecomment-463388078
 
 
   My suggestion comes from this discussion on markdown comments: 
https://stackoverflow.com/questions/4823468/comments-in-markdown
   




[GitHub] eric-haibin-lin commented on a change in pull request #13749: Add NHWC layout support to Pooling (cpu, gpu cuda, gpu cuDNN)

2019-02-13 Thread GitBox
eric-haibin-lin commented on a change in pull request #13749: Add NHWC layout 
support to Pooling (cpu, gpu cuda, gpu cuDNN)
URL: https://github.com/apache/incubator-mxnet/pull/13749#discussion_r256601480
 
 

 ##
 File path: src/operator/nn/pooling.cc
 ##
 @@ -421,11 +463,16 @@ NNVM_REGISTER_OP(_backward_Pooling)
.set_attr<nnvm::FInplaceOption>(
"FInplaceOption",
[](const NodeAttrs &attrs) {
-#if MXNET_USE_CUDNN == 1
-  return std::vector<std::pair<int, int> >();
-#else
-  return std::vector<std::pair<int, int> >{{1, 0}};
+#if MXNET_USE_MKLDNN == 1 && MXNET_USE_CUDA == 0 && MXNET_USE_CUDNN == 0
 
 Review comment:
   Good to know MKLDNN impl is not affected 




[incubator-mxnet] branch master updated: Add pin_device_id option to Gluon DataLoader (#14136)

2019-02-13 Thread zhreshold
This is an automated email from the ASF dual-hosted git repository.

zhreshold pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new 0b1761f  Add pin_device_id option to Gluon DataLoader (#14136)
0b1761f is described below

commit 0b1761ff118e4724a9d934aa018de90acadda17f
Author: Yuxi Hu 
AuthorDate: Wed Feb 13 13:36:28 2019 -0800

Add pin_device_id option to Gluon DataLoader (#14136)

* add pin_device_id option to DataLoader

* add unit test to check output context

* trigger CI
---
 python/mxnet/gluon/data/dataloader.py| 37 +---
 tests/python/unittest/test_gluon_data.py | 24 +
 2 files changed, 48 insertions(+), 13 deletions(-)

diff --git a/python/mxnet/gluon/data/dataloader.py 
b/python/mxnet/gluon/data/dataloader.py
index 9d76274..934f2d5 100644
--- a/python/mxnet/gluon/data/dataloader.py
+++ b/python/mxnet/gluon/data/dataloader.py
@@ -169,14 +169,15 @@ def worker_loop_v1(dataset, key_queue, data_queue, 
batchify_fn):
 batch = batchify_fn([dataset[i] for i in samples])
 data_queue.put((idx, batch))
 
-def fetcher_loop_v1(data_queue, data_buffer, pin_memory=False, 
data_buffer_lock=None):
+def fetcher_loop_v1(data_queue, data_buffer, pin_memory=False,
+pin_device_id=0, data_buffer_lock=None):
 """Fetcher loop for fetching data from queue and put in reorder dict."""
 while True:
 idx, batch = data_queue.get()
 if idx is None:
 break
 if pin_memory:
-batch = _as_in_context(batch, context.cpu_pinned())
+batch = _as_in_context(batch, context.cpu_pinned(pin_device_id))
 else:
 batch = _as_in_context(batch, context.cpu())
 if data_buffer_lock is not None:
@@ -188,8 +189,8 @@ def fetcher_loop_v1(data_queue, data_buffer, 
pin_memory=False, data_buffer_lock=
 
 class _MultiWorkerIterV1(object):
 """Internal multi-worker iterator for DataLoader."""
-def __init__(self, num_workers, dataset, batchify_fn, batch_sampler, 
pin_memory=False,
- worker_fn=worker_loop_v1):
+def __init__(self, num_workers, dataset, batchify_fn, batch_sampler,
+ pin_memory=False, pin_device_id=0, worker_fn=worker_loop_v1):
 assert num_workers > 0, "_MultiWorkerIter is not for {} 
workers".format(num_workers)
 self._num_workers = num_workers
 self._dataset = dataset
@@ -218,7 +219,8 @@ class _MultiWorkerIterV1(object):
 
 self._fetcher = threading.Thread(
 target=fetcher_loop_v1,
-args=(self._data_queue, self._data_buffer, pin_memory, 
self._data_buffer_lock))
+args=(self._data_queue, self._data_buffer, pin_memory,
+  pin_device_id, self._data_buffer_lock))
 self._fetcher.daemon = True
 self._fetcher.start()
 
@@ -323,12 +325,15 @@ class DataLoaderV1(object):
 If ``True``, the dataloader will copy NDArrays into pinned memory
 before returning them. Copying from CPU pinned memory to GPU is faster
 than from normal CPU memory.
+pin_device_id : int, default 0
+The device id to use for allocating pinned memory if pin_memory is 
``True``
 """
 def __init__(self, dataset, batch_size=None, shuffle=False, sampler=None,
  last_batch=None, batch_sampler=None, batchify_fn=None,
- num_workers=0, pin_memory=False):
+ num_workers=0, pin_memory=False, pin_device_id=0):
 self._dataset = dataset
 self._pin_memory = pin_memory
+self._pin_device_id = pin_device_id
 
 if batch_sampler is None:
 if batch_size is None:
@@ -365,13 +370,14 @@ class DataLoaderV1(object):
 for batch in self._batch_sampler:
 ret = self._batchify_fn([self._dataset[idx] for idx in 
batch])
 if self._pin_memory:
-ret = _as_in_context(ret, context.cpu_pinned())
+ret = _as_in_context(ret, 
context.cpu_pinned(self._pin_device_id))
 yield ret
 return same_process_iter()
 
 # multi-worker
 return _MultiWorkerIterV1(self._num_workers, self._dataset,
-  self._batchify_fn, self._batch_sampler, 
self._pin_memory)
+  self._batchify_fn, self._batch_sampler,
+  self._pin_memory, self._pin_device_id)
 
 def __len__(self):
 return len(self._batch_sampler)
@@ -403,7 +409,7 @@ def _thread_worker_fn(samples, batchify_fn, dataset):
 class _MultiWorkerIter(object):
 """Internal multi-worker iterator for DataLoader."""
 def __init__(self, worker_pool, batchify_fn, batch_sampler, 
pin_memory=False,
- worker_fn=_worker_f
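The net effect of this commit can be sketched in plain Python. The `pick_context` helper below is a hypothetical stand-in mirroring the branch added to `fetcher_loop_v1`, not part of MXNet's API:

```python
def pick_context(pin_memory, pin_device_id=0):
    """Mirrors the dispatch in fetcher_loop_v1:
    context.cpu_pinned(pin_device_id) if pin_memory else context.cpu()."""
    if pin_memory:
        # Pinned (page-locked) host memory, tied to a specific device id,
        # which is what makes subsequent host-to-GPU copies faster.
        return ("cpu_pinned", pin_device_id)
    return ("cpu", 0)

print(pick_context(True, 1))   # -> ('cpu_pinned', 1)
print(pick_context(False))     # -> ('cpu', 0)
```

Before this change the pinned context was always `cpu_pinned(0)`; the new flag lets multi-GPU data loaders pin batches against the device they will feed.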

[GitHub] zhreshold merged pull request #14136: Add pin_device_id option to Gluon DataLoader

2019-02-13 Thread GitBox
zhreshold merged pull request #14136: Add pin_device_id option to Gluon 
DataLoader
URL: https://github.com/apache/incubator-mxnet/pull/14136
 
 
   




[GitHub] ankkhedia commented on issue #14148: [WIP] fix website build

2019-02-13 Thread GitBox
ankkhedia commented on issue #14148: [WIP] fix website build
URL: https://github.com/apache/incubator-mxnet/pull/14148#issuecomment-463382484
 
 
   @szha Thanks for your contribution!
   
   @mxnet-label-bot add [pr-work-in-progress, website, build]




[GitHub] ankkhedia commented on issue #14147: [WIP] Updates base images for arm builds on v1.3.x branch

2019-02-13 Thread GitBox
ankkhedia commented on issue #14147: [WIP] Updates base images for arm builds 
on v1.3.x branch
URL: https://github.com/apache/incubator-mxnet/pull/14147#issuecomment-463381493
 
 
   @perdasilva Thanks for the contribution!
   
   @mxnet-label-bot add [pr-work-in-progress, build]




[GitHub] eric-haibin-lin commented on issue #14136: Add pin_device_id option to Gluon DataLoader

2019-02-13 Thread GitBox
eric-haibin-lin commented on issue #14136: Add pin_device_id option to Gluon 
DataLoader
URL: https://github.com/apache/incubator-mxnet/pull/14136#issuecomment-463373806
 
 
   lgtm 




[GitHub] szha commented on issue #14142: make rat-excludes compliant with apache release policy

2019-02-13 Thread GitBox
szha commented on issue #14142: make rat-excludes compliant with apache release 
policy
URL: https://github.com/apache/incubator-mxnet/pull/14142#issuecomment-463370154
 
 
   It might have been caused by an unpaired number of dashes in the comment. I'm 
verifying this assumption in the PR above.




[GitHub] gigasquid closed issue #14131: Website build is failing in CI in Clojure steps.

2019-02-13 Thread GitBox
gigasquid closed issue #14131: Website build is failing in CI in Clojure steps.
URL: https://github.com/apache/incubator-mxnet/issues/14131
 
 
   




[GitHub] szha commented on a change in pull request #14148: [WIP] fix website build

2019-02-13 Thread GitBox
szha commented on a change in pull request #14148: [WIP] fix website build
URL: https://github.com/apache/incubator-mxnet/pull/14148#discussion_r256585835
 
 

 ##
 File path: example/gluon/lipnet/README.md
 ##
 @@ -1,3 +1,22 @@
+

[GitHub] gordon1992 commented on a change in pull request #14000: MXNET-1302 Exclude commons-codec and commons-io from assembled JAR

2019-02-13 Thread GitBox
gordon1992 commented on a change in pull request #14000: MXNET-1302 Exclude 
commons-codec and commons-io from assembled JAR
URL: https://github.com/apache/incubator-mxnet/pull/14000#discussion_r256585296
 
 

 ##
 File path: scala-package/core/pom.xml
 ##
 @@ -138,6 +138,11 @@
   INTERNAL
   provided
 
+
 
 Review comment:
   commons-codec is defined in core for the apidoc-generation stage.
   commons-io can be removed. I'll do that.




[GitHub] szha opened a new pull request #14148: [WIP] fix website build

2019-02-13 Thread GitBox
szha opened a new pull request #14148: [WIP] fix website build
URL: https://github.com/apache/incubator-mxnet/pull/14148
 
 
   ## Description ##
   fix website build caused by #14142 
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [ ] The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to 
the relevant [JIRA issue](https://issues.apache.org/jira/projects/MXNET/issues) 
created (except PRs with tiny changes)
   - [ ] Changes are complete (i.e. I finished coding on this PR)
   - [ ] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [ ] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, 
the source of the dataset, expected performance on the test set, and a reference 
to the original paper if applicable
   - Check the API doc at 
http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [ ] To the best of my knowledge, examples are either not affected by this 
change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [ ] Feature1, tests, (and when applicable, API doc)
   - [ ] Feature2, tests, (and when applicable, API doc)
   
   ## Comments ##
   - If this change is a backward incompatible change, why must this change be 
made.
   - Interesting edge cases to note here
   




[GitHub] gordon1992 commented on a change in pull request #14000: MXNET-1302 Exclude commons-codec and commons-io from assembled JAR

2019-02-13 Thread GitBox
gordon1992 commented on a change in pull request #14000: MXNET-1302 Exclude 
commons-codec and commons-io from assembled JAR
URL: https://github.com/apache/incubator-mxnet/pull/14000#discussion_r256576099
 
 

 ##
 File path: scala-package/core/pom.xml
 ##
 @@ -138,10 +138,22 @@
   INTERNAL
   provided
 
+
 
 Review comment:
   Not sure where you are seeing JUnit in core/pom.xml.




[GitHub] lanking520 commented on a change in pull request #14000: MXNET-1302 Exclude commons-codec and commons-io from assembled JAR

2019-02-13 Thread GitBox
lanking520 commented on a change in pull request #14000: MXNET-1302 Exclude 
commons-codec and commons-io from assembled JAR
URL: https://github.com/apache/incubator-mxnet/pull/14000#discussion_r256575312
 
 

 ##
 File path: scala-package/core/pom.xml
 ##
 @@ -138,6 +138,11 @@
   INTERNAL
   provided
 
+
 
 Review comment:
   Since we have commons-codec and commons-io defined in the parent pom, it may 
not be necessary to redefine them here.




[GitHub] gordon1992 commented on a change in pull request #14000: MXNET-1302 Exclude commons-codec and commons-io from assembled JAR

2019-02-13 Thread GitBox
gordon1992 commented on a change in pull request #14000: MXNET-1302 Exclude 
commons-codec and commons-io from assembled JAR
URL: https://github.com/apache/incubator-mxnet/pull/14000#discussion_r256574163
 
 

 ##
 File path: scala-package/core/pom.xml
 ##
 @@ -138,10 +138,22 @@
   INTERNAL
   provided
 
+
+  junit
+  junit
+  4.11
+  test
+
+
+  commons-codec
+  commons-codec
+  1.10
+
 
   commons-io
   commons-io
   2.1
+  provided
 
 Review comment:
   done




[GitHub] leleamol commented on issue #13989: How can I download the precompiled package ?

2019-02-13 Thread GitBox
leleamol commented on issue #13989: How can I download the precompiled package ?
URL: 
https://github.com/apache/incubator-mxnet/issues/13989#issuecomment-463350282
 
 
   
   @mxnet-label-bot add [Pending Requester Info]




[GitHub] leleamol edited a comment on issue #13989: How can I download the precompiled package ?

2019-02-13 Thread GitBox
leleamol edited a comment on issue #13989: How can I download the precompiled 
package ?
URL: 
https://github.com/apache/incubator-mxnet/issues/13989#issuecomment-463350164
 
 
   @yuyijie1995,
   Can you please let us know if you were able to build cpp-package from source?
   




[GitHub] leleamol commented on issue #13989: How can I download the precompiled package ?

2019-02-13 Thread GitBox
leleamol commented on issue #13989: How can I download the precompiled package ?
URL: 
https://github.com/apache/incubator-mxnet/issues/13989#issuecomment-463350164
 
 
   @yuyijie1995,
   Can you please let us know if you were able to build cpp-package from source?
   
   @mxnet-label-bot add [Pending Requester Info]




[GitHub] leleamol edited a comment on issue #13437: Memory leak in c++ api

2019-02-13 Thread GitBox
leleamol edited a comment on issue #13437: Memory leak in c++ api
URL: 
https://github.com/apache/incubator-mxnet/issues/13437#issuecomment-462954099
 
 
   @mxnet-label-bot add [Pending Requester Info]




[GitHub] leleamol commented on issue #14129: NDArray C++ construct problem

2019-02-13 Thread GitBox
leleamol commented on issue #14129: NDArray C++ construct problem 
URL: 
https://github.com/apache/incubator-mxnet/issues/14129#issuecomment-463346150
 
 
   @mxnet-label-bot [Pending Requester Info]




[GitHub] leleamol edited a comment on issue #14129: NDArray C++ construct problem

2019-02-13 Thread GitBox
leleamol edited a comment on issue #14129: NDArray C++  construct problem 
URL: 
https://github.com/apache/incubator-mxnet/issues/14129#issuecomment-463346150
 
 
   @mxnet-label-bot add [Pending Requester Info]




[GitHub] leleamol commented on issue #14129: NDArray C++ construct problem

2019-02-13 Thread GitBox
leleamol commented on issue #14129: NDArray C++ construct problem 
URL: 
https://github.com/apache/incubator-mxnet/issues/14129#issuecomment-463346023
 
 
   @songziqin,
   The SyncCopyFromCPU() function available in the C++ API is a wrapper over the 
underlying MXNDArraySyncCopyFromCPU() implementation. If the context remains 
the same, the copying operation should be faster.
   
   You can try the NDArray::Copy(Context &ctx) or NDArray::CopyTo(NDArray *other) 
functions to create a copy of an existing NDArray. With the first function, you 
can change the context of the resultant NDArray.
   Both of these functions use the "_copy" operator.
   
   If the issue with NDArray initialization is addressed, I would suggest 
closing this issue. If there is a question about performance, we can open a 
separate issue as a "feature request/enhancement". This is for better 
trackability.
   




[GitHub] gigasquid commented on issue #14142: make rat-excludes compliant with apache release policy

2019-02-13 Thread GitBox
gigasquid commented on issue #14142: make rat-excludes compliant with apache 
release policy
URL: https://github.com/apache/incubator-mxnet/pull/14142#issuecomment-463344831
 
 
   Oh, I see: this PR has the license embedded in the same way as the Beam 
project. They are using Jekyll for generation, though: 
https://github.com/apache/beam/tree/master/website


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] zhreshold commented on a change in pull request #14139: Performance improvement in Normalize GPU Kernel

2019-02-13 Thread GitBox
zhreshold commented on a change in pull request #14139: Performance improvement 
in Normalize GPU Kernel
URL: https://github.com/apache/incubator-mxnet/pull/14139#discussion_r256559934
 
 

 ##
 File path: src/operator/image/image_random.cu
 ##
 @@ -99,18 +111,242 @@ void ToTensorImplCUDA(mshadow::Stream *s,
 W = input.size(2);
 C = input.size(3);
 blocks = N > 0 ? N : 1;
-blocks = N;
 }
-// One block per image.
-// Number of threads = (32, 32) is optimal, because,
-// computation is minimal and overhead of CUDA preparing
-// all threads is minimal.
+
 ToTensorCudaKernel
-<<>>(input, output,
+<<>>(input, output,
 req, N, H, W, C, normalize_factor);
 MSHADOW_CUDA_POST_KERNEL_CHECK(ToTensorCudaKernel);
 }
 
+// Normalize Kernel for 3D input
+template
+__global__ void
+__launch_bounds__(cuda::kMaxThreadsPerBlock, 1)
+NormalizeCudaKernel(const Tensor input,
+const Tensor output,
+const int req,
+const int N,
+const int H,
+const int W,
+const int C,
+const float mean_d0,
+const float mean_d1,
+const float mean_d2,
+const float std_d0,
+const float std_d1,
+const float std_d2) {
+// We process one image per thread block.
+// In 3D case, we have only 1 block i.e., blockIdx.x
+// We do not use it.
+
+float mean = mean_d0;
+float std = std_d0;
+for (int c = 0; c < C; ++c) {
+switch (c) {
+case 0 : mean = mean_d0;
+ std = std_d0;
+ break;
+case 1 : mean = mean_d1;
+ std = std_d1;
+ break;
+case 2 : mean = mean_d2;
+ std = std_d2;
+ break;
+}
+for (int h = threadIdx.y; h < H; h += blockDim.y) {
+for (int w = threadIdx.x; w < W; w += blockDim.x) {
+KERNEL_ASSIGN(output[c][h][w], req,
+  (input[c][h][w] - mean) / std);
+}
+}
+}
+}
+
+// Normalize Kernel for 4D input
+template
+__global__ void
+__launch_bounds__(cuda::kMaxThreadsPerBlock, 1)
+NormalizeCudaKernel(const Tensor input,
+const Tensor output,
+const int req,
+const int N,
+const int H,
+const int W,
+const int C,
+const float mean_d0,
+const float mean_d1,
+const float mean_d2,
+const float std_d0,
+const float std_d1,
+const float std_d2) {
+// We process one image per thread block.
+const int n = blockIdx.x;
+
+float mean = mean_d0;
+float std = std_d0;
+for (int c = 0; c < C; ++c) {
+switch (c) {
+case 0 : mean = mean_d0;
+ std = std_d0;
+ break;
+case 1 : mean = mean_d1;
+ std = std_d1;
+ break;
+case 2 : mean = mean_d2;
+ std = std_d2;
+ break;
+}
+for (int h = threadIdx.y; h < H; h += blockDim.y) {
+for (int w = threadIdx.x; w < W; w += blockDim.x) {
+KERNEL_ASSIGN(output[n][c][h][w], req,
+  (input[n][c][h][w] -  mean) / std);
+}
+}
+}
+}
+
+template
+void NormalizeImplCUDA(mshadow::Stream *s,
+   const T input,
+   const T output,
+   const int req,
+   const float mean_d0,
+   const float mean_d1,
+   const float mean_d2,
+   const float std_d0,
+   const float std_d1,
+   const float std_d2) {
+int blocks, H, W, C, N;
+cudaStream_t stream = mshadow::Stream::GetStream(s);
+if (std::is_same>::value) {
+// 3D Input - (C, H, W)
+N = 0;
+C = input.size(0);
+H = input.size(1);
+W = input.size(2);
+blocks = 1;
+} else {
+// 4D Input - (N, C, H, W)
+N = input.size(0);
+C = input.size(1);
+H = input.size(2);
+W = input.size(3);
+blocks = N > 0 ? N : 1;
+}
+// One block per image.
+NormalizeCudaKernel
+<<>>(input, 
output,
+req, N, H, W, C, mean_d0, mean_d1, mean_d2,
+std_d0, std_d1, std_d2);
+MSHADOW_CUDA_POST_KERNEL_CHECK(NormalizeCudaKernel);
+}
+
+// Normalize Backward Kernel for 3D input
+template
 
 Review comment:
   I still think a 1D kernel is more efficient
   
   ```c
   // la

[GitHub] zhreshold commented on a change in pull request #14139: Performance improvement in Normalize GPU Kernel

2019-02-13 Thread GitBox
zhreshold commented on a change in pull request #14139: Performance improvement 
in Normalize GPU Kernel
URL: https://github.com/apache/incubator-mxnet/pull/14139#discussion_r256561687
 
 

 ##
 File path: src/operator/image/image_random.cu
 ##
 @@ -99,18 +111,242 @@ void ToTensorImplCUDA(mshadow::Stream *s,
 W = input.size(2);
 C = input.size(3);
 blocks = N > 0 ? N : 1;
-blocks = N;
 }
-// One block per image.
-// Number of threads = (32, 32) is optimal, because,
-// computation is minimal and overhead of CUDA preparing
-// all threads is minimal.
+
 ToTensorCudaKernel
-<<>>(input, output,
+<<>>(input, output,
 req, N, H, W, C, normalize_factor);
 MSHADOW_CUDA_POST_KERNEL_CHECK(ToTensorCudaKernel);
 }
 
+// Normalize Kernel for 3D input
+template
+__global__ void
+__launch_bounds__(cuda::kMaxThreadsPerBlock, 1)
+NormalizeCudaKernel(const Tensor input,
+const Tensor output,
+const int req,
+const int N,
+const int H,
+const int W,
+const int C,
+const float mean_d0,
+const float mean_d1,
+const float mean_d2,
+const float std_d0,
+const float std_d1,
+const float std_d2) {
+// We process one image per thread block.
+// In 3D case, we have only 1 block i.e., blockIdx.x
+// We do not use it.
+
+float mean = mean_d0;
+float std = std_d0;
+for (int c = 0; c < C; ++c) {
+switch (c) {
+case 0 : mean = mean_d0;
+ std = std_d0;
+ break;
+case 1 : mean = mean_d1;
+ std = std_d1;
+ break;
+case 2 : mean = mean_d2;
+ std = std_d2;
+ break;
+}
+for (int h = threadIdx.y; h < H; h += blockDim.y) {
+for (int w = threadIdx.x; w < W; w += blockDim.x) {
+KERNEL_ASSIGN(output[c][h][w], req,
+  (input[c][h][w] - mean) / std);
+}
+}
+}
+}
+
+// Normalize Kernel for 4D input
+template
+__global__ void
+__launch_bounds__(cuda::kMaxThreadsPerBlock, 1)
+NormalizeCudaKernel(const Tensor input,
+const Tensor output,
+const int req,
+const int N,
+const int H,
+const int W,
+const int C,
+const float mean_d0,
+const float mean_d1,
+const float mean_d2,
+const float std_d0,
+const float std_d1,
+const float std_d2) {
+// We process one image per thread block.
+const int n = blockIdx.x;
+
+float mean = mean_d0;
+float std = std_d0;
+for (int c = 0; c < C; ++c) {
+switch (c) {
+case 0 : mean = mean_d0;
+ std = std_d0;
+ break;
+case 1 : mean = mean_d1;
+ std = std_d1;
+ break;
+case 2 : mean = mean_d2;
+ std = std_d2;
+ break;
+}
+for (int h = threadIdx.y; h < H; h += blockDim.y) {
+for (int w = threadIdx.x; w < W; w += blockDim.x) {
+KERNEL_ASSIGN(output[n][c][h][w], req,
+  (input[n][c][h][w] -  mean) / std);
+}
+}
+}
+}
+
+template
+void NormalizeImplCUDA(mshadow::Stream *s,
+   const T input,
+   const T output,
+   const int req,
+   const float mean_d0,
+   const float mean_d1,
+   const float mean_d2,
+   const float std_d0,
+   const float std_d1,
+   const float std_d2) {
+int blocks, H, W, C, N;
+cudaStream_t stream = mshadow::Stream::GetStream(s);
+if (std::is_same>::value) {
+// 3D Input - (C, H, W)
+N = 0;
+C = input.size(0);
+H = input.size(1);
+W = input.size(2);
+blocks = 1;
+} else {
+// 4D Input - (N, C, H, W)
+N = input.size(0);
+C = input.size(1);
+H = input.size(2);
+W = input.size(3);
+blocks = N > 0 ? N : 1;
+}
+// One block per image.
+NormalizeCudaKernel
+<<>>(input, 
output,
+req, N, H, W, C, mean_d0, mean_d1, mean_d2,
+std_d0, std_d1, std_d2);
+MSHADOW_CUDA_POST_KERNEL_CHECK(NormalizeCudaKernel);
+}
+
+// Normalize Backward Kernel for 3D input
+template
+__global__ void
+__launch_bounds__(cuda::kMaxThreadsPerBlock, 1)
+NormalizeBackwa
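The reviewer's suggestion that a 1D kernel would be more efficient amounts to launching one thread per output element and recovering the channel index from the flat NCHW offset, instead of one block per image with nested H/W loops. A plain-Python sketch of that index arithmetic (an illustration of the idea only, not MXNet or CUDA code):

```python
def normalize_nchw_flat(data, N, C, H, W, mean, std):
    """Normalize a flattened NCHW buffer in one linear pass, recovering
    the channel index from the flat offset as a 1D CUDA kernel would."""
    hw = H * W
    out = [0.0] * (N * C * H * W)
    for idx in range(N * C * H * W):   # one CUDA thread per idx
        c = (idx // hw) % C            # channel recovered from flat offset
        out[idx] = (data[idx] - mean[c]) / std[c]
    return out

# 1 image, 2 channels, 1x2 spatial; per-channel means 1 and 3, stds 1.
print(normalize_nchw_flat([1, 2, 3, 4], 1, 2, 1, 2, [1, 3], [1, 1]))
# -> [0.0, 1.0, 0.0, 1.0]
```

With this layout every thread does the same amount of work and memory accesses are fully coalesced along the flat index, which is the efficiency argument being made.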

[GitHub] gigasquid commented on issue #14142: make rat-excludes compliant with apache release policy

2019-02-13 Thread GitBox
gigasquid commented on issue #14142: make rat-excludes compliant with apache 
release policy
URL: https://github.com/apache/incubator-mxnet/pull/14142#issuecomment-463340123
 
 
   The Beam project seems to embed the licenses in their markdown files. Maybe 
we can do the same thing? 
   Example: 
https://raw.githubusercontent.com/apache/beam/88acc8eb84c128bab6f8c655cdbba9270f44b94c/website/src/get-started/quickstart-go.md




[GitHub] yuxihu commented on issue #14136: Add pin_device_id option to Gluon DataLoader

2019-02-13 Thread GitBox
yuxihu commented on issue #14136: Add pin_device_id option to Gluon DataLoader
URL: https://github.com/apache/incubator-mxnet/pull/14136#issuecomment-463336378
 
 
   @mxnet-label-bot update [Gluon, pr-awaiting-merge]




[GitHub] perdasilva opened a new pull request #14147: [WIP] Updates base images for arm builds

2019-02-13 Thread GitBox
perdasilva opened a new pull request #14147: [WIP] Updates base images for arm 
builds
URL: https://github.com/apache/incubator-mxnet/pull/14147
 
 
   ## Description ##
   Fixes broken build for 1.3.x branch: 
http://jenkins.mxnet-ci.amazon-ml.com/blue/organizations/jenkins/incubator-mxnet/detail/v1.3.x/127/pipeline/117
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [x] The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to 
the relevant [JIRA issue](https://issues.apache.org/jira/projects/MXNET/issues) 
created (except PRs with tiny changes)
   - [x] Changes are complete (i.e. I finished coding on this PR)
   - [x] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [x] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, 
the source of the dataset, expected performance on the test set, and a reference 
to the original paper if applicable
   - Check the API doc at 
http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [x] To the best of my knowledge, examples are either not affected by this 
change, or have been fixed to be compatible with this change
   
   ### Changes ###
   
   ## Comments ##
   



