[GitHub] daleydeng opened a new pull request #13559: fix for opencv4

2018-12-05 Thread GitBox
daleydeng opened a new pull request #13559: fix for opencv4
URL: https://github.com/apache/incubator-mxnet/pull/13559
 
 
   ## Description ##
   Fix compatibility with OpenCV 4, where the flag constants must now be prefixed with the cv:: namespace.
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] TaoLv commented on issue #13544: turn on Sphinx warnings as errors

2018-12-05 Thread GitBox
TaoLv commented on issue #13544: turn on Sphinx warnings as errors
URL: https://github.com/apache/incubator-mxnet/pull/13544#issuecomment-444780008
 
 
   @aaronmarkham I have submitted a PR to revert #13478 and am waiting for CI to finish.




[GitHub] TaoLv opened a new pull request #13558: Revert "Bumped minor version from 1.4.0 to 1.5.0 on master, updated License file"

2018-12-05 Thread GitBox
TaoLv opened a new pull request #13558: Revert "Bumped minor version from 1.4.0 
to 1.5.0 on master, updated License file"
URL: https://github.com/apache/incubator-mxnet/pull/13558
 
 
   Reverts apache/incubator-mxnet#13478
   
   #13478 brought in some unrelated code changes, including those from #13503, and would cause build issues for MXNet.
   
   @srochel Feel free to submit a new PR with your changes.




[incubator-mxnet] 04/09: Revert "Fix #13521 (#13537)"

2018-12-05 Thread taolv
This is an automated email from the ASF dual-hosted git repository.

taolv pushed a commit to branch revert-13478-master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git

commit e5271672a89f330548575507abda46cd08bbfbc4
Author: Tao Lv 
AuthorDate: Thu Dec 6 15:34:18 2018 +0800

Revert "Fix #13521 (#13537)"

This reverts commit f6b4665995f8f8ff32862a029b2074475d8467eb.



[incubator-mxnet] branch revert-13478-master created (now b228241)

2018-12-05 Thread taolv
This is an automated email from the ASF dual-hosted git repository.

taolv pushed a change to branch revert-13478-master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


  at b228241  Revert "Bumped minor version from 1.4.0 to 1.5.0 on master, updated License file (#13478)"

This branch includes the following new commits:

 new d145e7d  Revert "Chi_square_check for discrete distribution fix (#13543)"
 new 61a0b69  Revert "Updated docs for randint operator (#13541)"
 new b9096f4  Revert "Simplifications and some fun stuff for the MNIST Gluon tutorial (#13094)"
 new e527167  Revert "Fix #13521 (#13537)"
 new 66e38c3  Revert "Add a retry to qemu_provision (#13551)"
 new c014a4b  Revert "[MXNET-769] Use MXNET_HOME in a tempdir in windows to prevent access denied due t… (#13531)"
 new 30f2bde  Revert "[MXNET-1249] Fix Object Detector Performance with GPU (#13522)"
 new 171786e  Revert "Fixing a 404 in the ubuntu setup doc (#13542)"
 new b228241  Revert "Bumped minor version from 1.4.0 to 1.5.0 on master, updated License file (#13478)"

The 9 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.




[incubator-mxnet] 02/09: Revert "Updated docs for randint operator (#13541)"

2018-12-05 Thread taolv
This is an automated email from the ASF dual-hosted git repository.

taolv pushed a commit to branch revert-13478-master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git

commit 61a0b69607eff3b15f3ea6bb60f4115759bf547c
Author: Tao Lv 
AuthorDate: Thu Dec 6 15:34:18 2018 +0800

Revert "Updated docs for randint operator (#13541)"

This reverts commit e0ff3c36ee171386fef01fb86c54c343e4b04c14.



[incubator-mxnet] 06/09: Revert "[MXNET-769] Use MXNET_HOME in a tempdir in windows to prevent access denied due t… (#13531)"

2018-12-05 Thread taolv
This is an automated email from the ASF dual-hosted git repository.

taolv pushed a commit to branch revert-13478-master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git

commit c014a4b90372ea72d4fbb5c71d29d4209235fe46
Author: Tao Lv 
AuthorDate: Thu Dec 6 15:34:18 2018 +0800

Revert "[MXNET-769] Use MXNET_HOME in a tempdir in windows to prevent 
access denied due t… (#13531)"

This reverts commit bd8e0f8356676749ecae16ec38a366b4cc00bf15.



[incubator-mxnet] 08/09: Revert "Fixing a 404 in the ubuntu setup doc (#13542)"

2018-12-05 Thread taolv
This is an automated email from the ASF dual-hosted git repository.

taolv pushed a commit to branch revert-13478-master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git

commit 171786e6a5d7aa61107d901e1eef0f77067df5e7
Author: Tao Lv 
AuthorDate: Thu Dec 6 15:34:18 2018 +0800

Revert "Fixing a 404 in the ubuntu setup doc (#13542)"

This reverts commit cb0db290adcfd0fce956d02c234f81d453e41013.



[incubator-mxnet] 09/09: Revert "Bumped minor version from 1.4.0 to 1.5.0 on master, updated License file (#13478)"

2018-12-05 Thread taolv
This is an automated email from the ASF dual-hosted git repository.

taolv pushed a commit to branch revert-13478-master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git

commit b2282417b8b2bbba9abc11bf96b235d6674f9731
Author: Tao Lv 
AuthorDate: Thu Dec 6 15:34:18 2018 +0800

Revert "Bumped minor version from 1.4.0 to 1.5.0 on master, updated License 
file (#13478)"

This reverts commit 40db61908000ee86d21aac847ff2225807d6c168.
---
 CMakeLists.txt |  1 -
 LICENSE| 94 ++
 Makefile   |  9 +--
 R-package/DESCRIPTION  | 10 +--
 ci/docker/runtime_functions.sh |  3 +
 ci/jenkins/Jenkins_steps.groovy|  8 +-
 contrib/clojure-package/README.md  | 16 ++--
 .../examples/cnn-text-classification/project.clj   |  2 +-
 contrib/clojure-package/examples/gan/project.clj   |  2 +-
 .../examples/imclassification/project.clj  |  2 +-
 .../clojure-package/examples/module/project.clj|  2 +-
 .../examples/multi-label/project.clj   |  2 +-
 .../examples/neural-style/project.clj  |  2 +-
 .../examples/pre-trained-models/project.clj|  2 +-
 .../clojure-package/examples/profiler/project.clj  |  2 +-
 contrib/clojure-package/examples/rnn/project.clj   |  2 +-
 .../clojure-package/examples/tutorial/project.clj  |  6 +-
 .../examples/visualization/project.clj |  2 +-
 contrib/clojure-package/project.clj|  4 +-
 docs/api/python/symbol/contrib.md  |  3 -
 docs/tutorials/scala/mxnet_scala_on_intellij.md|  4 +-
 include/mxnet/base.h   |  2 +-
 mkldnn.mk  | 12 +--
 python/mxnet/libinfo.py|  2 +-
 scala-package/assembly/linux-x86_64-cpu/pom.xml|  8 +-
 scala-package/assembly/linux-x86_64-gpu/pom.xml|  8 +-
 scala-package/assembly/osx-x86_64-cpu/pom.xml  |  8 +-
 scala-package/assembly/pom.xml |  2 +-
 scala-package/core/pom.xml |  6 +-
 scala-package/examples/pom.xml |  6 +-
 scala-package/infer/pom.xml|  4 +-
 scala-package/init-native/linux-x86_64/pom.xml |  4 +-
 scala-package/init-native/osx-x86_64/pom.xml   |  4 +-
 scala-package/init-native/pom.xml  |  2 +-
 scala-package/init/pom.xml |  2 +-
 scala-package/macros/pom.xml   |  6 +-
 scala-package/native/linux-x86_64-cpu/pom.xml  |  4 +-
 scala-package/native/linux-x86_64-gpu/pom.xml  |  4 +-
 scala-package/native/osx-x86_64-cpu/pom.xml|  4 +-
 scala-package/native/pom.xml   |  2 +-
 scala-package/pom.xml  |  2 +-
 scala-package/spark/pom.xml|  4 +-
 snapcraft.yaml |  2 +-
 tests/cpp/unittest.mk  |  8 +-
 .../train_mxnet_legacy_models.sh   |  4 +-
 tests/python/mkl/test_mkldnn.py|  6 +-
 tests/python/mkl/test_mkldnn_install.py| 56 +
 47 files changed, 158 insertions(+), 192 deletions(-)

diff --git a/CMakeLists.txt b/CMakeLists.txt
index 1617056..3b8bbd2 100644
--- a/CMakeLists.txt
+++ b/CMakeLists.txt
@@ -227,7 +227,6 @@ if(USE_MKLDNN)
   include(cmake/DownloadMKLML.cmake)
   # CPU architecture (e.g., C5) can't run on another architecture (e.g., g3).
   if(NOT MSVC)
-set(MKLDNN_LIBRARY_TYPE "STATIC" CACHE INTERNAL "" FORCE)
 set(ARCH_OPT_FLAGS "-mtune=generic")
   else()
 set(CMAKE_CXX_FLAGS_DEBUG "${CMAKE_CXX_FLAGS_DEBUG} /EHsc")
diff --git a/LICENSE b/LICENSE
index 2eb9c32..a8b57e5 100644
--- a/LICENSE
+++ b/LICENSE
@@ -218,20 +218,16 @@
 1. MXNet Cpp-package - For details, /cpp-package/LICENSE
 2. MXNet rcnn - For details, see, example/rcnn/LICENSE
 3. scala-package - For details, see, scala-package/LICENSE
-4. Warp-CTC - For details, see, 3rdparty/ctc_include/LICENSE
+4. Warp-CTC - For details, see, src/operator/contrib/ctc_include/LICENSE
 5. 3rdparty/dlpack - For details, see, 3rdparty/dlpack/LICENSE
 6. 3rdparty/dmlc-core - For details, see, 3rdparty/dmlc-core/LICENSE
 7. 3rdparty/mshadow - For details, see, 3rdparty/mshadow/LICENSE
 8. 3rdparty/tvm - For details, see, 3rdparty/tvm/LICENSE
 9. 3rdparty/tvm/dmlc-core - For details, see, 3rdparty/tvm/dmlc-core/LICENSE
-10. 3rdparty/tvm/dlpack - For details, see, 3rdparty/tvm/3rdparty/dlpack/LICENSE
-11. 3rdparty/tvm/nnvm - For details, see, 3rdparty/tvm/nnvm/LICENSE
-12. 3rdparty/ps-lite - For details, see, 3rdparty/ps-lite/LICENSE
-13. 3rdparty/mkldnn - For details, see, 3rdparty/mkldnn/LICENSE
-14. googlemock scripts/generator - For details, 

[incubator-mxnet] 01/09: Revert "Chi_square_check for discrete distribution fix (#13543)"

2018-12-05 Thread taolv
This is an automated email from the ASF dual-hosted git repository.

taolv pushed a commit to branch revert-13478-master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git

commit d145e7dbd34f2a2fd5393a7f15d403e31bcde171
Author: Tao Lv 
AuthorDate: Thu Dec 6 15:34:18 2018 +0800

Revert "Chi_square_check for discrete distribution fix (#13543)"

This reverts commit cf6e8cbd035bf315b3e8280416468a629c780d03.



[incubator-mxnet] 07/09: Revert "[MXNET-1249] Fix Object Detector Performance with GPU (#13522)"

2018-12-05 Thread taolv
This is an automated email from the ASF dual-hosted git repository.

taolv pushed a commit to branch revert-13478-master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git

commit 30f2bde4da0ae4973050f417efbb29fa30b2639e
Author: Tao Lv 
AuthorDate: Thu Dec 6 15:34:18 2018 +0800

Revert "[MXNET-1249] Fix Object Detector Performance with GPU (#13522)"

This reverts commit 1c8972c3c8f832519364916865541f48597581c7.



[incubator-mxnet] 05/09: Revert "Add a retry to qemu_provision (#13551)"

2018-12-05 Thread taolv
This is an automated email from the ASF dual-hosted git repository.

taolv pushed a commit to branch revert-13478-master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git

commit 66e38c37a8bf97e9b7607a080d02a487f1eba0e0
Author: Tao Lv 
AuthorDate: Thu Dec 6 15:34:18 2018 +0800

Revert "Add a retry to qemu_provision (#13551)"

This reverts commit f6f840110d74111f98c20eab5b08d64a46ebf0cd.



[incubator-mxnet] 03/09: Revert "Simplifications and some fun stuff for the MNIST Gluon tutorial (#13094)"

2018-12-05 Thread taolv
This is an automated email from the ASF dual-hosted git repository.

taolv pushed a commit to branch revert-13478-master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git

commit b9096f42fd0cde31baca79efadcc5533d67df644
Author: Tao Lv 
AuthorDate: Thu Dec 6 15:34:18 2018 +0800

Revert "Simplifications and some fun stuff for the MNIST Gluon tutorial 
(#13094)"

This reverts commit 8bbac827742c21607a863137792f03bd09847419.



[GitHub] TaoLv commented on issue #13478: Bumped minor version from 1.4.0 to 1.5.0 on master, updated License file

2018-12-05 Thread GitBox
TaoLv commented on issue #13478: Bumped minor version from 1.4.0 to 1.5.0 on 
master, updated License file
URL: https://github.com/apache/incubator-mxnet/pull/13478#issuecomment-444775918
 
 
   @srochel I will revert this PR on the master branch to mitigate the issues. Feel free to submit a new PR with your changes. Sorry for any inconvenience.




[GitHub] yzhliu commented on issue #13543: Chi_square_check for discrete distribution fix

2018-12-05 Thread GitBox
yzhliu commented on issue #13543: Chi_square_check for discrete distribution fix
URL: https://github.com/apache/incubator-mxnet/pull/13543#issuecomment-444774416
 
 
   Nice catch!




[GitHub] Vikas89 commented on issue #13526: distributed training van.cc Check failed

2018-12-05 Thread GitBox
Vikas89 commented on issue #13526: distributed training  van.cc Check failed
URL: 
https://github.com/apache/incubator-mxnet/issues/13526#issuecomment-444773980
 
 
   Is this EC2 or your own host?
   Please share the command you use to SSH into the master instance.
   1/ Use ssh -A to log in to the master.
   2/ On the master instance, run the command below (this lets the master use your agent to log in to the workers):
   # set up ssh agent forwarding
   sed -i "s/^#\(\s\+\)ForwardAgent\(\s\+\)no/\ \1ForwardAgent\2yes/g" /etc/ssh/ssh_config
   
   3/ In the host file, use IP addresses instead.
   
   If you want to launch with 1 parameter server and 2 workers,
   your host file should look like:
   192.168.113.227
   192.168.113.228
   192.168.113.229
   
   4/ Use the full path to the host file.
   
   5/ Launch with the command below: the -s option says to use 1 server, and -n says to use 2 workers. The 1st host in the host file will be the server, and the 2nd and 3rd hosts will be workers.
   launch.py -s 1 -n 2 




[GitHub] srochel commented on issue #13478: Bumped minor version from 1.4.0 to 1.5.0 on master, updated License file

2018-12-05 Thread GitBox
srochel commented on issue #13478: Bumped minor version from 1.4.0 to 1.5.0 on 
master, updated License file
URL: https://github.com/apache/incubator-mxnet/pull/13478#issuecomment-444773282
 
 
   @TaoLv - sorry about this. Looks like I made a mistake with the PR and see that mkldnn-related changes got included. I need to check my setup.
   What do you suggest is the best way to correct this? Should the PR be reverted, or is there a better way to exclude the mkldnn-related changes? Feel free to revert the PR or advise on the best solution. I will check in the morning PST.




[incubator-mxnet] branch master updated (e0ff3c3 -> cf6e8cb)

2018-12-05 Thread liuyizhi
This is an automated email from the ASF dual-hosted git repository.

liuyizhi pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from e0ff3c3  Updated docs for randint operator (#13541)
 add cf6e8cb  Chi_square_check for discrete distribution fix (#13543)

No new revisions were added by this update.

Summary of changes:
 python/mxnet/test_utils.py | 9 ++---
 1 file changed, 6 insertions(+), 3 deletions(-)



[GitHub] yzhliu closed pull request #13543: Chi_square_check for discrete distribution fix

2018-12-05 Thread GitBox
yzhliu closed pull request #13543: Chi_square_check for discrete distribution 
fix
URL: https://github.com/apache/incubator-mxnet/pull/13543
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/python/mxnet/test_utils.py b/python/mxnet/test_utils.py
index 14875601cd2..26f7762ca9b 100644
--- a/python/mxnet/test_utils.py
+++ b/python/mxnet/test_utils.py
@@ -1911,12 +1911,15 @@ def chi_square_check(generator, buckets, probs, nsamples=100):
     if continuous_dist:
         sample_bucket_ids = np.searchsorted(buckets_npy, samples, side='right')
     else:
-        sample_bucket_ids = samples
+        sample_bucket_ids = np.array(samples)
     if continuous_dist:
         sample_bucket_ids = sample_bucket_ids // 2
     obs_freq = np.zeros(shape=len(buckets), dtype=np.int)
-    for i in range(len(buckets)):
-        obs_freq[i] = (sample_bucket_ids == i).sum()
+    for i, _ in enumerate(buckets):
+        if continuous_dist:
+            obs_freq[i] = (sample_bucket_ids == i).sum()
+        else:
+            obs_freq[i] = (sample_bucket_ids == buckets[i]).sum()
     _, p = ss.chisquare(f_obs=obs_freq, f_exp=expected_freq)
     return p, obs_freq, expected_freq
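
For context only (this sketch is not part of the merged diff, and the buckets and samples below are made up), a minimal numpy illustration of why the discrete case must compare samples against bucket values rather than bucket indices:

   ```
import numpy as np

# Hypothetical discrete support that is not simply 0..N-1.
buckets = [2, 5, 9]
samples = np.array([2, 2, 5, 9, 9, 9])

# Old logic: samples were treated as bucket indices, so the value 2 is
# counted under index 2 and the values 5 and 9 are never counted at all.
obs_freq_old = np.array([(samples == i).sum() for i in range(len(buckets))])
print(obs_freq_old)   # [0 0 2]

# Fixed logic: compare the samples against the bucket values themselves.
obs_freq_new = np.array([(samples == b).sum() for b in buckets])
print(obs_freq_new)   # [2 1 3]
   ```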
 


 




[GitHub] CyberZHG opened a new pull request #13557: Fix BatchNorm converter for CoreML when fix_gamma=True

2018-12-05 Thread GitBox
CyberZHG opened a new pull request #13557: Fix BatchNorm converter for CoreML 
when fix_gamma=True
URL: https://github.com/apache/incubator-mxnet/pull/13557
 
 
   ## Description ##
   
   Set gamma values to ones while converting batch normalization in case of 
accidental changes of gamma.
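   
   A minimal sketch of the idea, assuming the usual MXNet semantics that `fix_gamma=True` makes BatchNorm behave as if gamma were all ones; the helper name below is hypothetical and not the CoreML converter's actual API:
   ```
import numpy as np

def effective_gamma(gamma, fix_gamma):
    # When fix_gamma is True, MXNet's BatchNorm ignores the stored gamma and
    # acts as if it were all ones, so a converter should export ones rather
    # than whatever values happen to sit in the parameter file.
    return np.ones_like(gamma) if fix_gamma else gamma

gamma = np.array([0.5, 2.0, 1.5])               # accidentally modified values
print(effective_gamma(gamma, fix_gamma=True))   # [1. 1. 1.]
print(effective_gamma(gamma, fix_gamma=False))  # [0.5 2.  1.5]
   ```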
   
   ## Checklist ##
   ### Essentials ###
   
   - [x] Changes are complete
   - [x] All changes have test coverage:
   - [x] Code is well-documented: 
   - [x] To the best of my knowledge, examples are either not affected by this change, or have been fixed to be compatible with this change
   
   




[GitHub] TaoLv edited a comment on issue #13362: Add NHWC layout support to Pooling (cuDNN only)

2018-12-05 Thread GitBox
TaoLv edited a comment on issue #13362: Add NHWC layout support to Pooling 
(cuDNN only)
URL: https://github.com/apache/incubator-mxnet/pull/13362#issuecomment-444771242
 
 
   Tested this PR on CPU w/ and w/o MKL-DNN and got the error messages below.
   w/o MKL-DNN:
   ```
   Traceback (most recent call last):
     File "test_pool_nhwc.py", line 28, in <module>
       test_pooling_nhwc()
     File "test_pool_nhwc.py", line 21, in test_pooling_nhwc
       out_nhwc = mx.nd.Pooling(data=inp_nhwc, kernel=kernel, pool_type='max', global_pool=False, stride=stride, pad=pad, layout='NHWC')
     File "<string>", line 132, in Pooling
     File "/home/lvtao/Workspace/mxnet/python/mxnet/_ctypes/ndarray.py", line 92, in _imperative_invoke
       ctypes.byref(out_stypes)))
     File "/home/lvtao/Workspace/mxnet/python/mxnet/base.py", line 252, in check_call
       raise MXNetError(py_str(_LIB.MXGetLastError()))
   mxnet.base.MXNetError: [12:35:52] src/operator/nn/./pooling-inl.h:167: Check failed: param_.layout.value() == kNCW || param_.layout.value() == kNCHW || param_.layout.value() == kNCDHW Need CuDNN for layout support
   ```
   w/ MKL-DNN:
   ```
   terminate called after throwing an instance of 'dmlc::Error'
 what():  [12:28:11] src/engine/./threaded_engine.h:380: std::exception
   A fatal error occurred in asynchronous engine operation. If you do not know 
what caused this error, you can try set environment variable MXNET_ENGINE_TYPE 
to NaiveEngine and run with debugger (i.e. gdb). This will force all operations 
to be synchronous and backtrace will give you the series of calls that lead to 
this error. Remember to set MXNET_ENGINE_TYPE back to empty after debugging.
   
   Stack trace returned 10 entries:
   [bt] (0) 
/home/lvtao/Workspace/mxnet/python/mxnet/../../lib/libmxnet.so(dmlc::StackTrace()+0x42)
 [0x7f547889afb6]
   [bt] (1) 
/home/lvtao/Workspace/mxnet/python/mxnet/../../lib/libmxnet.so(dmlc::LogMessageFatal::~LogMessageFatal()+0x1b)
 [0x7f547889b233]
   [bt] (2) 
/home/lvtao/Workspace/mxnet/python/mxnet/../../lib/libmxnet.so(mxnet::engine::ThreadedEngine::ExecuteOprBlock(mxnet::RunContext,
 mxnet::engine::OprBlock*)+0x66e) [0x7f547b7ad24c]
   [bt] (3) /home/lvtao/Workspace/mxnet/python/mxnet/../../lib/libmxnet.so(void 
mxnet::engine::ThreadedEnginePerDevice::CPUWorker<(dmlc::ConcurrentQueueType)0>(mxnet::Context,
 
mxnet::engine::ThreadedEnginePerDevice::ThreadWorkerBlock<(dmlc::ConcurrentQueueType)0>*,
 std::shared_ptr const&)+0x8b) [0x7f547b7bfe9d]
   [bt] (4) 
/home/lvtao/Workspace/mxnet/python/mxnet/../../lib/libmxnet.so(mxnet::engine::ThreadedEnginePerDevice::PushToExecute(mxnet::engine::OprBlock*,
 bool)::{lambda()#1}::operator()() 
const::{lambda(std::shared_ptr)#1}::operator()(dmlc::ManualEvent)
 const+0x33) [0x7f547b7be22b]
   [bt] (5) 
/home/lvtao/Workspace/mxnet/python/mxnet/../../lib/libmxnet.so(std::_Function_handler), 
mxnet::engine::ThreadedEnginePerDevice::PushToExecute(mxnet::engine::OprBlock*, 
bool)::{lambda()#1}::operator()() 
const::{lambda(std::shared_ptr)#1}>::_M_invoke(std::_Any_data
 const&, std::shared_ptr)+0x4a) [0x7f547b7c2751]
   [bt] (6) 
/home/lvtao/Workspace/mxnet/python/mxnet/../../lib/libmxnet.so(std::function)>::operator()(std::shared_ptr)
 const+0x5c) [0x7f547b7b6142]
   [bt] (7) /home/lvtao/Workspace/mxnet/python/mxnet/../../lib/libmxnet.so(void 
std::_Bind_simple)> 
(std::shared_ptr)>::_M_invoke<0ul>(std::_Index_tuple<0ul>)+0x56)
 [0x7f547b7b6036]
   [bt] (8) 
/home/lvtao/Workspace/mxnet/python/mxnet/../../lib/libmxnet.so(std::_Bind_simple)> 
(std::shared_ptr)>::operator()()+0x1b) [0x7f547b7b5ee1]
   [bt] (9) 
/home/lvtao/Workspace/mxnet/python/mxnet/../../lib/libmxnet.so(std::thread::_Impl)> (std::shared_ptr)> 
>::_M_run()+0x1c) [0x7f547b7b5e30]
   
   
   Aborted (core dumped)
   ```
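   
   For reference, a rough reconstruction of the test the tracebacks refer to; `test_pool_nhwc.py` is not shown in this thread, so the shapes and the kernel/stride/pad values below are assumptions:
   ```
import mxnet as mx

def test_pooling_nhwc():
    kernel = (3, 3)
    stride = (1, 1)
    pad = (0, 0)
    # NHWC input: (batch, height, width, channels)
    inp_nhwc = mx.nd.random.uniform(shape=(1, 8, 8, 4))
    out_nhwc = mx.nd.Pooling(data=inp_nhwc, kernel=kernel, pool_type='max',
                             global_pool=False, stride=stride, pad=pad,
                             layout='NHWC')
    return out_nhwc

if __name__ == '__main__':
    test_pooling_nhwc()
   ```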
   




[GitHub] TaoLv commented on issue #13362: Add NHWC layout support to Pooling (cuDNN only)

2018-12-05 Thread GitBox
TaoLv commented on issue #13362: Add NHWC layout support to Pooling (cuDNN only)
URL: https://github.com/apache/incubator-mxnet/pull/13362#issuecomment-444771242
 
 
   Tested this PR on CPU w/ and w/o MKL-DNN  and got error message as below.
   w/o MKL-DNN:
   ```
   Traceback (most recent call last):
 File "test_pool_nhwc.py", line 28, in 
   test_pooling_nhwc()
 File "test_pool_nhwc.py", line 21, in test_pooling_nhwc
   out_nhwc = mx.nd.Pooling(data=inp_nhwc, kernel=kernel, pool_type='max', 
global_pool=False, stride=stride, pad=pad, layout='NHWC')
 File "", line 132, in Pooling
 File "/home/lvtao/Workspace/mxnet/python/mxnet/_ctypes/ndarray.py", line 
92, in _imperative_invoke
   ctypes.byref(out_stypes)))
 File "/home/lvtao/Workspace/mxnet/python/mxnet/base.py", line 252, in 
check_call
   raise MXNetError(py_str(_LIB.MXGetLastError()))
   mxnet.base.MXNetError: [12:35:52] src/operator/nn/./pooling-inl.h:167: Check 
failed: param_.layout.value() == kNCW || param_.layout.value() == kNCHW || 
param_.layout.value() == kNCDHW Need CuDNN for layout support
   ```
   w/ MKL-DNN:
   ```
   terminate called after throwing an instance of 'dmlc::Error'
 what():  [12:28:11] src/engine/./threaded_engine.h:380: std::exception
   A fatal error occurred in asynchronous engine operation. If you do not know 
what caused this error, you can try set environment variable MXNET_ENGINE_TYPE 
to NaiveEngine and run with debugger (i.e. gdb). This will force all operations 
to be synchronous and backtrace will give you the series of calls that lead to 
this error. Remember to set MXNET_ENGINE_TYPE back to empty after debugging.
   
   Stack trace returned 10 entries:
   [bt] (0) 
/home/lvtao/Workspace/mxnet/python/mxnet/../../lib/libmxnet.so(dmlc::StackTrace()+0x42)
 [0x7f547889afb6]
   [bt] (1) 
/home/lvtao/Workspace/mxnet/python/mxnet/../../lib/libmxnet.so(dmlc::LogMessageFatal::~LogMessageFatal()+0x1b)
 [0x7f547889b233]
   [bt] (2) 
/home/lvtao/Workspace/mxnet/python/mxnet/../../lib/libmxnet.so(mxnet::engine::ThreadedEngine::ExecuteOprBlock(mxnet::RunContext,
 mxnet::engine::OprBlock*)+0x66e) [0x7f547b7ad24c]
   [bt] (3) /home/lvtao/Workspace/mxnet/python/mxnet/../../lib/libmxnet.so(void 
mxnet::engine::ThreadedEnginePerDevice::CPUWorker<(dmlc::ConcurrentQueueType)0>(mxnet::Context,
 
mxnet::engine::ThreadedEnginePerDevice::ThreadWorkerBlock<(dmlc::ConcurrentQueueType)0>*,
 std::shared_ptr const&)+0x8b) [0x7f547b7bfe9d]
   [bt] (4) 
/home/lvtao/Workspace/mxnet/python/mxnet/../../lib/libmxnet.so(mxnet::engine::ThreadedEnginePerDevice::PushToExecute(mxnet::engine::OprBlock*,
 bool)::{lambda()#1}::operator()() 
const::{lambda(std::shared_ptr)#1}::operator()(dmlc::ManualEvent)
 const+0x33) [0x7f547b7be22b]
   [bt] (5) 
/home/lvtao/Workspace/mxnet/python/mxnet/../../lib/libmxnet.so(std::_Function_handler), 
mxnet::engine::ThreadedEnginePerDevice::PushToExecute(mxnet::engine::OprBlock*, 
bool)::{lambda()#1}::operator()() 
const::{lambda(std::shared_ptr)#1}>::_M_invoke(std::_Any_data
 const&, std::shared_ptr)+0x4a) [0x7f547b7c2751]
   [bt] (6) 
/home/lvtao/Workspace/mxnet/python/mxnet/../../lib/libmxnet.so(std::function)>::operator()(std::shared_ptr)
 const+0x5c) [0x7f547b7b6142]
   [bt] (7) /home/lvtao/Workspace/mxnet/python/mxnet/../../lib/libmxnet.so(void 
std::_Bind_simple)> 
(std::shared_ptr)>::_M_invoke<0ul>(std::_Index_tuple<0ul>)+0x56)
 [0x7f547b7b6036]
   [bt] (8) 
/home/lvtao/Workspace/mxnet/python/mxnet/../../lib/libmxnet.so(std::_Bind_simple)> 
(std::shared_ptr)>::operator()()+0x1b) [0x7f547b7b5ee1]
   [bt] (9) 
/home/lvtao/Workspace/mxnet/python/mxnet/../../lib/libmxnet.so(std::thread::_Impl)> (std::shared_ptr)> 
>::_M_run()+0x1c) [0x7f547b7b5e30]
   
   
   Aborted (core dumped)
   ```
   




[incubator-mxnet-site] branch asf-site updated: Bump the publish timestamp.

2018-12-05 Thread zhasheng
This is an automated email from the ASF dual-hosted git repository.

zhasheng pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new 4f98049  Bump the publish timestamp.
4f98049 is described below

commit 4f98049d79dd9049038c97fdc5780aea8d51484e
Author: mxnet-ci 
AuthorDate: Thu Dec 6 06:53:47 2018 +

Bump the publish timestamp.
---
 date.txt | 1 +
 1 file changed, 1 insertion(+)

diff --git a/date.txt b/date.txt
new file mode 100644
index 000..c709823
--- /dev/null
+++ b/date.txt
@@ -0,0 +1 @@
+Thu Dec  6 06:53:47 UTC 2018



[GitHub] aaronmarkham edited a comment on issue #13544: turn on Sphinx warnings as errors

2018-12-05 Thread GitBox
aaronmarkham edited a comment on issue #13544: turn on Sphinx warnings as errors
URL: https://github.com/apache/incubator-mxnet/pull/13544#issuecomment-444761968
 
 
   Well that was a bit baffling... I rebased and the bug fixes were gone. Looks 
like this [commit for 
1.4.x](https://github.com/apache/incubator-mxnet/commit/40db61908000ee86d21aac847ff2225807d6c168#diff-631586dbe3b11092920c7268fdc499fc)
 reverted at least one of the changes in master, including the docs bug fixes. 
I think that commit should be reverted/fixed as that was probably not the 
intention. 
   
   @srochel - please take a look. `docs/api/python/symbol/contrib.md` shouldn't 
be in there. I also see some mkldnn stuff in there that isn't really a version 
bump.




[GitHub] aaronmarkham commented on issue #13544: turn on Sphinx warnings as errors

2018-12-05 Thread GitBox
aaronmarkham commented on issue #13544: turn on Sphinx warnings as errors
URL: https://github.com/apache/incubator-mxnet/pull/13544#issuecomment-444761968
 
 
   Well that was a bit baffling... I rebased and the bug fixes were gone. Looks 
like this [commit for 
1.4.x](https://github.com/apache/incubator-mxnet/commit/40db61908000ee86d21aac847ff2225807d6c168#diff-631586dbe3b11092920c7268fdc499fc)
 reverted a bunch of changes in master, including the docs bug fixes. I think 
that commit should be reverted/fixed as that was probably not the intention. 
   
   @srochel - please take a look. `docs/api/python/symbol/contrib.md` shouldn't 
be in there. I also see some mkldnn stuff in there that isn't really a version 
bump.




[GitHub] TaoLv edited a comment on issue #13478: Bumped minor version from 1.4.0 to 1.5.0 on master, updated License file

2018-12-05 Thread GitBox
TaoLv edited a comment on issue #13478: Bumped minor version from 1.4.0 to 
1.5.0 on master, updated License file
URL: https://github.com/apache/incubator-mxnet/pull/13478#issuecomment-444759570
 
 
   @srochel @sergeykolychev It seems this PR was messed up by code rebasing, and a lot of unrelated code was merged into master. Please take a look at the changed files. Thanks.
   One problem I noticed is that #13503 was reverted on master by #13540 but re-introduced by this PR.




[GitHub] aaronmarkham commented on issue #13544: turn on Sphinx warnings as errors

2018-12-05 Thread GitBox
aaronmarkham commented on issue #13544: turn on Sphinx warnings as errors
URL: https://github.com/apache/incubator-mxnet/pull/13544#issuecomment-444759810
 
 
   It failed on the website pipeline. So, yay?
   The failure is on a docs bug we already fixed here:
   
https://github.com/apache/incubator-mxnet/pull/13539/files#diff-631586dbe3b11092920c7268fdc499fc
   Looks like I need to rebase and try again. Actually a good test, assuming it passes after this.




[GitHub] TaoLv commented on issue #13478: Bumped minor version from 1.4.0 to 1.5.0 on master, updated License file

2018-12-05 Thread GitBox
TaoLv commented on issue #13478: Bumped minor version from 1.4.0 to 1.5.0 on 
master, updated License file
URL: https://github.com/apache/incubator-mxnet/pull/13478#issuecomment-444759570
 
 
   @srochel @sergeykolychev It seems this PR was messed up by code rebasing, and a lot of unrelated code was merged into master. Please take a look at the changed files. Thanks.




[incubator-mxnet] branch master updated: Updated docs for randint operator (#13541)

2018-12-05 Thread aaronmarkham
This is an automated email from the ASF dual-hosted git repository.

aaronmarkham pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new e0ff3c3  Updated docs for randint operator (#13541)
e0ff3c3 is described below

commit e0ff3c36ee171386fef01fb86c54c343e4b04c14
Author: Chaitanya Prakash Bapat 
AuthorDate: Wed Dec 5 21:58:19 2018 -0800

Updated docs for randint operator (#13541)

* updated docs for randint

* added randint in __all__ and reordered acc to categorical then 
alphabetical

* Trigger CI

* minus mxnet.symbol and alphabetical for ndarray,symbol.md

* alphabetical order
---
 docs/api/python/ndarray/ndarray.md | 20 +++-
 docs/api/python/ndarray/random.md  |  6 --
 docs/api/python/symbol/random.md   |  5 +++--
 docs/api/python/symbol/symbol.md   | 19 ++-
 python/mxnet/ndarray/random.py |  2 +-
 python/mxnet/symbol/random.py  |  2 +-
 6 files changed, 30 insertions(+), 24 deletions(-)

diff --git a/docs/api/python/ndarray/ndarray.md 
b/docs/api/python/ndarray/ndarray.md
index 6fcf1d4..6419c4e 100644
--- a/docs/api/python/ndarray/ndarray.md
+++ b/docs/api/python/ndarray/ndarray.md
@@ -587,15 +587,17 @@ The `ndarray` package provides several classes:
 .. autosummary::
 :nosignatures:
 
-mxnet.ndarray.random.uniform
-mxnet.ndarray.random.normal
-mxnet.ndarray.random.gamma
-mxnet.ndarray.random.exponential
-mxnet.ndarray.random.poisson
-mxnet.ndarray.random.negative_binomial
-mxnet.ndarray.random.generalized_negative_binomial
-mxnet.ndarray.random.multinomial
-mxnet.ndarray.random.shuffle
+random.exponential
+random.gamma
+random.generalized_negative_binomial
+random.multinomial
+random.negative_binomial
+random.normal
+random.poisson
+random.randint
+random.randn
+random.shuffle
+random.uniform
 mxnet.random.seed
 ```
 
diff --git a/docs/api/python/ndarray/random.md 
b/docs/api/python/ndarray/random.md
index 3ea611f..60c565d 100644
--- a/docs/api/python/ndarray/random.md
+++ b/docs/api/python/ndarray/random.md
@@ -31,12 +31,14 @@ In the rest of this document, we list routines provided by 
the `ndarray.random`
 exponential
 gamma
 generalized_negative_binomial
+multinomial
 negative_binomial
 normal
 poisson
-uniform
-multinomial
+randint
+randn
 shuffle
+uniform
 mxnet.random.seed
 ```
 
diff --git a/docs/api/python/symbol/random.md b/docs/api/python/symbol/random.md
index b93f641..1ecaf38 100644
--- a/docs/api/python/symbol/random.md
+++ b/docs/api/python/symbol/random.md
@@ -31,12 +31,13 @@ In the rest of this document, we list routines provided by 
the `symbol.random` p
 exponential
 gamma
 generalized_negative_binomial
+multinomial
 negative_binomial
 normal
 poisson
-uniform
-multinomial
+randint
 shuffle
+uniform
 mxnet.random.seed
 ```
 
diff --git a/docs/api/python/symbol/symbol.md b/docs/api/python/symbol/symbol.md
index a4038d7..9eba261 100644
--- a/docs/api/python/symbol/symbol.md
+++ b/docs/api/python/symbol/symbol.md
@@ -586,15 +586,16 @@ Composite multiple symbols into a new one by an operator.
 .. autosummary::
 :nosignatures:
 
-mxnet.symbol.random.uniform
-mxnet.symbol.random.normal
-mxnet.symbol.random.gamma
-mxnet.symbol.random.exponential
-mxnet.symbol.random.poisson
-mxnet.symbol.random.negative_binomial
-mxnet.symbol.random.generalized_negative_binomial
-mxnet.symbol.random.multinomial
-mxnet.symbol.random.shuffle
+random.exponential
+random.gamma
+random.generalized_negative_binomial
+random.multinomial
+random.negative_binomial
+random.normal
+random.poisson
+random.randint
+random.shuffle
+random.uniform
 mxnet.random.seed
 ```
 
diff --git a/python/mxnet/ndarray/random.py b/python/mxnet/ndarray/random.py
index fc8be57..78339a0 100644
--- a/python/mxnet/ndarray/random.py
+++ b/python/mxnet/ndarray/random.py
@@ -25,7 +25,7 @@ from .ndarray import NDArray
 
 __all__ = ['uniform', 'normal', 'randn', 'poisson', 'exponential', 'gamma',
'multinomial', 'negative_binomial', 'generalized_negative_binomial',
-   'shuffle']
+   'shuffle', 'randint']
 
 
 def _random_helper(random, sampler, params, shape, dtype, ctx, out, kwargs):
diff --git a/python/mxnet/symbol/random.py b/python/mxnet/symbol/random.py
index c5940ac..34663cd 100644
--- a/python/mxnet/symbol/random.py
+++ b/python/mxnet/symbol/random.py
@@ -23,7 +23,7 @@ from .symbol import Symbol
 
 
 __all__ = ['uniform', 'normal', 'poisson', 'exponential', 'gamma', 
'multinomial',
-   'negative_binomial', 'generalized_negative_binomial', 'shuffle']
+   'negative_binomial', 

[GitHub] aaronmarkham closed pull request #13541: Updated docs for randint operator

2018-12-05 Thread GitBox
aaronmarkham closed pull request #13541: Updated docs for randint operator
URL: https://github.com/apache/incubator-mxnet/pull/13541
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/docs/api/python/ndarray/ndarray.md 
b/docs/api/python/ndarray/ndarray.md
index 6fcf1d428d2..6419c4ed406 100644
--- a/docs/api/python/ndarray/ndarray.md
+++ b/docs/api/python/ndarray/ndarray.md
@@ -587,15 +587,17 @@ The `ndarray` package provides several classes:
 .. autosummary::
 :nosignatures:
 
-mxnet.ndarray.random.uniform
-mxnet.ndarray.random.normal
-mxnet.ndarray.random.gamma
-mxnet.ndarray.random.exponential
-mxnet.ndarray.random.poisson
-mxnet.ndarray.random.negative_binomial
-mxnet.ndarray.random.generalized_negative_binomial
-mxnet.ndarray.random.multinomial
-mxnet.ndarray.random.shuffle
+random.exponential
+random.gamma
+random.generalized_negative_binomial
+random.multinomial
+random.negative_binomial
+random.normal
+random.poisson
+random.randint
+random.randn
+random.shuffle
+random.uniform
 mxnet.random.seed
 ```
 
diff --git a/docs/api/python/ndarray/random.md 
b/docs/api/python/ndarray/random.md
index 3ea611f5c8e..60c565dd552 100644
--- a/docs/api/python/ndarray/random.md
+++ b/docs/api/python/ndarray/random.md
@@ -31,12 +31,14 @@ In the rest of this document, we list routines provided by 
the `ndarray.random`
 exponential
 gamma
 generalized_negative_binomial
+multinomial
 negative_binomial
 normal
 poisson
-uniform
-multinomial
+randint
+randn
 shuffle
+uniform
 mxnet.random.seed
 ```
 
diff --git a/docs/api/python/symbol/random.md b/docs/api/python/symbol/random.md
index b93f641334f..1ecaf38830f 100644
--- a/docs/api/python/symbol/random.md
+++ b/docs/api/python/symbol/random.md
@@ -31,12 +31,13 @@ In the rest of this document, we list routines provided by 
the `symbol.random` p
 exponential
 gamma
 generalized_negative_binomial
+multinomial
 negative_binomial
 normal
 poisson
-uniform
-multinomial
+randint
 shuffle
+uniform
 mxnet.random.seed
 ```
 
diff --git a/docs/api/python/symbol/symbol.md b/docs/api/python/symbol/symbol.md
index a4038d74174..9eba2618065 100644
--- a/docs/api/python/symbol/symbol.md
+++ b/docs/api/python/symbol/symbol.md
@@ -586,15 +586,16 @@ Composite multiple symbols into a new one by an operator.
 .. autosummary::
 :nosignatures:
 
-mxnet.symbol.random.uniform
-mxnet.symbol.random.normal
-mxnet.symbol.random.gamma
-mxnet.symbol.random.exponential
-mxnet.symbol.random.poisson
-mxnet.symbol.random.negative_binomial
-mxnet.symbol.random.generalized_negative_binomial
-mxnet.symbol.random.multinomial
-mxnet.symbol.random.shuffle
+random.exponential
+random.gamma
+random.generalized_negative_binomial
+random.multinomial
+random.negative_binomial
+random.normal
+random.poisson
+random.randint
+random.shuffle
+random.uniform
 mxnet.random.seed
 ```
 
diff --git a/python/mxnet/ndarray/random.py b/python/mxnet/ndarray/random.py
index fc8be571e2e..78339a02086 100644
--- a/python/mxnet/ndarray/random.py
+++ b/python/mxnet/ndarray/random.py
@@ -25,7 +25,7 @@
 
 __all__ = ['uniform', 'normal', 'randn', 'poisson', 'exponential', 'gamma',
'multinomial', 'negative_binomial', 'generalized_negative_binomial',
-   'shuffle']
+   'shuffle', 'randint']
 
 
 def _random_helper(random, sampler, params, shape, dtype, ctx, out, kwargs):
diff --git a/python/mxnet/symbol/random.py b/python/mxnet/symbol/random.py
index c5940ac96a5..34663cddf02 100644
--- a/python/mxnet/symbol/random.py
+++ b/python/mxnet/symbol/random.py
@@ -23,7 +23,7 @@
 
 
 __all__ = ['uniform', 'normal', 'poisson', 'exponential', 'gamma', 
'multinomial',
-   'negative_binomial', 'generalized_negative_binomial', 'shuffle']
+   'negative_binomial', 'generalized_negative_binomial', 'shuffle', 
'randint']
 
 
 def _random_helper(random, sampler, params, shape, dtype, kwargs):
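
Not part of the diff above, but for readers of the updated docs, a small usage sketch of the operator being added to the listings (assuming it follows the same keyword style as the other samplers, with low inclusive and high exclusive):

```
import mxnet as mx

# NDArray API entry added to __all__ in python/mxnet/ndarray/random.py
nd_samples = mx.nd.random.randint(low=0, high=10, shape=(2, 3))
print(nd_samples)

# Symbol API counterpart added to __all__ in python/mxnet/symbol/random.py
sym_samples = mx.sym.random.randint(low=0, high=10, shape=(2, 3))
```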


 




[GitHub] szha opened a new pull request #13556: build config for maven and pip

2018-12-05 Thread GitBox
szha opened a new pull request #13556: build config for maven and pip
URL: https://github.com/apache/incubator-mxnet/pull/13556
 
 
   ## Description ##
   build config for maven and pip
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [x] Changes are complete (i.e. I finished coding on this PR)
   - [x] To the best of my knowledge, examples are either not affected by this change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [x] build configs for variants of mxnet




[GitHub] apeforest removed a comment on issue #13362: Add NHWC layout support to Pooling (cuDNN only)

2018-12-05 Thread GitBox
apeforest removed a comment on issue #13362: Add NHWC layout support to Pooling 
(cuDNN only)
URL: https://github.com/apache/incubator-mxnet/pull/13362#issuecomment-444753750
 
 
   @DickJC123 Thanks for your detailed explanation about the operator selection 
logic. I like the way you used a boolean return type for `Forward()` and 
`Backward()` methods to choose the right operator implementation. Although it 
might be an elegant way, this method is not yet well received in the developer 
community. And all the other operators do not currently implement in this way. 
I am afraid that using it alone in this operator without a clear guideline may 
cause more confusion for prospect developers. Can you still follow the 
traditional way of declaring `Forward()` and `Backward()` function and maybe 
propose this in a separate thread to refactor the `Forward()` and `Backward()` 
with boolean return type? 




[GitHub] aaronmarkham commented on issue #13544: turn on Sphinx warnings as errors

2018-12-05 Thread GitBox
aaronmarkham commented on issue #13544: turn on Sphinx warnings as errors
URL: https://github.com/apache/incubator-mxnet/pull/13544#issuecomment-444753836
 
 
   > At what stage in the CI is this check performed? Can you please point to 
the logs where we can check and confirm?
   
   I just added it to the website pipeline in the `deploy_docs()` function. 
Look for the following test in a PR, and that's where the errors will pop up.
   
![2018-12-05_21-27-35](https://user-images.githubusercontent.com/5974205/49563400-96b50100-f8d4-11e8-8fbb-b5e4e484351d.png)
   
   I see that the [website validation 
job](http://jenkins.mxnet-ci.amazon-ml.com/job/mxnet-validation/job/website/) 
has a child 1.4.x job in addition to master. I imagine that will break if it 
also runs the `deploy_docs()` function, because many bugs that we've recently 
fixed in master have not been ported over to that branch.
   




[GitHub] apeforest commented on issue #13362: Add NHWC layout support to Pooling (cuDNN only)

2018-12-05 Thread GitBox
apeforest commented on issue #13362: Add NHWC layout support to Pooling (cuDNN 
only)
URL: https://github.com/apache/incubator-mxnet/pull/13362#issuecomment-444753750
 
 
   @DickJC123 Thanks for your detailed explanation of the operator selection logic. I like the way you used a boolean return type for the `Forward()` and `Backward()` methods to choose the right operator implementation. Although it may be an elegant approach, it has not yet been adopted in the developer community, and none of the other operators are currently implemented this way. I am afraid that using it only in this operator, without a clear guideline, may cause confusion for prospective developers. Could you follow the traditional way of declaring the `Forward()` and `Backward()` functions here, and propose refactoring `Forward()` and `Backward()` to a boolean return type in a separate thread?
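   
   For illustration only, the selection pattern under discussion might look like the following, sketched in Python rather than the real C++ operator API; every class and function name here is hypothetical:
   ```
class CuDNNPoolingOp(object):
    def forward(self, data, layout):
        if layout not in ('NCHW', 'NHWC'):
            return False      # signal "not supported here, try another op"
        # ... run the cuDNN kernel ...
        return True

class DefaultPoolingOp(object):
    def forward(self, data, layout):
        # ... reference implementation, NCHW only ...
        return layout == 'NCHW'

def run_pooling(data, layout):
    # Try the specialized implementation first and fall back if it declines.
    for op in (CuDNNPoolingOp(), DefaultPoolingOp()):
        if op.forward(data, layout):
            return
    raise RuntimeError('no pooling implementation supports layout ' + layout)
   ```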




[GitHub] SSE4 commented on issue #13400: [MXNET-1229] use OpenBLAS, lapack & OpenCV from conan

2018-12-05 Thread GitBox
SSE4 commented on issue #13400: [MXNET-1229] use OpenBLAS, lapack & OpenCV from 
conan
URL: https://github.com/apache/incubator-mxnet/pull/13400#issuecomment-444753144
 
 
   unix-gpu error is weird:
   ```
   Sending interrupt signal to process
   ```
   Any clue why it failed?




[GitHub] aaronmarkham commented on a change in pull request #13544: turn on Sphinx warnings as errors

2018-12-05 Thread GitBox
aaronmarkham commented on a change in pull request #13544: turn on Sphinx 
warnings as errors
URL: https://github.com/apache/incubator-mxnet/pull/13544#discussion_r239331752
 
 

 ##
 File path: docs/build_version_doc/build_all_version.sh
 ##
 @@ -117,6 +120,8 @@ function checkout () {
   git checkout "$repo_folder" || git branch $repo_folder 
"upstream/$repo_folder" && git checkout "$repo_folder" || exit 1
   if [ $tag == 'master' ]; then
 git pull
+# master gets warnings as errors for Sphinx builds
 
 Review comment:
   I added the `-W` option to the new docs pipeline. This should force it to 
happen any time that runs. And the website build flow will still skip it for 
old versions. I'm not totally familiar with new pipelines, so please let me 
know if you think this is going to be an issue or not.




[GitHub] apeforest commented on issue #13555: [MXNET-1253] fix control_flow_op

2018-12-05 Thread GitBox
apeforest commented on issue #13555: [MXNET-1253] fix control_flow_op
URL: https://github.com/apache/incubator-mxnet/pull/13555#issuecomment-444748084
 
 
   @zheng-da @anirudh2290 @sandeep-krishnamurthy @yuxihu @frankfliu Please 
review




[GitHub] apeforest opened a new pull request #13555: [MXNET-1253] fix control_flow_op

2018-12-05 Thread GitBox
apeforest opened a new pull request #13555: [MXNET-1253] fix control_flow_op
URL: https://github.com/apache/incubator-mxnet/pull/13555
 
 
   ## Description ##
   Support large arrays in control_flow_ops
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [X] The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to 
the relevant [JIRA issue](https://issues.apache.org/jira/projects/MXNET/issues) 
created (except PRs with tiny changes)
   - [X] Changes are complete (i.e. I finished coding on this PR)
   - [X] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [X] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain the what the example does, 
the source of the dataset, expected performance on test set and reference to 
the original paper if applicable
   - Check the API doc at 
http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [X] To the best of my knowledge, examples are either not affected by this change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - test_large_array.py:test_where()
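   
   A rough sketch of the kind of check the item above refers to; the exact sizes and assertions in `test_large_array.py:test_where()` are assumptions here, not the PR's actual code:
   ```
import numpy as np
import mxnet as mx

# Sizes chosen so the total element count exceeds 2**32; this allocates tens
# of gigabytes, so it belongs in nightly tests rather than unit tests.
LARGE_X = 100000000
SMALL_Y = 50

def test_where():
    a = mx.nd.ones((LARGE_X, SMALL_Y))
    b = mx.nd.zeros((LARGE_X, SMALL_Y))
    cond = mx.nd.ones((LARGE_X, SMALL_Y))
    res = mx.nd.where(cond, a, b)
    # with an all-ones condition, every element should come from `a`
    assert np.sum(res[LARGE_X - 1].asnumpy() == 1) == SMALL_Y
   ```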
   




[GitHub] huangzhiyuan commented on a change in pull request #12980: Add reshape op supported by MKL-DNN

2018-12-05 Thread GitBox
huangzhiyuan commented on a change in pull request #12980: Add reshape op 
supported by MKL-DNN
URL: https://github.com/apache/incubator-mxnet/pull/12980#discussion_r239327409
 
 

 ##
 File path: src/operator/tensor/matrix_op.cc
 ##
 @@ -914,7 +972,7 @@ NNVM_REGISTER_OP(depth_to_space)
 .describe(R"code(Rearranges(permutes) data from depth into blocks of spatial 
data.
 Similar to ONNX DepthToSpace operator:
 https://github.com/onnx/onnx/blob/master/docs/Operators.md#DepthToSpace.
-The output is a new tensor where the values from depth dimension are moved in 
spatial blocks 
 
 Review comment:
   I have restored the whitespace to match what it was before. Could you help review again? Thanks :)




[GitHub] anirudh2290 commented on issue #13438: libc getenv is not threadsafe

2018-12-05 Thread GitBox
anirudh2290 commented on issue #13438: libc getenv is not threadsafe
URL: 
https://github.com/apache/incubator-mxnet/issues/13438#issuecomment-444741884
 
 
   The problem, as described in the article linked in this issue and in the related article here: https://rachelbythebay.com/w/2017/01/30/env/, is this: if the main thread spawns another thread that calls setenv, and the process is forked while setenv holds the environment mutex, that mutex is left locked in the child process. It will never be unlocked, because the thread that held it does not exist in the child, so the child hangs.
   This can be replicated in MXNet in the following way. Pull the code from https://github.com/anirudh2290/mxnet/tree/setenv_issue and build it with something like:
   ```
   cd build && cmake VERBOSE=1 -DUSE_CUDA=ON -DUSE_CUDNN=ON -DUSE_MKLDNN=ON -DUSE_OPENMP=ON -DUSE_OPENCV=OFF -DCMAKE_BUILD_TYPE=Debug -GNinja ..
   ```
   
   Run the following script:
   
   ```
 import multiprocessing
 import os
 import sys
 import mxnet as mx

 def mxnet_worker():
     print 'inside mxnet_worker'

 mx.base._LIB.MXStartBackgroundThread(mx.base.c_str("dummy"))
 read_process = [multiprocessing.Process(target=mxnet_worker) for i in range(8)]
 for p in read_process:
     p.daemon = True
     p.start()
     p.join()
   ```
   
   Now run the script, you will be able to see the process hangs.
   When I attach gdb to the process I see the following:
   
   ```
   #0  __lll_lock_wait_private () at 
../sysdeps/unix/sysv/linux/x86_64/lowlevellock.S:95
   #1  0x7fc0fabab99c in __add_to_environ (name=0x7fc093a935fc 
"MXNET_CPU_WORKER_NTHREADS", value=0x7fffec2eff10 "1", combined=0x0,
   replace=1) at setenv.c:133
   ```
   
   which means it is stuck trying to acquire the lock: 
https://github.com/lattera/glibc/blob/master/stdlib/setenv.c#L133
   
   I checked the mxnet codebase to see if we are calling SetEnv anywhere else, and we don't seem to be calling it anywhere except here. Also, the pthread_at_fork handler calls `Engine::Get()->Stop()`, which would mean that all engine threads are suspended. It is still possible that it could be called from other multithreaded code, in MXNet iterators for example, but I couldn't find it, and it is unlikely that we are using something other than dmlc::SetEnv to set env vars for mxnet or dmlc-core code. I think it is more likely that the customer application spawned a thread that called `SetEnv` at the same time pthread_at_fork was called, which led to this behavior.




[GitHub] TaoLv commented on issue #13362: Add NHWC layout support to Pooling (cuDNN only)

2018-12-05 Thread GitBox
TaoLv commented on issue #13362: Add NHWC layout support to Pooling (cuDNN only)
URL: https://github.com/apache/incubator-mxnet/pull/13362#issuecomment-444741474
 
 
   Sure. It's a problem of MXNet, not this PR. Do you mind documenting somewhere what the output layout looks like? Then users will know what they need to do before sending the output to the next layer.




[GitHub] anirudhacharya commented on issue #13544: turn on Sphinx warnings as errors

2018-12-05 Thread GitBox
anirudhacharya commented on issue #13544: turn on Sphinx warnings as errors
URL: https://github.com/apache/incubator-mxnet/pull/13544#issuecomment-444736486
 
 
   At what stage in the CI is this check performed? Can you please point to the 
logs where we can check and confirm?




[GitHub] DickJC123 commented on issue #13362: Add NHWC layout support to Pooling (cuDNN only)

2018-12-05 Thread GitBox
DickJC123 commented on issue #13362: Add NHWC layout support to Pooling (cuDNN 
only)
URL: https://github.com/apache/incubator-mxnet/pull/13362#issuecomment-444734362
 
 
   The forced transposition I referred to is currently done by the user in the 
python model.  With this new feature, the Transpose operators could be removed 
in favor of a Pooling with the new layout parameter.
   
   I'm a bit sorry that NDArrays don't carry the layout information.  As a 
result, the layout has to travel along with the data ('out-of-band') and be 
passed into the operators along the way.  On the other hand, having multi-input 
operators deal with inconsistent layouts would be a headache.  Anyway, that's a 
discussion to be debated in its own thread.
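   For illustration only (a sketch, not code from this PR; `layout='NHWC'` on 
Pooling is the parameter this PR introduces), the user-side simplification would 
look roughly like this:
   ```python
   import mxnet as mx

   data = mx.sym.Variable('data')  # NHWC data coming out of the user's pipeline

   # Today: transpose around Pooling to satisfy its implicit NCHW assumption.
   x = mx.sym.transpose(data, axes=(0, 3, 1, 2))                       # NHWC -> NCHW
   x = mx.sym.Pooling(data=x, kernel=(2, 2), stride=(2, 2), pool_type='max')
   nchw_path = mx.sym.transpose(x, axes=(0, 2, 3, 1))                  # NCHW -> NHWC

   # With this PR: pool directly in NHWC, no Transpose operators needed.
   nhwc_path = mx.sym.Pooling(data=data, kernel=(2, 2), stride=(2, 2),
                              pool_type='max', layout='NHWC')
   ```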


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] pengzhao-intel commented on issue #13515: MKLDNN Perplexity Issue

2018-12-05 Thread GitBox
pengzhao-intel commented on issue #13515: MKLDNN Perplexity Issue
URL: 
https://github.com/apache/incubator-mxnet/issues/13515#issuecomment-444730942
 
 
   Good to know the problem is fixed :)
   
   Feel free to ping me if anything needs our help.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] marcoabreu commented on a change in pull request #13544: turn on Sphinx warnings as errors

2018-12-05 Thread GitBox
marcoabreu commented on a change in pull request #13544: turn on Sphinx 
warnings as errors
URL: https://github.com/apache/incubator-mxnet/pull/13544#discussion_r239313917
 
 

 ##
 File path: docs/build_version_doc/build_all_version.sh
 ##
 @@ -117,6 +120,8 @@ function checkout () {
   git checkout "$repo_folder" || git branch $repo_folder 
"upstream/$repo_folder" && git checkout "$repo_folder" || exit 1
   if [ $tag == 'master' ]; then
 git pull
+# master gets warnings as errors for Sphinx builds
 
 Review comment:
   Shouldn't we enable this in general? Otherwise a release branch might run 
into problems.
   
   Also, is this script executed as part of the pr pipeline? We want to fail 
PRs ideally


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] marcoabreu commented on issue #13362: Add NHWC layout support to Pooling (cuDNN only)

2018-12-05 Thread GitBox
marcoabreu commented on issue #13362: Add NHWC layout support to Pooling (cuDNN 
only)
URL: https://github.com/apache/incubator-mxnet/pull/13362#issuecomment-444730273
 
 
   I love that approach about the operator selection, dick!


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] TaoLv commented on issue #13362: Add NHWC layout support to Pooling (cuDNN only)

2018-12-05 Thread GitBox
TaoLv commented on issue #13362: Add NHWC layout support to Pooling (cuDNN only)
URL: https://github.com/apache/incubator-mxnet/pull/13362#issuecomment-444728394
 
 
   > Prior to this PR, in MXNet, the Pooling operator did not have a parameter 
to specify the layout, forcing a transposition always to NCHW.
   
   @DickJC123 Good to know that. Could you explain a bit more about when and 
where this kind of transposition happens?
   
   With this new feature, what's the typical usage in a user model? Does the user 
need to specify the layout in each layer and each operator?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] Davdi edited a comment on issue #13526: distributed training van.cc Check failed

2018-12-05 Thread GitBox
Davdi edited a comment on issue #13526: distributed training  van.cc Check 
failed
URL: 
https://github.com/apache/incubator-mxnet/issues/13526#issuecomment-444727742
 
 
   > As I see there are 3 different issues here:
   > 
   > ```
   > File "/usr/local/lib/python3.6/dist-packages/mxnet/base.py", line 252, in 
check_call
   > raise MXNetError(py_str(LIB.MXGetLastError()))
   > mxnet.base.MXNetError: [08:54:25] src/van.cc:291: Check failed: 
(my_node.port) != (-1) bind failed
   > ```
   > 1. Host file -
   >if you say -n 2, there will be 2 workers and 2 servers. If you have only 
one line with a host and port, all of the processes will try to launch on the 
same port.
   >So the workaround is the same as what I suggested earlier. Please use only 
the host and let mxnet choose the port.
   >If you want to choose the port yourself, find 4 different ports which are 
not in use and put 4 entries in the host file.
   > 
   > Ideally you should have multiple hosts for distributed training.
   > 
   > ```
   > `Traceback (most recent call last):
   > File "/userhome/incubator-mxnet/tools/launch.py", line 128, in
   > main()
   > File "/userhome/incubator-mxnet/tools/launch.py", line 109, in main
   > raise RuntimeError('Unknown submission cluster type %s' % args.cluster)
   > RuntimeError: Unknown submission cluster type ssh
   > ```
   > This seems like a launch script issue. Can you try not giving the --launcher 
option on the command line, and using the full host file path in the -H option?
   > 
   > ```
   > usage: image_classification.py [-h] [--dataset DATASET] [--data-dir 
DATA_DIR]
   > [--num-worker NUM_WORKERS]
   > [--batch-size BATCH_SIZE] [--gpus GPUS]
   > [--epochs EPOCHS] [--lr LR]
   > [--momentum MOMENTUM] [--wd WD] [--seed SEED]
   > [--mode MODE] --model MODEL [--use_thumbnail]
   > [--batch-norm] [--use-pretrained]
   > [--prefix PREFIX] [--start-epoch START_EPOCH]
   > [--resume RESUME] [--lr-factor LR_FACTOR]
   > [--lr-steps LR_STEPS] [--dtype DTYPE]
   > [--save-frequency SAVE_FREQUENCY]
   > [--kvstore KVSTORE]
   > [--log-interval LOG_INTERVAL] [--profile]
   > [--builtin-profiler BUILTIN_PROFILER]
   > image_classification.py: error: unrecognized arguments: epochs 1
   > ```
   > This is a problem with the training code. If it is coming from the examples, 
this needs to be fixed.
   
   
Thanks, I modified the hosts file and its content is:
   ps-0
   worker-0
   worker-1
   
   These are the hosts for the ps and the workers, and the following is under 
/root/.ssh/config:
   
   > 
   Host ps-0
 HostName 192.168.113.227
 Port 10015
 User root
 StrictHostKeyChecking no
 UserKnownHostsFile /dev/null
 IdentityFile /root/.ssh/application_1544059068811_0001
   Host worker-0
 HostName 192.168.113.227
 Port 10016
 User root
 StrictHostKeyChecking no
 UserKnownHostsFile /dev/null
 IdentityFile /root/.ssh/application_1544059068811_0001
   Host worker-1
 HostName 192.168.113.226
 Port 10023
 User root
 StrictHostKeyChecking no
 UserKnownHostsFile /dev/null
 IdentityFile /root/.ssh/application_1544059068811_0001
   > 
   
   
   
   but when I run the command  
   
   ` ../../tools/launch.py -n 2 -H hosts --launcher ssh python 
image_classification.py --dataset cifar10 --model vgg11 --epochs 1 --kvstore 
dist_sync
   `
   
   it shows this error: 
   
   Warning: Permanently added '192.168.113.227' (ECDSA) to the list of known 
hosts.
   Warning: Permanently added '192.168.113.227' (ECDSA) to the list of known 
hosts.
   Warning: Permanently added '192.168.113.227' (ECDSA) to the list of known 
hosts.
   Warning: Permanently added '192.168.113.226' (ECDSA) to the list of known 
hosts.
   root@192.168.113.227's password: root@192.168.113.227's password: 
root@192.168.113.227's password: root@192.168.113.226's password:
   
   
   it seems that a password is needed, but I use the private key and no password, 
and when I use `ssh worker-0`
   
   it logs in successfully
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] Davdi commented on issue #13526: distributed training van.cc Check failed

2018-12-05 Thread GitBox
Davdi commented on issue #13526: distributed training  van.cc Check failed
URL: 
https://github.com/apache/incubator-mxnet/issues/13526#issuecomment-444727742
 
 
   > As I see there are 3 different issues here:
   > 
   > ```
   > File "/usr/local/lib/python3.6/dist-packages/mxnet/base.py", line 252, in 
check_call
   > raise MXNetError(py_str(LIB.MXGetLastError()))
   > mxnet.base.MXNetError: [08:54:25] src/van.cc:291: Check failed: 
(my_node.port) != (-1) bind failed
   > ```
   > 1. Host file -
   >if you say -n 2, there will be 2 workers and 2 servers. If you have only 
one line with a host and port, all of the processes will try to launch on the 
same port.
   >So the workaround is the same as what I suggested earlier. Please use only 
the host and let mxnet choose the port.
   >If you want to choose the port yourself, find 4 different ports which are 
not in use and put 4 entries in the host file.
   > 
   > Ideally you should have multiple hosts for distributed training.
   > 
   > ```
   > `Traceback (most recent call last):
   > File "/userhome/incubator-mxnet/tools/launch.py", line 128, in
   > main()
   > File "/userhome/incubator-mxnet/tools/launch.py", line 109, in main
   > raise RuntimeError('Unknown submission cluster type %s' % args.cluster)
   > RuntimeError: Unknown submission cluster type ssh
   > ```
   > This seems like a launch script issue. Can you try not giving the --launcher 
option on the command line, and using the full host file path in the -H option?
   > 
   > ```
   > usage: image_classification.py [-h] [--dataset DATASET] [--data-dir 
DATA_DIR]
   > [--num-worker NUM_WORKERS]
   > [--batch-size BATCH_SIZE] [--gpus GPUS]
   > [--epochs EPOCHS] [--lr LR]
   > [--momentum MOMENTUM] [--wd WD] [--seed SEED]
   > [--mode MODE] --model MODEL [--use_thumbnail]
   > [--batch-norm] [--use-pretrained]
   > [--prefix PREFIX] [--start-epoch START_EPOCH]
   > [--resume RESUME] [--lr-factor LR_FACTOR]
   > [--lr-steps LR_STEPS] [--dtype DTYPE]
   > [--save-frequency SAVE_FREQUENCY]
   > [--kvstore KVSTORE]
   > [--log-interval LOG_INTERVAL] [--profile]
   > [--builtin-profiler BUILTIN_PROFILER]
   > image_classification.py: error: unrecognized arguments: epochs 1
   > ```
   > This is a problem with the training code. If it is coming from the examples, 
this needs to be fixed.
OK, thanks. I modified the hosts file and its content is:
   
   ps-0
   worker-0
   worker-1
   
   These are the hosts for the ps and the workers, and the following is under 
/root/.ssh/config:
   `Host ps-0
 HostName 192.168.113.227
 Port 10015
 User root
 StrictHostKeyChecking no
 UserKnownHostsFile /dev/null
 IdentityFile /root/.ssh/application_1544059068811_0001
   Host worker-0
 HostName 192.168.113.227
 Port 10016
 User root
 StrictHostKeyChecking no
 UserKnownHostsFile /dev/null
 IdentityFile /root/.ssh/application_1544059068811_0001
   Host worker-1
 HostName 192.168.113.226
 Port 10023
 User root
 StrictHostKeyChecking no
 UserKnownHostsFile /dev/null
 IdentityFile /root/.ssh/application_1544059068811_0001
   `
   
   
   
   but when I run the command  
   
   ` ../../tools/launch.py -n 2 -H hosts --launcher ssh python 
image_classification.py --dataset cifar10 --model vgg11 --epochs 1 --kvstore 
dist_sync
   `
   
   it shows this error: 
   
   Warning: Permanently added '192.168.113.227' (ECDSA) to the list of known 
hosts.
   Warning: Permanently added '192.168.113.227' (ECDSA) to the list of known 
hosts.
   Warning: Permanently added '192.168.113.227' (ECDSA) to the list of known 
hosts.
   Warning: Permanently added '192.168.113.226' (ECDSA) to the list of known 
hosts.
   root@192.168.113.227's password: root@192.168.113.227's password: 
root@192.168.113.227's password: root@192.168.113.226's password:
   
   
   it seems that a password is needed, but I use the private key and no password, 
and when I use `ssh worker-0`
   
   it logs in successfully
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] tranvanhoa533 commented on issue #12001: SyncBatchNorm problems

2018-12-05 Thread GitBox
tranvanhoa533 commented on issue #12001: SyncBatchNorm problems
URL: 
https://github.com/apache/incubator-mxnet/issues/12001#issuecomment-444726902
 
 
   @kaleidoscopical  Which version of MXNet did you use? I use MXNet 1.3.1, but 
it still fails in asnumpy().


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] hellonico opened a new pull request #13554: #13385 [Clojure] - Turn examples into integration tests

2018-12-05 Thread GitBox
hellonico opened a new pull request #13554: #13385 [Clojure] - Turn examples 
into integration tests
URL: https://github.com/apache/incubator-mxnet/pull/13554
 
 
   !!! This is work in progress. 
   
   Would:
   - the committed script 
   - adding a bare minimum deftest inside each example
   
   be ok? 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] Joke09 commented on issue #13545: For inference, I have the same problem. The client send jpg to server, then the server use cv2 to do resize. When put the image data into the mx.nd.array, it

2018-12-05 Thread GitBox
Joke09 commented on issue #13545: For inference, I have the same problem. The 
client send jpg to server, then the server use cv2 to do resize. When put the 
image data into the mx.nd.array, it's very slow. And the Utilization of GPU is 
low too. How to solve it? Thank you!
URL: 
https://github.com/apache/incubator-mxnet/issues/13545#issuecomment-444724175
 
 
   > Hi @Joke09 thanks for your question. I think you might miss this: 
https://mxnet.incubator.apache.org/api/python/image/image.html please take a 
look
   
   I think that works when the image is on disk, but in this problem the image 
data is a numpy array.
   Sorry, I don't know how to use the image API with a numpy array. 
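   For what it's worth, a minimal sketch of using the `mx.image` API on an 
already-decoded numpy array (the array shape and variable names here are 
placeholders):
   ```python
   import numpy as np
   import mxnet as mx

   # 'img' stands in for the HWC uint8 array the server gets after decoding the jpg.
   img = np.zeros((480, 640, 3), dtype=np.uint8)

   nd_img = mx.nd.array(img, dtype='uint8')          # wrap the numpy array in an NDArray
   resized = mx.image.imresize(nd_img, 1024, 896)    # mx.image operators work on NDArrays
   ```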


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] ChaiBapchya commented on a change in pull request #13541: Updated docs for randint operator

2018-12-05 Thread GitBox
ChaiBapchya commented on a change in pull request #13541: Updated docs for 
randint operator
URL: https://github.com/apache/incubator-mxnet/pull/13541#discussion_r239306194
 
 

 ##
 File path: docs/api/python/ndarray/random.md
 ##
 @@ -37,6 +37,7 @@ In the rest of this document, we list routines provided by 
the `ndarray.random`
 uniform
 multinomial
 shuffle
+randint
 
 Review comment:
   Done


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] ChaiBapchya commented on a change in pull request #13541: Updated docs for randint operator

2018-12-05 Thread GitBox
ChaiBapchya commented on a change in pull request #13541: Updated docs for 
randint operator
URL: https://github.com/apache/incubator-mxnet/pull/13541#discussion_r239306068
 
 

 ##
 File path: docs/api/python/symbol/random.md
 ##
 @@ -37,6 +37,7 @@ In the rest of this document, we list routines provided by 
the `symbol.random` p
 uniform
 multinomial
 shuffle
+randint
 
 Review comment:
   Done


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] ChaiBapchya commented on a change in pull request #13541: Updated docs for randint operator

2018-12-05 Thread GitBox
ChaiBapchya commented on a change in pull request #13541: Updated docs for 
randint operator
URL: https://github.com/apache/incubator-mxnet/pull/13541#discussion_r239306053
 
 

 ##
 File path: docs/api/python/symbol/symbol.md
 ##
 @@ -595,6 +595,7 @@ Composite multiple symbols into a new one by an operator.
 mxnet.symbol.random.generalized_negative_binomial
 mxnet.symbol.random.multinomial
 mxnet.symbol.random.shuffle
+mxnet.symbol.random.randint
 
 Review comment:
   Done


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] ZhennanQin commented on a change in pull request #12980: Add reshape op supported by MKL-DNN

2018-12-05 Thread GitBox
ZhennanQin commented on a change in pull request #12980: Add reshape op 
supported by MKL-DNN
URL: https://github.com/apache/incubator-mxnet/pull/12980#discussion_r239306007
 
 

 ##
 File path: src/operator/tensor/matrix_op.cc
 ##
 @@ -210,24 +272,18 @@ static void FlattenEx(const nnvm::NodeAttrs& attrs,
 #endif
 }
 
+#if MXNET_USE_MKLDNN == 1
 
 Review comment:
   This follows the new style used by other MKL-DNN ops, that is, defining the 
InferStorageType function within the MKLDNN macro. Most other ops were refactored 
into this style by Luobao; I guess this one was missed.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] ChaiBapchya commented on a change in pull request #13541: Updated docs for randint operator

2018-12-05 Thread GitBox
ChaiBapchya commented on a change in pull request #13541: Updated docs for 
randint operator
URL: https://github.com/apache/incubator-mxnet/pull/13541#discussion_r239306040
 
 

 ##
 File path: docs/api/python/ndarray/random.md
 ##
 @@ -37,6 +37,7 @@ In the rest of this document, we list routines provided by 
the `ndarray.random`
 uniform
 multinomial
 shuffle
+randint
 
 Review comment:
   Done


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] ChaiBapchya commented on a change in pull request #13541: Updated docs for randint operator

2018-12-05 Thread GitBox
ChaiBapchya commented on a change in pull request #13541: Updated docs for 
randint operator
URL: https://github.com/apache/incubator-mxnet/pull/13541#discussion_r239306027
 
 

 ##
 File path: docs/api/python/ndarray/ndarray.md
 ##
 @@ -596,6 +596,7 @@ The `ndarray` package provides several classes:
 mxnet.ndarray.random.generalized_negative_binomial
 mxnet.ndarray.random.multinomial
 mxnet.ndarray.random.shuffle
+mxnet.ndarray.random.randint
 
 Review comment:
   Done


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] Joke09 edited a comment on issue #13545: For inference, I have the same problem. The client send jpg to server, then the server use cv2 to do resize. When put the image data into the mx.nd.ar

2018-12-05 Thread GitBox
Joke09 edited a comment on issue #13545: For inference, I have the same 
problem. The client send jpg to server, then the server use cv2 to do resize. 
When put the image data into the mx.nd.array, it's very slow. And the 
Utilization of GPU is low too. How to solve it? Thank you!
URL: 
https://github.com/apache/incubator-mxnet/issues/13545#issuecomment-444717557
 
 
   > Hi @Joke09 ,
   > 
   > When you put your image data into the NDArray, can you also try to copy it 
to GPU using `as_in_context` method before passing it to your model ?
   > 
   > 
http://mxnet.incubator.apache.org/test/api/python/ndarray.html#mxnet.ndarray.NDArray.as_in_context
   
   Thank you! It helps, but not by much.
   Before:
   data = [mx.nd.array(im_arrays), mx.nd.array(im_infos)]
   The shape of im_arrays is [8,3,896,1024]. It takes 104 ms.
   After:
   data = [mx.nd.array(im_arrays, ctx=mx.gpu(1)), mx.nd.array(im_infos)]
   It takes 68 ms. I think it's not fast enough, and it uses more GPU memory.
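   As a side note, MXNet executes asynchronously, so a fair timing of the upload 
needs an explicit synchronization point. A minimal timing sketch (the batch shape 
is taken from the comment above; GPU 1 is assumed to be available):
   ```python
   import time
   import numpy as np
   import mxnet as mx

   im_arrays = np.random.uniform(size=(8, 3, 896, 1024)).astype('float32')  # placeholder batch

   tic = time.time()
   data = mx.nd.array(im_arrays, ctx=mx.gpu(1))  # create the NDArray directly on the GPU
   mx.nd.waitall()                               # wait for the asynchronous copy to finish
   print('upload took %.1f ms' % ((time.time() - tic) * 1000))
   ```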


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] Joke09 commented on issue #13545: For inference, I have the same problem. The client send jpg to server, then the server use cv2 to do resize. When put the image data into the mx.nd.array, it

2018-12-05 Thread GitBox
Joke09 commented on issue #13545: For inference, I have the same problem. The 
client send jpg to server, then the server use cv2 to do resize. When put the 
image data into the mx.nd.array, it's very slow. And the Utilization of GPU is 
low too. How to solve it? Thank you!
URL: 
https://github.com/apache/incubator-mxnet/issues/13545#issuecomment-444717557
 
 
   > Hi @Joke09 ,
   > 
   > When you put your image data into the NDArray, can you also try to copy it 
to GPU using `as_in_context` method before passing it to your model ?
   > 
   > 
http://mxnet.incubator.apache.org/test/api/python/ndarray.html#mxnet.ndarray.NDArray.as_in_context
   
   Thank you! It helps, but not by much.
   Before:
   data = [mx.nd.array(im_arrays), mx.nd.array(im_infos)]
   The shape of im_arrays is [8,3,896,1024]. It takes 104 ms.
   After:
   data = [mx.nd.array(im_arrays, ctx=mx.gpu(1)), mx.nd.array(im_infos)]
   It takes 68 ms. I think it's not fast enough. 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] aaronmarkham commented on issue #13529: Improve dev_menu usability, local build and virtualenv

2018-12-05 Thread GitBox
aaronmarkham commented on issue #13529: Improve dev_menu usability, local build 
and virtualenv
URL: https://github.com/apache/incubator-mxnet/pull/13529#issuecomment-444715306
 
 
   The menu is nice. 
   Is defaulting to CUDA=ON the preferred way? I thought OFF was the default 
with make right now...
   I was a little confused with the mixing of cmake_options.yaml and 
cmake_options.yml. Can we have just one reference?
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] gigasquid commented on issue #13523: #13441 [Clojure] Add Spec Validations for the Random namespace

2018-12-05 Thread GitBox
gigasquid commented on issue #13523: #13441 [Clojure] Add Spec Validations for 
the Random namespace
URL: https://github.com/apache/incubator-mxnet/pull/13523#issuecomment-444709860
 
 
   Thanks @hellonico for your work in improving this. I ran through the 
examples and everything looks good. It is good to go when CI is green.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[incubator-mxnet-site] branch asf-site updated: Bump the publish timestamp.

2018-12-05 Thread zhasheng
This is an automated email from the ASF dual-hosted git repository.

zhasheng pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new cd215ef  Bump the publish timestamp.
cd215ef is described below

commit cd215ef65760d7179db466bfbc4b1fa613c8528a
Author: mxnet-ci 
AuthorDate: Thu Dec 6 01:02:56 2018 +

Bump the publish timestamp.
---
 date.txt | 1 +
 1 file changed, 1 insertion(+)

diff --git a/date.txt b/date.txt
new file mode 100644
index 000..3c97b2e
--- /dev/null
+++ b/date.txt
@@ -0,0 +1 @@
+Thu Dec  6 01:02:56 UTC 2018



[GitHub] aaronmarkham commented on issue #13544: turn on Sphinx warnings as errors

2018-12-05 Thread GitBox
aaronmarkham commented on issue #13544: turn on Sphinx warnings as errors
URL: https://github.com/apache/incubator-mxnet/pull/13544#issuecomment-444708562
 
 
   This is time-sensitive. As it ages, we'll need another round of cleaning up 
newly introduced bugs before we can get a clean start.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] hellonico edited a comment on issue #13523: #13441 [Clojure] Add Spec Validations for the Random namespace

2018-12-05 Thread GitBox
hellonico edited a comment on issue #13523: #13441 [Clojure] Add Spec 
Validations for the Random namespace
URL: https://github.com/apache/incubator-mxnet/pull/13523#issuecomment-444707535
 
 
   >  What do you think would be more user friendly? 
   
   `number?` was a better choice, and is nicer to the user.
   I have updated the specs.
   
   > It's definitely fine to change the example to make it work correctly :)
   
   I am glad you agreed ;) 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] hellonico commented on issue #13523: #13441 [Clojure] Add Spec Validations for the Random namespace

2018-12-05 Thread GitBox
hellonico commented on issue #13523: #13441 [Clojure] Add Spec Validations for 
the Random namespace
URL: https://github.com/apache/incubator-mxnet/pull/13523#issuecomment-444707535
 
 
   >  What do you think would be more user friendly? 
   `number?` was a better choice, and is nicer to the user.
   I have updated the specs.
   
   > It's definitely fine to change the example to make it work correctly :)
   I am glad you agreed ;) 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[incubator-mxnet] branch master updated: Simplifications and some fun stuff for the MNIST Gluon tutorial (#13094)

2018-12-05 Thread thomasdelteil
This is an automated email from the ASF dual-hosted git repository.

thomasdelteil pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new 8bbac82  Simplifications and some fun stuff for the MNIST Gluon 
tutorial (#13094)
8bbac82 is described below

commit 8bbac827742c21607a863137792f03bd09847419
Author: Holger Kohr 
AuthorDate: Thu Dec 6 01:38:46 2018 +0100

Simplifications and some fun stuff for the MNIST Gluon tutorial (#13094)

* Simplify mnist Gluon tutorial and add mislabelled sample plotting

* Add mnist Gluon tutorial images

* Gluon MNIST tutorial: Use modern Gluon constructs, fix some wordings

* [Gluon] Move to data loaders and improve wording in MNIST tutorial

* Fix broken links

* Fix spelling of mislabeled

* Final rewordings and code simplifications

* Fix things according to review

- Apply hybrid blocks
- Move outputs outside of code blocks and mark for notebooks
  to ignore
- Remove images, use external link
- Fix a few formulations

* Change activations to sigmoid in MNIST tutorial

* Remove superfluous last layer activations in MNIST tutorial
---
 docs/tutorials/gluon/mnist.md | 554 +-
 1 file changed, 332 insertions(+), 222 deletions(-)

diff --git a/docs/tutorials/gluon/mnist.md b/docs/tutorials/gluon/mnist.md
index 5b8a98a..35fb405 100644
--- a/docs/tutorials/gluon/mnist.md
+++ b/docs/tutorials/gluon/mnist.md
@@ -1,24 +1,22 @@
-# Handwritten Digit Recognition
+# Hand-written Digit Recognition
 
-In this tutorial, we'll give you a step by step walk-through of how to build a 
hand-written digit classifier using the 
[MNIST](https://en.wikipedia.org/wiki/MNIST_database) dataset.
+In this tutorial, we'll give you a step-by-step walkthrough of building a 
hand-written digit classifier using the 
[MNIST](https://en.wikipedia.org/wiki/MNIST_database) dataset.
 
-MNIST is a widely used dataset for the hand-written digit classification task. 
It consists of 70,000 labeled 28x28 pixel grayscale images of hand-written 
digits. The dataset is split into 60,000 training images and 10,000 test 
images. There are 10 classes (one for each of the 10 digits). The task at hand 
is to train a model using the 60,000 training images and subsequently test its 
classification accuracy on the 10,000 test images.
+MNIST is a widely used dataset for the hand-written digit classification task. 
It consists of 70,000 labeled grayscale images of hand-written digits, each 
28x28 pixels in size. The dataset is split into 60,000 training images and 
10,000 test images. There are 10 classes (one for each of the 10 digits). The 
task at hand is to train a model that can correctly classify the images into 
the digits they represent. The 60,000 training images are used to fit the 
model, and its performance in ter [...]
 
 
![png](https://raw.githubusercontent.com/dmlc/web-data/master/mxnet/example/mnist.png)
 
 **Figure 1:** Sample images from the MNIST dataset.
 
-This tutorial uses MXNet's new high-level interface, gluon package to 
implement MLP using
-imperative fashion.
-
-This is based on the Mnist tutorial with symbolic approach. You can find it 
[here](http://mxnet.io/tutorials/python/mnist.html).
+This tutorial uses MXNet's high-level *Gluon* interface to implement neural 
networks in an imperative fashion. It is based on [the corresponding tutorial 
written with the symbolic 
approach](https://mxnet.incubator.apache.org/tutorials/python/mnist.html).
 
 ## Prerequisites
-To complete this tutorial, we need:
 
-- MXNet. See the instructions for your operating system in [Setup and 
Installation](http://mxnet.io/install/index.html).
+To complete this tutorial, you need:
 
-- [Python Requests](http://docs.python-requests.org/en/master/) and [Jupyter 
Notebook](http://jupyter.org/index.html).
+- MXNet. See the instructions for your operating system in [Setup and 
Installation](https://mxnet.incubator.apache.org/install/index.html).
+- The Python [`requests`](http://docs.python-requests.org/en/master/) library.
+- (Optional) The [Jupyter Notebook](https://jupyter.org/index.html) software 
for interactively running the provided `.ipynb` file.
 
 ```
 $ pip install requests jupyter
@@ -26,308 +24,420 @@ $ pip install requests jupyter
 
 ## Loading Data
 
-Before we define the model, let's first fetch the 
[MNIST](http://yann.lecun.com/exdb/mnist/) dataset.
+The following code downloads the MNIST dataset to the default location 
(`.mxnet/datasets/mnist/` in your home directory) and creates `Dataset` objects 
`train_data` and `val_data` for training and validation, respectively.
+These objects can later be used to get one image or a batch of images at a 
time, together with their corresponding labels.
 
-The following source code downloads and 
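For reference, a self-contained sketch of the data-loading step this diff 
introduces (assuming the standard `mxnet.gluon.data.vision.MNIST` dataset API 
and `from mxnet import nd`; not part of the original commit):

```python
import mxnet as mx
from mxnet import nd

mx.random.seed(42)

def data_xform(data):
    """Move channel axis to the beginning, cast to float32, and rescale to [0, 1]."""
    return nd.moveaxis(data, 2, 0).astype('float32') / 255

train_data = mx.gluon.data.vision.MNIST(train=True).transform_first(data_xform)
val_data = mx.gluon.data.vision.MNIST(train=False).transform_first(data_xform)
```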

[GitHub] ThomasDelteil closed pull request #13094: Simplifications and some fun stuff for the MNIST Gluon tutorial

2018-12-05 Thread GitBox
ThomasDelteil closed pull request #13094: Simplifications and some fun stuff 
for the MNIST Gluon tutorial
URL: https://github.com/apache/incubator-mxnet/pull/13094
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/docs/tutorials/gluon/mnist.md b/docs/tutorials/gluon/mnist.md
index 5b8a98a3d66..35fb40521f6 100644
--- a/docs/tutorials/gluon/mnist.md
+++ b/docs/tutorials/gluon/mnist.md
@@ -1,24 +1,22 @@
-# Handwritten Digit Recognition
+# Hand-written Digit Recognition
 
-In this tutorial, we'll give you a step by step walk-through of how to build a 
hand-written digit classifier using the 
[MNIST](https://en.wikipedia.org/wiki/MNIST_database) dataset.
+In this tutorial, we'll give you a step-by-step walkthrough of building a 
hand-written digit classifier using the 
[MNIST](https://en.wikipedia.org/wiki/MNIST_database) dataset.
 
-MNIST is a widely used dataset for the hand-written digit classification task. 
It consists of 70,000 labeled 28x28 pixel grayscale images of hand-written 
digits. The dataset is split into 60,000 training images and 10,000 test 
images. There are 10 classes (one for each of the 10 digits). The task at hand 
is to train a model using the 60,000 training images and subsequently test its 
classification accuracy on the 10,000 test images.
+MNIST is a widely used dataset for the hand-written digit classification task. 
It consists of 70,000 labeled grayscale images of hand-written digits, each 
28x28 pixels in size. The dataset is split into 60,000 training images and 
10,000 test images. There are 10 classes (one for each of the 10 digits). The 
task at hand is to train a model that can correctly classify the images into 
the digits they represent. The 60,000 training images are used to fit the 
model, and its performance in terms of classification accuracy is subsequently 
validated on the 10,000 test images.
 
 
![png](https://raw.githubusercontent.com/dmlc/web-data/master/mxnet/example/mnist.png)
 
 **Figure 1:** Sample images from the MNIST dataset.
 
-This tutorial uses MXNet's new high-level interface, gluon package to 
implement MLP using
-imperative fashion.
-
-This is based on the Mnist tutorial with symbolic approach. You can find it 
[here](http://mxnet.io/tutorials/python/mnist.html).
+This tutorial uses MXNet's high-level *Gluon* interface to implement neural 
networks in an imperative fashion. It is based on [the corresponding tutorial 
written with the symbolic 
approach](https://mxnet.incubator.apache.org/tutorials/python/mnist.html).
 
 ## Prerequisites
-To complete this tutorial, we need:
 
-- MXNet. See the instructions for your operating system in [Setup and 
Installation](http://mxnet.io/install/index.html).
+To complete this tutorial, you need:
 
-- [Python Requests](http://docs.python-requests.org/en/master/) and [Jupyter 
Notebook](http://jupyter.org/index.html).
+- MXNet. See the instructions for your operating system in [Setup and 
Installation](https://mxnet.incubator.apache.org/install/index.html).
+- The Python [`requests`](http://docs.python-requests.org/en/master/) library.
+- (Optional) The [Jupyter Notebook](https://jupyter.org/index.html) software 
for interactively running the provided `.ipynb` file.
 
 ```
 $ pip install requests jupyter
@@ -26,308 +24,420 @@ $ pip install requests jupyter
 
 ## Loading Data
 
-Before we define the model, let's first fetch the 
[MNIST](http://yann.lecun.com/exdb/mnist/) dataset.
+The following code downloads the MNIST dataset to the default location 
(`.mxnet/datasets/mnist/` in your home directory) and creates `Dataset` objects 
`train_data` and `val_data` for training and validation, respectively.
+These objects can later be used to get one image or a batch of images at a 
time, together with their corresponding labels.
 
-The following source code downloads and loads the images and the corresponding 
labels into memory.
+We also immediately apply the `transform_first()` method and supply a function 
that moves the channel axis of the images to the beginning (`(28, 28, 1) -> (1, 
28, 28)`), casts them to `float32` and rescales them from `[0, 255]` to `[0, 
1]`.
+The name `transform_first` reflects the fact that these datasets contain 
images and labels, and that the transform should only be applied to the first 
of each `(image, label)` pair.
 
 ```python
 import mxnet as mx
 
-# Fixing the random seed
+# Select a fixed random seed for reproducibility
 mx.random.seed(42)
 
-mnist = mx.test_utils.get_mnist()
+def data_xform(data):
+"""Move channel axis to the beginning, cast to float32, and normalize to 
[0, 1]."""
+return nd.moveaxis(data, 2, 0).astype('float32') / 255
+
+train_data = mx.gluon.data.vision.MNIST(train=True).transform_first(data_xform)
+val_data = 

[GitHub] vandanavk removed a comment on issue #13553: [WIP] ONNX test code cleanup

2018-12-05 Thread GitBox
vandanavk removed a comment on issue #13553: [WIP] ONNX test code cleanup
URL: https://github.com/apache/incubator-mxnet/pull/13553#issuecomment-444704079
 
 
   @mxnet-label-bot add [ONNX, pr-work-in-progress]


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] vandanavk commented on issue #13553: [WIP] ONNX test code cleanup

2018-12-05 Thread GitBox
vandanavk commented on issue #13553: [WIP] ONNX test code cleanup
URL: https://github.com/apache/incubator-mxnet/pull/13553#issuecomment-444704079
 
 
   @mxnet-label-bot add [ONNX, pr-work-in-progress]


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] vandanavk opened a new pull request #13553: [WIP] ONNX test code cleanup

2018-12-05 Thread GitBox
vandanavk opened a new pull request #13553: [WIP] ONNX test code cleanup
URL: https://github.com/apache/incubator-mxnet/pull/13553
 
 
   ## Description ##
   (Brief description on what this PR is about)
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [ ] The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to 
the relevant [JIRA issue](https://issues.apache.org/jira/projects/MXNET/issues) 
created (except PRs with tiny changes)
   - [ ] Changes are complete (i.e. I finished coding on this PR)
   - [ ] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [ ] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, the 
source of the dataset, the expected performance on the test set, and a reference 
to the original paper if applicable
   - Check the API doc at 
http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [ ] To my best knowledge, examples are either not affected by this change, 
or have been fixed to be compatible with this change
   
   ### Changes ###
   - [ ] Feature1, tests, (and when applicable, API doc)
   - [ ] Feature2, tests, (and when applicable, API doc)
   
   ## Comments ##
   - If this change is a backward incompatible change, why must this change be 
made.
   - Interesting edge cases to note here
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] roywei commented on a change in pull request #13534: add cpp example inception to nightly test

2018-12-05 Thread GitBox
roywei commented on a change in pull request #13534: add cpp example inception 
to nightly test
URL: https://github.com/apache/incubator-mxnet/pull/13534#discussion_r239289338
 
 

 ##
 File path: cpp-package/example/mlp.cpp
 ##
 @@ -144,7 +144,7 @@ void MLP() {
grad_req_type, aux_states);
 
   std::cout << "Training" << std::endl;
-  int max_iters = 2;
+  int max_iters = 10;
 
 Review comment:
   Addressed the comments. It turns out it's iterating on synthetic patterned 
data of size 128; it would take around 15000 epochs to get 90% accuracy. Renamed 
the variables and added a comment.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] zhreshold commented on issue #13550: Fixes infinite loop using imagedetiter

2018-12-05 Thread GitBox
zhreshold commented on issue #13550: Fixes infinite loop using imagedetiter
URL: https://github.com/apache/incubator-mxnet/pull/13550#issuecomment-444700527
 
 
   LGTM, suggest to cherry-pick into v1.4.x as well.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] zachgk commented on a change in pull request #13493: [MXNET-1224]: improve scala maven jni build.

2018-12-05 Thread GitBox
zachgk commented on a change in pull request #13493: [MXNET-1224]: improve 
scala maven jni build.
URL: https://github.com/apache/incubator-mxnet/pull/13493#discussion_r239286858
 
 

 ##
 File path: scala-package/pom.xml
 ##
 @@ -39,6 +39,8 @@
 2.11.8
 2.11
 
+g++
+$
 
 Review comment:
   Should this be dollar?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] DickJC123 commented on issue #13362: Add NHWC layout support to Pooling (cuDNN only)

2018-12-05 Thread GitBox
DickJC123 commented on issue #13362: Add NHWC layout support to Pooling (cuDNN 
only)
URL: https://github.com/apache/incubator-mxnet/pull/13362#issuecomment-444698761
 
 
   @apeforest Re: bool return of Forward(), Backward(), etc.
   
   As you know, MXNet has been moving to a more imperative style of processing. 
 Before that, all operators were subclasses of Operator, which defined 
Forward() and Backward() as returning void.  Now, operators like Pooling are 
their own class and can define Forward() and Backward() as they desire.  In 
addition, with every operator, we face the chore of selecting the best 
implementation (cudnn, cuda, mkldnn, etc.).  Rather than code the selection 
logic in a central place, I recommend each implementation be asked (in a 
favored order) whether it supports the operation, given its parameters.  With 
the cudnn convolution operator back in the "pre-imperative era", I added a 
'Supports(..., param, ...) method.  Sadly, this tended to repeat a lot of the 
checks and analysis done by the actual Forward() and Backward() calls.  Rather 
than adding a repetitive Supports() method to Pooling, I realized how much 
cleaner it was to just have Forward() or Backward() return a flag indicating 
whether the function could be performed.  This allows the implementation 
selector to cleanly fall back to another implementation.
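   A minimal Python sketch of that selection pattern, purely illustrative and not 
MXNet code (the class names, the `window` check, and the favored order are all 
made up):
   ```python
   # Each backend's forward() returns True if it handled the call, so the caller
   # can fall back to the next implementation without a separate Supports() method.

   class CudnnPooling:
       def forward(self, x, params):
           if params.get('window', 0) > 8:   # hypothetical unsupported configuration
               return False                  # decline; caller falls back
           # ... run the cuDNN kernel here ...
           return True

   class NativePooling:
       def forward(self, x, params):
           # ... reference implementation, always works ...
           return True

   def pooling_forward(x, params, impls=(CudnnPooling(), NativePooling())):
       for impl in impls:                    # favored order
           if impl.forward(x, params):
               return
       raise RuntimeError('no implementation could handle these parameters')
   ```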


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] ptrendx commented on a change in pull request #13362: Add NHWC layout support to Pooling (cuDNN only)

2018-12-05 Thread GitBox
ptrendx commented on a change in pull request #13362: Add NHWC layout support 
to Pooling (cuDNN only)
URL: https://github.com/apache/incubator-mxnet/pull/13362#discussion_r239280157
 
 

 ##
 File path: tests/python/gpu/test_operator_gpu.py
 ##
 @@ -608,6 +608,52 @@ def test_convolution_versions():
 
 
 @with_seed()
+def test_pooling_with_convention():
+# While the float32 and float64 output is reliably consistent, float16 
departs occasionally.
+# We compare cpu and gpu results only within a given precision.
+for data_type in [np.float64, np.float32, np.float16]:
+ctx_list = [{'ctx': mx.gpu(0), 'pool_data': (2, 2, 10, 10), 
'type_dict': {'pool_data': data_type}},
+{'ctx': mx.cpu(0), 'pool_data': (2, 2, 10, 10), 
'type_dict': {'pool_data': data_type}}]
+sym = mx.sym.Pooling(kernel=(3,3), pool_type='max', 
pooling_convention='valid', name='pool')
+check_consistency(sym, ctx_list)
+
+sym = mx.sym.Pooling(kernel=(3,3), pool_type='max', 
pooling_convention='full', name='pool')
+check_consistency(sym, ctx_list)
+
+sym = mx.sym.Pooling(kernel=(300,300), pool_type='max', 
global_pool=True, name='pool')
+check_consistency(sym, ctx_list)
+
+
+@with_seed()
+@assert_raises_cudnn_not_satisfied(min_version='7.0.1')
 
 Review comment:
   Hmm, skipping this test if cudnn version is not satisfied was more or less 
my understanding of what this decorator is doing. Is it wrong? What should I do 
here instead?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] ptrendx commented on a change in pull request #13362: Add NHWC layout support to Pooling (cuDNN only)

2018-12-05 Thread GitBox
ptrendx commented on a change in pull request #13362: Add NHWC layout support 
to Pooling (cuDNN only)
URL: https://github.com/apache/incubator-mxnet/pull/13362#discussion_r239279756
 
 

 ##
 File path: src/operator/nn/pooling.cu
 ##
 @@ -116,10 +118,12 @@ void PoolingGradCompute(const nnvm::NodeAttrs& 
attrs,
   switch (param.pool_type) {
 case pool_enum::kMaxPooling:
 case pool_enum::kAvgPooling:
-  GetCuDNNPoolingOp(param).Backward(ctx, inputs[ograd_idx],
+  if (GetCuDNNPoolingOp(param).Backward(ctx, inputs[ograd_idx],
inputs[in_data_idx], 
inputs[out_data_idx],
-   req[0], outputs[0]);
-  return;
+   req[0], outputs[0])) {
+return;
+  }
+  break;
 
 Review comment:
   No - if Backward returns false, the program will exit the switch statement and 
proceed to the non-cuDNN implementation.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] ptrendx commented on a change in pull request #13362: Add NHWC layout support to Pooling (cuDNN only)

2018-12-05 Thread GitBox
ptrendx commented on a change in pull request #13362: Add NHWC layout support 
to Pooling (cuDNN only)
URL: https://github.com/apache/incubator-mxnet/pull/13362#discussion_r239277593
 
 

 ##
 File path: python/mxnet/gluon/nn/conv_layers.py
 ##
 @@ -738,12 +741,13 @@ class MaxPool1D(_Pooling):
 """
 def __init__(self, pool_size=2, strides=None, padding=0, layout='NCW',
  ceil_mode=False, **kwargs):
-assert layout == 'NCW', "Only supports 'NCW' layout for now"
+assert layout in ('NCW', 'NWC'),\
+"Only NCW and NWC layouts are valid for 1D"
 
 Review comment:
   Will do.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] DickJC123 commented on issue #13362: Add NHWC layout support to Pooling (cuDNN only)

2018-12-05 Thread GitBox
DickJC123 commented on issue #13362: Add NHWC layout support to Pooling (cuDNN 
only)
URL: https://github.com/apache/incubator-mxnet/pull/13362#issuecomment-444690325
 
 
   @TaoLv , providing details and motivation of the PR (partial duplicate of 
info just added by @ptrendx ):
   
   In MXNet, layout is not something that is stored with the NDArray.  Some 
operators, like pointwise ones, don't even care about the layout, and will 
produce the same output regardless of layout.  Other operators, like 
Convolution, Batchnorm and Pooling will need to be told the layout.  
Convolution supports a limited number of layouts via the 'layout' parameter, 
e.g. layout='NHWC'.  Batchnorm doesn't need to know everything about the 
layout, just which dimension is the 'C' dimension.  For this, the Batchnorm op 
accepts the axis parameter, e.g. axis=3 for NHWC batchnorm.  Prior to this PR, 
in MXNet, the Pooling operator did not have a parameter to specify the layout, 
forcing a transposition always to NCHW.
   
   We have two goals with this PR:
   - Create a way to inform Pooling of the layout, in the style of the 
Convolution 'layout' parameter, thereby allowing direct use of the 
arbitrary-layout Pooling support offered by cudnn, and
   - Enable MXNet to support an end-to-end processing of NHWC-layout data 
(i.e. no transposes), which is particularly efficient in mixed-precision on 
Volta Tensor Cores.
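   As a rough user-facing sketch of those two goals (the `layout` argument on 
MaxPool2D is the piece this PR adds; Conv2D's `layout` and BatchNorm's `axis` 
are existing parameters):
   ```python
   from mxnet.gluon import nn

   # NHWC end to end, with no Transpose operators in between.
   net = nn.HybridSequential()
   net.add(nn.Conv2D(channels=64, kernel_size=3, layout='NHWC'),
           nn.BatchNorm(axis=3),                       # 'C' is the last axis in NHWC
           nn.MaxPool2D(pool_size=2, layout='NHWC'))   # layout argument from this PR
   ```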
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] ptrendx commented on a change in pull request #13362: Add NHWC layout support to Pooling (cuDNN only)

2018-12-05 Thread GitBox
ptrendx commented on a change in pull request #13362: Add NHWC layout support 
to Pooling (cuDNN only)
URL: https://github.com/apache/incubator-mxnet/pull/13362#discussion_r239277149
 
 

 ##
 File path: src/operator/nn/cudnn/cudnn_pooling-inl.h
 ##
 @@ -165,55 +170,78 @@ class CuDNNPoolingOp {
 } else {
   LOG(FATAL) << "Only support 2D or 3D pooling";
 }
+return true;
   }
 
  private:
-  inline void Init(mshadow::Stream *s, const TBlob _data,
+  // Return boolean saying whether pooling configuration is supported
+  inline bool Init(mshadow::Stream *s, const TBlob _data,
   const TBlob _data) {
 using namespace mshadow;
+bool is_supported = true;
 #if CUDNN_MAJOR >= 5
 nan_prop_ = CUDNN_NOT_PROPAGATE_NAN;
 #endif
 if (param_.kernel.ndim() == 2) {
   // 2d conv
+  CHECK(param_.layout.value() == mshadow::kNCHW ||
+param_.layout.value() == mshadow::kNHWC) << "Need 2D layout";
+  cudnnTensorFormat_t cudnn_layout =
+  (param_.layout.value() == mshadow::kNCHW) ? CUDNN_TENSOR_NCHW
+: CUDNN_TENSOR_NHWC;
   Tensor data = in_data.get(s);
   Tensor out = out_data.get(s);
-  mshadow::Shape<4> dshape = data.shape_;
+  // Perform shape calculations in a standard (NCHW) layout space
+  mshadow::Shape<4> dshape_nchw = (param_.layout.value() == 
mshadow::kNHWC) ?
+  ConvertLayout(data.shape_, 
mshadow::kNHWC, mshadow::kNCHW) :
+  data.shape_;
+  mshadow::Shape<4> oshape_nchw = (param_.layout.value() == 
mshadow::kNHWC) ?
+  ConvertLayout(out.shape_, 
mshadow::kNHWC, mshadow::kNCHW) :
+  out.shape_;
   CUDNN_CALL(cudnnSetTensor4dDescriptor(in_desc_,
-CUDNN_TENSOR_NCHW,
+cudnn_layout,
 dtype_,
-data.shape_[0],
-data.shape_[1],
-data.shape_[2],
-data.shape_[3]));
+dshape_nchw[0],
+dshape_nchw[1],
+dshape_nchw[2],
+dshape_nchw[3]));
   CUDNN_CALL(cudnnSetTensor4dDescriptor(out_desc_,
-CUDNN_TENSOR_NCHW,
+cudnn_layout,
 dtype_,
-out.shape_[0],
-out.shape_[1],
-out.shape_[2],
-out.shape_[3]));
+oshape_nchw[0],
+oshape_nchw[1],
+oshape_nchw[2],
+oshape_nchw[3]));
+  int window_height = param_.global_pool ? dshape_nchw[2] : 
param_.kernel[0];
+  int window_width = param_.global_pool ? dshape_nchw[3] : 
param_.kernel[1];
+  // CuDNN v7.1.4 backprop kernel doesn't support window sizes 9 and above.
+  #if CUDNN_VERSION == 7104
+  is_supported = window_height <= 8 && window_width <= 8;
 
 Review comment:
   Ok, will do.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] leleamol commented on a change in pull request #13294: [MXNET-1083] Add the example to demonstrate the inference workflow using C++ API

2018-12-05 Thread GitBox
leleamol commented on a change in pull request #13294: [MXNET-1083] Add the 
example to demonstrate the inference workflow using C++ API
URL: https://github.com/apache/incubator-mxnet/pull/13294#discussion_r239276470
 
 

 ##
 File path: cpp-package/example/README.md
 ##
 @@ -2,7 +2,8 @@
 
 ## Building C++ examples
 
-The examples are built while building the MXNet library and cpp-package from 
source . However, they can be built manually as follows
+The examples in this folder demonstrate the **training** workflow. The 
**inference workflow** related examples can be found in 
[inference]()
 folder.
 
 Review comment:
   Agree.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] leleamol commented on a change in pull request #13294: [MXNET-1083] Add the example to demonstrate the inference workflow using C++ API

2018-12-05 Thread GitBox
leleamol commented on a change in pull request #13294: [MXNET-1083] Add the 
example to demonstrate the inference workflow using C++ API
URL: https://github.com/apache/incubator-mxnet/pull/13294#discussion_r239276259
 
 

 ##
 File path: cpp-package/example/inference/README.md
 ##
 @@ -0,0 +1,35 @@
+# MXNet C++ Package Inference Workflow Examples
+
+## Building C++ Inference examples
+
+The examples in this folder demonstrate the **inference** workflow.
+To build examples use following commands:
+
+-  Release: **make all**
+-  Debug: **make debug all**
+
+
+## Examples demonstrating inference workflow
+
+This directory contains following examples. In order to run the examples, 
ensure that the path to the MXNet shared library is added to the OS specific 
environment variable viz. **LD\_LIBRARY\_PATH** for Linux, Mac and Ubuntu OS 
and **PATH** for Windows OS.
 
 Review comment:
   The example command is included in the README below.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] leleamol commented on a change in pull request #13294: [MXNET-1083] Add the example to demonstrate the inference workflow using C++ API

2018-12-05 Thread GitBox
leleamol commented on a change in pull request #13294: [MXNET-1083] Add the 
example to demonstrate the inference workflow using C++ API
URL: https://github.com/apache/incubator-mxnet/pull/13294#discussion_r239275736
 
 

 ##
 File path: cpp-package/example/inference/inception_inference.cpp
 ##
 @@ -0,0 +1,414 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*
+ * This example demonstrates image classification workflow with pre-trained 
models using MXNet C++ API.
+ * The example performs following tasks.
+ * 1. Load the pre-trained model,
+ * 2. Load the parameters of pre-trained model,
+ * 3. Load the image to be classified  in to NDArray.
+ * 4. Normalize the image using the mean of images that were used for training.
+ * 5. Run the forward pass and predict the input image.
+ */
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include "mxnet-cpp/MxNetCpp.h"
+#include 
+
+using namespace mxnet::cpp;
+
+
+/*
+ * class Predictor
+ *
+ * This class encapsulates the functionality to load the model, process input 
image and run the forward pass.
+ */
+
+class Predictor {
+ public:
+Predictor() {}
+Predictor(const std::string& model_json,
+  const std::string& model_params,
+  const Shape& input_shape,
+  bool gpu_context_type = false,
+  const std::string& synset_file = "",
+  const std::string& mean_image_file = "");
+void PredictImage(const std::string& image_file);
+~Predictor();
+
+ private:
+void LoadModel(const std::string& model_json_file);
+void LoadParameters(const std::string& model_parameters_file);
+void LoadSynset(const std::string& synset_file);
+NDArray LoadInputImage(const std::string& image_file);
+void LoadMeanImageData();
+void NormalizeInput(const std::string& mean_image_file);
+inline bool FileExists(const std::string& name) {
+struct stat buffer;
+return (stat(name.c_str(), ) == 0);
+}
+NDArray mean_img;
+std::map args_map;
+std::map aux_map;
+std::vector output_labels;
+Symbol net;
+Executor *executor;
+Shape input_shape;
+NDArray mean_image_data;
+Context global_ctx = Context::cpu();
+std::string mean_image_file;
+};
+
+
+/*
+ * The constructor takes following parameters as input:
+ * 1. model_json:  The model in json formatted file.
+ * 2. model_params: File containing model parameters
+ * 3. synset_file: File containing the list of image labels
+ * 4. input_shape: Shape of input data to the model. Since this class will be 
running one inference at a time,
+ * the input shape is required to be in format Shape(1, 
number_of_channels, height, width)
+ * The input image will be resized to (height x width) size before running the 
inference.
+ * The constructor will:
+ *  1. Load the model and parameter files.
+ *  2. Load the synset file.
+ *  3. Invoke the SimpleBind to bind the input argument to the model and 
create an executor.
+ *
+ *  The SimpleBind is expected to be invoked only once.
+ */
+Predictor::Predictor(const std::string& model_json,
+ const std::string& model_params,
+ const Shape& input_shape,
+ bool gpu_context_type,
+ const std::string& synset_file,
+ const std::string& mean_image_file):
+ input_shape(input_shape),
+ mean_image_file(mean_image_file) {
+  if (gpu_context_type) {
+global_ctx = Context::gpu();
+  }
+  // Load the model
+  LoadModel(model_json);
+
+  // Load the model parameters.
+  LoadParameters(model_params);
+
+  /*
+   * The data will be used to output the exact label that matches highest 
output of the model.
+   */
+  LoadSynset(synset_file);
+
+  /*
+   * Load the mean image data if specified.
+   */
+  if (!mean_image_file.empty()) {
+LoadMeanImageData();
+  } else {
+LG << "Mean image file for normalizing the input is not provide."
+   << " It may affect the accuracy of the prediction.";
+  }
+
+  // Create an executor after binding the model to input parameters.
+  args_map["data"] = NDArray(input_shape, 

[GitHub] leleamol commented on a change in pull request #13294: [MXNET-1083] Add the example to demonstrate the inference workflow using C++ API

2018-12-05 Thread GitBox
leleamol commented on a change in pull request #13294: [MXNET-1083] Add the 
example to demonstrate the inference workflow using C++ API
URL: https://github.com/apache/incubator-mxnet/pull/13294#discussion_r239276098
 
 

 ##
 File path: cpp-package/example/inference/inception_inference.cpp
 ##
 @@ -0,0 +1,414 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*
+ * This example demonstrates image classification workflow with pre-trained 
models using MXNet C++ API.
+ * The example performs following tasks.
+ * 1. Load the pre-trained model,
+ * 2. Load the parameters of pre-trained model,
+ * 3. Load the image to be classified  in to NDArray.
+ * 4. Normalize the image using the mean of images that were used for training.
+ * 5. Run the forward pass and predict the input image.
+ */
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include "mxnet-cpp/MxNetCpp.h"
+#include 
+
+using namespace mxnet::cpp;
+
+
+/*
+ * class Predictor
+ *
+ * This class encapsulates the functionality to load the model, process input 
image and run the forward pass.
+ */
+
+class Predictor {
+ public:
+Predictor() {}
+Predictor(const std::string& model_json,
+  const std::string& model_params,
+  const Shape& input_shape,
+  bool gpu_context_type = false,
+  const std::string& synset_file = "",
+  const std::string& mean_image_file = "");
+void PredictImage(const std::string& image_file);
+~Predictor();
 
 Review comment:
  It is done in the destructor.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services
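
For reviewers comparing against the existing Python workflow, the steps the C++ Predictor performs (load the symbol, load the parameters, bind once, normalize, forward) map roughly onto the Python module API as sketched below. The checkpoint prefix, epoch and input size are placeholders, not values taken from the example under review.

import mxnet as mx
import numpy as np

# Hypothetical checkpoint prefix/epoch; substitute the model being tested.
sym, arg_params, aux_params = mx.model.load_checkpoint('model/Inception-BN', 0)
mod = mx.mod.Module(symbol=sym, context=mx.cpu(), data_names=['data'], label_names=None)
# Bind exactly once for a single-image batch, mirroring the single SimpleBind call in the C++ code.
mod.bind(for_training=False, data_shapes=[('data', (1, 3, 224, 224))])
mod.set_params(arg_params, aux_params, allow_missing=True)

# Stands in for a resized, mean-normalized image loaded into an NDArray.
img = mx.nd.array(np.random.uniform(size=(1, 3, 224, 224)))
mod.forward(mx.io.DataBatch(data=[img]), is_train=False)
prob = mod.get_outputs()[0].asnumpy()
print(prob.argmax())  # index of the predicted label; map through the synset file for a name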


[GitHub] leleamol commented on a change in pull request #13294: [MXNET-1083] Add the example to demonstrate the inference workflow using C++ API

2018-12-05 Thread GitBox
leleamol commented on a change in pull request #13294: [MXNET-1083] Add the 
example to demonstrate the inference workflow using C++ API
URL: https://github.com/apache/incubator-mxnet/pull/13294#discussion_r239275908
 
 

 ##
 File path: cpp-package/example/inference/inception_inference.cpp
 ##
 @@ -0,0 +1,414 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*
+ * This example demonstrates image classification workflow with pre-trained 
models using MXNet C++ API.
+ * The example performs following tasks.
+ * 1. Load the pre-trained model,
+ * 2. Load the parameters of pre-trained model,
+ * 3. Load the image to be classified  in to NDArray.
+ * 4. Normalize the image using the mean of images that were used for training.
+ * 5. Run the forward pass and predict the input image.
+ */
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include "mxnet-cpp/MxNetCpp.h"
+#include 
+
+using namespace mxnet::cpp;
+
+
+/*
+ * class Predictor
+ *
+ * This class encapsulates the functionality to load the model, process input 
image and run the forward pass.
+ */
+
+class Predictor {
+ public:
+Predictor() {}
+Predictor(const std::string& model_json,
+  const std::string& model_params,
+  const Shape& input_shape,
+  bool gpu_context_type = false,
+  const std::string& synset_file = "",
+  const std::string& mean_image_file = "");
+void PredictImage(const std::string& image_file);
+~Predictor();
+
+ private:
+void LoadModel(const std::string& model_json_file);
+void LoadParameters(const std::string& model_parameters_file);
+void LoadSynset(const std::string& synset_file);
+NDArray LoadInputImage(const std::string& image_file);
+void LoadMeanImageData();
+void NormalizeInput(const std::string& mean_image_file);
+inline bool FileExists(const std::string& name) {
+struct stat buffer;
+return (stat(name.c_str(), ) == 0);
+}
+NDArray mean_img;
+std::map args_map;
+std::map aux_map;
+std::vector output_labels;
+Symbol net;
+Executor *executor;
+Shape input_shape;
+NDArray mean_image_data;
+Context global_ctx = Context::cpu();
+std::string mean_image_file;
+};
+
+
+/*
+ * The constructor takes following parameters as input:
+ * 1. model_json:  The model in json formatted file.
+ * 2. model_params: File containing model parameters
+ * 3. synset_file: File containing the list of image labels
+ * 4. input_shape: Shape of input data to the model. Since this class will be 
running one inference at a time,
+ * the input shape is required to be in format Shape(1, 
number_of_channels, height, width)
+ * The input image will be resized to (height x width) size before running the 
inference.
+ * The constructor will:
+ *  1. Load the model and parameter files.
+ *  2. Load the synset file.
+ *  3. Invoke the SimpleBind to bind the input argument to the model and 
create an executor.
+ *
+ *  The SimpleBind is expected to be invoked only once.
+ */
+Predictor::Predictor(const std::string& model_json,
+ const std::string& model_params,
+ const Shape& input_shape,
+ bool gpu_context_type,
+ const std::string& synset_file,
+ const std::string& mean_image_file):
+ input_shape(input_shape),
+ mean_image_file(mean_image_file) {
+  if (gpu_context_type) {
+global_ctx = Context::gpu();
+  }
+  // Load the model
+  LoadModel(model_json);
+
+  // Load the model parameters.
+  LoadParameters(model_params);
+
+  /*
+   * The data will be used to output the exact label that matches highest 
output of the model.
+   */
+  LoadSynset(synset_file);
+
+  /*
+   * Load the mean image data if specified.
+   */
+  if (!mean_image_file.empty()) {
+LoadMeanImageData();
+  } else {
+LG << "Mean image file for normalizing the input is not provide."
+   << " It may affect the accuracy of the prediction.";
+  }
+
+  // Create an executor after binding the model to input parameters.
+  args_map["data"] = NDArray(input_shape, 

[GitHub] ptrendx commented on issue #13362: Add NHWC layout support to Pooling (cuDNN only)

2018-12-05 Thread GitBox
ptrendx commented on issue #13362: Add NHWC layout support to Pooling (cuDNN 
only)
URL: https://github.com/apache/incubator-mxnet/pull/13362#issuecomment-444689260
 
 
  @Vikas89 I took a stab at modifying the PR description to include the motivation and the testing that we did.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] leleamol commented on a change in pull request #13294: [MXNET-1083] Add the example to demonstrate the inference workflow using C++ API

2018-12-05 Thread GitBox
leleamol commented on a change in pull request #13294: [MXNET-1083] Add the 
example to demonstrate the inference workflow using C++ API
URL: https://github.com/apache/incubator-mxnet/pull/13294#discussion_r239276048
 
 

 ##
 File path: cpp-package/example/inference/inception_inference.cpp
 ##
 @@ -0,0 +1,414 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*
+ * This example demonstrates image classification workflow with pre-trained 
models using MXNet C++ API.
+ * The example performs following tasks.
+ * 1. Load the pre-trained model,
+ * 2. Load the parameters of pre-trained model,
+ * 3. Load the image to be classified  in to NDArray.
+ * 4. Normalize the image using the mean of images that were used for training.
+ * 5. Run the forward pass and predict the input image.
+ */
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include "mxnet-cpp/MxNetCpp.h"
+#include 
+
+using namespace mxnet::cpp;
+
+
+/*
+ * class Predictor
+ *
+ * This class encapsulates the functionality to load the model, process input 
image and run the forward pass.
+ */
+
+class Predictor {
+ public:
+Predictor() {}
+Predictor(const std::string& model_json,
+  const std::string& model_params,
+  const Shape& input_shape,
+  bool gpu_context_type = false,
+  const std::string& synset_file = "",
+  const std::string& mean_image_file = "");
+void PredictImage(const std::string& image_file);
+~Predictor();
+
+ private:
+void LoadModel(const std::string& model_json_file);
+void LoadParameters(const std::string& model_parameters_file);
+void LoadSynset(const std::string& synset_file);
+NDArray LoadInputImage(const std::string& image_file);
+void LoadMeanImageData();
+void NormalizeInput(const std::string& mean_image_file);
+inline bool FileExists(const std::string& name) {
+struct stat buffer;
+return (stat(name.c_str(), ) == 0);
+}
+NDArray mean_img;
+std::map args_map;
+std::map aux_map;
+std::vector output_labels;
+Symbol net;
+Executor *executor;
+Shape input_shape;
+NDArray mean_image_data;
+Context global_ctx = Context::cpu();
+std::string mean_image_file;
+};
+
+
+/*
+ * The constructor takes following parameters as input:
+ * 1. model_json:  The model in json formatted file.
+ * 2. model_params: File containing model parameters
+ * 3. synset_file: File containing the list of image labels
+ * 4. input_shape: Shape of input data to the model. Since this class will be 
running one inference at a time,
+ * the input shape is required to be in format Shape(1, 
number_of_channels, height, width)
+ * The input image will be resized to (height x width) size before running the 
inference.
+ * The constructor will:
+ *  1. Load the model and parameter files.
+ *  2. Load the synset file.
+ *  3. Invoke the SimpleBind to bind the input argument to the model and 
create an executor.
+ *
+ *  The SimpleBind is expected to be invoked only once.
+ */
+Predictor::Predictor(const std::string& model_json,
+ const std::string& model_params,
+ const Shape& input_shape,
+ bool gpu_context_type,
+ const std::string& synset_file,
+ const std::string& mean_image_file):
+ input_shape(input_shape),
+ mean_image_file(mean_image_file) {
+  if (gpu_context_type) {
+global_ctx = Context::gpu();
+  }
+  // Load the model
+  LoadModel(model_json);
+
+  // Load the model parameters.
+  LoadParameters(model_params);
+
+  /*
+   * The data will be used to output the exact label that matches highest 
output of the model.
+   */
+  LoadSynset(synset_file);
+
+  /*
+   * Load the mean image data if specified.
+   */
+  if (!mean_image_file.empty()) {
+LoadMeanImageData();
+  } else {
+LG << "Mean image file for normalizing the input is not provide."
+   << " It may affect the accuracy of the prediction.";
+  }
+
+  // Create an executor after binding the model to input parameters.
+  args_map["data"] = NDArray(input_shape, 

[GitHub] leleamol commented on a change in pull request #13294: [MXNET-1083] Add the example to demonstrate the inference workflow using C++ API

2018-12-05 Thread GitBox
leleamol commented on a change in pull request #13294: [MXNET-1083] Add the 
example to demonstrate the inference workflow using C++ API
URL: https://github.com/apache/incubator-mxnet/pull/13294#discussion_r239275884
 
 

 ##
 File path: cpp-package/example/inference/inception_inference.cpp
 ##
 @@ -0,0 +1,414 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*
+ * This example demonstrates image classification workflow with pre-trained 
models using MXNet C++ API.
+ * The example performs following tasks.
+ * 1. Load the pre-trained model,
+ * 2. Load the parameters of pre-trained model,
+ * 3. Load the image to be classified  in to NDArray.
+ * 4. Normalize the image using the mean of images that were used for training.
+ * 5. Run the forward pass and predict the input image.
+ */
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include "mxnet-cpp/MxNetCpp.h"
+#include 
+
+using namespace mxnet::cpp;
+
+
+/*
+ * class Predictor
+ *
+ * This class encapsulates the functionality to load the model, process input 
image and run the forward pass.
+ */
+
+class Predictor {
+ public:
+Predictor() {}
+Predictor(const std::string& model_json,
+  const std::string& model_params,
+  const Shape& input_shape,
+  bool gpu_context_type = false,
+  const std::string& synset_file = "",
+  const std::string& mean_image_file = "");
+void PredictImage(const std::string& image_file);
+~Predictor();
+
+ private:
+void LoadModel(const std::string& model_json_file);
+void LoadParameters(const std::string& model_parameters_file);
+void LoadSynset(const std::string& synset_file);
+NDArray LoadInputImage(const std::string& image_file);
+void LoadMeanImageData();
+void NormalizeInput(const std::string& mean_image_file);
+inline bool FileExists(const std::string& name) {
+struct stat buffer;
+return (stat(name.c_str(), ) == 0);
+}
+NDArray mean_img;
+std::map args_map;
+std::map aux_map;
+std::vector output_labels;
+Symbol net;
+Executor *executor;
+Shape input_shape;
+NDArray mean_image_data;
+Context global_ctx = Context::cpu();
+std::string mean_image_file;
+};
+
+
+/*
+ * The constructor takes following parameters as input:
+ * 1. model_json:  The model in json formatted file.
+ * 2. model_params: File containing model parameters
+ * 3. synset_file: File containing the list of image labels
+ * 4. input_shape: Shape of input data to the model. Since this class will be 
running one inference at a time,
+ * the input shape is required to be in format Shape(1, 
number_of_channels, height, width)
+ * The input image will be resized to (height x width) size before running the 
inference.
+ * The constructor will:
+ *  1. Load the model and parameter files.
+ *  2. Load the synset file.
+ *  3. Invoke the SimpleBind to bind the input argument to the model and 
create an executor.
+ *
+ *  The SimpleBind is expected to be invoked only once.
+ */
+Predictor::Predictor(const std::string& model_json,
+ const std::string& model_params,
+ const Shape& input_shape,
+ bool gpu_context_type,
+ const std::string& synset_file,
+ const std::string& mean_image_file):
+ input_shape(input_shape),
+ mean_image_file(mean_image_file) {
+  if (gpu_context_type) {
+global_ctx = Context::gpu();
+  }
+  // Load the model
+  LoadModel(model_json);
+
+  // Load the model parameters.
+  LoadParameters(model_params);
+
+  /*
+   * The data will be used to output the exact label that matches highest 
output of the model.
+   */
+  LoadSynset(synset_file);
+
+  /*
+   * Load the mean image data if specified.
+   */
+  if (!mean_image_file.empty()) {
+LoadMeanImageData();
+  } else {
+LG << "Mean image file for normalizing the input is not provide."
+   << " It may affect the accuracy of the prediction.";
+  }
+
+  // Create an executor after binding the model to input parameters.
+  args_map["data"] = NDArray(input_shape, 

[GitHub] leleamol commented on a change in pull request #13294: [MXNET-1083] Add the example to demonstrate the inference workflow using C++ API

2018-12-05 Thread GitBox
leleamol commented on a change in pull request #13294: [MXNET-1083] Add the 
example to demonstrate the inference workflow using C++ API
URL: https://github.com/apache/incubator-mxnet/pull/13294#discussion_r239275057
 
 

 ##
 File path: cpp-package/example/inference/inception_inference.cpp
 ##
 @@ -0,0 +1,414 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*
+ * This example demonstrates image classification workflow with pre-trained 
models using MXNet C++ API.
+ * The example performs following tasks.
+ * 1. Load the pre-trained model,
+ * 2. Load the parameters of pre-trained model,
+ * 3. Load the image to be classified  in to NDArray.
+ * 4. Normalize the image using the mean of images that were used for training.
+ * 5. Run the forward pass and predict the input image.
+ */
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include "mxnet-cpp/MxNetCpp.h"
+#include 
+
+using namespace mxnet::cpp;
+
+
+/*
+ * class Predictor
+ *
+ * This class encapsulates the functionality to load the model, process input 
image and run the forward pass.
+ */
+
+class Predictor {
+ public:
+Predictor() {}
+Predictor(const std::string& model_json,
+  const std::string& model_params,
+  const Shape& input_shape,
+  bool gpu_context_type = false,
+  const std::string& synset_file = "",
+  const std::string& mean_image_file = "");
+void PredictImage(const std::string& image_file);
+~Predictor();
+
+ private:
+void LoadModel(const std::string& model_json_file);
+void LoadParameters(const std::string& model_parameters_file);
+void LoadSynset(const std::string& synset_file);
+NDArray LoadInputImage(const std::string& image_file);
+void LoadMeanImageData();
+void NormalizeInput(const std::string& mean_image_file);
+inline bool FileExists(const std::string& name) {
+struct stat buffer;
+return (stat(name.c_str(), ) == 0);
+}
+NDArray mean_img;
+std::map args_map;
+std::map aux_map;
+std::vector output_labels;
+Symbol net;
+Executor *executor;
+Shape input_shape;
+NDArray mean_image_data;
+Context global_ctx = Context::cpu();
+std::string mean_image_file;
+};
+
+
+/*
+ * The constructor takes following parameters as input:
+ * 1. model_json:  The model in json formatted file.
+ * 2. model_params: File containing model parameters
+ * 3. synset_file: File containing the list of image labels
+ * 4. input_shape: Shape of input data to the model. Since this class will be 
running one inference at a time,
+ * the input shape is required to be in format Shape(1, 
number_of_channels, height, width)
+ * The input image will be resized to (height x width) size before running the 
inference.
+ * The constructor will:
+ *  1. Load the model and parameter files.
+ *  2. Load the synset file.
+ *  3. Invoke the SimpleBind to bind the input argument to the model and 
create an executor.
+ *
+ *  The SimpleBind is expected to be invoked only once.
+ */
+Predictor::Predictor(const std::string& model_json,
+ const std::string& model_params,
+ const Shape& input_shape,
+ bool gpu_context_type,
+ const std::string& synset_file,
+ const std::string& mean_image_file):
+ input_shape(input_shape),
+ mean_image_file(mean_image_file) {
+  if (gpu_context_type) {
+global_ctx = Context::gpu();
+  }
+  // Load the model
+  LoadModel(model_json);
+
+  // Load the model parameters.
+  LoadParameters(model_params);
+
+  /*
+   * The data will be used to output the exact label that matches highest 
output of the model.
+   */
+  LoadSynset(synset_file);
+
+  /*
+   * Load the mean image data if specified.
+   */
+  if (!mean_image_file.empty()) {
+LoadMeanImageData();
+  } else {
+LG << "Mean image file for normalizing the input is not provide."
+   << " It may affect the accuracy of the prediction.";
+  }
+
+  // Create an executor after binding the model to input parameters.
+  args_map["data"] = NDArray(input_shape, 

[GitHub] leleamol commented on a change in pull request #13294: [MXNET-1083] Add the example to demonstrate the inference workflow using C++ API

2018-12-05 Thread GitBox
leleamol commented on a change in pull request #13294: [MXNET-1083] Add the 
example to demonstrate the inference workflow using C++ API
URL: https://github.com/apache/incubator-mxnet/pull/13294#discussion_r239274844
 
 

 ##
 File path: cpp-package/example/inference/inception_inference.cpp
 ##
 @@ -0,0 +1,414 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*
+ * This example demonstrates image classification workflow with pre-trained 
models using MXNet C++ API.
+ * The example performs following tasks.
+ * 1. Load the pre-trained model,
 
 Review comment:
   Done


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] leleamol commented on a change in pull request #13294: [MXNET-1083] Add the example to demonstrate the inference workflow using C++ API

2018-12-05 Thread GitBox
leleamol commented on a change in pull request #13294: [MXNET-1083] Add the 
example to demonstrate the inference workflow using C++ API
URL: https://github.com/apache/incubator-mxnet/pull/13294#discussion_r239274873
 
 

 ##
 File path: cpp-package/example/inference/inception_inference.cpp
 ##
 @@ -0,0 +1,414 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*
+ * This example demonstrates image classification workflow with pre-trained 
models using MXNet C++ API.
+ * The example performs following tasks.
+ * 1. Load the pre-trained model,
+ * 2. Load the parameters of pre-trained model,
+ * 3. Load the image to be classified  in to NDArray.
+ * 4. Normalize the image using the mean of images that were used for training.
+ * 5. Run the forward pass and predict the input image.
+ */
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include "mxnet-cpp/MxNetCpp.h"
+#include 
+
+using namespace mxnet::cpp;
+
+
+/*
+ * class Predictor
+ *
+ * This class encapsulates the functionality to load the model, process input 
image and run the forward pass.
+ */
+
+class Predictor {
+ public:
+Predictor() {}
+Predictor(const std::string& model_json,
+  const std::string& model_params,
 
 Review comment:
   Done.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] leleamol commented on a change in pull request #13294: [MXNET-1083] Add the example to demonstrate the inference workflow using C++ API

2018-12-05 Thread GitBox
leleamol commented on a change in pull request #13294: [MXNET-1083] Add the 
example to demonstrate the inference workflow using C++ API
URL: https://github.com/apache/incubator-mxnet/pull/13294#discussion_r239274909
 
 

 ##
 File path: cpp-package/example/inference/inception_inference.cpp
 ##
 @@ -0,0 +1,414 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*
+ * This example demonstrates image classification workflow with pre-trained 
models using MXNet C++ API.
+ * The example performs following tasks.
+ * 1. Load the pre-trained model,
+ * 2. Load the parameters of pre-trained model,
+ * 3. Load the image to be classified  in to NDArray.
+ * 4. Normalize the image using the mean of images that were used for training.
+ * 5. Run the forward pass and predict the input image.
+ */
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include "mxnet-cpp/MxNetCpp.h"
+#include 
+
+using namespace mxnet::cpp;
+
+
+/*
+ * class Predictor
+ *
+ * This class encapsulates the functionality to load the model, process input 
image and run the forward pass.
+ */
+
+class Predictor {
+ public:
+Predictor() {}
+Predictor(const std::string& model_json,
+  const std::string& model_params,
+  const Shape& input_shape,
+  bool gpu_context_type = false,
+  const std::string& synset_file = "",
+  const std::string& mean_image_file = "");
+void PredictImage(const std::string& image_file);
+~Predictor();
+
+ private:
+void LoadModel(const std::string& model_json_file);
+void LoadParameters(const std::string& model_parameters_file);
+void LoadSynset(const std::string& synset_file);
+NDArray LoadInputImage(const std::string& image_file);
+void LoadMeanImageData();
+void NormalizeInput(const std::string& mean_image_file);
+inline bool FileExists(const std::string& name) {
+struct stat buffer;
+return (stat(name.c_str(), ) == 0);
+}
+NDArray mean_img;
+std::map args_map;
+std::map aux_map;
+std::vector output_labels;
+Symbol net;
+Executor *executor;
+Shape input_shape;
+NDArray mean_image_data;
+Context global_ctx = Context::cpu();
+std::string mean_image_file;
+};
+
+
+/*
+ * The constructor takes following parameters as input:
+ * 1. model_json:  The model in json formatted file.
+ * 2. model_params: File containing model parameters
+ * 3. synset_file: File containing the list of image labels
+ * 4. input_shape: Shape of input data to the model. Since this class will be 
running one inference at a time,
+ * the input shape is required to be in format Shape(1, 
number_of_channels, height, width)
+ * The input image will be resized to (height x width) size before running the 
inference.
+ * The constructor will:
+ *  1. Load the model and parameter files.
+ *  2. Load the synset file.
+ *  3. Invoke the SimpleBind to bind the input argument to the model and 
create an executor.
+ *
+ *  The SimpleBind is expected to be invoked only once.
+ */
+Predictor::Predictor(const std::string& model_json,
+ const std::string& model_params,
+ const Shape& input_shape,
+ bool gpu_context_type,
+ const std::string& synset_file,
+ const std::string& mean_image_file):
+ input_shape(input_shape),
+ mean_image_file(mean_image_file) {
+  if (gpu_context_type) {
+global_ctx = Context::gpu();
+  }
+  // Load the model
+  LoadModel(model_json);
+
+  // Load the model parameters.
+  LoadParameters(model_params);
+
+  /*
+   * The data will be used to output the exact label that matches highest 
output of the model.
+   */
+  LoadSynset(synset_file);
+
+  /*
+   * Load the mean image data if specified.
+   */
+  if (!mean_image_file.empty()) {
+LoadMeanImageData();
+  } else {
+LG << "Mean image file for normalizing the input is not provide."
+   << " It may affect the accuracy of the prediction.";
+  }
+
+  // Create an executor after binding the model to input parameters.
+  args_map["data"] = NDArray(input_shape, 

[GitHub] leleamol commented on a change in pull request #13294: [MXNET-1083] Add the example to demonstrate the inference workflow using C++ API

2018-12-05 Thread GitBox
leleamol commented on a change in pull request #13294: [MXNET-1083] Add the 
example to demonstrate the inference workflow using C++ API
URL: https://github.com/apache/incubator-mxnet/pull/13294#discussion_r239274813
 
 

 ##
 File path: cpp-package/example/inference/README.md
 ##
 @@ -0,0 +1,35 @@
+# MXNet C++ Package Inference Workflow Examples
+
+## Building C++ Inference examples
+
+The examples in this folder demonstrate the **inference** workflow.
+To build examples use following commands:
+
+-  Release: **make all**
+-  Debug: **make debug all**
+
+
+## Examples demonstrating inference workflow
+
+This directory contains following examples. In order to run the examples, 
ensure that the path to the MXNet shared library is added to the OS specific 
environment variable viz. **LD\_LIBRARY\_PATH** for Linux, Mac and Ubuntu OS 
and **PATH** for Windows OS.
+
+### 
[inception_inference.cpp]()
+
+This example demonstrates image classification workflow with pre-trained 
models using MXNet C++ API. The command line parameters the example can accept 
are as shown below:
+
+   ```
+   ./inception_inference --help
+   Usage:
+   inception_inference --symbol   
--params  --image 

[GitHub] sandeep-krishnamurthy commented on a change in pull request #13362: Add NHWC layout support to Pooling (cuDNN only)

2018-12-05 Thread GitBox
sandeep-krishnamurthy commented on a change in pull request #13362: Add NHWC 
layout support to Pooling (cuDNN only)
URL: https://github.com/apache/incubator-mxnet/pull/13362#discussion_r239271822
 
 

 ##
 File path: src/operator/nn/pooling.cu
 ##
 @@ -116,10 +118,12 @@ void PoolingGradCompute(const nnvm::NodeAttrs& 
attrs,
   switch (param.pool_type) {
 case pool_enum::kMaxPooling:
 case pool_enum::kAvgPooling:
-  GetCuDNNPoolingOp(param).Backward(ctx, inputs[ograd_idx],
+  if (GetCuDNNPoolingOp(param).Backward(ctx, inputs[ograd_idx],
inputs[in_data_idx], 
inputs[out_data_idx],
-   req[0], outputs[0]);
-  return;
+   req[0], outputs[0])) {
+return;
+  }
+  break;
 
 Review comment:
   should we handle else case here?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] sandeep-krishnamurthy commented on a change in pull request #13362: Add NHWC layout support to Pooling (cuDNN only)

2018-12-05 Thread GitBox
sandeep-krishnamurthy commented on a change in pull request #13362: Add NHWC 
layout support to Pooling (cuDNN only)
URL: https://github.com/apache/incubator-mxnet/pull/13362#discussion_r239272144
 
 

 ##
 File path: tests/python/gpu/test_operator_gpu.py
 ##
 @@ -608,6 +608,52 @@ def test_convolution_versions():
 
 
 @with_seed()
+def test_pooling_with_convention():
+    # While the float32 and float64 output is reliably consistent, float16 departs occasionally.
+    # We compare cpu and gpu results only within a given precision.
+    for data_type in [np.float64, np.float32, np.float16]:
+        ctx_list = [{'ctx': mx.gpu(0), 'pool_data': (2, 2, 10, 10), 'type_dict': {'pool_data': data_type}},
+                    {'ctx': mx.cpu(0), 'pool_data': (2, 2, 10, 10), 'type_dict': {'pool_data': data_type}}]
+        sym = mx.sym.Pooling(kernel=(3,3), pool_type='max', pooling_convention='valid', name='pool')
+        check_consistency(sym, ctx_list)
+
+        sym = mx.sym.Pooling(kernel=(3,3), pool_type='max', pooling_convention='full', name='pool')
+        check_consistency(sym, ctx_list)
+
+        sym = mx.sym.Pooling(kernel=(300,300), pool_type='max', global_pool=True, name='pool')
+        check_consistency(sym, ctx_list)
+
+
+@with_seed()
+@assert_raises_cudnn_not_satisfied(min_version='7.0.1')
 
 Review comment:
  Should it rather skip this test if the cuDNN version is < 7.0.1?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services
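
A hedged sketch of the skip-based alternative being suggested: instead of asserting that an error is raised, the test is skipped when the installed cuDNN is too old. The get_cudnn_version callable is hypothetical; MXNet's test suite has its own decorators for this.

import functools
import unittest

def skip_if_cudnn_below(min_version, get_cudnn_version):
    """Decorator that skips a test unless cuDNN >= min_version (versions as comparable tuples)."""
    def decorator(test_fn):
        @functools.wraps(test_fn)
        def wrapper(*args, **kwargs):
            if get_cudnn_version() < min_version:
                raise unittest.SkipTest("requires cuDNN >= %s" % (min_version,))
            return test_fn(*args, **kwargs)
        return wrapper
    return decorator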


[GitHub] sandeep-krishnamurthy commented on a change in pull request #13362: Add NHWC layout support to Pooling (cuDNN only)

2018-12-05 Thread GitBox
sandeep-krishnamurthy commented on a change in pull request #13362: Add NHWC 
layout support to Pooling (cuDNN only)
URL: https://github.com/apache/incubator-mxnet/pull/13362#discussion_r239259258
 
 

 ##
 File path: python/mxnet/gluon/nn/conv_layers.py
 ##
 @@ -738,12 +741,13 @@ class MaxPool1D(_Pooling):
 """
 def __init__(self, pool_size=2, strides=None, padding=0, layout='NCW',
  ceil_mode=False, **kwargs):
-assert layout == 'NCW', "Only supports 'NCW' layout for now"
+assert layout in ('NCW', 'NWC'),\
+"Only NCW and NWC layouts are valid for 1D"
 
 Review comment:
  nit: 1D Pooling?
  The same comment applies to 2D and 3D.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] apeforest commented on a change in pull request #13362: Add NHWC layout support to Pooling (cuDNN only)

2018-12-05 Thread GitBox
apeforest commented on a change in pull request #13362: Add NHWC layout support 
to Pooling (cuDNN only)
URL: https://github.com/apache/incubator-mxnet/pull/13362#discussion_r239265544
 
 

 ##
 File path: src/operator/nn/cudnn/cudnn_pooling-inl.h
 ##
 @@ -165,55 +170,78 @@ class CuDNNPoolingOp {
 } else {
   LOG(FATAL) << "Only support 2D or 3D pooling";
 }
+return true;
   }
 
  private:
-  inline void Init(mshadow::Stream *s, const TBlob _data,
+  // Return boolean saying whether pooling configuration is supported
+  inline bool Init(mshadow::Stream *s, const TBlob _data,
   const TBlob _data) {
 using namespace mshadow;
+bool is_supported = true;
 #if CUDNN_MAJOR >= 5
 nan_prop_ = CUDNN_NOT_PROPAGATE_NAN;
 #endif
 if (param_.kernel.ndim() == 2) {
   // 2d conv
+  CHECK(param_.layout.value() == mshadow::kNCHW ||
+param_.layout.value() == mshadow::kNHWC) << "Need 2D layout";
+  cudnnTensorFormat_t cudnn_layout =
+  (param_.layout.value() == mshadow::kNCHW) ? CUDNN_TENSOR_NCHW
+: CUDNN_TENSOR_NHWC;
   Tensor data = in_data.get(s);
   Tensor out = out_data.get(s);
-  mshadow::Shape<4> dshape = data.shape_;
+  // Perform shape calculations in a standard (NCHW) layout space
+  mshadow::Shape<4> dshape_nchw = (param_.layout.value() == mshadow::kNHWC) ?
+  ConvertLayout(data.shape_, mshadow::kNHWC, mshadow::kNCHW) :
+  data.shape_;
+  mshadow::Shape<4> oshape_nchw = (param_.layout.value() == mshadow::kNHWC) ?
+  ConvertLayout(out.shape_, mshadow::kNHWC, mshadow::kNCHW) :
+  out.shape_;
   CUDNN_CALL(cudnnSetTensor4dDescriptor(in_desc_,
-CUDNN_TENSOR_NCHW,
+cudnn_layout,
 dtype_,
-data.shape_[0],
-data.shape_[1],
-data.shape_[2],
-data.shape_[3]));
+dshape_nchw[0],
+dshape_nchw[1],
+dshape_nchw[2],
+dshape_nchw[3]));
   CUDNN_CALL(cudnnSetTensor4dDescriptor(out_desc_,
-CUDNN_TENSOR_NCHW,
+cudnn_layout,
 dtype_,
-out.shape_[0],
-out.shape_[1],
-out.shape_[2],
-out.shape_[3]));
+oshape_nchw[0],
+oshape_nchw[1],
+oshape_nchw[2],
+oshape_nchw[3]));
+  int window_height = param_.global_pool ? dshape_nchw[2] : param_.kernel[0];
+  int window_width = param_.global_pool ? dshape_nchw[3] : param_.kernel[1];
+  // CuDNN v7.1.4 backprop kernel doesn't support window sizes 9 and above.
+  #if CUDNN_VERSION == 7104
+  is_supported = window_height <= 8 && window_width <= 8;
 
 Review comment:
  Yes, if it does not violate any NVIDIA license rules, it would be great to add the link here as a comment.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] ChaiBapchya commented on a change in pull request #13543: Chi_square_check for discrete distribution fix

2018-12-05 Thread GitBox
ChaiBapchya commented on a change in pull request #13543: Chi_square_check for 
discrete distribution fix
URL: https://github.com/apache/incubator-mxnet/pull/13543#discussion_r239260946
 
 

 ##
 File path: python/mxnet/test_utils.py
 ##
 @@ -1911,12 +1911,15 @@ def chi_square_check(generator, buckets, probs, 
nsamples=100):
 if continuous_dist:
 sample_bucket_ids = np.searchsorted(buckets_npy, samples, side='right')
 else:
-sample_bucket_ids = samples
+sample_bucket_ids = np.array(samples)
 if continuous_dist:
 sample_bucket_ids = sample_bucket_ids // 2
 obs_freq = np.zeros(shape=len(buckets), dtype=np.int)
-for i in range(len(buckets)):
-obs_freq[i] = (sample_bucket_ids == i).sum()
+for i, _ in enumerate(buckets):
+if continuous_dist:
+obs_freq[i] = (sample_bucket_ids == i).sum()
+else:
+obs_freq[i] = (sample_bucket_ids == buckets[i]).sum()
 
 Review comment:
  No, because for a discrete distribution the bucketing is different from the continuous case.
  For continuous distributions the buckets are intervals, e.g. [0,4], [5,9].
  For discrete distributions the buckets are the values themselves, e.g. [0,1,2,3,4,5,6,7,8,9].


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services
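
A minimal sketch of the distinction described above: for a continuous distribution the buckets are intervals and samples are mapped to bucket ids with searchsorted, while for a discrete distribution the buckets are the values themselves, so the observed frequency of bucket i is simply how often that value occurs.

import numpy as np

def observed_frequencies(samples, buckets, continuous):
    samples = np.asarray(samples)
    obs_freq = np.zeros(len(buckets), dtype=np.int64)
    if continuous:
        # buckets like [(0, 4), (5, 9)]: flatten the edges, then map each sample to an interval id
        edges = np.array([edge for interval in buckets for edge in interval], dtype=np.float64)
        ids = np.searchsorted(edges, samples, side='right') // 2
        for i in range(len(buckets)):
            obs_freq[i] = (ids == i).sum()
    else:
        # buckets like [0, 1, 2, ...]: count exact matches of each bucket value
        for i, value in enumerate(buckets):
            obs_freq[i] = (samples == value).sum()
    return obs_freq

print(observed_frequencies([0, 1, 1, 3, 3, 3], [0, 1, 2, 3], continuous=False))  # [1 2 0 3]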


[GitHub] ptrendx commented on issue #13362: Add NHWC layout support to Pooling (cuDNN only)

2018-12-05 Thread GitBox
ptrendx commented on issue #13362: Add NHWC layout support to Pooling (cuDNN 
only)
URL: https://github.com/apache/incubator-mxnet/pull/13362#issuecomment-444670966
 
 
  @apeforest @Vikas89 Not sure I understand the question. Are you asking why we need NHWC support for pooling?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] ptrendx commented on issue #13362: Add NHWC layout support to Pooling (cuDNN only)

2018-12-05 Thread GitBox
ptrendx commented on issue #13362: Add NHWC layout support to Pooling (cuDNN 
only)
URL: https://github.com/apache/incubator-mxnet/pull/13362#issuecomment-444669921
 
 
  @TaoLv
  Re MKLDNN: per the title of this PR, this enables NHWC support only for the cuDNN path. I don't know whether MKLDNN supports NHWC pooling.
  Re quantized pooling: that file seems to assume NCHW layout. What would you like me to do with it?
  Re layout parameter: yes, other operators expose layout information (in fact, pooling already does as well) - see convolutions (e.g. https://mxnet.incubator.apache.org/api/python/gluon/nn.html#mxnet.gluon.nn.Conv2D) and batchnorm (axis parameter). NDArray does not know its layout, and for many use cases it would not make sense. That is why layout is exposed as an operator parameter.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services
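
To make the last point concrete, a sketch of how layout is passed per operator rather than carried by the NDArray. This only runs symbolic shape inference, so no device is needed, but whether a given layout/dtype combination is actually executable depends on the backend (cuDNN here), so treat it as an illustration of the API surface rather than a guaranteed configuration.

import mxnet as mx

data = mx.sym.Variable('data')
# layout is an argument of the operator, not a property of the array
conv = mx.sym.Convolution(data=data, num_filter=16, kernel=(3, 3), layout='NHWC', name='conv')
bn = mx.sym.BatchNorm(data=conv, axis=3, name='bn')  # the channel axis is last in NHWC

# Pure shape inference on an NHWC input of shape (batch, height, width, channels).
_, out_shapes, _ = bn.infer_shape(data=(1, 32, 32, 3))
print(out_shapes)  # [(1, 30, 30, 16)]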

