[GitHub] [incubator-mxnet] szha commented on issue #18662: out of memory issue while using mxnet with sockeye

2020-07-06 Thread GitBox


szha commented on issue #18662:
URL: 
https://github.com/apache/incubator-mxnet/issues/18662#issuecomment-654032873


   @MrRaghav thanks for creating the issue. What model of GPU are you using? 
What's the GPU memory size?
   Also, have you tried using `export MXNET_GPU_MEM_POOL_TYPE=Round`? 
https://mxnet.apache.org/api/faq/env_var#memory-options
   ```
   Round: A memory pool that always rounds the requested memory size and 
allocates memory of the rounded size. MXNET_GPU_MEM_POOL_ROUND_LINEAR_CUTOFF 
defines how to round up a memory size. Caching and allocating buffered memory 
works in the same way as the naive memory pool.
   ```
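To make the suggestion concrete, here is a minimal shell sketch (assuming a POSIX shell; the cutoff value 24 is the default documented in the env_var FAQ, shown here only for illustration) of enabling the rounding pool before launching training:

```shell
# Sketch: enable the rounding GPU memory pool for processes started
# from this shell. MXNET_GPU_MEM_POOL_ROUND_LINEAR_CUTOFF controls the
# rounding bucket; 24 is the documented default, not a tuned value.
export MXNET_GPU_MEM_POOL_TYPE=Round
export MXNET_GPU_MEM_POOL_ROUND_LINEAR_CUTOFF=24
echo "pool=$MXNET_GPU_MEM_POOL_TYPE cutoff=$MXNET_GPU_MEM_POOL_ROUND_LINEAR_CUTOFF"
```

The training command must then be started from the same shell so the variables are inherited by the process.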



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org








[GitHub] [incubator-mxnet] bgawrych commented on pull request #18158: [numpy] fix npx.softmax for 0-sized inputs

2020-07-06 Thread GitBox


bgawrych commented on pull request #18158:
URL: https://github.com/apache/incubator-mxnet/pull/18158#issuecomment-654098082


   @Yiyan66 @yzhliu Do you think this fix should be cherry-picked to 1.x?
   Without it, my change https://github.com/apache/incubator-mxnet/pull/18602 fails with a floating-point exception.







[incubator-mxnet] tag 1.7.0.rc0 created (now 477affe)

2020-07-06 Thread taolv
This is an automated email from the ASF dual-hosted git repository.

taolv pushed a change to tag 1.7.0.rc0
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


  at 477affe  (commit)
No new revisions were added by this update.




[incubator-mxnet-site] branch asf-site updated: Bump the publish timestamp.

2020-07-06 Thread aaronmarkham
This is an automated email from the ASF dual-hosted git repository.

aaronmarkham pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new 3eee77c  Bump the publish timestamp.
3eee77c is described below

commit 3eee77cb368c653d0f8302bd7e00621648832e29
Author: mxnet-ci 
AuthorDate: Mon Jul 6 06:42:25 2020 +

Bump the publish timestamp.
---
 date.txt | 1 +
 1 file changed, 1 insertion(+)

diff --git a/date.txt b/date.txt
new file mode 100644
index 000..b905f13
--- /dev/null
+++ b/date.txt
@@ -0,0 +1 @@
+Mon Jul  6 06:42:25 UTC 2020



[incubator-mxnet-site] branch asf-site updated: Publish triggered by CI

2020-07-06 Thread aaronmarkham
This is an automated email from the ASF dual-hosted git repository.

aaronmarkham pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new a2004b4  Publish triggered by CI
a2004b4 is described below

commit a2004b4e20339789e72d340dd3b4f89de4d9709a
Author: mxnet-ci 
AuthorDate: Mon Jul 6 06:42:20 2020 +

Publish triggered by CI
---
 date.txt | 1 -
 feed.xml | 2 +-
 2 files changed, 1 insertion(+), 2 deletions(-)

diff --git a/date.txt b/date.txt
deleted file mode 100644
index 947d918..000
--- a/date.txt
+++ /dev/null
@@ -1 +0,0 @@
-Mon Jul  6 00:41:17 UTC 2020
diff --git a/feed.xml b/feed.xml
index 5f92c71..649ef1f 100644
--- a/feed.xml
+++ b/feed.xml
@@ -1 +1 @@
-http://www.w3.org/2005/Atom; >https://jekyllrb.com/; 
version="4.0.0">Jekyllhttps://mxnet.apache.org/feed.xml; rel="self" type="application/atom+xml" 
/>https://mxnet.apache.org/; rel="alternate" type="text/html" 
/>2020-07-06T00:30:31+00:00https://mxnet.apache.org/feed.xmlApache MXNetA flexible and efficient library for 
deep [...]
\ No newline at end of file
+http://www.w3.org/2005/Atom; >https://jekyllrb.com/; 
version="4.0.0">Jekyllhttps://mxnet.apache.org/feed.xml; rel="self" type="application/atom+xml" 
/>https://mxnet.apache.org/; rel="alternate" type="text/html" 
/>2020-07-06T06:31:09+00:00https://mxnet.apache.org/feed.xmlApache MXNetA flexible and efficient library for 
deep [...]
\ No newline at end of file



[GitHub] [incubator-mxnet] cenggokhan closed issue #18030: Model load problem in 64 bit devices

2020-07-06 Thread GitBox


cenggokhan closed issue #18030:
URL: https://github.com/apache/incubator-mxnet/issues/18030


   







[GitHub] [incubator-mxnet] cosmincatalin commented on issue #18655: MXNet for Scala 2.12 and 2.13

2020-07-06 Thread GitBox


cosmincatalin commented on issue #18655:
URL: 
https://github.com/apache/incubator-mxnet/issues/18655#issuecomment-654082313


   I've tinkered a little with building from source but wasn't very successful; I guess I need to focus on it more. To answer your second question: yes, I am interested specifically in the CPU packages.








[GitHub] [incubator-mxnet] MrRaghav commented on issue #18662: out of memory issue while using mxnet with sockeye

2020-07-06 Thread GitBox


MrRaghav commented on issue #18662:
URL: 
https://github.com/apache/incubator-mxnet/issues/18662#issuecomment-654201954


   Hello, please find the information in the following points:
   
   1) I am using an **RTX 2080 Ti**.
   2) To run Sockeye, I used 3 GPUs and specified the device IDs. The GPU memory is as follows:
   
   ```
   username@server:~/username/sockeye$ nvidia-smi --format=csv --query-gpu=memory.total
   memory.total [MiB]
   11019 MiB
   11019 MiB
   11019 MiB
   ```
   
   3) Regarding the export command, I ran sockeye.train as follows:
   
   ```
   export MXNET_GPU_MEM_POOL_TYPE=Round
   python3 -m sockeye.train -s trained.BPE.de -t trained.BPE.en -vs dev.BPE.de -vt dev.BPE.en --shared-vocab \
       --device-ids -3 --max-checkpoints 3 -o model
   ```
   
   But I still got the same error.
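One quick sanity check worth doing here (a hypothetical suggestion, not from the thread): confirm that the exported variable is actually visible inside the Python process that runs sockeye.train, for example by printing it at startup:

```python
import os

# Hypothetical check: an exported variable only reaches the training
# process if it was set in the same shell session (or passed inline,
# e.g. `MXNET_GPU_MEM_POOL_TYPE=Round python3 -m sockeye.train ...`).
pool_type = os.environ.get("MXNET_GPU_MEM_POOL_TYPE", "<unset>")
print("MXNET_GPU_MEM_POOL_TYPE =", pool_type)
```

If this prints `<unset>`, the setting never reached MXNet and the pool behavior would be unchanged.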







[incubator-mxnet-site] branch asf-site updated: Publish triggered by CI

2020-07-06 Thread aaronmarkham
This is an automated email from the ASF dual-hosted git repository.

aaronmarkham pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new c4ac656  Publish triggered by CI
c4ac656 is described below

commit c4ac65623517f92420946b682f0ae2601a76792f
Author: mxnet-ci 
AuthorDate: Mon Jul 6 12:53:31 2020 +

Publish triggered by CI
---
 date.txt | 1 -
 feed.xml | 2 +-
 2 files changed, 1 insertion(+), 2 deletions(-)

diff --git a/date.txt b/date.txt
deleted file mode 100644
index b905f13..000
--- a/date.txt
+++ /dev/null
@@ -1 +0,0 @@
-Mon Jul  6 06:42:25 UTC 2020
diff --git a/feed.xml b/feed.xml
index 649ef1f..b09155a 100644
--- a/feed.xml
+++ b/feed.xml
@@ -1 +1 @@
-http://www.w3.org/2005/Atom; >https://jekyllrb.com/; 
version="4.0.0">Jekyllhttps://mxnet.apache.org/feed.xml; rel="self" type="application/atom+xml" 
/>https://mxnet.apache.org/; rel="alternate" type="text/html" 
/>2020-07-06T06:31:09+00:00https://mxnet.apache.org/feed.xmlApache MXNetA flexible and efficient library for 
deep [...]
\ No newline at end of file
+http://www.w3.org/2005/Atom; >https://jekyllrb.com/; 
version="4.0.0">Jekyllhttps://mxnet.apache.org/feed.xml; rel="self" type="application/atom+xml" 
/>https://mxnet.apache.org/; rel="alternate" type="text/html" 
/>2020-07-06T12:30:16+00:00https://mxnet.apache.org/feed.xmlApache MXNetA flexible and efficient library for 
deep [...]
\ No newline at end of file



[incubator-mxnet-site] branch asf-site updated: Bump the publish timestamp.

2020-07-06 Thread aaronmarkham
This is an automated email from the ASF dual-hosted git repository.

aaronmarkham pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new 382e599  Bump the publish timestamp.
382e599 is described below

commit 382e59901c02ba3e6a54d3561ad549b89dbe9dad
Author: mxnet-ci 
AuthorDate: Mon Jul 6 12:53:36 2020 +

Bump the publish timestamp.
---
 date.txt | 1 +
 1 file changed, 1 insertion(+)

diff --git a/date.txt b/date.txt
new file mode 100644
index 000..9c29e97
--- /dev/null
+++ b/date.txt
@@ -0,0 +1 @@
+Mon Jul  6 12:53:36 UTC 2020



svn commit: r40320 - in /dev/incubator/mxnet/1.7.0.rc0: ./ apache-mxnet-src-1.7.0.rc0-incubating.tar.gz apache-mxnet-src-1.7.0.rc0-incubating.tar.gz.asc apache-mxnet-src-1.7.0.rc0-incubating.tar.gz.sh

2020-07-06 Thread taolv
Author: taolv
Date: Mon Jul  6 13:26:45 2020
New Revision: 40320

Log:
Add mxnet-1.7.0.rc0

Added:
dev/incubator/mxnet/1.7.0.rc0/
dev/incubator/mxnet/1.7.0.rc0/apache-mxnet-src-1.7.0.rc0-incubating.tar.gz  
 (with props)

dev/incubator/mxnet/1.7.0.rc0/apache-mxnet-src-1.7.0.rc0-incubating.tar.gz.asc  
 (with props)

dev/incubator/mxnet/1.7.0.rc0/apache-mxnet-src-1.7.0.rc0-incubating.tar.gz.sha512

Added: 
dev/incubator/mxnet/1.7.0.rc0/apache-mxnet-src-1.7.0.rc0-incubating.tar.gz
==
Binary file - no diff available.

Propchange: 
dev/incubator/mxnet/1.7.0.rc0/apache-mxnet-src-1.7.0.rc0-incubating.tar.gz
--
svn:mime-type = application/x-gzip

Added: 
dev/incubator/mxnet/1.7.0.rc0/apache-mxnet-src-1.7.0.rc0-incubating.tar.gz.asc
==
Binary file - no diff available.

Propchange: 
dev/incubator/mxnet/1.7.0.rc0/apache-mxnet-src-1.7.0.rc0-incubating.tar.gz.asc
--
svn:mime-type = application/pgp-signature

Added: 
dev/incubator/mxnet/1.7.0.rc0/apache-mxnet-src-1.7.0.rc0-incubating.tar.gz.sha512
==
--- 
dev/incubator/mxnet/1.7.0.rc0/apache-mxnet-src-1.7.0.rc0-incubating.tar.gz.sha512
 (added)
+++ 
dev/incubator/mxnet/1.7.0.rc0/apache-mxnet-src-1.7.0.rc0-incubating.tar.gz.sha512
 Mon Jul  6 13:26:45 2020
@@ -0,0 +1 @@
+67401ff5d0ed3e84cbf82e7d2903caea07c21627c1dcf7af708a74b8e85ecb1c36df37bf20c2aa1666bbe71dcae855b5c3e7863de6abccd3a89516c5e3ea58da
  apache-mxnet-src-1.7.0.rc0-incubating.tar.gz




[GitHub] [incubator-mxnet] Yiyan66 commented on pull request #18660: [numpy][do not review] fix flaky mixed precision binary error

2020-07-06 Thread GitBox


Yiyan66 commented on pull request #18660:
URL: https://github.com/apache/incubator-mxnet/pull/18660#issuecomment-654217844


   @mxnet-bot run ci [centos-cpu]







[GitHub] [incubator-mxnet] mxnet-bot commented on pull request #18660: [numpy][do not review] fix flaky mixed precision binary error

2020-07-06 Thread GitBox


mxnet-bot commented on pull request #18660:
URL: https://github.com/apache/incubator-mxnet/pull/18660#issuecomment-654217914


   Jenkins CI successfully triggered : [centos-cpu]







[GitHub] [incubator-mxnet] Kh4L commented on a change in pull request #18405: Add deleting of args aux aux to Partition API

2020-07-06 Thread GitBox


Kh4L commented on a change in pull request #18405:
URL: https://github.com/apache/incubator-mxnet/pull/18405#discussion_r450375033



##
File path: python/mxnet/gluon/block.py
##
@@ -1040,41 +1040,62 @@ def _build_cache(self, *args):
 warnings.warn("Parameter %s is not used by any computation. "
   "Is this intended?"%unused, stacklevel=4)
 
-data_indices = []
-param_indices = []
-self._cached_op_args = []
-for i, name in enumerate(input_names):
-if name in data_names:
-data_indices.append(i)
-self._cached_op_args.append((True, data_names[name]))
-else:
-param_indices.append(i)
-self._cached_op_args.append((False, params[name]))
-flags = [('data_indices', data_indices), ('param_indices', 
param_indices)] + \
-self._flags
-
 args, _ = _flatten(args, "input")
 try:
-for is_arg, i in self._cached_op_args:
-if not is_arg:
-i.data()
+for name in input_names:
+if name in params:
+params[name].data()
 except DeferredInitializationError:
 self._deferred_infer_shape(*args)
-for is_arg, i in self._cached_op_args:
-if not is_arg:
-i._finish_deferred_init()
+for name in input_names:
+if name in params:
+params[name]._finish_deferred_init()
 
+arg_dict, aux_dict = dict(), dict()
 if self._backend:
 ctx = args[0].context
 # get list of params in the order of out.list_arguments
-arg_dict = {name:args[data_names[name]] if name in 
data_names.keys() else params[name].data()
-for name in out.list_arguments()}
-aux_dict = {name:args[data_names[name]] if name in 
data_names.keys() else params[name].data()
-for name in out.list_auxiliary_states()}
+arg_dict.update({name:args[data_names[name]] if name in 
data_names.keys() else params[name].data()
+ for name in out.list_arguments()})
+aux_dict.update({name:args[data_names[name]] if name in 
data_names.keys() else params[name].data()
+ for name in out.list_auxiliary_states()})
 # Partition the graph.
 out = out.optimize_for(self._backend, arg_dict, aux_dict, ctx, 
**self._backend_opts)
+
 #update cached graph with partitioned graph
 self._cached_graph = data, out
+
+input_names = out.list_inputs()
+data_indices = []
+param_indices = []
+self._cached_op_args = []
+for i, name in enumerate(input_names):
+pair = None
+if name in data_names:
+data_indices.append(i)
+pair = (True, data_names[name])
+else:
+param_indices.append(i)
+if name in params:
+param = params[name]
+else:
+assert self._backend, "Parameter " + name + " is missing 
from block params"
+if name in arg_dict or name:
+param_data = arg_dict[name]
+elif name in aux_dict:
+param_data = aux_dict[name]
+else:
+raise RuntimeError('Expected inputs missing from arg 
and aux after partioning. '

Review comment:
   Done









[GitHub] [incubator-mxnet] szha commented on issue #18662: out of memory issue while using mxnet with sockeye

2020-07-06 Thread GitBox


szha commented on issue #18662:
URL: 
https://github.com/apache/incubator-mxnet/issues/18662#issuecomment-654383639


   @fhieber do you have a recommendation on how to run Sockeye on the above GPU?















[GitHub] [incubator-mxnet] leezu commented on a change in pull request #18625: Enable Large Tensor Support: Stage 1

2020-07-06 Thread GitBox


leezu commented on a change in pull request #18625:
URL: https://github.com/apache/incubator-mxnet/pull/18625#discussion_r450407053



##
File path: ci/docker/runtime_functions.sh
##
@@ -512,6 +519,7 @@ build_ubuntu_gpu_clang10_werror() {
-DMXNET_CUDA_ARCH="$CI_CMAKE_CUDA_ARCH" \
-DCMAKE_BUILD_TYPE="RelWithDebInfo" \
-DUSE_CPP_PACKAGE=OFF \
+   -DUSE_INT64_TENSOR_SIZE=OFF \

Review comment:
   What's the reason for splitting this up into multiple stages? The current PR only changes the default settings (enabling the setting by default). As long as there are bugs in various environments (e.g. with clang), you will break users, because these users won't know that the default changed and that they now need to opt into the small-tensor case. Thus I recommend including all required fixes in this PR. Also note that this clang build here is unrelated to `MKL_USE_ILP64`. Thanks for your work on the large tensor support!
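As background on why this index-width default matters (an illustrative aside, not part of the review): a tensor with more than 2**31 - 1 elements cannot be addressed with a signed 32-bit flat index, which is what settings like `USE_INT64_TENSOR_SIZE` exist to handle.

```python
# Illustration: why int64 tensor indexing matters for large tensors.
INT32_MAX = 2**31 - 1          # largest signed 32-bit index value

shape = (50_000, 50_000)       # a 50k x 50k matrix as an example
n_elements = shape[0] * shape[1]

# 2.5 billion elements exceed the 32-bit index range, so flat offsets
# into this tensor require a 64-bit index type.
needs_int64 = n_elements > INT32_MAX
print(needs_int64)  # True
```

The shape here is arbitrary; any tensor past the 2**31 - 1 element boundary hits the same limit.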









[incubator-mxnet-site] branch asf-site updated: Bump the publish timestamp.

2020-07-06 Thread aaronmarkham
This is an automated email from the ASF dual-hosted git repository.

aaronmarkham pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new 641764b  Bump the publish timestamp.
641764b is described below

commit 641764b0e6e990b0ec872aaf7755a23036b521cb
Author: mxnet-ci 
AuthorDate: Mon Jul 6 18:40:53 2020 +

Bump the publish timestamp.
---
 date.txt | 1 +
 1 file changed, 1 insertion(+)

diff --git a/date.txt b/date.txt
new file mode 100644
index 000..8f55ea6
--- /dev/null
+++ b/date.txt
@@ -0,0 +1 @@
+Mon Jul  6 18:40:53 UTC 2020



[incubator-mxnet-site] branch asf-site updated: Publish triggered by CI

2020-07-06 Thread aaronmarkham
This is an automated email from the ASF dual-hosted git repository.

aaronmarkham pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new d2bacf1  Publish triggered by CI
d2bacf1 is described below

commit d2bacf109db931e22da112204965542f378c29e5
Author: mxnet-ci 
AuthorDate: Mon Jul 6 18:40:48 2020 +

Publish triggered by CI
---
 date.txt | 1 -
 feed.xml | 2 +-
 2 files changed, 1 insertion(+), 2 deletions(-)

diff --git a/date.txt b/date.txt
deleted file mode 100644
index 9c29e97..000
--- a/date.txt
+++ /dev/null
@@ -1 +0,0 @@
-Mon Jul  6 12:53:36 UTC 2020
diff --git a/feed.xml b/feed.xml
index b09155a..4a19274 100644
--- a/feed.xml
+++ b/feed.xml
@@ -1 +1 @@
-http://www.w3.org/2005/Atom; >https://jekyllrb.com/; 
version="4.0.0">Jekyllhttps://mxnet.apache.org/feed.xml; rel="self" type="application/atom+xml" 
/>https://mxnet.apache.org/; rel="alternate" type="text/html" 
/>2020-07-06T12:30:16+00:00https://mxnet.apache.org/feed.xmlApache MXNetA flexible and efficient library for 
deep [...]
\ No newline at end of file
+http://www.w3.org/2005/Atom; >https://jekyllrb.com/; 
version="4.0.0">Jekyllhttps://mxnet.apache.org/feed.xml; rel="self" type="application/atom+xml" 
/>https://mxnet.apache.org/; rel="alternate" type="text/html" 
/>2020-07-06T18:30:18+00:00https://mxnet.apache.org/feed.xmlApache MXNetA flexible and efficient library for 
deep [...]
\ No newline at end of file



[GitHub] [incubator-mxnet] samskalicky commented on a change in pull request #18405: Add deleting of args aux aux to Partition API

2020-07-06 Thread GitBox


samskalicky commented on a change in pull request #18405:
URL: https://github.com/apache/incubator-mxnet/pull/18405#discussion_r450346162



##
File path: python/mxnet/gluon/block.py
##
@@ -1040,41 +1040,62 @@ def _build_cache(self, *args):
 warnings.warn("Parameter %s is not used by any computation. "
   "Is this intended?"%unused, stacklevel=4)
 
-data_indices = []
-param_indices = []
-self._cached_op_args = []
-for i, name in enumerate(input_names):
-if name in data_names:
-data_indices.append(i)
-self._cached_op_args.append((True, data_names[name]))
-else:
-param_indices.append(i)
-self._cached_op_args.append((False, params[name]))
-flags = [('data_indices', data_indices), ('param_indices', 
param_indices)] + \
-self._flags
-
 args, _ = _flatten(args, "input")
 try:
-for is_arg, i in self._cached_op_args:
-if not is_arg:
-i.data()
+for name in input_names:
+if name in params:
+params[name].data()
 except DeferredInitializationError:
 self._deferred_infer_shape(*args)
-for is_arg, i in self._cached_op_args:
-if not is_arg:
-i._finish_deferred_init()
+for name in input_names:
+if name in params:
+params[name]._finish_deferred_init()
 
+arg_dict, aux_dict = dict(), dict()
 if self._backend:
 ctx = args[0].context
 # get list of params in the order of out.list_arguments
-arg_dict = {name:args[data_names[name]] if name in 
data_names.keys() else params[name].data()
-for name in out.list_arguments()}
-aux_dict = {name:args[data_names[name]] if name in 
data_names.keys() else params[name].data()
-for name in out.list_auxiliary_states()}
+arg_dict.update({name:args[data_names[name]] if name in 
data_names.keys() else params[name].data()
+ for name in out.list_arguments()})
+aux_dict.update({name:args[data_names[name]] if name in 
data_names.keys() else params[name].data()
+ for name in out.list_auxiliary_states()})
 # Partition the graph.
 out = out.optimize_for(self._backend, arg_dict, aux_dict, ctx, 
**self._backend_opts)
+
 #update cached graph with partitioned graph
 self._cached_graph = data, out
+
+input_names = out.list_inputs()
+data_indices = []
+param_indices = []
+self._cached_op_args = []
+for i, name in enumerate(input_names):
+pair = None
+if name in data_names:
+data_indices.append(i)
+pair = (True, data_names[name])
+else:
+param_indices.append(i)

Review comment:
   nit: can we move this above the if/else since we do it in both?









[GitHub] [incubator-mxnet] rondogency commented on issue #18665: Using ShiftScale operation (Similar to BatchNorm) in MXNet (Customer question/inquiry)

2020-07-06 Thread GitBox


rondogency commented on issue #18665:
URL: 
https://github.com/apache/incubator-mxnet/issues/18665#issuecomment-654368718


   Hi, are you using the Gluon API? According to the doc it is supported: https://github.com/apache/incubator-mxnet/blob/v1.7.x/python/mxnet/gluon/nn/basic_layers.py#L306. If it's not, then it would be a bug in Gluon.
   
   Also, which MXNet version are you using?







[GitHub] [incubator-mxnet] Kh4L commented on a change in pull request #18405: Add deleting of args aux aux to Partition API

2020-07-06 Thread GitBox


Kh4L commented on a change in pull request #18405:
URL: https://github.com/apache/incubator-mxnet/pull/18405#discussion_r450376590



##
File path: python/mxnet/gluon/block.py
##
@@ -1437,6 +1460,23 @@ def hybrid_forward(self, F, x, *args, **kwargs):
 # pylint: disable= invalid-name
 raise NotImplementedError
 
+def reset_ctx(self, ctx):
+"""Re-assign all Parameters to other contexts. If the Block is 
hybridized, it will reset the _cached_op_args.
+
+Parameters
+--
+ctx : Context or list of Context, default 
:py:meth:`context.current_context()`.
+Assign Parameter to given context. If ctx is a list of Context, a
+copy will be made for each context.
+"""
+params = self.collect_params()
+if self._cached_op:
+for p in self._cached_op_args:
+# resetting parameters creating by the partitioning backend
+if p.name not in params:
+p.reset_ctx(ctx)
+for p in params.values():

Review comment:
   > Although i guess if we delete a param, then it will still be in `params` but not in `_cached_op_args`.
   
   That is exactly the reason: we don't want to do any additional `reset_ctx`, since a reset is costly because it copies NDArrays.
   
   I will add some comments to clarify.









[GitHub] [incubator-mxnet] sammieghabra commented on issue #18665: Using ShiftScale operation (Similar to BatchNorm) in MXNet (Customer question/inquiry)

2020-07-06 Thread GitBox


sammieghabra commented on issue #18665:
URL: 
https://github.com/apache/incubator-mxnet/issues/18665#issuecomment-654371413


   @rondogency 
   
   Thank you so much for your response.
   
   As mentioned in the description of this issue, I am aware of the `use_global_stats` flag in batch norm. However, as I stated there, the running mean and running var are not updated during [training](https://github.com/apache/incubator-mxnet/blob/master/src/operator/nn/batch_norm.cc#L260-L271) when that flag is true. For our case, we need those values to be updated during training.
   
   I am using MXNet version 1.5.
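To make the distinction above concrete, here is a small NumPy sketch of BatchNorm's training-time semantics (an illustration of the behavior being discussed, not MXNet's actual implementation; the momentum formula is a common convention and may differ in detail from MXNet's):

```python
import numpy as np

def batch_norm(x, gamma, beta, running_mean, running_var,
               use_global_stats, momentum=0.9, eps=1e-5):
    """Illustrative BatchNorm training step.

    With use_global_stats=True the stored statistics are used for
    normalization and are NOT updated -- the behavior discussed above.
    """
    if use_global_stats:
        mean, var = running_mean, running_var        # stats stay frozen
    else:
        mean, var = x.mean(axis=0), x.var(axis=0)    # batch statistics
        # the moving-average update only happens in this branch
        running_mean = momentum * running_mean + (1 - momentum) * mean
        running_var = momentum * running_var + (1 - momentum) * var
    y = gamma * (x - mean) / np.sqrt(var + eps) + beta
    return y, running_mean, running_var
```

A "ShiftScale"-style operation that keeps updating its statistics during training would need the update in both branches, which is exactly what `use_global_stats=True` does not provide.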







[GitHub] [incubator-mxnet] ptrendx commented on pull request #18622: [WIP] Use RTC for elementwise and broadcast ops

2020-07-06 Thread GitBox


ptrendx commented on pull request #18622:
URL: https://github.com/apache/incubator-mxnet/pull/18622#issuecomment-654373793


   @leezu Thanks for listing the files. I am now looking into the contents of the right cross-compilation package. I think you are right that everything should already be there; it is possibly just a package that was not installed in the Dockerfile itself, so I should be able to do this in the PR :-)







[GitHub] [incubator-mxnet] stu1130 commented on pull request #18504: [Improvement] Invoke mkldnn and cudnn BatchNorm when axis != 1

2020-07-06 Thread GitBox


stu1130 commented on pull request #18504:
URL: https://github.com/apache/incubator-mxnet/pull/18504#issuecomment-654391355


   @TaoLv could you help merge it if the PR looks good to you?







[GitHub] [incubator-mxnet] mxnet-bot commented on pull request #18504: [Improvement] Invoke mkldnn and cudnn BatchNorm when axis != 1

2020-07-06 Thread GitBox


mxnet-bot commented on pull request #18504:
URL: https://github.com/apache/incubator-mxnet/pull/18504#issuecomment-654390942


   None of the jobs entered are supported. 
   Jobs entered by user: [macosx-x86_64]
   CI supported Jobs: [centos-gpu, miscellaneous, windows-cpu, windows-gpu, 
website, clang, unix-gpu, centos-cpu, sanity, unix-cpu, edge]
   







[GitHub] [incubator-mxnet] stu1130 commented on pull request #18504: [Improvement] Invoke mkldnn and cudnn BatchNorm when axis != 1

2020-07-06 Thread GitBox


stu1130 commented on pull request #18504:
URL: https://github.com/apache/incubator-mxnet/pull/18504#issuecomment-654390908


   @mxnet-bot run ci [macosx-x86_64]







[GitHub] [incubator-mxnet] samskalicky commented on a change in pull request #18405: Add deleting of args aux aux to Partition API

2020-07-06 Thread GitBox


samskalicky commented on a change in pull request #18405:
URL: https://github.com/apache/incubator-mxnet/pull/18405#discussion_r450358036



##
File path: python/mxnet/gluon/block.py
##
@@ -1437,6 +1460,23 @@ def hybrid_forward(self, F, x, *args, **kwargs):
 # pylint: disable= invalid-name
 raise NotImplementedError
 
+def reset_ctx(self, ctx):
+"""Re-assign all Parameters to other contexts. If the Block is 
hybridized, it will reset the _cached_op_args.
+
+Parameters
+--
+ctx : Context or list of Context, default 
:py:meth:`context.current_context()`.
+Assign Parameter to given context. If ctx is a list of Context, a
+copy will be made for each context.
+"""
+params = self.collect_params()
+if self._cached_op:
+for p in self._cached_op_args:
+# resetting parameters creating by the partitioning backend
+if p.name not in params:
+p.reset_ctx(ctx)
+for p in params.values():

Review comment:
   Although I guess if we delete a param, it will still be in `params` but not 
in `_cached_op_args`. And the context check will fail if we don't do all the 
params, so I guess this makes sense. Maybe we should add a comment that 
`params` and `_cached_op_args` might each contain unique params (i.e. neither 
is a superset of the other)?









[GitHub] [incubator-mxnet] samskalicky commented on a change in pull request #18405: Add deleting of args aux aux to Partition API

2020-07-06 Thread GitBox


samskalicky commented on a change in pull request #18405:
URL: https://github.com/apache/incubator-mxnet/pull/18405#discussion_r450345405



##
File path: python/mxnet/gluon/block.py
##
@@ -1040,41 +1040,62 @@ def _build_cache(self, *args):
 warnings.warn("Parameter %s is not used by any computation. "
   "Is this intended?"%unused, stacklevel=4)
 
-data_indices = []
-param_indices = []
-self._cached_op_args = []
-for i, name in enumerate(input_names):
-if name in data_names:
-data_indices.append(i)
-self._cached_op_args.append((True, data_names[name]))
-else:
-param_indices.append(i)
-self._cached_op_args.append((False, params[name]))
-flags = [('data_indices', data_indices), ('param_indices', 
param_indices)] + \
-self._flags
-
 args, _ = _flatten(args, "input")
 try:
-for is_arg, i in self._cached_op_args:
-if not is_arg:
-i.data()
+for name in input_names:
+if name in params:
+params[name].data()
 except DeferredInitializationError:
 self._deferred_infer_shape(*args)
-for is_arg, i in self._cached_op_args:
-if not is_arg:
-i._finish_deferred_init()
+for name in input_names:
+if name in params:
+params[name]._finish_deferred_init()
 
+arg_dict, aux_dict = dict(), dict()
 if self._backend:
 ctx = args[0].context
 # get list of params in the order of out.list_arguments
-arg_dict = {name:args[data_names[name]] if name in 
data_names.keys() else params[name].data()
-for name in out.list_arguments()}
-aux_dict = {name:args[data_names[name]] if name in 
data_names.keys() else params[name].data()
-for name in out.list_auxiliary_states()}
+arg_dict.update({name:args[data_names[name]] if name in 
data_names.keys() else params[name].data()
+ for name in out.list_arguments()})

Review comment:
   makes sense, thanks!









[GitHub] [incubator-mxnet] samskalicky commented on a change in pull request #18405: Add deleting of args aux aux to Partition API

2020-07-06 Thread GitBox


samskalicky commented on a change in pull request #18405:
URL: https://github.com/apache/incubator-mxnet/pull/18405#discussion_r450348025



##
File path: python/mxnet/gluon/block.py
##
@@ -1040,41 +1040,62 @@ def _build_cache(self, *args):
 warnings.warn("Parameter %s is not used by any computation. "
   "Is this intended?"%unused, stacklevel=4)
 
-data_indices = []
-param_indices = []
-self._cached_op_args = []
-for i, name in enumerate(input_names):
-if name in data_names:
-data_indices.append(i)
-self._cached_op_args.append((True, data_names[name]))
-else:
-param_indices.append(i)
-self._cached_op_args.append((False, params[name]))
-flags = [('data_indices', data_indices), ('param_indices', 
param_indices)] + \
-self._flags
-
 args, _ = _flatten(args, "input")
 try:
-for is_arg, i in self._cached_op_args:
-if not is_arg:
-i.data()
+for name in input_names:
+if name in params:
+params[name].data()
 except DeferredInitializationError:
 self._deferred_infer_shape(*args)
-for is_arg, i in self._cached_op_args:
-if not is_arg:
-i._finish_deferred_init()
+for name in input_names:
+if name in params:
+params[name]._finish_deferred_init()
 
+arg_dict, aux_dict = dict(), dict()
 if self._backend:
 ctx = args[0].context
 # get list of params in the order of out.list_arguments
-arg_dict = {name:args[data_names[name]] if name in 
data_names.keys() else params[name].data()
-for name in out.list_arguments()}
-aux_dict = {name:args[data_names[name]] if name in 
data_names.keys() else params[name].data()
-for name in out.list_auxiliary_states()}
+arg_dict.update({name:args[data_names[name]] if name in 
data_names.keys() else params[name].data()
+ for name in out.list_arguments()})
+aux_dict.update({name:args[data_names[name]] if name in 
data_names.keys() else params[name].data()
+ for name in out.list_auxiliary_states()})
 # Partition the graph.
 out = out.optimize_for(self._backend, arg_dict, aux_dict, ctx, 
**self._backend_opts)
+
 #update cached graph with partitioned graph
 self._cached_graph = data, out
+
+input_names = out.list_inputs()
+data_indices = []
+param_indices = []
+self._cached_op_args = []
+for i, name in enumerate(input_names):
+pair = None
+if name in data_names:
+data_indices.append(i)
+pair = (True, data_names[name])
+else:
+param_indices.append(i)
+if name in params:
+param = params[name]
+else:
+assert self._backend, "Parameter " + name + " is missing 
from block params"

Review comment:
   Is this the case that should never happen, i.e. when a param name is not in 
`params` but `_backend` is not set? If so, can we change the error message and 
maybe leave a comment?









[GitHub] [incubator-mxnet] samskalicky commented on a change in pull request #18405: Add deleting of args aux aux to Partition API

2020-07-06 Thread GitBox


samskalicky commented on a change in pull request #18405:
URL: https://github.com/apache/incubator-mxnet/pull/18405#discussion_r450352450



##
File path: python/mxnet/gluon/block.py
##
@@ -1437,6 +1460,23 @@ def hybrid_forward(self, F, x, *args, **kwargs):
 # pylint: disable= invalid-name
 raise NotImplementedError
 
+def reset_ctx(self, ctx):
+"""Re-assign all Parameters to other contexts. If the Block is 
hybridized, it will reset the _cached_op_args.
+
+Parameters
+--
+ctx : Context or list of Context, default 
:py:meth:`context.current_context()`.
+Assign Parameter to given context. If ctx is a list of Context, a
+copy will be made for each context.
+"""
+params = self.collect_params()
+if self._cached_op:
+for p in self._cached_op_args:
+# resetting parameters creating by the partitioning backend
+if p.name not in params:
+p.reset_ctx(ctx)
+for p in params.values():

Review comment:
   Can you explain why we need to loop over `_cached_op_args` and reset only 
the params that are not in `params`, and then loop again over `params` to 
reset those as well? Is it possible to do the work of the second loop in the 
first loop?
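
   For readers following along, a minimal toy sketch of the two-loop pattern 
under discussion (the names below are illustrative stand-ins, not the actual 
Gluon internals). The point is that neither collection is a superset of the 
other, so a single loop over either one alone would miss parameters:

   ```python
   # Illustrative sketch only: `Param`, `params`, and `cached_op_args` are
   # stand-ins for the real Gluon objects, not MXNet internals.

   class Param:
       def __init__(self, name, ctx="cpu(0)"):
           self.name = name
           self.ctx = ctx

       def reset_ctx(self, ctx):
           self.ctx = ctx

   params = {"w": Param("w"), "b": Param("b")}  # from collect_params()
   # _cached_op_args may hold extra params created by the partitioning backend
   cached_op_args = [params["w"], Param("backend_extra")]

   def reset_ctx_all(ctx):
       # first loop: only backend-created params that collect_params() misses
       for p in cached_op_args:
           if p.name not in params:
               p.reset_ctx(ctx)
       # second loop: everything collect_params() knows about (it may include
       # params that are not inputs of the cached op)
       for p in params.values():
           p.reset_ctx(ctx)

   reset_ctx_all("gpu(0)")
   ```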









[GitHub] [incubator-mxnet] chinakook commented on issue #18646: BatchNorm with axis=-1 is much slower than axis=1

2020-07-06 Thread GitBox


chinakook commented on issue #18646:
URL: 
https://github.com/apache/incubator-mxnet/issues/18646#issuecomment-654296913


   I think NHWC layout is very important in point cloud algorithms.







[GitHub] [incubator-mxnet] samskalicky commented on a change in pull request #18405: Add deleting of args aux aux to Partition API

2020-07-06 Thread GitBox


samskalicky commented on a change in pull request #18405:
URL: https://github.com/apache/incubator-mxnet/pull/18405#discussion_r450349091



##
File path: python/mxnet/gluon/block.py
##
@@ -1040,41 +1040,62 @@ def _build_cache(self, *args):
 warnings.warn("Parameter %s is not used by any computation. "
   "Is this intended?"%unused, stacklevel=4)
 
-data_indices = []
-param_indices = []
-self._cached_op_args = []
-for i, name in enumerate(input_names):
-if name in data_names:
-data_indices.append(i)
-self._cached_op_args.append((True, data_names[name]))
-else:
-param_indices.append(i)
-self._cached_op_args.append((False, params[name]))
-flags = [('data_indices', data_indices), ('param_indices', 
param_indices)] + \
-self._flags
-
 args, _ = _flatten(args, "input")
 try:
-for is_arg, i in self._cached_op_args:
-if not is_arg:
-i.data()
+for name in input_names:
+if name in params:
+params[name].data()
 except DeferredInitializationError:
 self._deferred_infer_shape(*args)
-for is_arg, i in self._cached_op_args:
-if not is_arg:
-i._finish_deferred_init()
+for name in input_names:
+if name in params:
+params[name]._finish_deferred_init()
 
+arg_dict, aux_dict = dict(), dict()
 if self._backend:
 ctx = args[0].context
 # get list of params in the order of out.list_arguments
-arg_dict = {name:args[data_names[name]] if name in 
data_names.keys() else params[name].data()
-for name in out.list_arguments()}
-aux_dict = {name:args[data_names[name]] if name in 
data_names.keys() else params[name].data()
-for name in out.list_auxiliary_states()}
+arg_dict.update({name:args[data_names[name]] if name in 
data_names.keys() else params[name].data()
+ for name in out.list_arguments()})
+aux_dict.update({name:args[data_names[name]] if name in 
data_names.keys() else params[name].data()
+ for name in out.list_auxiliary_states()})
 # Partition the graph.
 out = out.optimize_for(self._backend, arg_dict, aux_dict, ctx, 
**self._backend_opts)
+
 #update cached graph with partitioned graph
 self._cached_graph = data, out
+
+input_names = out.list_inputs()
+data_indices = []
+param_indices = []
+self._cached_op_args = []
+for i, name in enumerate(input_names):
+pair = None
+if name in data_names:
+data_indices.append(i)
+pair = (True, data_names[name])
+else:
+param_indices.append(i)
+if name in params:
+param = params[name]
+else:
+assert self._backend, "Parameter " + name + " is missing 
from block params"
+if name in arg_dict or name:
+param_data = arg_dict[name]
+elif name in aux_dict:
+param_data = aux_dict[name]
+else:
+raise RuntimeError('Expected inputs missing from arg 
and aux after partioning. '

Review comment:
   Is this the case when the backend added a param to the graph but not to 
the arg/aux dict? If so, can we change the error message to say something like 
"param  was added to the graph but the tensor was not added to args/aux"?









[GitHub] [incubator-mxnet] leezu commented on pull request #18622: [WIP] Use RTC for elementwise and broadcast ops

2020-07-06 Thread GitBox


leezu commented on pull request #18622:
URL: https://github.com/apache/incubator-mxnet/pull/18622#issuecomment-654353049


   @ptrendx these are the files available in the S3 bucket. You may need to 
edit the Dockerfile to install one of the debs:
   
   ```
   % aws s3 ls --recursive s3://mxnet-ci-prod-private-slave-data/nvidia 

  6s ~
   2020-04-11 02:46:49  0 nvidia/
   2020-04-11 02:47:05  0 nvidia/sdkm_downloads/
   2020-04-11 03:07:04  237414931 
nvidia/sdkm_downloads/Jetson-210_Linux_R32.3.1_aarch64.tbz2
   2020-04-11 03:03:33  288481398 
nvidia/sdkm_downloads/NVIDIA_Nsight_Graphics_L4T_2019.5.19322.deb
   2020-04-11 03:24:00    5350765 
nvidia/sdkm_downloads/NVIDIA_VisionWorks_References.zip
   2020-04-11 03:17:26  113581742 
nvidia/sdkm_downloads/NsightSystems-linux-public-2019.6.2.6-3ffb807.deb
   2020-04-11 03:24:08    1022926 
nvidia/sdkm_downloads/OpenCV-4.1.1-2-gd5a58aa75-aarch64-dev.deb
   2020-04-11 03:23:59    9920774 
nvidia/sdkm_downloads/OpenCV-4.1.1-2-gd5a58aa75-aarch64-libs.deb
   2020-04-11 03:24:13      14240 
nvidia/sdkm_downloads/OpenCV-4.1.1-2-gd5a58aa75-aarch64-licenses.deb
   2020-04-11 03:24:07    2350538 
nvidia/sdkm_downloads/OpenCV-4.1.1-2-gd5a58aa75-aarch64-python.deb
   2020-04-11 03:24:11 192038 
nvidia/sdkm_downloads/OpenCV-4.1.1-2-gd5a58aa75-aarch64-samples.deb
   2020-04-11 03:24:09    1027754 
nvidia/sdkm_downloads/OpenCV-4.1.1-2-gd5a58aa75-x86_64-dev.deb
   2020-04-11 03:21:58   45701978 
nvidia/sdkm_downloads/OpenCV-4.1.1-2-gd5a58aa75-x86_64-libs.deb
   2020-04-11 03:24:12  18306 
nvidia/sdkm_downloads/OpenCV-4.1.1-2-gd5a58aa75-x86_64-licenses.deb
   2020-04-11 03:24:03    2576742 
nvidia/sdkm_downloads/OpenCV-4.1.1-2-gd5a58aa75-x86_64-python.deb
   2020-04-11 03:24:11 192030 
nvidia/sdkm_downloads/OpenCV-4.1.1-2-gd5a58aa75-x86_64-samples.deb
   2020-04-11 03:24:11 143245 
nvidia/sdkm_downloads/SDKM_logs_JetPack_4.3_Linux_for_Jetson_TX1_2020-04-10_17-43-23.zip
   2020-04-11 02:51:29 1263228889 
nvidia/sdkm_downloads/Tegra_Linux_Sample-Root-Filesystem_R32.3.1_aarch64.tbz2
   2020-04-11 02:51:29  394475774 
nvidia/sdkm_downloads/cuda-repo-cross-aarch64-10-0-local-10.0.326_1.0-1_all.deb
   2020-04-11 02:51:29 1000333292 
nvidia/sdkm_downloads/cuda-repo-l4t-10-0-local-10.0.326_1.0-1_arm64.deb
   2020-04-11 02:51:29 1582667320 
nvidia/sdkm_downloads/cuda-repo-ubuntu1804-10-0-local-10.0.326-410.108_1.0-1_amd64.deb
   2020-04-11 03:22:38   25538902 nvidia/sdkm_downloads/devtools_docs.zip
   2020-04-11 03:24:13  16108 
nvidia/sdkm_downloads/graphsurgeon-tf_6.0.1-1+cuda10.0_arm64.deb
   2020-04-11 03:13:51  161324708 
nvidia/sdkm_downloads/libcudnn7-dev_7.6.3.28-1+cuda10.0_arm64.deb
   2020-04-11 03:23:55    6627420 
nvidia/sdkm_downloads/libcudnn7-doc_7.6.3.28-1+cuda10.0_arm64.deb
   2020-04-11 03:13:18  181721612 
nvidia/sdkm_downloads/libcudnn7_7.6.3.28-1+cuda10.0_arm64.deb
   2020-04-11 03:24:13  12596 
nvidia/sdkm_downloads/libnvidia-container-tools_0.9.0_beta.1_arm64.deb
   2020-04-11 03:24:12  42744 
nvidia/sdkm_downloads/libnvidia-container0_0.9.0_beta.1_arm64.deb
   2020-04-11 03:24:12  69900 
nvidia/sdkm_downloads/libnvinfer-bin_6.0.1-1+cuda10.0_arm64.deb
   2020-04-11 03:20:36   57300292 
nvidia/sdkm_downloads/libnvinfer-dev_6.0.1-1+cuda10.0_arm64.deb
   2020-04-11 03:24:02    4460940 
nvidia/sdkm_downloads/libnvinfer-doc_6.0.1-1+cuda10.0_all.deb
   2020-04-11 03:24:08    1546362 
nvidia/sdkm_downloads/libnvinfer-plugin-dev_6.0.1-1+cuda10.0_arm64.deb
   2020-04-11 03:24:06    1489314 
nvidia/sdkm_downloads/libnvinfer-plugin6_6.0.1-1+cuda10.0_arm64.deb
   2020-04-11 02:51:29  451737338 
nvidia/sdkm_downloads/libnvinfer-samples_6.0.1-1+cuda10.0_all.deb
   2020-04-11 03:21:10   56443400 
nvidia/sdkm_downloads/libnvinfer6_6.0.1-1+cuda10.0_arm64.deb
   2020-04-11 03:24:11 164522 
nvidia/sdkm_downloads/libnvonnxparsers-dev_6.0.1-1+cuda10.0_arm64.deb
   2020-04-11 03:24:09 578042 
nvidia/sdkm_downloads/libnvonnxparsers6_6.0.1-1+cuda10.0_arm64.deb
   2020-04-11 03:24:10 538398 
nvidia/sdkm_downloads/libnvparsers-dev_6.0.1-1+cuda10.0_arm64.deb
   2020-04-11 03:24:08 707550 
nvidia/sdkm_downloads/libnvparsers6_6.0.1-1+cuda10.0_arm64.deb
   2020-04-11 03:19:50   66561472 
nvidia/sdkm_downloads/libvisionworks-repo_1.6.0.500n_amd64.deb
   2020-04-11 03:20:07   60780704 
nvidia/sdkm_downloads/libvisionworks-repo_1.6.0.500n_arm64.deb
   2020-04-11 03:22:59   20597586 
nvidia/sdkm_downloads/libvisionworks-sfm-repo_0.90.4_amd64.deb
   2020-04-11 03:23:08   20216850 
nvidia/sdkm_downloads/libvisionworks-sfm-repo_0.90.4_arm64.deb
   2020-04-11 03:23:29   13113238 
nvidia/sdkm_downloads/libvisionworks-tracking-repo_0.88.2_amd64.deb
   2020-04-11 03:23:56   13107966 
nvidia/sdkm_downloads/libvisionworks-tracking-repo_0.88.2_arm64.deb
   2020-04-11 03:24:13  14842 

[GitHub] [incubator-mxnet] leezu edited a comment on pull request #18622: [WIP] Use RTC for elementwise and broadcast ops

2020-07-06 Thread GitBox


leezu edited a comment on pull request #18622:
URL: https://github.com/apache/incubator-mxnet/pull/18622#issuecomment-654353049


   @ptrendx these are the files available in the S3 bucket. They are the files 
obtained from the SDK Manager. Is the file you are looking for included? If 
not, if you provide the file we can add it to the bucket.
   
   ```
   % aws s3 ls --recursive s3://mxnet-ci-prod-private-slave-data/nvidia 

  6s ~
   2020-04-11 02:46:49  0 nvidia/
   2020-04-11 02:47:05  0 nvidia/sdkm_downloads/
   2020-04-11 03:07:04  237414931 
nvidia/sdkm_downloads/Jetson-210_Linux_R32.3.1_aarch64.tbz2
   2020-04-11 03:03:33  288481398 
nvidia/sdkm_downloads/NVIDIA_Nsight_Graphics_L4T_2019.5.19322.deb
   2020-04-11 03:24:00    5350765 
nvidia/sdkm_downloads/NVIDIA_VisionWorks_References.zip
   2020-04-11 03:17:26  113581742 
nvidia/sdkm_downloads/NsightSystems-linux-public-2019.6.2.6-3ffb807.deb
   2020-04-11 03:24:08    1022926 
nvidia/sdkm_downloads/OpenCV-4.1.1-2-gd5a58aa75-aarch64-dev.deb
   2020-04-11 03:23:59    9920774 
nvidia/sdkm_downloads/OpenCV-4.1.1-2-gd5a58aa75-aarch64-libs.deb
   2020-04-11 03:24:13      14240 
nvidia/sdkm_downloads/OpenCV-4.1.1-2-gd5a58aa75-aarch64-licenses.deb
   2020-04-11 03:24:07    2350538 
nvidia/sdkm_downloads/OpenCV-4.1.1-2-gd5a58aa75-aarch64-python.deb
   2020-04-11 03:24:11 192038 
nvidia/sdkm_downloads/OpenCV-4.1.1-2-gd5a58aa75-aarch64-samples.deb
   2020-04-11 03:24:09    1027754 
nvidia/sdkm_downloads/OpenCV-4.1.1-2-gd5a58aa75-x86_64-dev.deb
   2020-04-11 03:21:58   45701978 
nvidia/sdkm_downloads/OpenCV-4.1.1-2-gd5a58aa75-x86_64-libs.deb
   2020-04-11 03:24:12  18306 
nvidia/sdkm_downloads/OpenCV-4.1.1-2-gd5a58aa75-x86_64-licenses.deb
   2020-04-11 03:24:03    2576742 
nvidia/sdkm_downloads/OpenCV-4.1.1-2-gd5a58aa75-x86_64-python.deb
   2020-04-11 03:24:11 192030 
nvidia/sdkm_downloads/OpenCV-4.1.1-2-gd5a58aa75-x86_64-samples.deb
   2020-04-11 03:24:11 143245 
nvidia/sdkm_downloads/SDKM_logs_JetPack_4.3_Linux_for_Jetson_TX1_2020-04-10_17-43-23.zip
   2020-04-11 02:51:29 1263228889 
nvidia/sdkm_downloads/Tegra_Linux_Sample-Root-Filesystem_R32.3.1_aarch64.tbz2
   2020-04-11 02:51:29  394475774 
nvidia/sdkm_downloads/cuda-repo-cross-aarch64-10-0-local-10.0.326_1.0-1_all.deb
   2020-04-11 02:51:29 1000333292 
nvidia/sdkm_downloads/cuda-repo-l4t-10-0-local-10.0.326_1.0-1_arm64.deb
   2020-04-11 02:51:29 1582667320 
nvidia/sdkm_downloads/cuda-repo-ubuntu1804-10-0-local-10.0.326-410.108_1.0-1_amd64.deb
   2020-04-11 03:22:38   25538902 nvidia/sdkm_downloads/devtools_docs.zip
   2020-04-11 03:24:13  16108 
nvidia/sdkm_downloads/graphsurgeon-tf_6.0.1-1+cuda10.0_arm64.deb
   2020-04-11 03:13:51  161324708 
nvidia/sdkm_downloads/libcudnn7-dev_7.6.3.28-1+cuda10.0_arm64.deb
   2020-04-11 03:23:55    6627420 
nvidia/sdkm_downloads/libcudnn7-doc_7.6.3.28-1+cuda10.0_arm64.deb
   2020-04-11 03:13:18  181721612 
nvidia/sdkm_downloads/libcudnn7_7.6.3.28-1+cuda10.0_arm64.deb
   2020-04-11 03:24:13  12596 
nvidia/sdkm_downloads/libnvidia-container-tools_0.9.0_beta.1_arm64.deb
   2020-04-11 03:24:12  42744 
nvidia/sdkm_downloads/libnvidia-container0_0.9.0_beta.1_arm64.deb
   2020-04-11 03:24:12  69900 
nvidia/sdkm_downloads/libnvinfer-bin_6.0.1-1+cuda10.0_arm64.deb
   2020-04-11 03:20:36   57300292 
nvidia/sdkm_downloads/libnvinfer-dev_6.0.1-1+cuda10.0_arm64.deb
   2020-04-11 03:24:02    4460940 
nvidia/sdkm_downloads/libnvinfer-doc_6.0.1-1+cuda10.0_all.deb
   2020-04-11 03:24:08    1546362 
nvidia/sdkm_downloads/libnvinfer-plugin-dev_6.0.1-1+cuda10.0_arm64.deb
   2020-04-11 03:24:06    1489314 
nvidia/sdkm_downloads/libnvinfer-plugin6_6.0.1-1+cuda10.0_arm64.deb
   2020-04-11 02:51:29  451737338 
nvidia/sdkm_downloads/libnvinfer-samples_6.0.1-1+cuda10.0_all.deb
   2020-04-11 03:21:10   56443400 
nvidia/sdkm_downloads/libnvinfer6_6.0.1-1+cuda10.0_arm64.deb
   2020-04-11 03:24:11 164522 
nvidia/sdkm_downloads/libnvonnxparsers-dev_6.0.1-1+cuda10.0_arm64.deb
   2020-04-11 03:24:09 578042 
nvidia/sdkm_downloads/libnvonnxparsers6_6.0.1-1+cuda10.0_arm64.deb
   2020-04-11 03:24:10 538398 
nvidia/sdkm_downloads/libnvparsers-dev_6.0.1-1+cuda10.0_arm64.deb
   2020-04-11 03:24:08 707550 
nvidia/sdkm_downloads/libnvparsers6_6.0.1-1+cuda10.0_arm64.deb
   2020-04-11 03:19:50   66561472 
nvidia/sdkm_downloads/libvisionworks-repo_1.6.0.500n_amd64.deb
   2020-04-11 03:20:07   60780704 
nvidia/sdkm_downloads/libvisionworks-repo_1.6.0.500n_arm64.deb
   2020-04-11 03:22:59   20597586 
nvidia/sdkm_downloads/libvisionworks-sfm-repo_0.90.4_amd64.deb
   2020-04-11 03:23:08   20216850 
nvidia/sdkm_downloads/libvisionworks-sfm-repo_0.90.4_arm64.deb
   2020-04-11 03:23:29   13113238 
nvidia/sdkm_downloads/libvisionworks-tracking-repo_0.88.2_amd64.deb
   2020-04-11 03:23:56   13107966 

[GitHub] [incubator-mxnet] chinakook commented on issue #18591: CPU memory leak when running inference on model with GPU

2020-07-06 Thread GitBox


chinakook commented on issue #18591:
URL: 
https://github.com/apache/incubator-mxnet/issues/18591#issuecomment-654306719


   This API may not be thread-safe.







[GitHub] [incubator-mxnet] samskalicky commented on a change in pull request #18405: Add deleting of args aux aux to Partition API

2020-07-06 Thread GitBox


samskalicky commented on a change in pull request #18405:
URL: https://github.com/apache/incubator-mxnet/pull/18405#discussion_r450346162



##
File path: python/mxnet/gluon/block.py
##
@@ -1040,41 +1040,62 @@ def _build_cache(self, *args):
 warnings.warn("Parameter %s is not used by any computation. "
   "Is this intended?"%unused, stacklevel=4)
 
-data_indices = []
-param_indices = []
-self._cached_op_args = []
-for i, name in enumerate(input_names):
-if name in data_names:
-data_indices.append(i)
-self._cached_op_args.append((True, data_names[name]))
-else:
-param_indices.append(i)
-self._cached_op_args.append((False, params[name]))
-flags = [('data_indices', data_indices), ('param_indices', 
param_indices)] + \
-self._flags
-
 args, _ = _flatten(args, "input")
 try:
-for is_arg, i in self._cached_op_args:
-if not is_arg:
-i.data()
+for name in input_names:
+if name in params:
+params[name].data()
 except DeferredInitializationError:
 self._deferred_infer_shape(*args)
-for is_arg, i in self._cached_op_args:
-if not is_arg:
-i._finish_deferred_init()
+for name in input_names:
+if name in params:
+params[name]._finish_deferred_init()
 
+arg_dict, aux_dict = dict(), dict()
 if self._backend:
 ctx = args[0].context
 # get list of params in the order of out.list_arguments
-arg_dict = {name:args[data_names[name]] if name in 
data_names.keys() else params[name].data()
-for name in out.list_arguments()}
-aux_dict = {name:args[data_names[name]] if name in 
data_names.keys() else params[name].data()
-for name in out.list_auxiliary_states()}
+arg_dict.update({name:args[data_names[name]] if name in 
data_names.keys() else params[name].data()
+ for name in out.list_arguments()})
+aux_dict.update({name:args[data_names[name]] if name in 
data_names.keys() else params[name].data()
+ for name in out.list_auxiliary_states()})
 # Partition the graph.
 out = out.optimize_for(self._backend, arg_dict, aux_dict, ctx, 
**self._backend_opts)
+
 #update cached graph with partitioned graph
 self._cached_graph = data, out
+
+input_names = out.list_inputs()
+data_indices = []
+param_indices = []
+self._cached_op_args = []
+for i, name in enumerate(input_names):
+pair = None
+if name in data_names:
+data_indices.append(i)
+pair = (True, data_names[name])
+else:
+param_indices.append(i)

Review comment:
   nit: can we move this above the if/else since we do it in both?









[GitHub] [incubator-mxnet] xidulu commented on pull request #18403: Gluon.probability

2020-07-06 Thread GitBox


xidulu commented on pull request #18403:
URL: https://github.com/apache/incubator-mxnet/pull/18403#issuecomment-654344692


   @leezu
   Issue solved by adding `garbage_expected` flag: 
https://github.com/apache/incubator-mxnet/pull/18403/files#diff-a6e158f7742ece308155a95c165505f6R2206







[GitHub] [incubator-mxnet] samskalicky commented on a change in pull request #18405: Add deleting of args aux aux to Partition API

2020-07-06 Thread GitBox


samskalicky commented on a change in pull request #18405:
URL: https://github.com/apache/incubator-mxnet/pull/18405#discussion_r450360528



##
File path: python/mxnet/symbol/symbol.py
##
@@ -1544,8 +1544,35 @@ def optimize_for(self, backend, args=None, aux=None, 
ctx=None, **kwargs):
 raise RuntimeError('Cannot add new aux in optimize_for since aux 
is None\n' +
'Provide a dictionary to the aux argument to 
optimize_for')
 
-# return modified symbol
-return Symbol(out)
+new_sym = Symbol(out)
+
+arg_names = self.list_arguments()
+new_arg_names = new_sym.list_arguments()
+deleted_arg_names = set([item for item in arg_names
+ if item not in set(new_arg_names)])
+
+if len(deleted_arg_names) > 0:
+if args is not None:
+for a_n in deleted_arg_names:
+if a_n in args:
+args.pop(a_n)
+else:
+warnings.warn('optimize_for deleted some argument. \n' +
+  'Provide a dictionary to the arg argument to 
optimize_for')

Review comment:
   Can we clarify this error message to something like:
   ```
   A param was deleted during optimization, but no args dictionary was 
provided. Please ensure that your model weights match the newly optimized 
model. 
   ```
   Should we print the names of the deleted_arg_names in the message too (or is 
that overkill)?
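
   As a side note, a hedged sketch of what that suggestion could look like (a 
hypothetical helper, not the actual `optimize_for` code; the set difference is 
equivalent to the list comprehension in the quoted diff):

   ```python
   import warnings

   # Hypothetical helper, not MXNet's implementation: mirrors the diff's
   # deleted-argument detection and names the deleted args in the warning.

   def pop_deleted_args(old_arg_names, new_arg_names, args=None):
       # equivalent to the diff's list comprehension, written as a set difference
       deleted = set(old_arg_names) - set(new_arg_names)
       if deleted:
           if args is not None:
               for name in deleted:
                   args.pop(name, None)
           else:
               warnings.warn('optimize_for deleted arguments %s but no args '
                             'dictionary was provided; ensure your model '
                             'weights match the optimized model'
                             % sorted(deleted))
       return deleted

   args = {'w': 1, 'b': 2, 'fused_w': 3}
   deleted = pop_deleted_args(['w', 'b', 'fused_w'], ['fused_w'], args)
   ```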









[GitHub] [incubator-mxnet] chinakook commented on issue #18643: ndarray.contrib.boolean_mask can not be hybridize

2020-07-06 Thread GitBox


chinakook commented on issue #18643:
URL: 
https://github.com/apache/incubator-mxnet/issues/18643#issuecomment-654293552


   Actually, Loss is used explicitly, so it can be coded as a Block rather 
than a HybridBlock.







[GitHub] [incubator-mxnet] ptrendx commented on pull request #18622: [WIP] Use RTC for elementwise and broadcast ops

2020-07-06 Thread GitBox


ptrendx commented on pull request #18622:
URL: https://github.com/apache/incubator-mxnet/pull/18622#issuecomment-654316485


   @ChaiBapchya The Jetson CI pipeline fails because it cannot find NVRTC - I 
confirmed NVRTC is part of JetPack, so I guess it is not included in the 
package of libs you have in S3. Can you update it? (And yes, I'm aware of 
issue #18637).







[GitHub] [incubator-mxnet] sammieghabra opened a new issue #18665: Using ShiftScale operation (Similar to BatchNorm) in MXNet (Customer question/inquiry)

2020-07-06 Thread GitBox


sammieghabra opened a new issue #18665:
URL: https://github.com/apache/incubator-mxnet/issues/18665


   ## Description
   Hi MXNet team,
   
   My team wants to implement a ShiftScale operator in MXNet. This layer is 
similar to batchnorm; however, we want `moving_mean` and `moving_var` to be 
used instead of `data_mean` and `data_var` to compute the output of the layer. 
I see that batch norm has a flag `use_global_stats`, and from the [mxnet 
docs](http://beta.mxnet.io/r/api/mx.symbol.BatchNorm.html), it seems that 
setting this flag to true would do something similar to what I'm trying to do. 
However, upon inspecting the [batch-norm 
code](https://github.com/apache/incubator-mxnet/blob/master/src/operator/nn/batch_norm.cc#L260-L271),
 it seems that running_mean and running_var won't be updated during training 
if that flag is set to true.
   
   1. Is there a design reason why setting the `use_global_stats` flag to true 
won't update the running mean and running var?
   2. We would like to support this shift-scale layer during training. My 
proposal is to add another flag, `use_shift_scale`, to the `BatchNorm` 
operator, which would simply replace mean and var with the running mean and 
running var when updating the weights. Is this something the MXNet team would 
be OK with?
   3. We also plan to train with more than one instance - will the running_mean 
and running_var parameters be the same across instances? 
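   For concreteness, the forward computation described above can be sketched 
in NumPy (the `shift_scale` name, the `eps` default, and all values are 
illustrative, not MXNet API):

   ```python
   import numpy as np

   def shift_scale(x, gamma, beta, moving_mean, moving_var, eps=1e-5):
       # Normalize with the *running* statistics (moving_mean/moving_var),
       # as use_global_stats does, instead of the per-batch mean/var.
       return gamma * (x - moving_mean) / np.sqrt(moving_var + eps) + beta

   x = np.array([[1.0, 2.0],
                 [3.0, 4.0]])
   out = shift_scale(x,
                     gamma=np.ones(2), beta=np.zeros(2),
                     moving_mean=np.array([2.0, 3.0]),
                     moving_var=np.array([1.0, 1.0]))
   # out is approximately [[-1, -1], [1, 1]]
   ```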
   
   Thanks
   Sammie







[GitHub] [incubator-mxnet] samskalicky commented on a change in pull request #18405: Add deleting of args aux aux to Partition API

2020-07-06 Thread GitBox


samskalicky commented on a change in pull request #18405:
URL: https://github.com/apache/incubator-mxnet/pull/18405#discussion_r450360818



##
File path: python/mxnet/symbol/symbol.py
##
@@ -1544,8 +1544,35 @@ def optimize_for(self, backend, args=None, aux=None, 
ctx=None, **kwargs):
 raise RuntimeError('Cannot add new aux in optimize_for since aux 
is None\n' +
'Provide a dictionary to the aux argument to 
optimize_for')
 
-        # return modified symbol
-        return Symbol(out)
+        new_sym = Symbol(out)
+
+        arg_names = self.list_arguments()
+        new_arg_names = new_sym.list_arguments()
+        deleted_arg_names = set([item for item in arg_names
+                                 if item not in set(new_arg_names)])
+
+        if len(deleted_arg_names) > 0:
+            if args is not None:
+                for a_n in deleted_arg_names:
+                    if a_n in args:
+                        args.pop(a_n)
+            else:
+                warnings.warn('optimize_for deleted some argument. \n' +
+                              'Provide a dictionary to the arg argument to optimize_for')
+        aux_names = self.list_auxiliary_states()
+        new_aux_names = new_sym.list_auxiliary_states()
+        deleted_aux_names = set([item for item in aux_names
+                                 if item not in set(new_aux_names)])
+        if len(deleted_aux_names) > 0:
+            if aux is not None:
+                for a_n in deleted_aux_names:
+                    if a_n in aux:
+                        aux.pop(a_n)
+            else:
+                warnings.warn('optimize_for deleted some aux argument. \n' +
+                              'Provide a dictionary to the aux argument to optimize_for')

Review comment:
   Same as above for args
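   The effect of the pruning above can be shown with plain Python 
dictionaries (all names here are hypothetical; no MXNet needed):

   ```python
   # Argument names the symbol listed before and after the backend pass;
   # suppose the backend fused 'w1' away (hypothetical names).
   old_names = ['w0', 'w1', 'w2']
   new_names = ['w0', 'w2']

   args = {'w0': 0.1, 'w1': 0.2, 'w2': 0.3}

   # Drop entries for arguments the new symbol no longer takes, keeping
   # the user's dict consistent with the new symbol's argument list.
   for name in set(old_names) - set(new_names):
       args.pop(name, None)

   print(sorted(args))  # ['w0', 'w2']
   ```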









[GitHub] [incubator-mxnet] leezu merged pull request #18639: User Feedback Widget

2020-07-06 Thread GitBox


leezu merged pull request #18639:
URL: https://github.com/apache/incubator-mxnet/pull/18639


   







[GitHub] [incubator-mxnet] ys2843 commented on pull request #18639: User Feedback Widget

2020-07-06 Thread GitBox


ys2843 commented on pull request #18639:
URL: https://github.com/apache/incubator-mxnet/pull/18639#issuecomment-654522781


   > @ys2843 can you please send an email to the mailing list requesting 
guidance on how to disclose the use of google analytics? @aaronmarkham made the 
point that the website is not compliant with GDPR.
   > We should ask ASF to consider including whatever guidance they give, into 
their policies.
   
   Will do. I will share the info with the team later.







[incubator-mxnet] branch master updated: User Feedback Widget (#18639)

2020-07-06 Thread lausen
This is an automated email from the ASF dual-hosted git repository.

lausen pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new 54c0155  User Feedback Widget (#18639)
54c0155 is described below

commit 54c0155b7581f5e10b1469a17ddf127d3c75e156
Author: Yang Shi 
AuthorDate: Mon Jul 6 17:01:42 2020 -0700

User Feedback Widget (#18639)

* user feedback widget implementation

* add user feedback widget to python docs site

* update margin

* add apache license

* one more license

* turn off feedback widget on python site

* update copy

* format

* add event value field

* turn on widget on Python site
---
 docs/python_docs/_static/feedback.css  | 37 
 docs/python_docs/_static/feedback.js   | 33 ++
 .../themes/mx-theme/mxtheme/feedback.html  | 10 ++
 .../themes/mx-theme/mxtheme/layout.html|  5 ++-
 docs/static_site/src/_includes/feedback.html   | 10 ++
 docs/static_site/src/_includes/head.html   |  3 ++
 docs/static_site/src/_layouts/page_api.html|  3 ++
 docs/static_site/src/_sass/feedback.scss   | 39 ++
 docs/static_site/src/assets/js/feedback.js | 33 ++
 docs/static_site/src/assets/main.scss  |  1 +
 10 files changed, 173 insertions(+), 1 deletion(-)

diff --git a/docs/python_docs/_static/feedback.css 
b/docs/python_docs/_static/feedback.css
new file mode 100644
index 000..b4a64ec
--- /dev/null
+++ b/docs/python_docs/_static/feedback.css
@@ -0,0 +1,37 @@
+.feedback-container {
+  text-align: center;
+}
+
+.feedback-answer-container {
+  display: inline-block;
+}
+
+.feedback-question {
+  display: inline-block;
+  padding: 0.5em 1em 0.5em 1em;
+}
+
+.feedback-answer {
+  display: inline-block;
+  padding: 0.5em 1em 0.5em 1em;
+  color: #048ccc;
+  cursor: pointer;
+}
+
+.feedback-answer:hover {
+  color: #ff;
+  background-color: #048ccc;
+}
+
+.feedback-thank-you {
+  display: none;
+  padding: 0.5em 1em 0.5em 1em;
+}
+
+.feedback-hr-top {
+  margin-top: 50px;
+}
+
+.feedback-hr-bottom {
+  margin-bottom: 30px;
+}
diff --git a/docs/python_docs/_static/feedback.js 
b/docs/python_docs/_static/feedback.js
new file mode 100644
index 000..f454237
--- /dev/null
+++ b/docs/python_docs/_static/feedback.js
@@ -0,0 +1,33 @@
+/*!
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+$(document).ready(function() {
+  $(".feedback-answer").on("click", function () {
+$(".feedback-question").remove();
+$(".feedback-answer-container").remove();
+$(".feedback-thank-you").show();
+ga("send", {
+  hitType: "event",
+  eventCategory: "Did this page help you?",
+  eventAction: $(this).attr("data-response"),
+  eventLabel: window.location.pathname || "unknown",
+  eventValue: $(this).attr("data-response") === "yes" ? 1 : 0
+});
+  });
+});
diff --git a/docs/python_docs/themes/mx-theme/mxtheme/feedback.html 
b/docs/python_docs/themes/mx-theme/mxtheme/feedback.html
new file mode 100644
index 000..8a9b1b5
--- /dev/null
+++ b/docs/python_docs/themes/mx-theme/mxtheme/feedback.html
@@ -0,0 +1,10 @@
+
+
+Did this page help you?
+
+Yes
+No
+
+Thanks for your feedback!
+
+
diff --git a/docs/python_docs/themes/mx-theme/mxtheme/layout.html 
b/docs/python_docs/themes/mx-theme/mxtheme/layout.html
index 03189c8..994da38 100644
--- a/docs/python_docs/themes/mx-theme/mxtheme/layout.html
+++ b/docs/python_docs/themes/mx-theme/mxtheme/layout.html
@@ -64,10 +64,12 @@
 '_static/sphinx_materialdesign_theme.css',
 '_static/fontawesome/all.css',
 '_static/fonts.css',
+'_static/feedback.css',
 ] %}
 
 {%- block header %}
-   
+
+
 {% endblock %}
 {%- block relbar1 %}{% endblock %}
 {%- block relbar2 %}{% include "relations.html" %}{% endblock %}
@@ -85,6 +87,7 @@
 {%- block document %}
 
 {% block body %} {% endblock %}
+{% include "feedback.html" %}
 
 

[GitHub] [incubator-mxnet] ys2843 closed issue #18548: Merge beta.mxnet.io To MXNet Official Website

2020-07-06 Thread GitBox


ys2843 closed issue #18548:
URL: https://github.com/apache/incubator-mxnet/issues/18548


   







[GitHub] [incubator-mxnet] ys2843 commented on issue #18548: Merge beta.mxnet.io To MXNet Official Website

2020-07-06 Thread GitBox


ys2843 commented on issue #18548:
URL: 
https://github.com/apache/incubator-mxnet/issues/18548#issuecomment-654523856


   Project completed. Closing the issue.
   1. Created a new R docs website and merged to v1.6 website - Done
   2. Created user feedback widget - Done
   3. Redirected traffic from the beta site to the official website - Done
   







[incubator-mxnet-site] branch asf-site updated: Bump the publish timestamp.

2020-07-06 Thread aaronmarkham
This is an automated email from the ASF dual-hosted git repository.

aaronmarkham pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new 5b3f84a  Bump the publish timestamp.
5b3f84a is described below

commit 5b3f84a97a5983a4ed7d4d6fc127f9ee5d9e8fe6
Author: mxnet-ci 
AuthorDate: Tue Jul 7 00:50:32 2020 +

Bump the publish timestamp.
---
 date.txt | 1 +
 1 file changed, 1 insertion(+)

diff --git a/date.txt b/date.txt
new file mode 100644
index 000..286c0ba
--- /dev/null
+++ b/date.txt
@@ -0,0 +1 @@
+Tue Jul  7 00:50:32 UTC 2020



[GitHub] [incubator-mxnet] rondogency commented on issue #18665: Using ShiftScale operation (Similar to BatchNorm) in MXNet (Customer question/inquiry)

2020-07-06 Thread GitBox


rondogency commented on issue #18665:
URL: 
https://github.com/apache/incubator-mxnet/issues/18665#issuecomment-654546431


   @zhreshold @sxjscience FYI







[GitHub] [incubator-mxnet] rondogency commented on issue #18665: Using ShiftScale operation (Similar to BatchNorm) in MXNet (Customer question/inquiry)

2020-07-06 Thread GitBox


rondogency commented on issue #18665:
URL: 
https://github.com/apache/incubator-mxnet/issues/18665#issuecomment-654546346


   I can confirm that MXNet currently doesn't support both updating the global 
running mean/var and using the running mean/var to calculate gradients in one 
pass of BatchNorm, while Keras-MXNet supports it as a workaround by passing 
mean/var as arguments to BatchNorm.
   
   Adding this support would still have constraints; in particular, cuDNN 
doesn't support it, so it could not run at peak performance. Some other 
frameworks, such as Caffe, don't support this either.
   
   It could be implemented by modifying batch_norm.cc/cu and adding a 
"use_shift_scale" option to control it.
   







[GitHub] [incubator-mxnet] andrei5055 commented on pull request #18582: Refactoring of Pooled Storage Manager classes

2020-07-06 Thread GitBox


andrei5055 commented on pull request #18582:
URL: https://github.com/apache/incubator-mxnet/pull/18582#issuecomment-654562254


   Here is first graph for GPU Storage Managers:
   
![image](https://user-images.githubusercontent.com/7293680/86691969-30b3c280-bfbe-11ea-9101-0e2ef18b2210.png)
   
   I will construct similar graphs for CPU, CPU_PINNED, and the Threaded 
Engine. My benchmarks are based on [that 
script](https://github.com/apache/incubator-mxnet/issues/17335). As you can 
see, some lines are almost identical, as they should be.
   







[GitHub] [incubator-mxnet] leezu opened a new issue #18666: Version selector install guides

2020-07-06 Thread GitBox


leezu opened a new issue #18666:
URL: https://github.com/apache/incubator-mxnet/issues/18666


   ## Description
   Currently the get started page points to a single build from source guide. 
As the get started page will be shared among all releases 
(https://github.com/apache/incubator-mxnet/pull/18658), the selector should be 
updated to include the corresponding install guide based on the user's 
selection.
   
   
![image](https://user-images.githubusercontent.com/946903/86656357-444e3180-bf9c-11ea-933f-585db3660402.png)
   
   For example, if the user selects version 1.5.1 here, the recommended 
installation guide for 1.5.1 should be shown inline instead of referring to a 
general (all-versions) install guide.


