[GitHub] [incubator-mxnet] connorgoggins commented on issue #17798: [Website 2.0] General Version Dropdown

2020-03-23 Thread GitBox
connorgoggins commented on issue #17798: [Website 2.0] General Version Dropdown
URL: 
https://github.com/apache/incubator-mxnet/issues/17798#issuecomment-602983199
 
 
   Preview of the new website with my changes is available here: 
http://ec2-3-19-223-185.us-east-2.compute.amazonaws.com/


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] aaronmarkham commented on issue #17798: [Website 2.0] General Version Dropdown

2020-03-23 Thread GitBox
aaronmarkham commented on issue #17798: [Website 2.0] General Version Dropdown
URL: 
https://github.com/apache/incubator-mxnet/issues/17798#issuecomment-602979201
 
 
   > Thanks for the update! I've been meaning to ask: what's the part in the 
website that requires a webserver that a static hosting solution cannot replace?
   > 
   > @aaronmarkham @ThomasDelteil feel free to chime in
   
   The Apache hosting is static. But I think what you're looking for is the 
switch away from S3...
   As far as S3 goes, it's not meant to be a web server, and various things 
don't work, like assumed index pages in the roots of folders, which makes for a 
lot of broken links and other behavior that web devs would expect to work. 
Jekyll, for one, wouldn't work out of the box with S3... The friendly/vanity 
URLs won't work. Maybe with loads of tweaking you could get it to work; IDK. 
   I do miss the previews autogenerated with a PR's CI run. I think we could 
still do previews with S3 as long as CloudFront or something else sits in front 
of it. 
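The index-page gap described above can be sketched in a few lines of Python (illustrative only; the lookup rules below are simplified assumptions about plain object storage versus a web server, not S3's actual request handling):

```python
def s3_lookup(keys, path):
    # plain object storage serves exact key matches only
    return path if path in keys else None

def webserver_lookup(keys, path):
    # a web server additionally tries the folder's index page
    if path in keys:
        return path
    candidate = path.rstrip("/") + "/index.html"
    return candidate if candidate in keys else None

keys = {"docs/index.html"}
print(s3_lookup(keys, "docs/"))         # None -> broken "vanity" link
print(webserver_lookup(keys, "docs/"))  # docs/index.html
```

This is why a CDN or web server in front of the bucket (as suggested for previews) restores the URL behavior users expect.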




[GitHub] [incubator-mxnet] TaoLv commented on issue #17891: [DNNL] Enable primitive cache in build

2020-03-23 Thread GitBox
TaoLv commented on issue #17891: [DNNL] Enable primitive cache in build
URL: https://github.com/apache/incubator-mxnet/pull/17891#issuecomment-602978926
 
 
   Thank you @leezu. The document will be rendered to the page here: 
https://intel.github.io/mkl-dnn/dev_guide_primitive_cache.html. I added the 
link to PR description.




[GitHub] [incubator-mxnet] MonicaGu closed issue #17863: Inconsistent results in gluon.nn.BatchNorm with autograd.record()

2020-03-23 Thread GitBox
MonicaGu closed issue #17863: Inconsistent results in gluon.nn.BatchNorm with 
autograd.record()
URL: https://github.com/apache/incubator-mxnet/issues/17863
 
 
   




[GitHub] [incubator-mxnet] MonicaGu commented on issue #17863: Inconsistent results in gluon.nn.BatchNorm with autograd.record()

2020-03-23 Thread GitBox
MonicaGu commented on issue #17863: Inconsistent results in gluon.nn.BatchNorm 
with autograd.record()
URL: 
https://github.com/apache/incubator-mxnet/issues/17863#issuecomment-602976658
 
 
   OK I see. Thank you.




[GitHub] [incubator-mxnet] connorgoggins commented on issue #17798: [Website 2.0] General Version Dropdown

2020-03-23 Thread GitBox
connorgoggins commented on issue #17798: [Website 2.0] General Version Dropdown
URL: 
https://github.com/apache/incubator-mxnet/issues/17798#issuecomment-602976215
 
 
   @szha thanks for your comment! The primary components that require a dynamic 
hosting solution are the API reference docs for each language in the latest 
versions (master, v1.6, and v1.x - future releases). For example, since we use 
Sphinx to generate the Python docs for the latest versions of the MXNet API, we 
need a build pipeline for the latest versions of the Python MXNet API that 
leverages Sphinx. We use other tools to dynamically build the docs for other 
languages (C++, Java, Julia, etc.), and they need to be called during the build 
process as well.
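A rough sketch of the multi-tool docs build being described (the tool names come from the comment above; the commands, paths, and dispatch structure are illustrative assumptions, not the project's actual build scripts):

```python
# Hypothetical per-language docs build dispatch; commands and paths are
# illustrative assumptions, not MXNet's real CI configuration.
DOC_BUILDERS = {
    "python": "sphinx-build -b html python_docs _build/python",
    "cpp": "doxygen Doxyfile",
    "julia": "julia docs/make.jl",
}

def build_commands(languages):
    # each requested language maps to its doc-tool invocation
    return [DOC_BUILDERS[lang] for lang in languages if lang in DOC_BUILDERS]

print(build_commands(["python", "cpp"]))
```

The point is that each of these invocations must run during the site build, which is why a purely static artifact cannot regenerate the latest API docs on its own.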




[GitHub] [incubator-mxnet] szha commented on issue #17890: ndarray.cc:640 Check failed: !is_view

2020-03-23 Thread GitBox
szha commented on issue #17890: ndarray.cc:640 Check failed: !is_view
URL: 
https://github.com/apache/incubator-mxnet/issues/17890#issuecomment-602975167
 
 
   > We don't release mxnet-mkl nightly build anymore
   
   I'm guessing this is also the culprit for mxnet-cu102mkl. @caishanli could 
you confirm the versions for your mxnet-cu102 and mxnet-cu102mkl setup?




[GitHub] [incubator-mxnet] szha commented on issue #17798: [Website 2.0] General Version Dropdown

2020-03-23 Thread GitBox
szha commented on issue #17798: [Website 2.0] General Version Dropdown
URL: 
https://github.com/apache/incubator-mxnet/issues/17798#issuecomment-602967783
 
 
   Thanks for the update! I've been meaning to ask: what's the part in the 
website that requires a webserver that a static hosting solution cannot replace?
   
   @aaronmarkham @ThomasDelteil feel free to chime in




[GitHub] [incubator-mxnet] TaoLv commented on issue #17890: ndarray.cc:640 Check failed: !is_view

2020-03-23 Thread GitBox
TaoLv commented on issue #17890: ndarray.cc:640 Check failed: !is_view
URL: 
https://github.com/apache/incubator-mxnet/issues/17890#issuecomment-602967109
 
 
   We don't release the mxnet-mkl nightly build anymore. MKL-DNN has been 
enabled in the Linux CPU build. You can try the latest one with:
   ```
   pip install --pre mxnet -f https://dist.mxnet.io/python/cpu
   ```
   




[GitHub] [incubator-mxnet] aaronmarkham opened a new issue #17895: update CentOS installation docs

2020-03-23 Thread GitBox
aaronmarkham opened a new issue #17895: update CentOS installation docs
URL: https://github.com/apache/incubator-mxnet/issues/17895
 
 
   ## Description
   The [install instructions for 
CentOS](https://github.com/apache/incubator-mxnet/blob/master/docs/static_site/src/pages/get_started/centos_setup.md)
 are a couple years old. 
   
   Check out the [CI scripts as a 
guide](https://github.com/apache/incubator-mxnet/blob/master/ci/docker/Dockerfile.build.centos7_cpu)
 to see what should be updated.




[incubator-mxnet-site] branch asf-site updated: Bump the publish timestamp.

2020-03-23 Thread aaronmarkham
This is an automated email from the ASF dual-hosted git repository.

aaronmarkham pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new c1512b6  Bump the publish timestamp.
c1512b6 is described below

commit c1512b641ab2eb10e4cccdea11294cf069dc01e7
Author: mxnet-ci 
AuthorDate: Tue Mar 24 00:45:27 2020 +

Bump the publish timestamp.
---
 date.txt | 1 +
 1 file changed, 1 insertion(+)

diff --git a/date.txt b/date.txt
new file mode 100644
index 000..43529dc
--- /dev/null
+++ b/date.txt
@@ -0,0 +1 @@
+Tue Mar 24 00:45:27 UTC 2020



[GitHub] [incubator-mxnet] ChaiBapchya commented on issue #17894: [OpPerf] Fix axis_shape and function mismatch for LTS

2020-03-23 Thread GitBox
ChaiBapchya commented on issue #17894: [OpPerf] Fix axis_shape and function 
mismatch for LTS
URL: https://github.com/apache/incubator-mxnet/pull/17894#issuecomment-602931552
 
 
   @mxnet-bot run ci [sanity]




[GitHub] [incubator-mxnet] ChaiBapchya commented on issue #17894: [OpPerf] Fix axis_shape and function mismatch for LTS

2020-03-23 Thread GitBox
ChaiBapchya commented on issue #17894: [OpPerf] Fix axis_shape and function 
mismatch for LTS
URL: https://github.com/apache/incubator-mxnet/pull/17894#issuecomment-602911795
 
 
   @access2rohit Thanks for reminding




[GitHub] [incubator-mxnet] ChaiBapchya opened a new pull request #17894: [OpPerf] Fix axis_shape and function mismatch for LTS

2020-03-23 Thread GitBox
ChaiBapchya opened a new pull request #17894: [OpPerf] Fix axis_shape and 
function mismatch for LTS
URL: https://github.com/apache/incubator-mxnet/pull/17894
 
 
   




[GitHub] [incubator-mxnet] connorgoggins commented on issue #17798: [Website 2.0] General Version Dropdown

2020-03-23 Thread GitBox
connorgoggins commented on issue #17798: [Website 2.0] General Version Dropdown
URL: 
https://github.com/apache/incubator-mxnet/issues/17798#issuecomment-602833630
 
 
   Update on progress so far:
   
   - Fixed broken components of static artifacts for old versions 
(internal/external links, menus, etc.)
   - Added missing supplemental content (missing tutorials, docs, etc.) to 
static artifacts for old versions
   - Implemented working general version dropdown menu capable of switching 
between old artifacts
   - Finished general version dropdown for master website (styling and 
functionality) - tested in browser w/inline changes to HTML/CSS, Jekyll build 
with changes passing on Jenkins
   
   After I obtain the artifact of the full master website build with my changes 
from Jenkins, I will deploy the files on an EC2 instance with public access 
over a specific port. You will then be able to preview my changes and provide 
feedback.
   
   @sandeep-krishnamurthy @aaronmarkham @szha @sojiadeshina @leezu  




[GitHub] [incubator-mxnet] sxjscience opened a new issue #17893: [Bug][Numpy] Wrong gradient of np.where

2020-03-23 Thread GitBox
sxjscience opened a new issue #17893: [Bug][Numpy] Wrong gradient of np.where
URL: https://github.com/apache/incubator-mxnet/issues/17893
 
 
   ## Description
   Example 1: Using np.where(array, array, scalar)
   
   ```python
   import mxnet as mx
   mx.npx.set_np()
   
   a = mx.np.array([1, 0, 1])
   b = mx.np.array([2, 3, 4])
   
   b.attach_grad()
   
   with mx.autograd.record():
       c = mx.np.where(a, b, -1)
   c.backward()
   print(b.grad)
   ```
   Output: [0. 1. 0.]
   
   Example 2: Using np.where(array, array, array)
   ```python
   
   import mxnet as mx
   mx.npx.set_np()
   
   a = mx.np.array([1, 0, 1])
   b = mx.np.array([2, 3, 4])
   
   b.attach_grad()
   
   with mx.autograd.record():
       c = mx.np.where(a, b, mx.np.array([-1, -1, -1]))
   c.backward()
   print(b.grad)
   ```
   Output: [1. 0. 1.]
   
   The second one is correct.
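For reference, the gradient Example 1 should produce can be computed by hand with plain NumPy (the masking rule below restates the gradient of where(cond, b, other) with respect to b):

```python
import numpy as np

a = np.array([1, 0, 1])
# d/db of where(a, b, other) is 1 where the condition is truthy, else 0
expected_b_grad = (a != 0).astype(np.float64)
print(expected_b_grad)  # [1. 0. 1.]
```

This matches the output of Example 2, confirming that the scalar-argument case in Example 1 is the buggy one.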
   




[incubator-mxnet] branch master updated (3840786 -> 9a355eb)

2020-03-23 Thread haoj
This is an automated email from the ASF dual-hosted git repository.

haoj pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from 3840786  cmake: Set DMLC_LOG_FATAL_THROW only for building mxnet and 
not for tvm (#17878)
 add 9a355eb  [Numpy] Kron operator (#17323)

No new revisions were added by this update.

Summary of changes:
 benchmark/python/ffi/benchmark_ffi.py  |   1 +
 python/mxnet/ndarray/numpy/_op.py  |  47 ++-
 python/mxnet/numpy/multiarray.py   |  48 ++-
 python/mxnet/numpy_dispatch_protocol.py|   1 +
 python/mxnet/symbol/numpy/_symbol.py   |  48 ++-
 .../operator/numpy/{np_memory_op.cc => np_kron.cc} |  11 +-
 src/operator/numpy/np_kron-inl.h   | 322 +
 src/operator/numpy/np_kron.cc  |  94 ++
 src/operator/numpy/{np_dot.cu => np_kron.cu}   |  14 +-
 .../python/unittest/test_numpy_interoperability.py |   8 +
 tests/python/unittest/test_numpy_op.py |  81 ++
 11 files changed, 660 insertions(+), 15 deletions(-)
 copy src/api/operator/numpy/{np_memory_op.cc => np_kron.cc} (84%)
 create mode 100644 src/operator/numpy/np_kron-inl.h
 create mode 100644 src/operator/numpy/np_kron.cc
 copy src/operator/numpy/{np_dot.cu => np_kron.cu} (75%)
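For reference, numpy.kron, whose semantics the new mxnet.np.kron operator mirrors per the PR title, behaves as follows (the arrays here are arbitrary examples):

```python
import numpy as np

a = np.array([[1, 2], [3, 4]])
b = np.array([[0, 1]])
# kron replaces each element a[i, j] with the block a[i, j] * b
print(np.kron(a, b))
# [[0 1 0 2]
#  [0 3 0 4]]
```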



[GitHub] [incubator-mxnet] haojin2 merged pull request #17323: [Numpy] Kron operator

2020-03-23 Thread GitBox
haojin2 merged pull request #17323: [Numpy] Kron operator
URL: https://github.com/apache/incubator-mxnet/pull/17323
 
 
   




[incubator-mxnet-site] branch asf-site updated: Bump the publish timestamp.

2020-03-23 Thread aaronmarkham
This is an automated email from the ASF dual-hosted git repository.

aaronmarkham pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new 26c9dd7  Bump the publish timestamp.
26c9dd7 is described below

commit 26c9dd7dc2d78fa2c6b62018cf5ac91ea87633b3
Author: mxnet-ci 
AuthorDate: Mon Mar 23 18:50:10 2020 +

Bump the publish timestamp.
---
 date.txt | 1 +
 1 file changed, 1 insertion(+)

diff --git a/date.txt b/date.txt
new file mode 100644
index 000..bd618bd
--- /dev/null
+++ b/date.txt
@@ -0,0 +1 @@
+Mon Mar 23 18:50:10 UTC 2020



[incubator-mxnet] branch master updated (83b5170 -> 3840786)

2020-03-23 Thread lausen
This is an automated email from the ASF dual-hosted git repository.

lausen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from 83b5170  Add simplified HybridBlock.forward without F (#17530)
 add 3840786  cmake: Set DMLC_LOG_FATAL_THROW only for building mxnet and 
not for tvm (#17878)

No new revisions were added by this update.

Summary of changes:
 CMakeLists.txt   | 88 +---
 cmake/BuildTVM.cmake |  8 +++--
 2 files changed, 47 insertions(+), 49 deletions(-)



[GitHub] [incubator-mxnet] leezu closed issue #17875: USE_TVM_OP=1 build broken with DMLC_LOG_FATAL_THROW=0

2020-03-23 Thread GitBox
leezu closed issue #17875: USE_TVM_OP=1 build broken with DMLC_LOG_FATAL_THROW=0
URL: https://github.com/apache/incubator-mxnet/issues/17875
 
 
   




[GitHub] [incubator-mxnet] leezu merged pull request #17878: Decouple LOG_FATAL_THROW preprocessor variables between TVM and MXNet

2020-03-23 Thread GitBox
leezu merged pull request #17878: Decouple LOG_FATAL_THROW preprocessor 
variables between TVM and MXNet
URL: https://github.com/apache/incubator-mxnet/pull/17878
 
 
   




[incubator-mxnet] branch master updated: Add simplified HybridBlock.forward without F (#17530)

2020-03-23 Thread lausen
This is an automated email from the ASF dual-hosted git repository.

lausen pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new 83b5170  Add simplified HybridBlock.forward without F (#17530)
83b5170 is described below

commit 83b51703ed354f41024423f140de38df2ba22d50
Author: Leonard Lausen 
AuthorDate: Mon Mar 23 11:21:23 2020 -0700

Add simplified HybridBlock.forward without F (#17530)

Users can now implement HybridBlock.forward instead of 
HybridBlock.hybrid_forward.
HybridBlock.forward has the same signature as Block.forward. For example:

  class MyBlock(mx.gluon.HybridBlock):
      def __init__(self, *, prefix=None, params=None):
          super().__init__(prefix, params)
          with self.name_scope():
              self.dense = mx.gluon.nn.Dense(units=10)
              self.weight = self.params.get('weight', allow_deferred_init=True)

      def infer_shape(self, x):
          self.weight.shape = (x.shape[1], )

      def forward(self, x):
          return self.dense(x) + self.weight.data(x.context)

Hybridization of HybridBlock.forward is based on a deferred computation 
mode in
the MXNet backend, which enables recording computation via tracing in the
mxnet.nd and mxnet.np interfaces. The recorded computation can be exported 
to a
symbolic representation and is used for optimized execution with the 
CachedOp.

As tracing is based on the imperative APIs, users can access shape 
information
of the arrays. As x.shape for some array x is a python tuple, any use of 
that
shape will be a constant in the recorded graph and may limit the recorded 
graph
to be used with inputs of the same shape only.
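A toy sketch of that tracing idea (invented names, not MXNet's actual deferred compute API): operations on a traced value record a symbolic expression instead of computing a result, which is what makes later export to a symbol possible.

```python
# Toy tracer: illustration only, names are invented for this sketch.
class Traced:
    def __init__(self, expr):
        self.expr = expr  # symbolic expression recorded so far

    def __pow__(self, k):
        # record the operation instead of computing a value
        return Traced(f"({self.expr} ** {k})")

a = Traced("a")
b = a ** 2          # nothing is computed; the op is recorded
print(b.expr)       # (a ** 2)
```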

As part of the change from hybrid_forward to forward, we also disable 
support
for parameter shape inference in the MXNet backend in the case of deferred
parameter initialization. Shape inference in the backend was limited and 
did by
it's very nature not support dynamic shape operators. Instead, users should 
now
always implement HybridBlock.infer_shape to set the parameter shapes if the
parameter shape was not set during HybridBlock.__init__. See the example 
above.

An example of the internal deferred compute APIs is:

  a = mx.np.arange(10)
  dc.set_variable(a, mx.sym.var('a').as_np_ndarray())
  with dc.context():
      b = a ** 2
  symbol = dc.get_symbol(b)
---
 include/mxnet/c_api.h  |  42 ++
 include/mxnet/imperative.h |  90 -
 include/mxnet/ndarray.h|  61 ++-
 python/mxnet/__init__.py   |   2 +
 python/mxnet/_deferred_compute.py  | 106 +
 python/mxnet/gluon/block.py| 103 -
 python/mxnet/gluon/parameter.py|   8 +-
 python/mxnet/ndarray/ndarray.py|  11 +-
 python/mxnet/ndarray/sparse.py |   1 +
 python/mxnet/numpy/multiarray.py   |  16 +-
 src/api/operator/utils.cc  |  21 +-
 src/c_api/c_api.cc |  17 +-
 src/c_api/c_api_ndarray.cc |  55 ++-
 src/imperative/cached_op.h |   2 +-
 src/imperative/imperative.cc   | 199 -
 src/imperative/imperative_utils.h  |  47 ++-
 src/ndarray/ndarray.cc | 121 +-
 tests/python/gpu/test_deferred_compute_gpu.py  |  33 ++
 tests/python/unittest/test_deferred_compute.py | 536 +
 19 files changed, 1357 insertions(+), 114 deletions(-)

diff --git a/include/mxnet/c_api.h b/include/mxnet/c_api.h
index 637b31d..638385b 100644
--- a/include/mxnet/c_api.h
+++ b/include/mxnet/c_api.h
@@ -1423,6 +1423,44 @@ MXNET_DLL int MXCachedOpRegisterOpHook(NDArrayHandle 
handle,
CachedOpMonitorCallback callback,
bool monitor_all);
 
+/*!
+ * \brief Get current status of deferred compute mode
+ * \param curr returns the current status.
+ * \return 0 when success, -1 when failure happens
+ */
+MXNET_DLL int MXNDArrayIsDeferredCompute(int *curr);
+
+/*!
+ * \brief set whether to enable deferred compute mode
+ * \param deferred_compute_enabled 1 to enable, 0 to disable.
+ * \param prev returns the previous status before this set.
+ * \return 0 when success, -1 when failure happens
+ */
+MXNET_DLL int MXNDArraySetIsDeferredCompute(int deferred_compute_enabled, int 
*prev);
+
+/*!
+ * \brief Associate variables with deferred compute arrays
+ * \param arrays ndarray handles to be matched with variables
+ * \param variables symbol handles of variables to be matched with ndarrays
+ * \param num number of arrays and variables respectively
+ * \return 0 when success, -1 when 

[GitHub] [incubator-mxnet] leezu merged pull request #17530: Add deferred compute support

2020-03-23 Thread GitBox
leezu merged pull request #17530: Add deferred compute support
URL: https://github.com/apache/incubator-mxnet/pull/17530
 
 
   




[GitHub] [incubator-mxnet] yzhliu commented on issue #17878: Decouple LOG_FATAL_THROW preprocessor variables between TVM and MXNet

2020-03-23 Thread GitBox
yzhliu commented on issue #17878: Decouple LOG_FATAL_THROW preprocessor 
variables between TVM and MXNet
URL: https://github.com/apache/incubator-mxnet/pull/17878#issuecomment-602758442
 
 
   @leezu thanks for the finding and fix.




[GitHub] [incubator-mxnet] leezu commented on issue #17891: [DNNL] Enable primitive cache in build

2020-03-23 Thread GitBox
leezu commented on issue #17891: [DNNL] Enable primitive cache in build
URL: https://github.com/apache/incubator-mxnet/pull/17891#issuecomment-602739593
 
 
   Reference 
https://github.com/intel/mkl-dnn/blob/8d5fc054f7a8d2abc84c1315262f8d5ff12e1129/doc/advanced/primitive_cache.md




[incubator-mxnet] branch master updated: Use FP32 copy of weights for norm (multitensor LAMB optimizer) (#17700)

2020-03-23 Thread haibin
This is an automated email from the ASF dual-hosted git repository.

haibin pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new 8e39518  Use FP32 copy of weights for norm (multitensor LAMB 
optimizer) (#17700)
8e39518 is described below

commit 8e3951876b3598c8b52606a467add5f239d88b38
Author: MoisesHer <50716238+moises...@users.noreply.github.com>
AuthorDate: Mon Mar 23 09:55:24 2020 -0700

Use FP32 copy of weights for norm (multitensor LAMB optimizer) (#17700)

* Use fp32 copy of weights for computing norm in LAMB optimizer

* Fix cpplint
---
 src/operator/contrib/multi_lamb-inl.h | 12 
 1 file changed, 8 insertions(+), 4 deletions(-)

diff --git a/src/operator/contrib/multi_lamb-inl.h 
b/src/operator/contrib/multi_lamb-inl.h
index 7fb186f..256445a 100644
--- a/src/operator/contrib/multi_lamb-inl.h
+++ b/src/operator/contrib/multi_lamb-inl.h
@@ -282,10 +282,14 @@ inline void MultiLAMB(const nnvm::NodeAttrs& attrs,
 FillMultiLAMBKernelParam
 (attrs, ctx, inputs, outputs, _params);
 
-// create vector of TBlob with all the weights contiguous
-std::vector<TBlob> weights;
+// create vector of TBlob with all the weights contiguous to compute the norm
+// if mixed precision, use fp32 copy
+std::vector<TBlob> weights_for_norm;
+int position_weights = 0;
+if (!std::is_same::value)
+  position_weights = input_stride - 1;
 for (size_t index = 0; index < kernel_params.ntensors; ++index) {
-weights.emplace_back(inputs[index*input_stride]);
+  weights_for_norm.emplace_back(inputs[index * input_stride + 
position_weights]);
 }
 
 // Calculate amount of temporary storage (temp_g, r1, r2, block_to_tensor, 
block_to_chunk)
@@ -327,7 +331,7 @@ inline void MultiLAMB(const nnvm::NodeAttrs& attrs,
 Tensor 
block_to_chunk(reinterpret_cast([pos_wspace]),
   Shape1(kernel_params.nchunks), s);
 
-MultiSumSqRun(weights, kernel_params.ntensors, r1.dptr_, ctx);
+MultiSumSqRun(weights_for_norm, kernel_params.ntensors, r1.dptr_, 
ctx);
 CallKernel1(s, kernel_params, param, temp_g.dptr_,
 block_to_tensor.dptr_,
 block_to_chunk.dptr_);
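The indexing change in this diff can be sketched in Python (the input_stride value and the per-tensor input layout below, with an fp32 weight copy in the last slot under mixed precision, are assumptions inferred from the patch, not the exact C++ signature):

```python
# Pick the fp32 weight copy at the end of each per-tensor input group when
# running in mixed precision; otherwise use the weight in the first slot.
def weights_for_norm(inputs, ntensors, input_stride, mixed_precision):
    offset = input_stride - 1 if mixed_precision else 0
    return [inputs[i * input_stride + offset] for i in range(ntensors)]

inputs = ["w0_fp16", "g0", "m0", "v0", "w0_fp32",
          "w1_fp16", "g1", "m1", "v1", "w1_fp32"]
print(weights_for_norm(inputs, 2, 5, True))   # ['w0_fp32', 'w1_fp32']
print(weights_for_norm(inputs, 2, 5, False))  # ['w0_fp16', 'w1_fp16']
```

Computing the norm on the fp32 copies avoids the precision loss of summing squares in fp16.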



[GitHub] [incubator-mxnet] eric-haibin-lin merged pull request #17700: Use FP32 copy of weights for norm (multitensor LAMB optimizer)

2020-03-23 Thread GitBox
eric-haibin-lin merged pull request #17700: Use FP32 copy of weights for norm 
(multitensor LAMB optimizer)
URL: https://github.com/apache/incubator-mxnet/pull/17700
 
 
   




[incubator-mxnet] branch master updated (2f358fd -> b133899)

2020-03-23 Thread haibin
This is an automated email from the ASF dual-hosted git repository.

haibin pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from 2f358fd  [Numpy] Add op fmax, fmin, fmod (#17567)
 add b133899  Use multi-tensor sumSQ in clip_global_norm (#17652)

No new revisions were added by this update.

Summary of changes:
 python/mxnet/gluon/utils.py| 24 
 tests/python/gpu/test_gluon_gpu.py | 14 +-
 2 files changed, 25 insertions(+), 13 deletions(-)
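For context, the quantity clip_global_norm computes can be sketched in plain NumPy (a simplified model of the Gluon utility; the fused "multi-tensor sumSQ" from the PR is shown here as an ordinary loop):

```python
import numpy as np

def clip_global_norm(arrays, max_norm):
    # one global L2 norm across all arrays: a single sum of squares over
    # every element (this is what the multi-tensor sumSQ kernel fuses)
    total = sum(float((a * a).sum()) for a in arrays)
    norm = total ** 0.5
    if norm > max_norm:
        scale = max_norm / norm
        arrays = [a * scale for a in arrays]
    return arrays, norm

grads = [np.array([3.0, 0.0]), np.array([0.0, 4.0])]
clipped, norm = clip_global_norm(grads, 1.0)
print(norm)  # 5.0
```

Fusing the per-array sums of squares into one kernel launch is the optimization the merged PR applies.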






[GitHub] [incubator-mxnet] eric-haibin-lin merged pull request #17652: Use multi-tensor sumSQ in clip_global_norm

2020-03-23 Thread GitBox
eric-haibin-lin merged pull request #17652: Use multi-tensor sumSQ in 
clip_global_norm
URL: https://github.com/apache/incubator-mxnet/pull/17652
 
 
   




[GitHub] [incubator-mxnet] float123 opened a new issue #17892: How to get the output of the specified layer?

2020-03-23 Thread GitBox
float123 opened a new issue #17892: How to get the output of the specified 
layer?
URL: https://github.com/apache/incubator-mxnet/issues/17892
 
 
   Hi,
   I need to get the output of a certain layer, I did the following:
   ```
   sym, arg_params, aux_params = mx.model.load_checkpoint('./data/retina', 0)
   print('sym', sym) # [face_rpn_cls_prob_reshape_stride32, 
face_rpn_bbox_pred_stride32, face_rpn_landmark_pred_stride32,...]
   
   data = mx.sym.Variable('data')
   sym1 = mx.sym.Variable('face_rpn_cls_prob_reshape_stride32')
   group = data + sym1
   group= group.get_internals()
   
   mod = mx.mod.Module(symbol=group, context=mx.gpu(0), label_names=None)
   mod.bind(data_shapes=[('data', (1, 3, 600, 600))], for_training=False)
   mod.set_params(arg_params, aux_params)
   
   mod.forward(img, is_train=False)
   net_out = mod.get_outputs()
   ```
   RuntimeError: face_rpn_cls_prob_reshape_stride32 is not presented
   
   or
   ```
   data = mx.sym.Variable('data')
   sym1 = mx.sym.Variable('face_rpn_cls_prob_reshape_stride32')
   sym1 = sym1.get_internals()
   group = mx.symbol.Group([data, sym1])
   mod = mx.mod.Module(symbol=group, context=mx.gpu(0), label_names=None)
   
   ```
   RuntimeError: simple_bind error. Arguments:
   data: (1, 3, 600, 600)
   [23:49:46] src/executor/../common/exec_utils.h:392: InferShape pass cannot 
decide shapes for the following arguments (-1 means unknown dimensions). Please 
consider providing them as inputs:
   face_rpn_cls_prob_reshape_stride32: None
   
   or
   ```
   sym1 = mx.sym.Variable('face_rpn_cls_prob_reshape_stride32')
   group= sym1.get_internals()
   
   mod = mx.mod.Module(symbol=group, context=mx.gpu(0), label_names=None)
   ```
   
   ValueError: You created Module with Module(..., data_names=['data']) but 
input with name 'data' is not found in symbol.list_arguments(). Did you mean 
one of:
face_rpn_cls_prob_reshape_stride32
   
   
   I always encounter these problems when using .get_internals() and 
mx.symbol.Group(). Using the output name 
face_rpn_cls_prob_reshape_stride32_output also raises an error. What do I need 
to do? Thank you very much.




[GitHub] [incubator-mxnet] caishanli commented on issue #17890: ndarray.cc:640 Check failed: !is_view

2020-03-23 Thread GitBox
caishanli commented on issue #17890: ndarray.cc:640 Check failed: !is_view
URL: 
https://github.com/apache/incubator-mxnet/issues/17890#issuecomment-602688368
 
 
   I find the latest Linux build in https://dist.mxnet.io/python/mkl is 
mxnet_mkl-1.6.0b20200215-py2.py3-none-manylinux1_x86_64.whl.
   Is it ok?




[GitHub] [incubator-mxnet] TaoLv opened a new pull request #17891: [DNNL] Enable primitive cache in build

2020-03-23 Thread GitBox
TaoLv opened a new pull request #17891: [DNNL] Enable primitive cache in build
URL: https://github.com/apache/incubator-mxnet/pull/17891
 
 
   ## Description ##
   (Brief description on what this PR is about)
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [ ] The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to 
the relevant [JIRA issue](https://issues.apache.org/jira/projects/MXNET/issues) 
created (except PRs with tiny changes)
   - [ ] Changes are complete (i.e. I finished coding on this PR)
   - [ ] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [ ] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, 
the source of the dataset, expected performance on the test set, and a 
reference to the original paper if applicable
   - Check the API doc at 
https://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [ ] To the best of my knowledge, examples are either not affected by this 
change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [ ] Feature1, tests, (and when applicable, API doc)
   - [ ] Feature2, tests, (and when applicable, API doc)
   
   ## Comments ##
   - If this change is a backward incompatible change, why must this change be 
made.
   - Interesting edge cases to note here
   




[GitHub] [incubator-mxnet] TaoLv commented on issue #17890: ndarray.cc:640 Check failed: !is_view

2020-03-23 Thread GitBox
TaoLv commented on issue #17890: ndarray.cc:640 Check failed: !is_view
URL: 
https://github.com/apache/incubator-mxnet/issues/17890#issuecomment-602667694
 
 
   Yes. You can also install the nightly build from here: 
https://repo.mxnet.io/dist/index.html. I think the cpu variant should have 
MKL-DNN enabled.




[GitHub] [incubator-mxnet] TaoLv commented on a change in pull request #17884: [MKL-DNN] Integrate Conv3d and Pool3d/1d

2020-03-23 Thread GitBox
TaoLv commented on a change in pull request #17884: [MKL-DNN] Integrate Conv3d 
and Pool3d/1d
URL: https://github.com/apache/incubator-mxnet/pull/17884#discussion_r396524720
 
 

 ##
 File path: src/operator/nn/mkldnn/mkldnn_pooling.cc
 ##
 @@ -127,61 +116,139 @@ mkldnn::algorithm GetMKLDNNPoolAlgo(const PoolingParam &param) {
   }
 }
 
+void InitPoolingPrimitiveParams(const PoolingParam &param,
+                                const mkldnn::memory::desc &data_md,
+                                mkldnn::memory::dims *new_kernel,
 
 Review comment:
   How about pass-by-reference?




[GitHub] [incubator-mxnet] TaoLv commented on a change in pull request #17884: [MKL-DNN] Integrate Conv3d and Pool3d/1d

2020-03-23 Thread GitBox
TaoLv commented on a change in pull request #17884: [MKL-DNN] Integrate Conv3d 
and Pool3d/1d
URL: https://github.com/apache/incubator-mxnet/pull/17884#discussion_r396523088
 
 

 ##
 File path: src/operator/nn/mkldnn/mkldnn_pooling-inl.h
 ##
 @@ -114,15 +115,21 @@ inline bool SupportMKLDNNPooling(const PoolingParam &param,
     return true;
   } else {
     if (param.pool_type == pool_enum::kAvgPooling) {
-      CHECK_EQ(dshape.ndim(), 4);
+      CHECK(dshape.ndim() == 3 || dshape.ndim() == 4 || dshape.ndim() == 5);
       // mkldnn works differently when padding is asymmetric, so let's skip this case.
-      if (param.pad[0] == GetPaddingSizeFull(dshape[2], param.pad[0], param.pad[0], param.kernel[0],
-                                             param.stride[0]) &&
-          param.pad[1] == GetPaddingSizeFull(dshape[3], param.pad[1], param.pad[1], param.kernel[1],
-                                             param.stride[1])) {
-        return true;
+      bool is_symmetric = true;
+      switch (dshape.ndim()) {
+        case 5:
+          is_symmetric = is_symmetric && (param.pad[2] == GetPaddingSizeFull(dshape[4],
+                             param.pad[2], param.pad[2], param.kernel[2], param.stride[2]));
+        case 4:
+          is_symmetric = is_symmetric && (param.pad[1] == GetPaddingSizeFull(dshape[3],
+                             param.pad[1], param.pad[1], param.kernel[1], param.stride[1]));
 
 Review comment:
   I see both pad[0] and pad[1] are checked in previous code.




[GitHub] [incubator-mxnet] TaoLv commented on a change in pull request #17884: [MKL-DNN] Integrate Conv3d and Pool3d/1d

2020-03-23 Thread GitBox
TaoLv commented on a change in pull request #17884: [MKL-DNN] Integrate Conv3d 
and Pool3d/1d
URL: https://github.com/apache/incubator-mxnet/pull/17884#discussion_r396512804
 
 

 ##
 File path: src/operator/nn/mkldnn/mkldnn_base-inl.h
 ##
 @@ -153,9 +153,8 @@ static inline bool SupportMKLDNN(int dtype, const mxnet::TShape &shape) {
     // MKLDNN currently does not support 0-dim Tensor and 0-size Tensor
     return false;
   }
-
   return (dtype == mshadow::kFloat32 || dtype == mshadow::kBfloat16) &&
-         (ndim == 1 || ndim == 2 || ndim == 4);
+        (ndim >= 1 && ndim <= 5);
 
 Review comment:
   Please fix the indent and make sure you also want to enable ndim=3 here.




[GitHub] [incubator-mxnet] TaoLv commented on a change in pull request #17884: [MKL-DNN] Integrate Conv3d and Pool3d/1d

2020-03-23 Thread GitBox
TaoLv commented on a change in pull request #17884: [MKL-DNN] Integrate Conv3d 
and Pool3d/1d
URL: https://github.com/apache/incubator-mxnet/pull/17884#discussion_r396517882
 
 

 ##
 File path: src/operator/nn/mkldnn/mkldnn_base-inl.h
 ##
 @@ -324,20 +323,27 @@ inline static mkldnn::memory::desc GetWeightDesc(const NDArray &arr,
   if (num_groups == 1) {
     return GetMemDesc(arr, dtype);
   } else {
-    auto ndim = arr.shape().ndim();
-    CHECK((ndim == 3) || (ndim == 4))
-        << "MKL-DNN weight currectly supports 3d and 4d layout";
+    const auto ndim = arr.shape().ndim();
+    CHECK((ndim == 3) || (ndim == 4) || (ndim == 5))
+        << "MKL-DNN weight currently supports 3d or 4d or 5d layout";
     auto tz = mkldnn::memory::dims{0};
-    const int N = 0, H = 2, W = 3, C = 1;
-    if (ndim == 3) {
-      tz = mkldnn::memory::dims{
-          num_groups, static_cast<int>(arr.shape()[N] / num_groups),
-          static_cast<int>(arr.shape()[C]), static_cast<int>(arr.shape()[H])};
-    } else {
-      tz = mkldnn::memory::dims{
-          num_groups, static_cast<int>(arr.shape()[N] / num_groups),
-          static_cast<int>(arr.shape()[C]), static_cast<int>(arr.shape()[H]),
-          static_cast<int>(arr.shape()[W])};
+    const int D = (ndim == 5) ? 2 : 1;
+    const int N = 0, C = 1, H = D + 1, W = D + 2;
 
 Review comment:
   Let's be more descriptive here:
   ```suggestion
   int N = 0, C = 1, H = 2, W = 3;
   int D = -1;
   if (ndim == 5) {
 D = 2;
 H = 3;
 W = 4;
   }
   ```
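The reviewer's suggestion can be illustrated with a small sketch (a hypothetical Python helper, not part of the PR) mapping the named axes N/C/D/H/W used in the diff to their positions in a 4d versus 5d weight shape:

```python
def weight_axis_indices(ndim):
    # Hypothetical helper mirroring the reviewer's suggestion: named axis
    # positions for conv weight layouts, NCHW for 4d and NCDHW for 5d.
    if ndim == 4:
        return {'N': 0, 'C': 1, 'H': 2, 'W': 3}
    if ndim == 5:
        return {'N': 0, 'C': 1, 'D': 2, 'H': 3, 'W': 4}
    raise ValueError('unsupported ndim: %d' % ndim)
```

Spelling the indices out per case, as the suggestion does, avoids the less readable arithmetic of `H = D + 1, W = D + 2`.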




[GitHub] [incubator-mxnet] caishanli commented on issue #17890: ndarray.cc:640 Check failed: !is_view

2020-03-23 Thread GitBox
caishanli commented on issue #17890: ndarray.cc:640 Check failed: !is_view
URL: 
https://github.com/apache/incubator-mxnet/issues/17890#issuecomment-602661304
 
 
   > @caishanli Could you please try the latest master branch to see if the 
error is still there? Thanks.
   
   do you mean clone master and build with mkl and test?




[GitHub] [incubator-mxnet] nicklhy commented on issue #17863: Inconsistent results in gluon.nn.BatchNorm with autograd.record()

2020-03-23 Thread GitBox
nicklhy commented on issue #17863: Inconsistent results in gluon.nn.BatchNorm 
with autograd.record()
URL: 
https://github.com/apache/incubator-mxnet/issues/17863#issuecomment-602652408
 
 
   I guess this is the expected result. In training mode, BN first calculates 
the mean and variance of the batch data. And since you set momentum to 0, the 
data_mean is exactly the input data, so you get an output of all zeros here.
   
   You can refer to the 
[doc](https://mxnet.incubator.apache.org/api/python/docs/api/ndarray/ndarray.html#mxnet.ndarray.BatchNorm)
 of BN for details.
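A NumPy sketch (not MXNet itself) of why training-mode BatchNorm on a 1×C×1×1 input gives all zeros: each channel is normalized with statistics computed from the batch itself, and with a single element per channel the mean equals the input:

```python
import numpy as np

def batchnorm_train_mode(x, eps=1e-5):
    # Normalize with statistics computed from the batch itself, per channel
    # over axes (N, H, W) -- what BN does under autograd.record()/is_train=True.
    mean = x.mean(axis=(0, 2, 3), keepdims=True)
    var = x.var(axis=(0, 2, 3), keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

x = np.random.randn(1, 3, 1, 1).astype(np.float32)
out = batchnorm_train_mode(x)
# Each channel has a single element, so mean == x and var == 0:
# the normalized output is exactly zero everywhere.
```

With a larger batch (e.g. shape (8, 3, 4, 4)) the same function returns per-channel zero-mean, roughly unit-variance outputs, which is the behavior the issue reporter saw diverge from inference mode.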




[GitHub] [incubator-mxnet] TaoLv commented on issue #17890: ndarray.cc:640 Check failed: !is_view

2020-03-23 Thread GitBox
TaoLv commented on issue #17890: ndarray.cc:640 Check failed: !is_view
URL: 
https://github.com/apache/incubator-mxnet/issues/17890#issuecomment-602632820
 
 
   @caishanli Could you please try the latest master branch to see if the error 
is still there? Thanks.




[GitHub] [incubator-mxnet] alinagithub edited a comment on issue #17887: Import mxnet for mxnet-cu92 fails

2020-03-23 Thread GitBox
alinagithub edited a comment on issue #17887: Import mxnet for mxnet-cu92 fails
URL: 
https://github.com/apache/incubator-mxnet/issues/17887#issuecomment-602603442
 
 
   Working around an issue with PATH on Windows.
   Something really bugs me.
   The path to the CUDA DLLs is on PATH, but the DLL does not load.
   If I use the exact path that is present in the Python environment's PATH, 
the DLL loads...
   
   (.env38) C:\.env38\Scripts>python
   Python 3.8.2 (tags/v3.8.2:7b3ab59, Feb 25 2020, 23:03:10) [MSC v.1916 64 bit 
(AMD64)] on win32
   Type "help", "copyright", "credits" or "license" for more information.
   >>> import os
   >>> os.environ
   environ({'ALLUSERSPROFILE': 'C:\\ProgramData', 'APPDATA': 
'C:\\Users\\arnau\\AppData\\Roaming', 'CAMLIBS': 'C:\\Program 
Files\\darktable\\lib\\libgphoto2\\2.5.23', [...] 'PATH': 'C:\\Program 
Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v9.2;C:\\Program Files\\NVIDIA GPU 
Computing Toolkit\\CUDA\\v9.2\\bin;C:\\.env38\\Scripts;[...]})
   >>> from ctypes import*
   >>> mydll = cdll.LoadLibrary("cudnn64_7.dll")
   Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
 File 
"C:\Users\arnau\AppData\Local\Programs\Python\Python38\lib\ctypes\__init__.py", 
line 451, in LoadLibrary
   return self._dlltype(name)
 File 
"C:\Users\arnau\AppData\Local\Programs\Python\Python38\lib\ctypes\__init__.py", 
line 373, in __init__
   self._handle = _dlopen(self._name, mode)
   FileNotFoundError: Could not find module 'cudnn64_7.dll' (or one of its 
dependencies). Try using the full path with constructor syntax.
   >>> mydll = cdll.LoadLibrary("C:\\Program Files\\NVIDIA GPU Computing 
Toolkit\\CUDA\\v9.2\\bin\\cudnn64_7.dll")
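For what it's worth, Python 3.8 changed DLL resolution on Windows: `PATH` is no longer searched when `ctypes` loads a DLL or resolves its dependencies; directories must be registered with `os.add_dll_directory()`, or the DLL loaded by its full path, which matches the behavior above. A sketch (`load_dll` is an illustrative helper, not MXNet code):

```python
import ctypes
import os
import sys

def load_dll(dll_name, dll_dir):
    # On Python 3.8+ for Windows, PATH is no longer consulted when loading
    # a DLL or its dependencies; register the directory explicitly and
    # load by full path, which works on all versions.
    if sys.platform == 'win32' and hasattr(os, 'add_dll_directory'):
        os.add_dll_directory(dll_dir)
    return ctypes.CDLL(os.path.join(dll_dir, dll_name))
```

E.g. `load_dll('cudnn64_7.dll', r'C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v9.2\bin')` sidesteps the `FileNotFoundError` seen in the traceback.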





[incubator-mxnet-site] branch asf-site updated: Bump the publish timestamp.

2020-03-23 Thread aaronmarkham
This is an automated email from the ASF dual-hosted git repository.

aaronmarkham pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new 71f6f14  Bump the publish timestamp.
71f6f14 is described below

commit 71f6f1416f6bb5ee0ffa6b306163be3865b88e53
Author: mxnet-ci 
AuthorDate: Mon Mar 23 12:45:26 2020 +

Bump the publish timestamp.
---
 date.txt | 1 +
 1 file changed, 1 insertion(+)

diff --git a/date.txt b/date.txt
new file mode 100644
index 000..d109c41
--- /dev/null
+++ b/date.txt
@@ -0,0 +1 @@
+Mon Mar 23 12:45:26 UTC 2020



[incubator-mxnet] branch master updated (f01dc80 -> 2f358fd)

2020-03-23 Thread haoj
This is an automated email from the ASF dual-hosted git repository.

haoj pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from f01dc80  Adding sparse support to MXTensor for custom operators 
(#17569)
 add 2f358fd  [Numpy] Add op fmax, fmin, fmod (#17567)

No new revisions were added by this update.

Summary of changes:
 benchmark/python/ffi/benchmark_ffi.py  |   3 +
 python/mxnet/ndarray/numpy/_op.py  |  76 ++-
 python/mxnet/numpy/multiarray.py   |  96 +-
 python/mxnet/numpy_dispatch_protocol.py|   3 +
 python/mxnet/symbol/numpy/_symbol.py   |  24 +++-
 .../np_elemwise_broadcast_op_extended_sec.cc}  |  52 
 src/operator/mshadow_op.h  |  50 
 .../numpy/np_elemwise_broadcast_op_extended_sec.cc | 142 +
 .../numpy/np_elemwise_broadcast_op_extended_sec.cu |  77 +++
 src/operator/operator_tune.cc  |   4 +
 .../python/unittest/test_numpy_interoperability.py |  24 
 tests/python/unittest/test_numpy_op.py |  10 ++
 12 files changed, 525 insertions(+), 36 deletions(-)
 copy src/api/{_api_internal/_api_internal.cc => 
operator/numpy/np_elemwise_broadcast_op_extended_sec.cc} (55%)
 create mode 100644 src/operator/numpy/np_elemwise_broadcast_op_extended_sec.cc
 create mode 100644 src/operator/numpy/np_elemwise_broadcast_op_extended_sec.cu
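The new operators presumably mirror NumPy's semantics (per the NumPy-compatibility goal of the `mxnet.numpy` namespace); a quick NumPy refresher on what distinguishes them from `maximum`/`minimum`/`remainder`:

```python
import numpy as np

a = np.array([1.0, np.nan, 3.0])
b = np.array([2.0, 2.0, np.nan])

# fmax/fmin ignore a NaN operand when the other side is a number,
# unlike maximum/minimum, which propagate NaN.
fmax_ab = np.fmax(a, b)        # [2., 2., 3.]
max_ab = np.maximum(a, b)      # [2., nan, nan]

# fmod follows C's fmod: the result takes the sign of the dividend,
# whereas remainder takes the sign of the divisor.
fmod_val = np.fmod(-5.0, 3.0)  # -2.0 (np.remainder gives 1.0)
```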






[GitHub] [incubator-mxnet] haojin2 merged pull request #17567: [Numpy] Add op fmax, fmin, fmod

2020-03-23 Thread GitBox
haojin2 merged pull request #17567: [Numpy] Add op fmax, fmin, fmod
URL: https://github.com/apache/incubator-mxnet/pull/17567
 
 
   




[incubator-mxnet-site] branch asf-site updated: Bump the publish timestamp.

2020-03-23 Thread aaronmarkham
This is an automated email from the ASF dual-hosted git repository.

aaronmarkham pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new 78c1b63  Bump the publish timestamp.
78c1b63 is described below

commit 78c1b633427cc6df1d70328631634000426c5ded
Author: mxnet-ci 
AuthorDate: Mon Mar 23 06:45:42 2020 +

Bump the publish timestamp.
---
 date.txt | 1 +
 1 file changed, 1 insertion(+)

diff --git a/date.txt b/date.txt
new file mode 100644
index 000..263b226
--- /dev/null
+++ b/date.txt
@@ -0,0 +1 @@
+Mon Mar 23 06:45:42 UTC 2020



[GitHub] [incubator-mxnet] sxjscience commented on issue #17045: Relocation truncation issues

2020-03-23 Thread GitBox
sxjscience commented on issue #17045: Relocation truncation issues
URL: 
https://github.com/apache/incubator-mxnet/issues/17045#issuecomment-602410744
 
 
   I ran into a similar issue with the latest master.

